TOKREPO · ARSENAL
Stable

Workflow Orchestration

n8n, Prefect, Inngest, Kestra, Activepieces — durable workflow engines that wrap AI agents in cron, retries, and observability.

8 assets

What's in this pack

This pack collects the eight production-grade workflow engines that AI teams reach for when a prompt-in-a-loop stops being enough. Each one solves the same trio of problems — schedule, retry, observe — but with different ergonomics. Pick by where your team already lives (Python, Node, no-code, JVM).

#   Engine        Best for
1   n8n           No-code / low-code, 400+ integrations, self-hostable
2   Prefect       Python-first data + AI workflows, dynamic DAGs
3   Inngest       TypeScript / Node, event-driven step functions
4   Kestra        YAML declarative, JVM, plugin ecosystem
5   Activepieces  OSS Zapier alternative, branching flows
6   Trigger.dev   Long-running TypeScript jobs with retries baked in
7   Temporal      Distributed workflow primitives for serious scale
8   Windmill      Polyglot scripts (Python/TS/Bash/Go) with a UI

Every entry is open source and self-hostable. Hosted SaaS tiers exist for most, but this pack documents the OSS install path first.

Why orchestration matters for AI agents

Prompts are deterministic the first time you run them and a coin flip every time after. Add a real-world action — call an API, write to a database, send a Slack message — and "it failed once at 3am" becomes a weekly event. A workflow engine solves three things:

  1. Idempotency. Steps can replay safely. If the LLM call succeeded but the next step crashed, you don't re-bill the API on retry.
  2. Backoff and dead-letter. Failed steps retry on an exponential schedule, then land in a DLQ where a human can inspect.
  3. Observability. Every step is recorded with inputs, outputs, latency, and cost. When the agent does something weird, you can replay the exact context it saw.

Without one of these, your agent is a Jupyter notebook with extra steps. With one, it's a service.
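The trio above can be sketched without committing to any engine. A minimal plain-Python loop with exponential backoff, a dead-letter queue, and per-attempt recording — all names here are illustrative, not any engine's real API:

```python
import time

def run_step(fn, payload, max_retries=3, base_delay=1.0, dead_letter=None, log=None):
    """Run one workflow step with exponential backoff; record every attempt."""
    dead_letter = dead_letter if dead_letter is not None else []
    log = log if log is not None else []
    for attempt in range(max_retries):
        start = time.monotonic()
        try:
            result = fn(payload)
            log.append({"input": payload, "output": result,
                        "latency_s": time.monotonic() - start, "attempt": attempt})
            return result
        except Exception as exc:
            log.append({"input": payload, "error": str(exc),
                        "latency_s": time.monotonic() - start, "attempt": attempt})
            if attempt == max_retries - 1:
                dead_letter.append(payload)   # lands in the DLQ for human inspection
                raise
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```

In a real engine the log and DLQ are persisted and queryable; here they are in-memory lists, which is exactly the gap the engines in this pack close.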

Install in one command

# Install the entire pack (manifests for all 8 engines)
tokrepo install pack/workflow-orchestration

# Or pick the one your team uses
tokrepo install n8n
tokrepo install prefect
tokrepo install inngest

The TokRepo manifest gives you Docker Compose / Helm starting points plus the agent-friendly defaults (idempotency keys, retry policies, observability hooks). Run tokrepo uninstall <slug> to remove.

Common pitfalls

  • Treating retries as infinite. Default retry counts on most engines are 3–5. If your LLM provider is having a bad hour, you'll burn the budget and still fail. Cap retries at 3 for paid LLM calls and route the failure to a slower fallback model.
  • Idempotency keys done wrong. A common mistake is keying on a request_id that changes per retry. Key on a hash of the input so retries deduplicate against the same logical task.
  • Missing token-cost observability. Vanilla orchestrators record latency but not LLM token spend. Wrap your model call to emit tokens_in / tokens_out / cost_usd metrics per step; otherwise you'll blow past budget without knowing which workflow did it.
  • Choosing the wrong scheduler granularity. Cron-based engines (n8n, Activepieces) struggle with sub-minute triggers. If you need event-driven sub-second response, use Inngest, Trigger.dev, or Temporal.
  • Hosting the engine on the same node as the agent. A runaway agent will OOM the orchestrator. Run the engine on a separate node or container with hard memory limits.
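Two of those pitfalls — key selection and token-cost metrics — fit in a few lines. A hedged sketch in plain Python; the function names, metric shape, and per-1k-token prices are illustrative, not any provider's actual rates:

```python
import hashlib
import json

def idempotency_key(task_name, payload):
    """Key on a hash of the logical input, not a per-retry request_id."""
    canonical = json.dumps(payload, sort_keys=True)  # stable across dict ordering
    return f"{task_name}:{hashlib.sha256(canonical.encode()).hexdigest()[:16]}"

def record_llm_cost(metrics, step, tokens_in, tokens_out,
                    usd_per_1k_in=0.003, usd_per_1k_out=0.015):
    """Emit tokens_in / tokens_out / cost_usd per step (rates are made up here)."""
    metrics.append({
        "step": step,
        "tokens_in": tokens_in,
        "tokens_out": tokens_out,
        "cost_usd": tokens_in / 1000 * usd_per_1k_in
                    + tokens_out / 1000 * usd_per_1k_out,
    })
```

Because the key is derived from the input, a retry of the same logical task produces the same key and the engine can deduplicate it; a fresh input produces a fresh key.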

When this pack alone isn't enough

Workflow orchestration is the durability layer. It does not give you the agent itself, the LLM gateway, or the evals. Pair this pack with:

  • Python Agent Frameworks for the actual agent code that runs inside each step
  • MCP Server Stack if you want orchestrator triggers callable by AI agents over MCP
  • LLM Eval & Guardrails to bake an eval step into every workflow before production output ships

Together those four packs are the minimum stack for any AI workflow you'd let run unattended.

How to choose between the eight engines

Match the engine to where your team already lives:

  • Prefect. Lowest-friction onboarding if your team already writes Python data pipelines, because tasks look like decorated functions.
  • Inngest. Wins on developer experience if your team is TypeScript-native and event-driven.
  • n8n or Activepieces. Unlock non-engineers with a visual canvas for building their own automations.
  • Temporal. The right answer when you genuinely need distributed-systems guarantees across thousands of long-running activities.
  • Kestra. Suits JVM shops that want declarative YAML pipelines with strong plugin discovery.
  • Trigger.dev. For product engineers shipping background jobs in serverless TypeScript apps.
  • Windmill. The dark-horse pick when you want polyglot scripts plus a UI for ops.

Most teams end up running two engines in parallel, not one — that's normal, and the manifest defaults in this pack make sure both inherit the same retry and observability conventions.

What's inside

8 assets in this pack

Config#01
Awesome n8n — Workflow Automation Template Collection

Curated n8n workflow templates for data sync, notifications, CRM automation, and more. Import directly into your n8n instance.

by n8n·285 views
$ tokrepo install awesome-n8n-workflow-automation-template-collection-ab76a229
Skill#02
n8n-as-code — AI Agent Superpowers for n8n

Give your AI agent n8n superpowers with 537 node schemas, 7,700+ templates, and TypeScript workflow definitions. Works with Claude Code, Cursor, VS Code, and OpenClaw. MIT license.

by n8n·143 views
$ tokrepo install n8n-code-ai-agent-superpowers-n8n-9de58e04
Agent#03
n8n — AI-Native Workflow Automation

Open-source workflow automation with 400+ integrations and built-in AI capabilities. Build AI agents, RAG pipelines, and automation workflows with a visual editor.

by n8n·103 views
$ tokrepo install n8n-ai-native-workflow-automation-b7ec9ae7
MCP#04
n8n MCP Server — Build Automations with AI, 1,396 Nodes

MCP server giving AI agents access to 1,396 n8n nodes and 2,709 workflow templates. Build and manage n8n automations through natural language.

by n8n·81 views
$ tokrepo install n8n-mcp-server-build-automations-ai-1-396-nodes-4635d46c
Agent#05
Prefect — Python Workflow Orchestration

Prefect orchestrates resilient data pipelines in Python with scheduling, retries, caching, and event-driven automation. 22K+ stars. Apache 2.0.

by AI Open Source·130 views
$ tokrepo install prefect-python-workflow-orchestration-dfa1e8d2
Script#06
Inngest — Durable AI Workflow Orchestration

Run reliable AI workflows with automatic retries and state persistence. Replace queues and scheduling with durable step functions. TypeScript, Python, Go SDKs. 5,200+ stars.

by Script Depot·130 views
$ tokrepo install inngest-durable-ai-workflow-orchestration-f09e8059
Config#07
Kestra — Event-Driven Orchestration & Scheduling Platform

Kestra is an open-source orchestration platform for scheduling and running complex data pipelines, ETL jobs, and automation workflows with declarative YAML.

by AI Open Source·117 views
$ tokrepo install kestra-event-driven-orchestration-scheduling-platform-556ae291
Agent#08
Activepieces — Open-Source AI Workflow Automation

Open-source workflow automation with 400+ integrations and AI agent support. Visual builder, MCP server compatibility, self-hostable alternative to Zapier. 21K+ stars.

by AI Open Source·91 views
$ tokrepo install activepieces-open-source-ai-workflow-automation-13ddf27d
FAQ

Frequently asked questions

Is workflow orchestration free?

All eight engines in this pack are open source and free to self-host. n8n, Prefect, Inngest, Kestra, Activepieces, Trigger.dev, Temporal, and Windmill all publish OSS Docker images. The trade-off is you operate the engine yourself — DB, queues, scaling. Hosted SaaS tiers exist for most (n8n Cloud, Prefect Cloud, Inngest Cloud) when you'd rather pay than operate.

How does this compare to running cron + a Python script?

Cron has no retries, no idempotency, no observability, and no DLQ. Your script either succeeds or runs again next interval. For an AI agent that calls billed APIs and produces side effects, that's not safe. The cheapest version of this pack — Inngest with the free tier and a Vercel function — is roughly the cron-equivalent surface but with retries and a UI for free.

Will it work with Claude Code or Cursor agents?

Yes, but indirectly. Claude Code and Cursor are interactive coding agents — they run on your laptop. Workflow engines run on a server. The pattern: Claude Code writes the agent code, you wrap it in a step function (an Inngest function, a Prefect task), and the engine runs it on a schedule. The engine doesn't care that the step calls the Claude API.
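That hand-off can be sketched in plain Python. The @step decorator below is a stand-in for whatever your engine provides (Prefect's @task, an Inngest step function) — it is not a real API from any of the eight engines:

```python
import functools

def step(retries=3):
    """Stand-in for an engine's step decorator (e.g. Prefect's @task)."""
    def wrap(fn):
        @functools.wraps(fn)
        def run(*args, **kwargs):
            last = None
            for _ in range(retries):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:
                    last = exc
            raise last
        return run
    return wrap

# Agent code written interactively (e.g. with Claude Code) ...
def summarize(text: str) -> str:
    return text[:40]  # stand-in for a billed LLM call

# ... then wrapped as a step the engine schedules, retries, and records.
summarize_step = step(retries=3)(summarize)
```

The point is the separation: the coding agent owns the function body, the orchestrator owns the decorator's durability semantics.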

Difference vs Apache Airflow?

Airflow exists and works, but its DAG-as-Python model and slow scheduler tick make it a poor fit for AI agents that need event-driven, sub-second response and frequent dynamic branching. Prefect positioned itself as a modern Airflow replacement from the start; Inngest and Temporal come at durability from the event-driven and distributed-systems side instead. We left Airflow out of this pack because teams that try it for AI agents tend to migrate within months.

Operational gotcha I should plan for?

Backpressure. When a downstream LLM provider slows down, your orchestrator queues will balloon and OOM the engine. Set per-queue concurrency limits and a global rate limiter on outbound LLM calls before you hit production scale. Inngest and Trigger.dev have this built in; n8n and Prefect need explicit configuration.
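A minimal outbound gate for those limits, assuming nothing about any engine: a semaphore caps concurrent LLM calls and a simple spaced-slot scheme caps call rate. Class name and both numbers are illustrative:

```python
import threading
import time

class LLMCallGate:
    """Cap concurrent outbound LLM calls and their rate before queues balloon."""

    def __init__(self, max_concurrent=8, calls_per_second=5.0):
        self._sem = threading.Semaphore(max_concurrent)
        self._interval = 1.0 / calls_per_second
        self._lock = threading.Lock()
        self._next_slot = 0.0

    def __enter__(self):
        self._sem.acquire()           # concurrency limit
        with self._lock:              # rate limit: hand out evenly spaced slots
            now = time.monotonic()
            wait = max(0.0, self._next_slot - now)
            self._next_slot = max(now, self._next_slot) + self._interval
        if wait:
            time.sleep(wait)
        return self

    def __exit__(self, *exc):
        self._sem.release()
        return False
```

Usage is a with-block around the model call (`with gate: call_llm(...)`), so a slow provider throttles your workers instead of letting the queue grow without bound.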
