TOKREPO · ARSENAL
Stable

Python Agent Frameworks

Phidata, AGiXT, AutoGPT, OpenAI Swarm, CrewAI — Python-first frameworks for shipping multi-agent systems beyond the LangGraph default.

5 assets

What's in this pack

This pack collects five Python-first agent frameworks that ship enough production code to use today, selected by GitHub stars and active commit cadence. Each one represents a different design philosophy — pick by the shape of your problem, not the brand name.

  #   Framework      Style                   Best for
  1   Phidata        data-app-style agents   dashboards + tool-calling agents
  2   AGiXT          full agent platform     self-hosted, multi-provider
  3   AutoGPT        autonomous loop         open-ended goal pursuit
  4   OpenAI Swarm   hand-off routing        lightweight multi-agent
  5   CrewAI         role-based crews        sequential team workflows

The list intentionally excludes LangGraph (covered as a default in most LangChain stacks) and the JS-only frameworks. It covers the design space Python users actually pick from when LangGraph's StateGraph feels too heavyweight.

Why "stay in Python" matters

Most teams have a Python data stack already — pandas, FastAPI, Postgres drivers, the ML libs. Switching to a TypeScript or Go agent runtime means rewriting the data plumbing for one feature. These five frameworks sidestep that entirely. They run inside your existing FastAPI service, share your venv, and call the same DB driver your backend already uses.

The trade-off shape:

  • Phidata is the most "Python-y" — agents are classes with tool methods, very SQLAlchemy-flavored. Best when your agent IS a data app (dashboards, internal tools).
  • AGiXT is a full platform with a UI, provider abstraction, chains, extensions. Heaviest install, most features out of the box.
  • AutoGPT is the autonomous-loop archetype — goal in, plan + execute + reflect, repeat. Heavier on token cost; lighter on dev work.
  • OpenAI Swarm is the smallest — agents hand off to each other via a transfer_to_X() function. Experimental but the cleanest mental model for routing.
  • CrewAI lets you define Agent, Task, Crew objects — explicit roles, sequential or hierarchical execution. Great for "marketing → review → publish" pipelines.
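The hand-off idea behind Swarm's routing is easy to see without the library: an agent's tool call can return another agent, and the loop swaps control to it. A framework-free sketch — all names and the keyword routing here are illustrative, not Swarm's actual API:

```python
# Framework-free sketch of the Swarm-style handoff pattern.
# Agent names and routing logic are illustrative, not Swarm's real API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    handle: Callable[[str], object]  # returns a reply str or another Agent

def triage(message: str):
    # Route by keyword; a real system would let the LLM pick the transfer tool.
    if "refund" in message.lower():
        return billing_agent
    return f"[triage] answering directly: {message}"

def billing(message: str):
    return f"[billing] processing refund request: {message}"

triage_agent = Agent("triage", triage)
billing_agent = Agent("billing", billing)

def run(agent: Agent, message: str) -> str:
    # Keep handing off until an agent returns a final string reply.
    while True:
        result = agent.handle(message)
        if isinstance(result, Agent):
            agent = result          # hand-off: swap the active agent
        else:
            return result

print(run(triage_agent, "I want a refund"))
# -> [billing] processing refund request: I want a refund
```

The whole pattern is that while loop plus the "return an Agent to transfer" convention; Swarm wraps the same idea around the Chat Completions API.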

Install in one command

# Install the pack — drops requirements + example agents into your project
tokrepo install pack/python-agent-frameworks

# Or pick one
tokrepo install crewai
tokrepo install openai-swarm
tokrepo install autogpt

The TokRepo CLI fetches each framework's getting-started template, places it under agents/<framework>/, and adds dependencies to pyproject.toml or requirements.txt. Run pytest agents/ to validate the examples before wiring to your real prompts.

Common pitfalls

  • Don't use AutoGPT for closed-form tasks. Its strength is open-ended goals. For "summarize this PDF" you'll burn tokens reflecting on a problem that has one obvious step.
  • Swarm is experimental, not production. OpenAI ships it as a "design pattern" repo. Use the routing pattern; build the production loop yourself or copy it into a maintained fork.
  • CrewAI sequential mode hides parallelism. If two tasks have no dependency, declare them as parallel tasks — sequential mode runs them one after another even when they could overlap.
  • Phidata's storage layer assumes Postgres. SQLite works for local dev but the storage interface is built around JSONB. Plan a real Postgres for any deployed run.
  • AGiXT updates fast. The schema between minor versions has broken before. Pin a specific tag in production and read the changelog before upgrading.
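The cost of the sequential-mode pitfall is easy to demonstrate in plain asyncio, outside any framework: two independent steps finish in roughly the time of one when gathered, versus the sum when awaited back-to-back. A framework-free sketch with a sleep standing in for an LLM call:

```python
import asyncio
import time

async def research(topic: str) -> str:
    await asyncio.sleep(0.2)        # stand-in for a slow LLM call
    return f"notes on {topic}"

async def sequential() -> float:
    start = time.perf_counter()
    await research("pricing")       # one after the other, like a
    await research("competitors")   # sequential crew with no dependency
    return time.perf_counter() - start

async def parallel() -> float:
    start = time.perf_counter()
    # Independent tasks: run concurrently instead of back-to-back
    await asyncio.gather(research("pricing"), research("competitors"))
    return time.perf_counter() - start

print(asyncio.run(sequential()))  # ~0.4s
print(asyncio.run(parallel()))    # ~0.2s
```

With real LLM latencies in the seconds, the same shape saves wall-clock time on every crew run.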

When this pack alone isn't enough

If your problem is one Python-only service, this pack is enough. If your stack spans other languages or needs evaluation tooling before shipping, these frameworks still slot in as one piece of a larger system.

You can also combine: a CrewAI orchestrator written in Python that calls a Spring AI agent in Java for a specific tool, evaluated by Promptfoo before shipping. The frameworks here are agent runtimes — they don't dictate the rest of your stack.

What's inside

5 assets in this pack

Script#01
Phidata — Build & Deploy AI Agents at Scale

Framework for building, running, and managing AI agents at scale. Memory, knowledge, tools, reasoning, and team workflows. Monitoring dashboard included. 39K+ stars.

by Script Depot
$ tokrepo install phidata-build-deploy-ai-agents-scale-d1627127
Script#02
AGiXT — Extensible AI Agent Automation Framework

Open-source AI agent automation platform with 50+ provider integrations, plugin system, chain-of-thought workflows, and persistent memory. Self-hostable via Docker.

by Agent Toolkit
$ tokrepo install agixt-extensible-ai-agent-automation-framework-6528233f
Script#03
AutoGPT — Autonomous AI Agent Platform

Build and deploy autonomous AI agents that accomplish goals with minimal human input. Visual builder, marketplace, and API. The original autonomous agent. 183K+ stars.

by Script Depot
$ tokrepo install autogpt-autonomous-ai-agent-platform-6764deda
Script#04
OpenAI Swarm — Lightweight Multi-Agent Orchestration

Educational multi-agent framework by OpenAI. Ergonomic agent handoffs, tool calling, and context variables. Minimal abstraction over Chat Completions API. 21K+ stars.

by OpenAI
$ tokrepo install openai-swarm-lightweight-multi-agent-orchestration-d75dad10
Script#05
CrewAI — Multi-Agent Orchestration Framework

Python framework for orchestrating role-playing AI agents that collaborate on complex tasks. Define agents with roles, goals, and tools, then let them work together autonomously. 25K+ stars.

by Agent Toolkit
$ tokrepo install crewai-multi-agent-orchestration-framework-97fce2da
FAQ

Frequently asked questions

Are these frameworks free?

All five are MIT or Apache 2.0 open source — no per-seat costs, no usage caps from the framework itself. You'll still pay for the LLM API behind them (OpenAI, Anthropic, etc), and AGiXT's hosted version costs money. Self-hosting any of the five is genuinely free, including Phidata's optional managed dashboard.

How does CrewAI compare to LangGraph?

CrewAI is role-based and sequential by default — you describe Agents and Tasks, then assemble a Crew. LangGraph is graph-based — you draw nodes and edges yourself. CrewAI is faster to express team workflows; LangGraph is better when control flow is irregular or has cycles. Many teams prototype in CrewAI then port hot paths to LangGraph for control.

Will any of these work with Claude Code or Cursor?

These frameworks are runtimes, not editor integrations. You build agents with them and run the result as a Python service. Your editor (Claude Code, Cursor) is where you write the code, but the agent itself runs as your service. For editor-side subagents, see the Awesome Claude Code Subagents pack instead.

How does this differ from the Multi-Agent Frameworks pack?

Multi-Agent Frameworks is platform-agnostic — it includes JS, Go, and Python options side-by-side for comparison shopping. This pack is Python-only, deeper into the Python design choices. If you're committed to Python, start here. If you're picking the language too, start with Multi-Agent Frameworks.

What's the operational gotcha with AutoGPT?

It will happily loop forever if you don't cap iterations or token budget. The default config can spend $50 of OpenAI credits on a single 'analyze the market' task before noticing it's circling. Always set max_iterations, max_cost_in_usd, and a strict goal — a vague goal plus an unlimited budget is the failure mode.
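The caps are worth enforcing in your own code as well, independent of whatever the framework's config offers. A framework-agnostic guard around any autonomous loop — the step signature and cost model here are illustrative, not AutoGPT's actual schema:

```python
# Framework-agnostic budget guard around an autonomous loop.
# The step() contract and cost model are illustrative assumptions.
class BudgetExceeded(RuntimeError):
    pass

def run_with_budget(step, max_iterations: int = 10, max_cost_usd: float = 1.0):
    """Call step(i) until it yields a result, or a cap trips."""
    spent = 0.0
    for i in range(max_iterations):
        result, cost = step(i)          # step returns (result-or-None, USD cost)
        spent += cost
        if spent > max_cost_usd:
            raise BudgetExceeded(f"spent ${spent:.2f} after {i + 1} iterations")
        if result is not None:
            return result
    raise BudgetExceeded(f"no result after {max_iterations} iterations")

# Toy step: "finishes" on the third iteration, each call costs $0.05
print(run_with_budget(lambda i: ("done" if i == 2 else None, 0.05)))
# -> done
```

Raising instead of silently stopping forces the caller to decide whether a blown budget is retryable — which is exactly the decision an unattended agent loop otherwise makes for you.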
