为什么选它
Swarm’s core idea was multi-agent coordination as tool calls. An agent doesn’t pass messages to a central orchestrator — it simply has a tool transfer_to_specialist_agent() that changes which agent receives the next user message. The whole framework fits in a few hundred lines of Python. It was designed to teach a pattern, not to replace CrewAI or LangGraph.
The pattern was compelling enough that OpenAI promoted it into the official Agents SDK in 2025 — Python + TypeScript, with production features layered on: built-in tracing, guardrails (input/output validation), streaming, tool schemas, and deep integration with the OpenAI Responses API. The handoff-via-tool-call mental model is unchanged; everything around it is hardened.
Reach for this when you want minimal abstraction + OpenAI ecosystem tightness. You give up some of LangGraph’s control-flow power and CrewAI’s opinionated role model in exchange for a very small API surface that composes with OpenAI primitives naturally.
Quick Start — Agents SDK Handoff Example
Handoffs are first-class: declare them on the agent, and the framework injects a transfer_to_X tool automatically. Tracing is enabled by default and visible in the OpenAI dashboard. For TypeScript, install @openai/agents — same model with identical API.
# pip install openai-agents
from agents import Agent, Runner, handoff
triage = Agent(
name="Triage",
instructions="Route to the right specialist. Use handoffs; do not answer directly.",
)
billing = Agent(
name="Billing",
instructions="Answer billing questions only. Be precise.",
)
support = Agent(
name="Support",
instructions="Answer product/support questions. Be friendly.",
)
# Triage can hand off to either specialist
triage.handoffs = [handoff(billing), handoff(support)]
# Single user message, routed automatically
result = Runner.run_sync(triage, "My card was charged twice last month.")
print(result.final_output) # handled by Billing
print(result.last_agent.name) # -> "Billing"
# Another
result2 = Runner.run_sync(triage, "How do I reset my 2FA code?")
print(result2.final_output) # handled by Support核心能力
Agents + handoffs
An Agent is an LLM + system prompt + tools. Handoffs are other agents wrapped as tools. Routing is "agent decides which handoff tool to call" — no external orchestrator.
Built-in tracing
Every run produces a trace visible in OpenAI’s dashboard (platform.openai.com/traces). Span-level visibility into LLM calls, tool calls, and handoffs without separate instrumentation.
Guardrails
Input and output guardrails validate messages before/after LLM calls. Block prompt injection, enforce JSON schemas, catch hallucinations. First-class API.
Streaming + async
Runner.run_streamed yields events as they happen (token stream, tool call, handoff). Full async support for high-concurrency services.
Tool-first thinking
Everything is a tool: handoffs, function calls, computer use, web search. Composable and small. No framework-specific abstractions on top of what the model already knows.
TypeScript parity
Python and TypeScript SDKs with matching APIs. Useful for Node.js backends and Edge deployments (Cloudflare Workers, Vercel Edge).
对比
| Abstraction Size | Vendor Lock-in | Production Ready | Best Fit | |
|---|---|---|---|---|
| OpenAI Agents SDKthis | Tiny (handoffs as tools) | OpenAI tracing tied-in; models not | Yes (2025) | OpenAI-first apps |
| Swarm (2024 edu) | Tiny | None | No (educational) | Understanding the pattern |
| CrewAI | Medium (roles + tasks) | None | Yes | Role-driven pipelines |
| LangGraph | Large (state graph) | None | Yes | Complex control flow |
实际用例
01. Customer support triage
A generalist agent routes to billing/tech/account specialists via handoffs. Each specialist has only the tools and instructions it needs — simpler than a single mega-agent with many tools.
02. OpenAI-centric apps
If you’re already paying OpenAI and using Responses API, Agents SDK is the path of least resistance — tracing, guardrails, and function calling all live in the same ecosystem.
03. Multi-language deployments
Python backend + TypeScript edge code with identical APIs. No re-architecting when moving agents between runtimes.
价格与许可
OpenAI Agents SDK: MIT licensed, free to use. No per-agent licensing. openai-agents-python / openai-agents-js.
Tracing: included free with OpenAI API usage. Traces visible at platform.openai.com/traces for all users — no separate observability subscription.
Model cost: you pay the underlying OpenAI API. Handoffs add a modest number of extra turns vs. a monolithic prompt — typically 20-40% more tokens for meaningfully better structure.
相关 TokRepo 资产
OpenAI Swarm — Lightweight Multi-Agent Orchestration
Educational multi-agent framework by OpenAI. Ergonomic agent handoffs, tool calling, and context variables. Minimal abstraction over Chat Completions API. 21K+ stars.
CapRover — Automated Docker Deployment PaaS with Web UI
CapRover is an open-source PaaS (Heroku on Steroids) with a beautiful web UI, one-click apps, automatic HTTPS, and Docker Swarm support. Deploy any app in seconds.
Portainer — Docker & Kubernetes Management Made Easy
Portainer is an open-source container management UI for Docker, Swarm, and Kubernetes. Deploy, monitor, and manage containers through an intuitive web interface.
Claude Swarm — Multi-Agent Orchestration with SDK
Python-based multi-agent orchestration built on Claude Agent SDK. Opus decomposes tasks, Haiku workers execute in parallel waves with real-time TUI dashboard and budget control.
常见问题
Is Swarm still maintained?+
The original swarm repo is archived as an educational reference. The production-ready successor is the OpenAI Agents SDK (openai-agents-python and openai-agents-js). New projects should use Agents SDK.
Can Agents SDK use non-OpenAI models?+
Yes — the Agent accepts any compatible model client. In practice most integrations are OpenAI-tight (tracing, Responses API), but the core handoff pattern works with any LLM.
Agents SDK vs CrewAI — which should I pick?+
Agents SDK if you value minimal abstraction + OpenAI integration and have simple routing needs. CrewAI if you want richer role/task modeling and plan for more than a handful of agents.
How does tracing compare to LangSmith / Langfuse?+
Agents SDK tracing is OpenAI-specific and free out of the box, showing runs, handoffs, and tool calls. LangSmith/Langfuse are more general-purpose and have richer eval/dataset features. Use Agents SDK tracing for default ops; add Langfuse when you need deeper LLM engineering workflows.
Does the Agents SDK support guardrails for prompt injection?+
Yes. Input guardrails run before the LLM; output guardrails run after. Define as simple async functions; throw to block the call. Not a silver bullet for security, but a clean place to enforce policy.