Why Choose It
agno’s pitch centers on speed + observability. Agent instantiation runs in microseconds (benchmark published in the README), which matters when you spin up agents per-request or per-user at high throughput. The Agent UI reads the agent storage and renders a live timeline of reasoning, tool calls, and knowledge retrieval — no separate observability service to configure.
Relative to CrewAI, agno feels lighter and more modular. You compose agents, teams, memory, and knowledge from small building blocks. There is no prescribed standard operating procedure you must buy into. For some teams this is liberating; for others CrewAI's more opinionated structure ships faster.
Teams in agno are practical — not as powerful as LangGraph for complex control flow, but enough for the vast majority of "three specialists plus a coordinator" workflows. Combined with memory, knowledge, and a growing tool catalog, agno is one of the most feature-complete "batteries included" frameworks in 2026.
Quick Start — Single Agent with Memory + Knowledge
memory=Memory(...) persists across runs; knowledge=PDFUrlKnowledgeBase(...) + vector_db=... gives the agent RAG against your docs. enable_agentic_memory=True lets the agent decide what to remember itself (akin to Letta). add_references=True injects source attributions into replies.
```python
# pip install -U agno openai lancedb
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.memory.v2.memory import Memory
from agno.memory.v2.db.sqlite import SqliteMemoryDb
from agno.knowledge.pdf_url import PDFUrlKnowledgeBase
from agno.vectordb.lancedb import LanceDb, SearchType

# Persistent memory across sessions
memory = Memory(
    db=SqliteMemoryDb(table_name="user_memory", db_file="tmp/agent.db"),
    model=OpenAIChat(id="gpt-4o-mini"),
)

# Knowledge base from a public PDF
kb = PDFUrlKnowledgeBase(
    urls=["https://arxiv.org/pdf/2310.08560"],  # the MemGPT paper
    vector_db=LanceDb(table_name="agent_kb", uri="tmp/lancedb", search_type=SearchType.hybrid),
)
kb.load(recreate=False)  # index once

agent = Agent(
    model=OpenAIChat(id="gpt-4o-mini"),
    memory=memory,
    knowledge=kb,
    user_id="william",
    enable_agentic_memory=True,  # agent can write/read its own memory
    add_references=True,         # cite knowledge passages in replies
    markdown=True,
)

agent.print_response(
    "What does the MemGPT paper say about paged memory? Remember that I care about Python examples.",
    stream=True,
)
# Next session the same agent remembers the "Python examples" preference,
# and answers about MemGPT using retrieved passages with citations.
```

Core Capabilities
Microsecond instantiation
agno is optimized for spinning up agents per request or per user. Benchmarks show ~10μs vs milliseconds in other frameworks — significant at high concurrency.
First-class memory (v2)
Short-term + long-term memory with SQLite/Postgres storage. agentic_memory mode lets the agent decide what to persist — similar pattern to Letta, with lower setup overhead.
Built-in knowledge (RAG)
PDF, URL, text, and Wikipedia knowledge sources. Pluggable vector DBs (LanceDB, PgVector, Qdrant, Chroma, Weaviate). Hybrid search and re-ranking built in.
Teams for multi-agent
Team class with coordinate / collaborate / route modes. Each member has independent instructions, tools, and model. Coordinator LLM decides flow in coordinate mode.
Massive tool catalog
80+ tools: search (DuckDuckGo, Tavily, Exa), finance (YFinance), comms (Slack, Gmail, Discord, WhatsApp), data (SQL, Python REPL, Snowflake), scraping (Firecrawl), and more. Easy to write custom @tool functions.
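A minimal sketch of a custom tool, assuming agno's convention of passing plain Python functions to `Agent(tools=[...])`, where the docstring becomes the description the model sees. The lookup table is a stand-in for a real API call, and the agent wiring is commented out:

```python
# Custom tool sketch: any typed, docstring'd Python function can serve
# as a tool. The dict below stands in for a real database or API call.
def get_order_status(order_id: str) -> str:
    """Return the shipping status for an order ID."""
    fake_db = {"A-1001": "shipped", "A-1002": "processing"}
    return fake_db.get(order_id, "unknown")

# from agno.agent import Agent
# from agno.models.openai import OpenAIChat
# agent = Agent(model=OpenAIChat(id="gpt-4o-mini"), tools=[get_order_status])
# agent.print_response("Where is order A-1001?")
```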
Agent UI
Next.js dashboard that reads agent storage and shows a real-time timeline of every run — messages, tool calls, reasoning, knowledge hits, memory updates. The headline feature that keeps phidata users on it.
Comparison
| Framework | Strength | Team Abstraction | Observability | Learning Curve |
|---|---|---|---|---|
| agno | Fast + UI + batteries-included | Team (3 modes) | Agent UI | Low |
| CrewAI | Mature role abstraction | Crew + tasks | Enterprise UI (paid) | Low |
| LangGraph | Reliable control flow | Graph + supervisors | LangGraph Studio + LangSmith | Medium |
| AutoGen | Research-strong | GroupChat | Studio + trace logs | Medium |
Real-World Use Cases
01. Observable production agents
Apps where ops needs to see inside the agent. Agent UI gives a timeline of reasoning, tool calls, memory, and retrieval hits without a separate observability vendor.
02. Knowledge-heavy assistants
The integrated knowledge + memory stack handles document RAG plus user-specific facts in one library — less plumbing than wiring mem0 + Langfuse + a vector DB separately.
03. High-concurrency APIs
Per-request agent instantiation for personalized answers. agno’s microsecond overhead makes "new agent per user" viable without an in-memory cache.
Pricing & Licensing
agno: MIT open source. Free to self-host.
agno Cloud / managed: platform offering from the Agno team at agno.com. Pay for hosting and managed Agent UI. Self-hosting remains first-class and fully supported.
Model + DB costs: LLM API plus your vector DB of choice. LanceDB is a cheap default (embedded, no service to run); Postgres + pgvector if you already run Postgres.
Related TokRepo Assets
Open Interpreter — AI That Runs Code on Your Computer
Natural language interface that executes Python, JS, and shell commands on your computer. Local-first, model-agnostic. 63K+ stars.
Engram — Persistent Memory System for AI Agents
Agent-agnostic persistent memory system with SQLite full-text search. Ships as MCP server, HTTP API, CLI, and TUI. Gives any AI coding agent long-term memory across sessions. 2,300+ stars.
Haystack — Production RAG & Agent Framework
Build composable AI pipelines for RAG, agents, and search. Model-agnostic, production-ready, by deepset. 18K+ stars.
Centrifugo — Scalable Real-Time Messaging Server
Centrifugo is a scalable real-time messaging server that adds live updates to any application. It handles WebSocket connections, scales horizontally with Redis or NATS, and provides a language-agnostic API — a self-hosted alternative to Pusher, Ably, and Socket.IO.
FAQ
agno vs phidata — do I need to migrate?
phidata was renamed to agno in 2025. Imports change from phi.* to agno.*; API is largely identical. Follow the migration guide in the repo — in most cases it is a find-and-replace plus a pip install -U agno.
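A minimal stdlib-only sketch of that find-and-replace on a throwaway file; the repo's migration guide remains authoritative, and a few submodule paths may have changed beyond the prefix, so review the diff before committing:

```python
# Migration sketch: rewrite phidata-era imports (phi.*) to agno.*
# in place. demo_agent.py is a throwaway file created for the demo.
from pathlib import Path

src = Path("demo_agent.py")
src.write_text("from phi.agent import Agent\n")

text = src.read_text().replace("from phi.", "from agno.")
src.write_text(text)
print(src.read_text())  # prints "from agno.agent import Agent"
```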
agno vs CrewAI — when to pick which?
agno if you value low overhead, built-in memory/knowledge, and the Agent UI. CrewAI if you want the more opinionated role/task abstraction and a larger community. Both are fast to ship with; pick based on which defaults match your project.
Does agno replace LangChain?
No — different scope. LangChain is a broad toolkit for LLM applications (retrievers, loaders, tools, agents). agno is a focused agent framework with optional knowledge and memory. You can use LangChain components inside agno tools or run them independently.
How does agno’s Team compare to LangGraph?
agno Teams cover the "coordinator + specialists" pattern cleanly. LangGraph covers arbitrary graphs including loops, conditionals, and HITL. Reach for LangGraph when agno’s Team modes feel constraining; most real workflows fit one of the three Team modes.
Is Agent UI self-hostable?
Yes. The Next.js app ships in the repo and runs locally or on your own server. For production, protect it behind auth — treat it like any internal ops tool.