Multi-Agent Framework
agno — Fast, Production-Ready Agent Framework (Phidata Successor)

agno is the 2025 rebrand of phidata — a Python agent framework designed for low instantiation overhead, with built-in memory, knowledge, teams, and a full-featured Agent UI for production observability.

Why agno

agno’s pitch centers on speed + observability. Agent instantiation runs in microseconds (benchmark published in the README), which matters when you spin up agents per-request or per-user at high throughput. The Agent UI reads the agent storage and renders a live timeline of reasoning, tool calls, and knowledge retrieval — no separate observability service to configure.

Relative to CrewAI, agno feels lighter and more modular. You compose agents, teams, memory, and knowledge from small building blocks, with no prescribed workflow you must buy into. For some teams this is liberating; for others, CrewAI’s more opinionated structure ships faster.

Teams in agno are practical — not as powerful as LangGraph for complex control flow, but enough for the vast majority of "three specialists plus a coordinator" workflows. Combined with memory, knowledge, and a growing tool catalog, agno is one of the most feature-complete "batteries included" frameworks in 2026.

Quick Start — Single Agent with Memory + Knowledge

memory=Memory(...) persists across runs; knowledge=PDFUrlKnowledgeBase(...) + vector_db=... gives the agent RAG against your docs. enable_agentic_memory=True lets the agent decide what to remember itself (akin to Letta). add_references=True injects source attributions into replies.

# pip install -U agno openai lancedb
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.memory.v2.memory import Memory
from agno.memory.v2.db.sqlite import SqliteMemoryDb
from agno.knowledge.pdf_url import PDFUrlKnowledgeBase
from agno.vectordb.lancedb import LanceDb, SearchType

# Persistent memory across sessions
memory = Memory(
    db=SqliteMemoryDb(table_name="user_memory", db_file="tmp/agent.db"),
    model=OpenAIChat(id="gpt-4o-mini"),
)

# Knowledge base from a public PDF
kb = PDFUrlKnowledgeBase(
    urls=["https://arxiv.org/pdf/2310.08560"],         # the MemGPT paper
    vector_db=LanceDb(table_name="agent_kb", uri="tmp/lancedb", search_type=SearchType.hybrid),
)
kb.load(recreate=False)   # index once

agent = Agent(
    model=OpenAIChat(id="gpt-4o-mini"),
    memory=memory,
    knowledge=kb,
    user_id="william",
    enable_agentic_memory=True,   # agent can write/read its own memory
    add_references=True,          # cite knowledge passages in replies
    markdown=True,
)

agent.print_response("What does the MemGPT paper say about paged memory? Remember that I care about Python examples.",
                     stream=True)

# Next session the same agent remembers the "Python examples" preference,
# and answers about MemGPT using retrieved passages with citations.

Key Features

Microsecond instantiation

agno is optimized for spinning up agents per request or per user. Benchmarks show ~10μs vs milliseconds in other frameworks — significant at high concurrency.

First-class memory (v2)

Short-term + long-term memory with SQLite/Postgres storage. agentic_memory mode lets the agent decide what to persist — similar pattern to Letta, with lower setup overhead.

Built-in knowledge (RAG)

PDF, URL, text, and Wikipedia knowledge sources. Pluggable vector DBs (LanceDB, PgVector, Qdrant, Chroma, Weaviate). Hybrid search and re-ranking built in.

Teams for multi-agent

Team class with coordinate / collaborate / route modes. Each member has independent instructions, tools, and model. Coordinator LLM decides flow in coordinate mode.

Massive tool catalog

80+ tools: search (DuckDuckGo, Tavily, Exa), finance (YFinance), comms (Slack, Gmail, Discord, WhatsApp), data (SQL, Python REPL, Snowflake), scraping (Firecrawl), and more. Easy to write custom @tool functions.

Agent UI

Next.js dashboard that reads agent storage and shows a real-time timeline of every run — messages, tool calls, reasoning, knowledge hits, memory updates. The headline feature that keeps phidata users on it.

Comparison

| Framework | Strength | Team Abstraction | Observability | Learning Curve |
| --- | --- | --- | --- | --- |
| agno | Fast + UI + batteries-included | Team (3 modes) | Agent UI | Low |
| CrewAI | Mature role abstraction | Crew + tasks | Enterprise UI (paid) | Low |
| LangGraph | Reliable control flow | Graph + supervisors | LangGraph Studio + LangSmith | Medium |
| AutoGen | Research-strong | GroupChat | Studio + trace logs | Medium |

Use Cases

01. Observable production agents

Apps where ops needs to see inside the agent. Agent UI gives a timeline of reasoning, tool calls, memory, and retrieval hits without a separate observability vendor.

02. Knowledge-heavy assistants

The integrated knowledge + memory stack handles document RAG plus user-specific facts in one library — less plumbing than wiring mem0 + Langfuse + a vector DB separately.

03. High-concurrency APIs

Per-request agent instantiation for personalized answers. agno’s microsecond overhead makes "new agent per user" viable without an in-memory cache.

Pricing & License

agno: MIT open source. Free to self-host.

agno Cloud / managed: platform offering from the Agno team at agno.com. Pay for hosting and managed Agent UI. Self-hosting remains first-class and fully supported.

Model + DB costs: LLM API plus your vector DB of choice. LanceDB is a cheap default (embedded, no service to run); Postgres + pgvector if you already run Postgres.

Frequently Asked Questions

agno vs phidata — do I need to migrate?

phidata was renamed to agno in 2025. Imports change from phi.* to agno.*; API is largely identical. Follow the migration guide in the repo — in most cases it is a find-and-replace plus a pip install -U agno.
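The find-and-replace step can be tried on a scratch file first. A hedged sketch using GNU sed (macOS sed needs `-i ''`); note that some module paths changed beyond the `phi.` → `agno.` prefix, so follow the repo's migration guide for the full mapping.

```shell
# Try the rename on a scratch copy before running it over a real tree
printf 'from phi.agent import Agent\n' > /tmp/phi_demo.py
sed -i 's/\bphi\./agno./g' /tmp/phi_demo.py
cat /tmp/phi_demo.py   # from agno.agent import Agent
```

The same `sed` expression, fed a file list from `grep -rl 'phi\.'`, covers most of a project in one pass.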

agno vs CrewAI — when to pick which?

agno if you value low overhead, built-in memory/knowledge, and the Agent UI. CrewAI if you want the more opinionated role/task abstraction and a larger community. Both are fast to ship with; pick based on which defaults match your project.

Does agno replace LangChain?

No — different scope. LangChain is a broad toolkit for LLM applications (retrievers, loaders, tools, agents). agno is a focused agent framework with optional knowledge and memory. You can use LangChain components inside agno tools or run them independently.

How does agno’s Team compare to LangGraph?

agno Teams cover the "coordinator + specialists" pattern cleanly. LangGraph covers arbitrary graphs including loops, conditionals, and HITL. Reach for LangGraph when agno’s Team modes feel constraining; most real workflows fit one of the three Team modes.

Is Agent UI self-hostable?

Yes. The Next.js app ships in the repo and runs locally or on your own server. For production, protect it behind auth — treat it like any internal ops tool.
