Multi-Agent Framework
phidata — Observable, Memory-First Agent Framework


phidata (renamed to agno in 2025) is a Python agent framework with built-in memory, knowledge, tools, and an Agent UI, designed so that agent behavior is fully visible.

Why Choose It

phidata’s pitch is "agents you can actually see". Every agent writes its memory, knowledge retrieval, tool calls, and responses to a Postgres or SQLite store, and the Agent UI renders those as a live timeline. You debug agents by scrolling a dashboard instead of grepping logs. In 2025 the project rebranded to agno (now at agno.com), keeping the phidata API largely intact.

Multi-agent support in phidata/agno is lightweight but practical. Use the Team primitive to coordinate agents; each team member has its own model, tools, and instructions, and the team has a coordinator that routes tasks. Less elaborate than CrewAI’s SOPs or LangGraph’s graphs, but enough for most real-world pipelines.

Where phidata stands out is the "batteries included" defaults: memory, knowledge (vector DB wrappers), tools, UI, and a clean CLI. You’re productive in an hour, and your ops team can see what’s happening without separate instrumentation.

Quick Start — Team of Three with Agent UI

Team.mode="coordinate" uses a coordinator LLM to decide which member answers next. "collaborate" broadcasts the task to all members. "route" picks a single member. print_response streams the whole exchange; persist it by setting storage in the Agent/Team constructors.

# pip install -U agno openai duckduckgo-search yfinance
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.team import Team
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.tools.yfinance import YFinanceTools

web_agent = Agent(
    name="Web Researcher",
    model=OpenAIChat(id="gpt-4o-mini"),
    tools=[DuckDuckGoTools()],
    instructions="Search the web, cite sources.",
)
finance_agent = Agent(
    name="Finance Analyst",
    model=OpenAIChat(id="gpt-4o-mini"),
    tools=[YFinanceTools(stock_price=True, analyst_recommendations=True)],
    instructions="Use financial data only. No speculation.",
)

team = Team(
    name="Investment Research",
    mode="coordinate",              # or "collaborate" / "route"
    model=OpenAIChat(id="gpt-4o-mini"),
    members=[web_agent, finance_agent],
    instructions="Produce a crisp investment brief with web + financial data.",
    success_criteria="A 2-paragraph brief with at least one web source and one financial figure.",
    show_tool_calls=True,
    markdown=True,
)

team.print_response("Analyst brief on Anthropic circa 2026", stream=True)

# Launch the Agent UI to watch teams + agents visually
# agno playground start           (reads from ~/.agno/settings.yaml)

Core Capabilities

Team modes

coordinate (coordinator picks next member), collaborate (all members respond), route (single member). Simple knobs; cover most real multi-agent patterns.

Built-in memory

Short-term (per-run) and long-term (cross-run) memory out of the box, backed by SQLite, Postgres, or SingleStore. No separate mem0/Zep integration needed for basic cases.

Knowledge (RAG)

PDF, URL, text, and CSV knowledge sources with pluggable vector DBs (LanceDB, PgVector, Qdrant, Chroma). Agents retrieve automatically.

Agent UI

Next.js app that reads the agent store and renders a timeline — messages, tool calls, reasoning, knowledge hits. Hosted locally or on your own server. The main reason many users pick phidata/agno.

Massive tool catalog

80+ built-in tools: DuckDuckGo/Tavily search, YFinance, Slack, Gmail, GitHub, shell, Python REPL, SQL, PostgresTools, and plenty more. Tool bloat is real; cherry-pick.

Fast runtime

agno emphasizes low-overhead agent instantiation (microseconds) and small memory footprint — important for concurrent agents. Faster than CrewAI on simple loops per their published benchmarks.

Comparison

| Framework | Built-in Observability | Multi-Agent Complexity | Learning Curve | Best Fit |
|---|---|---|---|---|
| phidata / agno | Agent UI out of the box | Low-medium | Low | Observable production agents |
| CrewAI | Via CrewAI Enterprise UI | Medium | Low | Role-based pipelines |
| LangGraph | Via LangGraph Studio + LangSmith | High | Medium | Complex control flow |
| AutoGen | Via Studio + trace logs | Medium | Medium | Research, coding |

Real-World Use Cases

01. Observable single-agent products

Even without multi-agent needs, phidata/agno is a strong single-agent framework because of the UI. If you're shipping a product where ops needs to see agent reasoning, start here.

02. Research teams (finance, legal, analyst)

Multiple specialists (web, finance, SEC filings) coordinated by a team lead agent. Coordinate mode + show_tool_calls gives end users transparent citations.

03. Knowledge-heavy assistants

The built-in knowledge + vector DB combo handles RAG without extra integration work. Great for internal assistants over your docs.

Pricing & Licensing

agno (phidata) OSS: MIT licensed. Free. pip install -U agno.

Agent UI: open-source dashboard. Runs locally or on a server you control. Optional managed hosting on agno.com.

Infra cost: SQLite for dev, Postgres for production. Vector DB of your choice. Plus LLM API costs — agno’s model=OpenAIChat(...) abstracts any OpenAI-compatible endpoint.


FAQ

phidata vs agno — are they different?

Same project; 2025 rebrand from phidata to agno. The Python package was renamed to agno; imports change from phi.* to agno.*. The phidata repo redirects to agno-agi/agno on GitHub.

agno vs CrewAI?

agno ships with a first-class Agent UI and heavier "batteries included" feel (memory, knowledge, tools all present). CrewAI has a stronger role/task abstraction and a larger community. Try both on a small example; preference is usually immediate.

Does agno support local models?

Yes. agno.models.ollama, LM Studio, vLLM, Together, LiteLLM, and any OpenAI-compatible endpoint. Swap the model= argument; rest stays identical.

Is Agent UI production-ready?

Used in production by multiple teams in 2025-2026. Treat it like an internal ops tool (auth-protect it and keep it behind a VPN) rather than a customer-facing surface.

How does multi-agent performance scale?

agno’s instantiation overhead is low (microseconds), so thousands of concurrent agents are feasible. The bottleneck is LLM API throughput, not agno itself. For extreme scale, use a gateway (LiteLLM, Portkey) in front of your models.
