Scripts · Mar 31, 2026 · 2 min read

Phidata — Build & Deploy AI Agents at Scale

Framework for building, running, and managing AI agents at scale. Memory, knowledge, tools, reasoning, and team workflows. Monitoring dashboard included. 39K+ stars.

TL;DR
Phidata provides a Python framework for building AI agents with memory, knowledge bases, tools, reasoning, and multi-agent team workflows.
§01

What it is

Phidata is a Python framework for building, running, and managing AI agents at scale. It provides built-in support for agent memory, knowledge bases, tool integration, reasoning chains, and multi-agent team workflows. The framework includes a monitoring dashboard for tracking agent performance and costs.

Phidata is used by AI engineers building production agent systems, teams that need multi-agent coordination, and developers who want a structured approach to agent development. It supports multiple LLM providers including OpenAI, Anthropic, and Google.

§02

How it saves time or tokens

Building agents from scratch requires implementing memory management, tool calling, knowledge retrieval, and multi-agent coordination. Phidata packages these as composable abstractions. The Agent class handles conversation history, tool execution, and structured outputs automatically. Team workflows coordinate multiple specialized agents without writing custom orchestration logic.
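For a sense of what the Agent class takes off your plate, here is a deliberately minimal, hand-rolled sketch of just the conversation-history piece. This is illustrative plain Python, not Phidata's internals; the class name and trimming policy are made up for the example:

```python
# Minimal sketch of conversation-history management -- the kind of
# bookkeeping Phidata's Agent class handles automatically.
# Illustrative plain Python, not Phidata's actual implementation.

class ConversationMemory:
    def __init__(self, max_messages: int = 20):
        self.max_messages = max_messages
        self.messages: list[dict] = []

    def add(self, role: str, content: str) -> None:
        """Append a message and trim history to the most recent entries."""
        self.messages.append({'role': role, 'content': content})
        self.messages = self.messages[-self.max_messages:]

    def as_prompt(self) -> list[dict]:
        """Return the history in chat-completion message format."""
        return list(self.messages)


memory = ConversationMemory(max_messages=3)
memory.add('user', 'What is an AI agent?')
memory.add('assistant', 'An LLM that can call tools and act on results.')
memory.add('user', 'Give me an example.')
memory.add('assistant', 'A research agent that searches the web.')
print(len(memory.as_prompt()))  # 3 -- the oldest message was trimmed
```

Multiply this by tool schemas, retries, retrieval, and structured outputs, and the value of packaging it as composable abstractions becomes clear.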

§03

How to use

  1. Install Phidata:
pip install phidata
  2. Create a simple agent:
from phi.agent import Agent
from phi.model.openai import OpenAIChat
from phi.tools.duckduckgo import DuckDuckGo

agent = Agent(
    model=OpenAIChat(id='gpt-4o'),
    tools=[DuckDuckGo()],
    show_tool_calls=True,
    markdown=True,
)
agent.print_response('What are the latest developments in AI agents?')
  3. Add knowledge and memory for persistent context:
from phi.knowledge.pdf import PDFKnowledgeBase
from phi.vectordb.pgvector import PgVector

knowledge = PDFKnowledgeBase(
    path='docs/',
    vector_db=PgVector(table_name='docs', db_url='postgresql://...'),
)
knowledge.load(recreate=False)  # index the PDFs into the vector db on first run

agent = Agent(
    model=OpenAIChat(id='gpt-4o'),
    knowledge=knowledge,
    search_knowledge=True,
)
§04

Example

from phi.agent import Agent
from phi.model.openai import OpenAIChat
from phi.tools.duckduckgo import DuckDuckGo
from phi.tools.newspaper4k import Newspaper4k

# Research agent team
researcher = Agent(
    name='Researcher',
    model=OpenAIChat(id='gpt-4o'),
    tools=[DuckDuckGo()],
    instructions=['Find relevant sources and extract key facts'],
)

writer = Agent(
    name='Writer',
    model=OpenAIChat(id='gpt-4o'),
    tools=[Newspaper4k()],
    instructions=['Write clear, well-structured articles from research'],
)

# Team coordination
team = Agent(
    team=[researcher, writer],
    instructions=['Research the topic, then write an article'],
)
team.print_response('Write an article about quantum computing in 2026')
§05


Common pitfalls

  • Agent memory is in-memory by default. For persistence across sessions, configure a database backend (PostgreSQL with pgvector is recommended).
  • Tool calling increases token usage. Each tool call adds input and output tokens to the conversation. Monitor costs through the Phidata dashboard or provider billing.
  • Multi-agent teams can be token-expensive since each agent in the team processes the conversation. Start with single agents and add team coordination only when needed.
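The last pitfall is easy to quantify. With hypothetical per-run numbers (these are assumptions for illustration, not Phidata measurements), a back-of-envelope estimate of how team size multiplies token usage:

```python
# Back-of-envelope token estimate for a multi-agent team.
# All numbers are hypothetical assumptions, not Phidata measurements.

prompt_tokens = 1_500   # shared context each agent re-reads per run
output_tokens = 600     # average response length per agent

def run_tokens(num_agents: int) -> int:
    """Each agent in the team processes the conversation once per run."""
    return num_agents * (prompt_tokens + output_tokens)

print(run_tokens(1))  # 2100
print(run_tokens(3))  # 6300 -- a 3-agent team roughly triples the cost
```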

Frequently Asked Questions

How does Phidata compare to LangChain and CrewAI?

Phidata focuses on agent-first development with built-in memory, knowledge, and team coordination as first-class features. LangChain is a broader toolkit for LLM applications with agent support added later. CrewAI specializes in multi-agent role-based workflows. Phidata sits between them, offering structured agent development with less abstraction than LangChain.

What LLM providers does Phidata support?

Phidata supports OpenAI, Anthropic Claude, Google Gemini, Groq, Together AI, Ollama, and other OpenAI-compatible APIs. You configure the model provider through the model parameter when creating an Agent.

Does Phidata support persistent agent memory?

Yes. Phidata agents can persist memory to PostgreSQL, SQLite, or other database backends. This allows agents to maintain context across sessions, remember user preferences, and build long-term knowledge from interactions.
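Phidata's storage backends implement this for you. As a rough illustration of the underlying idea only (stdlib sqlite3, not Phidata's API), persisting session messages keyed by a session ID looks roughly like:

```python
# Sketch of session-scoped message persistence using stdlib sqlite3.
# Phidata's storage classes implement this pattern; this only shows
# the idea of keying conversation history by session_id.
import sqlite3

conn = sqlite3.connect(':memory:')  # use a file path for real persistence
conn.execute(
    'CREATE TABLE IF NOT EXISTS messages '
    '(session_id TEXT, role TEXT, content TEXT)'
)

def save(session_id: str, role: str, content: str) -> None:
    conn.execute('INSERT INTO messages VALUES (?, ?, ?)',
                 (session_id, role, content))

def history(session_id: str) -> list[tuple]:
    return conn.execute(
        'SELECT role, content FROM messages WHERE session_id = ?',
        (session_id,)
    ).fetchall()

save('user-42', 'user', 'Remember that I prefer short answers.')
save('user-42', 'assistant', 'Noted.')
print(history('user-42'))
```

With a file-backed database, the same `history()` lookup works across process restarts, which is what lets an agent resume a session later.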

What is the Phidata monitoring dashboard?

Phidata includes a monitoring dashboard that tracks agent runs, token usage, tool calls, and response times. It provides visibility into agent performance and costs. The dashboard is available through the Phidata cloud platform.

Can Phidata agents use custom tools?

Yes. Define custom tools as Python functions with type annotations. Phidata automatically generates the tool schema from your function signature. The agent can then call your tool during conversations based on the function description and parameter types.
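For example, a custom tool can be an ordinary typed Python function with a docstring. The function below (`get_word_count`) is a made-up example, and the Agent wiring is sketched only in comments, following the quick-start imports above:

```python
# A custom tool is just a typed Python function; Phidata derives the
# tool schema from the signature and docstring. This function is a
# hypothetical example, not part of Phidata.

def get_word_count(text: str) -> int:
    """Count the number of whitespace-separated words in a text."""
    return len(text.split())

# Hedged sketch of the wiring, per the quick-start above:
# from phi.agent import Agent
# from phi.model.openai import OpenAIChat
#
# agent = Agent(model=OpenAIChat(id='gpt-4o'), tools=[get_word_count])
# agent.print_response('How many words are in "to be or not to be"?')

print(get_word_count('to be or not to be'))  # 6
```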


Source & Thanks

Created by Phidata. Licensed under Apache 2.0. phidatahq/phidata — 39,000+ GitHub stars
