Phidata — Build & Deploy AI Agents at Scale
Framework for building, running, and managing AI agents at scale. Memory, knowledge, tools, reasoning, and team workflows. Monitoring dashboard included. 39K+ stars.
What it is
Phidata is a Python framework for building, running, and managing AI agents at scale. It provides built-in support for agent memory, knowledge bases, tool integration, reasoning chains, and multi-agent team workflows. The framework includes a monitoring dashboard for tracking agent performance and costs.
Phidata is used by AI engineers building production agent systems, teams that need multi-agent coordination, and developers who want a structured approach to agent development. It supports multiple LLM providers, including OpenAI, Anthropic, and Google.
How it saves time or tokens
Building agents from scratch requires implementing memory management, tool calling, knowledge retrieval, and multi-agent coordination. Phidata packages these as composable abstractions. The Agent class handles conversation history, tool execution, and structured outputs automatically. Team workflows coordinate multiple specialized agents without writing custom orchestration logic.
How to use
- Install Phidata:
```shell
pip install phidata
```
- Create a simple agent:
```python
from phi.agent import Agent
from phi.model.openai import OpenAIChat
from phi.tools.duckduckgo import DuckDuckGo

agent = Agent(
    model=OpenAIChat(id='gpt-4o'),
    tools=[DuckDuckGo()],
    show_tool_calls=True,
    markdown=True,
)
agent.print_response('What are the latest developments in AI agents?')
```
- Add knowledge and memory for persistent context:
```python
from phi.knowledge.pdf import PDFKnowledgeBase
from phi.vectordb.pgvector import PgVector

knowledge = PDFKnowledgeBase(
    path='docs/',
    vector_db=PgVector(table_name='docs', db_url='postgresql://...'),
)
knowledge.load(recreate=False)  # index the PDFs into the vector store (run once)

agent = Agent(knowledge=knowledge, search_knowledge=True)
```
Example
```python
from phi.agent import Agent
from phi.model.openai import OpenAIChat
from phi.tools.duckduckgo import DuckDuckGo
from phi.tools.newspaper4k import Newspaper4k

# Research agent: finds sources and extracts facts
researcher = Agent(
    name='Researcher',
    model=OpenAIChat(id='gpt-4o'),
    tools=[DuckDuckGo()],
    instructions=['Find relevant sources and extract key facts'],
)

# Writer agent: turns research into prose
writer = Agent(
    name='Writer',
    model=OpenAIChat(id='gpt-4o'),
    tools=[Newspaper4k()],
    instructions=['Write clear, well-structured articles from research'],
)

# Team coordination: the lead agent delegates to its members
team = Agent(
    team=[researcher, writer],
    instructions=['Research the topic, then write an article'],
)
team.print_response('Write an article about quantum computing in 2026')
```
Related on TokRepo
- AI Agent Tools -- explore agent frameworks and platforms
- Multi-Agent Frameworks -- deep dive into Phidata multi-agent capabilities
Common pitfalls
- Agent memory is in-memory by default. For persistence across sessions, configure a database backend (PostgreSQL with pgvector is recommended).
- Tool calling increases token usage. Each tool call adds input and output tokens to the conversation. Monitor costs through the Phidata dashboard or provider billing.
- Multi-agent teams can be token-expensive since each agent in the team processes the conversation. Start with single agents and add team coordination only when needed.
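For the persistence pitfall above, a configuration sketch, assuming Phidata's PostgreSQL storage backend (verify the class name and parameters against your installed version's docs):

```python
# Sketch: persisting agent sessions to PostgreSQL so context survives
# restarts. Assumes phidata's PgAgentStorage API; check your version.
from phi.agent import Agent
from phi.storage.agent.postgres import PgAgentStorage

agent = Agent(
    storage=PgAgentStorage(
        table_name='agent_sessions',
        db_url='postgresql://...',  # your connection string
    ),
    add_history_to_messages=True,  # include prior turns in the prompt
)
```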
Frequently Asked Questions
How does Phidata compare to LangChain and CrewAI?
Phidata focuses on agent-first development with built-in memory, knowledge, and team coordination as first-class features. LangChain is a broader toolkit for LLM applications with agent support added later. CrewAI specializes in multi-agent role-based workflows. Phidata sits between them, offering structured agent development with less abstraction than LangChain.
Which LLM providers does Phidata support?
Phidata supports OpenAI, Anthropic Claude, Google Gemini, Groq, Together AI, Ollama, and other OpenAI-compatible APIs. You configure the model provider through the model parameter when creating an Agent.
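Swapping providers is a change to the model parameter only; for example, moving the agent to Claude (the model id shown is illustrative, and the import path should be checked against your Phidata version):

```python
# Sketch: same agent, different provider. The model id is an example;
# use whichever Claude model id your account supports.
from phi.agent import Agent
from phi.model.anthropic import Claude

agent = Agent(model=Claude(id='claude-3-5-sonnet-20241022'))
```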
Can Phidata agents persist memory across sessions?
Yes. Phidata agents can persist memory to PostgreSQL, SQLite, or other database backends. This allows agents to maintain context across sessions, remember user preferences, and build long-term knowledge from interactions.
Does Phidata include monitoring?
Phidata includes a monitoring dashboard that tracks agent runs, token usage, tool calls, and response times. It provides visibility into agent performance and costs. The dashboard is available through the Phidata cloud platform.
Can I define custom tools?
Yes. Define custom tools as Python functions with type annotations. Phidata automatically generates the tool schema from your function signature. The agent can then call your tool during conversations based on the function description and parameter types.
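The mechanism can be illustrated in plain Python. This is a hypothetical sketch of deriving a schema from a function signature, similar in spirit to what agent frameworks do, not Phidata's actual implementation:

```python
# Hypothetical sketch: derive a minimal tool schema from a function's
# type annotations and docstring, as agent frameworks commonly do.
import inspect

def get_weather(city: str, units: str = "metric") -> str:
    """Return the current weather for a city."""
    return f"weather in {city} ({units})"

def tool_schema(fn) -> dict:
    """Build a JSON-schema-like description from the signature."""
    params = {}
    for name, p in inspect.signature(fn).parameters.items():
        params[name] = {
            "type": p.annotation.__name__,
            "required": p.default is inspect.Parameter.empty,
        }
    return {"name": fn.__name__, "description": fn.__doc__, "parameters": params}

schema = tool_schema(get_weather)
```

The model receives the resulting schema, and the framework dispatches the model's tool calls back to the original function.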
Citations (3)
- Phidata GitHub — Framework for building AI agents with memory, knowledge, and team workflows
- Phidata Documentation — Supports multiple LLM providers and tool integration
- Phidata Agents Guide — Multi-agent team coordination
Source & Thanks
Created by Phidata. Licensed under Apache 2.0. phidatahq/phidata — 39,000+ GitHub stars