Key Features
- Full MCP support: Tools, resources, prompts, notifications, OAuth, and sampling
- Composable patterns: Orchestrator-workers, parallel map-reduce, evaluator-optimizer loops, intent routers
- Multi-provider: OpenAI, Anthropic, Google, Azure, Bedrock with unified interface
- Durable execution: Temporal backend for reliable long-running workflows
- Agent-as-MCP-server: Expose your agents as MCP servers for other agents to call
- Structured logging: Built-in token accounting and observability
- Cloud deployment: Deploy agents to production with scaling support
Example

```python
import asyncio

from mcp_agent.app import MCPApp
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm import RequestParams

app = MCPApp(name="my-agent")

async def main():
    async with app.run() as ctx:
        agent = Agent(
            name="researcher",
            instruction="You are a research assistant.",
            server_names=["fetch", "filesystem"],
        )
        async with agent:
            llm = await agent.attach_llm("anthropic")
            result = await llm.generate_str(
                "Summarize the latest AI news",
                request_params=RequestParams(model="claude-sonnet-4-5-20250514"),
            )
            print(result)

asyncio.run(main())
```

FAQ
Q: What is mcp-agent?
A: mcp-agent is a Python framework (8.2K+ GitHub stars) for building AI agents on the Model Context Protocol. It provides composable workflow patterns such as orchestrator-workers, parallel map-reduce, and evaluator-optimizer, and supports five LLM providers.
Q: How do I install mcp-agent?
A: Run uv add "mcp-agent[anthropic]" or pip install "mcp-agent[anthropic]". Add provider extras like [openai, google, azure, bedrock] as needed.
Q: Which LLM providers does mcp-agent support?
A: OpenAI (GPT-4o default), Anthropic Claude, Google, Azure OpenAI, and AWS Bedrock, all through a unified interface.
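A "unified interface" across providers generally means one shared call signature with per-provider adapters behind it. A minimal plain-Python sketch of that design (illustrative only; these are not mcp-agent's actual classes):

```python
from typing import Protocol

class AugmentedLLM(Protocol):
    # One call signature shared by every provider adapter.
    def generate_str(self, prompt: str) -> str: ...

class OpenAILLM:
    def generate_str(self, prompt: str) -> str:
        return f"[openai] {prompt}"  # a real adapter would call the OpenAI API

class AnthropicLLM:
    def generate_str(self, prompt: str) -> str:
        return f"[anthropic] {prompt}"  # a real adapter would call Anthropic

PROVIDERS: dict[str, type] = {"openai": OpenAILLM, "anthropic": AnthropicLLM}

def attach_llm(name: str) -> AugmentedLLM:
    # Resolve a provider by name, mirroring attach_llm("anthropic") in the example.
    return PROVIDERS[name]()
```

Because agent code only depends on the shared `generate_str` signature, switching providers is a one-line change to the name passed in.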