Introduction
CrewAI is a production-grade Python framework for building multi-agent AI systems. Unlike single-agent approaches, CrewAI lets you define teams of specialized AI agents that collaborate, delegate, and share context to solve complex problems.
Core concepts:
- Agents — Autonomous units with specific roles, goals, backstories, and tool access. Each agent has a persona that guides its behavior.
- Tasks — Defined units of work assigned to agents, with expected outputs and optional dependencies.
- Crews — Teams of agents working together on a set of tasks, with configurable process flows (sequential, hierarchical, or custom).
- Flows — Event-driven orchestration layer for building structured, multi-step workflows that combine crews with conditional logic.
- Tools — 60+ built-in tools (web search, file I/O, code execution) plus easy custom tool creation.
- Memory — Short-term, long-term, and entity memory for agents to learn and improve across interactions.
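The concepts above compose in a few lines of code. A minimal sketch of a one-agent crew (the role, goal, backstory, and task strings are illustrative, and actually running it requires LLM credentials configured for your provider):

```python
from crewai import Agent, Task, Crew, Process

# Agent: a persona whose role, goal, and backstory steer the LLM's behavior
researcher = Agent(
    role="Research Analyst",
    goal="Summarize recent developments on a given topic",
    backstory="A meticulous analyst who always cites sources",
)

# Task: a unit of work with an expected output, assigned to an agent
summary_task = Task(
    description="Summarize the current state of multi-agent AI frameworks.",
    expected_output="A five-bullet summary",
    agent=researcher,
)

# Crew: the team plus a process flow (sequential here; hierarchical is also built in)
crew = Crew(
    agents=[researcher],
    tasks=[summary_task],
    process=Process.sequential,
)

# result = crew.kickoff()  # executes the tasks; needs an LLM API key
```

With more agents and tasks, task dependencies and the chosen process control who works when and who delegates to whom.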
CrewAI is built from scratch rather than as a wrapper around LangChain, and is optimized for speed and minimal resource usage.
FAQ
Q: How is CrewAI different from AutoGen or LangGraph? A: CrewAI focuses on role-based agent collaboration with a simpler API. It's standalone (no LangChain dependency), has built-in memory, and supports both autonomous crews and structured flows.
Q: What LLMs does it support? A: Any LLM via LiteLLM — OpenAI, Anthropic Claude, Google Gemini, local models via Ollama, and 100+ others.
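Model selection follows LiteLLM's "provider/model" naming. A sketch of pointing agents at different backends (the model identifiers are examples, and each provider expects its own API key in the environment):

```python
from crewai import Agent, LLM

# LiteLLM-style identifiers select the provider and model
claude = LLM(model="anthropic/claude-3-5-sonnet-20241022")
local = LLM(model="ollama/llama3.1", base_url="http://localhost:11434")

agent = Agent(
    role="Writer",
    goal="Draft release notes",
    backstory="A concise technical writer",
    llm=claude,  # swap in `local` to run against a local Ollama model
)
```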
Q: Can I use custom tools? A: Yes. Decorate any Python function with @tool or create a class inheriting from BaseTool. CrewAI also integrates with LangChain tools.
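A custom tool can be as small as one decorated function. A sketch using the @tool decorator (the tool name and its logic are illustrative):

```python
from crewai.tools import tool

@tool("Word Counter")
def word_counter(text: str) -> str:
    """Count the words in a piece of text."""  # the docstring becomes the tool's description
    return f"{len(text.split())} words"

# Hand the tool to an agent: Agent(..., tools=[word_counter])
```

The agent decides when to call the tool based on its name and description, so both should state plainly what the tool does.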
Q: Is it production-ready? A: Yes. CrewAI AMP (Agent Monitoring Platform) provides enterprise features: observability, security guardrails, and deployment management.
Works With
- Python 3.10+
- OpenAI / Anthropic / Google / Ollama / 100+ LLMs via LiteLLM
- Built-in tools: SerperDev, Browserbase, Firecrawl, code interpreter
- CrewAI AMP for production monitoring