Scripts · Apr 6, 2026 · 2 min read

CrewAI — Multi-Agent Orchestration Framework

Python framework for orchestrating role-playing AI agents that collaborate on complex tasks. Define agents with roles, goals, and tools, then let them work together autonomously. 25,000+ stars.

TL;DR
CrewAI lets you define AI agents with roles, goals, and tools, then orchestrates their collaboration on complex tasks.
§01

What it is

CrewAI is a Python framework for building multi-agent systems where each agent has a defined role, goal, and set of tools. Agents collaborate on complex tasks through a crew structure that manages delegation, task ordering, and result synthesis. The framework supports sequential and hierarchical process types.

CrewAI targets developers building AI systems that require multiple specialized agents working together: research teams, content pipelines, data analysis workflows, or any task that benefits from division of labor among AI agents.

§02

How it saves time or tokens

This workflow gets you from pip install to a working crew definition. Instead of building agent coordination logic from scratch, you define agents declaratively and let CrewAI handle task delegation, context passing, and result aggregation. The role-based model makes decomposing complex tasks into specialized agents intuitive.

§03

How to use

  1. Install CrewAI:
pip install crewai crewai-tools
  2. Define agents and tasks:
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool

search_tool = SerperDevTool()  # web search tool; requires a SERPER_API_KEY env var

researcher = Agent(
    role='Senior Researcher',
    goal='Find the latest AI trends',
    backstory='Expert in AI research with 10 years of experience',
    tools=[search_tool]
)

writer = Agent(
    role='Technical Writer',
    goal='Create clear, engaging content',
    backstory='Experienced tech writer specializing in AI topics'
)

research_task = Task(
    description='Research the top 5 AI trends for 2026',
    agent=researcher,
    expected_output='Detailed report with sources'
)

writing_task = Task(
    description='Write a blog post based on the research',
    agent=writer,
    expected_output='1000-word blog post'
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task]
)

result = crew.kickoff()
  3. Run the crew and collect results.
§04

Example

from crewai import Agent, Task, Crew, Process

# Hierarchical process: a manager LLM delegates tasks between agents
analyst = Agent(
    role='Data Analyst',
    goal='Analyze data patterns',
    backstory='Statistician focused on sales analytics'
)
strategist = Agent(
    role='Strategy Consultant',
    goal='Develop recommendations',
    backstory='Consultant who turns analysis into growth plans'
)

crew = Crew(
    agents=[analyst, strategist],
    tasks=[
        Task(
            description='Analyze Q1 sales data',
            expected_output='Summary of key sales patterns',
            agent=analyst
        ),
        Task(
            description='Create growth strategy',
            expected_output='Prioritized list of growth recommendations',
            agent=strategist
        )
    ],
    process=Process.hierarchical,
    manager_llm='gpt-4o'
)

result = crew.kickoff()
print(result)
§05


Common pitfalls

  • Each agent makes separate LLM API calls, so a crew with 5 agents on a 3-task pipeline can make 15+ API calls. Monitor costs when using expensive models.
  • Agent backstories significantly influence output quality. Vague backstories produce generic results; specific backstories produce focused outputs.
  • Hierarchical process requires a manager LLM, which adds another API call per task delegation. Use sequential process for simpler workflows.

Frequently Asked Questions

What is the difference between sequential and hierarchical processes?

Sequential process executes tasks in order, passing each result to the next task. Hierarchical process adds a manager agent that delegates tasks dynamically and can reassign work based on intermediate results.
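The difference can be sketched in a minimal crew definition (the agent and task below are placeholders for illustration, not from the original):

```python
from crewai import Agent, Crew, Process, Task

# Hypothetical agent and task for illustration
worker = Agent(role='Analyst', goal='Summarize data',
               backstory='Detail-oriented analyst')
task = Task(description='Summarize the dataset',
            expected_output='One-paragraph summary', agent=worker)

# Sequential: tasks run in declaration order (the default)
seq_crew = Crew(agents=[worker], tasks=[task],
                process=Process.sequential)

# Hierarchical: a manager LLM delegates work and reviews results,
# so manager_llm (or a manager_agent) must be provided
hier_crew = Crew(agents=[worker], tasks=[task],
                 process=Process.hierarchical, manager_llm='gpt-4o')
```

Sequential is cheaper and more predictable; hierarchical buys dynamic delegation at the cost of extra manager calls.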

What LLM providers does CrewAI support?

CrewAI supports OpenAI, Anthropic, Google Gemini, and any provider via LiteLLM. You can set the model per agent or use a default for the entire crew.
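A sketch of per-agent model selection, assuming a recent crewai release that exposes the LLM helper class (model strings follow LiteLLM's provider/model naming):

```python
from crewai import LLM, Agent

# Explicit model config; the provider prefix routes through LiteLLM
claude = LLM(model='anthropic/claude-3-5-sonnet-20241022', temperature=0.2)

writer = Agent(
    role='Technical Writer',
    goal='Create clear, engaging content',
    backstory='Experienced tech writer',
    llm=claude  # per-agent override; omit to use the crew-wide default
)
```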

Can agents use custom tools?

Yes. CrewAI supports custom tool creation via the @tool decorator or by subclassing BaseTool. You can also use LangChain tools directly. The crewai-tools package provides pre-built tools for web search, file reading, and more.
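A minimal custom tool via the @tool decorator (in older releases the decorator lives in crewai_tools rather than crewai.tools; the word-count tool itself is a made-up example):

```python
from crewai.tools import tool

@tool('Word Counter')
def word_counter(text: str) -> str:
    """Count the number of words in a piece of text."""
    return f'{len(text.split())} words'

# Attach it to an agent like any built-in tool:
# editor = Agent(role='Editor', goal='...', backstory='...',
#                tools=[word_counter])
```

The docstring matters: agents read it to decide when the tool applies.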

How does delegation work between agents?

When allow_delegation is enabled, an agent can pass subtasks to other agents in the crew. The delegating agent specifies what it needs, the target agent executes, and the result flows back. This enables dynamic collaboration.
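Enabling delegation is a single flag on the agent, sketched below (the role and goal are illustrative):

```python
from crewai import Agent

# allow_delegation lets this agent hand subtasks to crew-mates
lead = Agent(
    role='Research Lead',
    goal='Coordinate research and compile findings',
    backstory='Senior researcher who knows when to delegate',
    allow_delegation=True  # defaults to False in recent versions
)
```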

Can I use CrewAI with local models?

Yes. Use Ollama or any OpenAI-compatible local model server. Set the model name in your agent configuration to point to the local endpoint. Performance depends on the local model's capability.
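A sketch of pointing an agent at a local Ollama server, assuming llama3.1 has been pulled and the server runs on its default port:

```python
from crewai import LLM, Agent

# Local model via Ollama; the base_url is Ollama's default endpoint
local_llm = LLM(model='ollama/llama3.1',
                base_url='http://localhost:11434')

agent = Agent(
    role='Summarizer',
    goal='Summarize documents locally',
    backstory='Runs entirely on local hardware',
    llm=local_llm
)
```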


Source & Thanks

Created by CrewAI. Licensed under MIT.

crewAI — ⭐ 25,000+

Thanks to the CrewAI team for making multi-agent orchestration accessible.
