CrewAI is one of the most popular multi-agent frameworks in 2026, and for good reason: it makes it easy to define agents with roles, give them tools, and coordinate them on complex tasks. It is also one of the most misused frameworks — people treat it like a magic box, skip understanding what is actually happening, and then wonder why their agents loop forever or produce nonsense.
This tutorial builds a real multi-agent research system from scratch. We will go slowly enough to understand each piece, then point you at what to learn next.
## What You Are Building
A two-agent research system:
- Researcher agent: searches the web for information on a topic
- Writer agent: takes the research and writes a structured summary
This is the simplest meaningful multi-agent pattern. If you understand this, you can extend it to five agents, ten tools, and production-grade workflows.
## Prerequisites

- Python 3.10 or higher
- Basic Python knowledge (functions, classes, decorators)
- OpenAI API key (or a compatible provider)

```shell
pip install crewai crewai-tools
```
## Understanding the Core Concepts
Before writing code, understand these four concepts. CrewAI without this mental model is just copy-pasting.
**Agent:** An LLM with a role, goal, and backstory. The role and goal shape how the model behaves. Think of it as a job description for your AI.

**Task:** A specific unit of work with a description, an expected output, and an assigned agent. Tasks are what agents actually do.

**Tool:** A function an agent can call to interact with the world: search the web, read files, query APIs. Agents without tools are just text generators.

**Crew:** The orchestrator that runs agents through tasks in sequence or in parallel, passing outputs between them.
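If it helps to see the mental model in code, here is a toy sketch in plain Python (no CrewAI imports; `ToyAgent`, `ToyTask`, and `run_sequential` are illustrative names, not the real API). It shows the essential shape of a crew: a loop that runs each task with its agent and carries earlier outputs forward as context.

```python
from dataclasses import dataclass, field

@dataclass
class ToyAgent:
    role: str
    goal: str

@dataclass
class ToyTask:
    description: str
    agent: ToyAgent
    context: list = field(default_factory=list)  # earlier tasks whose output this task sees

def run_sequential(tasks):
    """Toy version of sequential processing: run tasks in order, store outputs."""
    outputs = {}
    for task in tasks:
        prior = " | ".join(outputs[id(t)] for t in task.context)
        # A real crew would call the LLM here; we just record what it would see.
        result = f"[{task.agent.role}] {task.description}"
        if prior:
            result += f" (given: {prior})"
        outputs[id(task)] = result
    return [outputs[id(t)] for t in tasks]
```

The real framework adds prompting, tool use, and retries on top, but the data flow is this simple: task order plus explicit context links.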
## Building the Research Crew

### Step 1: Define Your Tools
```python
from crewai_tools import SerperDevTool

# SerperDev gives agents web search capability.
# Sign up at serper.dev for a free API key.
search_tool = SerperDevTool()
```
### Step 2: Define the Agents
```python
from crewai import Agent

researcher = Agent(
    role="Senior Research Analyst",
    goal="Find accurate, current information on the given topic",
    backstory="""You are an experienced research analyst who finds
    primary sources, cross-references claims, and surfaces the most
    relevant information. You cite your sources and flag uncertainty.""",
    tools=[search_tool],
    verbose=True,  # Set to False in production
    max_iter=3,    # Critical: prevents infinite loops
)

writer = Agent(
    role="Technical Writer",
    goal="Transform research findings into clear, structured summaries",
    backstory="""You are a technical writer who takes raw research
    and turns it into well-organized, accurate summaries. You preserve
    the key facts, cite sources, and write for a developer audience.""",
    verbose=True,
    max_iter=2,
)
```
The `max_iter` parameter is important. Without it, agents can loop indefinitely when they cannot complete a task. Set it explicitly.
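The mechanics behind an iteration limit are just a bounded loop. A toy sketch in plain Python (illustrative names, not CrewAI internals) of why an uncapped agent loop is dangerous and a capped one is not:

```python
def llm_step(state):
    """Stand-in for one agent reasoning step; this one never finds an answer."""
    state["attempts"] += 1
    return None  # None means "not finished, try again"

def run_agent(max_iter):
    """Bounded agent loop: without the cap, a stuck agent would spin forever."""
    state = {"attempts": 0}
    for _ in range(max_iter):
        result = llm_step(state)
        if result is not None:
            return result
    # Hitting the cap returns a best-effort signal instead of looping forever.
    return f"stopped after {state['attempts']} iterations"
```

With a stuck step like the one above, `run_agent(3)` stops after three attempts; without the `for` bound it would never return.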
### Step 3: Define the Tasks
```python
from crewai import Task

research_task = Task(
    description="""Research the following topic thoroughly:
    {topic}

    Find at least 3 reliable sources. Note key facts, recent developments,
    and any debates or controversies in the field.""",
    expected_output="""A structured research report with:
    - Key findings (bullet points)
    - Sources (URLs or citations)
    - Notable debates or open questions""",
    agent=researcher,
)

writing_task = Task(
    description="""Using the research provided, write a comprehensive summary
    of {topic} suitable for software developers new to the subject.
    Focus on practical implications and concrete examples.""",
    expected_output="""A 500-word summary with:
    - Clear explanation of the topic
    - Why it matters for developers
    - 2-3 concrete examples or use cases
    - What to explore next""",
    agent=writer,
    context=[research_task],  # This tells the writer to use the research output
)
```
The `context` parameter on the writing task is what creates the pipeline. Without it, the writer agent has no access to the researcher's output.
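Conceptually, `context` means the researcher's finished output gets prepended to the writer's prompt. A minimal sketch of that wiring in plain Python (`build_prompt` and the section labels are illustrative, not CrewAI's actual prompt format):

```python
def build_prompt(description, context_outputs):
    """Assemble a task prompt with upstream task outputs prepended."""
    parts = [f"Context {i}:\n{out}" for i, out in enumerate(context_outputs, 1)]
    parts.append(f"Task:\n{description}")
    return "\n\n".join(parts)

# The writer's prompt starts with the research, then states its own task.
prompt = build_prompt(
    "Write a developer-focused summary.",
    ["Finding A: ...\nFinding B: ..."],
)
```

If the context list is empty, the writer sees only its own task description, which is exactly the disconnected-pipeline failure described above.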
### Step 4: Create and Run the Crew
```python
from crewai import Crew, Process

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    process=Process.sequential,  # Tasks run in order
    verbose=True,
)

result = crew.kickoff(inputs={"topic": "vector databases in 2026"})
print(result.raw)
```
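The `inputs` dict fills the `{topic}` placeholders in every task description before the run starts. The effect is the same as Python's own string formatting, which you can see standalone:

```python
# The same placeholder syntax used in task descriptions above.
description = "Research the following topic thoroughly: {topic}"
inputs = {"topic": "vector databases in 2026"}

filled = description.format(**inputs)
print(filled)
# → Research the following topic thoroughly: vector databases in 2026
```

This is why a typo in a placeholder name (say, `{Topic}` vs. the `topic` key) silently breaks the substitution: the keys must match the placeholder names exactly.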
## Complete Working Example
```python
import os

from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool

# In real projects, load these from the environment or a .env file
# rather than hardcoding them in source.
os.environ["OPENAI_API_KEY"] = "your-key-here"
os.environ["SERPER_API_KEY"] = "your-key-here"

search_tool = SerperDevTool()

researcher = Agent(
    role="Senior Research Analyst",
    goal="Find accurate, current information on the given topic",
    backstory="You are an experienced analyst who finds primary sources and cross-references claims.",
    tools=[search_tool],
    verbose=False,
    max_iter=3,
)

writer = Agent(
    role="Technical Writer",
    goal="Transform research findings into clear, structured summaries",
    backstory="You turn raw research into well-organized, accurate summaries for developers.",
    verbose=False,
    max_iter=2,
)

research_task = Task(
    description="Research {topic}. Find at least 3 sources. Note key facts and recent developments.",
    expected_output="Structured research report with key findings, sources, and open questions.",
    agent=researcher,
)

writing_task = Task(
    description="Using the research, write a developer-focused summary of {topic}.",
    expected_output="500-word summary with explanation, developer implications, and examples.",
    agent=writer,
    context=[research_task],
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    process=Process.sequential,
)

result = crew.kickoff(inputs={"topic": "CrewAI framework internals"})
print(result.raw)
```
Run this and you should see the researcher search the web, compile findings, and the writer produce a structured summary. First run typically takes 30-60 seconds.
## Common Mistakes and How to Fix Them
**Mistake 1: Not setting `max_iter`.** Agents without iteration limits can loop indefinitely when they get stuck. Always set `max_iter` on every agent.

**Mistake 2: Tasks with no `expected_output`.** The `expected_output` field is not decoration: it tells the agent what success looks like. Vague or missing expected outputs produce inconsistent results.

**Mistake 3: Not using `context` between tasks.** If your second agent does not have the first task in its `context` list, it does not have access to the first agent's output. This is the most common reason pipelines produce disconnected results.

**Mistake 4: Leaving `verbose=True` on in production.** It is useful for debugging but prints a lot. Turn it off before deploying anything.

**Mistake 5: Assuming agents are stateless.** Agents in CrewAI maintain conversation history within a run. This is usually what you want, but it means early context can influence later decisions in unexpected ways.
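Several of these mistakes share one structural fix: never accept a single run's output on faith. A small hedged wrapper in plain Python (`run_crew` and `validate` are placeholders for your own callables, not CrewAI API) that re-runs a crew a bounded number of times until its output passes a check:

```python
def run_with_validation(run_crew, validate, max_attempts=2):
    """Re-run a crew a bounded number of times until its output validates."""
    last = None
    for _ in range(max_attempts):
        last = run_crew()          # e.g. lambda: crew.kickoff(inputs=...).raw
        if validate(last):         # e.g. lambda out: "Sources" in out
            return last
    raise ValueError(f"output failed validation after {max_attempts} attempts")
```

Note the bounded retry: just like `max_iter` on agents, an unbounded retry loop around a flaky crew is a runaway cost waiting to happen.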
## What to Learn Next
This example uses sequential processing — tasks run one after another. CrewAI also supports hierarchical processing (a manager agent coordinates workers) and parallel execution.
For production systems, you will need to think about:
- Error handling when agents fail or time out
- Cost control (verbose agents with web search get expensive quickly)
- Output validation (agents do not always produce the expected format)
- Observability (logging what each agent actually did)
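Output validation in particular is cheap to add. A sketch (plain Python; the word threshold and keyword list are assumptions about your own report format, not anything CrewAI enforces) that checks a final summary for the things the writing task asked for:

```python
import re

def validate_summary(text, min_words=400, required_keywords=("example",)):
    """Cheap structural checks on a crew's final output; returns a list of problems."""
    problems = []
    word_count = len(re.findall(r"\w+", text))
    if word_count < min_words:
        problems.append(f"too short: {word_count} words (wanted {min_words})")
    for kw in required_keywords:
        if kw.lower() not in text.lower():
            problems.append(f"missing required keyword: {kw}")
    return problems  # empty list means the output passed
```

Run this on `result.raw` before handing the output to anything downstream; even checks this crude catch a surprising share of malformed agent output.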
The Phase 5 (Frameworks) and Phase 6 (Multi-Agent Systems) curriculum at MindloomHQ's Agentic AI course goes deep on these topics — structured lesson sequences covering CrewAI, LangGraph, and production multi-agent patterns.
CrewAI is genuinely useful when you understand what it is doing. Treat it as a coordination layer, not magic, and you will build systems that actually work.