CrewAI Integration for Multi-Agent Systems
CrewAI is a framework for building multi-agent systems in which each agent has a role, a goal, and a set of tools. The "crew" concept makes the architecture intuitive: agents work as specialists on a team, and tasks are delegated by role. Unlike LangGraph, which requires explicit descriptions of graph transitions, CrewAI takes a declarative approach with automatic flow management.
Basic Structure: Agents and Tasks
```python
from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool, ScrapeWebsiteTool, FileWriterTool
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o", temperature=0.1)

# Define agents
researcher = Agent(
    role="Senior Research Analyst",
    goal="Find current and accurate information on the given topic",
    backstory="""You are a research analyst with 10 years of experience.
    You specialize in the technology sector.
    Always verify sources and point out contradictions.""",
    tools=[SerperDevTool(), ScrapeWebsiteTool()],
    llm=llm,
    verbose=True,
    max_iter=5,  # maximum iterations the agent may take per task
    memory=True,
)

writer = Agent(
    role="Content Strategist",
    goal="Create a structured analytical report",
    backstory="Experienced technical writer specializing in business analytics.",
    tools=[FileWriterTool()],
    llm=llm,
    verbose=True,
)

# Define tasks
research_task = Task(
    description="""Research the {topic} market for 2025.
    Cover: key players, market size, trends, forecasts.
    Find at least 5 current sources.""",
    expected_output="Structured research data with sources",
    agent=researcher,
    async_execution=False,
)

write_task = Task(
    description="""Based on the provided research, create an analytical report.
    Format: introduction, key findings (table), trends, conclusions.
    Length: 1500–2000 words.""",
    expected_output="Finished analytical report in Markdown format",
    agent=writer,
    context=[research_task],  # uses research_task output as context
    output_file="report.md",
)

# Create crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,  # tasks run one after another
    verbose=True,
)

result = crew.kickoff(inputs={"topic": "LLM solutions market for the enterprise sector"})
```
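The `{topic}` placeholder in the task description is filled from the `inputs` dict passed to `kickoff()`. Conceptually, the interpolation behaves like Python's `str.format` — a simplified sketch of the idea, not CrewAI's actual templating code:

```python
# Simplified sketch of how kickoff() inputs reach task descriptions.
# CrewAI's real implementation differs; this only illustrates the idea.
description = """Research the {topic} market for 2025.
Cover: key players, market size, trends, forecasts."""

inputs = {"topic": "enterprise LLM solutions"}

rendered = description.format(**inputs)
print(rendered.splitlines()[0])
# → Research the enterprise LLM solutions market for 2025.
```

Any placeholder used in a `description` or `expected_output` must have a matching key in `inputs`, or the kickoff will fail at interpolation time.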
Hierarchical Process: Manager Coordinates Agents
```python
# analyst and qa_reviewer are additional Agent instances defined as above
manager_llm = ChatOpenAI(model="gpt-4o", temperature=0)

hierarchical_crew = Crew(
    agents=[researcher, analyst, writer, qa_reviewer],
    tasks=[research_task, analysis_task, writing_task, review_task],
    process=Process.hierarchical,
    manager_llm=manager_llm,  # the manager LLM makes delegation decisions
    verbose=True,
)
```
In hierarchical mode, the manager automatically decides which agent to delegate each task to and whether a result needs reworking.
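To make the delegation idea concrete, here is a toy, LLM-free sketch of the decision the manager makes: match each task to the agent whose role description best fits it. This is a hypothetical illustration, not CrewAI API — the real manager reasons over full role/goal/backstory text with an LLM:

```python
def pick_agent(task_keywords: set[str], agents: dict[str, set[str]]) -> str:
    """Toy delegation: choose the role with the largest keyword overlap."""
    return max(agents, key=lambda role: len(agents[role] & task_keywords))

# Hypothetical keyword profiles standing in for role/goal/backstory text
agents = {
    "researcher": {"search", "sources", "news"},
    "writer": {"report", "markdown", "summary"},
    "qa_reviewer": {"review", "quality", "errors"},
}

print(pick_agent({"report", "summary"}, agents))  # → writer
```

The practical takeaway is the same for the real manager: the more distinct and specific your role and goal texts are, the more reliably tasks land on the right agent.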
CrewAI Flows: Imperative Control
```python
from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel


class ContentState(BaseModel):
    topic: str = ""
    research_result: str = ""
    analysis_result: str = ""
    quality_score: float = 0.0
    final_content: str = ""


class ContentCreationFlow(Flow[ContentState]):
    @start()
    def initialize(self):
        print(f"Starting work on topic: {self.state.topic}")

    @listen(initialize)
    def run_research(self):
        research_crew = Crew(
            agents=[researcher], tasks=[research_task], process=Process.sequential
        )
        result = research_crew.kickoff(inputs={"topic": self.state.topic})
        self.state.research_result = result.raw

    @listen(run_research)
    def run_analysis(self):
        analysis_crew = Crew(agents=[analyst], tasks=[analysis_task])
        result = analysis_crew.kickoff(inputs={"research": self.state.research_result})
        self.state.analysis_result = result.raw

    @router(run_analysis)
    def check_quality(self):
        # Evaluate quality
        score = evaluate_quality(self.state.analysis_result)
        self.state.quality_score = score
        if score >= 0.8:
            return "write_content"
        return "improve_analysis"

    @listen("improve_analysis")
    def improve_analysis(self):
        # Re-analyze with additional context
        pass

    @listen("write_content")
    def write_final_content(self):
        write_crew = Crew(agents=[writer], tasks=[write_task])
        result = write_crew.kickoff(inputs={"analysis": self.state.analysis_result})
        self.state.final_content = result.raw


flow = ContentCreationFlow()
flow.kickoff(inputs={"topic": "AI Application in Logistics"})
```
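The `evaluate_quality` function called in `check_quality` is left undefined above; it is application-specific. A minimal heuristic placeholder might look like the sketch below — in practice you would more likely use an LLM-as-judge or a scoring rubric:

```python
def evaluate_quality(text: str) -> float:
    """Toy scorer: reward length and the presence of expected report sections.
    A placeholder assumption, not part of CrewAI."""
    score = 0.0
    if len(text.split()) >= 200:  # enough substance to be a real analysis
        score += 0.4
    for marker in ("trend", "source", "forecast"):
        if marker in text.lower():
            score += 0.2
    return min(score, 1.0)

print(evaluate_quality("short draft"))  # → 0.0
```

Whatever the scorer, keep it deterministic enough that the router's `score >= 0.8` threshold does not flip-flop between runs on identical input.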
Custom Tools
```python
from crewai.tools import BaseTool
from pydantic import BaseModel, Field


class DatabaseQueryInput(BaseModel):
    sql_query: str = Field(description="SQL query to execute")
    database: str = Field(description="Database name", default="analytics")


class DatabaseQueryTool(BaseTool):
    name: str = "query_database"
    description: str = "Execute a SQL query against the analytics database"
    args_schema: type[BaseModel] = DatabaseQueryInput

    def _run(self, sql_query: str, database: str = "analytics") -> str:
        # Validation: allow read-only SELECT queries only
        if not sql_query.strip().upper().startswith("SELECT"):
            return "Error: only SELECT queries are allowed"
        result = db.execute(sql_query, database=database)  # db: your database client
        return result.to_json()


db_tool = DatabaseQueryTool()
analyst.tools.append(db_tool)
```
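Note that the prefix check above is easy to bypass (e.g. `SELECT 1; DROP TABLE users`). A slightly stricter guard is sketched below — `is_safe_select` and its keyword list are illustrative assumptions, and no string filter is a substitute for running the tool against read-only database credentials:

```python
import re

# Hypothetical deny-list of statement-level write/DDL keywords
FORBIDDEN = re.compile(
    r"\b(insert|update|delete|drop|alter|create|grant|truncate)\b", re.I
)

def is_safe_select(sql: str) -> bool:
    """Reject anything that is not a single SELECT statement."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:  # multiple statements chained together
        return False
    if not stripped.upper().startswith("SELECT"):
        return False
    return not FORBIDDEN.search(stripped)

print(is_safe_select("SELECT * FROM sales"))         # → True
print(is_safe_select("SELECT 1; DROP TABLE users"))  # → False
```

Even this misses tricks like comments or dialect-specific writable CTEs, which is why the real defense belongs in database permissions, not the tool.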
Practical Case Study: Competitive Analysis Automation
Task: quarterly competitive analysis for strategy department. Previously took 3 weeks with 2 analysts.
CrewAI Team:
- Scout Agent — monitor competitors' news, press releases, job postings (SerperDevTool + ScrapeWebsiteTool)
- Financial Agent — analyze public financial reports (custom SEC/IFRS parsing tool)
- Product Agent — analyze product changes, changelog, app store reviews
- Strategy Analyst — synthesize data, identify strategic patterns
- Report Writer — final report with executive summary
Configuration: Process.hierarchical with manager_llm, tasks execute partially in parallel (Scout, Financial, Product — simultaneously; Strategy Analyst waits for all three).
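The scheduling shape here — three collectors running concurrently, one synthesizer waiting on all of them — is classic fan-out/fan-in. Stripped of CrewAI specifics, the pattern looks like this (the `collect` coroutine is a stand-in for the real scraping and parsing work):

```python
import asyncio

async def collect(name: str) -> str:
    await asyncio.sleep(0.01)  # stands in for scraping / parsing work
    return f"{name} data"

async def quarterly_review() -> str:
    # Fan-out: scout, financial, and product collection run concurrently
    results = await asyncio.gather(
        collect("scout"), collect("financial"), collect("product")
    )
    # Fan-in: the strategy analyst synthesizes all three results
    return " + ".join(results)

print(asyncio.run(quarterly_review()))
# → scout data + financial data + product data
```

In CrewAI terms, this corresponds to setting `async_execution=True` on the three collection tasks and listing all three in the synthesis task's `context`, so it starts only after they complete.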
Results per Quarter:
- Report preparation time: 3 weeks → 4 hours autonomous + 2 hours analyst review
- Competitor coverage: 5 → 12 companies
- Analysis depth: 23 aspects vs 11 previously
- Missed significant events: analysts estimated 2–3 per quarter → 0
CrewAI vs LangGraph
| Criterion | CrewAI | LangGraph |
|---|---|---|
| Abstraction | High (roles, tasks) | Low (nodes, edges) |
| Customizability | Limited | Full |
| Barrier to entry | Low | Medium |
| Production readiness | Requires careful tuning | More predictable |
| Debugging | More difficult | Via LangSmith |
Timeline
- Prototype CrewAI with 3 agents: 2–4 days
- Production crew with custom tools: 1–2 weeks
- Complex Flow with conditional routing: 2–3 weeks
- Integration with enterprise systems: +1–2 weeks