Homepage · Docs · Start Cloud Trial · Blog · Forum
Fast and Flexible Multi-Agent Automation Framework
CrewAI is a lean, lightning-fast Python framework built entirely from scratch—completely independent of LangChain or other agent frameworks. It empowers developers with both high-level simplicity and precise low-level control, ideal for creating autonomous AI agents tailored to any scenario.
- CrewAI Crews: Optimize for autonomy and collaborative intelligence.
- CrewAI Flows: Enable granular, event-driven control and single LLM calls for precise task orchestration, with native support for Crews.
With over 100,000 developers certified through our community courses at learn.crewai.com, CrewAI is rapidly becoming the standard for enterprise-ready AI automation.
CrewAI AMP Suite
CrewAI AMP Suite is a comprehensive bundle tailored for organizations that require secure, scalable, and easy-to-manage agent-driven automation.
You can try one part of the suite, the Crew Control Plane, for free.
Crew Control Plane Key Features:
- Tracing & Observability: Monitor and track your AI agents and workflows in real-time, including metrics, logs, and traces.
- Unified Control Plane: A centralized platform for managing, monitoring, and scaling your AI agents and workflows.
- Seamless Integrations: Easily connect with existing enterprise systems, data sources, and cloud infrastructure.
- Advanced Security: Built-in robust security and compliance measures ensuring safe deployment and management.
- Actionable Insights: Real-time analytics and reporting to optimize performance and decision-making.
- 24/7 Support: Dedicated enterprise support to ensure uninterrupted operation and quick resolution of issues.
- On-premise and Cloud Deployment Options: Deploy CrewAI AMP on-premise or in the cloud, depending on your security and compliance requirements.
CrewAI AMP is designed for enterprises seeking a powerful, reliable solution to transform complex business processes into efficient, intelligent automations.
Table of contents
- Why CrewAI?
- Getting Started
- Key Features
- Understanding Flows and Crews
- CrewAI vs LangGraph
- Examples
- Connecting Your Crew to a Model
- How CrewAI Compares
- Frequently Asked Questions (FAQ)
- Contribution
- Telemetry
- License
Why CrewAI?
CrewAI unlocks the true potential of multi-agent automation, delivering the best-in-class combination of speed, flexibility, and control with either Crews of AI Agents or Flows of Events:
- Standalone Framework: Built from scratch, independent of LangChain or any other agent framework.
- High Performance: Optimized for speed and minimal resource usage, enabling faster execution.
- Flexible Low-Level Customization: Complete freedom to customize at both high and low levels - from overall workflows and system architecture to granular agent behaviors, internal prompts, and execution logic.
- Ideal for Every Use Case: Proven effective for both simple tasks and highly complex, real-world, enterprise-grade scenarios.
- Robust Community: Backed by a rapidly growing community of over 100,000 certified developers offering comprehensive support and resources.
CrewAI empowers developers and enterprises to confidently build intelligent automations, bridging the gap between simplicity, flexibility, and performance.
Getting Started
Set up and run your first CrewAI agents by following this tutorial.
Learning Resources
Learn CrewAI through our comprehensive courses:
- Multi AI Agent Systems with CrewAI - Master the fundamentals of multi-agent systems
- Practical Multi AI Agents and Advanced Use Cases - Deep dive into advanced implementations
Understanding Flows and Crews
CrewAI offers two powerful, complementary approaches that work seamlessly together to build sophisticated AI applications:
- Crews: Teams of AI agents with true autonomy and agency, working together to accomplish complex tasks through role-based collaboration. Crews enable:
- Natural, autonomous decision-making between agents
- Dynamic task delegation and collaboration
- Specialized roles with defined goals and expertise
- Flexible problem-solving approaches
- Flows: Production-ready, event-driven workflows that deliver precise control over complex automations. Flows provide:
- Fine-grained control over execution paths for real-world scenarios
- Secure, consistent state management between tasks
- Clean integration of AI agents with production Python code
- Conditional branching for complex business logic
The true power of CrewAI emerges when combining Crews and Flows. This synergy allows you to:
- Build complex, production-grade applications
- Balance autonomy with precise control
- Handle sophisticated real-world scenarios
- Maintain clean, maintainable code structure
Getting Started with Installation
To get started with CrewAI, follow these simple steps:
1. Installation
Ensure you have Python >=3.10 <3.14 installed on your system. CrewAI uses UV for dependency management and package handling, offering a seamless setup and execution experience.
First, install CrewAI:
pip install crewai
If you want to install the 'crewai' package along with its optional features that include additional tools for agents, you can do so by using the following command:
pip install 'crewai[tools]'
The command above installs the basic package and also adds extra components which require more dependencies to function.
Troubleshooting Dependencies
If you encounter issues during installation or usage, here are some common solutions:
Common Issues
- ModuleNotFoundError: No module named 'tiktoken'
  - Install tiktoken explicitly: pip install 'crewai[embeddings]'
  - If using embedchain or other tools: pip install 'crewai[tools]'
- Failed building wheel for tiktoken
  - Ensure the Rust compiler is installed (see installation steps above)
  - For Windows: verify that Visual C++ Build Tools are installed
  - Try upgrading pip: pip install --upgrade pip
  - If issues persist, use a pre-built wheel: pip install tiktoken --prefer-binary
2. Setting Up Your Crew with the YAML Configuration
To create a new CrewAI project, run the following CLI (Command Line Interface) command:
crewai create crew <project_name>
This command creates a new project folder with the following structure:
my_project/
├── .gitignore
├── pyproject.toml
├── README.md
├── .env
└── src/
    └── my_project/
        ├── __init__.py
        ├── main.py
        ├── crew.py
        ├── tools/
        │   ├── custom_tool.py
        │   └── __init__.py
        └── config/
            ├── agents.yaml
            └── tasks.yaml
You can now start developing your crew by editing the files in the src/my_project folder. The main.py file is the entry point of the project, the crew.py file is where you define your crew, the agents.yaml file is where you define your agents, and the tasks.yaml file is where you define your tasks.
To customize your project, you can:
- Modify src/my_project/config/agents.yaml to define your agents.
- Modify src/my_project/config/tasks.yaml to define your tasks.
- Modify src/my_project/crew.py to add your own logic, tools, and specific arguments.
- Modify src/my_project/main.py to add custom inputs for your agents and tasks.
- Add your environment variables into the .env file.
Example of a simple crew with a sequential process:
Instantiate your crew:
crewai create crew latest-ai-development
Modify the files as needed to fit your use case:
agents.yaml
```yaml
# src/my_project/config/agents.yaml
researcher:
  role: >
    {topic} Senior Data Researcher
  goal: >
    Uncover cutting-edge developments in {topic}
  backstory: >
    You're a seasoned researcher with a knack for uncovering the latest
    developments in {topic}. Known for your ability to find the most relevant
    information and present it in a clear and concise manner.

reporting_analyst:
  role: >
    {topic} Reporting Analyst
  goal: >
    Create detailed reports based on {topic} data analysis and research findings
  backstory: >
    You're a meticulous analyst with a keen eye for detail. You're known for
    your ability to turn complex data into clear and concise reports, making
    it easy for others to understand and act on the information you provide.
```
tasks.yaml
```yaml
# src/my_project/config/tasks.yaml
research_task:
  description: >
    Conduct thorough research about {topic}.
    Make sure you find any interesting and relevant information given
    the current year is 2025.
  expected_output: >
    A list with 10 bullet points of the most relevant information about {topic}
  agent: researcher

reporting_task:
  description: >
    Review the context you got and expand each topic into a full section for a report.
    Make sure the report is detailed and contains any and all relevant information.
  expected_output: >
    A fully fledged report with the main topics, each with a full section of information.
    Formatted as markdown without '```'
  agent: reporting_analyst
  output_file: report.md
```
crew.py
```python
# src/my_project/crew.py
from typing import List

from crewai import Agent, Crew, Process, Task
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool


@CrewBase
class LatestAiDevelopmentCrew():
    """LatestAiDevelopment crew"""

    agents: List[BaseAgent]
    tasks: List[Task]

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True,
            tools=[SerperDevTool()]
        )

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config['reporting_analyst'],
            verbose=True
        )

    @task
    def research_task(self) -> Task:
        return Task(
            config=self.tasks_config['research_task'],
        )

    @task
    def reporting_task(self) -> Task:
        return Task(
            config=self.tasks_config['reporting_task'],
            output_file='report.md'
        )

    @crew
    def crew(self) -> Crew:
        """Creates the LatestAiDevelopment crew"""
        return Crew(
            agents=self.agents,  # Automatically created by the @agent decorator
            tasks=self.tasks,  # Automatically created by the @task decorator
            process=Process.sequential,
            verbose=True,
        )
```
main.py
```python
#!/usr/bin/env python
# src/my_project/main.py
from latest_ai_development.crew import LatestAiDevelopmentCrew


def run():
    """Run the crew."""
    inputs = {
        'topic': 'AI Agents'
    }
    LatestAiDevelopmentCrew().crew().kickoff(inputs=inputs)


if __name__ == "__main__":
    run()
```
3. Running Your Crew
Before running your crew, make sure you have the following keys set as environment variables in your .env file:
- An OpenAI API key (or other LLM API key): OPENAI_API_KEY=sk-...
- A Serper.dev API key: SERPER_API_KEY=YOUR_KEY_HERE
Lock the dependencies and install them using the CLI. First, navigate to your project directory:
cd my_project
crewai install (Optional)
To run your crew, execute the following command in the root of your project:
crewai run
or
python src/my_project/main.py
If an error occurs due to Poetry usage, run the following command to update your crewai package:
crewai update
You should see the output in the console and the report.md file should be created in the root of your project with the full final report.
In addition to the sequential process, you can use the hierarchical process, which automatically assigns a manager to the defined crew to properly coordinate the planning and execution of tasks through delegation and validation of results. See more about the processes here.
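As a minimal sketch of the hierarchical process (the agent roles, goals, and the `manager_llm` model id below are illustrative assumptions, not taken from this README):

```python
from crewai import Agent, Crew, Process, Task

# Illustrative agents; roles, goals, and backstories are placeholders
writer = Agent(
    role="Writer",
    goal="Draft concise summaries",
    backstory="An experienced technical writer.",
)
editor = Agent(
    role="Editor",
    goal="Polish drafts for clarity",
    backstory="A meticulous copy editor.",
)

summary_task = Task(
    description="Summarize the latest findings on {topic}",
    expected_output="A one-paragraph summary",
    agent=writer,
)

# Process.hierarchical auto-assigns a manager that delegates tasks and
# validates results; a manager LLM (or a custom manager_agent) is required.
crew = Crew(
    agents=[writer, editor],
    tasks=[summary_task],
    process=Process.hierarchical,
    manager_llm="gpt-4o",  # assumption: any model id supported by your setup
)
```

Kicking this crew off with `crew.kickoff(inputs={"topic": "AI Agents"})` lets the manager coordinate the writer and editor rather than executing tasks strictly in order.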
Key Features
CrewAI stands apart as a lean, standalone, high-performance multi-AI Agent framework delivering simplicity, flexibility, and precise control—free from the complexity and limitations found in other agent frameworks.
- Standalone & Lean: Completely independent from other frameworks like LangChain, offering faster execution and lighter resource demands.
- Flexible & Precise: Easily orchestrate autonomous agents through intuitive Crews or precise Flows, achieving perfect balance for your needs.
- Seamless Integration: Effortlessly combine Crews (autonomy) and Flows (precision) to create complex, real-world automations.
- Deep Customization: Tailor every aspect—from high-level workflows down to low-level internal prompts and agent behaviors.
- Reliable Performance: Consistent results across simple tasks and complex, enterprise-level automations.
- Thriving Community: Backed by robust documentation and over 100,000 certified developers, providing exceptional support and guidance.
Choose CrewAI to easily build powerful, adaptable, and production-ready AI automations.
Examples
You can test different real-life examples of AI crews in the CrewAI-examples repo:
Quick Tutorial
Write Job Descriptions
Check out code for this example or watch a video below:
Trip Planner
Check out code for this example or watch a video below:
Stock Analysis
Check out code for this example or watch a video below:
Using Crews and Flows Together
CrewAI's power truly shines when combining Crews with Flows to create sophisticated automation pipelines.
CrewAI flows support logical operators like or_ and and_ to combine multiple conditions. This can be used with @start, @listen, or @router decorators to create complex triggering conditions.
- or_: Triggers when any of the specified conditions are met.
- and_: Triggers when all of the specified conditions are met.
Here's how you can orchestrate multiple Crews within a Flow:
```python
from crewai.flow.flow import Flow, listen, start, router, or_
from crewai import Crew, Agent, Task, Process
from pydantic import BaseModel


# Define structured state for precise control
class MarketState(BaseModel):
    sentiment: str = "neutral"
    confidence: float = 0.0
    recommendations: list = []


class AdvancedAnalysisFlow(Flow[MarketState]):
    @start()
    def fetch_market_data(self):
        # Demonstrate low-level control with structured state
        self.state.sentiment = "analyzing"
        return {"sector": "tech", "timeframe": "1W"}  # These parameters match the task description template

    @listen(fetch_market_data)
    def analyze_with_crew(self, market_data):
        # Show crew agency through specialized roles
        analyst = Agent(
            role="Senior Market Analyst",
            goal="Conduct deep market analysis with expert insight",
            backstory="You're a veteran analyst known for identifying subtle market patterns"
        )
        researcher = Agent(
            role="Data Researcher",
            goal="Gather and validate supporting market data",
            backstory="You excel at finding and correlating multiple data sources"
        )
        analysis_task = Task(
            description="Analyze {sector} sector data for the past {timeframe}",
            expected_output="Detailed market analysis with confidence score",
            agent=analyst
        )
        research_task = Task(
            description="Find supporting data to validate the analysis",
            expected_output="Corroborating evidence and potential contradictions",
            agent=researcher
        )
        # Demonstrate crew autonomy
        analysis_crew = Crew(
            agents=[analyst, researcher],
            tasks=[analysis_task, research_task],
            process=Process.sequential,
            verbose=True
        )
        return analysis_crew.kickoff(inputs=market_data)  # Pass market_data as named inputs

    @router(analyze_with_crew)
    def determine_next_steps(self):
        # Show flow control with conditional routing
        if self.state.confidence > 0.8:
            return "high_confidence"
        elif self.state.confidence > 0.5:
            return "medium_confidence"
        return "low_confidence"

    @listen("high_confidence")
    def execute_strategy(self):
        # Demonstrate complex decision making
        strategy_crew = Crew(
            agents=[
                Agent(role="Strategy Expert",
                      goal="Develop optimal market strategy")
            ],
            tasks=[
                Task(description="Create detailed strategy based on analysis",
                     expected_output="Step-by-step action plan")
            ]
        )
        return strategy_crew.kickoff()

    @listen(or_("medium_confidence", "low_confidence"))
    def request_additional_analysis(self):
        self.state.recommendations.append("Gather more data")
        return "Additional analysis required"
```
This example demonstrates how to:
- Use Python code for basic data operations
- Create and execute Crews as steps in your workflow
- Use Flow decorators to manage the sequence of operations
- Implement conditional branching based on Crew results
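The example above uses or_, which fires when any listed condition is met. Its counterpart and_ fires only after every listed step has completed. A minimal sketch (the flow and method names here are illustrative, not part of the example above):

```python
from crewai.flow.flow import Flow, and_, listen, start

class ReportFlow(Flow):
    @start()
    def gather_numbers(self):
        return "numbers ready"

    @start()
    def gather_quotes(self):
        return "quotes ready"

    # Runs only once BOTH gather steps have emitted a result.
    @listen(and_(gather_numbers, gather_quotes))
    def compile_report(self):
        return "report compiled"
```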
Connecting Your Crew to a Model
CrewAI supports a wide range of LLMs through a variety of connection options. By default, your agents use the OpenAI API when querying the model, but there are several other ways to connect your agents to models. For example, you can configure your agents to use a local model via the Ollama tool.
Please refer to the Connect CrewAI to LLMs page for details on configuring your agents' connections to models.
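As a minimal sketch of a local setup, an agent can be pointed at an Ollama-served model through the LLM class (the model name and base URL are illustrative and depend on your local Ollama installation):

```python
from crewai import Agent, LLM

# Point an agent at a locally served Ollama model instead of the default OpenAI API.
local_llm = LLM(
    model="ollama/llama3.1",           # any model you have pulled locally
    base_url="http://localhost:11434"  # Ollama's default endpoint
)

agent = Agent(
    role="Local Researcher",
    goal="Answer questions using a locally hosted model",
    backstory="You run entirely on local infrastructure",
    llm=local_llm,
)
```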
How CrewAI Compares
CrewAI's Advantage: CrewAI combines autonomous agent intelligence with precise workflow control through its unique Crews and Flows architecture. The framework excels at both high-level orchestration and low-level customization, enabling complex, production-grade systems with granular control.
- LangGraph: While LangGraph provides a foundation for building agent workflows, its approach requires significant boilerplate code and complex state management patterns. The framework's tight coupling with LangChain can limit flexibility when implementing custom agent behaviors or integrating with external systems.
P.S. CrewAI demonstrates significant performance advantages over LangGraph, executing 5.76x faster in certain cases like this QA task example (see comparison) while achieving higher evaluation scores with faster completion times in certain coding tasks, like in this example (detailed analysis).
- Autogen: While Autogen excels at creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
- ChatDev: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications.
Contribution
CrewAI is open-source and we welcome contributions. If you're looking to contribute, please:
- Fork the repository.
- Create a new branch for your feature.
- Add your feature or improvement.
- Send a pull request.
We appreciate your input!
Installing Dependencies
uv lock
uv sync
Virtual Env
uv venv
Pre-commit hooks
pre-commit install
Running Tests
uv run pytest .
Running static type checks
uvx mypy src
Packaging
uv build
Installing Locally
pip install dist/*.tar.gz
Telemetry
CrewAI uses anonymous telemetry to collect usage data with the main purpose of helping us improve the library by focusing our efforts on the most used features, integrations and tools.
It's pivotal to understand that NO data is collected concerning prompts, task descriptions, agents' backstories or goals, usage of tools, API calls, responses, any data processed by the agents, or secrets and environment variables, except as described below. When the share_crew feature is enabled, detailed data, including task descriptions, agents' backstories or goals, and other specific attributes, is collected to provide deeper insights. Users can disable telemetry entirely by setting the environment variable OTEL_SDK_DISABLED to true.
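The opt-out can also be set from Python, before crewai is imported:

```python
import os

# Disable CrewAI's anonymous telemetry; set this before importing crewai.
os.environ["OTEL_SDK_DISABLED"] = "true"

print(os.environ["OTEL_SDK_DISABLED"])
```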
Data collected includes:
- Version of CrewAI
  - So we can understand how many users are using the latest version
- Version of Python
  - So we can decide which versions to better support
- General OS (e.g. number of CPUs, macOS/Windows/Linux)
  - So we know which operating systems to focus on and whether we could build OS-specific features
- Number of agents and tasks in a crew
  - So we can make sure we are testing internally with similar use cases and educate people on best practices
- Crew process being used
  - So we understand where to focus our efforts
- Whether agents are using memory or allowing delegation
  - So we understand whether to keep improving these features or drop them
- Whether tasks are being executed in parallel or sequentially
  - So we understand whether to focus more on parallel execution
- Language model being used
  - So we can improve support for the most used models
- Roles of agents in a crew
  - So we understand high-level use cases and can build better tools, integrations, and examples
- Names of the tools available
  - So we understand which of the publicly available tools are used most and can improve them
Users can opt in to Further Telemetry, sharing the complete telemetry data, by setting the share_crew attribute to True on their Crews. Enabling share_crew results in the collection of detailed crew and task execution data, including the goal, backstory, context, and output of tasks. This enables deeper insight into usage patterns while respecting the user's choice to share.
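As a sketch, opting in is a single attribute on an existing crew (the agents and tasks below are placeholders for your own):

```python
from crewai import Crew, Process

crew = Crew(
    agents=[...],     # your agents
    tasks=[...],      # your tasks
    process=Process.sequential,
    share_crew=True,  # opt in to sharing detailed crew and task telemetry
)
```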
License
CrewAI is released under the MIT License.
Frequently Asked Questions (FAQ)
General
- What exactly is CrewAI?
- How do I install CrewAI?
- Does CrewAI depend on LangChain?
- Is CrewAI open-source?
- Does CrewAI collect data from users?
Features and Capabilities
- Can CrewAI handle complex use cases?
- Can I use CrewAI with local AI models?
- What makes Crews different from Flows?
- How is CrewAI better than LangChain?
- Does CrewAI support fine-tuning or training custom models?
Resources and Community
Enterprise Features
- What additional features does CrewAI AMP offer?
- Is CrewAI AMP available for cloud and on-premise deployments?
- Can I try CrewAI AMP for free?
Q: What exactly is CrewAI?
A: CrewAI is a standalone, lean, and fast Python framework built specifically for orchestrating autonomous AI agents. Unlike frameworks built on LangChain, CrewAI does not rely on other agent frameworks, making it leaner, faster, and simpler.
Q: How do I install CrewAI?
A: Install CrewAI using pip:
pip install crewai
For additional tools, use:
pip install 'crewai[tools]'
Q: Does CrewAI depend on LangChain?
A: No. CrewAI is built entirely from the ground up, with no dependencies on LangChain or other agent frameworks. This ensures a lean, fast, and flexible experience.
Q: Can CrewAI handle complex use cases?
A: Yes. CrewAI excels at both simple and highly complex real-world scenarios, offering deep customization options at both high and low levels, from internal prompts to sophisticated workflow orchestration.
Q: Can I use CrewAI with local AI models?
A: Absolutely! CrewAI supports various language models, including local ones. Tools like Ollama and LM Studio allow seamless integration. Check the LLM Connections documentation for more details.
Q: What makes Crews different from Flows?
A: Crews provide autonomous agent collaboration, ideal for tasks requiring flexible decision-making and dynamic interaction. Flows offer precise, event-driven control, ideal for managing detailed execution paths and secure state management. You can seamlessly combine both for maximum effectiveness.
Q: How is CrewAI better than LangChain?
A: CrewAI provides simpler, more intuitive APIs, faster execution speeds, more reliable and consistent results, robust documentation, and an active community—addressing common criticisms and limitations associated with LangChain.
Q: Is CrewAI open-source?
A: Yes, CrewAI is open-source and actively encourages community contributions and collaboration.
Q: Does CrewAI collect data from users?
A: CrewAI collects anonymous telemetry data strictly for improvement purposes. Sensitive data such as prompts, tasks, or API responses is never collected unless explicitly enabled by the user.
Q: Where can I find real-world CrewAI examples?
A: Check out practical examples in the CrewAI-examples repository, covering use cases like trip planners, stock analysis, and job postings.
Q: How can I contribute to CrewAI?
A: Contributions are warmly welcomed! Fork the repository, create your branch, implement your changes, and submit a pull request. See the Contribution section of the README for detailed guidelines.
Q: What additional features does CrewAI AMP offer?
A: CrewAI AMP provides advanced features such as a unified control plane, real-time observability, secure integrations, advanced security, actionable insights, and dedicated 24/7 enterprise support.
Q: Is CrewAI AMP available for cloud and on-premise deployments?
A: Yes, CrewAI AMP supports both cloud-based and on-premise deployment options, allowing enterprises to meet their specific security and compliance requirements.
Q: Can I try CrewAI AMP for free?
A: Yes, you can explore part of the CrewAI AMP Suite by accessing the Crew Control Plane for free.
Q: Does CrewAI support fine-tuning or training custom models?
A: Yes, CrewAI can integrate with custom-trained or fine-tuned models, allowing you to enhance your agents with domain-specific knowledge and accuracy.
Q: Can CrewAI agents interact with external tools and APIs?
A: Absolutely! CrewAI agents can easily integrate with external tools, APIs, and databases, empowering them to leverage real-world data and resources.
Q: Is CrewAI suitable for production environments?
A: Yes, CrewAI is explicitly designed with production-grade standards, ensuring reliability, stability, and scalability for enterprise deployments.
Q: How scalable is CrewAI?
A: CrewAI is highly scalable, supporting simple automations and large-scale enterprise workflows involving numerous agents and complex tasks simultaneously.
Q: Does CrewAI offer debugging and monitoring tools?
A: Yes, CrewAI AMP includes advanced debugging, tracing, and real-time observability features, simplifying the management and troubleshooting of your automations.
Q: What programming languages does CrewAI support?
A: CrewAI is primarily Python-based but easily integrates with services and APIs written in any programming language through its flexible API integration capabilities.
Q: Does CrewAI offer educational resources for beginners?
A: Yes, CrewAI provides extensive beginner-friendly tutorials, courses, and documentation through learn.crewai.com, supporting developers at all skill levels.
Q: Can CrewAI automate human-in-the-loop workflows?
A: Yes, CrewAI fully supports human-in-the-loop workflows, allowing seamless collaboration between human experts and AI agents for enhanced decision-making.