Compare commits


14 Commits

Author SHA1 Message Date
Devin AI
538f1a2a84 Enhance error handling and type annotations for optional chromadb dependency
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-03-11 05:52:17 +00:00
Devin AI
dcf8e8d6e7 Fix linting issues in test file
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-03-11 05:46:17 +00:00
Devin AI
fde4c4e92b Update uv.lock
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-03-11 05:44:12 +00:00
Devin AI
36e7236ffe Fix #2324: Make chromadb an optional dependency to support Alpine Linux
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-03-11 05:44:05 +00:00
João Moura
41a670166a new docs 2025-03-10 17:59:35 -07:00
João Moura
a77496a217 new images 2025-03-10 17:35:51 -07:00
João Moura
430260c985 adding state docs 2025-03-10 16:53:23 -07:00
João Moura
334b0959b0 updates 2025-03-10 16:53:23 -07:00
João Moura
2b31e26ba5 update 2025-03-10 16:53:23 -07:00
Brandon Hancock (bhancock_ai)
7122a29a20 fix mistral issues (#2308) 2025-03-10 12:08:43 -04:00
João Moura
f3ddb430a7 fix image 2025-03-09 04:34:38 -07:00
João Moura
435bfca186 preparing new version 2025-03-09 04:24:05 -07:00
João Moura
2ef896bdd5 update readme 2025-03-08 20:39:15 -08:00
Brandon Hancock (bhancock_ai)
59c6c29706 include model_name (#2310) 2025-03-07 16:55:18 -05:00
31 changed files with 3379 additions and 692 deletions

LICENSE

@@ -1,4 +1,4 @@
Copyright (c) 2018 The Python Packaging Authority
Copyright (c) 2025 crewAI, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

README.md

@@ -2,21 +2,46 @@
![Logo of CrewAI](./docs/crewai_logo.png)
# **CrewAI**
**CrewAI**: Production-grade framework for orchestrating sophisticated AI agent systems. From simple automations to complex real-world applications, CrewAI provides precise control and deep customization. By fostering collaborative intelligence through flexible, production-ready architecture, CrewAI empowers agents to work together seamlessly, tackling complex business challenges with predictable, consistent results.
</div>
**CrewAI Enterprise**
Want to plan, build (+ no code), deploy, monitor and iterate on your agents: [CrewAI Enterprise](https://www.crewai.com/enterprise). Designed for complex, real-world applications, our enterprise solution offers:
### Fast and Flexible Multi-Agent Automation Framework
- **Seamless Integrations**
- **Scalable & Secure Deployment**
- **Actionable Insights**
- **24/7 Support**
CrewAI is a lean, lightning-fast Python framework built entirely from
scratch—completely **independent of LangChain or other agent frameworks**.
It empowers developers with both high-level simplicity and precise low-level
control, ideal for creating autonomous AI agents tailored to any scenario.
- **CrewAI Crews**: Optimize for autonomy and collaborative intelligence.
- **CrewAI Flows**: Enable granular, event-driven control and single LLM calls for precise task orchestration, with native support for Crews
With over 100,000 developers certified through our community courses at
[learn.crewai.com](https://learn.crewai.com), CrewAI is rapidly becoming the
standard for enterprise-ready AI automation.
# CrewAI Enterprise Suite
CrewAI Enterprise Suite is a comprehensive bundle tailored for organizations
that require secure, scalable, and easy-to-manage agent-driven automation.
You can try one part of the suite, the [Crew Control Plane](https://app.crewai.com), for free
## Crew Control Plane Key Features:
- **Tracing & Observability**: Monitor and track your AI agents and workflows in real-time, including metrics, logs, and traces.
- **Unified Control Plane**: A centralized platform for managing, monitoring, and scaling your AI agents and workflows.
- **Seamless Integrations**: Easily connect with existing enterprise systems, data sources, and cloud infrastructure.
- **Advanced Security**: Built-in robust security and compliance measures ensuring safe deployment and management.
- **Actionable Insights**: Real-time analytics and reporting to optimize performance and decision-making.
- **24/7 Support**: Dedicated enterprise support to ensure uninterrupted operation and quick resolution of issues.
- **On-premise and Cloud Deployment Options**: Deploy CrewAI Enterprise on-premise or in the cloud, depending on your security and compliance requirements.
CrewAI Enterprise is designed for enterprises seeking a powerful,
reliable solution to transform complex business processes into efficient,
intelligent automations.
<h3>
[Homepage](https://www.crewai.com/) | [Documentation](https://docs.crewai.com/) | [Chat with Docs](https://chatg.pt/DWjSBZn) | [Examples](https://github.com/crewAIInc/crewAI-examples) | [Discourse](https://community.crewai.com)
[Homepage](https://www.crewai.com/) | [Documentation](https://docs.crewai.com/) | [Chat with Docs](https://chatg.pt/DWjSBZn) | [Discourse](https://community.crewai.com)
</h3>
@@ -47,8 +72,19 @@ Want to plan, build (+ no code), deploy, monitor and iterate on your agents: [Crew
## Why CrewAI?
The power of AI collaboration has so much to offer.
CrewAI is a standalone framework, built from the ground up without dependencies on Langchain or other agent frameworks. It's designed to enable AI agents to assume roles, share goals, and operate in a cohesive unit - much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions.
<div align="center" style="margin-bottom: 30px;">
<img src="docs/asset.png" alt="CrewAI Logo" width="100%">
</div>
CrewAI unlocks the true potential of multi-agent automation, delivering the best-in-class combination of speed, flexibility, and control with either Crews of AI Agents or Flows of Events:
- **Standalone Framework**: Built from scratch, independent of LangChain or any other agent framework.
- **High Performance**: Optimized for speed and minimal resource usage, enabling faster execution.
- **Flexible Low Level Customization**: Complete freedom to customize at both high and low levels - from overall workflows and system architecture to granular agent behaviors, internal prompts, and execution logic.
- **Ideal for Every Use Case**: Proven effective for both simple tasks and highly complex, real-world, enterprise-grade scenarios.
- **Robust Community**: Backed by a rapidly growing community of over **100,000 certified** developers offering comprehensive support and resources.
CrewAI empowers developers and enterprises to confidently build intelligent automations, bridging the gap between simplicity, flexibility, and performance.
## Getting Started
@@ -100,6 +136,14 @@ pip install 'crewai[tools]'
```
The command above installs the basic package and also adds extra components which require more dependencies to function.
For vector storage and RAG capabilities, install with the chromadb extra:
```shell
pip install 'crewai[chromadb]'
```
Note: If you're using Alpine Linux or other environments where onnxruntime is not available, you can still use CrewAI without the chromadb dependency, but with limited vector storage functionality.
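The commits in this comparison make chromadb optional by guarding its import. A minimal sketch of that pattern (illustrative only, not CrewAI's exact code; the `require_chromadb` helper name is hypothetical):

```python
# Sketch of the optional-dependency pattern (illustrative, not CrewAI's exact code).
try:
    import chromadb  # installed via: pip install 'crewai[chromadb]'
    CHROMADB_AVAILABLE = True
except ImportError:
    chromadb = None
    CHROMADB_AVAILABLE = False

def require_chromadb() -> None:
    """Raise a helpful error when a vector-storage feature is used without chromadb."""
    if not CHROMADB_AVAILABLE:
        raise ImportError(
            "chromadb is required for vector storage. "
            "Install it with: pip install 'crewai[chromadb]'"
        )
```

With this guard, environments like Alpine Linux that cannot build onnxruntime can still import and use the rest of the package; only the vector-storage paths raise.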
### Troubleshooting Dependencies
If you encounter issues during installation or usage, here are some common solutions:
@@ -321,18 +365,16 @@ In addition to the sequential process, you can use the hierarchical process, whi
## Key Features
**Note**: CrewAI is a standalone framework built from the ground up, without dependencies on Langchain or other agent frameworks.
CrewAI stands apart as a lean, standalone, high-performance framework delivering simplicity, flexibility, and precise control—free from the complexity and limitations found in other agent frameworks.
- **Deep Customization**: Build sophisticated agents with full control over the system - from overriding inner prompts to accessing low-level APIs. Customize roles, goals, tools, and behaviors while maintaining clean abstractions.
- **Autonomous Inter-Agent Delegation**: Agents can autonomously delegate tasks and inquire amongst themselves, enabling complex problem-solving in real-world scenarios.
- **Flexible Task Management**: Define and customize tasks with granular control, from simple operations to complex multi-step processes.
- **Production-Grade Architecture**: Support for both high-level abstractions and low-level customization, with robust error handling and state management.
- **Predictable Results**: Ensure consistent, accurate outputs through programmatic guardrails, agent training capabilities, and flow-based execution control. See our [documentation on guardrails](https://docs.crewai.com/how-to/guardrails/) for implementation details.
- **Model Flexibility**: Run your crew using OpenAI or open source models with production-ready integrations. See [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) for detailed configuration options.
- **Event-Driven Flows**: Build complex, real-world workflows with precise control over execution paths, state management, and conditional logic.
- **Process Orchestration**: Achieve any workflow pattern through flows - from simple sequential and hierarchical processes to complex, custom orchestration patterns with conditional branching and parallel execution.
- **Standalone & Lean**: Completely independent from other frameworks like LangChain, offering faster execution and lighter resource demands.
- **Flexible & Precise**: Easily orchestrate autonomous agents through intuitive [Crews](https://docs.crewai.com/concepts/crews) or precise [Flows](https://docs.crewai.com/concepts/flows), achieving perfect balance for your needs.
- **Seamless Integration**: Effortlessly combine Crews (autonomy) and Flows (precision) to create complex, real-world automations.
- **Deep Customization**: Tailor every aspect—from high-level workflows down to low-level internal prompts and agent behaviors.
- **Reliable Performance**: Consistent results across simple tasks and complex, enterprise-level automations.
- **Thriving Community**: Backed by robust documentation and over 100,000 certified developers, providing exceptional support and guidance.
![CrewAI Mind Map](./docs/crewAI-mindmap.png "CrewAI Mind Map")
Choose CrewAI to easily build powerful, adaptable, and production-ready AI automations.
## Examples
@@ -563,13 +605,39 @@ Users can opt-in to Further Telemetry, sharing the complete telemetry data by se
CrewAI is released under the [MIT License](https://github.com/crewAIInc/crewAI/blob/main/LICENSE).
## Frequently Asked Questions (FAQ)
### Q: What is CrewAI?
A: CrewAI is a cutting-edge framework for orchestrating role-playing, autonomous AI agents. It enables agents to work together seamlessly, tackling complex tasks through collaborative intelligence.
### General
- [What exactly is CrewAI?](#q-what-exactly-is-crewai)
- [How do I install CrewAI?](#q-how-do-i-install-crewai)
- [Does CrewAI depend on LangChain?](#q-does-crewai-depend-on-langchain)
- [Is CrewAI open-source?](#q-is-crewai-open-source)
- [Does CrewAI collect data from users?](#q-does-crewai-collect-data-from-users)
### Features and Capabilities
- [Can CrewAI handle complex use cases?](#q-can-crewai-handle-complex-use-cases)
- [Can I use CrewAI with local AI models?](#q-can-i-use-crewai-with-local-ai-models)
- [What makes Crews different from Flows?](#q-what-makes-crews-different-from-flows)
- [How is CrewAI better than LangChain?](#q-how-is-crewai-better-than-langchain)
- [Does CrewAI support fine-tuning or training custom models?](#q-does-crewai-support-fine-tuning-or-training-custom-models)
### Resources and Community
- [Where can I find real-world CrewAI examples?](#q-where-can-i-find-real-world-crewai-examples)
- [How can I contribute to CrewAI?](#q-how-can-i-contribute-to-crewai)
### Enterprise Features
- [What additional features does CrewAI Enterprise offer?](#q-what-additional-features-does-crewai-enterprise-offer)
- [Is CrewAI Enterprise available for cloud and on-premise deployments?](#q-is-crewai-enterprise-available-for-cloud-and-on-premise-deployments)
- [Can I try CrewAI Enterprise for free?](#q-can-i-try-crewai-enterprise-for-free)
### Q: What exactly is CrewAI?
A: CrewAI is a standalone, lean, and fast Python framework built specifically for orchestrating autonomous AI agents. Unlike frameworks like LangChain, CrewAI does not rely on external dependencies, making it leaner, faster, and simpler.
### Q: How do I install CrewAI?
A: You can install CrewAI using pip:
A: Install CrewAI using pip:
```shell
pip install crewai
```
@@ -577,27 +645,62 @@ For additional tools, use:
```shell
pip install 'crewai[tools]'
```
### Q: Does CrewAI depend on LangChain?
A: No. CrewAI is built entirely from the ground up, with no dependencies on LangChain or other agent frameworks. This ensures a lean, fast, and flexible experience.
### Q: Can I use CrewAI with local models?
A: Yes, CrewAI supports various LLMs, including local models. You can configure your agents to use local models via tools like Ollama & LM Studio. Check the [LLM Connections documentation](https://docs.crewai.com/how-to/LLM-Connections/) for more details.
### Q: Can CrewAI handle complex use cases?
A: Yes. CrewAI excels at both simple and highly complex real-world scenarios, offering deep customization options at both high and low levels, from internal prompts to sophisticated workflow orchestration.
### Q: What are the key features of CrewAI?
A: Key features include role-based agent design, autonomous inter-agent delegation, flexible task management, process-driven execution, output saving as files, and compatibility with both open-source and proprietary models.
### Q: Can I use CrewAI with local AI models?
A: Absolutely! CrewAI supports various language models, including local ones. Tools like Ollama and LM Studio allow seamless integration. Check the [LLM Connections documentation](https://docs.crewai.com/how-to/LLM-Connections/) for more details.
### Q: How does CrewAI compare to other AI orchestration tools?
A: CrewAI is designed with production in mind, offering flexibility similar to Autogen's conversational agents and structured processes like ChatDev, but with more adaptability for real-world applications.
### Q: What makes Crews different from Flows?
A: Crews provide autonomous agent collaboration, ideal for tasks requiring flexible decision-making and dynamic interaction. Flows offer precise, event-driven control, ideal for managing detailed execution paths and secure state management. You can seamlessly combine both for maximum effectiveness.
### Q: How is CrewAI better than LangChain?
A: CrewAI provides simpler, more intuitive APIs, faster execution speeds, more reliable and consistent results, robust documentation, and an active community—addressing common criticisms and limitations associated with LangChain.
### Q: Is CrewAI open-source?
A: Yes, CrewAI is open-source and welcomes contributions from the community.
A: Yes, CrewAI is open-source and actively encourages community contributions and collaboration.
### Q: Does CrewAI collect any data?
A: CrewAI uses anonymous telemetry to collect usage data for improvement purposes. No sensitive data (like prompts, task descriptions, or API calls) is collected. Users can opt-in to share more detailed data by setting `share_crew=True` on their Crews.
### Q: Does CrewAI collect data from users?
A: CrewAI collects anonymous telemetry data strictly for improvement purposes. Sensitive data such as prompts, tasks, or API responses are never collected unless explicitly enabled by the user.
### Q: Where can I find examples of CrewAI in action?
A: You can find various real-life examples in the [CrewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), including trip planners, stock analysis tools, and more.
### Q: What is the difference between Crews and Flows?
A: Crews and Flows serve different but complementary purposes in CrewAI. Crews are teams of AI agents working together to accomplish specific tasks through role-based collaboration, delivering accurate and predictable results. Flows, on the other hand, are event-driven workflows that can orchestrate both Crews and regular Python code, allowing you to build complex automation pipelines with secure state management and conditional execution paths.
### Q: Where can I find real-world CrewAI examples?
A: Check out practical examples in the [CrewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), covering use cases like trip planners, stock analysis, and job postings.
### Q: How can I contribute to CrewAI?
A: Contributions are welcome! You can fork the repository, create a new branch for your feature, add your improvement, and send a pull request. Check the Contribution section in the README for more details.
A: Contributions are warmly welcomed! Fork the repository, create your branch, implement your changes, and submit a pull request. See the Contribution section of the README for detailed guidelines.
### Q: What additional features does CrewAI Enterprise offer?
A: CrewAI Enterprise provides advanced features such as a unified control plane, real-time observability, secure integrations, advanced security, actionable insights, and dedicated 24/7 enterprise support.
### Q: Is CrewAI Enterprise available for cloud and on-premise deployments?
A: Yes, CrewAI Enterprise supports both cloud-based and on-premise deployment options, allowing enterprises to meet their specific security and compliance requirements.
### Q: Can I try CrewAI Enterprise for free?
A: Yes, you can explore part of the CrewAI Enterprise Suite by accessing the [Crew Control Plane](https://app.crewai.com) for free.
### Q: Does CrewAI support fine-tuning or training custom models?
A: Yes, CrewAI can integrate with custom-trained or fine-tuned models, allowing you to enhance your agents with domain-specific knowledge and accuracy.
### Q: Can CrewAI agents interact with external tools and APIs?
A: Absolutely! CrewAI agents can easily integrate with external tools, APIs, and databases, empowering them to leverage real-world data and resources.
### Q: Is CrewAI suitable for production environments?
A: Yes, CrewAI is explicitly designed with production-grade standards, ensuring reliability, stability, and scalability for enterprise deployments.
### Q: How scalable is CrewAI?
A: CrewAI is highly scalable, supporting simple automations and large-scale enterprise workflows involving numerous agents and complex tasks simultaneously.
### Q: Does CrewAI offer debugging and monitoring tools?
A: Yes, CrewAI Enterprise includes advanced debugging, tracing, and real-time observability features, simplifying the management and troubleshooting of your automations.
### Q: What programming languages does CrewAI support?
A: CrewAI is primarily Python-based but easily integrates with services and APIs written in any programming language through its flexible API integration capabilities.
### Q: Does CrewAI offer educational resources for beginners?
A: Yes, CrewAI provides extensive beginner-friendly tutorials, courses, and documentation through learn.crewai.com, supporting developers at all skill levels.
### Q: Can CrewAI automate human-in-the-loop workflows?
A: Yes, CrewAI fully supports human-in-the-loop workflows, allowing seamless collaboration between human experts and AI agents for enhanced decision-making.

BIN docs/asset.png (new file, binary not shown, 66 KiB)
BIN (filename not shown, new file, binary not shown, 16 KiB)
BIN docs/crews.png (new file, binary not shown, 29 KiB)
BIN docs/flows.png (new file, binary not shown, 27 KiB)


@@ -0,0 +1,454 @@
---
title: Crafting Effective Agents
description: Learn best practices for designing powerful, specialized AI agents that collaborate effectively to solve complex problems.
icon: robot
---
# Crafting Effective Agents
## The Art and Science of Agent Design
At the heart of CrewAI lies the agent - a specialized AI entity designed to perform specific roles within a collaborative framework. While creating basic agents is simple, crafting truly effective agents that produce exceptional results requires understanding key design principles and best practices.
This guide will help you master the art of agent design, enabling you to create specialized AI personas that collaborate effectively, think critically, and produce high-quality outputs tailored to your specific needs.
### Why Agent Design Matters
The way you define your agents significantly impacts:
1. **Output quality**: Well-designed agents produce more relevant, high-quality results
2. **Collaboration effectiveness**: Agents with complementary skills work together more efficiently
3. **Task performance**: Agents with clear roles and goals execute tasks more effectively
4. **System scalability**: Thoughtfully designed agents can be reused across multiple crews and contexts
Let's explore best practices for creating agents that excel in these dimensions.
## The 80/20 Rule: Focus on Tasks Over Agents
When building effective AI systems, remember this crucial principle: **80% of your effort should go into designing tasks, and only 20% into defining agents**.
Why? Because even the most perfectly defined agent will fail with poorly designed tasks, but well-designed tasks can elevate even a simple agent. This means:
- Spend most of your time writing clear task instructions
- Define detailed inputs and expected outputs
- Add examples and context to guide execution
- Dedicate the remaining time to agent role, goal, and backstory
This doesn't mean agent design isn't important - it absolutely is. But task design is where most execution failures occur, so prioritize accordingly.
## Core Principles of Effective Agent Design
### 1. The Role-Goal-Backstory Framework
The most powerful agents in CrewAI are built on a strong foundation of three key elements:
#### Role: The Agent's Specialized Function
The role defines what the agent does and their area of expertise. When crafting roles:
- **Be specific and specialized**: Instead of "Writer," use "Technical Documentation Specialist" or "Creative Storyteller"
- **Align with real-world professions**: Base roles on recognizable professional archetypes
- **Include domain expertise**: Specify the agent's field of knowledge (e.g., "Financial Analyst specializing in market trends")
**Examples of effective roles:**
```yaml
role: "Senior UX Researcher specializing in user interview analysis"
role: "Full-Stack Software Architect with expertise in distributed systems"
role: "Corporate Communications Director specializing in crisis management"
```
#### Goal: The Agent's Purpose and Motivation
The goal directs the agent's efforts and shapes their decision-making process. Effective goals should:
- **Be clear and outcome-focused**: Define what the agent is trying to achieve
- **Emphasize quality standards**: Include expectations about the quality of work
- **Incorporate success criteria**: Help the agent understand what "good" looks like
**Examples of effective goals:**
```yaml
goal: "Uncover actionable user insights by analyzing interview data and identifying recurring patterns, unmet needs, and improvement opportunities"
goal: "Design robust, scalable system architectures that balance performance, maintainability, and cost-effectiveness"
goal: "Craft clear, empathetic crisis communications that address stakeholder concerns while protecting organizational reputation"
```
#### Backstory: The Agent's Experience and Perspective
The backstory gives depth to the agent, influencing how they approach problems and interact with others. Good backstories:
- **Establish expertise and experience**: Explain how the agent gained their skills
- **Define working style and values**: Describe how the agent approaches their work
- **Create a cohesive persona**: Ensure all elements of the backstory align with the role and goal
**Examples of effective backstories:**
```yaml
backstory: "You have spent 15 years conducting and analyzing user research for top tech companies. You have a talent for reading between the lines and identifying patterns that others miss. You believe that good UX is invisible and that the best insights come from listening to what users don't say as much as what they do say."
backstory: "With 20+ years of experience building distributed systems at scale, you've developed a pragmatic approach to software architecture. You've seen both successful and failed systems and have learned valuable lessons from each. You balance theoretical best practices with practical constraints and always consider the maintenance and operational aspects of your designs."
backstory: "As a seasoned communications professional who has guided multiple organizations through high-profile crises, you understand the importance of transparency, speed, and empathy in crisis response. You have a methodical approach to crafting messages that address concerns while maintaining organizational credibility."
```
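Taken together, the three elements can be combined into a single agent definition. A sketch of a complete `agents.yaml` entry following CrewAI's YAML configuration convention (the key name `ux_researcher` is illustrative):

```yaml
ux_researcher:
  role: "Senior UX Researcher specializing in user interview analysis"
  goal: "Uncover actionable user insights by analyzing interview data and identifying recurring patterns, unmet needs, and improvement opportunities"
  backstory: "You have spent 15 years conducting and analyzing user research for top tech companies. You have a talent for reading between the lines and identifying patterns that others miss."
```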
### 2. Specialists Over Generalists
Agents perform significantly better when given specialized roles rather than general ones. A highly focused agent delivers more precise, relevant outputs:
**Generic (Less Effective):**
```yaml
role: "Writer"
```
**Specialized (More Effective):**
```yaml
role: "Technical Blog Writer specializing in explaining complex AI concepts to non-technical audiences"
```
**Specialist Benefits:**
- Clearer understanding of expected output
- More consistent performance
- Better alignment with specific tasks
- Improved ability to make domain-specific judgments
### 3. Balancing Specialization and Versatility
Effective agents strike the right balance between specialization (doing one thing extremely well) and versatility (being adaptable to various situations):
- **Specialize in role, versatile in application**: Create agents with specialized skills that can be applied across multiple contexts
- **Avoid overly narrow definitions**: Ensure agents can handle variations within their domain of expertise
- **Consider the collaborative context**: Design agents whose specializations complement the other agents they'll work with
### 4. Setting Appropriate Expertise Levels
The expertise level you assign to your agent shapes how they approach tasks:
- **Novice agents**: Good for straightforward tasks, brainstorming, or initial drafts
- **Intermediate agents**: Suitable for most standard tasks with reliable execution
- **Expert agents**: Best for complex, specialized tasks requiring depth and nuance
- **World-class agents**: Reserved for critical tasks where exceptional quality is needed
Choose the appropriate expertise level based on task complexity and quality requirements. For most collaborative crews, a mix of expertise levels often works best, with higher expertise assigned to core specialized functions.
## Practical Examples: Before and After
Let's look at some examples of agent definitions before and after applying these best practices:
### Example 1: Content Creation Agent
**Before:**
```yaml
role: "Writer"
goal: "Write good content"
backstory: "You are a writer who creates content for websites."
```
**After:**
```yaml
role: "B2B Technology Content Strategist"
goal: "Create compelling, technically accurate content that explains complex topics in accessible language while driving reader engagement and supporting business objectives"
backstory: "You have spent a decade creating content for leading technology companies, specializing in translating technical concepts for business audiences. You excel at research, interviewing subject matter experts, and structuring information for maximum clarity and impact. You believe that the best B2B content educates first and sells second, building trust through genuine expertise rather than marketing hype."
```
### Example 2: Research Agent
**Before:**
```yaml
role: "Researcher"
goal: "Find information"
backstory: "You are good at finding information online."
```
**After:**
```yaml
role: "Academic Research Specialist in Emerging Technologies"
goal: "Discover and synthesize cutting-edge research, identifying key trends, methodologies, and findings while evaluating the quality and reliability of sources"
backstory: "With a background in both computer science and library science, you've mastered the art of digital research. You've worked with research teams at prestigious universities and know how to navigate academic databases, evaluate research quality, and synthesize findings across disciplines. You're methodical in your approach, always cross-referencing information and tracing claims to primary sources before drawing conclusions."
```
## Crafting Effective Tasks for Your Agents
While agent design is important, task design is critical for successful execution. Here are best practices for designing tasks that set your agents up for success:
### The Anatomy of an Effective Task
A well-designed task has two key components that serve different purposes:
#### Task Description: The Process
The description should focus on what to do and how to do it, including:
- Detailed instructions for execution
- Context and background information
- Scope and constraints
- Process steps to follow
#### Expected Output: The Deliverable
The expected output should define what the final result should look like:
- Format specifications (markdown, JSON, etc.)
- Structure requirements
- Quality criteria
- Examples of good outputs (when possible)
### Task Design Best Practices
#### 1. Single Purpose, Single Output
Tasks perform best when focused on one clear objective:
**Bad Example (Too Broad):**
```yaml
task_description: "Research market trends, analyze the data, and create a visualization."
```
**Good Example (Focused):**
```yaml
# Task 1
research_task:
  description: "Research the top 5 market trends in the AI industry for 2024."
  expected_output: "A markdown list of the 5 trends with supporting evidence."

# Task 2
analysis_task:
  description: "Analyze the identified trends to determine potential business impacts."
  expected_output: "A structured analysis with impact ratings (High/Medium/Low)."

# Task 3
visualization_task:
  description: "Create a visual representation of the analyzed trends."
  expected_output: "A description of a chart showing trends and their impact ratings."
```
#### 2. Be Explicit About Inputs and Outputs
Always clearly specify what inputs the task will use and what the output should look like:
**Example:**
```yaml
analysis_task:
  description: >
    Analyze the customer feedback data from the CSV file.
    Focus on identifying recurring themes related to product usability.
    Consider sentiment and frequency when determining importance.
  expected_output: >
    A markdown report with the following sections:
    1. Executive summary (3-5 bullet points)
    2. Top 3 usability issues with supporting data
    3. Recommendations for improvement
```
#### 3. Include Purpose and Context
Explain why the task matters and how it fits into the larger workflow:
**Example:**
```yaml
competitor_analysis_task:
  description: >
    Analyze our three main competitors' pricing strategies.
    This analysis will inform our upcoming pricing model revision.
    Focus on identifying patterns in how they price premium features
    and how they structure their tiered offerings.
```
#### 4. Use Structured Output Tools
For machine-readable outputs, specify the format clearly:
**Example:**
```yaml
data_extraction_task:
  description: "Extract key metrics from the quarterly report."
  expected_output: "JSON object with the following keys: revenue, growth_rate, customer_acquisition_cost, and retention_rate."
```
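A structured expected output lets downstream code consume the result directly. A minimal sketch of validating such a JSON result (the `validate_metrics` helper and the sample values are illustrative, not part of CrewAI's API):

```python
import json

# Keys the task's expected_output asks for.
REQUIRED_KEYS = {"revenue", "growth_rate", "customer_acquisition_cost", "retention_rate"}

def validate_metrics(raw: str) -> dict:
    """Parse the agent's JSON output and verify all expected keys are present."""
    data = json.loads(raw)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data

# Illustrative output an agent might return for this task.
sample = '{"revenue": 1200000, "growth_rate": 0.12, "customer_acquisition_cost": 350, "retention_rate": 0.91}'
metrics = validate_metrics(sample)
```

Validating at the boundary like this surfaces malformed agent output immediately instead of letting it propagate into later tasks.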
## Common Mistakes to Avoid
Based on lessons learned from real-world implementations, here are the most common pitfalls in agent and task design:
### 1. Unclear Task Instructions
**Problem:** Tasks lack sufficient detail, making it difficult for agents to execute effectively.
**Example of Poor Design:**
```yaml
research_task:
  description: "Research AI trends."
  expected_output: "A report on AI trends."
```
**Improved Version:**
```yaml
research_task:
  description: >
    Research the top emerging AI trends for 2024 with a focus on:
    1. Enterprise adoption patterns
    2. Technical breakthroughs in the past 6 months
    3. Regulatory developments affecting implementation
    For each trend, identify key companies, technologies, and potential business impacts.
  expected_output: >
    A comprehensive markdown report with:
    - Executive summary (5 bullet points)
    - 5-7 major trends with supporting evidence
    - For each trend: definition, examples, and business implications
    - References to authoritative sources
```
### 2. "God Tasks" That Try to Do Too Much
**Problem:** Tasks that combine multiple complex operations into one instruction set.
**Example of Poor Design:**
```yaml
comprehensive_task:
  description: "Research market trends, analyze competitor strategies, create a marketing plan, and design a launch timeline."
```
**Improved Version:**
Break this into sequential, focused tasks:
```yaml
# Task 1: Research
market_research_task:
  description: "Research current market trends in the SaaS project management space."
  expected_output: "A markdown summary of key market trends."

# Task 2: Competitive Analysis
competitor_analysis_task:
  description: "Analyze strategies of the top 3 competitors based on the market research."
  expected_output: "A comparison table of competitor strategies."
  context: [market_research_task]

# Continue with additional focused tasks...
```
### 3. Misaligned Description and Expected Output
**Problem:** The task description asks for one thing while the expected output specifies something different.
**Example of Poor Design:**
```yaml
analysis_task:
  description: "Analyze customer feedback to find areas of improvement."
  expected_output: "A marketing plan for the next quarter."
```
**Improved Version:**
```yaml
analysis_task:
  description: "Analyze customer feedback to identify the top 3 areas for product improvement."
  expected_output: "A report listing the 3 priority improvement areas with supporting customer quotes and data points."
```
### 4. Not Understanding the Process Yourself
**Problem:** Asking agents to execute tasks that you yourself don't fully understand.
**Solution:**
1. Try to perform the task manually first
2. Document your process, decision points, and information sources
3. Use this documentation as the basis for your task description
### 5. Premature Use of Hierarchical Structures
**Problem:** Creating unnecessarily complex agent hierarchies where sequential processes would work better.
**Solution:** Start with sequential processes and only move to hierarchical models when the workflow complexity truly requires it.
### 6. Vague or Generic Agent Definitions
**Problem:** Generic agent definitions lead to generic outputs.
**Example of Poor Design:**
```yaml
agent:
  role: "Business Analyst"
  goal: "Analyze business data"
  backstory: "You are good at business analysis."
```
**Improved Version:**
```yaml
agent:
  role: "SaaS Metrics Specialist focusing on growth-stage startups"
  goal: "Identify actionable insights from business data that can directly impact customer retention and revenue growth"
  backstory: "With 10+ years analyzing SaaS business models, you've developed a keen eye for the metrics that truly matter for sustainable growth. You've helped numerous companies identify the leverage points that turned around their business trajectory. You believe in connecting data to specific, actionable recommendations rather than general observations."
```
## Advanced Agent Design Strategies
### Designing for Collaboration
When creating agents that will work together in a crew, consider:
- **Complementary skills**: Design agents with distinct but complementary abilities
- **Handoff points**: Define clear interfaces for how work passes between agents
- **Constructive tension**: Sometimes, creating agents with slightly different perspectives can lead to better outcomes through productive dialogue
For example, a content creation crew might include:
```yaml
# Research Agent
role: "Research Specialist for technical topics"
goal: "Gather comprehensive, accurate information from authoritative sources"
backstory: "You are a meticulous researcher with a background in library science..."
# Writer Agent
role: "Technical Content Writer"
goal: "Transform research into engaging, clear content that educates and informs"
backstory: "You are an experienced writer who excels at explaining complex concepts..."
# Editor Agent
role: "Content Quality Editor"
goal: "Ensure content is accurate, well-structured, and polished while maintaining consistency"
backstory: "With years of experience in publishing, you have a keen eye for detail..."
```
### Creating Specialized Tool Users
Some agents can be designed specifically to leverage certain tools effectively:
```yaml
role: "Data Analysis Specialist"
goal: "Derive meaningful insights from complex datasets through statistical analysis"
backstory: "With a background in data science, you excel at working with structured and unstructured data..."
tools: [PythonREPLTool, DataVisualizationTool, CSVAnalysisTool]
```
### Tailoring Agents to LLM Capabilities
Different LLMs have different strengths. Design your agents with these capabilities in mind:
```yaml
# For complex reasoning tasks
analyst:
  role: "Data Insights Analyst"
  goal: "..."
  backstory: "..."
  llm: openai/gpt-4o

# For creative content
writer:
  role: "Creative Content Writer"
  goal: "..."
  backstory: "..."
  llm: anthropic/claude-3-opus
```
## Testing and Iterating on Agent Design
Agent design is often an iterative process. Here's a practical approach:
1. **Start with a prototype**: Create an initial agent definition
2. **Test with sample tasks**: Evaluate performance on representative tasks
3. **Analyze outputs**: Identify strengths and weaknesses
4. **Refine the definition**: Adjust role, goal, and backstory based on observations
5. **Test in collaboration**: Evaluate how the agent performs in a crew setting
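One way to make steps 2-4 of this loop repeatable is to score each output against a checklist of required sections, so you can compare agent revisions on the same yardstick. The helper below is a rough sketch, not part of CrewAI — `section_coverage` and the checklist are illustrative names and assumptions:

```python
def section_coverage(output: str, required_sections: list[str]) -> float:
    """Fraction of required section headings present in an agent's output."""
    text = output.lower()
    hits = sum(1 for section in required_sections if section.lower() in text)
    return hits / len(required_sections)

# Compare two iterations of an agent definition against the same checklist
required = ["executive summary", "recommendations", "references"]
draft_v1 = "# Executive Summary\nSome findings..."
draft_v2 = "# Executive Summary\n...\n## Recommendations\n...\n## References\n..."
```

A substring check is deliberately crude; it is a first-pass signal for spotting regressions between iterations, not a substitute for reading the outputs.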
## Conclusion
Crafting effective agents is both an art and a science. By carefully defining roles, goals, and backstories that align with your specific needs, and combining them with well-designed tasks, you can create specialized AI collaborators that produce exceptional results.
Remember that agent and task design is an iterative process. Start with these best practices, observe your agents in action, and refine your approach based on what you learn. And always keep in mind the 80/20 rule - focus most of your effort on creating clear, focused tasks to get the best results from your agents.
<Check>
Congratulations! You now understand the principles and practices of effective agent design. Apply these techniques to create powerful, specialized agents that work together seamlessly to accomplish complex tasks.
</Check>
## Next Steps
- Experiment with different agent configurations for your specific use case
- Learn about [building your first crew](/guides/crews/first-crew) to see how agents work together
- Explore [CrewAI Flows](/guides/flows/first-flow) for more advanced orchestration

---
title: Evaluating Use Cases for CrewAI
description: Learn how to assess your AI application needs and choose the right approach between Crews and Flows based on complexity and precision requirements.
icon: scale-balanced
---
# Evaluating Use Cases for CrewAI
## Understanding the Decision Framework
When building AI applications with CrewAI, one of the most important decisions you'll make is choosing the right approach for your specific use case. Should you use a Crew? A Flow? A combination of both? This guide will help you evaluate your requirements and make informed architectural decisions.
At the heart of this decision is understanding the relationship between **complexity** and **precision** in your application:
<Frame caption="Complexity vs. Precision Matrix for CrewAI Applications">
<img src="../../complexity_precision.png" alt="Complexity vs. Precision Matrix" />
</Frame>
This matrix helps visualize how different approaches align with varying requirements for complexity and precision. Let's explore what each quadrant means and how it guides your architectural choices.
## The Complexity-Precision Matrix Explained
### What is Complexity?
In the context of CrewAI applications, **complexity** refers to:
- The number of distinct steps or operations required
- The diversity of tasks that need to be performed
- The interdependencies between different components
- The need for conditional logic and branching
- The sophistication of the overall workflow
### What is Precision?
**Precision** in this context refers to:
- The accuracy required in the final output
- The need for structured, predictable results
- The importance of reproducibility
- The level of control needed over each step
- The tolerance for variation in outputs
### The Four Quadrants
#### 1. Low Complexity, Low Precision
**Characteristics:**
- Simple, straightforward tasks
- Tolerance for some variation in outputs
- Limited number of steps
- Creative or exploratory applications
**Recommended Approach:** Simple Crews with minimal agents
**Example Use Cases:**
- Basic content generation
- Idea brainstorming
- Simple summarization tasks
- Creative writing assistance
#### 2. Low Complexity, High Precision
**Characteristics:**
- Simple workflows that require exact, structured outputs
- Need for reproducible results
- Limited steps but high accuracy requirements
- Often involves data processing or transformation
**Recommended Approach:** Flows with direct LLM calls or simple Crews with structured outputs
**Example Use Cases:**
- Data extraction and transformation
- Form filling and validation
- Structured content generation (JSON, XML)
- Simple classification tasks
#### 3. High Complexity, Low Precision
**Characteristics:**
- Multi-stage processes with many steps
- Creative or exploratory outputs
- Complex interactions between components
- Tolerance for variation in final results
**Recommended Approach:** Complex Crews with multiple specialized agents
**Example Use Cases:**
- Research and analysis
- Content creation pipelines
- Exploratory data analysis
- Creative problem-solving
#### 4. High Complexity, High Precision
**Characteristics:**
- Complex workflows requiring structured outputs
- Multiple interdependent steps with strict accuracy requirements
- Need for both sophisticated processing and precise results
- Often mission-critical applications
**Recommended Approach:** Flows orchestrating multiple Crews with validation steps
**Example Use Cases:**
- Enterprise decision support systems
- Complex data processing pipelines
- Multi-stage document processing
- Regulated industry applications
## Choosing Between Crews and Flows
### When to Choose Crews
Crews are ideal when:
1. **You need collaborative intelligence** - Multiple agents with different specializations need to work together
2. **The problem requires emergent thinking** - The solution benefits from different perspectives and approaches
3. **The task is primarily creative or analytical** - The work involves research, content creation, or analysis
4. **You value adaptability over strict structure** - The workflow can benefit from agent autonomy
5. **The output format can be somewhat flexible** - Some variation in output structure is acceptable
```python
# Example: Research Crew for market analysis
from crewai import Agent, Crew, Process, Task

# Create specialized agents
researcher = Agent(
    role="Market Research Specialist",
    goal="Find comprehensive market data on emerging technologies",
    backstory="You are an expert at discovering market trends and gathering data."
)

analyst = Agent(
    role="Market Analyst",
    goal="Analyze market data and identify key opportunities",
    backstory="You excel at interpreting market data and spotting valuable insights."
)

# Define their tasks
research_task = Task(
    description="Research the current market landscape for AI-powered healthcare solutions",
    expected_output="Comprehensive market data including key players, market size, and growth trends",
    agent=researcher
)

analysis_task = Task(
    description="Analyze the market data and identify the top 3 investment opportunities",
    expected_output="Analysis report with 3 recommended investment opportunities and rationale",
    agent=analyst,
    context=[research_task]
)

# Create the crew
market_analysis_crew = Crew(
    agents=[researcher, analyst],
    tasks=[research_task, analysis_task],
    process=Process.sequential,
    verbose=True
)

# Run the crew
result = market_analysis_crew.kickoff()
```
### When to Choose Flows
Flows are ideal when:
1. **You need precise control over execution** - The workflow requires exact sequencing and state management
2. **The application has complex state requirements** - You need to maintain and transform state across multiple steps
3. **You need structured, predictable outputs** - The application requires consistent, formatted results
4. **The workflow involves conditional logic** - Different paths need to be taken based on intermediate results
5. **You need to combine AI with procedural code** - The solution requires both AI capabilities and traditional programming
```python
# Example: Customer Support Flow with structured processing
from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel

# Define structured state
class SupportTicketState(BaseModel):
    ticket_id: str = ""
    customer_name: str = ""
    issue_description: str = ""
    category: str = ""
    priority: str = "medium"
    resolution: str = ""
    satisfaction_score: int = 0

class CustomerSupportFlow(Flow[SupportTicketState]):
    @start()
    def receive_ticket(self):
        # In a real app, this might come from an API
        self.state.ticket_id = "TKT-12345"
        self.state.customer_name = "Alex Johnson"
        self.state.issue_description = "Unable to access premium features after payment"
        return "Ticket received"

    @listen(receive_ticket)
    def categorize_ticket(self, _):
        # Use a direct LLM call for categorization
        from crewai import LLM
        llm = LLM(model="openai/gpt-4o-mini")
        prompt = f"""
        Categorize the following customer support issue into one of these categories:
        - Billing
        - Account Access
        - Technical Issue
        - Feature Request
        - Other

        Issue: {self.state.issue_description}

        Return only the category name.
        """
        self.state.category = llm.call(prompt).strip()
        return self.state.category

    @router(categorize_ticket)
    def route_by_category(self, category):
        # Route to different handlers based on category
        return category.lower().replace(" ", "_")

    @listen("billing")
    def handle_billing_issue(self):
        # Handle billing-specific logic
        self.state.priority = "high"
        # More billing-specific processing...
        return "Billing issue handled"

    @listen("account_access")
    def handle_access_issue(self):
        # Handle access-specific logic
        self.state.priority = "high"
        # More access-specific processing...
        return "Access issue handled"

    # Additional category handlers...

    @listen("billing", "account_access", "technical_issue", "feature_request", "other")
    def resolve_ticket(self, resolution_info):
        # Final resolution step
        self.state.resolution = f"Issue resolved: {resolution_info}"
        return self.state.resolution

# Run the flow
support_flow = CustomerSupportFlow()
result = support_flow.kickoff()
```
### When to Combine Crews and Flows
The most sophisticated applications often benefit from combining Crews and Flows:
1. **Complex multi-stage processes** - Use Flows to orchestrate the overall process and Crews for complex subtasks
2. **Applications requiring both creativity and structure** - Use Crews for creative tasks and Flows for structured processing
3. **Enterprise-grade AI applications** - Use Flows to manage state and process flow while leveraging Crews for specialized work
```python
# Example: Content Production Pipeline combining Crews and Flows
from crewai.flow.flow import Flow, listen, start
from crewai import Agent, Crew, Process, Task
from pydantic import BaseModel
from typing import Dict

class ContentState(BaseModel):
    topic: str = ""
    target_audience: str = ""
    content_type: str = ""
    outline: Dict = {}
    draft_content: str = ""
    final_content: str = ""
    seo_score: int = 0

class ContentProductionFlow(Flow[ContentState]):
    @start()
    def initialize_project(self):
        # Set initial parameters
        self.state.topic = "Sustainable Investing"
        self.state.target_audience = "Millennial Investors"
        self.state.content_type = "Blog Post"
        return "Project initialized"

    @listen(initialize_project)
    def create_outline(self, _):
        # Use a research crew to create an outline
        researcher = Agent(
            role="Content Researcher",
            goal=f"Research {self.state.topic} for {self.state.target_audience}",
            backstory="You are an expert researcher with deep knowledge of content creation."
        )

        outliner = Agent(
            role="Content Strategist",
            goal=f"Create an engaging outline for a {self.state.content_type}",
            backstory="You excel at structuring content for maximum engagement."
        )

        research_task = Task(
            description=f"Research {self.state.topic} focusing on what would interest {self.state.target_audience}",
            expected_output="Comprehensive research notes with key points and statistics",
            agent=researcher
        )

        outline_task = Task(
            description=f"Create an outline for a {self.state.content_type} about {self.state.topic}",
            expected_output="Detailed content outline with sections and key points",
            agent=outliner,
            context=[research_task]
        )

        outline_crew = Crew(
            agents=[researcher, outliner],
            tasks=[research_task, outline_task],
            process=Process.sequential,
            verbose=True
        )

        # Run the crew and store the result
        result = outline_crew.kickoff()

        # Parse the outline (in a real app, you might use a more robust parsing approach)
        import json
        try:
            self.state.outline = json.loads(result.raw)
        except json.JSONDecodeError:
            # Fallback if not valid JSON
            self.state.outline = {"sections": result.raw}

        return "Outline created"

    @listen(create_outline)
    def write_content(self, _):
        # Use a writing crew to create the content
        writer = Agent(
            role="Content Writer",
            goal=f"Write engaging content for {self.state.target_audience}",
            backstory="You are a skilled writer who creates compelling content."
        )

        editor = Agent(
            role="Content Editor",
            goal="Ensure content is polished, accurate, and engaging",
            backstory="You have a keen eye for detail and a talent for improving content."
        )

        writing_task = Task(
            description=f"Write a {self.state.content_type} about {self.state.topic} following this outline: {self.state.outline}",
            expected_output="Complete draft content in markdown format",
            agent=writer
        )

        editing_task = Task(
            description="Edit and improve the draft content for clarity, engagement, and accuracy",
            expected_output="Polished final content in markdown format",
            agent=editor,
            context=[writing_task]
        )

        writing_crew = Crew(
            agents=[writer, editor],
            tasks=[writing_task, editing_task],
            process=Process.sequential,
            verbose=True
        )

        # Run the crew and store the result
        result = writing_crew.kickoff()
        self.state.final_content = result.raw

        return "Content created"

    @listen(write_content)
    def optimize_for_seo(self, _):
        # Use a direct LLM call for SEO optimization
        from crewai import LLM
        llm = LLM(model="openai/gpt-4o-mini")

        prompt = f"""
        Analyze this content for SEO effectiveness for the keyword "{self.state.topic}".
        Rate it on a scale of 1-100 and provide 3 specific recommendations for improvement.

        Content: {self.state.final_content[:1000]}... (truncated for brevity)

        Format your response as JSON with the following structure:
        {{
            "score": 85,
            "recommendations": [
                "Recommendation 1",
                "Recommendation 2",
                "Recommendation 3"
            ]
        }}
        """

        seo_analysis = llm.call(prompt)

        # Parse the SEO analysis
        import json
        try:
            analysis = json.loads(seo_analysis)
            self.state.seo_score = analysis.get("score", 0)
            return analysis
        except json.JSONDecodeError:
            self.state.seo_score = 50
            return {"score": 50, "recommendations": ["Unable to parse SEO analysis"]}

# Run the flow
content_flow = ContentProductionFlow()
result = content_flow.kickoff()
```
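Both flows above fall back to a default when the LLM's reply fails to parse as JSON. In practice, replies often arrive wrapped in markdown code fences or surrounded by prose, so a slightly more forgiving parser saves many fallbacks. The helper below is a sketch of that idea — `parse_llm_json` is a hypothetical name, not a CrewAI utility:

```python
import json
import re

def parse_llm_json(reply: str, default: dict) -> dict:
    """Extract the first JSON object from an LLM reply, tolerating code
    fences and surrounding prose; fall back to a caller-supplied default.
    Note: the greedy regex assumes one JSON object per reply."""
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if match is None:
        return default
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return default
```

Inside `optimize_for_seo`, the `try`/`except` around `json.loads(seo_analysis)` could then become a single call such as `parse_llm_json(seo_analysis, {"score": 50, "recommendations": []})`.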
## Practical Evaluation Framework
To determine the right approach for your specific use case, follow this step-by-step evaluation framework:
### Step 1: Assess Complexity
Rate your application's complexity on a scale of 1-10 by considering:
1. **Number of steps**: How many distinct operations are required?
   - 1-3 steps: Low complexity (1-3)
   - 4-7 steps: Medium complexity (4-7)
   - 8+ steps: High complexity (8-10)

2. **Interdependencies**: How interconnected are the different parts?
   - Few dependencies: Low complexity (1-3)
   - Some dependencies: Medium complexity (4-7)
   - Many complex dependencies: High complexity (8-10)

3. **Conditional logic**: How much branching and decision-making is needed?
   - Linear process: Low complexity (1-3)
   - Some branching: Medium complexity (4-7)
   - Complex decision trees: High complexity (8-10)

4. **Domain knowledge**: How specialized is the knowledge required?
   - General knowledge: Low complexity (1-3)
   - Some specialized knowledge: Medium complexity (4-7)
   - Deep expertise in multiple domains: High complexity (8-10)
Calculate your average score to determine overall complexity.
### Step 2: Assess Precision Requirements
Rate your precision requirements on a scale of 1-10 by considering:
1. **Output structure**: How structured must the output be?
   - Free-form text: Low precision (1-3)
   - Semi-structured: Medium precision (4-7)
   - Strictly formatted (JSON, XML): High precision (8-10)

2. **Accuracy needs**: How important is factual accuracy?
   - Creative content: Low precision (1-3)
   - Informational content: Medium precision (4-7)
   - Critical information: High precision (8-10)

3. **Reproducibility**: How consistent must results be across runs?
   - Variation acceptable: Low precision (1-3)
   - Some consistency needed: Medium precision (4-7)
   - Exact reproducibility required: High precision (8-10)

4. **Error tolerance**: What is the impact of errors?
   - Low impact: Low precision (1-3)
   - Moderate impact: Medium precision (4-7)
   - High impact: High precision (8-10)
Calculate your average score to determine overall precision requirements.
### Step 3: Map to the Matrix
Plot your complexity and precision scores on the matrix:
- **Low Complexity (1-4), Low Precision (1-4)**: Simple Crews
- **Low Complexity (1-4), High Precision (5-10)**: Flows with direct LLM calls
- **High Complexity (5-10), Low Precision (1-4)**: Complex Crews
- **High Complexity (5-10), High Precision (5-10)**: Flows orchestrating Crews
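Using the thresholds above (1-4 counts as low, 5-10 as high), the mapping can be sketched as a small function. This is a hypothetical helper for your own evaluation notes, not part of CrewAI:

```python
def recommend_approach(complexity: float, precision: float) -> str:
    """Map average 1-10 complexity/precision scores onto the four quadrants
    described above (<= 4 is 'low', >= 5 is 'high')."""
    high_complexity = complexity >= 5
    high_precision = precision >= 5
    if high_complexity and high_precision:
        return "Flows orchestrating Crews"
    if high_complexity:
        return "Complex Crews"
    if high_precision:
        return "Flows with direct LLM calls"
    return "Simple Crews"

# Example: averaging the four complexity and four precision ratings
complexity = (6 + 5 + 4 + 7) / 4   # 5.5 -> high complexity
precision = (3 + 4 + 2 + 3) / 4    # 3.0 -> low precision
```

For the example scores, the function points at a complex Crew — a multi-step creative workflow that tolerates output variation.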
### Step 4: Consider Additional Factors
Beyond complexity and precision, consider:
1. **Development time**: Crews are often faster to prototype
2. **Maintenance needs**: Flows provide better long-term maintainability
3. **Team expertise**: Consider your team's familiarity with different approaches
4. **Scalability requirements**: Flows typically scale better for complex applications
5. **Integration needs**: Consider how the solution will integrate with existing systems
## Conclusion
Choosing between Crews and Flows—or combining them—is a critical architectural decision that impacts the effectiveness, maintainability, and scalability of your CrewAI application. By evaluating your use case along the dimensions of complexity and precision, you can make informed decisions that align with your specific requirements.
Remember that the best approach often evolves as your application matures. Start with the simplest solution that meets your needs, and be prepared to refine your architecture as you gain experience and your requirements become clearer.
<Check>
You now have a framework for evaluating CrewAI use cases and choosing the right approach based on complexity and precision requirements. This will help you build more effective, maintainable, and scalable AI applications.
</Check>
## Next Steps
- Learn more about [crafting effective agents](/guides/agents/crafting-effective-agents)
- Explore [building your first crew](/guides/crews/first-crew)
- Dive into [mastering flow state management](/guides/flows/mastering-flow-state)
- Check out the [core concepts](/concepts/agents) for deeper understanding

---
title: Build Your First Crew
description: Step-by-step tutorial to create a collaborative AI team that works together to solve complex problems.
icon: users-gear
---
# Build Your First Crew
## Unleashing the Power of Collaborative AI
Imagine having a team of specialized AI agents working together seamlessly to solve complex problems, each contributing their unique skills to achieve a common goal. This is the power of CrewAI - a framework that enables you to create collaborative AI systems that can accomplish tasks far beyond what a single AI could achieve alone.
In this guide, we'll walk through creating a research crew that will help us research and analyze a topic, then create a comprehensive report. This practical example demonstrates how AI agents can collaborate to accomplish complex tasks, but it's just the beginning of what's possible with CrewAI.
### What You'll Build and Learn
By the end of this guide, you'll have:
1. **Created a specialized AI research team** with distinct roles and responsibilities
2. **Orchestrated collaboration** between multiple AI agents
3. **Automated a complex workflow** that involves gathering information, analysis, and report generation
4. **Built foundational skills** that you can apply to more ambitious projects
While we're building a simple research crew in this guide, the same patterns and techniques can be applied to create much more sophisticated teams for tasks like:
- Multi-stage content creation with specialized writers, editors, and fact-checkers
- Complex customer service systems with tiered support agents
- Autonomous business analysts that gather data, create visualizations, and generate insights
- Product development teams that ideate, design, and plan implementation
Let's get started building your first crew!
### Prerequisites
Before starting, make sure you have:
1. Installed CrewAI following the [installation guide](/installation)
2. Set up your OpenAI API key in your environment variables
3. Basic understanding of Python
## Step 1: Create a New CrewAI Project
First, let's create a new CrewAI project using the CLI. This command will set up a complete project structure with all the necessary files, allowing you to focus on defining your agents and their tasks rather than setting up boilerplate code.
```bash
crewai create crew research_crew
cd research_crew
```
This will generate a project with the basic structure needed for your crew. The CLI automatically creates:
- A project directory with the necessary files
- Configuration files for agents and tasks
- A basic crew implementation
- A main script to run the crew
<Frame caption="CrewAI Framework Overview">
<img src="../../crews.png" alt="CrewAI Framework Overview" />
</Frame>
## Step 2: Explore the Project Structure
Let's take a moment to understand the project structure created by the CLI. CrewAI follows best practices for Python projects, making it easy to maintain and extend your code as your crews become more complex.
```
research_crew/
├── .gitignore
├── pyproject.toml
├── README.md
├── .env
└── src/
    └── research_crew/
        ├── __init__.py
        ├── main.py
        ├── crew.py
        ├── tools/
        │   ├── custom_tool.py
        │   └── __init__.py
        └── config/
            ├── agents.yaml
            └── tasks.yaml
```
This structure follows best practices for Python projects and makes it easy to organize your code. The separation of configuration files (in YAML) from implementation code (in Python) makes it easy to modify your crew's behavior without changing the underlying code.
## Step 3: Configure Your Agents
Now comes the fun part - defining your AI agents! In CrewAI, agents are specialized entities with specific roles, goals, and backstories that shape their behavior. Think of them as characters in a play, each with their own personality and purpose.
For our research crew, we'll create two agents:
1. A **researcher** who excels at finding and organizing information
2. An **analyst** who can interpret research findings and create insightful reports
Let's modify the `agents.yaml` file to define these specialized agents:
```yaml
# src/research_crew/config/agents.yaml
researcher:
  role: >
    Senior Research Specialist for {topic}
  goal: >
    Find comprehensive and accurate information about {topic}
    with a focus on recent developments and key insights
  backstory: >
    You are an experienced research specialist with a talent for
    finding relevant information from various sources. You excel at
    organizing information in a clear and structured manner, making
    complex topics accessible to others.
  llm: openai/gpt-4o-mini

analyst:
  role: >
    Data Analyst and Report Writer for {topic}
  goal: >
    Analyze research findings and create a comprehensive, well-structured
    report that presents insights in a clear and engaging way
  backstory: >
    You are a skilled analyst with a background in data interpretation
    and technical writing. You have a talent for identifying patterns
    and extracting meaningful insights from research data, then
    communicating those insights effectively through well-crafted reports.
  llm: openai/gpt-4o-mini
```
Notice how each agent has a distinct role, goal, and backstory. These elements aren't just descriptive - they actively shape how the agent approaches its tasks. By crafting these carefully, you can create agents with specialized skills and perspectives that complement each other.
## Step 4: Define Your Tasks
With our agents defined, we now need to give them specific tasks to perform. Tasks in CrewAI represent the concrete work that agents will perform, with detailed instructions and expected outputs.
For our research crew, we'll define two main tasks:
1. A **research task** for gathering comprehensive information
2. An **analysis task** for creating an insightful report
Let's modify the `tasks.yaml` file:
```yaml
# src/research_crew/config/tasks.yaml
research_task:
  description: >
    Conduct thorough research on {topic}. Focus on:
    1. Key concepts and definitions
    2. Historical development and recent trends
    3. Major challenges and opportunities
    4. Notable applications or case studies
    5. Future outlook and potential developments
    Make sure to organize your findings in a structured format with clear sections.
  expected_output: >
    A comprehensive research document with well-organized sections covering
    all the requested aspects of {topic}. Include specific facts, figures,
    and examples where relevant.
  agent: researcher

analysis_task:
  description: >
    Analyze the research findings and create a comprehensive report on {topic}.
    Your report should:
    1. Begin with an executive summary
    2. Include all key information from the research
    3. Provide insightful analysis of trends and patterns
    4. Offer recommendations or future considerations
    5. Be formatted in a professional, easy-to-read style with clear headings
  expected_output: >
    A polished, professional report on {topic} that presents the research
    findings with added analysis and insights. The report should be well-structured
    with an executive summary, main sections, and conclusion.
  agent: analyst
  context:
    - research_task
  output_file: output/report.md
```
Note the `context` field in the analysis task - this is a powerful feature that allows the analyst to access the output of the research task. This creates a workflow where information flows naturally between agents, just as it would in a human team.
## Step 5: Configure Your Crew
Now it's time to bring everything together by configuring our crew. The crew is the container that orchestrates how agents work together to complete tasks.
Let's modify the `crew.py` file:
```python
# src/research_crew/crew.py
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool

@CrewBase
class ResearchCrew():
    """Research crew for comprehensive topic analysis and reporting"""

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True,
            tools=[SerperDevTool()]
        )

    @agent
    def analyst(self) -> Agent:
        return Agent(
            config=self.agents_config['analyst'],
            verbose=True
        )

    @task
    def research_task(self) -> Task:
        return Task(
            config=self.tasks_config['research_task']
        )

    @task
    def analysis_task(self) -> Task:
        return Task(
            config=self.tasks_config['analysis_task'],
            output_file='output/report.md'
        )

    @crew
    def crew(self) -> Crew:
        """Creates the research crew"""
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True,
        )
```
In this code, we're:
1. Creating the researcher agent and equipping it with the SerperDevTool to search the web
2. Creating the analyst agent
3. Setting up the research and analysis tasks
4. Configuring the crew to run tasks sequentially (the analyst will wait for the researcher to finish)
This is where the magic happens - with just a few lines of code, we've defined a collaborative AI system where specialized agents work together in a coordinated process.
## Step 6: Set Up Your Main Script
Now, let's set up the main script that will run our crew. This is where we provide the specific topic we want our crew to research.
```python
#!/usr/bin/env python
# src/research_crew/main.py
import os
from research_crew.crew import ResearchCrew
# Create output directory if it doesn't exist
os.makedirs('output', exist_ok=True)
def run():
"""
Run the research crew.
"""
inputs = {
'topic': 'Artificial Intelligence in Healthcare'
}
# Create and run the crew
result = ResearchCrew().crew().kickoff(inputs=inputs)
# Print the result
print("\n\n=== FINAL REPORT ===\n\n")
print(result.raw)
print("\n\nReport has been saved to output/report.md")
if __name__ == "__main__":
run()
```
This script prepares the environment, specifies our research topic, and kicks off the crew's work. The power of CrewAI is evident in how simple this code is - all the complexity of managing multiple AI agents is handled by the framework.
## Step 7: Set Up Your Environment Variables
Create a `.env` file in your project root with your API keys:
```
OPENAI_API_KEY=your_openai_api_key
SERPER_API_KEY=your_serper_api_key
```
You can get a Serper API key from [Serper.dev](https://serper.dev/).
## Step 8: Install Dependencies
Install the required dependencies using the CrewAI CLI:
```bash
crewai install
```
This command will:
1. Read the dependencies from your project configuration
2. Create a virtual environment if needed
3. Install all required packages
## Step 9: Run Your Crew
Now for the exciting moment - it's time to run your crew and see AI collaboration in action!
```bash
crewai run
```
When you run this command, you'll see your crew spring to life. The researcher will gather information about the specified topic, and the analyst will then create a comprehensive report based on that research. You'll see the agents' thought processes, actions, and outputs in real-time as they work together to complete their tasks.
## Step 10: Review the Output
Once the crew completes its work, you'll find the final report in the `output/report.md` file. The report will include:
1. An executive summary
2. Detailed information about the topic
3. Analysis and insights
4. Recommendations or future considerations
Take a moment to appreciate what you've accomplished - you've created a system where multiple AI agents collaborated on a complex task, each contributing their specialized skills to produce a result that's greater than what any single agent could achieve alone.
## Exploring Other CLI Commands
CrewAI offers several other useful CLI commands for working with crews:
```bash
# View all available commands
crewai --help
# Run the crew
crewai run
# Test the crew
crewai test
# Reset crew memories
crewai reset-memories
# Replay from a specific task
crewai replay -t <task_id>
```
## The Art of the Possible: Beyond Your First Crew
What you've built in this guide is just the beginning. The skills and patterns you've learned can be applied to create increasingly sophisticated AI systems. Here are some ways you could extend this basic research crew:
### Expanding Your Crew
You could add more specialized agents to your crew:
- A **fact-checker** to verify research findings
- A **data visualizer** to create charts and graphs
- A **domain expert** with specialized knowledge in a particular area
- A **critic** to identify weaknesses in the analysis
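As a sketch of what this looks like in practice, a fact-checker could be added to your `agents.yaml` alongside the researcher and analyst (the wording below is illustrative, not prescriptive):

```yaml
# src/research_crew/config/agents.yaml (hypothetical addition)
fact_checker:
  role: >
    Research Fact Checker
  goal: >
    Verify the accuracy of claims in the research findings and clearly
    flag anything that cannot be supported
  backstory: >
    You are a rigorous fact checker who cross-references claims against
    reliable sources and carefully separates verified facts from speculation.
```

You would then expose it as an `@agent` method in `crew.py` and give it a task, just like the existing agents.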
### Adding Tools and Capabilities
You could enhance your agents with additional tools:
- Web browsing tools for real-time research
- CSV/database tools for data analysis
- Code execution tools for data processing
- API connections to external services
### Creating More Complex Workflows
You could implement more sophisticated processes:
- Hierarchical processes where manager agents delegate to worker agents
- Iterative processes with feedback loops for refinement
- Parallel processes where multiple agents work simultaneously
- Dynamic processes that adapt based on intermediate results
### Applying to Different Domains
The same patterns can be applied to create crews for:
- **Content creation**: Writers, editors, fact-checkers, and designers working together
- **Customer service**: Triage agents, specialists, and quality control working together
- **Product development**: Researchers, designers, and planners collaborating
- **Data analysis**: Data collectors, analysts, and visualization specialists
## Next Steps
Now that you've built your first crew, you can:
1. Experiment with different agent configurations and personalities
2. Try more complex task structures and workflows
3. Implement custom tools to give your agents new capabilities
4. Apply your crew to different topics or problem domains
5. Explore [CrewAI Flows](/guides/flows/first-flow) for more advanced workflows with procedural programming
<Check>
Congratulations! You've successfully built your first CrewAI crew that can research and analyze any topic you provide. This foundational experience has equipped you with the skills to create increasingly sophisticated AI systems that can tackle complex, multi-stage problems through collaborative intelligence.
</Check>

---
title: Build Your First Flow
description: Learn how to create structured, event-driven workflows with precise control over execution.
icon: diagram-project
---
# Build Your First Flow
## Taking Control of AI Workflows with Flows
CrewAI Flows represent the next level in AI orchestration - combining the collaborative power of AI agent crews with the precision and flexibility of procedural programming. While crews excel at agent collaboration, flows give you fine-grained control over exactly how and when different components of your AI system interact.
In this guide, we'll walk through creating a powerful CrewAI Flow that generates a comprehensive learning guide on any topic. This tutorial will demonstrate how Flows provide structured, event-driven control over your AI workflows by combining regular code, direct LLM calls, and crew-based processing.
### What Makes Flows Powerful
Flows enable you to:
1. **Combine different AI interaction patterns** - Use crews for complex collaborative tasks, direct LLM calls for simpler operations, and regular code for procedural logic
2. **Build event-driven systems** - Define how components respond to specific events and data changes
3. **Maintain state across components** - Share and transform data between different parts of your application
4. **Integrate with external systems** - Seamlessly connect your AI workflow with databases, APIs, and user interfaces
5. **Create complex execution paths** - Design conditional branches, parallel processing, and dynamic workflows
### What You'll Build and Learn
By the end of this guide, you'll have:
1. **Created a sophisticated content generation system** that combines user input, AI planning, and multi-agent content creation
2. **Orchestrated the flow of information** between different components of your system
3. **Implemented event-driven architecture** where each step responds to the completion of previous steps
4. **Built a foundation for more complex AI applications** that you can expand and customize
This guide creator flow demonstrates fundamental patterns that can be applied to create much more advanced applications, such as:
- Interactive AI assistants that combine multiple specialized subsystems
- Complex data processing pipelines with AI-enhanced transformations
- Autonomous agents that integrate with external services and APIs
- Multi-stage decision-making systems with human-in-the-loop processes
Let's dive in and build your first flow!
## Prerequisites
Before starting, make sure you have:
1. Installed CrewAI following the [installation guide](/installation)
2. Set up your OpenAI API key in your environment variables
3. Basic understanding of Python
## Step 1: Create a New CrewAI Flow Project
First, let's create a new CrewAI Flow project using the CLI. This command sets up a scaffolded project with all the necessary directories and template files for your flow.
```bash
crewai create flow guide_creator_flow
cd guide_creator_flow
```
This will generate a project with the basic structure needed for your flow.
<Frame caption="CrewAI Framework Overview">
<img src="../../flows.png" alt="CrewAI Framework Overview" />
</Frame>
## Step 2: Understanding the Project Structure
The generated project has the following structure. Take a moment to familiarize yourself with it, as understanding this structure will help you create more complex flows in the future.
```
guide_creator_flow/
├── .gitignore
├── pyproject.toml
├── README.md
├── .env
├── main.py
├── crews/
│ └── poem_crew/
│ ├── config/
│ │ ├── agents.yaml
│ │ └── tasks.yaml
│ └── poem_crew.py
└── tools/
└── custom_tool.py
```
This structure provides a clear separation between different components of your flow:
- The main flow logic in the `main.py` file
- Specialized crews in the `crews` directory
- Custom tools in the `tools` directory
We'll modify this structure to create our guide creator flow, which will orchestrate the process of generating comprehensive learning guides.
## Step 3: Add a Content Writer Crew
Our flow will need a specialized crew to handle the content creation process. Let's use the CrewAI CLI to add a content writer crew:
```bash
crewai flow add-crew content-crew
```
This command automatically creates the necessary directories and template files for your crew. The content writer crew will be responsible for writing and reviewing sections of our guide, working within the overall flow orchestrated by our main application.
## Step 4: Configure the Content Writer Crew
Now, let's modify the generated files for the content writer crew. We'll set up two specialized agents - a writer and a reviewer - that will collaborate to create high-quality content for our guide.
1. First, update the agents configuration file to define our content creation team:
```yaml
# src/guide_creator_flow/crews/content_crew/config/agents.yaml
content_writer:
role: >
Educational Content Writer
goal: >
Create engaging, informative content that thoroughly explains the assigned topic
and provides valuable insights to the reader
backstory: >
You are a talented educational writer with expertise in creating clear, engaging
content. You have a gift for explaining complex concepts in accessible language
and organizing information in a way that helps readers build their understanding.
llm: openai/gpt-4o-mini
content_reviewer:
role: >
Educational Content Reviewer and Editor
goal: >
Ensure content is accurate, comprehensive, well-structured, and maintains
consistency with previously written sections
backstory: >
You are a meticulous editor with years of experience reviewing educational
content. You have an eye for detail, clarity, and coherence. You excel at
improving content while maintaining the original author's voice and ensuring
consistent quality across multiple sections.
llm: openai/gpt-4o-mini
```
These agent definitions establish the specialized roles and perspectives that will shape how our AI agents approach content creation. Notice how each agent has a distinct purpose and expertise.
2. Next, update the tasks configuration file to define the specific writing and reviewing tasks:
```yaml
# src/guide_creator_flow/crews/content_crew/config/tasks.yaml
write_section_task:
description: >
Write a comprehensive section on the topic: "{section_title}"
Section description: {section_description}
Target audience: {audience_level} level learners
Your content should:
1. Begin with a brief introduction to the section topic
2. Explain all key concepts clearly with examples
3. Include practical applications or exercises where appropriate
4. End with a summary of key points
5. Be approximately 500-800 words in length
Format your content in Markdown with appropriate headings, lists, and emphasis.
Previously written sections:
{previous_sections}
Make sure your content maintains consistency with previously written sections
and builds upon concepts that have already been explained.
expected_output: >
A well-structured, comprehensive section in Markdown format that thoroughly
explains the topic and is appropriate for the target audience.
agent: content_writer
review_section_task:
description: >
Review and improve the following section on "{section_title}":
{draft_content}
Target audience: {audience_level} level learners
Previously written sections:
{previous_sections}
Your review should:
1. Fix any grammatical or spelling errors
2. Improve clarity and readability
3. Ensure content is comprehensive and accurate
4. Verify consistency with previously written sections
5. Enhance the structure and flow
6. Add any missing key information
Provide the improved version of the section in Markdown format.
expected_output: >
An improved, polished version of the section that maintains the original
structure but enhances clarity, accuracy, and consistency.
agent: content_reviewer
context:
- write_section_task
```
These task definitions provide detailed instructions to our agents, ensuring they produce content that meets our quality standards. Note how the `context` parameter in the review task creates a workflow where the reviewer has access to the writer's output.
3. Now, update the crew implementation file to define how our agents and tasks work together:
```python
# src/guide_creator_flow/crews/content_crew/content_crew.py
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
@CrewBase
class ContentCrew():
"""Content writing crew"""
@agent
def content_writer(self) -> Agent:
return Agent(
config=self.agents_config['content_writer'],
verbose=True
)
@agent
def content_reviewer(self) -> Agent:
return Agent(
config=self.agents_config['content_reviewer'],
verbose=True
)
@task
def write_section_task(self) -> Task:
return Task(
config=self.tasks_config['write_section_task']
)
@task
def review_section_task(self) -> Task:
return Task(
config=self.tasks_config['review_section_task'],
context=[self.write_section_task]
)
@crew
def crew(self) -> Crew:
"""Creates the content writing crew"""
return Crew(
agents=self.agents,
tasks=self.tasks,
process=Process.sequential,
verbose=True,
)
```
This crew definition establishes the relationship between our agents and tasks, setting up a sequential process where the content writer creates a draft and then the reviewer improves it. While this crew can function independently, in our flow it will be orchestrated as part of a larger system.
## Step 5: Create the Flow
Now comes the exciting part - creating the flow that will orchestrate the entire guide creation process. This is where we'll combine regular Python code, direct LLM calls, and our content creation crew into a cohesive system.
Our flow will:
1. Get user input for a topic and audience level
2. Make a direct LLM call to create a structured guide outline
3. Process each section sequentially using the content writer crew
4. Combine everything into a final comprehensive document
Let's create our flow in the `main.py` file:
```python
#!/usr/bin/env python
import json
import os
from typing import List, Dict
from pydantic import BaseModel, Field
from crewai import LLM
from crewai.flow.flow import Flow, listen, start
from guide_creator_flow.crews.content_crew.content_crew import ContentCrew

# Create the output directory up front so the file writes below don't fail
os.makedirs("output", exist_ok=True)
# Define our models for structured data
class Section(BaseModel):
title: str = Field(description="Title of the section")
description: str = Field(description="Brief description of what the section should cover")
class GuideOutline(BaseModel):
title: str = Field(description="Title of the guide")
introduction: str = Field(description="Introduction to the topic")
target_audience: str = Field(description="Description of the target audience")
sections: List[Section] = Field(description="List of sections in the guide")
conclusion: str = Field(description="Conclusion or summary of the guide")
# Define our flow state
class GuideCreatorState(BaseModel):
topic: str = ""
audience_level: str = ""
    guide_outline: GuideOutline | None = None
sections_content: Dict[str, str] = {}
class GuideCreatorFlow(Flow[GuideCreatorState]):
"""Flow for creating a comprehensive guide on any topic"""
@start()
def get_user_input(self):
"""Get input from the user about the guide topic and audience"""
print("\n=== Create Your Comprehensive Guide ===\n")
# Get user input
self.state.topic = input("What topic would you like to create a guide for? ")
# Get audience level with validation
while True:
audience = input("Who is your target audience? (beginner/intermediate/advanced) ").lower()
if audience in ["beginner", "intermediate", "advanced"]:
self.state.audience_level = audience
break
print("Please enter 'beginner', 'intermediate', or 'advanced'")
print(f"\nCreating a guide on {self.state.topic} for {self.state.audience_level} audience...\n")
return self.state
@listen(get_user_input)
def create_guide_outline(self, state):
"""Create a structured outline for the guide using a direct LLM call"""
print("Creating guide outline...")
# Initialize the LLM
llm = LLM(model="openai/gpt-4o-mini", response_format=GuideOutline)
# Create the messages for the outline
messages = [
{"role": "system", "content": "You are a helpful assistant designed to output JSON."},
{"role": "user", "content": f"""
Create a detailed outline for a comprehensive guide on "{state.topic}" for {state.audience_level} level learners.
The outline should include:
1. A compelling title for the guide
2. An introduction to the topic
3. 4-6 main sections that cover the most important aspects of the topic
4. A conclusion or summary
For each section, provide a clear title and a brief description of what it should cover.
"""}
]
# Make the LLM call with JSON response format
response = llm.call(messages=messages)
# Parse the JSON response
outline_dict = json.loads(response)
self.state.guide_outline = GuideOutline(**outline_dict)
# Save the outline to a file
with open("output/guide_outline.json", "w") as f:
json.dump(outline_dict, f, indent=2)
print(f"Guide outline created with {len(self.state.guide_outline.sections)} sections")
return self.state.guide_outline
@listen(create_guide_outline)
def write_and_compile_guide(self, outline):
"""Write all sections and compile the guide"""
print("Writing guide sections and compiling...")
completed_sections = []
# Process sections one by one to maintain context flow
for section in outline.sections:
print(f"Processing section: {section.title}")
# Build context from previous sections
previous_sections_text = ""
if completed_sections:
previous_sections_text = "# Previously Written Sections\n\n"
for title in completed_sections:
previous_sections_text += f"## {title}\n\n"
previous_sections_text += self.state.sections_content.get(title, "") + "\n\n"
else:
previous_sections_text = "No previous sections written yet."
# Run the content crew for this section
result = ContentCrew().crew().kickoff(inputs={
"section_title": section.title,
"section_description": section.description,
"audience_level": self.state.audience_level,
"previous_sections": previous_sections_text,
"draft_content": ""
})
# Store the content
self.state.sections_content[section.title] = result.raw
completed_sections.append(section.title)
print(f"Section completed: {section.title}")
# Compile the final guide
guide_content = f"# {outline.title}\n\n"
guide_content += f"## Introduction\n\n{outline.introduction}\n\n"
# Add each section in order
for section in outline.sections:
section_content = self.state.sections_content.get(section.title, "")
guide_content += f"\n\n{section_content}\n\n"
# Add conclusion
guide_content += f"## Conclusion\n\n{outline.conclusion}\n\n"
# Save the guide
with open("output/complete_guide.md", "w") as f:
f.write(guide_content)
print("\nComplete guide compiled and saved to output/complete_guide.md")
return "Guide creation completed successfully"
def kickoff():
"""Run the guide creator flow"""
GuideCreatorFlow().kickoff()
print("\n=== Flow Complete ===")
print("Your comprehensive guide is ready in the output directory.")
print("Open output/complete_guide.md to view it.")
def plot():
"""Generate a visualization of the flow"""
flow = GuideCreatorFlow()
flow.plot("guide_creator_flow")
print("Flow visualization saved to guide_creator_flow.html")
if __name__ == "__main__":
kickoff()
```
Let's analyze what's happening in this flow:
1. We define Pydantic models for structured data, ensuring type safety and clear data representation
2. We create a state class to maintain data across different steps of the flow
3. We implement three main flow steps:
- Getting user input with the `@start()` decorator
- Creating a guide outline with a direct LLM call
- Processing sections with our content crew
4. We use the `@listen()` decorator to establish event-driven relationships between steps
This is the power of flows - combining different types of processing (user interaction, direct LLM calls, crew-based tasks) into a coherent, event-driven system.
## Step 6: Set Up Your Environment Variables
Create a `.env` file in your project root with your API keys:
```
OPENAI_API_KEY=your_openai_api_key
```
## Step 7: Install Dependencies
Install the required dependencies:
```bash
crewai install
```
## Step 8: Run Your Flow
Now it's time to see your flow in action! Run it using the CrewAI CLI:
```bash
crewai flow kickoff
```
When you run this command, you'll see your flow spring to life:
1. It will prompt you for a topic and audience level
2. It will create a structured outline for your guide
3. It will process each section, with the content writer and reviewer collaborating on each
4. Finally, it will compile everything into a comprehensive guide
This demonstrates the power of flows to orchestrate complex processes involving multiple components, both AI and non-AI.
## Step 9: Visualize Your Flow
One of the powerful features of flows is the ability to visualize their structure:
```bash
crewai flow plot
```
This will create an HTML file that shows the structure of your flow, including the relationships between different steps and the data that flows between them. This visualization can be invaluable for understanding and debugging complex flows.
## Step 10: Review the Output
Once the flow completes, you'll find two files in the `output` directory:
1. `guide_outline.json`: Contains the structured outline of the guide
2. `complete_guide.md`: The comprehensive guide with all sections
Take a moment to review these files and appreciate what you've built - a system that combines user input, direct AI interactions, and collaborative agent work to produce a complex, high-quality output.
## The Art of the Possible: Beyond Your First Flow
What you've learned in this guide provides a foundation for creating much more sophisticated AI systems. Here are some ways you could extend this basic flow:
### Enhancing User Interaction
You could create more interactive flows with:
- Web interfaces for input and output
- Real-time progress updates
- Interactive feedback and refinement loops
- Multi-stage user interactions
### Adding More Processing Steps
You could expand your flow with additional steps for:
- Research before outline creation
- Image generation for illustrations
- Code snippet generation for technical guides
- Final quality assurance and fact-checking
### Creating More Complex Flows
You could implement more sophisticated flow patterns:
- Conditional branching based on user preferences or content type
- Parallel processing of independent sections
- Iterative refinement loops with feedback
- Integration with external APIs and services
### Applying to Different Domains
The same patterns can be applied to create flows for:
- **Interactive storytelling**: Create personalized stories based on user input
- **Business intelligence**: Process data, generate insights, and create reports
- **Product development**: Facilitate ideation, design, and planning
- **Educational systems**: Create personalized learning experiences
## Key Features Demonstrated
This guide creator flow demonstrates several powerful features of CrewAI:
1. **User interaction**: The flow collects input directly from the user
2. **Direct LLM calls**: Uses the LLM class for efficient, single-purpose AI interactions
3. **Structured data with Pydantic**: Uses Pydantic models to ensure type safety
4. **Sequential processing with context**: Writes sections in order, providing previous sections for context
5. **Multi-agent crews**: Leverages specialized agents (writer and reviewer) for content creation
6. **State management**: Maintains state across different steps of the process
7. **Event-driven architecture**: Uses the `@listen` decorator to respond to events
## Understanding the Flow Structure
Let's break down the key components of flows to help you understand how to build your own:
### 1. Direct LLM Calls
Flows allow you to make direct calls to language models when you need simple, structured responses:
```python
llm = LLM(model="openai/gpt-4o-mini", response_format=GuideOutline)
response = llm.call(messages=messages)
```
This is more efficient than using a crew when you need a specific, structured output.
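If you'd rather not call `json.loads` and construct the model by hand, Pydantic can parse and validate the raw JSON string in one step. A small standalone sketch, with the models trimmed from the ones defined earlier and a hard-coded stand-in for the LLM response:

```python
from typing import List
from pydantic import BaseModel

# Trimmed versions of the models defined earlier in main.py
class Section(BaseModel):
    title: str
    description: str

class GuideOutline(BaseModel):
    title: str
    sections: List[Section]

# Stand-in for the JSON string returned by llm.call(...)
response = '{"title": "Intro to Flows", "sections": [{"title": "Basics", "description": "Core concepts"}]}'

# Parse and validate in one step; raises ValidationError on a bad payload
outline = GuideOutline.model_validate_json(response)
```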
### 2. Event-Driven Architecture
Flows use decorators to establish relationships between components:
```python
@start()
def get_user_input(self):
# First step in the flow
# ...
@listen(get_user_input)
def create_guide_outline(self, state):
# This runs when get_user_input completes
# ...
```
This creates a clear, declarative structure for your application.
### 3. State Management
Flows maintain state across steps, making it easy to share data:
```python
class GuideCreatorState(BaseModel):
topic: str = ""
audience_level: str = ""
    guide_outline: GuideOutline | None = None
sections_content: Dict[str, str] = {}
```
This provides a type-safe way to track and transform data throughout your flow.
### 4. Crew Integration
Flows can seamlessly integrate with crews for complex collaborative tasks:
```python
result = ContentCrew().crew().kickoff(inputs={
"section_title": section.title,
# ...
})
```
This allows you to use the right tool for each part of your application - direct LLM calls for simple tasks and crews for complex collaboration.
## Next Steps
Now that you've built your first flow, you can:
1. Experiment with more complex flow structures and patterns
2. Try using `@router()` to create conditional branches in your flows
3. Explore the `and_` and `or_` functions for more complex parallel execution
4. Connect your flow to external APIs, databases, or user interfaces
5. Combine multiple specialized crews in a single flow
<Check>
Congratulations! You've successfully built your first CrewAI Flow that combines regular code, direct LLM calls, and crew-based processing to create a comprehensive guide. These foundational skills enable you to create increasingly sophisticated AI applications that can tackle complex, multi-stage problems through a combination of procedural control and collaborative intelligence.
</Check>

---
title: Mastering Flow State Management
description: A comprehensive guide to managing, persisting, and leveraging state in CrewAI Flows for building robust AI applications.
icon: diagram-project
---
# Mastering Flow State Management
## Understanding the Power of State in Flows
State management is the backbone of any sophisticated AI workflow. In CrewAI Flows, the state system allows you to maintain context, share data between steps, and build complex application logic. Mastering state management is essential for creating reliable, maintainable, and powerful AI applications.
This guide will walk you through everything you need to know about managing state in CrewAI Flows, from basic concepts to advanced techniques, with practical code examples along the way.
### Why State Management Matters
Effective state management enables you to:
1. **Maintain context across execution steps** - Pass information seamlessly between different stages of your workflow
2. **Build complex conditional logic** - Make decisions based on accumulated data
3. **Create persistent applications** - Save and restore workflow progress
4. **Handle errors gracefully** - Implement recovery patterns for more robust applications
5. **Scale your applications** - Support complex workflows with proper data organization
6. **Enable conversational applications** - Store and access conversation history for context-aware AI interactions
Let's explore how to leverage these capabilities effectively.
## State Management Fundamentals
### The Flow State Lifecycle
In CrewAI Flows, the state follows a predictable lifecycle:
1. **Initialization** - When a flow is created, its state is initialized (either as an empty dictionary or a Pydantic model instance)
2. **Modification** - Flow methods access and modify the state as they execute
3. **Transmission** - State is passed automatically between flow methods
4. **Persistence** (optional) - State can be saved to storage and later retrieved
5. **Completion** - The final state reflects the cumulative changes from all executed methods
Understanding this lifecycle is crucial for designing effective flows.
### Two Approaches to State Management
CrewAI offers two ways to manage state in your flows:
1. **Unstructured State** - Using dictionary-like objects for flexibility
2. **Structured State** - Using Pydantic models for type safety and validation
Let's examine each approach in detail.
## Unstructured State Management
Unstructured state uses a dictionary-like approach, offering flexibility and simplicity for straightforward applications.
### How It Works
With unstructured state:
- You access state via `self.state` which behaves like a dictionary
- You can freely add, modify, or remove keys at any point
- All state is automatically available to all flow methods
### Basic Example
Here's a simple example of unstructured state management:
```python
from crewai.flow.flow import Flow, listen, start
class UnstructuredStateFlow(Flow):
@start()
def initialize_data(self):
print("Initializing flow data")
# Add key-value pairs to state
self.state["user_name"] = "Alex"
self.state["preferences"] = {
"theme": "dark",
"language": "English"
}
self.state["items"] = []
# The flow state automatically gets a unique ID
print(f"Flow ID: {self.state['id']}")
return "Initialized"
@listen(initialize_data)
def process_data(self, previous_result):
print(f"Previous step returned: {previous_result}")
# Access and modify state
user = self.state["user_name"]
print(f"Processing data for {user}")
# Add items to a list in state
self.state["items"].append("item1")
self.state["items"].append("item2")
# Add a new key-value pair
self.state["processed"] = True
return "Processed"
@listen(process_data)
def generate_summary(self, previous_result):
# Access multiple state values
user = self.state["user_name"]
theme = self.state["preferences"]["theme"]
items = self.state["items"]
processed = self.state.get("processed", False)
summary = f"User {user} has {len(items)} items with {theme} theme. "
summary += "Data is processed." if processed else "Data is not processed."
return summary
# Run the flow
flow = UnstructuredStateFlow()
result = flow.kickoff()
print(f"Final result: {result}")
print(f"Final state: {flow.state}")
```
### When to Use Unstructured State
Unstructured state is ideal for:
- Quick prototyping and simple flows
- Dynamically evolving state needs
- Cases where the structure may not be known in advance
- Flows with simple state requirements
While flexible, unstructured state lacks type checking and schema validation, which can lead to errors in complex applications.
## Structured State Management
Structured state uses Pydantic models to define a schema for your flow's state, providing type safety, validation, and better developer experience.
### How It Works
With structured state:
- You define a Pydantic model that represents your state structure
- You pass this model type to your Flow class as a type parameter
- You access state via `self.state`, which behaves like a Pydantic model instance
- All fields are validated according to their defined types
- You get IDE autocompletion and type checking support
### Basic Example
Here's how to implement structured state management:
```python
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel, Field
from typing import List
# Define your state model
class UserPreferences(BaseModel):
theme: str = "light"
language: str = "English"
class AppState(BaseModel):
user_name: str = ""
preferences: UserPreferences = UserPreferences()
items: List[str] = []
processed: bool = False
completion_percentage: float = 0.0
# Create a flow with typed state
class StructuredStateFlow(Flow[AppState]):
@start()
def initialize_data(self):
print("Initializing flow data")
# Set state values (type-checked)
self.state.user_name = "Taylor"
self.state.preferences.theme = "dark"
# The ID field is automatically available
print(f"Flow ID: {self.state.id}")
return "Initialized"
@listen(initialize_data)
def process_data(self, previous_result):
print(f"Processing data for {self.state.user_name}")
# Modify state (with type checking)
self.state.items.append("item1")
self.state.items.append("item2")
self.state.processed = True
self.state.completion_percentage = 50.0
return "Processed"
@listen(process_data)
def generate_summary(self, previous_result):
# Access state (with autocompletion)
summary = f"User {self.state.user_name} has {len(self.state.items)} items "
summary += f"with {self.state.preferences.theme} theme. "
summary += "Data is processed." if self.state.processed else "Data is not processed."
summary += f" Completion: {self.state.completion_percentage}%"
return summary
# Run the flow
flow = StructuredStateFlow()
result = flow.kickoff()
print(f"Final result: {result}")
print(f"Final state: {flow.state}")
```
### Benefits of Structured State
Using structured state provides several advantages:
1. **Type Safety** - Catch type errors at development time
2. **Self-Documentation** - The state model clearly documents what data is available
3. **Validation** - Automatic validation of data types and constraints
4. **IDE Support** - Get autocomplete and inline documentation
5. **Default Values** - Easily define fallbacks for missing data
### When to Use Structured State
Structured state is recommended for:
- Complex flows with well-defined data schemas
- Team projects where multiple developers work on the same code
- Applications where data validation is important
- Flows that need to enforce specific data types and constraints
## The Automatic State ID
Both unstructured and structured states automatically receive a unique identifier (UUID) to help track and manage state instances.
### How It Works
- For unstructured state, the ID is accessible as `self.state["id"]`
- For structured state, the ID is accessible as `self.state.id`
- This ID is generated automatically when the flow is created
- The ID remains the same throughout the flow's lifecycle
- The ID can be used for tracking, logging, and retrieving persisted states
This UUID is particularly valuable when implementing persistence or tracking multiple flow executions.
## Dynamic State Updates
Regardless of whether you're using structured or unstructured state, you can update state dynamically throughout your flow's execution.
### Passing Data Between Steps
Flow methods can return values that are then passed as arguments to listening methods:
```python
from crewai.flow.flow import Flow, listen, start
class DataPassingFlow(Flow):
@start()
def generate_data(self):
# This return value will be passed to listening methods
return "Generated data"
@listen(generate_data)
def process_data(self, data_from_previous_step):
print(f"Received: {data_from_previous_step}")
# You can modify the data and pass it along
processed_data = f"{data_from_previous_step} - processed"
# Also update state
self.state["last_processed"] = processed_data
return processed_data
@listen(process_data)
def finalize_data(self, processed_data):
print(f"Received processed data: {processed_data}")
# Access both the passed data and state
last_processed = self.state.get("last_processed", "")
return f"Final: {processed_data} (from state: {last_processed})"
```
This pattern allows you to combine direct data passing with state updates for maximum flexibility.
## Persisting Flow State
One of CrewAI's most powerful features is the ability to persist flow state across executions. This enables workflows that can be paused, resumed, and even recovered after failures.
### The @persist Decorator
The `@persist` decorator automates state persistence, saving your flow's state at key points in execution.
#### Class-Level Persistence
When applied at the class level, `@persist` saves state after every method execution:
```python
from crewai.flow.flow import Flow, listen, persist, start
from pydantic import BaseModel
class CounterState(BaseModel):
value: int = 0
@persist # Apply to the entire flow class
class PersistentCounterFlow(Flow[CounterState]):
@start()
def increment(self):
self.state.value += 1
print(f"Incremented to {self.state.value}")
return self.state.value
@listen(increment)
def double(self, value):
self.state.value = value * 2
print(f"Doubled to {self.state.value}")
return self.state.value
# First run
flow1 = PersistentCounterFlow()
result1 = flow1.kickoff()
print(f"First run result: {result1}")
# Second run - state is automatically loaded
flow2 = PersistentCounterFlow()
result2 = flow2.kickoff()
print(f"Second run result: {result2}") # Will be higher due to persisted state
```
#### Method-Level Persistence
For more granular control, you can apply `@persist` to specific methods:
```python
from crewai.flow.flow import Flow, listen, persist, start
class SelectivePersistFlow(Flow):
@start()
def first_step(self):
self.state["count"] = 1
return "First step"
@persist # Only persist after this method
@listen(first_step)
def important_step(self, prev_result):
self.state["count"] += 1
self.state["important_data"] = "This will be persisted"
return "Important step completed"
@listen(important_step)
def final_step(self, prev_result):
self.state["count"] += 1
return f"Complete with count {self.state['count']}"
```
## Advanced State Patterns
### State-Based Conditional Logic
You can use state to implement complex conditional logic in your flows:
```python
from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel
class PaymentState(BaseModel):
amount: float = 0.0
is_approved: bool = False
retry_count: int = 0
class PaymentFlow(Flow[PaymentState]):
@start()
def process_payment(self):
# Simulate payment processing
self.state.amount = 100.0
self.state.is_approved = self.state.amount < 1000
return "Payment processed"
@router(process_payment)
def check_approval(self, previous_result):
if self.state.is_approved:
return "approved"
elif self.state.retry_count < 3:
return "retry"
else:
return "rejected"
@listen("approved")
def handle_approval(self):
return f"Payment of ${self.state.amount} approved!"
@listen("retry")
def handle_retry(self):
self.state.retry_count += 1
print(f"Retrying payment (attempt {self.state.retry_count})...")
# Could implement retry logic here
return "Retry initiated"
@listen("rejected")
def handle_rejection(self):
return f"Payment of ${self.state.amount} rejected after {self.state.retry_count} retries."
```
### Handling Complex State Transformations
For complex state transformations, you can create dedicated methods:
```python
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
from typing import Dict
class UserData(BaseModel):
name: str
active: bool = True
login_count: int = 0
class ComplexState(BaseModel):
users: Dict[str, UserData] = {}
active_user_count: int = 0
class TransformationFlow(Flow[ComplexState]):
@start()
def initialize(self):
# Add some users
self.add_user("alice", "Alice")
self.add_user("bob", "Bob")
self.add_user("charlie", "Charlie")
return "Initialized"
@listen(initialize)
def process_users(self, _):
# Increment login counts
for user_id in self.state.users:
self.increment_login(user_id)
# Deactivate one user
self.deactivate_user("bob")
# Update active count
self.update_active_count()
return f"Processed {len(self.state.users)} users"
# Helper methods for state transformations
def add_user(self, user_id: str, name: str):
self.state.users[user_id] = UserData(name=name)
self.update_active_count()
def increment_login(self, user_id: str):
if user_id in self.state.users:
self.state.users[user_id].login_count += 1
def deactivate_user(self, user_id: str):
if user_id in self.state.users:
self.state.users[user_id].active = False
self.update_active_count()
def update_active_count(self):
self.state.active_user_count = sum(
1 for user in self.state.users.values() if user.active
)
```
This pattern of creating helper methods keeps your flow methods clean while enabling complex state manipulations.
## State Management with Crews
One of the most powerful patterns in CrewAI is combining flow state management with crew execution.
### Passing State to Crews
You can use flow state to parameterize crews:
```python
from crewai.flow.flow import Flow, listen, start
from crewai import Agent, Crew, Process, Task
from pydantic import BaseModel
class ResearchState(BaseModel):
topic: str = ""
depth: str = "medium"
results: str = ""
class ResearchFlow(Flow[ResearchState]):
@start()
def get_parameters(self):
# In a real app, this might come from user input
self.state.topic = "Artificial Intelligence Ethics"
self.state.depth = "deep"
return "Parameters set"
@listen(get_parameters)
def execute_research(self, _):
# Create agents
researcher = Agent(
role="Research Specialist",
goal=f"Research {self.state.topic} in {self.state.depth} detail",
backstory="You are an expert researcher with a talent for finding accurate information."
)
writer = Agent(
role="Content Writer",
goal="Transform research into clear, engaging content",
backstory="You excel at communicating complex ideas clearly and concisely."
)
# Create tasks
research_task = Task(
description=f"Research {self.state.topic} with {self.state.depth} analysis",
expected_output="Comprehensive research notes in markdown format",
agent=researcher
)
writing_task = Task(
description=f"Create a summary on {self.state.topic} based on the research",
expected_output="Well-written article in markdown format",
agent=writer,
context=[research_task]
)
# Create and run crew
research_crew = Crew(
agents=[researcher, writer],
tasks=[research_task, writing_task],
process=Process.sequential,
verbose=True
)
# Run crew and store result in state
result = research_crew.kickoff()
self.state.results = result.raw
return "Research completed"
@listen(execute_research)
def summarize_results(self, _):
# Access the stored results
result_length = len(self.state.results)
return f"Research on {self.state.topic} completed with {result_length} characters of results."
```
### Handling Crew Outputs in State
When a crew completes, you can process its output and store it in your flow state:
```python
@listen(execute_crew)
def process_crew_results(self, _):
# Parse the raw results (assuming JSON output)
import json
try:
results_dict = json.loads(self.state.raw_results)
self.state.processed_results = {
"title": results_dict.get("title", ""),
"main_points": results_dict.get("main_points", []),
"conclusion": results_dict.get("conclusion", "")
}
return "Results processed successfully"
except json.JSONDecodeError:
self.state.error = "Failed to parse crew results as JSON"
return "Error processing results"
```
## Best Practices for State Management
### 1. Keep State Focused
Design your state to contain only what's necessary:
```python
from typing import Dict, List

from pydantic import BaseModel

# Too broad
class BloatedState(BaseModel):
user_data: Dict = {}
system_settings: Dict = {}
temporary_calculations: List = []
debug_info: Dict = {}
# ...many more fields
# Better: Focused state
class FocusedState(BaseModel):
user_id: str
preferences: Dict[str, str]
completion_status: Dict[str, bool]
```
### 2. Use Structured State for Complex Flows
As your flows grow in complexity, structured state becomes increasingly valuable:
```python
from datetime import datetime
from typing import Optional

from crewai.flow.flow import Flow, start
from pydantic import BaseModel, Field

# Simple flow can use unstructured state
class SimpleGreetingFlow(Flow):
@start()
def greet(self):
self.state["name"] = "World"
return f"Hello, {self.state['name']}!"
# Complex flow benefits from structured state
class UserRegistrationState(BaseModel):
username: str
email: str
verification_status: bool = False
registration_date: datetime = Field(default_factory=datetime.now)
last_login: Optional[datetime] = None
class RegistrationFlow(Flow[UserRegistrationState]):
# Methods with strongly-typed state access
```
### 3. Document State Transitions
For complex flows, document how state changes throughout the execution:
```python
import uuid

@start()
def initialize_order(self):
"""
Initialize order state with empty values.
State before: {}
State after: {order_id: str, items: [], status: 'new'}
"""
self.state.order_id = str(uuid.uuid4())
self.state.items = []
self.state.status = "new"
return "Order initialized"
```
### 4. Handle State Errors Gracefully
Implement error handling for state access:
```python
@listen(previous_step)
def process_data(self, _):
try:
# Try to access a value that might not exist
user_preference = self.state.preferences.get("theme", "default")
    except (AttributeError, KeyError):
        # Handle the error gracefully, sticking to dictionary-style access
        errors = self.state.get("errors", [])
        errors.append("Failed to access preferences")
        self.state["errors"] = errors
        user_preference = "default"
return f"Used preference: {user_preference}"
```
### 5. Use State for Progress Tracking
Leverage state to track progress in long-running flows:
```python
from crewai.flow.flow import Flow, listen, start

class ProgressTrackingFlow(Flow):
@start()
def initialize(self):
self.state["total_steps"] = 3
self.state["current_step"] = 0
self.state["progress"] = 0.0
self.update_progress()
return "Initialized"
def update_progress(self):
"""Helper method to calculate and update progress"""
if self.state.get("total_steps", 0) > 0:
self.state["progress"] = (self.state.get("current_step", 0) /
self.state["total_steps"]) * 100
print(f"Progress: {self.state['progress']:.1f}%")
@listen(initialize)
def step_one(self, _):
# Do work...
self.state["current_step"] = 1
self.update_progress()
return "Step 1 complete"
# Additional steps...
```
### 6. Use Immutable Operations When Possible
Especially with structured state, prefer immutable operations for clarity:
```python
# Instead of modifying lists in place:
self.state.items.append(new_item) # Mutable operation
# Consider creating new state:
from typing import List

from crewai.flow.flow import Flow, start
from pydantic import BaseModel
class ItemState(BaseModel):
items: List[str] = []
class ImmutableFlow(Flow[ItemState]):
@start()
def add_item(self):
# Create new list with the added item
self.state.items = [*self.state.items, "new item"]
return "Item added"
```
## Debugging Flow State
### Logging State Changes
When developing, add logging to track state changes:
```python
import logging

from crewai.flow.flow import Flow, listen, start

logging.basicConfig(level=logging.INFO)

class LoggingFlow(Flow):
def log_state(self, step_name):
logging.info(f"State after {step_name}: {self.state}")
@start()
def initialize(self):
self.state["counter"] = 0
self.log_state("initialize")
return "Initialized"
@listen(initialize)
def increment(self, _):
self.state["counter"] += 1
self.log_state("increment")
return f"Incremented to {self.state['counter']}"
```
### State Visualization
You can add methods to visualize your state for debugging:
```python
def visualize_state(self):
"""Create a simple visualization of the current state"""
import json
from rich.console import Console
from rich.panel import Panel
console = Console()
if hasattr(self.state, "model_dump"):
# Pydantic v2
state_dict = self.state.model_dump()
elif hasattr(self.state, "dict"):
# Pydantic v1
state_dict = self.state.dict()
else:
# Unstructured state
state_dict = dict(self.state)
# Remove id for cleaner output
if "id" in state_dict:
state_dict.pop("id")
state_json = json.dumps(state_dict, indent=2, default=str)
console.print(Panel(state_json, title="Current Flow State"))
```
## Conclusion
Mastering state management in CrewAI Flows gives you the power to build sophisticated, robust AI applications that maintain context, make complex decisions, and deliver consistent results.
Whether you choose unstructured or structured state, implementing proper state management practices will help you create flows that are maintainable, extensible, and effective at solving real-world problems.
As you develop more complex flows, remember that good state management is about finding the right balance between flexibility and structure, making your code both powerful and easy to understand.
<Check>
You've now mastered the concepts and practices of state management in CrewAI Flows! With this knowledge, you can create robust AI workflows that effectively maintain context, share data between steps, and build sophisticated application logic.
</Check>
## Next Steps
- Experiment with both structured and unstructured state in your flows
- Try implementing state persistence for long-running workflows
- Explore [building your first crew](/guides/crews/first-crew) to see how crews and flows can work together
- Check out the [Flow reference documentation](/concepts/flows) for more advanced features

View File

@@ -6,20 +6,23 @@ icon: handshake
# What is CrewAI?
**CrewAI is a cutting-edge framework for orchestrating autonomous AI agents.**
**CrewAI is a lean, lightning-fast Python framework built entirely from scratch—completely independent of LangChain or other agent frameworks.**
CrewAI enables you to create AI teams where each agent has specific roles, tools, and goals, working together to accomplish complex tasks.
CrewAI empowers developers with both high-level simplicity and precise low-level control, ideal for creating autonomous AI agents tailored to any scenario:
Think of it as assembling your dream team - each member (agent) brings unique skills and expertise, collaborating seamlessly to achieve your objectives.
- **[CrewAI Crews](/guides/crews/first-crew)**: Optimize for autonomy and collaborative intelligence, enabling you to create AI teams where each agent has specific roles, tools, and goals.
- **[CrewAI Flows](/guides/flows/first-flow)**: Enable granular, event-driven control, single LLM calls for precise task orchestration and supports Crews natively.
## How CrewAI Works
With over 100,000 developers certified through our community courses, CrewAI is rapidly becoming the standard for enterprise-ready AI automation.
## How Crews Work
<Note>
Just like a company has departments (Sales, Engineering, Marketing) working together under leadership to achieve business goals, CrewAI helps you create an organization of AI agents with specialized roles collaborating to accomplish complex tasks.
</Note>
<Frame caption="CrewAI Framework Overview">
<img src="crewAI-mindmap.png" alt="CrewAI Framework Overview" />
<img src="crews.png" alt="CrewAI Framework Overview" />
</Frame>
| Component | Description | Key Features |
@@ -53,12 +56,87 @@ Think of it as assembling your dream team - each member (agent) brings unique sk
</Card>
</CardGroup>
## How Flows Work
<Note>
While Crews excel at autonomous collaboration, Flows provide structured automations, offering granular control over workflow execution. Flows ensure tasks are executed reliably, securely, and efficiently, handling conditional logic, loops, and dynamic state management with precision. Flows integrate seamlessly with Crews, enabling you to balance high autonomy with exacting control.
</Note>
<Frame caption="CrewAI Framework Overview">
<img src="flows.png" alt="CrewAI Framework Overview" />
</Frame>
| Component | Description | Key Features |
|:----------|:-----------:|:------------|
| **Flow** | Structured workflow orchestration | • Manages execution paths<br/>• Handles state transitions<br/>• Controls task sequencing<br/>• Ensures reliable execution |
| **Events** | Triggers for workflow actions | • Initiate specific processes<br/>• Enable dynamic responses<br/>• Support conditional branching<br/>• Allow for real-time adaptation |
| **States** | Workflow execution contexts | • Maintain execution data<br/>• Enable persistence<br/>• Support resumability<br/>• Ensure execution integrity |
| **Crew Support** | Enhances workflow automation | • Injects pockets of agency when needed<br/>• Complements structured workflows<br/>• Balances automation with intelligence<br/>• Enables adaptive decision-making |
### Key Capabilities
<CardGroup cols={2}>
<Card title="Event-Driven Orchestration" icon="bolt">
Define precise execution paths responding dynamically to events
</Card>
<Card title="Fine-Grained Control" icon="sliders">
Manage workflow states and conditional execution securely and efficiently
</Card>
<Card title="Native Crew Integration" icon="puzzle-piece">
Effortlessly combine with Crews for enhanced autonomy and intelligence
</Card>
<Card title="Deterministic Execution" icon="route">
Ensure predictable outcomes with explicit control flow and error handling
</Card>
</CardGroup>
## When to Use Crews vs. Flows
<Note>
Understanding when to use [Crews](/guides/crews/first-crew) versus [Flows](/guides/flows/first-flow) is key to maximizing the potential of CrewAI in your applications.
</Note>
| Use Case | Recommended Approach | Why? |
|:---------|:---------------------|:-----|
| **Open-ended research** | [Crews](/guides/crews/first-crew) | When tasks require creative thinking, exploration, and adaptation |
| **Content generation** | [Crews](/guides/crews/first-crew) | For collaborative creation of articles, reports, or marketing materials |
| **Decision workflows** | [Flows](/guides/flows/first-flow) | When you need predictable, auditable decision paths with precise control |
| **API orchestration** | [Flows](/guides/flows/first-flow) | For reliable integration with multiple external services in a specific sequence |
| **Hybrid applications** | Combined approach | Use [Flows](/guides/flows/first-flow) to orchestrate overall process with [Crews](/guides/crews/first-crew) handling complex subtasks |
### Decision Framework
- **Choose [Crews](/guides/crews/first-crew) when:** You need autonomous problem-solving, creative collaboration, or exploratory tasks
- **Choose [Flows](/guides/flows/first-flow) when:** You require deterministic outcomes, auditability, or precise control over execution
- **Combine both when:** Your application needs both structured processes and pockets of autonomous intelligence
## Why Choose CrewAI?
- 🧠 **Autonomous Operation**: Agents make intelligent decisions based on their roles and available tools
- 📝 **Natural Interaction**: Agents communicate and collaborate like human team members
- 🛠️ **Extensible Design**: Easy to add new tools, roles, and capabilities
- 🚀 **Production Ready**: Built for reliability and scalability in real-world applications
- 🔒 **Security-Focused**: Designed with enterprise security requirements in mind
- 💰 **Cost-Efficient**: Optimized to minimize token usage and API calls
## Ready to Start Building?
<CardGroup cols={2}>
<Card
title="Build Your First Crew"
icon="users-gear"
href="/guides/crews/first-crew"
>
Step-by-step tutorial to create a collaborative AI team that works together to solve complex problems.
</Card>
<Card
title="Build Your First Flow"
icon="diagram-project"
href="/guides/flows/first-flow"
>
Learn how to create structured, event-driven workflows with precise control over execution.
</Card>
</CardGroup>
<CardGroup cols={3}>
<Card

View File

@@ -61,6 +61,36 @@
"quickstart"
]
},
{
"group": "Guides",
"pages": [
{
"group": "Concepts",
"pages": [
"guides/concepts/evaluating-use-cases"
]
},
{
"group": "Agents",
"pages": [
"guides/agents/crafting-effective-agents"
]
},
{
"group": "Crews",
"pages": [
"guides/crews/first-crew"
]
},
{
"group": "Flows",
"pages": [
"guides/flows/first-flow",
"guides/flows/mastering-flow-state"
]
}
]
},
{
"group": "Core Concepts",
"pages": [

View File

@@ -1,6 +1,6 @@
[project]
name = "crewai"
version = "0.102.0"
version = "0.105.0"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md"
requires-python = ">=3.10,<3.13"
@@ -21,7 +21,6 @@ dependencies = [
"opentelemetry-sdk>=1.22.0",
"opentelemetry-exporter-otlp-proto-http>=1.22.0",
# Data Handling
"chromadb>=0.5.23",
"openpyxl>=3.1.5",
"pyvis>=0.3.2",
# Authentication and Security
@@ -45,7 +44,7 @@ Documentation = "https://docs.crewai.com"
Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = ["crewai-tools>=0.36.0"]
tools = ["crewai-tools>=0.37.0"]
embeddings = [
"tiktoken~=0.7.0"
]
@@ -64,6 +63,9 @@ mem0 = ["mem0ai>=0.1.29"]
docling = [
"docling>=2.12.0",
]
chromadb = [
"chromadb>=0.5.23",
]
[tool.uv]
dev-dependencies = [

View File

@@ -14,7 +14,7 @@ warnings.filterwarnings(
category=UserWarning,
module="pydantic.main",
)
__version__ = "0.102.0"
__version__ = "0.105.0"
__all__ = [
"Agent",
"Crew",

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.102.0,<1.0.0"
"crewai[tools]>=0.105.0,<1.0.0"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.102.0,<1.0.0",
"crewai[tools]>=0.105.0,<1.0.0",
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.102.0"
"crewai[tools]>=0.105.0"
]
[tool.crewai]

View File

@@ -184,7 +184,7 @@ class Crew(BaseModel):
default=None,
description="Maximum number of requests per minute for the crew execution to be respected.",
)
prompt_file: Optional[str] = Field(
prompt_file: str = Field(
default=None,
description="Path to the prompt json file to be used for the crew.",
)
@@ -808,7 +808,6 @@ class Crew(BaseModel):
)
if skipped_task_output:
task_outputs.append(skipped_task_output)
last_sync_output = skipped_task_output
continue
if task.async_execution:
@@ -822,10 +821,8 @@ class Crew(BaseModel):
)
futures.append((task, future, task_index))
else:
# Process any pending async tasks before executing a sync task
if futures:
processed_outputs = self._process_async_tasks(futures, was_replayed)
task_outputs.extend(processed_outputs)
task_outputs = self._process_async_tasks(futures, was_replayed)
futures.clear()
context = self._get_context(task, task_outputs)
@@ -835,14 +832,11 @@ class Crew(BaseModel):
tools=tools_for_task,
)
task_outputs.append(task_output)
last_sync_output = task_output
self._process_task_result(task, task_output)
self._store_execution_log(task, task_output, task_index, was_replayed)
# Process any remaining async tasks at the end
if futures:
processed_outputs = self._process_async_tasks(futures, was_replayed)
task_outputs.extend(processed_outputs)
task_outputs = self._process_async_tasks(futures, was_replayed)
return self._create_crew_output(task_outputs)
@@ -854,17 +848,12 @@ class Crew(BaseModel):
task_index: int,
was_replayed: bool,
) -> Optional[TaskOutput]:
# Process any pending async tasks to ensure we have the most up-to-date context
if futures:
processed_outputs = self._process_async_tasks(futures, was_replayed)
task_outputs.extend(processed_outputs)
task_outputs = self._process_async_tasks(futures, was_replayed)
futures.clear()
# Get the previous output to evaluate the condition
previous_output = task_outputs[-1] if task_outputs else None
# If there's no previous output or the condition evaluates to False, skip the task
if previous_output is None or not task.should_execute(previous_output):
if previous_output is not None and not task.should_execute(previous_output):
self._logger.log(
"debug",
f"Skipping conditional task: {task.description}",
@@ -872,13 +861,8 @@ class Crew(BaseModel):
)
skipped_task_output = task.get_skipped_task_output()
# Store the execution log for the skipped task
if not was_replayed:
self._store_execution_log(task, skipped_task_output, task_index)
# Set the output on the task itself so it can be referenced later
task.output = skipped_task_output
return skipped_task_output
return None

View File

@@ -1,50 +0,0 @@
from functools import wraps
from typing import Any, Callable, Optional, Union, cast
from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.task_output import TaskOutput
def task(func: Callable) -> Callable:
"""
Decorator for Flow methods that return a Task.
This decorator ensures that when a method returns a ConditionalTask,
the condition is properly evaluated based on the previous task's output.
Args:
func: The method to decorate
Returns:
The decorated method
"""
setattr(func, "is_task", True)
@wraps(func)
def wrapper(self, *args, **kwargs):
result = func(self, *args, **kwargs)
# Set the task name if not already set
if hasattr(result, "name") and not result.name:
result.name = func.__name__
# If this is a ConditionalTask, ensure it has a valid condition
if isinstance(result, ConditionalTask):
# If the condition is a boolean, wrap it in a function
if isinstance(result.condition, bool):
bool_value = result.condition
result.condition = lambda _: bool_value
# Get the previous task output if available
previous_outputs = getattr(self, "_method_outputs", [])
previous_output = previous_outputs[-1] if previous_outputs else None
# If there's a previous output and it's a TaskOutput, check if we should execute
if previous_output and isinstance(previous_output, TaskOutput):
if not result.should_execute(previous_output):
# Return a skipped task output instead of the task
return result.get_skipped_task_output()
return result
return wrapper

View File

@@ -6,11 +6,18 @@ import os
import shutil
from typing import Any, Dict, List, Optional, Union, cast
import chromadb
import chromadb.errors
from chromadb.api import ClientAPI
from chromadb.api.types import OneOrMany
from chromadb.config import Settings
try:
import chromadb
import chromadb.errors
from chromadb.api import ClientAPI
from chromadb.api.types import OneOrMany
from chromadb.config import Settings
CHROMADB_AVAILABLE = True
except ImportError:
CHROMADB_AVAILABLE = False
# Define placeholder types for type checking
ClientAPI = Any
OneOrMany = Any
from crewai.knowledge.storage.base_knowledge_storage import BaseKnowledgeStorage
from crewai.utilities import EmbeddingConfigurator
@@ -42,9 +49,10 @@ class KnowledgeStorage(BaseKnowledgeStorage):
search efficiency.
"""
collection: Optional[chromadb.Collection] = None
collection: Optional[Any] = None # Will be chromadb.Collection when available
collection_name: Optional[str] = "knowledge"
app: Optional[ClientAPI] = None
app: Optional[Any] = None # Will be ClientAPI when available
embedder: Optional[Any] = None
def __init__(
self,
@@ -61,63 +69,98 @@ class KnowledgeStorage(BaseKnowledgeStorage):
filter: Optional[dict] = None,
score_threshold: float = 0.35,
) -> List[Dict[str, Any]]:
with suppress_logging():
if self.collection:
fetched = self.collection.query(
query_texts=query,
n_results=limit,
where=filter,
)
results = []
for i in range(len(fetched["ids"][0])): # type: ignore
result = {
"id": fetched["ids"][0][i], # type: ignore
"metadata": fetched["metadatas"][0][i], # type: ignore
"context": fetched["documents"][0][i], # type: ignore
"score": fetched["distances"][0][i], # type: ignore
}
if result["score"] >= score_threshold:
results.append(result)
return results
else:
raise Exception("Collection not initialized")
if not CHROMADB_AVAILABLE:
logging.warning(
"ChromaDB is not installed. Search functionality limited. "
"Install with 'pip install crewai[chromadb]' to enable full functionality."
)
return []
try:
with suppress_logging():
if self.collection:
fetched = self.collection.query(
query_texts=query,
n_results=limit,
where=filter,
)
results = []
for i in range(len(fetched["ids"][0])): # type: ignore
result = {
"id": fetched["ids"][0][i], # type: ignore
"metadata": fetched["metadatas"][0][i], # type: ignore
"context": fetched["documents"][0][i], # type: ignore
"score": fetched["distances"][0][i], # type: ignore
}
if result["score"] >= score_threshold:
results.append(result)
return results
else:
return []
except Exception as e:  # Exception already covers ImportError/NameError/AttributeError
logging.error(f"Error during knowledge search operation: {str(e)}")
return []
def initialize_knowledge_storage(self):
base_path = os.path.join(db_storage_path(), "knowledge")
chroma_client = chromadb.PersistentClient(
path=base_path,
settings=Settings(allow_reset=True),
)
self.app = chroma_client
try:
collection_name = (
f"knowledge_{self.collection_name}"
if self.collection_name
else "knowledge"
if not CHROMADB_AVAILABLE:
import logging
logging.warning(
"ChromaDB is not installed. Knowledge storage functionality will be limited. "
"Install with 'pip install crewai[chromadb]' to enable full functionality."
)
if self.app:
self.collection = self.app.get_or_create_collection(
name=collection_name, embedding_function=self.embedder
)
else:
raise Exception("Vector Database Client not initialized")
except Exception:
raise Exception("Failed to create or get collection")
def reset(self):
base_path = os.path.join(db_storage_path(), KNOWLEDGE_DIRECTORY)
if not self.app:
self.app = chromadb.PersistentClient(
self.app = None
self.collection = None
return
try:
base_path = os.path.join(db_storage_path(), "knowledge")
chroma_client = chromadb.PersistentClient(
path=base_path,
settings=Settings(allow_reset=True),
)
self.app.reset()
shutil.rmtree(base_path)
self.app = None
self.collection = None
self.app = chroma_client
try:
collection_name = (
f"knowledge_{self.collection_name}"
if self.collection_name
else "knowledge"
)
if self.app:
self.collection = self.app.get_or_create_collection(
name=collection_name, embedding_function=self.embedder
)
else:
raise Exception("Vector Database Client not initialized")
except Exception:
raise Exception("Failed to create or get collection")
except Exception:
logging.warning(
"Error initializing ChromaDB. Knowledge storage functionality will be limited."
)
self.app = None
self.collection = None
def reset(self):
base_path = os.path.join(db_storage_path(), KNOWLEDGE_DIRECTORY)
try:
if not self.app:
self.app = chromadb.PersistentClient(
path=base_path,
settings=Settings(allow_reset=True),
)
self.app.reset()
shutil.rmtree(base_path)
self.app = None
self.collection = None
except (ImportError, NameError, AttributeError):
# Handle case when chromadb is not available
if os.path.exists(base_path):
shutil.rmtree(base_path)
self.app = None
self.collection = None
def save(
self,
@@ -125,7 +168,8 @@ class KnowledgeStorage(BaseKnowledgeStorage):
metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None,
):
if not self.collection:
raise Exception("Collection not initialized")
# Just return silently if chromadb is not available
return
try:
# Create a dictionary to store unique documents
@@ -154,7 +198,7 @@ class KnowledgeStorage(BaseKnowledgeStorage):
filtered_ids.append(doc_id)
# If we have no metadata at all, set it to None
final_metadata: Optional[OneOrMany[chromadb.Metadata]] = (
final_metadata = (
None if all(m is None for m in filtered_metadata) else filtered_metadata
)
@@ -163,29 +207,40 @@ class KnowledgeStorage(BaseKnowledgeStorage):
metadatas=final_metadata,
ids=filtered_ids,
)
except chromadb.errors.InvalidDimensionException as e:
Logger(verbose=True).log(
"error",
"Embedding dimension mismatch. This usually happens when mixing different embedding models. Try resetting the collection using `crewai reset-memories -a`",
"red",
)
raise ValueError(
"Embedding dimension mismatch. Make sure you're using the same embedding model "
"across all operations with this collection."
"Try resetting the collection using `crewai reset-memories -a`"
) from e
except (ImportError, NameError, AttributeError) as e:
# Handle case when chromadb is not available
return
except Exception as e:
if "chromadb" in str(e.__class__):
# Handle chromadb-specific errors silently when chromadb might not be fully available
return
Logger(verbose=True).log("error", f"Failed to upsert documents: {e}", "red")
raise
# Don't raise the exception, just log it and continue
return
def _create_default_embedding_function(self):
from chromadb.utils.embedding_functions.openai_embedding_function import (
OpenAIEmbeddingFunction,
)
try:
from chromadb.utils.embedding_functions.openai_embedding_function import (
OpenAIEmbeddingFunction,
)
return OpenAIEmbeddingFunction(
api_key=os.getenv("OPENAI_API_KEY"), model_name="text-embedding-3-small"
)
api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
logging.warning(
"OPENAI_API_KEY not found. Vector operations will be limited."
)
return None
return OpenAIEmbeddingFunction(
api_key=api_key, model_name="text-embedding-3-small"
)
except ImportError:
import logging
logging.warning(
"ChromaDB is not installed. Cannot create default embedding function. "
"Install with 'pip install crewai[chromadb]' to enable full functionality."
)
return None
def _set_embedder_config(self, embedder: Optional[Dict[str, Any]] = None) -> None:
"""Set the embedding configuration for the knowledge storage.
@@ -194,8 +249,12 @@ class KnowledgeStorage(BaseKnowledgeStorage):
embedder_config (Optional[Dict[str, Any]]): Configuration dictionary for the embedder.
If None or empty, defaults to the default embedding function.
"""
self.embedder = (
EmbeddingConfigurator().configure_embedder(embedder)
if embedder
else self._create_default_embedding_function()
)
try:
self.embedder = (
EmbeddingConfigurator().configure_embedder(embedder)
if embedder
else self._create_default_embedding_function()
)
except (ImportError, NameError, AttributeError):
# Handle case when chromadb is not available
self.embedder = None
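The guarded-import pattern this file now uses throughout can be reduced to a self-contained sketch (the warning text paraphrases the diff, and `search` is a simplified stand-in for the storage method, not the real signature):

```python
import logging
from typing import Any

# Try the optional dependency; fall back to placeholders if it is absent.
try:
    import chromadb  # optional extra: `pip install crewai[chromadb]`
    CHROMADB_AVAILABLE = True
except ImportError:
    CHROMADB_AVAILABLE = False
    ClientAPI = Any  # placeholder so type annotations still resolve

def search(collection, query):
    """Degrade gracefully instead of raising when chromadb is missing."""
    if not CHROMADB_AVAILABLE:
        logging.warning("ChromaDB is not installed; returning empty results.")
        return []
    return collection.query(query_texts=[query])
```

The placeholder types keep annotations importable on platforms (like Alpine) where chromadb cannot be built, while every public method short-circuits to a harmless no-op result.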

View File

@@ -114,6 +114,19 @@ LLM_CONTEXT_WINDOW_SIZES = {
"Llama-3.2-11B-Vision-Instruct": 16384,
"Meta-Llama-3.2-3B-Instruct": 4096,
"Meta-Llama-3.2-1B-Instruct": 16384,
# mistral
"mistral-tiny": 32768,
"mistral-small-latest": 32768,
"mistral-medium-latest": 32768,
"mistral-large-latest": 32768,
"mistral-large-2407": 32768,
"mistral-large-2402": 32768,
"mistral/mistral-tiny": 32768,
"mistral/mistral-small-latest": 32768,
"mistral/mistral-medium-latest": 32768,
"mistral/mistral-large-latest": 32768,
"mistral/mistral-large-2407": 32768,
"mistral/mistral-large-2402": 32768,
}
DEFAULT_CONTEXT_WINDOW_SIZE = 8192
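The lookup these new Mistral entries feed into can be sketched as a plain dict-with-default (a simplified exact-match version with a two-entry subset of the table; the real matching logic in `LLM` may differ):

```python
# Subset of the table above; values mirror the diff.
LLM_CONTEXT_WINDOW_SIZES = {
    "mistral-small-latest": 32768,
    "mistral/mistral-large-latest": 32768,
}
DEFAULT_CONTEXT_WINDOW_SIZE = 8192

def get_context_window_size(model: str) -> int:
    # Fall back to the default for models not present in the table.
    return LLM_CONTEXT_WINDOW_SIZES.get(model, DEFAULT_CONTEXT_WINDOW_SIZE)
```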
@@ -789,6 +802,17 @@ class LLM:
formatted_messages.append(msg)
return formatted_messages
# Handle Mistral models - they require the last message to have a role of 'user' or 'tool'
if "mistral" in self.model.lower():
# Check if the last message has a role of 'assistant'
if messages and messages[-1]["role"] == "assistant":
# Add a dummy user message to ensure the last message has a role of 'user'
messages = (
messages.copy()
) # Create a copy to avoid modifying the original
messages.append({"role": "user", "content": "Please continue."})
return messages
# Handle Anthropic models
if not self.is_anthropic:
return messages
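In isolation, the Mistral guard above boils down to this sketch (`ensure_user_last` is a hypothetical helper name, not part of the crewai API):

```python
def ensure_user_last(messages: list) -> list:
    """Mistral's API requires the final message role to be 'user' or 'tool';
    append a placeholder user turn when the conversation ends with 'assistant'."""
    if messages and messages[-1]["role"] == "assistant":
        messages = messages.copy()  # avoid mutating the caller's list
        messages.append({"role": "user", "content": "Please continue."})
    return messages
```

Copying before appending keeps the caller's message history untouched, matching the comment in the diff.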

View File

@@ -4,9 +4,15 @@ import logging
import os
import shutil
import uuid
from typing import Any, Dict, List, Optional
from typing import Any, Dict, List, Optional, cast
from chromadb.api import ClientAPI
try:
from chromadb.api import ClientAPI
CHROMADB_AVAILABLE = True
except ImportError:
CHROMADB_AVAILABLE = False
# Define placeholder type for type checking
ClientAPI = Any
from crewai.memory.storage.base_rag_storage import BaseRAGStorage
from crewai.utilities import EmbeddingConfigurator
@@ -37,7 +43,7 @@ class RAGStorage(BaseRAGStorage):
search efficiency.
"""
app: ClientAPI | None = None
app = None # Type will be ClientAPI when available
def __init__(
self, type, allow_reset=True, embedder_config=None, crew=None, path=None
@@ -60,26 +66,41 @@ class RAGStorage(BaseRAGStorage):
self.embedder_config = configurator.configure_embedder(self.embedder_config)
def _initialize_app(self):
import chromadb
from chromadb.config import Settings
self._set_embedder_config()
chroma_client = chromadb.PersistentClient(
path=self.path if self.path else self.storage_file_name,
settings=Settings(allow_reset=self.allow_reset),
)
self.app = chroma_client
if not CHROMADB_AVAILABLE:
logging.warning(
"ChromaDB is not installed. RAG storage functionality will be limited. "
"Install with 'pip install crewai[chromadb]' to enable full functionality."
)
self.app = None
self.collection = None
return
try:
self.collection = self.app.get_collection(
name=self.type, embedding_function=self.embedder_config
)
except Exception:
self.collection = self.app.create_collection(
name=self.type, embedding_function=self.embedder_config
import chromadb
from chromadb.config import Settings
self._set_embedder_config()
chroma_client = chromadb.PersistentClient(
path=self.path if self.path else self.storage_file_name,
settings=Settings(allow_reset=self.allow_reset),
)
self.app = chroma_client
try:
self.collection = self.app.get_collection(
name=self.type, embedding_function=self.embedder_config
)
except Exception as e:
logging.info(f"Collection not found, creating new one: {str(e)}")
self.collection = self.app.create_collection(
name=self.type, embedding_function=self.embedder_config
)
except Exception as e:
logging.error(f"Failed to initialize ChromaDB: {str(e)}")
self.app = None
self.collection = None
def _sanitize_role(self, role: str) -> str:
"""
Sanitizes agent roles to ensure valid directory names.
@@ -103,6 +124,10 @@ class RAGStorage(BaseRAGStorage):
def save(self, value: Any, metadata: Dict[str, Any]) -> None:
if not hasattr(self, "app") or not hasattr(self, "collection"):
self._initialize_app()
if not self.collection:
return
try:
self._generate_embedding(value, metadata)
except Exception as e:
@@ -115,8 +140,18 @@ class RAGStorage(BaseRAGStorage):
filter: Optional[dict] = None,
score_threshold: float = 0.35,
) -> List[Any]:
if not CHROMADB_AVAILABLE:
logging.warning(
"ChromaDB is not installed. Search functionality limited. "
"Install with 'pip install crewai[chromadb]' to enable full functionality."
)
return []
if not hasattr(self, "app"):
self._initialize_app()
if not self.collection:
return []
try:
with suppress_logging():
@@ -141,6 +176,9 @@ class RAGStorage(BaseRAGStorage):
def _generate_embedding(self, text: str, metadata: Dict[str, Any]) -> None: # type: ignore
if not hasattr(self, "app") or not hasattr(self, "collection"):
self._initialize_app()
if not self.collection:
return
self.collection.add(
documents=[text],
@@ -149,26 +187,44 @@ class RAGStorage(BaseRAGStorage):
)
def reset(self) -> None:
if not self.app:
return
try:
if self.app:
self.app.reset()
shutil.rmtree(f"{db_storage_path()}/{self.type}")
self.app = None
self.collection = None
self.app.reset()
path = f"{db_storage_path()}/{self.type}"
if os.path.exists(path):
shutil.rmtree(path)
self.app = None
self.collection = None
except Exception as e:
if "attempt to write a readonly database" in str(e):
# Ignore this specific error
pass
else:
raise Exception(
f"An error occurred while resetting the {self.type} memory: {e}"
)
logging.error(f"Error during {self.type} reset: {str(e)}")
# Don't raise the exception, just log it
def _create_default_embedding_function(self):
from chromadb.utils.embedding_functions.openai_embedding_function import (
OpenAIEmbeddingFunction,
)
try:
from chromadb.utils.embedding_functions.openai_embedding_function import (
OpenAIEmbeddingFunction,
)
return OpenAIEmbeddingFunction(
api_key=os.getenv("OPENAI_API_KEY"), model_name="text-embedding-3-small"
)
api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
logging.warning(
"OPENAI_API_KEY not found. Vector operations will be limited."
)
return None
return OpenAIEmbeddingFunction(
api_key=api_key, model_name="text-embedding-3-small"
)
except ImportError:
import logging
logging.warning(
"ChromaDB is not installed. Cannot create default embedding function. "
"Install with 'pip install crewai[chromadb]' to enable full functionality."
)
return None

View File

@@ -1,10 +1,8 @@
from functools import wraps
from typing import Any, Callable, Optional, Union, cast
from typing import Callable
from crewai import Crew
from crewai.project.utils import memoize
from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.task_output import TaskOutput
"""Decorators for defining crew components and their behaviors."""
@@ -23,35 +21,13 @@ def after_kickoff(func):
def task(func):
"""Marks a method as a crew task."""
setattr(func, "is_task", True)
func.is_task = True
@wraps(func)
def wrapper(*args, **kwargs):
result = func(*args, **kwargs)
# Set the task name if not already set
if hasattr(result, "name") and not result.name:
if not result.name:
result.name = func.__name__
# If this is a ConditionalTask, ensure it has a valid condition
if isinstance(result, ConditionalTask):
# If the condition is a boolean, wrap it in a function
if isinstance(result.condition, bool):
bool_value = result.condition
result.condition = lambda _: bool_value
# Get the previous task output if available
self = args[0] if args else None
if self and hasattr(self, "_method_outputs"):
previous_outputs = getattr(self, "_method_outputs", [])
previous_output = previous_outputs[-1] if previous_outputs else None
# If there's a previous output and it's a TaskOutput, check if we should execute
if previous_output and isinstance(previous_output, TaskOutput):
if not result.should_execute(previous_output):
# Return a skipped task output instead of the task
return result.get_skipped_task_output()
return result
return memoize(wrapper)

View File

@@ -1,4 +1,4 @@
from typing import Any, Callable, Union, cast
from typing import Any, Callable
from pydantic import Field
@@ -14,23 +14,17 @@ class ConditionalTask(Task):
"""
condition: Callable[[TaskOutput], bool] = Field(
default=lambda _: True, # Default to always execute
description="Function that determines whether the task should be executed or a boolean value.",
default=None,
description="Function that determines whether the task should be executed.",
)
def __init__(
self,
condition: Union[Callable[[Any], bool], bool],
condition: Callable[[Any], bool],
**kwargs,
):
super().__init__(**kwargs)
# If condition is a boolean, wrap it in a function that always returns that boolean
if isinstance(condition, bool):
bool_value = condition
self.condition = lambda _: bool_value
else:
self.condition = cast(Callable[[TaskOutput], bool], condition)
self.condition = condition
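The boolean-wrapping logic that the `@task` decorator keeps (and this constructor now drops) can be sketched standalone (`normalize_condition` is an illustrative name, not part of the crewai API):

```python
from typing import Any, Callable, Union

def normalize_condition(
    condition: Union[Callable[[Any], bool], bool],
) -> Callable[[Any], bool]:
    """A plain bool becomes a callable that ignores the previous task output."""
    if isinstance(condition, bool):
        value = condition  # bind the value now so the lambda captures it
        return lambda _: value
    return condition
```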
def should_execute(self, context: TaskOutput) -> bool:
"""

View File

@@ -1,8 +1,17 @@
import os
from typing import Any, Dict, Optional, cast
from chromadb import Documents, EmbeddingFunction, Embeddings
from chromadb.api.types import validate_embedding_function
try:
from chromadb import Documents, EmbeddingFunction, Embeddings
from chromadb.api.types import validate_embedding_function
CHROMADB_AVAILABLE = True
except ImportError:
CHROMADB_AVAILABLE = False
# Define placeholder types for type checking
Documents = Any
EmbeddingFunction = Any
Embeddings = Any
def validate_embedding_function(func): return func
class EmbeddingConfigurator:
@@ -47,6 +56,14 @@ class EmbeddingConfigurator:
@staticmethod
def _create_default_embedding_function():
if not CHROMADB_AVAILABLE:
import logging
logging.warning(
"ChromaDB is not installed. Cannot create default embedding function. "
"Install with 'pip install crewai[chromadb]' to enable full functionality."
)
return None
from chromadb.utils.embedding_functions.openai_embedding_function import (
OpenAIEmbeddingFunction,
)

View File

@@ -43,8 +43,8 @@ def create_llm(
try:
# Extract attributes with explicit types
model = (
getattr(llm_value, "model_name", None)
or getattr(llm_value, "model", None)
getattr(llm_value, "model", None)
or getattr(llm_value, "model_name", None)
or getattr(llm_value, "deployment_name", None)
or str(llm_value)
)
@@ -77,8 +77,9 @@ def _llm_via_environment_or_fallback() -> Optional[LLM]:
Helper function: if llm_value is None, we load environment variables or fallback default model.
"""
model_name = (
os.environ.get("OPENAI_MODEL_NAME")
or os.environ.get("MODEL")
os.environ.get("MODEL")
or os.environ.get("MODEL_NAME")
or os.environ.get("OPENAI_MODEL_NAME")
or DEFAULT_LLM_MODEL
)
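The environment-variable fallback order after this change can be sketched with a plain dict standing in for `os.environ` (`resolve_model_name` and the default value here are illustrative):

```python
DEFAULT_LLM_MODEL = "gpt-4o-mini"  # illustrative default, not necessarily CrewAI's

def resolve_model_name(env: dict) -> str:
    """MODEL wins over MODEL_NAME, which wins over OPENAI_MODEL_NAME,
    then the hard-coded default."""
    return (
        env.get("MODEL")
        or env.get("MODEL_NAME")
        or env.get("OPENAI_MODEL_NAME")
        or DEFAULT_LLM_MODEL
    )
```

Note the reordering: before this change `OPENAI_MODEL_NAME` took precedence; now the provider-neutral `MODEL` does.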

View File

@@ -1,190 +0,0 @@
from unittest.mock import MagicMock, patch
import pytest
from crewai import Agent, Crew, Task
from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.task_output import TaskOutput
# Create mock agents for testing
researcher = Agent(
role="Researcher",
goal="Research information",
backstory="You are a researcher with expertise in finding information.",
)
writer = Agent(
role="Writer",
goal="Write content",
backstory="You are a writer with expertise in creating engaging content.",
)
def test_conditional_task_with_boolean_false():
"""Test that a conditional task with a boolean False condition is skipped."""
task1 = Task(
description="Initial task",
expected_output="Initial output",
agent=researcher,
)
# Use a boolean False directly as the condition
task2 = ConditionalTask(
description="Conditional task that should be skipped",
expected_output="This should not be executed",
agent=writer,
condition=False,
)
crew = Crew(
agents=[researcher, writer],
tasks=[task1, task2],
)
with patch.object(Task, "execute_sync") as mock_execute_sync:
mock_execute_sync.return_value = TaskOutput(
description="Task 1 description",
raw="Task 1 output",
agent="Researcher",
)
result = crew.kickoff()
# Only the first task should be executed
assert mock_execute_sync.call_count == 1
# The conditional task should be skipped
assert task2.output is not None
assert task2.output.raw == ""
# The final output should be from the first task
assert result.raw.startswith("Task 1 output")
def test_conditional_task_with_boolean_true():
"""Test that a conditional task with a boolean True condition is executed."""
task1 = Task(
description="Initial task",
expected_output="Initial output",
agent=researcher,
)
# Use a boolean True directly as the condition
task2 = ConditionalTask(
description="Conditional task that should be executed",
expected_output="This should be executed",
agent=writer,
condition=True,
)
crew = Crew(
agents=[researcher, writer],
tasks=[task1, task2],
)
with patch.object(Task, "execute_sync") as mock_execute_sync:
mock_execute_sync.return_value = TaskOutput(
description="Task output",
raw="Task output",
agent="Agent",
)
crew.kickoff()
# Both tasks should be executed
assert mock_execute_sync.call_count == 2
def test_multiple_sequential_conditional_tasks():
"""Test that multiple conditional tasks in sequence work correctly."""
task1 = Task(
description="Initial task",
expected_output="Initial output",
agent=researcher,
)
# First conditional task (will be executed)
task2 = ConditionalTask(
description="First conditional task",
expected_output="First conditional output",
agent=writer,
condition=True,
)
# Second conditional task (will be skipped)
task3 = ConditionalTask(
description="Second conditional task",
expected_output="Second conditional output",
agent=researcher,
condition=False,
)
# Third conditional task (will be executed)
task4 = ConditionalTask(
description="Third conditional task",
expected_output="Third conditional output",
agent=writer,
condition=True,
)
crew = Crew(
agents=[researcher, writer],
tasks=[task1, task2, task3, task4],
)
with patch.object(Task, "execute_sync") as mock_execute_sync:
mock_execute_sync.return_value = TaskOutput(
description="Task output",
raw="Task output",
agent="Agent",
)
result = crew.kickoff()
# Tasks 1, 2, and 4 should be executed (task 3 is skipped)
assert mock_execute_sync.call_count == 3
# Task 3 should be skipped
assert task3.output is not None
assert task3.output.raw == ""
def test_last_task_conditional():
"""Test that a conditional task at the end of the task list works correctly."""
task1 = Task(
description="Initial task",
expected_output="Initial output",
agent=researcher,
)
# Last task is conditional and will be skipped
task2 = ConditionalTask(
description="Last conditional task",
expected_output="Last conditional output",
agent=writer,
condition=False,
)
crew = Crew(
agents=[researcher, writer],
tasks=[task1, task2],
)
with patch.object(Task, "execute_sync") as mock_execute_sync:
mock_execute_sync.return_value = TaskOutput(
description="Task 1 output",
raw="Task 1 output",
agent="Researcher",
)
result = crew.kickoff()
# Only the first task should be executed
assert mock_execute_sync.call_count == 1
# The conditional task should be skipped
assert task2.output is not None
assert task2.output.raw == ""
# The final output should be from the first task
assert result.raw.startswith("Task 1 output")

View File

@@ -1,152 +0,0 @@
from unittest.mock import MagicMock, patch
import pytest
from crewai import Agent, Task
from crewai.flow import Flow, listen, start
from crewai.project.annotations import task
from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.task_output import TaskOutput
# Create mock agents for testing
researcher = Agent(
role="Researcher",
goal="Research information",
backstory="You are a researcher with expertise in finding information.",
)
writer = Agent(
role="Writer",
goal="Write content",
backstory="You are a writer with expertise in creating engaging content.",
)
class TestFlowWithConditionalTasks(Flow):
"""Test flow with conditional tasks."""
@start()
@task
def initial_task(self):
"""Initial task that always executes."""
return Task(
description="Initial task",
expected_output="Initial output",
agent=researcher,
)
@listen(initial_task)
@task
def conditional_task_false(self):
"""Conditional task that should be skipped."""
return ConditionalTask(
description="Conditional task that should be skipped",
expected_output="This should not be executed",
agent=writer,
condition=False,
)
@listen(initial_task)
@task
def conditional_task_true(self):
"""Conditional task that should be executed."""
return ConditionalTask(
description="Conditional task that should be executed",
expected_output="This should be executed",
agent=writer,
condition=True,
)
@listen(conditional_task_true)
@task
def final_task(self):
"""Final task that executes after the conditional task."""
return Task(
description="Final task",
expected_output="Final output",
agent=researcher,
)
def test_flow_with_conditional_tasks():
"""Test that conditional tasks work correctly in a Flow."""
flow = TestFlowWithConditionalTasks()
with patch.object(Task, "execute_sync") as mock_execute_sync:
mock_execute_sync.return_value = TaskOutput(
description="Task output",
raw="Task output",
agent="Agent",
)
flow.kickoff()
# The initial task, conditional_task_true, and final_task should be executed
# conditional_task_false should be skipped
assert mock_execute_sync.call_count == 3
class TestFlowWithSequentialConditionalTasks(Flow):
"""Test flow with sequential conditional tasks."""
@start()
@task
def initial_task(self):
"""Initial task that always executes."""
return Task(
description="Initial task",
expected_output="Initial output",
agent=researcher,
)
@listen(initial_task)
@task
def conditional_task_1(self):
"""First conditional task that should be executed."""
return ConditionalTask(
description="First conditional task",
expected_output="First conditional output",
agent=writer,
condition=True,
)
@listen(conditional_task_1)
@task
def conditional_task_2(self):
"""Second conditional task that should be skipped."""
return ConditionalTask(
description="Second conditional task",
expected_output="Second conditional output",
agent=researcher,
condition=False,
)
@listen(conditional_task_2)
@task
def conditional_task_3(self):
"""Third conditional task that should be executed."""
return ConditionalTask(
description="Third conditional task",
expected_output="Third conditional output",
agent=writer,
condition=True,
)
def test_flow_with_sequential_conditional_tasks():
"""Test that sequential conditional tasks work correctly in a Flow."""
flow = TestFlowWithSequentialConditionalTasks()
with patch.object(Task, "execute_sync") as mock_execute_sync:
mock_execute_sync.return_value = TaskOutput(
description="Task output",
raw="Task output",
agent="Agent",
)
flow.kickoff()
# The initial_task and conditional_task_1 should be executed
# conditional_task_2 should be skipped, and since it's skipped,
# conditional_task_3 should not be triggered
assert mock_execute_sync.call_count == 2

View File

@@ -0,0 +1,52 @@
import importlib
import sys
from unittest import mock
import pytest
def test_rag_storage_without_chromadb():
# Mock the import to simulate chromadb not being installed
with mock.patch.dict(sys.modules, {'chromadb': None}):
# Force reload to ensure our mock takes effect
if 'crewai.memory.storage.rag_storage' in sys.modules:
importlib.reload(sys.modules['crewai.memory.storage.rag_storage'])
# Now import and test
from crewai.memory.storage.rag_storage import RAGStorage
# Should not raise an exception
storage = RAGStorage(type="test", allow_reset=True)
# Methods should handle the case when chromadb is not available
assert storage.app is None
assert storage.collection is None
# These methods should not raise exceptions
storage.save("test", {})
results = storage.search("test")
assert results == []
storage.reset()
def test_knowledge_storage_without_chromadb():
# Mock the import to simulate chromadb not being installed
with mock.patch.dict(sys.modules, {'chromadb': None}):
# Force reload to ensure our mock takes effect
if 'crewai.knowledge.storage.knowledge_storage' in sys.modules:
importlib.reload(sys.modules['crewai.knowledge.storage.knowledge_storage'])
# Now import and test
from crewai.knowledge.storage.knowledge_storage import KnowledgeStorage
# Should not raise an exception
storage = KnowledgeStorage()
# Methods should handle the case when chromadb is not available
assert storage.app is None
assert storage.collection is None
# These methods should not raise exceptions
storage.initialize_knowledge_storage()
results = storage.search(["test"])
assert results == []
storage.reset()
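The mocking trick both tests rely on: `mock.patch.dict(sys.modules, {name: None})` makes a subsequent `import name` raise ImportError, because the import system treats a `None` entry in `sys.modules` as a halted import. A minimal demonstration with an arbitrary module name:

```python
import sys
from unittest import mock

# Any name works; a None entry in sys.modules short-circuits the import.
with mock.patch.dict(sys.modules, {"some_optional_dep": None}):
    try:
        import some_optional_dep  # noqa: F401
        raised = False
    except ImportError:
        raised = True
```

`patch.dict` restores `sys.modules` on exit, so the simulation is scoped to the test body.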

uv.lock (generated, 113 changed lines)
View File

@@ -1,42 +1,19 @@
version = 1
requires-python = ">=3.10, <3.13"
resolution-markers = [
"python_full_version < '3.11' and platform_system == 'Darwin' and sys_platform == 'darwin'",
"python_full_version < '3.11' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform == 'darwin'",
"(python_full_version < '3.11' and platform_machine != 'aarch64' and platform_system != 'Darwin' and sys_platform == 'darwin') or (python_full_version < '3.11' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform == 'darwin')",
"python_full_version < '3.11' and platform_machine == 'aarch64' and platform_system == 'Darwin' and sys_platform == 'linux'",
"python_full_version < '3.11' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform == 'linux'",
"python_full_version < '3.11' and platform_machine == 'aarch64' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform == 'linux'",
"(python_full_version < '3.11' and platform_machine != 'aarch64' and platform_system == 'Darwin' and sys_platform != 'darwin') or (python_full_version < '3.11' and platform_system == 'Darwin' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version < '3.11' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform != 'darwin' and sys_platform != 'linux'",
"(python_full_version < '3.11' and platform_machine != 'aarch64' and platform_system != 'Darwin' and sys_platform != 'darwin') or (python_full_version < '3.11' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version == '3.11.*' and platform_system == 'Darwin' and sys_platform == 'darwin'",
"python_full_version == '3.11.*' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform == 'darwin'",
"(python_full_version == '3.11.*' and platform_machine != 'aarch64' and platform_system != 'Darwin' and sys_platform == 'darwin') or (python_full_version == '3.11.*' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform == 'darwin')",
"python_full_version == '3.11.*' and platform_machine == 'aarch64' and platform_system == 'Darwin' and sys_platform == 'linux'",
"python_full_version == '3.11.*' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform == 'linux'",
"python_full_version == '3.11.*' and platform_machine == 'aarch64' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform == 'linux'",
"(python_full_version == '3.11.*' and platform_machine != 'aarch64' and platform_system == 'Darwin' and sys_platform != 'darwin') or (python_full_version == '3.11.*' and platform_system == 'Darwin' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version == '3.11.*' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform != 'darwin' and sys_platform != 'linux'",
"(python_full_version == '3.11.*' and platform_machine != 'aarch64' and platform_system != 'Darwin' and sys_platform != 'darwin') or (python_full_version == '3.11.*' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_system == 'Darwin' and sys_platform == 'darwin'",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform == 'darwin'",
"(python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine != 'aarch64' and platform_system != 'Darwin' and sys_platform == 'darwin') or (python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform == 'darwin')",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine == 'aarch64' and platform_system == 'Darwin' and sys_platform == 'linux'",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform == 'linux'",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine == 'aarch64' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform == 'linux'",
"(python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine != 'aarch64' and platform_system == 'Darwin' and sys_platform != 'darwin') or (python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_system == 'Darwin' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform != 'darwin' and sys_platform != 'linux'",
"(python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine != 'aarch64' and platform_system != 'Darwin' and sys_platform != 'darwin') or (python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12.4' and platform_system == 'Darwin' and sys_platform == 'darwin'",
"python_full_version >= '3.12.4' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform == 'darwin'",
"(python_full_version >= '3.12.4' and platform_machine != 'aarch64' and platform_system != 'Darwin' and sys_platform == 'darwin') or (python_full_version >= '3.12.4' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform == 'darwin')",
"python_full_version >= '3.12.4' and platform_machine == 'aarch64' and platform_system == 'Darwin' and sys_platform == 'linux'",
"python_full_version >= '3.12.4' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform == 'linux'",
"python_full_version >= '3.12.4' and platform_machine == 'aarch64' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform == 'linux'",
"(python_full_version >= '3.12.4' and platform_machine != 'aarch64' and platform_system == 'Darwin' and sys_platform != 'darwin') or (python_full_version >= '3.12.4' and platform_system == 'Darwin' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12.4' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform != 'darwin' and sys_platform != 'linux'",
"(python_full_version >= '3.12.4' and platform_machine != 'aarch64' and platform_system != 'Darwin' and sys_platform != 'darwin') or (python_full_version >= '3.12.4' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version < '3.11' and sys_platform == 'darwin'",
"python_version < '0'",
"python_full_version < '3.11' and platform_machine == 'aarch64' and sys_platform == 'linux'",
"(python_full_version < '3.11' and platform_machine != 'aarch64' and sys_platform == 'linux') or (python_full_version < '3.11' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version == '3.11.*' and sys_platform == 'darwin'",
"python_full_version == '3.11.*' and platform_machine == 'aarch64' and sys_platform == 'linux'",
"(python_full_version == '3.11.*' and platform_machine != 'aarch64' and sys_platform == 'linux') or (python_full_version == '3.11.*' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and sys_platform == 'darwin'",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine == 'aarch64' and sys_platform == 'linux'",
"(python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine != 'aarch64' and sys_platform == 'linux') or (python_full_version >= '3.12' and python_full_version < '3.12.4' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12.4' and sys_platform == 'darwin'",
"python_full_version >= '3.12.4' and platform_machine == 'aarch64' and sys_platform == 'linux'",
"(python_full_version >= '3.12.4' and platform_machine != 'aarch64' and sys_platform == 'linux') or (python_full_version >= '3.12.4' and sys_platform != 'darwin' and sys_platform != 'linux')",
]
[[package]]
@@ -333,7 +310,7 @@ name = "build"
version = "1.2.2.post1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
-{ name = "colorama", marker = "os_name == 'nt'" },
+{ name = "colorama", marker = "(os_name == 'nt' and platform_machine != 'aarch64' and sys_platform == 'linux') or (os_name == 'nt' and sys_platform != 'darwin' and sys_platform != 'linux')" },
{ name = "importlib-metadata", marker = "python_full_version < '3.10.2'" },
{ name = "packaging" },
{ name = "pyproject-hooks" },
@@ -568,7 +545,7 @@ name = "click"
version = "8.1.8"
source = { registry = "https://pypi.org/simple" }
dependencies = [
-{ name = "colorama", marker = "platform_system == 'Windows'" },
+{ name = "colorama", marker = "sys_platform == 'win32'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/b9/2e/0090cbf739cee7d23781ad4b89a9894a41538e4fcf4c31dcdd705b78eb8b/click-8.1.8.tar.gz", hash = "sha256:ed53c9d8990d83c2a27deae68e4ee337473f6330c040a31d4225c9574d16096a", size = 226593 }
wheels = [
@@ -619,13 +596,12 @@ wheels = [
[[package]]
name = "crewai"
-version = "0.102.0"
+version = "0.105.0"
source = { editable = "." }
dependencies = [
{ name = "appdirs" },
{ name = "auth0-python" },
{ name = "blinker" },
-{ name = "chromadb" },
{ name = "click" },
{ name = "instructor" },
{ name = "json-repair" },
@@ -651,6 +627,9 @@ dependencies = [
agentops = [
{ name = "agentops" },
]
+chromadb = [
+    { name = "chromadb" },
+]
docling = [
{ name = "docling" },
]
@@ -701,9 +680,9 @@ requires-dist = [
{ name = "appdirs", specifier = ">=1.4.4" },
{ name = "auth0-python", specifier = ">=4.7.1" },
{ name = "blinker", specifier = ">=1.9.0" },
-{ name = "chromadb", specifier = ">=0.5.23" },
+{ name = "chromadb", marker = "extra == 'chromadb'", specifier = ">=0.5.23" },
{ name = "click", specifier = ">=8.1.7" },
-{ name = "crewai-tools", marker = "extra == 'tools'", specifier = ">=0.36.0" },
+{ name = "crewai-tools", marker = "extra == 'tools'", specifier = ">=0.37.0" },
{ name = "docling", marker = "extra == 'docling'", specifier = ">=2.12.0" },
{ name = "fastembed", marker = "extra == 'fastembed'", specifier = ">=0.4.1" },
{ name = "instructor", specifier = ">=1.3.3" },
@@ -752,7 +731,7 @@ dev = [
[[package]]
name = "crewai-tools"
-version = "0.36.0"
+version = "0.37.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "chromadb" },
@@ -767,9 +746,9 @@ dependencies = [
{ name = "pytube" },
{ name = "requests" },
]
-sdist = { url = "https://files.pythonhosted.org/packages/4d/e1/d65778cf4aea106f3f60a4208521f04bc7f1d26be4e34eeb63cae6297d50/crewai_tools-0.36.0.tar.gz", hash = "sha256:761b396ee6a4019a988720dd6a14e1409f5de9d0cdc2a8662b487d87efb1a6bf", size = 900178 }
+sdist = { url = "https://files.pythonhosted.org/packages/ef/a9/813ef7b721d11ac962c2a3cf4c98196d3ca8bca5bb0fa5e01da0af51ac23/crewai_tools-0.37.0.tar.gz", hash = "sha256:23c8428761809e30d164be32c2a02850c4648e4371e9934eb58842590bca9659", size = 722104 }
wheels = [
-{ url = "https://files.pythonhosted.org/packages/bd/b6/533632a6c2a2e623fc4a1677458aff3539413a196fb220a7fece4ead3f71/crewai_tools-0.36.0-py3-none-any.whl", hash = "sha256:dbd0d95a080acfb281e105f4376e1e98576dae6d53d94f7b883c57af893668b3", size = 545937 },
+{ url = "https://files.pythonhosted.org/packages/f4/b3/6bf9b066f628875c383689ab72d21968e1108ebece887491dbf051ee39c5/crewai_tools-0.37.0-py3-none-any.whl", hash = "sha256:df5c9efade5c1f4fcfdf6ac8af13c422be7127a3083a5cda75d8f314c652bb10", size = 548490 },
]
[[package]]
@@ -2500,7 +2479,7 @@ version = "1.6.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "click" },
-{ name = "colorama", marker = "platform_system == 'Windows'" },
+{ name = "colorama", marker = "sys_platform == 'win32'" },
{ name = "ghp-import" },
{ name = "jinja2" },
{ name = "markdown" },
@@ -2681,7 +2660,7 @@ version = "2.10.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pygments" },
-{ name = "pywin32", marker = "platform_system == 'Windows'" },
+{ name = "pywin32", marker = "sys_platform == 'win32'" },
{ name = "tqdm" },
]
sdist = { url = "https://files.pythonhosted.org/packages/3a/93/80ac75c20ce54c785648b4ed363c88f148bf22637e10c9863db4fbe73e74/mpire-2.10.2.tar.gz", hash = "sha256:f66a321e93fadff34585a4bfa05e95bd946cf714b442f51c529038eb45773d97", size = 271270 }
@@ -2928,7 +2907,7 @@ name = "nvidia-cudnn-cu12"
version = "9.1.0.70"
source = { registry = "https://pypi.org/simple" }
dependencies = [
-{ name = "nvidia-cublas-cu12", marker = "(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'linux')" },
+{ name = "nvidia-cublas-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/9f/fd/713452cd72343f682b1c7b9321e23829f00b842ceaedcda96e742ea0b0b3/nvidia_cudnn_cu12-9.1.0.70-py3-none-manylinux2014_x86_64.whl", hash = "sha256:165764f44ef8c61fcdfdfdbe769d687e06374059fbb388b6c89ecb0e28793a6f", size = 664752741 },
@@ -2955,9 +2934,9 @@ name = "nvidia-cusolver-cu12"
version = "11.4.5.107"
source = { registry = "https://pypi.org/simple" }
dependencies = [
-{ name = "nvidia-cublas-cu12", marker = "(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'linux')" },
-{ name = "nvidia-cusparse-cu12", marker = "(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'linux')" },
-{ name = "nvidia-nvjitlink-cu12", marker = "(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'linux')" },
+{ name = "nvidia-cublas-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
+{ name = "nvidia-cusparse-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
+{ name = "nvidia-nvjitlink-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/bc/1d/8de1e5c67099015c834315e333911273a8c6aaba78923dd1d1e25fc5f217/nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl", hash = "sha256:8a7ec542f0412294b15072fa7dab71d31334014a69f953004ea7a118206fe0dd", size = 124161928 },
@@ -2968,7 +2947,7 @@ name = "nvidia-cusparse-cu12"
version = "12.1.0.106"
source = { registry = "https://pypi.org/simple" }
dependencies = [
-{ name = "nvidia-nvjitlink-cu12", marker = "(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'linux')" },
+{ name = "nvidia-nvjitlink-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/65/5b/cfaeebf25cd9fdec14338ccb16f6b2c4c7fa9163aefcf057d86b9cc248bb/nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl", hash = "sha256:f3b50f42cf363f86ab21f720998517a659a48131e8d538dc02f8768237bd884c", size = 195958278 },
@@ -3506,7 +3485,7 @@ name = "portalocker"
version = "2.10.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
-{ name = "pywin32", marker = "platform_system == 'Windows'" },
+{ name = "pywin32", marker = "sys_platform == 'win32'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ed/d3/c6c64067759e87af98cc668c1cc75171347d0f1577fab7ca3749134e3cd4/portalocker-2.10.1.tar.gz", hash = "sha256:ef1bf844e878ab08aee7e40184156e1151f228f103aa5c6bd0724cc330960f8f", size = 40891 }
wheels = [
@@ -5019,19 +4998,19 @@ dependencies = [
{ name = "fsspec" },
{ name = "jinja2" },
{ name = "networkx" },
-{ name = "nvidia-cublas-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
-{ name = "nvidia-cuda-cupti-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
-{ name = "nvidia-cuda-nvrtc-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
-{ name = "nvidia-cuda-runtime-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
-{ name = "nvidia-cudnn-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
-{ name = "nvidia-cufft-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
-{ name = "nvidia-curand-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
-{ name = "nvidia-cusolver-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
-{ name = "nvidia-cusparse-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
-{ name = "nvidia-nccl-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
-{ name = "nvidia-nvtx-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
+{ name = "nvidia-cublas-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
+{ name = "nvidia-cuda-cupti-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
+{ name = "nvidia-cuda-nvrtc-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
+{ name = "nvidia-cuda-runtime-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
+{ name = "nvidia-cudnn-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
+{ name = "nvidia-cufft-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
+{ name = "nvidia-curand-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
+{ name = "nvidia-cusolver-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
+{ name = "nvidia-cusparse-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
+{ name = "nvidia-nccl-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
+{ name = "nvidia-nvtx-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "sympy" },
-{ name = "triton", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
+{ name = "triton", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "typing-extensions" },
]
wheels = [
@@ -5078,7 +5057,7 @@ name = "tqdm"
version = "4.66.5"
source = { registry = "https://pypi.org/simple" }
dependencies = [
-{ name = "colorama", marker = "platform_system == 'Windows'" },
+{ name = "colorama", marker = "sys_platform == 'win32'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/58/83/6ba9844a41128c62e810fddddd72473201f3eacde02046066142a2d96cc5/tqdm-4.66.5.tar.gz", hash = "sha256:e1020aef2e5096702d8a025ac7d16b1577279c9d63f8375b63083e9a5f0fcbad", size = 169504 }
wheels = [
@@ -5120,7 +5099,7 @@ name = "triton"
version = "3.0.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
-{ name = "filelock", marker = "(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'linux')" },
+{ name = "filelock", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/45/27/14cc3101409b9b4b9241d2ba7deaa93535a217a211c86c4cc7151fb12181/triton-3.0.0-1-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:e1efef76935b2febc365bfadf74bcb65a6f959a9872e5bddf44cc9e0adce1e1a", size = 209376304 },