Compare commits


1 Commit

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Brandon Hancock | 1eb4717352 | wip | 2025-03-07 16:39:50 -05:00 |
18 changed files with 508 additions and 190 deletions

View File

@@ -1,4 +1,4 @@
Copyright (c) 2025 crewAI, Inc.
Copyright (c) 2018 The Python Packaging Authority
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

README.md
View File

@@ -2,46 +2,21 @@
![Logo of CrewAI](./docs/crewai_logo.png)
# **CrewAI**
</div>
**CrewAI**: Production-grade framework for orchestrating sophisticated AI agent systems. From simple automations to complex real-world applications, CrewAI provides precise control and deep customization. By fostering collaborative intelligence through flexible, production-ready architecture, CrewAI empowers agents to work together seamlessly, tackling complex business challenges with predictable, consistent results.
### Fast and Flexible Multi-Agent Automation Framework
**CrewAI Enterprise**
Want to plan, build (+ no code), deploy, monitor, and iterate on your agents? See [CrewAI Enterprise](https://www.crewai.com/enterprise). Designed for complex, real-world applications, our enterprise solution offers:
CrewAI is a lean, lightning-fast Python framework built entirely from
scratch—completely **independent of LangChain or other agent frameworks**.
It empowers developers with both high-level simplicity and precise low-level
control, ideal for creating autonomous AI agents tailored to any scenario.
- **CrewAI Crews**: Optimize for autonomy and collaborative intelligence.
- **CrewAI Flows**: Enable granular, event-driven control and single LLM calls for precise task orchestration, with native support for Crews
With over 100,000 developers certified through our community courses at
[learn.crewai.com](https://learn.crewai.com), CrewAI is rapidly becoming the
standard for enterprise-ready AI automation.
# CrewAI Enterprise Suite
CrewAI Enterprise Suite is a comprehensive bundle tailored for organizations
that require secure, scalable, and easy-to-manage agent-driven automation.
You can try part of the suite for free: the [Crew Control Plane](https://app.crewai.com).
## Crew Control Plane Key Features:
- **Tracing & Observability**: Monitor and track your AI agents and workflows in real-time, including metrics, logs, and traces.
- **Unified Control Plane**: A centralized platform for managing, monitoring, and scaling your AI agents and workflows.
- **Seamless Integrations**: Easily connect with existing enterprise systems, data sources, and cloud infrastructure.
- **Advanced Security**: Built-in robust security and compliance measures ensuring safe deployment and management.
- **Actionable Insights**: Real-time analytics and reporting to optimize performance and decision-making.
- **24/7 Support**: Dedicated enterprise support to ensure uninterrupted operation and quick resolution of issues.
- **On-premise and Cloud Deployment Options**: Deploy CrewAI Enterprise on-premise or in the cloud, depending on your security and compliance requirements.
CrewAI Enterprise is designed for enterprises seeking a powerful,
reliable solution to transform complex business processes into efficient,
intelligent automations.
- **Seamless Integrations**
- **Scalable & Secure Deployment**
- **Actionable Insights**
- **24/7 Support**
<h3>
[Homepage](https://www.crewai.com/) | [Documentation](https://docs.crewai.com/) | [Chat with Docs](https://chatg.pt/DWjSBZn) | [Discourse](https://community.crewai.com)
[Homepage](https://www.crewai.com/) | [Documentation](https://docs.crewai.com/) | [Chat with Docs](https://chatg.pt/DWjSBZn) | [Examples](https://github.com/crewAIInc/crewAI-examples) | [Discourse](https://community.crewai.com)
</h3>
@@ -72,19 +47,8 @@ intelligent automations.
## Why CrewAI?
<div align="center" style="margin-bottom: 30px;">
<img src="docs/asset.png" alt="CrewAI Logo" width="100%">
</div>
CrewAI unlocks the true potential of multi-agent automation, delivering the best-in-class combination of speed, flexibility, and control with either Crews of AI Agents or Flows of Events:
- **Standalone Framework**: Built from scratch, independent of LangChain or any other agent framework.
- **High Performance**: Optimized for speed and minimal resource usage, enabling faster execution.
- **Flexible Low Level Customization**: Complete freedom to customize at both high and low levels - from overall workflows and system architecture to granular agent behaviors, internal prompts, and execution logic.
- **Ideal for Every Use Case**: Proven effective for both simple tasks and highly complex, real-world, enterprise-grade scenarios.
- **Robust Community**: Backed by a rapidly growing community of over **100,000 certified** developers offering comprehensive support and resources.
CrewAI empowers developers and enterprises to confidently build intelligent automations, bridging the gap between simplicity, flexibility, and performance.
The power of AI collaboration has too much to offer.
CrewAI is a standalone framework, built from the ground up without dependencies on Langchain or other agent frameworks. It's designed to enable AI agents to assume roles, share goals, and operate in a cohesive unit - much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions.
## Getting Started
@@ -357,16 +321,18 @@ In addition to the sequential process, you can use the hierarchical process, whi
## Key Features
CrewAI stands apart as a lean, standalone, high-performance framework delivering simplicity, flexibility, and precise control—free from the complexity and limitations found in other agent frameworks.
**Note**: CrewAI is a standalone framework built from the ground up, without dependencies on Langchain or other agent frameworks.
- **Standalone & Lean**: Completely independent from other frameworks like LangChain, offering faster execution and lighter resource demands.
- **Flexible & Precise**: Easily orchestrate autonomous agents through intuitive [Crews](https://docs.crewai.com/concepts/crews) or precise [Flows](https://docs.crewai.com/concepts/flows), achieving perfect balance for your needs.
- **Seamless Integration**: Effortlessly combine Crews (autonomy) and Flows (precision) to create complex, real-world automations.
- **Deep Customization**: Tailor every aspect—from high-level workflows down to low-level internal prompts and agent behaviors.
- **Reliable Performance**: Consistent results across simple tasks and complex, enterprise-level automations.
- **Thriving Community**: Backed by robust documentation and over 100,000 certified developers, providing exceptional support and guidance.
- **Deep Customization**: Build sophisticated agents with full control over the system - from overriding inner prompts to accessing low-level APIs. Customize roles, goals, tools, and behaviors while maintaining clean abstractions.
- **Autonomous Inter-Agent Delegation**: Agents can autonomously delegate tasks and inquire amongst themselves, enabling complex problem-solving in real-world scenarios.
- **Flexible Task Management**: Define and customize tasks with granular control, from simple operations to complex multi-step processes.
- **Production-Grade Architecture**: Support for both high-level abstractions and low-level customization, with robust error handling and state management.
- **Predictable Results**: Ensure consistent, accurate outputs through programmatic guardrails, agent training capabilities, and flow-based execution control. See our [documentation on guardrails](https://docs.crewai.com/how-to/guardrails/) for implementation details.
- **Model Flexibility**: Run your crew using OpenAI or open source models with production-ready integrations. See [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) for detailed configuration options.
- **Event-Driven Flows**: Build complex, real-world workflows with precise control over execution paths, state management, and conditional logic.
- **Process Orchestration**: Achieve any workflow pattern through flows - from simple sequential and hierarchical processes to complex, custom orchestration patterns with conditional branching and parallel execution.
Choose CrewAI to easily build powerful, adaptable, and production-ready AI automations.
![CrewAI Mind Map](./docs/crewAI-mindmap.png "CrewAI Mind Map")
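As a hedged illustration of the Crews side, here is a minimal sketch; the roles, goals, and task wiring are placeholder examples, not taken from this repository:

```python
from crewai import Agent, Crew, Task

# Placeholder roles and tasks; kickoff() assumes an LLM/API key is configured.
researcher = Agent(
    role="Researcher",
    goal="Find relevant background information",
    backstory="You specialize in quickly surveying a topic.",
)
writer = Agent(
    role="Writer",
    goal="Turn research into a short summary",
    backstory="You write clear, concise prose.",
)

research = Task(
    description="Collect three key facts about the topic",
    expected_output="Three bullet points",
    agent=researcher,
)
summary = Task(
    description="Summarize the research into one paragraph",
    expected_output="A single paragraph",
    agent=writer,
)

crew = Crew(agents=[researcher, writer], tasks=[research, summary])
result = crew.kickoff()
print(result.raw)
```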
## Examples
@@ -597,39 +563,13 @@ Users can opt-in to Further Telemetry, sharing the complete telemetry data by se
CrewAI is released under the [MIT License](https://github.com/crewAIInc/crewAI/blob/main/LICENSE).
## Frequently Asked Questions (FAQ)
### General
- [What exactly is CrewAI?](#q-what-exactly-is-crewai)
- [How do I install CrewAI?](#q-how-do-i-install-crewai)
- [Does CrewAI depend on LangChain?](#q-does-crewai-depend-on-langchain)
- [Is CrewAI open-source?](#q-is-crewai-open-source)
- [Does CrewAI collect data from users?](#q-does-crewai-collect-data-from-users)
### Features and Capabilities
- [Can CrewAI handle complex use cases?](#q-can-crewai-handle-complex-use-cases)
- [Can I use CrewAI with local AI models?](#q-can-i-use-crewai-with-local-ai-models)
- [What makes Crews different from Flows?](#q-what-makes-crews-different-from-flows)
- [How is CrewAI better than LangChain?](#q-how-is-crewai-better-than-langchain)
- [Does CrewAI support fine-tuning or training custom models?](#q-does-crewai-support-fine-tuning-or-training-custom-models)
### Resources and Community
- [Where can I find real-world CrewAI examples?](#q-where-can-i-find-real-world-crewai-examples)
- [How can I contribute to CrewAI?](#q-how-can-i-contribute-to-crewai)
### Enterprise Features
- [What additional features does CrewAI Enterprise offer?](#q-what-additional-features-does-crewai-enterprise-offer)
- [Is CrewAI Enterprise available for cloud and on-premise deployments?](#q-is-crewai-enterprise-available-for-cloud-and-on-premise-deployments)
- [Can I try CrewAI Enterprise for free?](#q-can-i-try-crewai-enterprise-for-free)
### Q: What exactly is CrewAI?
A: CrewAI is a standalone, lean, and fast Python framework built specifically for orchestrating autonomous AI agents. Unlike frameworks like LangChain, CrewAI does not rely on external dependencies, making it leaner, faster, and simpler.
### Q: What is CrewAI?
A: CrewAI is a cutting-edge framework for orchestrating role-playing, autonomous AI agents. It enables agents to work together seamlessly, tackling complex tasks through collaborative intelligence.
### Q: How do I install CrewAI?
A: Install CrewAI using pip:
A: You can install CrewAI using pip:
```shell
pip install crewai
```
@@ -637,62 +577,27 @@ For additional tools, use:
```shell
pip install 'crewai[tools]'
```
### Q: Does CrewAI depend on LangChain?
A: No. CrewAI is built entirely from the ground up, with no dependencies on LangChain or other agent frameworks. This ensures a lean, fast, and flexible experience.
### Q: Can CrewAI handle complex use cases?
A: Yes. CrewAI excels at both simple and highly complex real-world scenarios, offering deep customization options at both high and low levels, from internal prompts to sophisticated workflow orchestration.
### Q: Can I use CrewAI with local models?
A: Yes, CrewAI supports various LLMs, including local models. You can configure your agents to use local models via tools like Ollama & LM Studio. Check the [LLM Connections documentation](https://docs.crewai.com/how-to/LLM-Connections/) for more details.
### Q: Can I use CrewAI with local AI models?
A: Absolutely! CrewAI supports various language models, including local ones. Tools like Ollama and LM Studio allow seamless integration. Check the [LLM Connections documentation](https://docs.crewai.com/how-to/LLM-Connections/) for more details.
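A minimal sketch of the local-model setup described above, assuming a locally running Ollama server at its default address; the model name is illustrative:

```python
from crewai import Agent, LLM

# Assumes Ollama is serving at its default address; the model name is illustrative.
local_llm = LLM(
    model="ollama/llama3.1",
    base_url="http://localhost:11434",
)

researcher = Agent(
    role="Researcher",
    goal="Research information",
    backstory="You are a researcher with expertise in finding information.",
    llm=local_llm,
)
```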
### Q: What are the key features of CrewAI?
A: Key features include role-based agent design, autonomous inter-agent delegation, flexible task management, process-driven execution, output saving as files, and compatibility with both open-source and proprietary models.
### Q: What makes Crews different from Flows?
A: Crews provide autonomous agent collaboration, ideal for tasks requiring flexible decision-making and dynamic interaction. Flows offer precise, event-driven control, ideal for managing detailed execution paths and secure state management. You can seamlessly combine both for maximum effectiveness.
### Q: How is CrewAI better than LangChain?
A: CrewAI provides simpler, more intuitive APIs, faster execution speeds, more reliable and consistent results, robust documentation, and an active community—addressing common criticisms and limitations associated with LangChain.
### Q: How does CrewAI compare to other AI orchestration tools?
A: CrewAI is designed with production in mind, offering flexibility similar to Autogen's conversational agents and structured processes like ChatDev, but with more adaptability for real-world applications.
### Q: Is CrewAI open-source?
A: Yes, CrewAI is open-source and actively encourages community contributions and collaboration.
A: Yes, CrewAI is open-source and welcomes contributions from the community.
### Q: Does CrewAI collect data from users?
A: CrewAI collects anonymous telemetry data strictly for improvement purposes. Sensitive data such as prompts, tasks, or API responses are never collected unless explicitly enabled by the user.
### Q: Does CrewAI collect any data?
A: CrewAI uses anonymous telemetry to collect usage data for improvement purposes. No sensitive data (like prompts, task descriptions, or API calls) is collected. Users can opt-in to share more detailed data by setting `share_crew=True` on their Crews.
### Q: Where can I find real-world CrewAI examples?
A: Check out practical examples in the [CrewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), covering use cases like trip planners, stock analysis, and job postings.
### Q: Where can I find examples of CrewAI in action?
A: You can find various real-life examples in the [CrewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), including trip planners, stock analysis tools, and more.
### Q: What is the difference between Crews and Flows?
A: Crews and Flows serve different but complementary purposes in CrewAI. Crews are teams of AI agents working together to accomplish specific tasks through role-based collaboration, delivering accurate and predictable results. Flows, on the other hand, are event-driven workflows that can orchestrate both Crews and regular Python code, allowing you to build complex automation pipelines with secure state management and conditional execution paths.
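As a hedged sketch of the Flows side, here is a minimal event-driven flow with plain Python steps; the class and method names are illustrative, and the import path matches the tests added in this change:

```python
from crewai.flow import Flow, listen, start

class ResearchFlow(Flow):
    @start()
    def gather(self):
        # Plain Python step; this could also kick off a Crew.
        return "raw notes"

    @listen(gather)
    def summarize(self, notes):
        # Receives the output of the step it listens to.
        return f"summary of: {notes}"

result = ResearchFlow().kickoff()
print(result)
```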
### Q: How can I contribute to CrewAI?
A: Contributions are warmly welcomed! Fork the repository, create your branch, implement your changes, and submit a pull request. See the Contribution section of the README for detailed guidelines.
### Q: What additional features does CrewAI Enterprise offer?
A: CrewAI Enterprise provides advanced features such as a unified control plane, real-time observability, secure integrations, advanced security, actionable insights, and dedicated 24/7 enterprise support.
### Q: Is CrewAI Enterprise available for cloud and on-premise deployments?
A: Yes, CrewAI Enterprise supports both cloud-based and on-premise deployment options, allowing enterprises to meet their specific security and compliance requirements.
### Q: Can I try CrewAI Enterprise for free?
A: Yes, you can explore part of the CrewAI Enterprise Suite by accessing the [Crew Control Plane](https://app.crewai.com) for free.
### Q: Does CrewAI support fine-tuning or training custom models?
A: Yes, CrewAI can integrate with custom-trained or fine-tuned models, allowing you to enhance your agents with domain-specific knowledge and accuracy.
### Q: Can CrewAI agents interact with external tools and APIs?
A: Absolutely! CrewAI agents can easily integrate with external tools, APIs, and databases, empowering them to leverage real-world data and resources.
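A hedged sketch of wiring an external API into an agent via a custom tool; the `crewai.tools` import path and the endpoint are assumptions for illustration, not taken from this diff:

```python
import requests

from crewai import Agent
from crewai.tools import tool  # import path assumed for this CrewAI version

@tool("Weather lookup")
def weather_lookup(city: str) -> str:
    """Return a one-line weather summary for a city (wttr.in is an illustrative endpoint)."""
    response = requests.get(f"https://wttr.in/{city}", params={"format": "3"}, timeout=10)
    return response.text

analyst = Agent(
    role="Analyst",
    goal="Answer questions using live external data",
    backstory="You combine external data sources with careful reasoning.",
    tools=[weather_lookup],
)
```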
### Q: Is CrewAI suitable for production environments?
A: Yes, CrewAI is explicitly designed with production-grade standards, ensuring reliability, stability, and scalability for enterprise deployments.
### Q: How scalable is CrewAI?
A: CrewAI is highly scalable, supporting simple automations and large-scale enterprise workflows involving numerous agents and complex tasks simultaneously.
### Q: Does CrewAI offer debugging and monitoring tools?
A: Yes, CrewAI Enterprise includes advanced debugging, tracing, and real-time observability features, simplifying the management and troubleshooting of your automations.
### Q: What programming languages does CrewAI support?
A: CrewAI is primarily Python-based but easily integrates with services and APIs written in any programming language through its flexible API integration capabilities.
### Q: Does CrewAI offer educational resources for beginners?
A: Yes, CrewAI provides extensive beginner-friendly tutorials, courses, and documentation through learn.crewai.com, supporting developers at all skill levels.
### Q: Can CrewAI automate human-in-the-loop workflows?
A: Yes, CrewAI fully supports human-in-the-loop workflows, allowing seamless collaboration between human experts and AI agents for enhanced decision-making.
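A hedged sketch of a human review step, assuming the `human_input` flag on `Task` behaves as in recent CrewAI releases:

```python
from crewai import Agent, Crew, Task

editor = Agent(
    role="Editor",
    goal="Review drafts before publication",
    backstory="You are a meticulous editor.",
)

review = Task(
    description="Review the draft article and flag any issues",
    expected_output="A short review ending in approve or reject",
    agent=editor,
    human_input=True,  # assumed flag: pauses for human feedback on the agent's final answer
)

crew = Crew(agents=[editor], tasks=[review])
crew.kickoff()  # requires a configured LLM/API key
```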
A: Contributions are welcome! You can fork the repository, create a new branch for your feature, add your improvement, and send a pull request. Check the Contribution section in the README for more details.

Binary file not shown (image diff; before: 66 KiB).

View File

@@ -6,7 +6,7 @@ icon: handshake
# What is CrewAI?
**CrewAI is a cutting-edge framework for orchestrating autonomous AI agents.**
**CrewAI is a cutting-edge framework for orchestrating autonomous AI agents.**
CrewAI enables you to create AI teams where each agent has specific roles, tools, and goals, working together to accomplish complex tasks.
@@ -19,7 +19,7 @@ Think of it as assembling your dream team - each member (agent) brings unique sk
</Note>
<Frame caption="CrewAI Framework Overview">
<img src="asset.png" alt="CrewAI Framework Overview" />
<img src="crewAI-mindmap.png" alt="CrewAI Framework Overview" />
</Frame>
| Component | Description | Key Features |

View File

@@ -1,6 +1,6 @@
[project]
name = "crewai"
version = "0.105.0"
version = "0.102.0"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md"
requires-python = ">=3.10,<3.13"
@@ -45,7 +45,7 @@ Documentation = "https://docs.crewai.com"
Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = ["crewai-tools>=0.37.0"]
tools = ["crewai-tools>=0.36.0"]
embeddings = [
"tiktoken~=0.7.0"
]

View File

@@ -14,7 +14,7 @@ warnings.filterwarnings(
category=UserWarning,
module="pydantic.main",
)
__version__ = "0.105.0"
__version__ = "0.102.0"
__all__ = [
"Agent",
"Crew",

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.105.0,<1.0.0"
"crewai[tools]>=0.102.0,<1.0.0"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.105.0,<1.0.0",
"crewai[tools]>=0.102.0,<1.0.0",
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.105.0"
"crewai[tools]>=0.102.0"
]
[tool.crewai]

View File

@@ -184,7 +184,7 @@ class Crew(BaseModel):
default=None,
description="Maximum number of requests per minute for the crew execution to be respected.",
)
prompt_file: str = Field(
prompt_file: Optional[str] = Field(
default=None,
description="Path to the prompt json file to be used for the crew.",
)
@@ -808,6 +808,7 @@ class Crew(BaseModel):
)
if skipped_task_output:
task_outputs.append(skipped_task_output)
last_sync_output = skipped_task_output
continue
if task.async_execution:
@@ -821,8 +822,10 @@ class Crew(BaseModel):
)
futures.append((task, future, task_index))
else:
# Process any pending async tasks before executing a sync task
if futures:
task_outputs = self._process_async_tasks(futures, was_replayed)
processed_outputs = self._process_async_tasks(futures, was_replayed)
task_outputs.extend(processed_outputs)
futures.clear()
context = self._get_context(task, task_outputs)
@@ -832,11 +835,14 @@ class Crew(BaseModel):
tools=tools_for_task,
)
task_outputs.append(task_output)
last_sync_output = task_output
self._process_task_result(task, task_output)
self._store_execution_log(task, task_output, task_index, was_replayed)
# Process any remaining async tasks at the end
if futures:
task_outputs = self._process_async_tasks(futures, was_replayed)
processed_outputs = self._process_async_tasks(futures, was_replayed)
task_outputs.extend(processed_outputs)
return self._create_crew_output(task_outputs)
@@ -848,12 +854,17 @@ class Crew(BaseModel):
task_index: int,
was_replayed: bool,
) -> Optional[TaskOutput]:
# Process any pending async tasks to ensure we have the most up-to-date context
if futures:
task_outputs = self._process_async_tasks(futures, was_replayed)
processed_outputs = self._process_async_tasks(futures, was_replayed)
task_outputs.extend(processed_outputs)
futures.clear()
# Get the previous output to evaluate the condition
previous_output = task_outputs[-1] if task_outputs else None
if previous_output is not None and not task.should_execute(previous_output):
# If there's no previous output or the condition evaluates to False, skip the task
if previous_output is None or not task.should_execute(previous_output):
self._logger.log(
"debug",
f"Skipping conditional task: {task.description}",
@@ -861,8 +872,13 @@ class Crew(BaseModel):
)
skipped_task_output = task.get_skipped_task_output()
# Store the execution log for the skipped task
if not was_replayed:
self._store_execution_log(task, skipped_task_output, task_index)
# Set the output on the task itself so it can be referenced later
task.output = skipped_task_output
return skipped_task_output
return None

View File

@@ -0,0 +1,50 @@
from functools import wraps
from typing import Any, Callable, Optional, Union, cast
from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.task_output import TaskOutput
def task(func: Callable) -> Callable:
"""
Decorator for Flow methods that return a Task.
This decorator ensures that when a method returns a ConditionalTask,
the condition is properly evaluated based on the previous task's output.
Args:
func: The method to decorate
Returns:
The decorated method
"""
setattr(func, "is_task", True)
@wraps(func)
def wrapper(self, *args, **kwargs):
result = func(self, *args, **kwargs)
# Set the task name if not already set
if hasattr(result, "name") and not result.name:
result.name = func.__name__
# If this is a ConditionalTask, ensure it has a valid condition
if isinstance(result, ConditionalTask):
# If the condition is a boolean, wrap it in a function
if isinstance(result.condition, bool):
bool_value = result.condition
result.condition = lambda _: bool_value
# Get the previous task output if available
previous_outputs = getattr(self, "_method_outputs", [])
previous_output = previous_outputs[-1] if previous_outputs else None
# If there's a previous output and it's a TaskOutput, check if we should execute
if previous_output and isinstance(previous_output, TaskOutput):
if not result.should_execute(previous_output):
# Return a skipped task output instead of the task
return result.get_skipped_task_output()
return result
return wrapper

View File

@@ -114,19 +114,6 @@ LLM_CONTEXT_WINDOW_SIZES = {
"Llama-3.2-11B-Vision-Instruct": 16384,
"Meta-Llama-3.2-3B-Instruct": 4096,
"Meta-Llama-3.2-1B-Instruct": 16384,
# mistral
"mistral-tiny": 32768,
"mistral-small-latest": 32768,
"mistral-medium-latest": 32768,
"mistral-large-latest": 32768,
"mistral-large-2407": 32768,
"mistral-large-2402": 32768,
"mistral/mistral-tiny": 32768,
"mistral/mistral-small-latest": 32768,
"mistral/mistral-medium-latest": 32768,
"mistral/mistral-large-latest": 32768,
"mistral/mistral-large-2407": 32768,
"mistral/mistral-large-2402": 32768,
}
DEFAULT_CONTEXT_WINDOW_SIZE = 8192
@@ -802,17 +789,6 @@ class LLM:
formatted_messages.append(msg)
return formatted_messages
# Handle Mistral models - they require the last message to have a role of 'user' or 'tool'
if "mistral" in self.model.lower():
# Check if the last message has a role of 'assistant'
if messages and messages[-1]["role"] == "assistant":
# Add a dummy user message to ensure the last message has a role of 'user'
messages = (
messages.copy()
) # Create a copy to avoid modifying the original
messages.append({"role": "user", "content": "Please continue."})
return messages
# Handle Anthropic models
if not self.is_anthropic:
return messages

View File

@@ -1,8 +1,10 @@
from functools import wraps
from typing import Callable
from typing import Any, Callable, Optional, Union, cast
from crewai import Crew
from crewai.project.utils import memoize
from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.task_output import TaskOutput
"""Decorators for defining crew components and their behaviors."""
@@ -21,13 +23,35 @@ def after_kickoff(func):
def task(func):
"""Marks a method as a crew task."""
func.is_task = True
setattr(func, "is_task", True)
@wraps(func)
def wrapper(*args, **kwargs):
result = func(*args, **kwargs)
if not result.name:
# Set the task name if not already set
if hasattr(result, "name") and not result.name:
result.name = func.__name__
# If this is a ConditionalTask, ensure it has a valid condition
if isinstance(result, ConditionalTask):
# If the condition is a boolean, wrap it in a function
if isinstance(result.condition, bool):
bool_value = result.condition
result.condition = lambda _: bool_value
# Get the previous task output if available
self = args[0] if args else None
if self and hasattr(self, "_method_outputs"):
previous_outputs = getattr(self, "_method_outputs", [])
previous_output = previous_outputs[-1] if previous_outputs else None
# If there's a previous output and it's a TaskOutput, check if we should execute
if previous_output and isinstance(previous_output, TaskOutput):
if not result.should_execute(previous_output):
# Return a skipped task output instead of the task
return result.get_skipped_task_output()
return result
return memoize(wrapper)

View File

@@ -1,4 +1,4 @@
from typing import Any, Callable
from typing import Any, Callable, Union, cast
from pydantic import Field
@@ -14,17 +14,23 @@ class ConditionalTask(Task):
"""
condition: Callable[[TaskOutput], bool] = Field(
default=None,
description="Maximum number of retries for an agent to execute a task when an error occurs.",
default=lambda _: True, # Default to always execute
description="Function that determines whether the task should be executed or a boolean value.",
)
def __init__(
self,
condition: Callable[[Any], bool],
condition: Union[Callable[[Any], bool], bool],
**kwargs,
):
super().__init__(**kwargs)
self.condition = condition
# If condition is a boolean, wrap it in a function that always returns that boolean
if isinstance(condition, bool):
bool_value = condition
self.condition = lambda _: bool_value
else:
self.condition = cast(Callable[[TaskOutput], bool], condition)
def should_execute(self, context: TaskOutput) -> bool:
"""

View File

@@ -43,8 +43,8 @@ def create_llm(
try:
# Extract attributes with explicit types
model = (
getattr(llm_value, "model", None)
or getattr(llm_value, "model_name", None)
getattr(llm_value, "model_name", None)
or getattr(llm_value, "model", None)
or getattr(llm_value, "deployment_name", None)
or str(llm_value)
)
@@ -77,9 +77,8 @@ def _llm_via_environment_or_fallback() -> Optional[LLM]:
Helper function: if llm_value is None, we load environment variables or fallback default model.
"""
model_name = (
os.environ.get("MODEL")
or os.environ.get("MODEL_NAME")
or os.environ.get("OPENAI_MODEL_NAME")
os.environ.get("OPENAI_MODEL_NAME")
or os.environ.get("MODEL")
or DEFAULT_LLM_MODEL
)

View File

@@ -0,0 +1,190 @@
from unittest.mock import MagicMock, patch
import pytest
from crewai import Agent, Crew, Task
from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.task_output import TaskOutput
# Create mock agents for testing
researcher = Agent(
role="Researcher",
goal="Research information",
backstory="You are a researcher with expertise in finding information.",
)
writer = Agent(
role="Writer",
goal="Write content",
backstory="You are a writer with expertise in creating engaging content.",
)
def test_conditional_task_with_boolean_false():
"""Test that a conditional task with a boolean False condition is skipped."""
task1 = Task(
description="Initial task",
expected_output="Initial output",
agent=researcher,
)
# Use a boolean False directly as the condition
task2 = ConditionalTask(
description="Conditional task that should be skipped",
expected_output="This should not be executed",
agent=writer,
condition=False,
)
crew = Crew(
agents=[researcher, writer],
tasks=[task1, task2],
)
with patch.object(Task, "execute_sync") as mock_execute_sync:
mock_execute_sync.return_value = TaskOutput(
description="Task 1 description",
raw="Task 1 output",
agent="Researcher",
)
result = crew.kickoff()
# Only the first task should be executed
assert mock_execute_sync.call_count == 1
# The conditional task should be skipped
assert task2.output is not None
assert task2.output.raw == ""
# The final output should be from the first task
assert result.raw.startswith("Task 1 output")
def test_conditional_task_with_boolean_true():
"""Test that a conditional task with a boolean True condition is executed."""
task1 = Task(
description="Initial task",
expected_output="Initial output",
agent=researcher,
)
# Use a boolean True directly as the condition
task2 = ConditionalTask(
description="Conditional task that should be executed",
expected_output="This should be executed",
agent=writer,
condition=True,
)
crew = Crew(
agents=[researcher, writer],
tasks=[task1, task2],
)
with patch.object(Task, "execute_sync") as mock_execute_sync:
mock_execute_sync.return_value = TaskOutput(
description="Task output",
raw="Task output",
agent="Agent",
)
crew.kickoff()
# Both tasks should be executed
assert mock_execute_sync.call_count == 2
def test_multiple_sequential_conditional_tasks():
"""Test that multiple conditional tasks in sequence work correctly."""
task1 = Task(
description="Initial task",
expected_output="Initial output",
agent=researcher,
)
# First conditional task (will be executed)
task2 = ConditionalTask(
description="First conditional task",
expected_output="First conditional output",
agent=writer,
condition=True,
)
# Second conditional task (will be skipped)
task3 = ConditionalTask(
description="Second conditional task",
expected_output="Second conditional output",
agent=researcher,
condition=False,
)
# Third conditional task (will be executed)
task4 = ConditionalTask(
description="Third conditional task",
expected_output="Third conditional output",
agent=writer,
condition=True,
)
crew = Crew(
agents=[researcher, writer],
tasks=[task1, task2, task3, task4],
)
with patch.object(Task, "execute_sync") as mock_execute_sync:
mock_execute_sync.return_value = TaskOutput(
description="Task output",
raw="Task output",
agent="Agent",
)
result = crew.kickoff()
# Tasks 1, 2, and 4 should be executed (task 3 is skipped)
assert mock_execute_sync.call_count == 3
# Task 3 should be skipped
assert task3.output is not None
assert task3.output.raw == ""
def test_last_task_conditional():
"""Test that a conditional task at the end of the task list works correctly."""
task1 = Task(
description="Initial task",
expected_output="Initial output",
agent=researcher,
)
# Last task is conditional and will be skipped
task2 = ConditionalTask(
description="Last conditional task",
expected_output="Last conditional output",
agent=writer,
condition=False,
)
crew = Crew(
agents=[researcher, writer],
tasks=[task1, task2],
)
with patch.object(Task, "execute_sync") as mock_execute_sync:
mock_execute_sync.return_value = TaskOutput(
description="Task 1 output",
raw="Task 1 output",
agent="Researcher",
)
result = crew.kickoff()
# Only the first task should be executed
assert mock_execute_sync.call_count == 1
# The conditional task should be skipped
assert task2.output is not None
assert task2.output.raw == ""
# The final output should be from the first task
assert result.raw.startswith("Task 1 output")

View File

@@ -0,0 +1,152 @@
from unittest.mock import MagicMock, patch
import pytest
from crewai import Agent, Task
from crewai.flow import Flow, listen, start
from crewai.project.annotations import task
from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.task_output import TaskOutput
# Create mock agents for testing
researcher = Agent(
role="Researcher",
goal="Research information",
backstory="You are a researcher with expertise in finding information.",
)
writer = Agent(
role="Writer",
goal="Write content",
backstory="You are a writer with expertise in creating engaging content.",
)
class TestFlowWithConditionalTasks(Flow):
"""Test flow with conditional tasks."""
@start()
@task
def initial_task(self):
"""Initial task that always executes."""
return Task(
description="Initial task",
expected_output="Initial output",
agent=researcher,
)
@listen(initial_task)
@task
def conditional_task_false(self):
"""Conditional task that should be skipped."""
return ConditionalTask(
description="Conditional task that should be skipped",
expected_output="This should not be executed",
agent=writer,
condition=False,
)
@listen(initial_task)
@task
def conditional_task_true(self):
"""Conditional task that should be executed."""
return ConditionalTask(
description="Conditional task that should be executed",
expected_output="This should be executed",
agent=writer,
condition=True,
)
@listen(conditional_task_true)
@task
def final_task(self):
"""Final task that executes after the conditional task."""
return Task(
description="Final task",
expected_output="Final output",
agent=researcher,
)
def test_flow_with_conditional_tasks():
"""Test that conditional tasks work correctly in a Flow."""
flow = TestFlowWithConditionalTasks()
with patch.object(Task, "execute_sync") as mock_execute_sync:
mock_execute_sync.return_value = TaskOutput(
description="Task output",
raw="Task output",
agent="Agent",
)
flow.kickoff()
# The initial task, conditional_task_true, and final_task should be executed
# conditional_task_false should be skipped
assert mock_execute_sync.call_count == 3
class TestFlowWithSequentialConditionalTasks(Flow):
"""Test flow with sequential conditional tasks."""
@start()
@task
def initial_task(self):
"""Initial task that always executes."""
return Task(
description="Initial task",
expected_output="Initial output",
agent=researcher,
)
@listen(initial_task)
@task
def conditional_task_1(self):
"""First conditional task that should be executed."""
return ConditionalTask(
description="First conditional task",
expected_output="First conditional output",
agent=writer,
condition=True,
)
@listen(conditional_task_1)
@task
def conditional_task_2(self):
"""Second conditional task that should be skipped."""
return ConditionalTask(
description="Second conditional task",
expected_output="Second conditional output",
agent=researcher,
condition=False,
)
@listen(conditional_task_2)
@task
def conditional_task_3(self):
"""Third conditional task that should be executed."""
return ConditionalTask(
description="Third conditional task",
expected_output="Third conditional output",
agent=writer,
condition=True,
)
def test_flow_with_sequential_conditional_tasks():
"""Test that sequential conditional tasks work correctly in a Flow."""
flow = TestFlowWithSequentialConditionalTasks()
with patch.object(Task, "execute_sync") as mock_execute_sync:
mock_execute_sync.return_value = TaskOutput(
description="Task output",
raw="Task output",
agent="Agent",
)
flow.kickoff()
# The initial_task and conditional_task_1 should be executed
# conditional_task_2 should be skipped, and since it's skipped,
# conditional_task_3 should not be triggered
assert mock_execute_sync.call_count == 2

uv.lock (generated)
View File

@@ -619,7 +619,7 @@ wheels = [
[[package]]
name = "crewai"
version = "0.105.0"
version = "0.102.0"
source = { editable = "." }
dependencies = [
{ name = "appdirs" },
@@ -703,7 +703,7 @@ requires-dist = [
{ name = "blinker", specifier = ">=1.9.0" },
{ name = "chromadb", specifier = ">=0.5.23" },
{ name = "click", specifier = ">=8.1.7" },
{ name = "crewai-tools", marker = "extra == 'tools'", specifier = ">=0.37.0" },
{ name = "crewai-tools", marker = "extra == 'tools'", specifier = ">=0.36.0" },
{ name = "docling", marker = "extra == 'docling'", specifier = ">=2.12.0" },
{ name = "fastembed", marker = "extra == 'fastembed'", specifier = ">=0.4.1" },
{ name = "instructor", specifier = ">=1.3.3" },
@@ -752,7 +752,7 @@ dev = [
[[package]]
name = "crewai-tools"
version = "0.37.0"
version = "0.36.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "chromadb" },
@@ -767,9 +767,9 @@ dependencies = [
{ name = "pytube" },
{ name = "requests" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ef/a9/813ef7b721d11ac962c2a3cf4c98196d3ca8bca5bb0fa5e01da0af51ac23/crewai_tools-0.37.0.tar.gz", hash = "sha256:23c8428761809e30d164be32c2a02850c4648e4371e9934eb58842590bca9659", size = 722104 }
sdist = { url = "https://files.pythonhosted.org/packages/4d/e1/d65778cf4aea106f3f60a4208521f04bc7f1d26be4e34eeb63cae6297d50/crewai_tools-0.36.0.tar.gz", hash = "sha256:761b396ee6a4019a988720dd6a14e1409f5de9d0cdc2a8662b487d87efb1a6bf", size = 900178 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/f4/b3/6bf9b066f628875c383689ab72d21968e1108ebece887491dbf051ee39c5/crewai_tools-0.37.0-py3-none-any.whl", hash = "sha256:df5c9efade5c1f4fcfdf6ac8af13c422be7127a3083a5cda75d8f314c652bb10", size = 548490 },
{ url = "https://files.pythonhosted.org/packages/bd/b6/533632a6c2a2e623fc4a1677458aff3539413a196fb220a7fece4ead3f71/crewai_tools-0.36.0-py3-none-any.whl", hash = "sha256:dbd0d95a080acfb281e105f4376e1e98576dae6d53d94f7b883c57af893668b3", size = 545937 },
]
[[package]]