Mirror of https://github.com/crewAIInc/crewAI.git, synced 2026-01-29 18:18:13 +00:00.

Compare commits: devin/1745 ... ea413ae03b (8 commits)

Commits: ea413ae03b, f1299f484d, a0c322a535, 86f58c95de, 99fe91586d, 0c2d23dfe0, 2433819c4f, 97fc44c930

README.md: 158 lines changed
@@ -4,7 +4,7 @@

 # **CrewAI**

-🤖 **CrewAI**: Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
+🤖 **CrewAI**: Production-grade framework for orchestrating sophisticated AI agent systems. From simple automations to complex real-world applications, CrewAI provides precise control and deep customization. By fostering collaborative intelligence through flexible, production-ready architecture, CrewAI empowers agents to work together seamlessly, tackling complex business challenges with predictable, consistent results.

 <h3>
@@ -22,13 +22,17 @@

 - [Why CrewAI?](#why-crewai)
 - [Getting Started](#getting-started)
 - [Key Features](#key-features)
+- [Understanding Flows and Crews](#understanding-flows-and-crews)
+- [CrewAI vs LangGraph](#how-crewai-compares)
 - [Examples](#examples)
 - [Quick Tutorial](#quick-tutorial)
 - [Write Job Descriptions](#write-job-descriptions)
 - [Trip Planner](#trip-planner)
 - [Stock Analysis](#stock-analysis)
+- [Using Crews and Flows Together](#using-crews-and-flows-together)
 - [Connecting Your Crew to a Model](#connecting-your-crew-to-a-model)
 - [How CrewAI Compares](#how-crewai-compares)
+- [Frequently Asked Questions (FAQ)](#frequently-asked-questions-faq)
 - [Contribution](#contribution)
 - [Telemetry](#telemetry)
 - [License](#license)
@@ -36,10 +40,40 @@

 ## Why CrewAI?

 The power of AI collaboration has too much to offer.
-CrewAI is designed to enable AI agents to assume roles, share goals, and operate in a cohesive unit - much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions.
+CrewAI is a standalone framework, built from the ground up without dependencies on Langchain or other agent frameworks. It's designed to enable AI agents to assume roles, share goals, and operate in a cohesive unit - much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions.

 ## Getting Started

+### Learning Resources
+
+Learn CrewAI through our comprehensive courses:
+
+- [Multi AI Agent Systems with CrewAI](https://www.deeplearning.ai/short-courses/multi-ai-agent-systems-with-crewai/) - Master the fundamentals of multi-agent systems
+- [Practical Multi AI Agents and Advanced Use Cases](https://www.deeplearning.ai/short-courses/practical-multi-ai-agents-and-advanced-use-cases-with-crewai/) - Deep dive into advanced implementations
+
+### Understanding Flows and Crews
+
+CrewAI offers two powerful, complementary approaches that work seamlessly together to build sophisticated AI applications:
+
+1. **Crews**: Teams of AI agents with true autonomy and agency, working together to accomplish complex tasks through role-based collaboration. Crews enable:
+   - Natural, autonomous decision-making between agents
+   - Dynamic task delegation and collaboration
+   - Specialized roles with defined goals and expertise
+   - Flexible problem-solving approaches
+
+2. **Flows**: Production-ready, event-driven workflows that deliver precise control over complex automations. Flows provide:
+   - Fine-grained control over execution paths for real-world scenarios
+   - Secure, consistent state management between tasks
+   - Clean integration of AI agents with production Python code
+   - Conditional branching for complex business logic
+
+The true power of CrewAI emerges when combining Crews and Flows. This synergy allows you to:
+
+- Build complex, production-grade applications
+- Balance autonomy with precise control
+- Handle sophisticated real-world scenarios
+- Maintain clean, maintainable code structure
+
+### Getting Started with Installation
+
 To get started with CrewAI, follow these simple steps:

 ### 1. Installation
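The Crews/Flows split added above is, at its core, event-driven wiring: steps register as listeners for the step that precedes them, and results get pushed downstream. The following is a dependency-free sketch of that idea only — `MiniFlow` and its `start`/`listen` helpers are illustrative stand-ins, not the CrewAI API.

```python
# Illustrative sketch of event-driven step wiring; MiniFlow, start, and
# listen are hypothetical names, not CrewAI's Flow API.
class MiniFlow:
    def __init__(self):
        self.steps = {}       # step name -> function
        self.listeners = {}   # trigger name -> listener step names
        self.entry = None

    def start(self, fn):
        # Register the entry-point step (analogous in spirit to @start()).
        self.entry = fn.__name__
        self.steps[fn.__name__] = fn
        return fn

    def listen(self, trigger):
        # Register a step that fires when `trigger` finishes.
        def register(fn):
            self.steps[fn.__name__] = fn
            self.listeners.setdefault(trigger, []).append(fn.__name__)
            return fn
        return register

    def kickoff(self):
        # Run the entry step, then push each result to its listeners.
        results = {}
        queue = [(self.entry, None)]
        while queue:
            name, payload = queue.pop(0)
            fn = self.steps[name]
            out = fn(payload) if payload is not None else fn()
            results[name] = out
            for listener in self.listeners.get(name, []):
                queue.append((listener, out))
        return results


flow = MiniFlow()

@flow.start
def fetch_market_data():
    return {"sector": "tech"}

@flow.listen("fetch_market_data")
def analyze(data):
    # A real Flow would kick off a Crew here; we just transform the payload.
    return f"analysis of {data['sector']}"

print(flow.kickoff()["analyze"])  # analysis of tech
```

The point of the sketch is the shape of the control flow: autonomous work happens inside a step, while the engine decides which step runs next.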
@@ -264,13 +298,16 @@ In addition to the sequential process, you can use the hierarchical process, whi

 ## Key Features

-- **Role-Based Agent Design**: Customize agents with specific roles, goals, and tools.
-- **Autonomous Inter-Agent Delegation**: Agents can autonomously delegate tasks and inquire amongst themselves, enhancing problem-solving efficiency.
-- **Flexible Task Management**: Define tasks with customizable tools and assign them to agents dynamically.
-- **Processes Driven**: Currently only supports `sequential` task execution and `hierarchical` processes, but more complex processes like consensual and autonomous are being worked on.
-- **Save output as file**: Save the output of individual tasks as a file, so you can use it later.
-- **Parse output as Pydantic or Json**: Parse the output of individual tasks as a Pydantic model or as a Json if you want to.
-- **Works with Open Source Models**: Run your crew using Open AI or open source models refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models, even ones running locally!
+**Note**: CrewAI is a standalone framework built from the ground up, without dependencies on Langchain or other agent frameworks.
+
+- **Deep Customization**: Build sophisticated agents with full control over the system - from overriding inner prompts to accessing low-level APIs. Customize roles, goals, tools, and behaviors while maintaining clean abstractions.
+- **Autonomous Inter-Agent Delegation**: Agents can autonomously delegate tasks and inquire amongst themselves, enabling complex problem-solving in real-world scenarios.
+- **Flexible Task Management**: Define and customize tasks with granular control, from simple operations to complex multi-step processes.
+- **Production-Grade Architecture**: Support for both high-level abstractions and low-level customization, with robust error handling and state management.
+- **Predictable Results**: Ensure consistent, accurate outputs through programmatic guardrails, agent training capabilities, and flow-based execution control. See our [documentation on guardrails](https://docs.crewai.com/how-to/guardrails/) for implementation details.
+- **Model Flexibility**: Run your crew using OpenAI or open source models with production-ready integrations. See [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) for detailed configuration options.
+- **Event-Driven Flows**: Build complex, real-world workflows with precise control over execution paths, state management, and conditional logic.
+- **Process Orchestration**: Achieve any workflow pattern through flows - from simple sequential and hierarchical processes to complex, custom orchestration patterns with conditional branching and parallel execution.

 ![CrewAI Mind Map](./docs/crewAI-mindmap.png "CrewAI Mind Map")
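The "Predictable Results" bullet in the rewritten feature list rests on a simple pattern: generate, validate, retry. Here is a minimal stdlib sketch of such a guardrail loop; `with_guardrail` and the fake producer are illustrative, not CrewAI's actual guardrail API.

```python
# Minimal sketch of a programmatic guardrail: call a producer until a
# validator accepts its output or retries are exhausted. The names here
# are illustrative assumptions, not CrewAI's API.
def with_guardrail(produce, validate, max_retries=2):
    feedback = None
    for attempt in range(max_retries + 1):
        output = produce(attempt)
        ok, feedback = validate(output)
        if ok:
            return output
    raise ValueError(f"guardrail rejected all outputs: {feedback}")


# Fake "agent" whose first draft is empty and second draft is usable.
drafts = ["", "Market report: adoption of AI tooling is accelerating."]
report = with_guardrail(
    produce=lambda attempt: drafts[attempt],
    validate=lambda text: (bool(text.strip()), "output was empty"),
)
print(report)  # Market report: adoption of AI tooling is accelerating.
```

In a real crew, the producer would be an agent executing a task and the validator would check structure or content before the output is passed downstream.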
@@ -305,6 +342,98 @@ You can test different real life examples of AI crews in the [CrewAI-examples re

 [![Stock Analysis](https://img.youtube.com/vi/e0Uj4yWdaAg/maxresdefault.jpg)](https://www.youtube.com/watch?v=e0Uj4yWdaAg "Stock Analysis")

+### Using Crews and Flows Together
+
+CrewAI's power truly shines when combining Crews with Flows to create sophisticated automation pipelines. Here's how you can orchestrate multiple Crews within a Flow:
+
+```python
+from crewai.flow.flow import Flow, listen, start, router
+from crewai import Crew, Agent, Task, Process
+from pydantic import BaseModel
+
+# Define structured state for precise control
+class MarketState(BaseModel):
+    sentiment: str = "neutral"
+    confidence: float = 0.0
+    recommendations: list = []
+
+class AdvancedAnalysisFlow(Flow[MarketState]):
+    @start()
+    def fetch_market_data(self):
+        # Demonstrate low-level control with structured state
+        self.state.sentiment = "analyzing"
+        return {"sector": "tech", "timeframe": "1W"}  # These parameters match the task description template
+
+    @listen(fetch_market_data)
+    def analyze_with_crew(self, market_data):
+        # Show crew agency through specialized roles
+        analyst = Agent(
+            role="Senior Market Analyst",
+            goal="Conduct deep market analysis with expert insight",
+            backstory="You're a veteran analyst known for identifying subtle market patterns"
+        )
+        researcher = Agent(
+            role="Data Researcher",
+            goal="Gather and validate supporting market data",
+            backstory="You excel at finding and correlating multiple data sources"
+        )
+
+        analysis_task = Task(
+            description="Analyze {sector} sector data for the past {timeframe}",
+            expected_output="Detailed market analysis with confidence score",
+            agent=analyst
+        )
+        research_task = Task(
+            description="Find supporting data to validate the analysis",
+            expected_output="Corroborating evidence and potential contradictions",
+            agent=researcher
+        )
+
+        # Demonstrate crew autonomy
+        analysis_crew = Crew(
+            agents=[analyst, researcher],
+            tasks=[analysis_task, research_task],
+            process=Process.sequential,
+            verbose=True
+        )
+        return analysis_crew.kickoff(inputs=market_data)  # Pass market_data as named inputs
+
+    @router(analyze_with_crew)
+    def determine_next_steps(self):
+        # Show flow control with conditional routing
+        if self.state.confidence > 0.8:
+            return "high_confidence"
+        elif self.state.confidence > 0.5:
+            return "medium_confidence"
+        return "low_confidence"
+
+    @listen("high_confidence")
+    def execute_strategy(self):
+        # Demonstrate complex decision making
+        strategy_crew = Crew(
+            agents=[
+                Agent(role="Strategy Expert",
+                      goal="Develop optimal market strategy")
+            ],
+            tasks=[
+                Task(description="Create detailed strategy based on analysis",
+                     expected_output="Step-by-step action plan")
+            ]
+        )
+        return strategy_crew.kickoff()
+
+    @listen("medium_confidence", "low_confidence")
+    def request_additional_analysis(self):
+        self.state.recommendations.append("Gather more data")
+        return "Additional analysis required"
+```
+
+This example demonstrates how to:
+
+1. Use Python code for basic data operations
+2. Create and execute Crews as steps in your workflow
+3. Use Flow decorators to manage the sequence of operations
+4. Implement conditional branching based on Crew results

 ## Connecting Your Crew to a Model

 CrewAI supports using various LLMs through a variety of connection options. By default your agents will use the OpenAI API when querying the model. However, there are several other ways to allow your agents to connect to models. For example, you can configure your agents to use a local model via the Ollama tool.
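The `@router` step in the Flow example above is ordinary threshold logic plus a dispatch table. Extracted as plain Python for clarity — `route_by_confidence` and `handlers` are illustrative names, not part of the CrewAI API:

```python
# The routing logic from the Flow example, as plain Python. The names
# route_by_confidence and handlers are illustrative, not CrewAI API.
def route_by_confidence(confidence):
    if confidence > 0.8:
        return "high_confidence"
    elif confidence > 0.5:
        return "medium_confidence"
    return "low_confidence"

# Each label maps to the step a @listen(...) decorator would fire.
handlers = {
    "high_confidence": lambda: "execute strategy",
    "medium_confidence": lambda: "request additional analysis",
    "low_confidence": lambda: "request additional analysis",
}

print(handlers[route_by_confidence(0.92)]())  # execute strategy
print(handlers[route_by_confidence(0.40)]())  # request additional analysis
```

Note the boundary behavior: a confidence of exactly 0.8 routes to `medium_confidence`, because the comparisons are strict.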
@@ -313,9 +442,13 @@ Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-

 ## How CrewAI Compares

-**CrewAI's Advantage**: CrewAI is built with production in mind. It offers the flexibility of Autogen's conversational agents and the structured process approach of ChatDev, but without the rigidity. CrewAI's processes are designed to be dynamic and adaptable, fitting seamlessly into both development and production workflows.
+**CrewAI's Advantage**: CrewAI combines autonomous agent intelligence with precise workflow control through its unique Crews and Flows architecture. The framework excels at both high-level orchestration and low-level customization, enabling complex, production-grade systems with granular control.

-- **Autogen**: While Autogen does good in creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
+- **LangGraph**: While LangGraph provides a foundation for building agent workflows, its approach requires significant boilerplate code and complex state management patterns. The framework's tight coupling with LangChain can limit flexibility when implementing custom agent behaviors or integrating with external systems.
+
+*P.S. CrewAI demonstrates significant performance advantages over LangGraph, executing 5.76x faster in certain cases like this QA task example ([see comparison](https://github.com/crewAIInc/crewAI-examples/tree/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/QA%20Agent)) while achieving higher evaluation scores with faster completion times in certain coding tasks, like in this example ([detailed analysis](https://github.com/crewAIInc/crewAI-examples/blob/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/Coding%20Assistant/coding_assistant_eval.ipynb)).*
+
+- **Autogen**: While Autogen excels at creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.

 - **ChatDev**: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications.
@@ -440,5 +573,8 @@ A: CrewAI uses anonymous telemetry to collect usage data for improvement purpose

 ### Q: Where can I find examples of CrewAI in action?
 A: You can find various real-life examples in the [CrewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), including trip planners, stock analysis tools, and more.

+### Q: What is the difference between Crews and Flows?
+A: Crews and Flows serve different but complementary purposes in CrewAI. Crews are teams of AI agents working together to accomplish specific tasks through role-based collaboration, delivering accurate and predictable results. Flows, on the other hand, are event-driven workflows that can orchestrate both Crews and regular Python code, allowing you to build complex automation pipelines with secure state management and conditional execution paths.
+
 ### Q: How can I contribute to CrewAI?
 A: Contributions are welcome! You can fork the repository, create a new branch for your feature, add your improvement, and send a pull request. Check the Contribution section in the README for more details.
@@ -171,6 +171,58 @@ crewai reset-memories --knowledge

 This is useful when you've updated your knowledge sources and want to ensure that the agents are using the most recent information.

+## Agent-Specific Knowledge
+
+While knowledge can be provided at the crew level using `crew.knowledge_sources`, individual agents can also have their own knowledge sources using the `knowledge_sources` parameter:
+
+```python Code
+from crewai import Agent, Task, Crew
+from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
+
+# Create agent-specific knowledge about a product
+product_specs = StringKnowledgeSource(
+    content="""The XPS 13 laptop features:
+    - 13.4-inch 4K display
+    - Intel Core i7 processor
+    - 16GB RAM
+    - 512GB SSD storage
+    - 12-hour battery life""",
+    metadata={"category": "product_specs"}
+)
+
+# Create a support agent with product knowledge
+support_agent = Agent(
+    role="Technical Support Specialist",
+    goal="Provide accurate product information and support.",
+    backstory="You are an expert on our laptop products and specifications.",
+    knowledge_sources=[product_specs]  # Agent-specific knowledge
+)
+
+# Create a task that requires product knowledge
+support_task = Task(
+    description="Answer this customer question: {question}",
+    expected_output="A clear, accurate answer to the customer's question.",
+    agent=support_agent
+)
+
+# Create and run the crew
+crew = Crew(
+    agents=[support_agent],
+    tasks=[support_task]
+)
+
+# Get answer about the laptop's specifications
+result = crew.kickoff(
+    inputs={"question": "What is the storage capacity of the XPS 13?"}
+)
+```
+
+<Info>
+Benefits of agent-specific knowledge:
+- Give agents specialized information for their roles
+- Maintain separation of concerns between agents
+- Combine with crew-level knowledge for layered information access
+</Info>

 ## Custom Knowledge Sources

 CrewAI allows you to create custom knowledge sources for any type of data by extending the `BaseKnowledgeSource` class. Let's create a practical example that fetches and processes space news articles.
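Conceptually, a string knowledge source like the one added above does two things: chunk the content, then retrieve the chunks most relevant to a query. Here is a toy, dependency-free stand-in that uses keyword overlap instead of embeddings; the class name and scoring are illustrative assumptions, not CrewAI's implementation.

```python
# Toy stand-in for a string knowledge source: chunk the content and
# retrieve chunks by keyword overlap instead of embeddings. The class
# name and scoring are illustrative, not CrewAI's implementation.
STOPWORDS = {"what", "is", "the", "of", "a", "an"}

class NaiveStringSource:
    def __init__(self, content):
        # One chunk per line here; real sources chunk by size with overlap.
        self.chunks = [line.strip() for line in content.splitlines() if line.strip()]

    def query(self, question, top_k=1):
        words = set(question.lower().split()) - STOPWORDS
        ranked = sorted(
            self.chunks,
            key=lambda chunk: len(words & set(chunk.lower().split())),
            reverse=True,
        )
        return ranked[:top_k]


specs = """The XPS 13 laptop features:
13.4-inch 4K display
Intel Core i7 processor
16GB RAM
512GB SSD storage
12-hour battery life"""

source = NaiveStringSource(specs)
print(source.query("What is the storage capacity?"))
```

Real knowledge sources replace the keyword overlap with embedding similarity against a vector store, but the chunk-then-rank shape is the same.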
@@ -1,47 +0,0 @@
-"""
-Example of using task decomposition in CrewAI.
-
-This example demonstrates how to use the task decomposition feature
-to break down complex tasks into simpler sub-tasks.
-
-Feature introduced in CrewAI v1.x.x
-"""
-
-from crewai import Agent, Task, Crew
-
-researcher = Agent(
-    role="Researcher",
-    goal="Research effectively",
-    backstory="You're an expert researcher with skills in breaking down complex topics.",
-)
-
-research_task = Task(
-    description="Research the impact of AI on various industries",
-    expected_output="A comprehensive report covering multiple industries",
-    agent=researcher,
-)
-
-sub_tasks = research_task.decompose(
-    descriptions=[
-        "Research AI impact on healthcare industry",
-        "Research AI impact on finance industry",
-        "Research AI impact on education industry",
-    ],
-    expected_outputs=[
-        "A report on AI in healthcare",
-        "A report on AI in finance",
-        "A report on AI in education",
-    ],
-    names=["Healthcare", "Finance", "Education"],
-)
-
-crew = Crew(
-    agents=[researcher],
-    tasks=[research_task],
-)
-
-result = crew.kickoff()
-print("Final result:", result)
-
-for i, sub_task in enumerate(research_task.sub_tasks):
-    print(f"Sub-task {i+1} result: {sub_task.output.raw if hasattr(sub_task, 'output') and sub_task.output else 'No output'}")
@@ -26,7 +26,7 @@ class CrewAgentExecutorMixin:

     def _should_force_answer(self) -> bool:
         """Determine if a forced answer is required based on iteration count."""
-        return (self.iterations >= self.max_iter) and not self.have_forced_answer
+        return self.iterations >= self.max_iter

     def _create_short_term_memory(self, output) -> None:
         """Create and save a short-term memory item if conditions are met."""
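The hunk above drops the `have_forced_answer` guard, which changes behavior at the iteration limit. Modeling both versions as free functions over the same fields makes the difference concrete (a sketch for comparison, not the mixin itself):

```python
# Before/after comparison of the _should_force_answer change above,
# modeled as plain functions over the same fields.
def should_force_answer_old(iterations, max_iter, have_forced_answer):
    return (iterations >= max_iter) and not have_forced_answer

def should_force_answer_new(iterations, max_iter):
    return iterations >= max_iter

# Once an answer has already been forced, the old check goes quiet;
# the new one keeps returning True at or past the limit.
print(should_force_answer_old(5, 5, have_forced_answer=True))  # False
print(should_force_answer_new(5, 5))                           # True
```

In other words, after this change the executor will keep demanding a forced answer on every iteration at the limit, regardless of whether one was already forced.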
@@ -14,13 +14,13 @@ class Knowledge(BaseModel):
     Knowledge is a collection of sources and setup for the vector store to save and query relevant context.
     Args:
         sources: List[BaseKnowledgeSource] = Field(default_factory=list)
-        storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
+        storage: Optional[KnowledgeStorage] = Field(default=None)
         embedder_config: Optional[Dict[str, Any]] = None
     """

     sources: List[BaseKnowledgeSource] = Field(default_factory=list)
     model_config = ConfigDict(arbitrary_types_allowed=True)
-    storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
+    storage: Optional[KnowledgeStorage] = Field(default=None)
     embedder_config: Optional[Dict[str, Any]] = None
     collection_name: Optional[str] = None

@@ -49,8 +49,13 @@ class Knowledge(BaseModel):
         """
         Query across all knowledge sources to find the most relevant information.
         Returns the top_k most relevant chunks.
+
+        Raises:
+            ValueError: If storage is not initialized.
         """
+        if self.storage is None:
+            raise ValueError("Storage is not initialized.")
+
         results = self.storage.search(
             query,
             limit,
@@ -22,13 +22,14 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
         default_factory=list, description="The path to the file"
     )
     content: Dict[Path, str] = Field(init=False, default_factory=dict)
-    storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
+    storage: Optional[KnowledgeStorage] = Field(default=None)
     safe_file_paths: List[Path] = Field(default_factory=list)

     @field_validator("file_path", "file_paths", mode="before")
-    def validate_file_path(cls, v, values):
+    def validate_file_path(cls, v, info):
         """Validate that at least one of file_path or file_paths is provided."""
-        if v is None and ("file_path" not in values or values.get("file_path") is None):
+        # Single check if both are None, O(1) instead of nested conditions
+        if v is None and info.data.get("file_path" if info.field_name == "file_paths" else "file_paths") is None:
             raise ValueError("Either file_path or file_paths must be provided")
         return v

@@ -62,7 +63,10 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):

     def _save_documents(self):
         """Save the documents to the storage."""
-        self.storage.save(self.chunks)
+        if self.storage:
+            self.storage.save(self.chunks)
+        else:
+            raise ValueError("No storage found to save documents.")

     def convert_to_path(self, path: Union[Path, str]) -> Path:
         """Convert a path to a Path object."""
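The rewritten validator above implements a cross-field "at least one of" check. Stripped of the Pydantic machinery, the rule it enforces is just this (a plain-function sketch; `check_file_sources` is an illustrative name):

```python
# Plain-function analogue of the validator above: reject the case where
# both file_path and file_paths are missing. Illustrative name, not the
# actual validator.
def check_file_sources(file_path=None, file_paths=None):
    if file_path is None and file_paths is None:
        raise ValueError("Either file_path or file_paths must be provided")
    return file_path, file_paths


check_file_sources(file_path="specs.md")         # ok
check_file_sources(file_paths=["a.md", "b.md"])  # ok
try:
    check_file_sources()
except ValueError as exc:
    print(exc)  # Either file_path or file_paths must be provided
```

The Pydantic version is trickier only because the validator runs per field: when validating one field, it must look up the *other* field's value via `info.data`, which is why the diff selects the opposite field name.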
@@ -16,7 +16,7 @@ class BaseKnowledgeSource(BaseModel, ABC):
     chunk_embeddings: List[np.ndarray] = Field(default_factory=list)

     model_config = ConfigDict(arbitrary_types_allowed=True)
-    storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
+    storage: Optional[KnowledgeStorage] = Field(default=None)
     metadata: Dict[str, Any] = Field(default_factory=dict)  # Currently unused
     collection_name: Optional[str] = Field(default=None)

@@ -46,4 +46,7 @@ class BaseKnowledgeSource(BaseModel, ABC):
         Save the documents to the storage.
         This method should be called after the chunks and embeddings are generated.
         """
-        self.storage.save(self.chunks)
+        if self.storage:
+            self.storage.save(self.chunks)
+        else:
+            raise ValueError("No storage found to save documents.")
@@ -19,7 +19,6 @@ from typing import (
     Tuple,
     Type,
     Union,
-    ForwardRef,
 )

 from opentelemetry.trace import Span

@@ -138,16 +137,6 @@ class Task(BaseModel):
         default=0,
         description="Current number of retries"
     )
-    parent_task: Optional['Task'] = Field(
-        default=None,
-        description="Parent task that this task was decomposed from.",
-        exclude=True,
-    )
-    sub_tasks: List['Task'] = Field(
-        default_factory=list,
-        description="Sub-tasks that this task was decomposed into.",
-        exclude=True,
-    )

     @field_validator("guardrail")
     @classmethod
@@ -257,151 +246,13 @@ class Task(BaseModel):
         )
         return self

-    def decompose(
-        self,
-        descriptions: List[str],
-        expected_outputs: Optional[List[str]] = None,
-        names: Optional[List[str]] = None
-    ) -> List['Task']:
-        """
-        Decompose a complex task into simpler sub-tasks.
-
-        Args:
-            descriptions: List of descriptions for each sub-task.
-            expected_outputs: Optional list of expected outputs for each sub-task.
-            names: Optional list of names for each sub-task.
-
-        Returns:
-            List of created sub-tasks.
-
-        Raises:
-            ValueError: If descriptions is empty, or if expected_outputs or names
-                have different lengths than descriptions.
-
-        Side Effects:
-            Modifies self.sub_tasks by adding newly created sub-tasks.
-        """
-        if not descriptions:
-            raise ValueError("At least one sub-task description is required.")
-
-        if expected_outputs and len(expected_outputs) != len(descriptions):
-            raise ValueError(
-                f"If provided, expected_outputs must have the same length as descriptions. "
-                f"Got {len(expected_outputs)} expected outputs and {len(descriptions)} descriptions."
-            )
-
-        if names and len(names) != len(descriptions):
-            raise ValueError(
-                f"If provided, names must have the same length as descriptions. "
-                f"Got {len(names)} names and {len(descriptions)} descriptions."
-            )
-
-        for i, description in enumerate(descriptions):
-            sub_task = Task(
-                description=description,
-                expected_output=expected_outputs[i] if expected_outputs else self.expected_output,
-                name=names[i] if names else None,
-                agent=self.agent,  # Inherit the agent from the parent task
-                tools=self.tools,  # Inherit the tools from the parent task
-                context=[self],  # Set the parent task as context for the sub-task
-                parent_task=self,  # Reference back to the parent task
-            )
-            self.sub_tasks.append(sub_task)
-
-        return self.sub_tasks
-
-    def combine_sub_task_results(self) -> str:
-        """
-        Combine the results from all sub-tasks into a single result for this task.
-
-        This method uses the task's agent to intelligently combine the results from
-        all sub-tasks. It requires an agent capable of coherent text summarization
-        and is designed for stateless prompt execution.
-
-        Returns:
-            The combined result as a string.
-
-        Raises:
-            ValueError: If the task has no sub-tasks or no agent assigned.
-
-        Side Effects:
-            None. This method does not modify the task's state.
-        """
-        if not self.sub_tasks:
-            raise ValueError("Task has no sub-tasks to combine results from.")
-
-        if not self.agent:
-            raise ValueError("Task has no agent to combine sub-task results.")
-
-        sub_task_results = "\n\n".join([
-            f"Sub-task: {sub_task.description}\nResult: {sub_task.output.raw if sub_task.output else 'No result'}"
-            for sub_task in self.sub_tasks
-        ])
-
-        combine_prompt = f"""
-        You have completed the following sub-tasks for the main task: "{self.description}"
-
-        {sub_task_results}
-
-        Based on all these sub-tasks, please provide a consolidated final answer for the main task.
-        Expected output format: {self.expected_output if self.expected_output else 'Not specified'}
-        """
||||||
|
|
||||||
result = self.agent.execute_task(
|
|
||||||
task=self,
|
|
||||||
context=combine_prompt,
|
|
||||||
tools=self.tools or []
|
|
||||||
)
|
|
||||||
|
|
||||||
return result
|
|
||||||
|
|
||||||
def execute_sync(
|
def execute_sync(
|
||||||
self,
|
self,
|
||||||
agent: Optional[BaseAgent] = None,
|
agent: Optional[BaseAgent] = None,
|
||||||
context: Optional[str] = None,
|
context: Optional[str] = None,
|
||||||
tools: Optional[List[BaseTool]] = None,
|
tools: Optional[List[BaseTool]] = None,
|
||||||
) -> TaskOutput:
|
) -> TaskOutput:
|
||||||
"""
|
"""Execute the task synchronously."""
|
||||||
Execute the task synchronously.
|
|
||||||
|
|
||||||
If the task has sub-tasks and no output yet, this method will:
|
|
||||||
1. Execute all sub-tasks first
|
|
||||||
2. Combine their results using the agent
|
|
||||||
3. Set the combined result as this task's output
|
|
||||||
|
|
||||||
Args:
|
|
||||||
agent: Optional agent to execute the task with.
|
|
||||||
context: Optional context to pass to the task.
|
|
||||||
tools: Optional tools to pass to the task.
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
TaskOutput: The result of the task execution.
|
|
||||||
|
|
||||||
Side Effects:
|
|
||||||
Sets self.output with the execution result.
|
|
||||||
"""
|
|
||||||
if self.sub_tasks and not self.output:
|
|
||||||
for sub_task in self.sub_tasks:
|
|
||||||
sub_task.execute_sync(
|
|
||||||
agent=sub_task.agent or agent,
|
|
||||||
context=context,
|
|
||||||
tools=sub_task.tools or tools or [],
|
|
||||||
)
|
|
||||||
|
|
||||||
# Combine the results from sub-tasks
|
|
||||||
result = self.combine_sub_task_results()
|
|
||||||
|
|
||||||
self.output = TaskOutput(
|
|
||||||
description=self.description,
|
|
||||||
name=self.name,
|
|
||||||
expected_output=self.expected_output,
|
|
||||||
raw=result,
|
|
||||||
agent=self.agent.role if self.agent else None,
|
|
||||||
output_format=self.output_format,
|
|
||||||
)
|
|
||||||
|
|
||||||
return self.output
|
|
||||||
|
|
||||||
return self._execute_core(agent, context, tools)
|
return self._execute_core(agent, context, tools)
|
||||||
|
|
||||||
@property
|
@property
|
||||||
@@ -427,55 +278,6 @@ class Task(BaseModel):
|
|||||||
).start()
|
).start()
|
||||||
return future
|
return future
|
||||||
|
|
||||||
def execute_sub_tasks_async(
|
|
||||||
self,
|
|
||||||
agent: Optional[BaseAgent] = None,
|
|
||||||
context: Optional[str] = None,
|
|
||||||
tools: Optional[List[BaseTool]] = None,
|
|
||||||
) -> List[Future[TaskOutput]]:
|
|
||||||
"""
|
|
||||||
Execute all sub-tasks asynchronously.
|
|
||||||
|
|
||||||
This method starts the execution of all sub-tasks in parallel and returns
|
|
||||||
futures that can be awaited. After all futures are complete, you should call
|
|
||||||
combine_sub_task_results() to aggregate the results.
|
|
||||||
|
|
||||||
Example:
|
|
||||||
```python
|
|
||||||
futures = task.execute_sub_tasks_async()
|
|
||||||
|
|
||||||
for future in futures:
|
|
||||||
future.result()
|
|
||||||
|
|
||||||
# Combine the results
|
|
||||||
result = task.combine_sub_task_results()
|
|
||||||
```
|
|
||||||
|
|
||||||
Args:
|
|
||||||
agent: Optional agent to execute the sub-tasks with.
|
|
||||||
context: Optional context to pass to the sub-tasks.
|
|
||||||
tools: Optional tools to pass to the sub-tasks.
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
List of futures for the sub-task executions.
|
|
||||||
|
|
||||||
Raises:
|
|
||||||
ValueError: If the task has no sub-tasks.
|
|
||||||
"""
|
|
||||||
if not self.sub_tasks:
|
|
||||||
return []
|
|
||||||
|
|
||||||
futures = []
|
|
||||||
for sub_task in self.sub_tasks:
|
|
||||||
future = sub_task.execute_async(
|
|
||||||
agent=sub_task.agent or agent,
|
|
||||||
context=context,
|
|
||||||
tools=sub_task.tools or tools or [],
|
|
||||||
)
|
|
||||||
futures.append(future)
|
|
||||||
|
|
||||||
return futures
|
|
||||||
|
|
||||||
def _execute_task_async(
|
def _execute_task_async(
|
||||||
self,
|
self,
|
||||||
agent: Optional[BaseAgent],
|
agent: Optional[BaseAgent],
|
||||||
@@ -632,8 +434,6 @@ class Task(BaseModel):
|
|||||||
"agent",
|
"agent",
|
||||||
"context",
|
"context",
|
||||||
"tools",
|
"tools",
|
||||||
"parent_task",
|
|
||||||
"sub_tasks",
|
|
||||||
}
|
}
|
||||||
|
|
||||||
copied_data = self.model_dump(exclude=exclude)
|
copied_data = self.model_dump(exclude=exclude)
|
||||||
@@ -657,7 +457,6 @@ class Task(BaseModel):
|
|||||||
agent=cloned_agent,
|
agent=cloned_agent,
|
||||||
tools=cloned_tools,
|
tools=cloned_tools,
|
||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
return copied_task
|
return copied_task
|
||||||
|
|
||||||
@@ -727,6 +526,3 @@ class Task(BaseModel):
|
|||||||
|
|
||||||
def __repr__(self):
|
def __repr__(self):
|
||||||
return f"Task(description={self.description}, expected_output={self.expected_output})"
|
return f"Task(description={self.description}, expected_output={self.expected_output})"
|
||||||
|
|
||||||
|
|
||||||
Task.model_rebuild()
|
|
||||||
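For context on the validation logic in the removed `decompose` hunk above: it checks for a non-empty descriptions list, then enforces that the optional `expected_outputs` and `names` lists match its length before fanning out. A minimal stand-alone sketch of that behavior (plain Python, with a hypothetical `SubTask` stand-in rather than the real crewAI `Task` class) looks roughly like this:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class SubTask:
    # Hypothetical stand-in for crewAI's Task, just enough to show the shape.
    description: str
    expected_output: Optional[str] = None
    name: Optional[str] = None


def decompose(descriptions: List[str],
              expected_outputs: Optional[List[str]] = None,
              names: Optional[List[str]] = None) -> List[SubTask]:
    # Mirrors the validation order in the removed Task.decompose hunk:
    # non-empty descriptions first, then length checks on the optional lists.
    if not descriptions:
        raise ValueError("At least one sub-task description is required.")
    if expected_outputs and len(expected_outputs) != len(descriptions):
        raise ValueError("expected_outputs must have the same length as descriptions.")
    if names and len(names) != len(descriptions):
        raise ValueError("names must have the same length as descriptions.")
    return [
        SubTask(
            description=d,
            expected_output=expected_outputs[i] if expected_outputs else None,
            name=names[i] if names else None,
        )
        for i, d in enumerate(descriptions)
    ]


subs = decompose(["healthcare", "finance"], names=["HC", "FI"])
print(len(subs), subs[0].name)  # 2 HC
```

Validating all parallel lists up front, before any sub-task is created, means a bad call leaves `sub_tasks` untouched instead of half-populated.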
@@ -584,3 +584,28 @@ def test_docling_source_with_local_file():
     docling_source = CrewDoclingSource(file_paths=[pdf_path])
     assert docling_source.file_paths == [pdf_path]
     assert docling_source.content is not None
+
+
+def test_file_path_validation():
+    """Test file path validation for knowledge sources."""
+    current_dir = Path(__file__).parent
+    pdf_path = current_dir / "crewai_quickstart.pdf"
+
+    # Test valid single file_path
+    source = PDFKnowledgeSource(file_path=pdf_path)
+    assert source.safe_file_paths == [pdf_path]
+
+    # Test valid file_paths list
+    source = PDFKnowledgeSource(file_paths=[pdf_path])
+    assert source.safe_file_paths == [pdf_path]
+
+    # Test both file_path and file_paths provided (should use file_paths)
+    source = PDFKnowledgeSource(file_path=pdf_path, file_paths=[pdf_path])
+    assert source.safe_file_paths == [pdf_path]
+
+    # Test neither file_path nor file_paths provided
+    with pytest.raises(
+        ValueError,
+        match="file_path/file_paths must be a Path, str, or a list of these types"
+    ):
+        PDFKnowledgeSource()
@@ -1,157 +0,0 @@
-import pytest
-from unittest.mock import Mock, patch
-
-from crewai import Agent, Task
-
-
-def test_task_decomposition_structure():
-    """Test that task decomposition creates the proper parent-child relationship."""
-    agent = Agent(
-        role="Researcher",
-        goal="Research effectively",
-        backstory="You're an expert researcher",
-    )
-
-    parent_task = Task(
-        description="Research the impact of AI on various industries",
-        expected_output="A comprehensive report",
-        agent=agent,
-    )
-
-    sub_task_descriptions = [
-        "Research AI impact on healthcare",
-        "Research AI impact on finance",
-        "Research AI impact on education",
-    ]
-
-    sub_tasks = parent_task.decompose(
-        descriptions=sub_task_descriptions,
-        expected_outputs=["Healthcare report", "Finance report", "Education report"],
-        names=["Healthcare", "Finance", "Education"],
-    )
-
-    assert len(sub_tasks) == 3
-    assert len(parent_task.sub_tasks) == 3
-
-    for sub_task in sub_tasks:
-        assert sub_task.parent_task == parent_task
-        assert parent_task in sub_task.context
-
-
-def test_task_execution_with_sub_tasks():
-    """Test that executing a task with sub-tasks executes the sub-tasks first."""
-    agent = Agent(
-        role="Researcher",
-        goal="Research effectively",
-        backstory="You're an expert researcher",
-    )
-
-    parent_task = Task(
-        description="Research the impact of AI on various industries",
-        expected_output="A comprehensive report",
-        agent=agent,
-    )
-
-    sub_task_descriptions = [
-        "Research AI impact on healthcare",
-        "Research AI impact on finance",
-        "Research AI impact on education",
-    ]
-
-    parent_task.decompose(
-        descriptions=sub_task_descriptions,
-        expected_outputs=["Healthcare report", "Finance report", "Education report"],
-    )
-
-    with patch.object(Agent, 'execute_task', return_value="Mock result") as mock_execute_task:
-        result = parent_task.execute_sync()
-
-        assert mock_execute_task.call_count >= 3
-
-        for sub_task in parent_task.sub_tasks:
-            assert sub_task.output is not None
-
-        assert result is not None
-        assert result.raw is not None
-
-
-def test_combine_sub_task_results():
-    """Test that combining sub-task results works correctly."""
-    agent = Agent(
-        role="Researcher",
-        goal="Research effectively",
-        backstory="You're an expert researcher",
-    )
-
-    parent_task = Task(
-        description="Research the impact of AI on various industries",
-        expected_output="A comprehensive report",
-        agent=agent,
-    )
-
-    sub_tasks = parent_task.decompose([
-        "Research AI impact on healthcare",
-        "Research AI impact on finance",
-    ])
-
-    for sub_task in sub_tasks:
-        sub_task.output = Mock()
-        sub_task.output.raw = f"Result for {sub_task.description}"
-
-    with patch.object(Agent, 'execute_task', return_value="Combined result") as mock_execute_task:
-        result = parent_task.combine_sub_task_results()
-
-        assert mock_execute_task.called
-        assert result == "Combined result"
-
-
-def test_task_decomposition_validation():
-    """Test that task decomposition validates inputs correctly."""
-    parent_task = Task(
-        description="Research the impact of AI",
-        expected_output="A report",
-    )
-
-    with pytest.raises(ValueError, match="At least one sub-task description is required"):
-        parent_task.decompose([])
-
-    with pytest.raises(ValueError, match="expected_outputs must have the same length"):
-        parent_task.decompose(
-            ["Task 1", "Task 2"],
-            expected_outputs=["Output 1"]
-        )
-
-    with pytest.raises(ValueError, match="names must have the same length"):
-        parent_task.decompose(
-            ["Task 1", "Task 2"],
-            names=["Name 1"]
-        )
-
-
-def test_execute_sub_tasks_async():
-    """Test that executing sub-tasks asynchronously works correctly."""
-    agent = Agent(
-        role="Researcher",
-        goal="Research effectively",
-        backstory="You're an expert researcher",
-    )
-
-    parent_task = Task(
-        description="Research the impact of AI on various industries",
-        expected_output="A comprehensive report",
-        agent=agent,
-    )
-
-    sub_tasks = parent_task.decompose([
-        "Research AI impact on healthcare",
-        "Research AI impact on finance",
-    ])
-
-    with patch.object(Task, 'execute_async') as mock_execute_async:
-        mock_future = Mock()
-        mock_execute_async.return_value = mock_future
-
-        futures = parent_task.execute_sub_tasks_async()
-
-        assert mock_execute_async.call_count == 2
-        assert len(futures) == 2
@@ -1,109 +0,0 @@
-import pytest
-from unittest.mock import Mock, patch
-
-from crewai import Agent, Task, TaskOutput
-
-
-def test_combine_sub_task_results_no_sub_tasks():
-    """Test that combining sub-task results raises an error when there are no sub-tasks."""
-    agent = Agent(
-        role="Researcher",
-        goal="Research effectively",
-        backstory="You're an expert researcher",
-    )
-
-    parent_task = Task(
-        description="Research the impact of AI",
-        expected_output="A report",
-        agent=agent,
-    )
-
-    with pytest.raises(ValueError, match="Task has no sub-tasks to combine results from"):
-        parent_task.combine_sub_task_results()
-
-
-def test_combine_sub_task_results_no_agent():
-    """Test that combining sub-task results raises an error when there is no agent."""
-    parent_task = Task(
-        description="Research the impact of AI",
-        expected_output="A report",
-    )
-
-    sub_task = Task(
-        description="Research AI impact on healthcare",
-        expected_output="Healthcare report",
-        parent_task=parent_task,
-    )
-    parent_task.sub_tasks.append(sub_task)
-
-    with pytest.raises(ValueError, match="Task has no agent to combine sub-task results"):
-        parent_task.combine_sub_task_results()
-
-
-def test_execute_sync_sets_output_after_combining():
-    """Test that execute_sync sets the output after combining sub-task results."""
-    agent = Agent(
-        role="Researcher",
-        goal="Research effectively",
-        backstory="You're an expert researcher",
-    )
-
-    parent_task = Task(
-        description="Research the impact of AI",
-        expected_output="A report",
-        agent=agent,
-    )
-
-    sub_tasks = parent_task.decompose([
-        "Research AI impact on healthcare",
-        "Research AI impact on finance",
-    ])
-
-    with patch.object(Agent, 'execute_task', return_value="Combined result") as mock_execute_task:
-        result = parent_task.execute_sync()
-
-        assert parent_task.output is not None
-        assert parent_task.output.raw == "Combined result"
-        assert result.raw == "Combined result"
-
-        assert mock_execute_task.call_count >= 3
-
-
-def test_deep_cloning_prevents_shared_state():
-    """Test that deep cloning prevents shared mutable state between tasks."""
-    agent = Agent(
-        role="Researcher",
-        goal="Research effectively",
-        backstory="You're an expert researcher",
-    )
-
-    parent_task = Task(
-        description="Research the impact of AI",
-        expected_output="A report",
-        agent=agent,
-    )
-
-    copied_task = parent_task.copy()
-
-    copied_task.description = "Modified description"
-
-    assert parent_task.description == "Research the impact of AI"
-    assert copied_task.description == "Modified description"
-
-    parent_task.decompose(["Sub-task 1", "Sub-task 2"])
-
-    assert len(parent_task.sub_tasks) == 2
-    assert len(copied_task.sub_tasks) == 0
-
-
-def test_execute_sub_tasks_async_empty_sub_tasks():
-    """Test that execute_sub_tasks_async returns an empty list when there are no sub-tasks."""
-    parent_task = Task(
-        description="Research the impact of AI",
-        expected_output="A report",
-    )
-
-    futures = parent_task.execute_sub_tasks_async()
-
-    assert isinstance(futures, list)
-    assert len(futures) == 0
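The deleted async test above exercised a fan-out pattern: start every sub-task, wait on the returned futures, then combine the results. With the feature reverted, the same shape can still be expressed directly with the standard library (a generic `concurrent.futures` sketch, not crewAI API; `run_sub_task` is a hypothetical placeholder for an agent call):

```python
from concurrent.futures import ThreadPoolExecutor
from typing import List


def run_sub_task(description: str) -> str:
    # Hypothetical placeholder for an agent executing one sub-task.
    return f"Result for {description}"


def fan_out_and_combine(descriptions: List[str]) -> str:
    # Start every sub-task in parallel, await the futures, then join the
    # results -- the same start/await/combine shape the deleted test checked.
    with ThreadPoolExecutor(max_workers=max(1, len(descriptions))) as pool:
        futures = [pool.submit(run_sub_task, d) for d in descriptions]
        results = [f.result() for f in futures]
    return "\n\n".join(results)


print(fan_out_and_combine(["healthcare", "finance"]))
```

Collecting all futures before calling `result()` on any of them is what keeps the sub-tasks running concurrently rather than serially.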