Mirror of https://github.com/crewAIInc/crewAI.git, synced 2026-01-07 15:18:29 +00:00

Comparing 5 commits: devin/1739 ... devin/1735

| Author | SHA1 | Date |
|---|---|---|
|  | a499d9de42 |  |
|  | 99fe91586d |  |
|  | 0c2d23dfe0 |  |
|  | 2433819c4f |  |
|  | 97fc44c930 |  |

README.md: 158 lines changed
@@ -4,7 +4,7 @@

# **CrewAI**

-🤖 **CrewAI**: Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
+🤖 **CrewAI**: Production-grade framework for orchestrating sophisticated AI agent systems. From simple automations to complex real-world applications, CrewAI provides precise control and deep customization. By fostering collaborative intelligence through a flexible, production-ready architecture, CrewAI empowers agents to work together seamlessly, tackling complex business challenges with predictable, consistent results.

<h3>
@@ -22,13 +22,17 @@

- [Why CrewAI?](#why-crewai)
- [Getting Started](#getting-started)
- [Key Features](#key-features)
- [Understanding Flows and Crews](#understanding-flows-and-crews)
- [CrewAI vs LangGraph](#how-crewai-compares)
- [Examples](#examples)
  - [Quick Tutorial](#quick-tutorial)
  - [Write Job Descriptions](#write-job-descriptions)
  - [Trip Planner](#trip-planner)
  - [Stock Analysis](#stock-analysis)
  - [Using Crews and Flows Together](#using-crews-and-flows-together)
- [Connecting Your Crew to a Model](#connecting-your-crew-to-a-model)
- [How CrewAI Compares](#how-crewai-compares)
- [Frequently Asked Questions (FAQ)](#frequently-asked-questions-faq)
- [Contribution](#contribution)
- [Telemetry](#telemetry)
- [License](#license)
@@ -36,10 +40,40 @@

## Why CrewAI?

-The power of AI collaboration has too much to offer.
-CrewAI is designed to enable AI agents to assume roles, share goals, and operate in a cohesive unit - much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions.
+CrewAI is a standalone framework, built from the ground up without dependencies on Langchain or other agent frameworks. It's designed to enable AI agents to assume roles, share goals, and operate in a cohesive unit - much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions.

## Getting Started

### Learning Resources

Learn CrewAI through our comprehensive courses:
- [Multi AI Agent Systems with CrewAI](https://www.deeplearning.ai/short-courses/multi-ai-agent-systems-with-crewai/) - Master the fundamentals of multi-agent systems
- [Practical Multi AI Agents and Advanced Use Cases](https://www.deeplearning.ai/short-courses/practical-multi-ai-agents-and-advanced-use-cases-with-crewai/) - Deep dive into advanced implementations

### Understanding Flows and Crews

CrewAI offers two powerful, complementary approaches that work seamlessly together to build sophisticated AI applications:

1. **Crews**: Teams of AI agents with true autonomy and agency, working together to accomplish complex tasks through role-based collaboration. Crews enable:
   - Natural, autonomous decision-making between agents
   - Dynamic task delegation and collaboration
   - Specialized roles with defined goals and expertise
   - Flexible problem-solving approaches

2. **Flows**: Production-ready, event-driven workflows that deliver precise control over complex automations. Flows provide:
   - Fine-grained control over execution paths for real-world scenarios
   - Secure, consistent state management between tasks
   - Clean integration of AI agents with production Python code
   - Conditional branching for complex business logic

The true power of CrewAI emerges when combining Crews and Flows. This synergy allows you to:
- Build complex, production-grade applications
- Balance autonomy with precise control
- Handle sophisticated real-world scenarios
- Maintain clean, maintainable code structure
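The event-driven pattern behind Flows - a `@start` step triggering `@listen` steps - can be sketched as a toy pipeline in plain Python. This is an illustration of the pattern only, not CrewAI's actual Flow implementation; all names below are invented:

```python
# Toy event-driven pipeline mimicking the @start/@listen decorator pattern.
# Illustrative only; CrewAI's real Flow class is far more capable.
class ToyFlow:
    def __init__(self):
        self.steps = []

    def start(self, fn):
        # Register an entry-point step
        self.steps.append(("start", None, fn))
        return fn

    def listen(self, trigger):
        # Register a step that fires when `trigger` has produced a result
        def decorator(fn):
            self.steps.append(("listen", trigger, fn))
            return fn
        return decorator

    def kickoff(self):
        results = {}
        for kind, trigger, fn in self.steps:
            if kind == "start":
                results[fn.__name__] = fn()
            elif trigger in results:
                results[fn.__name__] = fn(results[trigger])
        return results

flow = ToyFlow()

@flow.start
def fetch():
    return {"sector": "tech"}

@flow.listen("fetch")
def analyze(data):
    return f"analyzing {data['sector']}"
```

Calling `flow.kickoff()` runs `fetch` first, then `analyze` with `fetch`'s result - the same listener-chaining idea CrewAI's decorators express declaratively.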
### Getting Started with Installation

To get started with CrewAI, follow these simple steps:

### 1. Installation
@@ -264,13 +298,16 @@ In addition to the sequential process, you can use the hierarchical process, whi

## Key Features

-- **Role-Based Agent Design**: Customize agents with specific roles, goals, and tools.
-- **Autonomous Inter-Agent Delegation**: Agents can autonomously delegate tasks and inquire amongst themselves, enhancing problem-solving efficiency.
-- **Flexible Task Management**: Define tasks with customizable tools and assign them to agents dynamically.
-- **Processes Driven**: Currently only supports `sequential` task execution and `hierarchical` processes, but more complex processes like consensual and autonomous are being worked on.
-- **Save output as file**: Save the output of individual tasks as a file, so you can use it later.
-- **Parse output as Pydantic or JSON**: Parse the output of individual tasks as a Pydantic model or as JSON if you want to.
-- **Works with Open Source Models**: Run your crew using OpenAI or open source models; refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models, even ones running locally!
+**Note**: CrewAI is a standalone framework built from the ground up, without dependencies on Langchain or other agent frameworks.
+
+- **Deep Customization**: Build sophisticated agents with full control over the system - from overriding inner prompts to accessing low-level APIs. Customize roles, goals, tools, and behaviors while maintaining clean abstractions.
+- **Autonomous Inter-Agent Delegation**: Agents can autonomously delegate tasks and inquire amongst themselves, enabling complex problem-solving in real-world scenarios.
+- **Flexible Task Management**: Define and customize tasks with granular control, from simple operations to complex multi-step processes.
+- **Production-Grade Architecture**: Support for both high-level abstractions and low-level customization, with robust error handling and state management.
+- **Predictable Results**: Ensure consistent, accurate outputs through programmatic guardrails, agent training capabilities, and flow-based execution control. See our [documentation on guardrails](https://docs.crewai.com/how-to/guardrails/) for implementation details.
+- **Model Flexibility**: Run your crew using OpenAI or open source models with production-ready integrations. See [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) for detailed configuration options.
+- **Event-Driven Flows**: Build complex, real-world workflows with precise control over execution paths, state management, and conditional logic.
+- **Process Orchestration**: Achieve any workflow pattern through flows - from simple sequential and hierarchical processes to complex, custom orchestration patterns with conditional branching and parallel execution.
@@ -305,6 +342,98 @@ You can test different real life examples of AI crews in the [CrewAI-examples re

[](https://www.youtube.com/watch?v=e0Uj4yWdaAg "Stock Analysis")

### Using Crews and Flows Together

CrewAI's power truly shines when combining Crews with Flows to create sophisticated automation pipelines. Here's how you can orchestrate multiple Crews within a Flow:

```python
from crewai.flow.flow import Flow, listen, start, router
from crewai import Crew, Agent, Task, Process  # Process is needed for Process.sequential below
from pydantic import BaseModel

# Define structured state for precise control
class MarketState(BaseModel):
    sentiment: str = "neutral"
    confidence: float = 0.0
    recommendations: list = []

class AdvancedAnalysisFlow(Flow[MarketState]):
    @start()
    def fetch_market_data(self):
        # Demonstrate low-level control with structured state
        self.state.sentiment = "analyzing"
        return {"sector": "tech", "timeframe": "1W"}  # These parameters match the task description template

    @listen(fetch_market_data)
    def analyze_with_crew(self, market_data):
        # Show crew agency through specialized roles
        analyst = Agent(
            role="Senior Market Analyst",
            goal="Conduct deep market analysis with expert insight",
            backstory="You're a veteran analyst known for identifying subtle market patterns"
        )
        researcher = Agent(
            role="Data Researcher",
            goal="Gather and validate supporting market data",
            backstory="You excel at finding and correlating multiple data sources"
        )

        analysis_task = Task(
            description="Analyze {sector} sector data for the past {timeframe}",
            expected_output="Detailed market analysis with confidence score",
            agent=analyst
        )
        research_task = Task(
            description="Find supporting data to validate the analysis",
            expected_output="Corroborating evidence and potential contradictions",
            agent=researcher
        )

        # Demonstrate crew autonomy
        analysis_crew = Crew(
            agents=[analyst, researcher],
            tasks=[analysis_task, research_task],
            process=Process.sequential,
            verbose=True
        )
        return analysis_crew.kickoff(inputs=market_data)  # Pass market_data as named inputs

    @router(analyze_with_crew)
    def determine_next_steps(self):
        # Show flow control with conditional routing
        if self.state.confidence > 0.8:
            return "high_confidence"
        elif self.state.confidence > 0.5:
            return "medium_confidence"
        return "low_confidence"

    @listen("high_confidence")
    def execute_strategy(self):
        # Demonstrate complex decision making
        strategy_expert = Agent(
            role="Strategy Expert",
            goal="Develop optimal market strategy",
            backstory="You turn market analysis into actionable plans"
        )
        strategy_crew = Crew(
            agents=[strategy_expert],
            tasks=[
                Task(
                    description="Create detailed strategy based on analysis",
                    expected_output="Step-by-step action plan",
                    agent=strategy_expert
                )
            ]
        )
        return strategy_crew.kickoff()

    @listen("medium_confidence", "low_confidence")
    def request_additional_analysis(self):
        self.state.recommendations.append("Gather more data")
        return "Additional analysis required"
```
This example demonstrates how to:
1. Use Python code for basic data operations
2. Create and execute Crews as steps in your workflow
3. Use Flow decorators to manage the sequence of operations
4. Implement conditional branching based on Crew results

## Connecting Your Crew to a Model

CrewAI supports using various LLMs through a variety of connection options. By default, your agents will use the OpenAI API when querying the model. However, there are several other ways to allow your agents to connect to models. For example, you can configure your agents to use a local model via the Ollama tool.
@@ -313,9 +442,13 @@ Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-

## How CrewAI Compares

-**CrewAI's Advantage**: CrewAI is built with production in mind. It offers the flexibility of Autogen's conversational agents and the structured process approach of ChatDev, but without the rigidity. CrewAI's processes are designed to be dynamic and adaptable, fitting seamlessly into both development and production workflows.
+**CrewAI's Advantage**: CrewAI combines autonomous agent intelligence with precise workflow control through its unique Crews and Flows architecture. The framework excels at both high-level orchestration and low-level customization, enabling complex, production-grade systems with granular control.

-- **Autogen**: While Autogen does well in creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
+- **LangGraph**: While LangGraph provides a foundation for building agent workflows, its approach requires significant boilerplate code and complex state management patterns. The framework's tight coupling with LangChain can limit flexibility when implementing custom agent behaviors or integrating with external systems.
+
+*P.S. CrewAI demonstrates significant performance advantages over LangGraph, executing 5.76x faster in certain cases like this QA task example ([see comparison](https://github.com/crewAIInc/crewAI-examples/tree/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/QA%20Agent)) and achieving higher evaluation scores with faster completion times in certain coding tasks, like in this example ([detailed analysis](https://github.com/crewAIInc/crewAI-examples/blob/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/Coding%20Assistant/coding_assistant_eval.ipynb)).*
+
+- **Autogen**: While Autogen excels at creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.

- **ChatDev**: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications.
@@ -440,5 +573,8 @@ A: CrewAI uses anonymous telemetry to collect usage data for improvement purpose

### Q: Where can I find examples of CrewAI in action?
A: You can find various real-life examples in the [CrewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), including trip planners, stock analysis tools, and more.

### Q: What is the difference between Crews and Flows?
A: Crews and Flows serve different but complementary purposes in CrewAI. Crews are teams of AI agents working together to accomplish specific tasks through role-based collaboration, delivering accurate and predictable results. Flows, on the other hand, are event-driven workflows that can orchestrate both Crews and regular Python code, allowing you to build complex automation pipelines with secure state management and conditional execution paths.

### Q: How can I contribute to CrewAI?
A: Contributions are welcome! You can fork the repository, create a new branch for your feature, add your improvement, and send a pull request. Check the Contribution section in the README for more details.
@@ -35,10 +35,41 @@ By default, the memory system is disabled, and you can ensure it is active by se

The memory will use OpenAI embeddings by default, but you can change it by setting `embedder` to a different model.
It's also possible to initialize the memory instance with your own instance.

-The 'embedder' only applies to **Short-Term Memory** which uses Chroma for RAG.
-The **Long-Term Memory** uses SQLite3 to store task results. Currently, there is no way to override these storage implementations.
-The data storage files are saved into a platform-specific location found using the appdirs package,
-and the name of the project can be overridden using the **CREWAI_STORAGE_DIR** environment variable.
+Each memory type uses a different storage implementation:
+
+- **Short-Term Memory**: Uses Chroma for RAG (Retrieval-Augmented Generation) with configurable embeddings
+- **Long-Term Memory**: Uses SQLite3 for persistent storage of task results and metadata
+- **Entity Memory**: Uses either RAG storage (default) or Mem0 for entity information
+- **User Memory**: Available through the Mem0 integration for personalized experiences
+
+The data storage files are saved in a platform-specific location determined by the appdirs package.
+You can override the storage location using the **CREWAI_STORAGE_DIR** environment variable.
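The override described above behaves roughly like this sketch (the fallback path shown is illustrative; the real default is resolved per-platform by appdirs):

```python
import os
from pathlib import Path

def resolve_storage_dir(project_name: str = "CrewAI") -> Path:
    """Sketch of storage-directory resolution: env var override wins."""
    # CREWAI_STORAGE_DIR takes precedence over the appdirs default
    override = os.environ.get("CREWAI_STORAGE_DIR")
    if override:
        return Path(override)
    # Illustrative fallback only; appdirs picks the right directory per OS
    return Path.home() / ".local" / "share" / project_name

os.environ["CREWAI_STORAGE_DIR"] = "/tmp/my_crew_storage"
print(resolve_storage_dir())  # /tmp/my_crew_storage
```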
### Storage Implementation Details

#### Short-Term Memory
- Default: ChromaDB with RAG
- Configurable embeddings (OpenAI, Ollama, Google AI, etc.)
- Supports custom embedding functions
- Optional Mem0 integration for enhanced capabilities

#### Long-Term Memory
- SQLite3 storage with a structured schema
- Stores task descriptions, metadata, timestamps, and quality scores
- Supports querying by task description with configurable limits
- Includes error handling and reset capabilities

#### Entity Memory
- Default: RAG storage with ChromaDB
- Optional Mem0 integration
- Structured entity storage (name, type, description)
- Supports metadata and relationship mapping

#### User Memory
- Requires the Mem0 integration
- Stores user preferences and interaction history
- Supports personalized context building
- Configurable through `memory_config`

### Example: Configuring Memory for a Crew
@@ -93,11 +124,41 @@ my_crew = Crew(
)
```

-## Integrating Mem0 for Enhanced User Memory
+## Integrating Mem0 Provider

-[Mem0](https://mem0.ai/) is a self-improving memory layer for LLM applications, enabling personalized AI experiences.
+[Mem0](https://mem0.ai/) is a self-improving memory layer for LLM applications that can enhance all memory types in CrewAI. It provides advanced features for storing and retrieving contextual information.

-To include user-specific memory you can get your API key [here](https://app.mem0.ai/dashboard/api-keys) and refer to the [docs](https://docs.mem0.ai/platform/quickstart#4-1-create-memories) for adding user preferences.
+### Configuration

To use Mem0, you'll need:
1. An API key from the [Mem0 Dashboard](https://app.mem0.ai/dashboard/api-keys)
2. The `mem0ai` package installed: `pip install mem0ai`

You can configure Mem0 in two ways:

1. **Environment Variable**:
```bash
export MEM0_API_KEY="your-api-key"
```

2. **Memory Config**:
```python
memory_config = {
    "provider": "mem0",
    "config": {
        "api_key": "your-api-key",
        "user_id": "user123"  # Required for user memory
    }
}
```

### Memory Type Support

Mem0 can be used with all memory types:
- **Short-Term Memory**: Enhanced context retention
- **Long-Term Memory**: Improved task history storage
- **Entity Memory**: Better entity relationship tracking
- **User Memory**: Personalized user preferences and history

```python Code
@@ -135,9 +196,118 @@ crew = Crew(
```
-## Additional Embedding Providers
+## Memory Interface Details

-### Using OpenAI embeddings (already default)
+When implementing custom memory storage, be aware of these interface requirements:

### Base Memory Class
```python
from typing import Any, Dict, List, Optional

class Memory:
    def save(
        self,
        value: Any,
        metadata: Optional[Dict[str, Any]] = None,
        agent: Optional[str] = None,
    ) -> None:
        """Save data to memory with optional metadata and agent information."""
        pass

    def search(
        self,
        query: str,
        limit: int = 3,
        score_threshold: float = 0.35,
    ) -> List[Any]:
        """Search memory with configurable limit and relevance threshold."""
        pass
```
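A minimal in-memory implementation of this interface, as a toy sketch for illustration - it uses substring matching instead of embedding similarity, and is not CrewAI's actual storage:

```python
from typing import Any, Dict, List, Optional

class InMemoryStore:
    """Toy Memory implementation: list-backed, substring search."""

    def __init__(self) -> None:
        self._items: List[Dict[str, Any]] = []

    def save(
        self,
        value: Any,
        metadata: Optional[Dict[str, Any]] = None,
        agent: Optional[str] = None,
    ) -> None:
        self._items.append({"value": value, "metadata": metadata or {}, "agent": agent})

    def search(
        self,
        query: str,
        limit: int = 3,
        score_threshold: float = 0.35,
    ) -> List[Any]:
        # A real implementation would rank by embedding similarity and
        # drop results scoring below score_threshold.
        hits = [i for i in self._items if query.lower() in str(i["value"]).lower()]
        return hits[:limit]

store = InMemoryStore()
store.save("Tesla entered the energy storage market", agent="analyst")
store.save("Apple released new chips")
print(len(store.search("energy")))  # 1
```

The signatures mirror the base class above, so a store like this could stand in wherever the interface is expected while prototyping.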
### Memory Type Specifics

1. **LongTermMemory**:
   ```python
   class LongTermMemoryItem:
       task: str                     # Task description
       expected_output: str          # Expected task output
       metadata: Dict[str, Any]      # Additional metadata
       agent: Optional[str] = None   # Associated agent
       datetime: str                 # Timestamp
       quality: float                # Task quality score (0-1)
   ```
   - Saves task results with quality scores and timestamps
   - Search returns historical task data ordered by date
   - Note: Implementation has type hint differences from the base Memory class

2. **EntityMemory**:
   ```python
   class EntityMemoryItem:
       name: str                     # Entity name
       type: str                     # Entity type
       description: str              # Entity description
       metadata: Dict[str, Any]      # Additional metadata
       agent: Optional[str] = None   # Associated agent
   ```
   - Saves entity information with type and description
   - Search supports entity relationship queries
   - Note: Implementation has type hint differences from the base Memory class

3. **ShortTermMemory**:
   ```python
   class ShortTermMemoryItem:
       data: Any                     # Memory content
       metadata: Dict[str, Any]      # Additional metadata
       agent: Optional[str] = None   # Associated agent
   ```
   - Saves recent interactions with metadata
   - Search supports semantic similarity
   - Follows the base Memory class interface exactly

### Error Handling and Reset

Each memory type includes error handling and reset capabilities:

```python
# Reset short-term memory
try:
    crew.short_term_memory.reset()
except Exception as e:
    print(f"Error resetting short-term memory: {e}")

# Reset entity memory
try:
    crew.entity_memory.reset()
except Exception as e:
    print(f"Error resetting entity memory: {e}")

# Reset long-term memory
try:
    crew.long_term_memory.reset()
except Exception as e:
    print(f"Error resetting long-term memory: {e}")
```

Common error scenarios:
- Database connection issues
- File permission errors
- Storage initialization failures
- Embedding generation errors

### Implementation Notes

1. **Type Hint Considerations**:
   - `LongTermMemory.save()` expects a `LongTermMemoryItem`
   - `EntityMemory.save()` expects an `EntityMemoryItem`
   - `ShortTermMemory.save()` follows the base Memory interface

2. **Storage Reset Behavior**:
   - Short-term: Clears the ChromaDB collection
   - Long-term: Truncates the SQLite table
   - Entity: Clears entity storage
   - Mem0: Provider-specific reset

## Embedding Providers

CrewAI supports multiple embedding providers for RAG-based memory types:

```python Code
from crewai import Crew, Agent, Task, Process
```
docs/how-to/mlflow-observability.mdx (new file, 196 lines)

@@ -0,0 +1,196 @@
---
title: Agent Monitoring with MLFlow
description: How to monitor and track CrewAI Agents using MLFlow for experiment tracking and model registry.
icon: chart-line
---

# Introduction

MLFlow is an open-source platform for managing the end-to-end machine learning lifecycle. When integrated with CrewAI, it provides powerful capabilities for tracking agent performance, logging metrics, and managing experiments. This guide demonstrates how to implement precise monitoring and tracking of your CrewAI agents using MLFlow.

## MLFlow Integration

MLFlow offers comprehensive experiment tracking and model registry capabilities that complement CrewAI's agent-based workflows:

- **Experiment Tracking**: Monitor agent performance metrics and execution patterns
- **Metric Logging**: Track costs, latency, and success rates
- **Artifact Management**: Store and version agent configurations and outputs
- **Model Registry**: Maintain different versions of agent configurations

### Features

- **Real-time Monitoring**: Track agent performance as tasks are executed
- **Metric Collection**: Gather detailed statistics on agent operations
- **Experiment Organization**: Group related agent runs for comparison
- **Resource Tracking**: Monitor computational and token usage
- **Custom Metrics**: Define and track domain-specific performance indicators

## Getting Started
<Steps>
  <Step title="Install Dependencies">
    Install MLFlow alongside CrewAI:
    ```bash
    pip install mlflow crewai
    ```
  </Step>
  <Step title="Configure MLFlow">
    Set up MLFlow tracking in your environment:
    ```python
    import mlflow
    from crewai import Agent, Task, Crew

    # Configure MLFlow tracking
    mlflow.set_tracking_uri("http://localhost:5000")
    mlflow.set_experiment("crewai-agents")
    ```
  </Step>
  <Step title="Create Tracking Callbacks">
    Implement MLFlow callbacks for monitoring:
    ```python
    import time

    class MLFlowCallback:
        def __init__(self):
            self.start_time = time.time()

        def on_step(self, agent, task, step_number, step_input, step_output):
            mlflow.log_metrics({
                "step_number": step_number,
                "step_duration": time.time() - self.start_time,
                "output_length": len(step_output)
            })

        def on_task(self, agent, task, output):
            mlflow.log_metrics({
                "task_duration": time.time() - self.start_time,
                "final_output_length": len(output)
            })
            mlflow.log_param("task_description", task.description)
    ```
  </Step>
  <Step title="Integrate with CrewAI">
    Apply MLFlow tracking to your CrewAI agents:
    ```python
    # Create MLFlow callback
    mlflow_callback = MLFlowCallback()

    # Create agent with tracking
    researcher = Agent(
        role='Researcher',
        goal='Conduct market analysis',
        backstory='Expert market researcher with deep analytical skills',
        step_callback=mlflow_callback.on_step
    )

    # Create crew with tracking
    crew = Crew(
        agents=[researcher],
        tasks=[...],
        task_callback=mlflow_callback.on_task
    )

    # Execute with MLFlow tracking
    with mlflow.start_run():
        result = crew.kickoff()
    ```
  </Step>
</Steps>
## Advanced Usage

### Custom Metric Tracking

Track specific metrics relevant to your use case:

```python
class CustomMLFlowCallback:
    def __init__(self):
        self.metrics = {}

    def on_step(self, agent, task, step_number, step_input, step_output):
        # Track custom metrics
        self.metrics[f"agent_{agent.role}_steps"] = step_number

        # Log tool usage
        if hasattr(task, 'tools'):
            for tool in task.tools:
                mlflow.log_param(f"tool_used_{step_number}", tool.name)

        # Approximate token usage by output length
        mlflow.log_metric(
            f"step_{step_number}_tokens",
            len(step_output)
        )
```

### Experiment Organization

Group related experiments for better analysis:

```python
def run_agent_experiment(agent_config, task_config):
    with mlflow.start_run(
        run_name=f"agent_experiment_{agent_config['role']}"
    ) as run:
        # Log configuration
        mlflow.log_params(agent_config)
        mlflow.log_params(task_config)

        # Create and run agent
        agent = Agent(**agent_config)
        task = Task(**task_config)

        # Execute and log results
        result = agent.execute(task)
        mlflow.log_metric("execution_time", task.execution_time)

        return result
```
## Best Practices

1. **Structured Logging**
   - Use consistent metric names across experiments
   - Group related metrics using common prefixes
   - Include timestamps for temporal analysis

2. **Resource Monitoring**
   - Track token usage per agent and task
   - Monitor execution time for performance optimization
   - Log tool usage patterns and success rates

3. **Experiment Organization**
   - Use meaningful experiment names
   - Group related runs under the same experiment
   - Tag runs with relevant metadata

4. **Performance Optimization**
   - Monitor agent efficiency metrics
   - Track resource utilization
   - Identify bottlenecks in task execution
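The first practice - consistent, prefix-grouped metric names with timestamps - can be captured in a small helper. This is an illustrative sketch; the naming scheme and function names are assumptions, not part of MLFlow's API:

```python
import time

def metric_name(prefix: str, agent_role: str, metric: str) -> str:
    """Build a consistent, prefix-grouped metric name (hypothetical convention)."""
    safe_role = agent_role.lower().replace(" ", "_")
    return f"{prefix}.{safe_role}.{metric}"

def timestamped(metrics: dict) -> dict:
    """Attach a wall-clock timestamp so runs can be analyzed temporally."""
    return {**metrics, "logged_at": time.time()}

print(metric_name("crew", "Senior Market Analyst", "steps"))
# crew.senior_market_analyst.steps
```

The resulting dict can then be passed straight to `mlflow.log_metrics(...)` inside a callback.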
## Viewing Results

Access your MLFlow dashboard to analyze agent performance:

1. Start the MLFlow UI:
   ```bash
   mlflow ui --port 5000
   ```

2. Open your browser and navigate to `http://localhost:5000`

3. View experiment results including:
   - Agent performance metrics
   - Task execution times
   - Resource utilization
   - Custom metrics and parameters

## Security Considerations

- Ensure sensitive data is properly sanitized before logging
- Use appropriate access controls for the MLFlow server
- Monitor and audit logged information regularly

## Conclusion

MLFlow integration provides comprehensive monitoring and experimentation capabilities for CrewAI agents. By following these guidelines and best practices, you can effectively track, analyze, and optimize your agent-based workflows while maintaining security and efficiency.
@@ -100,7 +100,8 @@
        "how-to/conditional-tasks",
        "how-to/agentops-observability",
        "how-to/langtrace-observability",
-       "how-to/openlit-observability"
+       "how-to/openlit-observability",
+       "how-to/mlflow-observability"
      ]
    },
    {

@@ -163,4 +164,4 @@
    "linkedin": "https://www.linkedin.com/company/crewai-inc",
    "youtube": "https://youtube.com/@crewAIInc"
  }
}
}
@@ -31,7 +31,7 @@ Remember that when using this tool, the code must be generated by the Agent itse
The code must be Python 3 code, and the first run will take some time
because it needs to build the Docker image.

-```python Code
+```python
from crewai import Agent
from crewai_tools import CodeInterpreterTool

@@ -43,7 +43,7 @@ Agent(

We also provide a simple way to use it directly from the Agent.

-```python Code
+```python
from crewai import Agent

agent = Agent(
@@ -27,7 +27,7 @@ The following example demonstrates how to initialize the tool and execute a gith

1. Initialize Composio tools

-```python Code
+```python
from composio import App
from crewai_tools import ComposioTool
from crewai import Agent, Task

@@ -38,19 +38,19 @@ tools = [ComposioTool.from_action(action=Action.GITHUB_ACTIVITY_STAR_REPO_FOR_AU

If you don't know which action you want to use, use `from_app` with the `tags` filter to get relevant actions:

-```python Code
+```python
tools = ComposioTool.from_app(App.GITHUB, tags=["important"])
```

Or use `use_case` to search for relevant actions:

-```python Code
+```python
tools = ComposioTool.from_app(App.GITHUB, use_case="Star a github repository")
```

2. Define agent

-```python Code
+```python
crewai_agent = Agent(
    role="Github Agent",
    goal="You take action on Github using Github APIs",

@@ -65,7 +65,7 @@ crewai_agent = Agent(

3. Execute task

-```python Code
+```python
task = Task(
    description="Star a repo ComposioHQ/composio on GitHub",
    agent=crewai_agent,

@@ -75,4 +75,4 @@ task = Task(

task.execute()
```

* A more detailed list of tools can be found [here](https://app.composio.dev)
@@ -22,7 +22,7 @@ pip install 'crewai[tools]'

Remember that when using this tool, the text must be generated by the Agent itself. The text must be a description of the image you want to generate.

```python Code
```python
from crewai_tools import DallETool

Agent(
@@ -31,9 +31,16 @@ Agent(
)
```

If needed you can also tweak the parameters of the DALL-E model by passing them as arguments to the `DallETool` class. For example:
## Arguments

```python Code
- `model`: The DALL-E model to use (e.g., "dall-e-3")
- `size`: Image size (e.g., "1024x1024")
- `quality`: Image quality ("standard" or "hd")
- `n`: Number of images to generate

## Configuration Example

```python
from crewai_tools import DallETool

dalle_tool = DallETool(model="dall-e-3",
@@ -48,4 +55,4 @@ Agent(
```

The parameters are based on the `client.images.generate` method from the OpenAI API. For more information on the parameters,
please refer to the [OpenAI API documentation](https://platform.openai.com/docs/guides/images/introduction?lang=python).
please refer to the [OpenAI API documentation](https://platform.openai.com/docs/guides/images/introduction?lang=python).

@@ -20,11 +20,11 @@ Install the crewai_tools package to use the `FileWriterTool` in your projects:
pip install 'crewai[tools]'
```

## Example
## Usage

To get started with the `FileWriterTool`:
Here's how to use the `FileWriterTool`:

```python Code
```python
from crewai_tools import FileWriterTool

# Initialize the tool
@@ -45,4 +45,4 @@ print(result)

By integrating the `FileWriterTool` into your crews, the agents can execute the process of writing content to files and creating directories.
This tool is essential for tasks that require saving output data, creating structured file systems, and more. By adhering to the setup and usage guidelines provided,
incorporating this tool into projects is straightforward and efficient.
incorporating this tool into projects is straightforward and efficient.

@@ -23,11 +23,11 @@ To install the JSONSearchTool, use the following pip command:
pip install 'crewai[tools]'
```

## Usage Examples
## Example

Here are updated examples on how to utilize the JSONSearchTool effectively for searching within JSON files. These examples take into account the current implementation and usage patterns identified in the codebase.

```python Code
```python
from crewai.json_tools import JSONSearchTool # Updated import path

# General JSON content search
@@ -47,7 +47,7 @@ tool = JSONSearchTool(json_path='./path/to/your/file.json')

The JSONSearchTool supports extensive customization through a configuration dictionary. This allows users to select different models for embeddings and summarization based on their requirements.

```python Code
```python
tool = JSONSearchTool(
config={
"llm": {
@@ -70,4 +70,4 @@ tool = JSONSearchTool(
},
}
)
```
```

@@ -22,11 +22,13 @@ Before using the MDX Search Tool, ensure the `crewai_tools` package is installed
pip install 'crewai[tools]'
```

## Usage Example
## Example

To use the MDX Search Tool, you must first set up the necessary environment variables. Then, integrate the tool into your crewAI project to begin your market research. Below is a basic example of how to do this:
To use the MDX Search Tool, you must first set up the necessary environment variables for your chosen LLM and embeddings providers (e.g., OpenAI API key if using the default configuration). Then, integrate the tool into your crewAI project as shown in the examples below.

```python Code
The tool will perform semantic search using RAG technology to find and extract relevant content from your MDX files based on the search query:

```python
from crewai_tools import MDXSearchTool

# Initialize the tool to search any MDX content it learns about during execution
@@ -40,13 +42,16 @@ tool = MDXSearchTool(mdx='path/to/your/document.mdx')

## Parameters

- mdx: **Optional**. Specifies the MDX file path for the search. It can be provided during initialization.
- `mdx`: **Optional**. Specifies the MDX file path for the search. It can be provided during initialization or when running the search.
- `search_query`: **Required**. The query string to search for within the MDX content.

The tool inherits from `RagTool` which provides advanced RAG (Retrieval-Augmented Generation) capabilities for semantic search within MDX content.

## Customization of Model and Embeddings

The tool defaults to using OpenAI for embeddings and summarization. For customization, utilize a configuration dictionary as shown below:

```python Code
```python
tool = MDXSearchTool(
config=dict(
llm=dict(
@@ -70,4 +75,4 @@ tool = MDXSearchTool(
),
)
)
```
```

@@ -27,7 +27,7 @@ pip install 'crewai[tools]'

The following example demonstrates how to initialize the tool and execute a search with a given query:

```python Code
```python
from crewai_tools import SerperDevTool

# Initialize the tool for internet searching capabilities
@@ -44,22 +44,27 @@ To effectively use the `SerperDevTool`, follow these steps:

## Parameters

The `SerperDevTool` comes with several parameters that will be passed to the API :
The `SerperDevTool` comes with several parameters that can be configured:

- **search_url**: The URL endpoint for the search API. (Default is `https://google.serper.dev/search`)
- **base_url**: The base URL for the Serper API. Default is `https://google.serper.dev`.
- **n_results**: Number of search results to return. Default is `10`.
- **save_file**: Boolean flag to save search results to a file. Default is `False`.
- **search_type**: Type of search to perform. Can be either `search` (default) or `news`.

Additional parameters that can be passed during search:
- **country**: Optional. Specify the country for the search results.
- **location**: Optional. Specify the location for the search results.
- **locale**: Optional. Specify the locale for the search results.
- **n_results**: Number of search results to return. Default is `10`.

The values for `country`, `location`, `locale` and `search_url` can be found on the [Serper Playground](https://serper.dev/playground).
The values for `country`, `location`, and `locale` can be found on the [Serper Playground](https://serper.dev/playground).

Note: The tool requires the `SERPER_API_KEY` environment variable to be set with your Serper API key.
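Since a missing key only surfaces once the agent actually calls the tool, it can help to check the environment up front. A small hypothetical helper (not part of `crewai_tools`) sketching that fail-fast check:

```python
import os

def require_serper_key() -> str:
    """Fail fast with a clear message when SERPER_API_KEY is missing.

    Illustrative helper, not part of crewai_tools.
    """
    key = os.environ.get("SERPER_API_KEY")
    if not key:
        raise RuntimeError(
            "SERPER_API_KEY is not set; get a key at https://serper.dev "
            "and export it before constructing SerperDevTool."
        )
    return key
```

Calling this once at startup turns a mid-run tool failure into an immediate, actionable error.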
## Example with Parameters

Here is an example demonstrating how to use the tool with additional parameters:

```python Code
```python
from crewai_tools import SerperDevTool

tool = SerperDevTool(
@@ -71,18 +76,29 @@ print(tool.run(search_query="ChatGPT"))

# Using Tool: Search the internet

# Search results: Title: Role of chat gpt in public health
# Link: https://link.springer.com/article/10.1007/s10439-023-03172-7
# Snippet: … ChatGPT in public health. In this overview, we will examine the potential uses of ChatGPT in
# ---
# Title: Potential use of chat gpt in global warming
# Link: https://link.springer.com/article/10.1007/s10439-023-03171-8
# Snippet: … as ChatGPT, have the potential to play a critical role in advancing our understanding of climate
# ---
# Search results:
{
"searchParameters": {
"q": "ChatGPT",
"type": "search"
},
"organic": [
{
"title": "Role of chat gpt in public health",
"link": "https://link.springer.com/article/10.1007/s10439-023-03172-7",
"snippet": "ChatGPT in public health. In this overview, we will examine the potential uses of ChatGPT in"
},
{
"title": "Potential use of chat gpt in global warming",
"link": "https://link.springer.com/article/10.1007/s10439-023-03171-8",
"snippet": "as ChatGPT, have the potential to play a critical role in advancing our understanding of climate"
}
]
}

```
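A JSON payload with this shape can be post-processed with the standard library alone. A minimal sketch (the embedded payload is abridged from the sample output shown here, not a live API response):

```python
import json

# Abridged sample payload matching the structure of the tool's output above
raw = '''
{
  "searchParameters": {"q": "ChatGPT", "type": "search"},
  "organic": [
    {"title": "Role of chat gpt in public health",
     "link": "https://link.springer.com/article/10.1007/s10439-023-03172-7",
     "snippet": "ChatGPT in public health..."},
    {"title": "Potential use of chat gpt in global warming",
     "link": "https://link.springer.com/article/10.1007/s10439-023-03171-8",
     "snippet": "as ChatGPT, have the potential..."}
  ]
}
'''

results = json.loads(raw)
# Collect (title, link) pairs from the organic results
hits = [(item["title"], item["link"]) for item in results["organic"]]
for title, link in hits:
    print(f"{title} -> {link}")
```

The same traversal works on the full response; only the `organic` list is assumed here, since other sections (news, knowledge graph) vary with `search_type`.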
```python Code
```python
from crewai_tools import SerperDevTool

tool = SerperDevTool(

@@ -26,7 +26,7 @@ pip install spider-client 'crewai[tools]'
This example shows you how you can use the `SpiderTool` to enable your agent to scrape and crawl websites.
The data returned from the Spider API is already LLM-ready, so no need to do any cleaning there.

```python Code
```python
from crewai_tools import SpiderTool

def main():
@@ -89,4 +89,4 @@ if __name__ == "__main__":
| **query_selector** | `string` | CSS query selector for content extraction from markup. |
| **full_resources** | `bool` | Downloads all resources linked to the website. |
| **request_timeout** | `int` | Timeout in seconds for requests (5-60). Default is `30`. |
| **run_in_background** | `bool` | Runs the request in the background, useful for data storage and triggering dashboard crawls. No effect if `storageless` is set. |
| **run_in_background** | `bool` | Runs the request in the background, useful for data storage and triggering dashboard crawls. No effect if `storageless` is set. |
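The documented 5-60 second bound on `request_timeout` can also be enforced client-side before the tool is constructed. A hypothetical validator, not part of `crewai_tools`, shown only to make the constraint from the table explicit:

```python
def validate_request_timeout(value: int, default: int = 30) -> int:
    """Return a request_timeout within the documented 5-60 second range.

    Illustrative helper: out-of-range values fall back to the default (30).
    """
    if not isinstance(value, int):
        raise TypeError(
            f"request_timeout must be an int, got {type(value).__name__}"
        )
    if 5 <= value <= 60:
        return value
    return default
```

Silently falling back to the default is one choice; raising on out-of-range values would be equally reasonable for stricter configurations.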
@@ -19,11 +19,11 @@ Install the crewai_tools package
pip install 'crewai[tools]'
```

## Usage
## Example

In order to use the VisionTool, the OpenAI API key should be set in the environment variable `OPENAI_API_KEY`.
To use the VisionTool, first ensure the OpenAI API key is set in the environment variable `OPENAI_API_KEY`. Here's an example:

```python Code
```python
from crewai_tools import VisionTool

vision_tool = VisionTool()

@@ -27,11 +27,11 @@ pip install 'crewai[tools]'

This command installs the necessary dependencies to ensure that once the tool is fully integrated, users can start using it immediately.

## Example Usage
## Example

Below are examples of how the WebsiteSearchTool could be utilized in different scenarios. Please note, these examples are illustrative and represent planned functionality:

```python Code
```python
from crewai_tools import WebsiteSearchTool

# Example of initiating tool that agents can use
@@ -52,7 +52,7 @@ tool = WebsiteSearchTool(website='https://example.com')
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:

```python Code
```python
tool = WebsiteSearchTool(
config=dict(
llm=dict(
@@ -74,4 +74,4 @@ tool = WebsiteSearchTool(
),
)
)
```
```

@@ -29,7 +29,9 @@ pip install 'crewai[tools]'
Here are two examples demonstrating how to use the XMLSearchTool.
The first example shows searching within a specific XML file, while the second example illustrates initiating a search without predefining an XML path, providing flexibility in search scope.

```python Code
Note: The tool uses RAG (Retrieval-Augmented Generation) to perform semantic search within XML content, so results will include relevant context from the XML file based on the search query.

```python
from crewai_tools import XMLSearchTool

# Allow agents to search within any XML file's content
@@ -47,12 +49,15 @@ tool = XMLSearchTool(xml='path/to/your/xmlfile.xml')

- `xml`: This is the path to the XML file you wish to search.
It is an optional parameter during the tool's initialization but must be provided either at initialization or as part of the `run` method's arguments to execute a search.
- `search_query`: The query string to search for within the XML content. This is a required parameter when running the search.

The tool inherits from `RagTool` which provides advanced RAG (Retrieval-Augmented Generation) capabilities for semantic search within XML content.

## Custom model and embeddings

By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:

```python Code
```python
tool = XMLSearchTool(
config=dict(
llm=dict(
@@ -74,4 +79,4 @@ tool = XMLSearchTool(
),
)
)
```
```

@@ -1,9 +1,6 @@
from importlib.metadata import version as get_version
from typing import Optional

from typing import Union

from crewai.llm import LLM
import click

from crewai.cli.add_crew_to_flow import add_crew_to_flow
@@ -183,15 +180,8 @@ def reset_memories(
default="gpt-4o-mini",
help="LLM Model to run the tests on the Crew. For now only accepting only OpenAI models.",
)
def test(n_iterations: int, model: Union[str, LLM]):
"""Test the crew and evaluate the results using either a model name or LLM instance.

Args:
n_iterations: The number of iterations to run the test.
model: Either a model name string or an LLM instance to use for evaluating
the performance of the agents. If a string is provided, it will be used
to create an LLM instance.
"""
def test(n_iterations: int, model: str):
"""Test the crew and evaluate the results."""
click.echo(f"Testing the crew for {n_iterations} iterations with model {model}")
evaluate_crew(n_iterations, model)

@@ -18,9 +18,6 @@ from pydantic import (
)
from pydantic_core import PydanticCustomError

from typing import Union

from crewai.llm import LLM
from crewai.agent import Agent
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.agents.cache import CacheHandler
@@ -1078,30 +1075,19 @@ class Crew(BaseModel):
def test(
self,
n_iterations: int,
openai_model_name: Optional[Union[str, LLM]] = None,
openai_model_name: Optional[str] = None,
inputs: Optional[Dict[str, Any]] = None,
) -> None:
"""Test and evaluate the Crew with the given inputs for n iterations.

Args:
n_iterations: The number of iterations to run the test.
openai_model_name: Either a model name string or an LLM instance to use for evaluating
the performance of the agents. If a string is provided, it will be used to create
an LLM instance.
inputs: The inputs to use for the test.

Raises:
ValueError: If openai_model_name is not a string or LLM instance.
"""
"""Test and evaluate the Crew with the given inputs for n iterations concurrently using concurrent.futures."""
test_crew = self.copy()

self._test_execution_span = test_crew._telemetry.test_execution_span(
test_crew,
n_iterations,
inputs,
openai_model_name,
)
evaluator = CrewEvaluator(test_crew, openai_model_name)
openai_model_name, # type: ignore[arg-type]
) # type: ignore[arg-type]
evaluator = CrewEvaluator(test_crew, openai_model_name) # type: ignore[arg-type]

for i in range(1, n_iterations + 1):
evaluator.set_iteration(i)

@@ -14,13 +14,13 @@ class Knowledge(BaseModel):
Knowledge is a collection of sources and setup for the vector store to save and query relevant context.
Args:
sources: List[BaseKnowledgeSource] = Field(default_factory=list)
storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
storage: Optional[KnowledgeStorage] = Field(default=None)
embedder_config: Optional[Dict[str, Any]] = None
"""

sources: List[BaseKnowledgeSource] = Field(default_factory=list)
model_config = ConfigDict(arbitrary_types_allowed=True)
storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
storage: Optional[KnowledgeStorage] = Field(default=None)
embedder_config: Optional[Dict[str, Any]] = None
collection_name: Optional[str] = None

@@ -49,8 +49,13 @@ class Knowledge(BaseModel):
"""
Query across all knowledge sources to find the most relevant information.
Returns the top_k most relevant chunks.

Raises:
ValueError: If storage is not initialized.
"""

if self.storage is None:
raise ValueError("Storage is not initialized.")

results = self.storage.search(
query,
limit,

@@ -22,7 +22,7 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
default_factory=list, description="The path to the file"
)
content: Dict[Path, str] = Field(init=False, default_factory=dict)
storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
storage: Optional[KnowledgeStorage] = Field(default=None)
safe_file_paths: List[Path] = Field(default_factory=list)

@field_validator("file_path", "file_paths", mode="before")
@@ -62,7 +62,10 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):

def _save_documents(self):
"""Save the documents to the storage."""
self.storage.save(self.chunks)
if self.storage:
self.storage.save(self.chunks)
else:
raise ValueError("No storage found to save documents.")

def convert_to_path(self, path: Union[Path, str]) -> Path:
"""Convert a path to a Path object."""

@@ -16,7 +16,7 @@ class BaseKnowledgeSource(BaseModel, ABC):
chunk_embeddings: List[np.ndarray] = Field(default_factory=list)

model_config = ConfigDict(arbitrary_types_allowed=True)
storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
storage: Optional[KnowledgeStorage] = Field(default=None)
metadata: Dict[str, Any] = Field(default_factory=dict) # Currently unused
collection_name: Optional[str] = Field(default=None)

@@ -46,4 +46,7 @@ class BaseKnowledgeSource(BaseModel, ABC):
Save the documents to the storage.
This method should be called after the chunks and embeddings are generated.
"""
self.storage.save(self.chunks)
if self.storage:
self.storage.save(self.chunks)
else:
raise ValueError("No storage found to save documents.")

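The change in the knowledge-source diffs above (making `storage` optional and failing loudly when documents are saved without it) can be illustrated in isolation. A minimal standalone sketch with stand-in classes, not CrewAI's actual types:

```python
from typing import List, Optional

class InMemoryStorage:
    """Stand-in for a vector-store backend (illustrative only)."""

    def __init__(self) -> None:
        self.saved: List[str] = []

    def save(self, chunks: List[str]) -> None:
        self.saved.extend(chunks)

class KnowledgeSource:
    """Mirrors the pattern in the diff: storage is optional,
    and saving without it raises instead of failing silently."""

    def __init__(self, storage: Optional[InMemoryStorage] = None) -> None:
        self.storage = storage
        self.chunks: List[str] = []

    def _save_documents(self) -> None:
        if self.storage:
            self.storage.save(self.chunks)
        else:
            raise ValueError("No storage found to save documents.")
```

The explicit `ValueError` replaces the default-constructed storage of the old code, so a misconfigured source surfaces at save time rather than writing to an unintended backend.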
@@ -1,10 +1,6 @@
from typing import Union

from crewai.llm import LLM
from collections import defaultdict

from pydantic import BaseModel, Field
from crewai.utilities.logger import Logger
from rich.box import HEAVY_EDGE
from rich.console import Console
from rich.table import Table
@@ -27,7 +23,7 @@ class CrewEvaluator:

Attributes:
crew (Crew): The crew of agents to evaluate.
openai_model_name (Union[str, LLM]): Either a model name string or an LLM instance to use for evaluating the performance of the agents.
openai_model_name (str): The model to use for evaluating the performance of the agents (for now ONLY OpenAI accepted).
tasks_scores (defaultdict): A dictionary to store the scores of the agents for each task.
iteration (int): The current iteration of the evaluation.
"""
@@ -36,29 +32,10 @@ class CrewEvaluator:
run_execution_times: defaultdict = defaultdict(list)
iteration: int = 0

def __init__(self, crew, openai_model_name: Union[str, LLM]):
"""Initialize the CrewEvaluator.

Args:
crew (Crew): The crew to evaluate
openai_model_name (Union[str, LLM]): Either a model name string or an LLM instance
to use for evaluation. If a string is provided, it will be used to create an
LLM instance with default settings. If an LLM instance is provided, its settings
(like temperature) will be preserved.

Raises:
ValueError: If openai_model_name is not a string or LLM instance.
"""
def __init__(self, crew, openai_model_name: str):
self.crew = crew
if not isinstance(openai_model_name, (str, LLM)):
raise ValueError(f"Invalid model type '{type(openai_model_name)}'. Expected str or LLM instance.")
self.model_instance = openai_model_name if isinstance(openai_model_name, LLM) else LLM(model=openai_model_name)
self.openai_model_name = openai_model_name
self._telemetry = Telemetry()
self._logger = Logger()
self._logger.log(
"info",
f"Initializing CrewEvaluator with model: {openai_model_name if isinstance(openai_model_name, str) else openai_model_name.model}"
)
self._setup_for_evaluating()

def _setup_for_evaluating(self) -> None:
@@ -74,7 +51,7 @@ class CrewEvaluator:
),
backstory="Evaluator agent for crew evaluation with precise capabilities to evaluate the performance of the agents in the crew based on the tasks they have performed",
verbose=False,
llm=self.model_instance,
llm=self.openai_model_name,
)

def _evaluation_task(
@@ -204,11 +181,7 @@ class CrewEvaluator:
self.crew,
evaluation_result.pydantic.quality,
current_task._execution_time,
self.model_instance.model,
)
self._logger.log(
"info",
f"Task evaluation completed with quality score: {evaluation_result.pydantic.quality}"
self.openai_model_name,
)
self.tasks_scores[self.iteration].append(evaluation_result.pydantic.quality)
self.run_execution_times[self.iteration].append(

@@ -10,7 +10,6 @@ import instructor
import pydantic_core
import pytest

from crewai.llm import LLM
from crewai.agent import Agent
from crewai.agents.cache import CacheHandler
from crewai.crew import Crew
@@ -301,35 +300,6 @@ def test_hierarchical_process():
)

@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_test_with_custom_llm():
"""Test that Crew.test() works correctly with custom LLM instances."""
task = Task(
description="Test task",
expected_output="Test output",
agent=researcher,
)
custom_llm = LLM(model="gpt-4", temperature=0.5)
crew = Crew(agents=[researcher], tasks=[task], process=Process.sequential)

with mock.patch('crewai.crew.CrewEvaluator') as mock_evaluator:
crew.test(n_iterations=1, openai_model_name=custom_llm)
mock_evaluator.assert_called_once_with(mock.ANY, custom_llm)

@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_test_backward_compatibility():
"""Test that Crew.test() maintains backward compatibility with string model names."""
task = Task(
description="Test task",
expected_output="Test output",
agent=researcher,
)
crew = Crew(agents=[researcher], tasks=[task], process=Process.sequential)

with mock.patch('crewai.crew.CrewEvaluator') as mock_evaluator:
crew.test(n_iterations=1, openai_model_name="gpt-4")
mock_evaluator.assert_called_once_with(mock.ANY, "gpt-4")

def test_manager_llm_requirement_for_hierarchical_process():
task = Task(
description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
@@ -1153,7 +1123,7 @@ def test_kickoff_for_each_empty_input():
assert results == []

@pytest.mark.vcr(filter_headeruvs=["authorization"])
@pytest.mark.vcr(filter_headers=["authorization"])
def test_kickoff_for_each_invalid_input():
"""Tests if kickoff_for_each raises TypeError for invalid input types."""

@@ -3155,4 +3125,4 @@ def test_multimodal_agent_live_image_analysis():
# Verify we got a meaningful response
assert isinstance(result.raw, str)
assert len(result.raw) > 100 # Expecting a detailed analysis
assert "error" not in result.raw.lower() # No error messages in response
assert "error" not in result.raw.lower() # No error messages in response
@@ -2,7 +2,6 @@ from unittest import mock

import pytest

from crewai.llm import LLM
from crewai.agent import Agent
from crewai.crew import Crew
from crewai.task import Task
@@ -132,30 +131,6 @@ class TestCrewEvaluator:
# Ensure the console prints the table
console.assert_has_calls([mock.call(), mock.call().print(table())])

def test_evaluator_with_custom_llm(self, crew_planner):
"""Test that CrewEvaluator correctly handles custom LLM instances."""
custom_llm = LLM(model="gpt-4", temperature=0.5)
evaluator = CrewEvaluator(crew_planner.crew, custom_llm)
assert evaluator.model_instance == custom_llm
assert evaluator.model_instance.temperature == 0.5

def test_evaluator_with_invalid_model_type(self, crew_planner):
"""Test that CrewEvaluator raises error for invalid model type."""
with pytest.raises(ValueError, match="Invalid model type"):
CrewEvaluator(crew_planner.crew, 123)

def test_evaluator_preserves_model_settings(self, crew_planner):
"""Test that CrewEvaluator preserves model settings."""
custom_llm = LLM(model="gpt-4", temperature=0.7)
evaluator = CrewEvaluator(crew_planner.crew, custom_llm)
assert evaluator.model_instance.temperature == 0.7

def test_evaluator_with_model_name(self, crew_planner):
"""Test that CrewEvaluator correctly handles string model names."""
evaluator = CrewEvaluator(crew_planner.crew, "gpt-4")
assert isinstance(evaluator.model_instance, LLM)
assert evaluator.model_instance.model == "gpt-4"

def test_evaluate(self, crew_planner):
task_output = TaskOutput(
description="Task 1", agent=str(crew_planner.crew.agents[0])