Compare commits

..

20 Commits

Author SHA1 Message Date
João Moura
7eee68d313 type improvements 2024-12-27 17:00:03 -03:00
João Moura
842e5fb70a ignore 2024-12-27 16:53:00 -03:00
João Moura
afeb8ca1ee fix 2024-12-27 16:51:35 -03:00
João Moura
2bbcba1ccb fix linter 2024-12-27 16:40:59 -03:00
João Moura
75136fcd77 fix 2024-12-27 16:36:48 -03:00
João Moura
8e93f3aa5b fix 2024-12-27 16:34:28 -03:00
João Moura
ee9d7aea61 test 2024-12-27 16:33:25 -03:00
João Moura
b5f2161e34 fix linters 2024-12-27 16:31:20 -03:00
João Moura
9a55b54977 Revert "fixing linter"
This reverts commit 2eda5fdeed.
2024-12-27 16:22:43 -03:00
João Moura
5bd4fdc3d0 fix types and linter 2024-12-27 16:19:56 -03:00
João Moura
ffff182033 mixxing translations 2024-12-27 16:17:42 -03:00
João Moura
2e5bb3f856 fix linter and types 2024-12-27 16:16:39 -03:00
João Moura
8735b58fc6 Making sure multimodal feature support i18n 2024-12-27 15:59:10 -03:00
João Moura
d56db9f34f fix linter 2024-12-27 10:56:18 -03:00
João Moura
2eda5fdeed fixing linter 2024-12-26 23:43:51 -03:00
João Moura
2357d3e8eb Merge branch 'main' into joaomdmoura/multimodal-crew 2024-12-26 23:33:37 -03:00
João Moura
e61f2f50c9 supporting image tool 2024-12-26 23:24:41 -03:00
João Moura
93bee87324 Refactor prepare tool and adding initial add images logic 2024-12-26 13:30:59 -03:00
João Moura
e6be4ed66d fixing tests for delegations and coding 2024-12-26 10:09:20 -03:00
João Moura
3e58c995a4 initial fix on delegation tools 2024-12-23 16:53:25 -03:00
7 changed files with 11 additions and 897 deletions

View File

@@ -1,211 +0,0 @@
# Portkey Integration with CrewAI
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-CrewAI.png" alt="Portkey CrewAI Header Image" width="70%" />
[Portkey](https://portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai) is a 2-line upgrade to make your CrewAI agents reliable, cost-efficient, and fast.
Portkey adds 4 core production capabilities to any CrewAI agent:
1. Routing to **250+ LLMs**
2. Making each LLM call more robust
3. Full-stack tracing plus cost and performance analytics
4. Real-time guardrails to enforce behavior
## Getting Started
1. **Install Required Packages:**
```bash
pip install -qU crewai portkey-ai
```
2. **Configure the LLM Client:**
To build CrewAI Agents with Portkey, you'll need two keys:
- **Portkey API Key**: Sign up on the [Portkey app](https://app.portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai) and copy your API key
- **Virtual Key**: Virtual Keys securely manage your LLM API keys in one place. Store your LLM provider API keys securely in Portkey's vault
```python
from crewai import LLM
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL
gpt_llm = LLM(
    model="gpt-4",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",  # We are using a Virtual Key, so this is just a placeholder
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_VIRTUAL_KEY",  # Enter your Virtual Key from Portkey
    )
)
```
3. **Create and Run Your First Agent:**
```python
from crewai import Agent, Task, Crew

# Define your agents with roles and goals
coder = Agent(
    role='Software developer',
    goal='Write clear, concise code on demand',
    backstory='An expert coder with a keen eye for software trends.',
    llm=gpt_llm
)

# Create tasks for your agents
task1 = Task(
    description="Define the HTML for a simple website with the heading: 'Hello World! Portkey is working!'",
    expected_output="Clear and concise HTML code",
    agent=coder
)

# Instantiate your crew
crew = Crew(
    agents=[coder],
    tasks=[task1],
)

result = crew.kickoff()
print(result)
```
## Key Features
| Feature | Description |
|---------|-------------|
| 🌐 Multi-LLM Support | Access OpenAI, Anthropic, Gemini, Azure, and 250+ providers through a unified interface |
| 🛡️ Production Reliability | Implement retries, timeouts, load balancing, and fallbacks |
| 📊 Advanced Observability | Track 40+ metrics including costs, tokens, latency, and custom metadata |
| 🔍 Comprehensive Logging | Debug with detailed execution traces and function call logs |
| 🚧 Security Controls | Set budget limits and implement role-based access control |
| 🔄 Performance Analytics | Capture and analyze feedback for continuous improvement |
| 💾 Intelligent Caching | Reduce costs and latency with semantic or simple caching |
## Production Features with Portkey Configs
All of the features below are enabled through Portkey's Config system, which lets you define routing strategies as simple JSON objects in your LLM API calls. You can create and manage Configs in your code or through the Portkey Dashboard; each Config has a unique ID for easy reference.
<Frame>
<img src="https://raw.githubusercontent.com/Portkey-AI/docs-core/refs/heads/main/images/libraries/libraries-3.avif"/>
</Frame>
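As a minimal sketch of attaching a saved Config to the `LLM` client used above — this assumes the `config` parameter of `createHeaders`, and `pc-xxxxx` is a placeholder for a Config ID created in your Dashboard:
```python
from crewai import LLM
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL

config_llm = LLM(
    model="gpt-4",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",  # credentials are resolved by the Config / Virtual Key
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        config="pc-xxxxx",  # placeholder Config ID from the Portkey Dashboard
    )
)
```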
### 1. Use 250+ LLMs
Access LLMs from providers like Anthropic, Gemini, Mistral, Azure OpenAI, and more with minimal code changes. Switch between providers or use them together seamlessly. [Learn more about Universal API](https://portkey.ai/docs/product/ai-gateway/universal-api)
Easily switch between different LLM providers:
```python
# Anthropic Configuration
anthropic_llm = LLM(
    model="claude-3-5-sonnet-latest",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_ANTHROPIC_VIRTUAL_KEY",  # No provider needed when using Virtual Keys
        trace_id="anthropic_agent"
    )
)

# Azure OpenAI Configuration
azure_llm = LLM(
    model="gpt-4",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_AZURE_VIRTUAL_KEY",  # No provider needed when using Virtual Keys
        trace_id="azure_agent"
    )
)
```
### 2. Caching
Improve response times and reduce costs with two powerful caching modes:
- **Simple Cache**: Perfect for exact matches
- **Semantic Cache**: Matches responses for requests that are semantically similar
[Learn more about Caching](https://portkey.ai/docs/product/ai-gateway/cache-simple-and-semantic)
```python
config = {
    "cache": {
        "mode": "semantic",  # or "simple" for exact matching
    }
}
```
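As a sketch of wiring this up — assuming `createHeaders` also accepts an inline `config` dict, mirroring how Config IDs are passed above — the cache Config can ride along with the request headers:
```python
cached_llm = LLM(
    model="gpt-4",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_VIRTUAL_KEY",
        config=config,  # the cache Config defined above
    )
)
```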
### 3. Production Reliability
Portkey provides comprehensive reliability features:
- **Automatic Retries**: Handle temporary failures gracefully
- **Request Timeouts**: Prevent hanging operations
- **Conditional Routing**: Route requests based on specific conditions
- **Fallbacks**: Set up automatic provider failovers
- **Load Balancing**: Distribute requests efficiently
[Learn more about Reliability Features](https://portkey.ai/docs/product/ai-gateway/)
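As a rough sketch following Portkey's documented Config schema (the Virtual Key names are placeholders), retries and a provider fallback can be combined in a single Config:
```python
reliability_config = {
    "retry": {"attempts": 3},          # retry transient failures before giving up
    "strategy": {"mode": "fallback"},  # try targets in order until one succeeds
    "targets": [
        {"virtual_key": "YOUR_OPENAI_VIRTUAL_KEY"},     # primary
        {"virtual_key": "YOUR_ANTHROPIC_VIRTUAL_KEY"},  # failover
    ],
}
```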
### 4. Metrics
Agent runs are complex. Portkey automatically logs **40+ comprehensive metrics** for your AI agents, including cost, tokens used, latency, etc. Whether you need a broad overview or granular insights into your agent runs, Portkey's customizable filters provide the metrics you need.
- Cost per agent interaction
- Response times and latency
- Token usage and efficiency
- Success/failure rates
- Cache hit rates
<img src="https://github.com/siddharthsambharia-portkey/Portkey-Product-Images/blob/main/Portkey-Dashboard.png?raw=true" width="70%" alt="Portkey Dashboard" />
### 5. Detailed Logging
Logs are essential for understanding agent behavior, diagnosing issues, and improving performance. They provide a detailed record of agent activities and tool use, which is crucial for debugging and optimizing processes.
Access a dedicated section to view records of agent executions, including parameters, outcomes, function calls, and errors. Filter logs based on multiple parameters such as trace ID, model, tokens used, and metadata.
<details>
<summary><b>Traces</b></summary>
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-Traces.png" alt="Portkey Traces" width="70%" />
</details>
<details>
<summary><b>Logs</b></summary>
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-Logs.png" alt="Portkey Logs" width="70%" />
</details>
### 6. Enterprise Security Features
- Set budget limits and rate limits per Virtual Key (disposable API keys)
- Implement role-based access control
- Track system changes with audit logs
- Configure data retention policies
For detailed information on creating and managing Configs, visit the [Portkey documentation](https://docs.portkey.ai/product/ai-gateway/configs).
## Resources
- [📘 Portkey Documentation](https://docs.portkey.ai)
- [📊 Portkey Dashboard](https://app.portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai)
- [🐦 Twitter](https://twitter.com/portkeyai)
- [💬 Discord Community](https://discord.gg/DD7vgKK299)

View File

@@ -1,138 +0,0 @@
---
title: Using Multimodal Agents
description: Learn how to enable and use multimodal capabilities in your agents for processing images and other non-text content within the CrewAI framework.
icon: image
---
# Using Multimodal Agents
CrewAI supports multimodal agents that can process both text and non-text content like images. This guide will show you how to enable and use multimodal capabilities in your agents.
## Enabling Multimodal Capabilities
To create a multimodal agent, simply set the `multimodal` parameter to `True` when initializing your agent:
```python
from crewai import Agent

agent = Agent(
    role="Image Analyst",
    goal="Analyze and extract insights from images",
    backstory="An expert in visual content interpretation with years of experience in image analysis",
    multimodal=True  # This enables multimodal capabilities
)
```
When you set `multimodal=True`, the agent is automatically configured with the necessary tools for handling non-text content, including the `AddImageTool`.
## Working with Images
The multimodal agent comes pre-configured with the `AddImageTool`, which allows it to process images. You don't need to manually add this tool - it's automatically included when you enable multimodal capabilities.
Here's a complete example showing how to use a multimodal agent to analyze an image:
```python
from crewai import Agent, Task, Crew

# Create a multimodal agent
image_analyst = Agent(
    role="Product Analyst",
    goal="Analyze product images and provide detailed descriptions",
    backstory="Expert in visual product analysis with deep knowledge of design and features",
    multimodal=True
)

# Create a task for image analysis
task = Task(
    description="Analyze the product image at https://example.com/product.jpg and provide a detailed description",
    expected_output="A detailed description of the product shown in the image",
    agent=image_analyst
)

# Create and run the crew
crew = Crew(
    agents=[image_analyst],
    tasks=[task]
)

result = crew.kickoff()
```
### Advanced Usage with Context
You can provide additional context or specific questions about the image when creating tasks for multimodal agents. The task description can include specific aspects you want the agent to focus on:
```python
from crewai import Agent, Task, Crew

# Create a multimodal agent for detailed analysis
expert_analyst = Agent(
    role="Visual Quality Inspector",
    goal="Perform detailed quality analysis of product images",
    backstory="Senior quality control expert with expertise in visual inspection",
    multimodal=True  # AddImageTool is automatically included
)

# Create a task with specific analysis requirements
inspection_task = Task(
    description="""
    Analyze the product image at https://example.com/product.jpg with focus on:
    1. Quality of materials
    2. Manufacturing defects
    3. Compliance with standards
    Provide a detailed report highlighting any issues found.
    """,
    expected_output="A detailed quality inspection report",
    agent=expert_analyst
)

# Create and run the crew
crew = Crew(
    agents=[expert_analyst],
    tasks=[inspection_task]
)

result = crew.kickoff()
```
### Tool Details
When working with multimodal agents, the `AddImageTool` is automatically configured with the following schema:
```python
from typing import Optional

from pydantic import BaseModel

class AddImageToolSchema(BaseModel):
    image_url: str  # Required: the URL or path of the image to process
    action: Optional[str] = None  # Optional: additional context or specific questions about the image
```
The multimodal agent will automatically handle the image processing through its built-in tools, allowing it to:
- Access images via URLs or local file paths
- Process image content with optional context or specific questions
- Provide analysis and insights based on the visual information and task requirements
## Best Practices
When working with multimodal agents, keep these best practices in mind:
1. **Image Access**
- Ensure your images are accessible via URLs that the agent can reach
- For local images, consider hosting them temporarily or using absolute file paths
- Verify that image URLs are valid and accessible before running tasks
2. **Task Description**
- Be specific about what aspects of the image you want the agent to analyze
- Include clear questions or requirements in the task description
- Consider using the optional `action` parameter for focused analysis
3. **Resource Management**
- Image processing may require more computational resources than text-only tasks
   - Some language models may require base64 encoding for image data (see the sketch after this list)
- Consider batch processing for multiple images to optimize performance
4. **Environment Setup**
- Verify that your environment has the necessary dependencies for image processing
- Ensure your language model supports multimodal capabilities
- Test with small images first to validate your setup
5. **Error Handling**
- Implement proper error handling for image loading failures
- Have fallback strategies for when image processing fails
- Monitor and log image processing operations for debugging
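
As a minimal sketch of the base64 note in item 3, a local image can be inlined as a data URL. The helper below is illustrative and not part of CrewAI's API:
```python
import base64
from pathlib import Path

def image_to_data_url(path: str) -> str:
    """Encode a local image as a base64 data URL (illustrative helper)."""
    suffix = Path(path).suffix.lstrip(".").lower() or "png"
    encoded = base64.b64encode(Path(path).read_bytes()).decode("utf-8")
    return f"data:image/{suffix};base64,{encoded}"

# Hypothetical usage inside a task description:
# description = f"Analyze the product image at {image_to_data_url('product.jpg')}"
```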

View File

@@ -1,213 +0,0 @@
# Multiple Model Configuration in CrewAI
CrewAI now supports configuring multiple language models with different API keys and configurations. This feature allows you to:
1. Load-balance across multiple model deployments
2. Set up fallback models in case of rate limits or errors
3. Configure different routing strategies for model selection
4. Maintain fine-grained control over model selection and usage
## Basic Usage
You can configure multiple models at the agent level:
```python
from crewai import Agent

# Define model configurations
model_list = [
    {
        "model_name": "gpt-4o-mini",
        "litellm_params": {
            "model": "gpt-4o-mini",  # Required: model name must be specified here
            "api_key": "your-openai-api-key-1"
        }
    },
    {
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {
            "model": "gpt-3.5-turbo",  # Required: model name must be specified here
            "api_key": "your-openai-api-key-2"
        }
    },
    {
        "model_name": "claude-3-sonnet-20240229",
        "litellm_params": {
            "model": "claude-3-sonnet-20240229",  # Required: model name must be specified here
            "api_key": "your-anthropic-api-key"
        }
    }
]

# Create an agent with multiple model configurations
agent = Agent(
    role="Data Analyst",
    goal="Analyze the data and provide insights",
    backstory="You are an expert data analyst with years of experience.",
    model_list=model_list,
    routing_strategy="simple-shuffle"  # Optional routing strategy
)
```
## Routing Strategies
CrewAI supports the following routing strategies for precise control over model selection:
- `simple-shuffle`: Randomly selects a model from the list
- `least-busy`: Routes to the model with the least number of ongoing requests
- `usage-based`: Routes based on token usage across models
- `latency-based`: Routes to the model with the lowest latency
- `cost-based`: Routes to the model with the lowest cost
Example with latency-based routing:
```python
agent = Agent(
    role="Data Analyst",
    goal="Analyze the data and provide insights",
    backstory="You are an expert data analyst with years of experience.",
    model_list=model_list,
    routing_strategy="latency-based"
)
```
## Direct LLM Configuration
You can also configure multiple models directly with the LLM class for more flexibility:
```python
from crewai import LLM

llm = LLM(
    model="gpt-4o-mini",
    model_list=model_list,
    routing_strategy="simple-shuffle"
)
```
## Advanced Configuration
For more advanced configurations, you can specify additional parameters for each model to handle complex use cases:
```python
model_list = [
    {
        "model_name": "gpt-4o-mini",
        "litellm_params": {
            "model": "gpt-4o-mini",  # Required: model name must be specified here
            "api_key": "your-openai-api-key-1",
            "temperature": 0.7
        },
        "tpm": 100000,  # Tokens per minute limit
        "rpm": 1000     # Requests per minute limit
    },
    {
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {
            "model": "gpt-3.5-turbo",  # Required: model name must be specified here
            "api_key": "your-openai-api-key-2",
            "temperature": 0.5
        }
    }
]
```
## Error Handling and Troubleshooting
When working with multiple model configurations, you may encounter various issues. Here are some common problems and their solutions:
### Missing Required Parameters
**Problem**: Router initialization fails with an error about missing parameters.
**Solution**: Ensure each model configuration in `model_list` includes both `model_name` and `litellm_params` with the required `model` parameter:
```python
# Correct configuration
model_config = {
    "model_name": "gpt-4o-mini",  # Required
    "litellm_params": {
        "model": "gpt-4o-mini",  # Required
        "api_key": "your-api-key"
    }
}
```
### Invalid Routing Strategy
**Problem**: Error when specifying an unsupported routing strategy.
**Solution**: Use only the supported routing strategies:
```python
# Valid routing strategies
valid_strategies = [
    "simple-shuffle",
    "least-busy",
    "usage-based",
    "latency-based",
    "cost-based"
]
```
### API Key Authentication Errors
**Problem**: Authentication errors when making API calls.
**Solution**: Verify that all API keys are valid and have the necessary permissions:
```python
import os

# Check environment variables first
os.environ.get("OPENAI_API_KEY")  # Should be set if using OpenAI models

# Or explicitly provide the key in the configuration
model_list = [{
    "model_name": "gpt-4o-mini",
    "litellm_params": {
        "model": "gpt-4o-mini",
        "api_key": "valid-api-key-here"  # Ensure this is correct
    }
}]
```
### Rate Limit Handling
**Problem**: Encountering rate limits with multiple models.
**Solution**: Configure rate limits and implement fallback mechanisms:
```python
model_list = [
    {
        "model_name": "primary-model",
        "litellm_params": {"model": "primary-model", "api_key": "key1"},
        "rpm": 100  # Requests per minute
    },
    {
        "model_name": "fallback-model",
        "litellm_params": {"model": "fallback-model", "api_key": "key2"}
    }
]

# Configure with fallback
llm = LLM(
    model="primary-model",
    model_list=model_list,
    routing_strategy="least-busy"  # Will route to the fallback when the primary is busy
)
```
### Debugging Router Issues
If you're experiencing issues with the router, you can enable verbose logging to get more information:
```python
import litellm
litellm.set_verbose = True
# Then initialize your LLM
llm = LLM(model="gpt-4o-mini", model_list=model_list)
```
This feature leverages litellm's Router functionality under the hood, providing robust load balancing and fallback capabilities for your CrewAI agents. The implementation ensures predictability and consistency in model selection while maintaining security through proper API key management.
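For context, here is a condensed sketch of the litellm `Router` usage this maps onto, using litellm's public API; the message content is illustrative:
```python
from litellm import Router

# Load-balance across the deployments defined in model_list
router = Router(model_list=model_list, routing_strategy="simple-shuffle")

response = router.completion(
    model="gpt-4o-mini",  # matched against the "model_name" entries in model_list
    messages=[{"role": "user", "content": "Summarize the quarterly sales data."}],
)
print(response["choices"][0]["message"]["content"])
```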

View File

@@ -67,6 +67,7 @@ dev-dependencies = [
"mkdocs-material-extensions>=1.3.1",
"pillow>=10.2.0",
"cairosvg>=2.7.1",
"crewai-tools>=0.17.0",
"pytest>=8.0.0",
"pytest-vcr>=1.0.2",
"python-dotenv>=1.0.0",

View File

@@ -1,10 +1,9 @@
 import os
 import shutil
 import subprocess
-from enum import Enum
 from typing import Any, Dict, List, Literal, Optional, Union
-from pydantic import Field, InstanceOf, PrivateAttr, model_validator, field_validator
+from pydantic import Field, InstanceOf, PrivateAttr, model_validator
 from crewai.agents import CacheHandler
 from crewai.agents.agent_builder.base_agent import BaseAgent
@@ -87,20 +86,7 @@ class Agent(BaseAgent):
         description="Language model that will run the agent.", default=None
     )
     function_calling_llm: Optional[Any] = Field(
-        description="Language model that will handle function calling for the agent.", default=None
-    )
-    class RoutingStrategy(str, Enum):
-        SIMPLE_SHUFFLE = "simple-shuffle"
-        LEAST_BUSY = "least-busy"
-        USAGE_BASED = "usage-based"
-        LATENCY_BASED = "latency-based"
-        COST_BASED = "cost-based"
-    model_list: Optional[List[Dict[str, Any]]] = Field(
-        default=None, description="List of model configurations for routing between multiple models."
-    )
-    routing_strategy: Optional[RoutingStrategy] = Field(
-        default=None, description="Strategy for routing between multiple models (e.g., 'simple-shuffle', 'least-busy', 'usage-based', 'latency-based', 'cost-based')."
+        description="Language model that will run the agent.", default=None
     )
     system_template: Optional[str] = Field(
         default=None, description="System format for the agent."
@@ -162,17 +148,10 @@
         # Handle different cases for self.llm
         if isinstance(self.llm, str):
             # If it's a string, create an LLM instance
-            self.llm = LLM(
-                model=self.llm,
-                model_list=self.model_list,
-                routing_strategy=self.routing_strategy
-            )
+            self.llm = LLM(model=self.llm)
         elif isinstance(self.llm, LLM):
             # If it's already an LLM instance, keep it as is
-            if self.model_list and not getattr(self.llm, "model_list", None):
-                self.llm.model_list = self.model_list
-                self.llm.routing_strategy = self.routing_strategy
-                self.llm._initialize_router()
+            pass
         elif self.llm is None:
             # Determine the model name from environment variables or use default
             model_name = (
@@ -180,11 +159,7 @@
                 or os.environ.get("MODEL")
                 or "gpt-4o-mini"
             )
-            llm_params = {
-                "model": model_name,
-                "model_list": self.model_list,
-                "routing_strategy": self.routing_strategy
-            }
+            llm_params = {"model": model_name}
             api_base = os.environ.get("OPENAI_API_BASE") or os.environ.get(
                 "OPENAI_BASE_URL"
@@ -232,8 +207,6 @@
                 "api_key": getattr(self.llm, "api_key", None),
                 "base_url": getattr(self.llm, "base_url", None),
                 "organization": getattr(self.llm, "organization", None),
-                "model_list": self.model_list,
-                "routing_strategy": self.routing_strategy,
             }
             # Remove None values to avoid passing unnecessary parameters
             llm_params = {k: v for k, v in llm_params.items() if v is not None}

View File

@@ -7,17 +7,12 @@ from contextlib import contextmanager
 from typing import Any, Dict, List, Optional, Union
 import litellm
-from litellm import Router as LiteLLMRouter
 from litellm import get_supported_openai_params
-from tenacity import retry, stop_after_attempt, wait_exponential
 from crewai.utilities.logger import Logger
 from crewai.utilities.exceptions.context_window_exceeding_exception import (
     LLMContextLengthExceededException,
 )
-logger = Logger(verbose=True)
 class FilteredStream:
     def __init__(self, original_stream):
@@ -118,8 +113,6 @@
         api_version: Optional[str] = None,
         api_key: Optional[str] = None,
         callbacks: List[Any] = [],
-        model_list: Optional[List[Dict[str, Any]]] = None,
-        routing_strategy: Optional[str] = None,
         **kwargs,
     ):
         self.model = model
@@ -143,50 +136,11 @@
         self.callbacks = callbacks
         self.context_window_size = 0
         self.kwargs = kwargs
-        self.model_list = model_list
-        self.routing_strategy = routing_strategy
-        self.router = None
         litellm.drop_params = True
         litellm.set_verbose = False
         self.set_callbacks(callbacks)
         self.set_env_callbacks()
-        if self.model_list:
-            self._initialize_router()
-    def _initialize_router(self):
-        """
-        Initialize the litellm Router with the provided model_list and routing_strategy.
-        """
-        try:
-            router_kwargs = {}
-            if self.routing_strategy:
-                valid_strategies = ["simple-shuffle", "least-busy", "usage-based", "latency-based", "cost-based"]
-                if self.routing_strategy not in valid_strategies:
-                    raise ValueError(f"Invalid routing strategy: {self.routing_strategy}. Valid options are: {', '.join(valid_strategies)}")
-                router_kwargs["routing_strategy"] = self.routing_strategy
-            self.router = LiteLLMRouter(
-                model_list=self.model_list,
-                **router_kwargs
-            )
-        except Exception as e:
-            logger.log("error", f"Failed to initialize router: {str(e)}")
-            raise RuntimeError(f"Router initialization failed: {str(e)}")
-    @retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=4, max=10))
-    def _execute_router_call(self, params):
-        """
-        Execute a call to the router with retry logic for handling transient issues.
-        Args:
-            params: Parameters to pass to the router completion method
-        Returns:
-            The response from the router
-        """
-        return self.router.completion(model=self.model, **params)
     def call(self, messages: List[Dict[str, str]], callbacks: List[Any] = []) -> str:
         with suppress_warnings():
@@ -195,6 +149,7 @@
             try:
                 params = {
+                    "model": self.model,
                     "messages": messages,
                     "timeout": self.timeout,
                     "temperature": self.temperature,
@@ -209,6 +164,9 @@
                     "seed": self.seed,
                     "logprobs": self.logprobs,
                     "top_logprobs": self.top_logprobs,
+                    "api_base": self.base_url,
+                    "api_version": self.api_version,
+                    "api_key": self.api_key,
                     "stream": False,
                     **self.kwargs,
                 }
@@ -216,17 +174,7 @@
                 # Remove None values to avoid passing unnecessary parameters
                 params = {k: v for k, v in params.items() if v is not None}
-                if self.router:
-                    response = self._execute_router_call(params)
-                else:
-                    params.update({
-                        "model": self.model,
-                        "api_base": self.base_url,
-                        "api_version": self.api_version,
-                        "api_key": self.api_key,
-                    })
-                    response = litellm.completion(**params)
+                response = litellm.completion(**params)
                 return response["choices"][0]["message"]["content"]
             except Exception as e:
                 if not LLMContextLengthExceededException(

View File

@@ -1,246 +0,0 @@
import pytest
from unittest.mock import patch, MagicMock

from crewai.llm import LLM
from crewai.agent import Agent


@pytest.mark.vcr(filter_headers=["authorization"])
@patch("litellm.Router")
@patch.object(LLM, '_initialize_router')
def test_llm_with_model_list(mock_initialize_router, mock_router):
    """Test that LLM can be initialized with a model_list for multiple model configurations."""
    mock_initialize_router.return_value = None
    mock_router_instance = MagicMock()
    mock_router.return_value = mock_router_instance
    model_list = [
        {
            "model_name": "gpt-4o-mini",
            "litellm_params": {
                "model": "gpt-4o-mini",
                "api_key": "test-key-1"
            }
        },
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {
                "model": "gpt-3.5-turbo",
                "api_key": "test-key-2"
            }
        }
    ]
    llm = LLM(model="gpt-4o-mini", model_list=model_list)
    llm.router = mock_router_instance
    assert llm.model == "gpt-4o-mini"
    assert llm.model_list == model_list
    assert llm.router is not None


@pytest.mark.vcr(filter_headers=["authorization"])
@patch("litellm.Router")
@patch.object(LLM, '_initialize_router')
def test_llm_with_routing_strategy(mock_initialize_router, mock_router):
    """Test that LLM can be initialized with a routing strategy."""
    mock_initialize_router.return_value = None
    mock_router_instance = MagicMock()
    mock_router.return_value = mock_router_instance
    model_list = [
        {
            "model_name": "gpt-4o-mini",
            "litellm_params": {
                "model": "gpt-4o-mini",
                "api_key": "test-key-1"
            }
        },
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {
                "model": "gpt-3.5-turbo",
                "api_key": "test-key-2"
            }
        }
    ]
    llm = LLM(
        model="gpt-4o-mini",
        model_list=model_list,
        routing_strategy="simple-shuffle"
    )
    llm.router = mock_router_instance
    assert llm.routing_strategy == "simple-shuffle"
    assert llm.router is not None


@pytest.mark.vcr(filter_headers=["authorization"])
@patch("litellm.Router")
@patch.object(LLM, '_initialize_router')
def test_agent_with_model_list(mock_initialize_router, mock_router):
    """Test that Agent can be initialized with a model_list for multiple model configurations."""
    mock_initialize_router.return_value = None
    mock_router_instance = MagicMock()
    mock_router.return_value = mock_router_instance
    model_list = [
        {
            "model_name": "gpt-4o-mini",
            "litellm_params": {
                "model": "gpt-4o-mini",
                "api_key": "test-key-1"
            }
        },
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {
                "model": "gpt-3.5-turbo",
                "api_key": "test-key-2"
            }
        }
    ]
    with patch.object(Agent, 'post_init_setup', wraps=Agent.post_init_setup) as mock_post_init:
        agent = Agent(
            role="test",
            goal="test",
            backstory="test",
            model_list=model_list
        )
        agent.llm.router = mock_router_instance
        assert agent.model_list == model_list
        assert agent.llm.model_list == model_list
        assert agent.llm.router is not None


@pytest.mark.vcr(filter_headers=["authorization"])
@patch("litellm.Router")
@patch.object(LLM, '_initialize_router')
def test_llm_call_with_router(mock_initialize_router, mock_router):
    """Test that LLM.call uses the router when model_list is provided."""
    mock_initialize_router.return_value = None
    mock_router_instance = MagicMock()
    mock_router.return_value = mock_router_instance
    mock_response = {
        "choices": [{"message": {"content": "Test response"}}]
    }
    mock_router_instance.completion.return_value = mock_response
    model_list = [
        {
            "model_name": "gpt-4o-mini",
            "litellm_params": {
                "model": "gpt-4o-mini",
                "api_key": "test-key-1"
            }
        }
    ]
    # Create LLM with model_list
    llm = LLM(model="gpt-4o-mini", model_list=model_list)
    llm.router = mock_router_instance
    messages = [{"role": "user", "content": "Hello"}]
    response = llm.call(messages)
    mock_router_instance.completion.assert_called_once()
    assert response == "Test response"


@pytest.mark.vcr(filter_headers=["authorization"])
@patch("litellm.completion")
def test_llm_call_without_router(mock_completion):
    """Test that LLM.call uses litellm.completion when no model_list is provided."""
    mock_response = {
        "choices": [{"message": {"content": "Test response"}}]
    }
    mock_completion.return_value = mock_response
    llm = LLM(model="gpt-4o-mini")
    messages = [{"role": "user", "content": "Hello"}]
    response = llm.call(messages)
    mock_completion.assert_called_once()
    assert response == "Test response"


@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_with_invalid_routing_strategy():
    """Test that LLM initialization raises an error with an invalid routing strategy."""
    model_list = [
        {
            "model_name": "gpt-4o-mini",
            "litellm_params": {
                "model": "gpt-4o-mini",
                "api_key": "test-key-1"
            }
        }
    ]
    with pytest.raises(RuntimeError) as exc_info:
        LLM(
            model="gpt-4o-mini",
            model_list=model_list,
            routing_strategy="invalid-strategy"
        )
    assert "Invalid routing strategy" in str(exc_info.value)


@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_with_invalid_routing_strategy():
    """Test that Agent initialization raises an error with an invalid routing strategy."""
    model_list = [
        {
            "model_name": "gpt-4o-mini",
            "litellm_params": {
                "model": "gpt-4o-mini",
                "api_key": "test-key-1"
            }
        }
    ]
    with pytest.raises(Exception) as exc_info:
        Agent(
            role="test",
            goal="test",
            backstory="test",
            model_list=model_list,
            routing_strategy="invalid-strategy"
        )
    assert "Input should be" in str(exc_info.value)
    assert "simple-shuffle" in str(exc_info.value)
    assert "least-busy" in str(exc_info.value)


@pytest.mark.vcr(filter_headers=["authorization"])
@patch.object(LLM, '_initialize_router')
def test_llm_with_missing_model_in_litellm_params(mock_initialize_router):
    """Test that LLM initialization raises an error when model is missing in litellm_params."""
    mock_initialize_router.side_effect = RuntimeError("Router initialization failed: Missing required 'model' in litellm_params")
    model_list = [
        {
            "model_name": "gpt-4o-mini",
            "litellm_params": {
                "api_key": "test-key-1"
            }
        }
    ]
    with pytest.raises(RuntimeError) as exc_info:
        LLM(model="gpt-4o-mini", model_list=model_list)
    assert "Router initialization failed" in str(exc_info.value)