# Multiple Model Configuration in CrewAI

CrewAI now supports configuring multiple language models with different API keys and configurations. This feature allows you to:

  1. Load-balance across multiple model deployments
  2. Set up fallback models in case of rate limits or errors
  3. Configure different routing strategies for model selection
  4. Maintain fine-grained control over model selection and usage

## Basic Usage

You can configure multiple models at the agent level:

```python
from crewai import Agent

# Define model configurations
model_list = [
    {
        "model_name": "gpt-4o-mini",
        "litellm_params": {
            "model": "gpt-4o-mini",  # Required: model name must be specified here
            "api_key": "your-openai-api-key-1"
        }
    },
    {
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {
            "model": "gpt-3.5-turbo",  # Required: model name must be specified here
            "api_key": "your-openai-api-key-2"
        }
    },
    {
        "model_name": "claude-3-sonnet-20240229",
        "litellm_params": {
            "model": "claude-3-sonnet-20240229",  # Required: model name must be specified here
            "api_key": "your-anthropic-api-key"
        }
    }
]

# Create an agent with multiple model configurations
agent = Agent(
    role="Data Analyst",
    goal="Analyze the data and provide insights",
    backstory="You are an expert data analyst with years of experience.",
    model_list=model_list,
    routing_strategy="simple-shuffle"  # Optional routing strategy
)
```
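
An agent configured this way is used like any other CrewAI agent. As a minimal sketch (the task description and expected output below are placeholders):

```python
from crewai import Task, Crew

# Minimal sketch: run the multi-model agent on a single task.
# `agent` is the Agent defined above; the task text is illustrative.
task = Task(
    description="Analyze the quarterly sales data and summarize key trends.",
    expected_output="A short report with three key insights.",
    agent=agent,
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
print(result)
```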

## Routing Strategies

CrewAI supports the following routing strategies for precise control over model selection:

- `simple-shuffle`: Randomly selects a model from the list
- `least-busy`: Routes to the model with the fewest ongoing requests
- `usage-based`: Routes based on token usage across models
- `latency-based`: Routes to the model with the lowest observed latency
- `cost-based`: Routes to the model with the lowest cost

Example with latency-based routing:

```python
agent = Agent(
    role="Data Analyst",
    goal="Analyze the data and provide insights",
    backstory="You are an expert data analyst with years of experience.",
    model_list=model_list,
    routing_strategy="latency-based"
)
```

## Direct LLM Configuration

You can also configure multiple models directly with the LLM class for more flexibility:

```python
from crewai import LLM

llm = LLM(
    model="gpt-4o-mini",
    model_list=model_list,
    routing_strategy="simple-shuffle"
)
```
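
The resulting LLM instance can then be attached to an agent through the standard `llm` parameter, for example:

```python
# Sketch: reuse the router-backed LLM on an agent via `llm`.
agent = Agent(
    role="Data Analyst",
    goal="Analyze the data and provide insights",
    backstory="You are an expert data analyst with years of experience.",
    llm=llm,
)
```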

## Advanced Configuration

For more advanced use cases, you can specify additional parameters for each model, such as sampling temperature and per-deployment rate limits:

```python
model_list = [
    {
        "model_name": "gpt-4o-mini",
        "litellm_params": {
            "model": "gpt-4o-mini",  # Required: model name must be specified here
            "api_key": "your-openai-api-key-1",
            "temperature": 0.7
        },
        "tpm": 100000,  # Tokens per minute limit
        "rpm": 1000     # Requests per minute limit
    },
    {
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {
            "model": "gpt-3.5-turbo",  # Required: model name must be specified here
            "api_key": "your-openai-api-key-2",
            "temperature": 0.5
        }
    }
]
```
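
The API keys above are placeholders; in practice, keys are usually read from the environment rather than hard-coded. A sketch using `os.getenv` (the variable names `OPENAI_API_KEY_1` and `OPENAI_API_KEY_2` are illustrative):

```python
import os

# Illustrative: pick whatever environment variable names suit your deployment.
model_list = [
    {
        "model_name": "gpt-4o-mini",
        "litellm_params": {
            "model": "gpt-4o-mini",
            "api_key": os.getenv("OPENAI_API_KEY_1"),
        },
    },
    {
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {
            "model": "gpt-3.5-turbo",
            "api_key": os.getenv("OPENAI_API_KEY_2"),
        },
    },
]
```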

This feature leverages litellm's Router functionality under the hood, providing load balancing and fallback handling for your CrewAI agents. Model selection remains predictable and consistent, and each API key stays scoped to its own model configuration.
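
For reference, the same `model_list` drives a standalone litellm Router roughly as follows. This is an illustration of the underlying mechanism, not CrewAI's exact internal code; the `fallbacks` mapping is litellm's own feature:

```python
from litellm import Router

# Roughly what CrewAI configures under the hood (illustrative).
router = Router(
    model_list=model_list,
    routing_strategy="simple-shuffle",
    # litellm fallbacks: if gpt-4o-mini fails, retry on gpt-3.5-turbo.
    fallbacks=[{"gpt-4o-mini": ["gpt-3.5-turbo"]}],
)

response = router.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize this dataset."}],
)
print(response.choices[0].message.content)
```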