---
title: Using CrewAI Without LiteLLM
description: How to use CrewAI with native provider integrations and remove the LiteLLM dependency from your project.
icon: shield-check
mode: "wide"
---

## Overview

CrewAI supports two paths for connecting to LLM providers:

1. **Native integrations** — direct SDK connections to OpenAI, Anthropic, Google Gemini, Azure OpenAI, and AWS Bedrock
2. **LiteLLM fallback** — a translation layer that supports 100+ additional providers

This guide explains how to use CrewAI exclusively with native provider integrations, removing any dependency on LiteLLM.
<Warning>
The `litellm` package was quarantined on PyPI due to a security/reliability incident. If you rely on LiteLLM-dependent providers, you should migrate to native integrations. CrewAI's native integrations give you full functionality without LiteLLM.
</Warning>
## Why Remove LiteLLM?

- **Reduced dependency surface** — fewer packages mean fewer potential supply-chain risks
- **Better performance** — native SDKs communicate directly with provider APIs, eliminating a translation layer
- **Simpler debugging** — one less abstraction layer between your code and the provider
- **Smaller install footprint** — LiteLLM brings in many transitive dependencies
## Native Providers (No LiteLLM Required)

These providers use their own SDKs and work without LiteLLM installed:

<CardGroup cols={2}>
<Card title="OpenAI" icon="bolt">
GPT-4o, GPT-4o-mini, o1, o3-mini, and more.

```bash
uv add "crewai[openai]"
```
</Card>

<Card title="Anthropic" icon="a">
Claude Sonnet, Claude Haiku, and more.

```bash
uv add "crewai[anthropic]"
```
</Card>

<Card title="Google Gemini" icon="google">
Gemini 2.0 Flash, Gemini 2.0 Pro, and more.

```bash
uv add "crewai[gemini]"
```
</Card>

<Card title="Azure OpenAI" icon="microsoft">
Azure-hosted OpenAI models.

```bash
uv add "crewai[azure]"
```
</Card>

<Card title="AWS Bedrock" icon="aws">
Claude, Llama, Titan, and more via AWS.

```bash
uv add "crewai[bedrock]"
```
</Card>
</CardGroup>

<Info>
If you only use native providers, you **never** need to install `crewai[litellm]`. The base `crewai` package plus your chosen provider extra is all you need.
</Info>
## How to Check If You're Using LiteLLM

### Check your model strings

The table below shows which model-string prefixes route through LiteLLM and which use native integrations:

| Prefix | Provider | Uses LiteLLM? |
|--------|----------|---------------|
| `ollama/` | Ollama | ✅ Yes |
| `groq/` | Groq | ✅ Yes |
| `together_ai/` | Together AI | ✅ Yes |
| `mistral/` | Mistral | ✅ Yes |
| `cohere/` | Cohere | ✅ Yes |
| `huggingface/` | Hugging Face | ✅ Yes |
| `openai/` | OpenAI | ❌ Native |
| `anthropic/` | Anthropic | ❌ Native |
| `gemini/` | Google Gemini | ❌ Native |
| `azure/` | Azure OpenAI | ❌ Native |
| `bedrock/` | AWS Bedrock | ❌ Native |
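The prefix check can also be automated. Below is a small illustrative helper (not part of the CrewAI API) whose prefix set mirrors the table above; it assumes that a model string without a provider prefix resolves natively, which may not hold for every configuration:

```python
# Illustrative helper: classify a model string by its provider prefix.
# The native prefix set mirrors the table above; this is NOT a CrewAI API.
NATIVE_PREFIXES = {"openai", "anthropic", "gemini", "azure", "bedrock"}

def uses_litellm(model: str) -> bool:
    """Return True if this model string would route through LiteLLM."""
    provider, sep, _rest = model.partition("/")
    if not sep:
        # No prefix: CrewAI resolves the provider itself, so the string alone
        # can't tell us; treat it as native for this rough check (assumption).
        return False
    return provider not in NATIVE_PREFIXES

print(uses_litellm("groq/llama-3.1-70b"))  # True
print(uses_litellm("openai/gpt-4o"))       # False
```

Running this over the model strings in your configs gives a quick inventory of which agents still depend on LiteLLM.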
### Check if LiteLLM is installed

```bash
# Using pip
pip show litellm

# Using uv
uv pip show litellm
```

If the command returns package information, LiteLLM is installed in your environment.
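You can run the same check from Python, which is convenient in a CI step; this snippet uses only the standard library and does not import `litellm` itself:

```python
# Check whether the litellm package is importable in the current environment,
# without actually importing it.
import importlib.util

def litellm_installed() -> bool:
    return importlib.util.find_spec("litellm") is not None

print(litellm_installed())
```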
### Check your dependencies

Look at your `pyproject.toml` for `crewai[litellm]`:

```toml
# If you see this, you have LiteLLM as a dependency
dependencies = [
    "crewai[litellm]>=0.100.0",  # ← Uses LiteLLM
]

# Change to a native provider extra instead
dependencies = [
    "crewai[openai]>=0.100.0",  # ← Native, no LiteLLM
]
```
## Migration Guide

### Step 1: Identify your current provider

Find all `LLM()` calls and model strings in your code:

```bash
# Search your codebase for LLM model strings
grep -r "LLM(" --include="*.py" .
grep -r "llm=" --include="*.yaml" .
grep -r "llm:" --include="*.yaml" .
```
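If the codebase is large, a short script can flag LiteLLM-prefixed model strings directly instead of eyeballing grep output. This is an illustrative sketch; the prefix list mirrors the table earlier in this guide, so extend it to cover the providers you actually use:

```python
# Sketch: find LiteLLM-style model strings (e.g. "groq/...") in source text.
# The prefix list is illustrative; adjust it for your providers.
import re

LITELLM_PREFIXES = ("ollama", "groq", "together_ai", "mistral", "cohere", "huggingface")
PATTERN = re.compile(r"\b(?:%s)/[\w.:/-]+" % "|".join(LITELLM_PREFIXES))

def find_litellm_models(text: str) -> list[str]:
    """Return all LiteLLM-prefixed model strings found in the text."""
    return PATTERN.findall(text)

sample = 'llm = LLM(model="groq/llama-3.1-70b")\nllm: openai/gpt-4o'
print(find_litellm_models(sample))  # ['groq/llama-3.1-70b']
```

Feed it the contents of each `.py` and `.yaml` file from Step 1; any hits are strings you still need to migrate.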
### Step 2: Switch to a native provider

<Tabs>
<Tab title="Switch to OpenAI">

```python
from crewai import LLM

# Before (LiteLLM):
# llm = LLM(model="groq/llama-3.1-70b")

# After (Native):
llm = LLM(model="openai/gpt-4o")
```

```bash
# Install
uv add "crewai[openai]"

# Set your API key
export OPENAI_API_KEY="sk-..."
```
</Tab>

<Tab title="Switch to Anthropic">

```python
from crewai import LLM

# Before (LiteLLM):
# llm = LLM(model="together_ai/meta-llama/Meta-Llama-3.1-70B")

# After (Native):
llm = LLM(model="anthropic/claude-sonnet-4-20250514")
```

```bash
# Install
uv add "crewai[anthropic]"

# Set your API key
export ANTHROPIC_API_KEY="sk-ant-..."
```
</Tab>

<Tab title="Switch to Gemini">

```python
from crewai import LLM

# Before (LiteLLM):
# llm = LLM(model="mistral/mistral-large-latest")

# After (Native):
llm = LLM(model="gemini/gemini-2.0-flash")
```

```bash
# Install
uv add "crewai[gemini]"

# Set your API key
export GEMINI_API_KEY="..."
```
</Tab>

<Tab title="Switch to Azure OpenAI">

```python
from crewai import LLM

# After (Native):
llm = LLM(
    model="azure/your-deployment-name",
    api_key="your-azure-api-key",
    base_url="https://your-resource.openai.azure.com",
    api_version="2024-06-01"
)
```

```bash
# Install
uv add "crewai[azure]"
```
</Tab>

<Tab title="Switch to AWS Bedrock">

```python
from crewai import LLM

# After (Native):
llm = LLM(
    model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
    aws_region_name="us-east-1"
)
```

```bash
# Install
uv add "crewai[bedrock]"

# Configure AWS credentials
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_DEFAULT_REGION="us-east-1"
```
</Tab>
</Tabs>
### Step 3: Keep Ollama without LiteLLM

If you're using Ollama and want to keep using it, you can connect via Ollama's OpenAI-compatible API:

```python
from crewai import LLM

# Before (LiteLLM):
# llm = LLM(model="ollama/llama3")

# After (OpenAI-compatible mode, no LiteLLM needed):
llm = LLM(
    model="openai/llama3",
    base_url="http://localhost:11434/v1",
    api_key="ollama"  # Ollama doesn't require a real API key
)
```

<Tip>
Many local inference servers (Ollama, vLLM, LM Studio, llama.cpp) expose an OpenAI-compatible API. You can use the `openai/` prefix with a custom `base_url` to connect to any of them natively.
</Tip>
### Step 4: Update your YAML configs

```yaml
# Before (LiteLLM providers):
researcher:
  role: Research Specialist
  goal: Conduct research
  backstory: A dedicated researcher
  llm: groq/llama-3.1-70b  # ← LiteLLM

# After (Native provider):
researcher:
  role: Research Specialist
  goal: Conduct research
  backstory: A dedicated researcher
  llm: openai/gpt-4o  # ← Native
```
### Step 5: Remove LiteLLM

Once you've migrated all your model references:

```bash
# Remove litellm from your project
uv remove litellm

# Or if using pip
pip uninstall litellm

# Update your pyproject.toml: change crewai[litellm] to your provider extra
# e.g., crewai[openai], crewai[anthropic], crewai[gemini]
```
### Step 6: Verify

Run your project and confirm everything works:

```bash
# Run your crew
crewai run

# Or run your tests
uv run pytest
```
## Quick Reference: Model String Mapping

Here are common migration paths from LiteLLM-dependent providers to native ones:

```python
from crewai import LLM

# ─── LiteLLM providers → Native alternatives ────────────────────

# Groq → OpenAI or Anthropic
# llm = LLM(model="groq/llama-3.1-70b")
llm = LLM(model="openai/gpt-4o-mini")          # Fast & affordable
llm = LLM(model="anthropic/claude-haiku-3-5")  # Fast & affordable

# Together AI → OpenAI or Gemini
# llm = LLM(model="together_ai/meta-llama/Meta-Llama-3.1-70B")
llm = LLM(model="openai/gpt-4o")            # High quality
llm = LLM(model="gemini/gemini-2.0-flash")  # Fast & capable

# Mistral → Anthropic or OpenAI
# llm = LLM(model="mistral/mistral-large-latest")
llm = LLM(model="anthropic/claude-sonnet-4-20250514")  # High quality

# Ollama → OpenAI-compatible (keep using local models)
# llm = LLM(model="ollama/llama3")
llm = LLM(
    model="openai/llama3",
    base_url="http://localhost:11434/v1",
    api_key="ollama"
)
```
## FAQ

<AccordionGroup>
<Accordion title="Do I lose any functionality by removing LiteLLM?">
No, as long as you use one of the five natively supported providers (OpenAI, Anthropic, Gemini, Azure, Bedrock). The native integrations support all CrewAI features, including streaming, tool calling, and structured output. You only lose access to providers that are exclusively available through LiteLLM, such as Groq, Together AI, and Mistral.
</Accordion>

<Accordion title="Can I use multiple native providers at the same time?">
Yes. Install multiple extras and use different providers for different agents:

```bash
uv add "crewai[openai,anthropic,gemini]"
```

```python
researcher = Agent(llm="openai/gpt-4o", ...)
writer = Agent(llm="anthropic/claude-sonnet-4-20250514", ...)
```
</Accordion>

<Accordion title="Is LiteLLM safe to use now?">
Regardless of quarantine status, reducing your dependency surface is good security practice. If you only need providers that CrewAI supports natively, there's no reason to keep LiteLLM installed.
</Accordion>

<Accordion title="What about environment variables like OPENAI_API_KEY?">
Native providers use the same environment variables you're already familiar with. No changes needed for `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GEMINI_API_KEY`, etc.
</Accordion>
</AccordionGroup>
## Related Resources

- [LLM Connections](/en/learn/llm-connections) — Full guide to connecting CrewAI with any LLM
- [LLM Concepts](/en/concepts/llms) — Understanding LLMs in CrewAI
- [LLM Selection Guide](/en/learn/llm-selection-guide) — Choosing the right model for your use case