mirror of
https://github.com/crewAIInc/crewAI.git
synced 2026-01-09 08:08:32 +00:00
docs: Update model reference in LLM configuration (#2267)
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
@@ -59,7 +59,7 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi
 goal: Conduct comprehensive research and analysis
 backstory: A dedicated research professional with years of experience
 verbose: true
 llm: openai/gpt-4o-mini # your model here
 # (see provider configuration examples below for more)
 ```
@@ -111,7 +111,7 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi
 ## Provider Configuration Examples

 CrewAI supports a multitude of LLM providers, each offering unique features, authentication methods, and model capabilities.
 In this section, you'll find detailed examples that help you select, configure, and optimize the LLM that best fits your project's needs.

 <AccordionGroup>
@@ -121,7 +121,7 @@ In this section, you'll find detailed examples that help you select, configure,
 ```toml Code
 # Required
 OPENAI_API_KEY=sk-...

 # Optional
 OPENAI_API_BASE=<custom-base-url>
 OPENAI_ORGANIZATION=<your-org-id>
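The required/optional split in the snippet above can be validated programmatically before wiring up a crew. A minimal sketch, assuming only that these variables are read from the process environment (the helper name and return shape are illustrative, not part of CrewAI):

```python
import os

def load_openai_settings() -> dict:
    """Collect the OpenAI variables from the snippet above.

    Only OPENAI_API_KEY is required; the optional base URL and
    organization fall back to None when unset.
    """
    api_key = os.environ.get("OPENAI_API_KEY")
    if not api_key:
        raise RuntimeError("OPENAI_API_KEY is not set")
    return {
        "api_key": api_key,
        "api_base": os.environ.get("OPENAI_API_BASE"),           # optional
        "organization": os.environ.get("OPENAI_ORGANIZATION"),   # optional
    }
```

Failing fast like this at startup gives a clearer error than a mid-run authentication failure.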
@@ -226,7 +226,7 @@ In this section, you'll find detailed examples that help you select, configure,
 AZURE_API_KEY=<your-api-key>
 AZURE_API_BASE=<your-resource-url>
 AZURE_API_VERSION=<api-version>

 # Optional
 AZURE_AD_TOKEN=<your-azure-ad-token>
 AZURE_API_TYPE=<your-azure-api-type>
@@ -289,7 +289,7 @@ In this section, you'll find detailed examples that help you select, configure,
 | Mistral 8x7B Instruct | Up to 32k tokens | An MoE LLM that follows instructions, completes requests, and generates creative text. |

 </Accordion>

 <Accordion title="Amazon SageMaker">
 ```toml Code
 AWS_ACCESS_KEY_ID=<your-access-key>
@@ -474,7 +474,7 @@ In this section, you'll find detailed examples that help you select, configure,
 WATSONX_URL=<your-url>
 WATSONX_APIKEY=<your-apikey>
 WATSONX_PROJECT_ID=<your-project-id>

 # Optional
 WATSONX_TOKEN=<your-token>
 WATSONX_DEPLOYMENT_SPACE_ID=<your-space-id>
@@ -491,7 +491,7 @@ In this section, you'll find detailed examples that help you select, configure,

 <Accordion title="Ollama (Local LLMs)">
 1. Install Ollama: [ollama.ai](https://ollama.ai/)
-2. Run a model: `ollama run llama2`
+2. Run a model: `ollama run llama3`
 3. Configure:

 ```python Code
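The three steps in this hunk amount to pointing CrewAI's `LLM` at the local Ollama server. A hedged sketch of the keyword arguments (port 11434 is Ollama's standard default; the helper is illustrative, not part of CrewAI):

```python
def ollama_llm_kwargs(model: str = "llama3",
                      base_url: str = "http://localhost:11434") -> dict:
    """Build kwargs for an LLM(...) call targeting a local Ollama model.

    The "ollama/" provider prefix tells the model router which
    backend to use.
    """
    return {"model": f"ollama/{model}", "base_url": base_url}

# Usage sketch (assumes `from crewai import LLM`):
#   llm = LLM(**ollama_llm_kwargs())
```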
@@ -600,7 +600,7 @@ In this section, you'll find detailed examples that help you select, configure,
 ```toml Code
 OPENROUTER_API_KEY=<your-api-key>
 ```

 Example usage in your CrewAI project:
 ```python Code
 llm = LLM(
@@ -723,7 +723,7 @@ Learn how to get the most out of your LLM configuration:
 - Small tasks (up to 4K tokens): Standard models
 - Medium tasks (between 4K-32K): Enhanced models
 - Large tasks (over 32K): Large context models

 ```python
 # Configure model with appropriate settings
 llm = LLM(
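The three size tiers in this hunk can be captured in a small routing helper. A minimal sketch under those thresholds (the tier names are illustrative labels, not CrewAI terms):

```python
def context_tier(task_tokens: int) -> str:
    """Map an estimated task size to the tier suggested in the docs:
    up to 4K -> standard, 4K-32K -> enhanced, over 32K -> large-context."""
    if task_tokens <= 4_000:
        return "standard"
    if task_tokens <= 32_000:
        return "enhanced"
    return "large-context"
```

A dispatcher could use this label to pick a concrete model string per provider.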
@@ -760,11 +760,11 @@ Learn how to get the most out of your LLM configuration:
 <Warning>
 Most authentication issues can be resolved by checking API key format and environment variable names.
 </Warning>

 ```bash
 # OpenAI
 OPENAI_API_KEY=sk-...

 # Anthropic
 ANTHROPIC_API_KEY=sk-ant-...
 ```
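The "check API key format" advice from this hunk can be mechanized with a simple prefix check, using only the prefixes shown in the snippet (the mapping and function are an illustrative sketch):

```python
# Key prefixes as shown in the docs snippet above.
EXPECTED_KEY_PREFIXES = {
    "OPENAI_API_KEY": "sk-",
    "ANTHROPIC_API_KEY": "sk-ant-",
}

def key_format_looks_valid(var_name: str, value: str) -> bool:
    """True when the value starts with the prefix expected for that variable."""
    prefix = EXPECTED_KEY_PREFIXES.get(var_name)
    return prefix is not None and value.startswith(prefix)
```

This catches the common mistake of pasting a key into the wrong variable name.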
@@ -773,11 +773,11 @@ Learn how to get the most out of your LLM configuration:
 <Check>
 Always include the provider prefix in model names
 </Check>

 ```python
 # Correct
 llm = LLM(model="openai/gpt-4")

 # Incorrect
 llm = LLM(model="gpt-4")
 ```
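The provider-prefix rule from this hunk is easy to assert on before constructing an LLM. A minimal sketch (the helper is illustrative; it only checks the `provider/model` shape, not that the provider exists):

```python
def has_provider_prefix(model: str) -> bool:
    """True when the model string carries a provider prefix such as 'openai/'."""
    provider, sep, name = model.partition("/")
    return bool(sep and provider and name)
```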
@@ -786,5 +786,10 @@ Learn how to get the most out of your LLM configuration:
 <Tip>
 Use larger context models for extensive tasks
 </Tip>
+
+```python
+# Large context model
+llm = LLM(model="openai/gpt-4o")  # 128K tokens
+```
 </Tab>
 </Tabs>