mirror of https://github.com/crewAIInc/crewAI.git, synced 2026-01-08 23:58:34 +00:00

Commit: updating docs

docs/core-concepts/LLMs.md (new file, 155 lines)

@@ -0,0 +1,155 @@
# Large Language Models (LLMs) in crewAI

## Introduction

Large Language Models (LLMs) are the backbone of intelligent agents in the crewAI framework. This guide will help you understand, configure, and optimize LLM usage for your crewAI projects.
## Table of Contents

- [Key Concepts](#key-concepts)
- [Configuring LLMs for Agents](#configuring-llms-for-agents)
  - [1. Default Configuration](#1-default-configuration)
  - [2. String Identifier](#2-string-identifier)
  - [3. LLM Instance](#3-llm-instance)
  - [4. Custom LLM Objects](#4-custom-llm-objects)
- [Connecting to OpenAI-Compatible LLMs](#connecting-to-openai-compatible-llms)
- [LLM Configuration Options](#llm-configuration-options)
- [Using Ollama (Local LLMs)](#using-ollama-local-llms)
- [Changing the Base API URL](#changing-the-base-api-url)
- [Best Practices](#best-practices)
- [Troubleshooting](#troubleshooting)
## Key Concepts

- **LLM**: Large Language Model, the AI powering agent intelligence
- **Agent**: A crewAI entity that uses an LLM to perform tasks
- **Provider**: A service that offers LLM capabilities (e.g., OpenAI, Anthropic, Ollama, [more providers](https://docs.litellm.ai/docs/providers))
## Configuring LLMs for Agents

crewAI offers flexible options for setting up LLMs:

### 1. Default Configuration

By default, crewAI uses the `gpt-4o-mini` model. If no LLM is specified, it falls back to these environment variables (see the sketch after this list):

- `OPENAI_MODEL_NAME` (defaults to "gpt-4o-mini" if not set)
- `OPENAI_API_BASE`
- `OPENAI_API_KEY`
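
As a minimal sketch, relying on the default configuration might look like this (the key value is a placeholder):

```python
import os
from crewai import Agent

os.environ["OPENAI_API_KEY"] = "your-api-key"    # placeholder; use your real key
os.environ["OPENAI_MODEL_NAME"] = "gpt-4o-mini"  # optional; this is the default

# No `llm` argument: the agent picks up the environment configuration above.
agent = Agent(
    role="Researcher",
    goal="Summarize recent findings",
    backstory="A diligent research assistant.",
)
```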
### 2. String Identifier

```python
agent = Agent(llm="gpt-4o", ...)
```

### 3. LLM Instance

For more control over configuration, pass an `LLM` instance. See the full list of [more providers](https://docs.litellm.ai/docs/providers).

```python
from crewai import LLM

llm = LLM(model="gpt-4", temperature=0.7)
agent = Agent(llm=llm, ...)
```
### 4. Custom LLM Objects

Pass a custom LLM implementation or an object from another library.
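
The exact interface crewAI expects from a custom LLM object is not specified here, so the following is only a hypothetical sketch: the `MyCustomLLM` class, its `call` method, and the wrapped `client.complete` call are all assumptions, not documented APIs. Check the crewAI source for the real contract before relying on this.

```python
class MyCustomLLM:
    """Hypothetical adapter around another library's client (assumed interface)."""

    def __init__(self, client, model: str):
        self.client = client
        self.model = model

    def call(self, messages, **kwargs) -> str:
        # Assumed contract: receive chat messages, return the reply as plain text.
        response = self.client.complete(model=self.model, messages=messages, **kwargs)
        return response.text

# agent = Agent(llm=MyCustomLLM(client=my_client, model="my-model"), ...)
```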
## Connecting to OpenAI-Compatible LLMs

You can connect to OpenAI-compatible LLMs using either environment variables or by setting specific attributes on the LLM class:

1. Using environment variables:

```python
import os

os.environ["OPENAI_API_KEY"] = "your-api-key"
os.environ["OPENAI_API_BASE"] = "https://api.your-provider.com/v1"
```

2. Using LLM class attributes:

```python
llm = LLM(
    model="custom-model-name",
    api_key="your-api-key",
    base_url="https://api.your-provider.com/v1"
)
agent = Agent(llm=llm, ...)
```
## LLM Configuration Options

When configuring an LLM for your agent, you have access to a wide range of parameters:

| Parameter | Type | Description |
|-----------|------|-------------|
| `model` | str | The name of the model to use (e.g., "gpt-4", "gpt-3.5-turbo", "ollama/llama3.1", [more providers](https://docs.litellm.ai/docs/providers)) |
| `timeout` | float, int | Maximum time (in seconds) to wait for a response |
| `temperature` | float | Controls randomness in output (0.0 to 1.0) |
| `top_p` | float | Controls diversity of output (0.0 to 1.0) |
| `n` | int | Number of completions to generate |
| `stop` | str, List[str] | Sequence(s) at which to stop generation |
| `max_tokens` | int | Maximum number of tokens to generate |
| `presence_penalty` | float | Penalizes new tokens based on their presence in the text so far |
| `frequency_penalty` | float | Penalizes new tokens based on their frequency in the text so far |
| `logit_bias` | Dict[int, float] | Modifies the likelihood of specified tokens appearing in the completion |
| `response_format` | Dict[str, Any] | Specifies the format of the response (e.g., {"type": "json_object"}) |
| `seed` | int | Sets a random seed for deterministic results |
| `logprobs` | bool | Whether to return log probabilities of the output tokens |
| `top_logprobs` | int | Number of most likely tokens for which to return log probabilities |
| `base_url` | str | The base URL for the API endpoint |
| `api_version` | str | The version of the API to use |
| `api_key` | str | Your API key for authentication |
Example:

```python
llm = LLM(
    model="gpt-4",
    temperature=0.8,
    max_tokens=150,
    top_p=0.9,
    frequency_penalty=0.1,
    presence_penalty=0.1,
    stop=["END"],
    seed=42,
    base_url="https://api.openai.com/v1",
    api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
```
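
Building on the table above, here is a minimal sketch of requesting reproducible JSON output via `response_format` and `seed`; support for these options depends on your provider, and the model name is a placeholder:

```python
json_llm = LLM(
    model="gpt-4o-mini",
    response_format={"type": "json_object"},  # ask the provider for a JSON object
    seed=42,                                  # pin the seed for repeatable runs
    temperature=0.0,                          # minimize randomness
)
agent = Agent(llm=json_llm, ...)
```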
## Using Ollama (Local LLMs)

crewAI supports using Ollama for running open-source models locally:

1. Install Ollama: [ollama.ai](https://ollama.ai/)
2. Pull the model: `ollama pull llama3.1`
3. Configure the agent:

```python
agent = Agent(
    llm=LLM(model="ollama/llama3.1", base_url="http://localhost:11434"),
    ...
)
```
## Changing the Base API URL

You can change the base API URL for any LLM provider by setting the `base_url` parameter:

```python
llm = LLM(
    model="custom-model-name",
    base_url="https://api.your-provider.com/v1",
    api_key="your-api-key"
)
agent = Agent(llm=llm, ...)
```

This is particularly useful when working with OpenAI-compatible APIs or when you need to specify a different endpoint for your chosen provider.
## Best Practices

1. **Choose the right model**: Balance capability and cost.
2. **Optimize prompts**: Clear, concise instructions improve output.
3. **Manage tokens**: Monitor and limit token usage for efficiency.
4. **Use appropriate temperature**: Lower for factual tasks, higher for creative ones.
5. **Implement error handling**: Gracefully manage API errors and rate limits, as in the sketch after this list.
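
A minimal sketch of one way to handle transient failures with retries and exponential backoff; the broad `except Exception` is a placeholder, since the concrete exception types depend on your provider and on how crewAI surfaces them:

```python
import time

def kickoff_with_retries(crew, inputs, max_retries=3):
    """Run crew.kickoff, retrying transient failures with exponential backoff."""
    for attempt in range(1, max_retries + 1):
        try:
            return crew.kickoff(inputs=inputs)
        except Exception as exc:  # Placeholder: narrow to your provider's error types.
            if attempt == max_retries:
                raise
            wait = 2 ** attempt  # 2s, 4s, 8s, ...
            print(f"Attempt {attempt} failed ({exc}); retrying in {wait}s")
            time.sleep(wait)
```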
## Troubleshooting

- **API Errors**: Check your API key, network connection, and rate limits.
- **Unexpected Outputs**: Refine your prompts and adjust `temperature` or `top_p`.
- **Performance Issues**: Consider using a more powerful model or optimizing your queries.
- **Timeout Errors**: Increase the `timeout` parameter (see the sketch below) or optimize your input.
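
For example, a minimal sketch of raising the request timeout (the 120-second value is an arbitrary placeholder):

```python
patient_llm = LLM(model="gpt-4", timeout=120)  # wait up to two minutes per request
agent = Agent(llm=patient_llm, ...)
```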
@@ -5,10 +5,10 @@ description: Comprehensive guide on integrating CrewAI with various Large Langua

## Connect CrewAI to LLMs

CrewAI uses LiteLLM to connect to a wide variety of Language Models (LLMs). This integration provides extensive versatility, allowing you to use models from numerous providers with a simple, unified interface.

!!! note "Default LLM"

    By default, CrewAI uses the `gpt-4o-mini` model. This is determined by the `OPENAI_MODEL_NAME` environment variable, which defaults to "gpt-4o-mini" if not set. You can easily configure your agents to use a different model or provider as described in this guide.
## Supported Providers

@@ -35,7 +35,11 @@ For a complete and up-to-date list of supported providers, please refer to the [

## Changing the LLM

To use a different LLM with your CrewAI agents, you have several options:

### 1. Using a String Identifier

Pass the model name as a string when initializing the agent:

```python
from crewai import Agent

@@ -55,59 +59,105 @@ claude_agent = Agent(
    backstory="An AI assistant leveraging Anthropic's language model.",
    llm='claude-2'
)
```
### 2. Using the LLM Class

For more detailed configuration, use the LLM class:

```python
from crewai import Agent, LLM

llm = LLM(
    model="gpt-4",
    temperature=0.7,
    base_url="https://api.openai.com/v1",
    api_key="your-api-key-here"
)

agent = Agent(
    role='Customized LLM Expert',
    goal='Provide tailored responses',
    backstory="An AI assistant with custom LLM settings.",
    llm=llm
)
```
## Configuration Options

When configuring an LLM for your agent, you have access to a wide range of parameters:

| Parameter | Type | Description |
|-----------|------|-------------|
| `model` | str | The name of the model to use (e.g., "gpt-4", "claude-2") |
| `temperature` | float | Controls randomness in output (0.0 to 1.0) |
| `max_tokens` | int | Maximum number of tokens to generate |
| `top_p` | float | Controls diversity of output (0.0 to 1.0) |
| `frequency_penalty` | float | Penalizes new tokens based on their frequency in the text so far |
| `presence_penalty` | float | Penalizes new tokens based on their presence in the text so far |
| `stop` | str, List[str] | Sequence(s) at which to stop generation |
| `base_url` | str | The base URL for the API endpoint |
| `api_key` | str | Your API key for authentication |

For a complete list of parameters and their descriptions, refer to the LLM class documentation.
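
As an illustrative sketch, several of these parameters combined (all values are placeholders):

```python
from crewai import LLM

llm = LLM(
    model="gpt-4",
    temperature=0.3,  # low randomness for factual tasks
    max_tokens=500,   # cap the response length
    stop=["END"],     # stop generating at this marker
)
```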
## Connecting to OpenAI-Compatible LLMs

You can connect to OpenAI-compatible LLMs using either environment variables or by setting specific attributes on the LLM class:

### Using Environment Variables

```python
import os

os.environ["OPENAI_API_KEY"] = "your-api-key"
os.environ["OPENAI_API_BASE"] = "https://api.your-provider.com/v1"
os.environ["OPENAI_MODEL_NAME"] = "your-model-name"
```
### Using LLM Class Attributes

```python
llm = LLM(
    model="custom-model-name",
    api_key="your-api-key",
    base_url="https://api.your-provider.com/v1"
)
agent = Agent(llm=llm, ...)
```
## Using Local Models with Ollama

For local models like those provided by Ollama:

1. [Download and install Ollama](https://ollama.com/download)
2. Pull the desired model (e.g., `ollama pull llama2`)
3. Configure your agent:

```python
agent = Agent(
    role='Local AI Expert',
    goal='Process information using a local model',
    backstory="An AI assistant running on local hardware.",
    llm=LLM(model="ollama/llama2", base_url="http://localhost:11434")
)
```
## Changing the Base API URL

You can change the base API URL for any LLM provider by setting the `base_url` parameter:

```python
llm = LLM(
    model="custom-model-name",
    base_url="https://api.your-provider.com/v1",
    api_key="your-api-key"
)
agent = Agent(llm=llm, ...)
```

This is particularly useful when working with OpenAI-compatible APIs or when you need to specify a different endpoint for your chosen provider.
## Conclusion

By leveraging LiteLLM, CrewAI offers seamless integration with a vast array of LLMs. This flexibility allows you to choose the most suitable model for your specific needs, whether you prioritize performance, cost-efficiency, or local deployment. Remember to consult the [LiteLLM documentation](https://docs.litellm.ai/docs/) for the most up-to-date information on supported models and configuration options.
@@ -53,6 +53,11 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By

        Crews
      </a>
    </li>
    <li>
      <a href="./core-concepts/LLMs">
        LLMs
      </a>
    </li>
    <li>
      <a href="./core-concepts/Pipeline">
        Pipeline