Merge branch 'main' into intergrate-mem0

This commit is contained in:
Dev-Khant
2024-09-23 15:00:41 +05:30
124 changed files with 38469 additions and 16222 deletions

docs/core-concepts/LLMs.md Normal file

@@ -0,0 +1,155 @@
# Large Language Models (LLMs) in crewAI
## Introduction
Large Language Models (LLMs) are the backbone of intelligent agents in the crewAI framework. This guide will help you understand, configure, and optimize LLM usage for your crewAI projects.
## Table of Contents
- [Key Concepts](#key-concepts)
- [Configuring LLMs for Agents](#configuring-llms-for-agents)
- [1. Default Configuration](#1-default-configuration)
- [2. String Identifier](#2-string-identifier)
- [3. LLM Instance](#3-llm-instance)
- [4. Custom LLM Objects](#4-custom-llm-objects)
- [Connecting to OpenAI-Compatible LLMs](#connecting-to-openai-compatible-llms)
- [LLM Configuration Options](#llm-configuration-options)
- [Using Ollama (Local LLMs)](#using-ollama-local-llms)
- [Changing the Base API URL](#changing-the-base-api-url)
- [Best Practices](#best-practices)
- [Troubleshooting](#troubleshooting)
## Key Concepts
- **LLM**: Large Language Model, the AI powering agent intelligence
- **Agent**: A crewAI entity that uses an LLM to perform tasks
- **Provider**: A service that offers LLM capabilities (e.g., OpenAI, Anthropic, Ollama, [more providers](https://docs.litellm.ai/docs/providers))
## Configuring LLMs for Agents
crewAI offers flexible options for setting up LLMs:
### 1. Default Configuration
By default, crewAI uses the `gpt-4o-mini` model. If no LLM is specified, it reads its configuration from these environment variables (see the sketch after this list):
- `OPENAI_MODEL_NAME` (defaults to "gpt-4o-mini" if not set)
- `OPENAI_API_BASE`
- `OPENAI_API_KEY`
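A minimal sketch that relies entirely on this default configuration (the API key value is a placeholder):
```python
import os
from crewai import Agent

os.environ["OPENAI_API_KEY"] = "your-api-key"      # placeholder
os.environ["OPENAI_MODEL_NAME"] = "gpt-4o-mini"    # optional; this is the default

# No `llm` argument: the agent picks up the environment settings above
agent = Agent(
    role="Assistant",
    goal="Answer questions accurately",
    backstory="A helpful general-purpose assistant.",
)
```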
### 2. String Identifier
```python
agent = Agent(llm="gpt-4o", ...)
```
### 3. LLM Instance
See the list of [more providers](https://docs.litellm.ai/docs/providers).
```python
from crewai import Agent, LLM

llm = LLM(model="gpt-4", temperature=0.7)
agent = Agent(llm=llm, ...)
```
### 4. Custom LLM Objects
Pass a custom LLM implementation or object from another library.
## Connecting to OpenAI-Compatible LLMs
You can connect to OpenAI-compatible LLMs using either environment variables or by setting specific attributes on the LLM class:
1. Using environment variables:
```python
import os
os.environ["OPENAI_API_KEY"] = "your-api-key"
os.environ["OPENAI_API_BASE"] = "https://api.your-provider.com/v1"
```
2. Using LLM class attributes:
```python
from crewai import Agent, LLM

llm = LLM(
model="custom-model-name",
api_key="your-api-key",
base_url="https://api.your-provider.com/v1"
)
agent = Agent(llm=llm, ...)
```
## LLM Configuration Options
When configuring an LLM for your agent, you have access to a wide range of parameters:
| Parameter | Type | Description |
|-----------|------|-------------|
| `model` | str | The name of the model to use (e.g., "gpt-4", "gpt-3.5-turbo", "ollama/llama3.1", [more providers](https://docs.litellm.ai/docs/providers)) |
| `timeout` | float, int | Maximum time (in seconds) to wait for a response |
| `temperature` | float | Controls randomness in output (0.0 to 1.0) |
| `top_p` | float | Controls diversity of output (0.0 to 1.0) |
| `n` | int | Number of completions to generate |
| `stop` | str, List[str] | Sequence(s) to stop generation |
| `max_tokens` | int | Maximum number of tokens to generate |
| `presence_penalty` | float | Penalizes new tokens based on their presence in the text so far |
| `frequency_penalty` | float | Penalizes new tokens based on their frequency in the text so far |
| `logit_bias` | Dict[int, float] | Modifies likelihood of specified tokens appearing in the completion |
| `response_format` | Dict[str, Any] | Specifies the format of the response (e.g., {"type": "json_object"}) |
| `seed` | int | Sets a random seed for deterministic results |
| `logprobs` | bool | Whether to return log probabilities of the output tokens |
| `top_logprobs` | int | Number of most likely tokens to return the log probabilities for |
| `base_url` | str | The base URL for the API endpoint |
| `api_version` | str | The version of the API to use |
| `api_key` | str | Your API key for authentication |
Example:
```python
from crewai import Agent, LLM

llm = LLM(
model="gpt-4",
temperature=0.8,
max_tokens=150,
top_p=0.9,
frequency_penalty=0.1,
presence_penalty=0.1,
stop=["END"],
seed=42,
base_url="https://api.openai.com/v1",
api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
```
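If your model supports JSON mode, the same mechanism can request structured output via `response_format` (a sketch; support varies by provider and model):
```python
from crewai import LLM

# Ask the model to return a JSON object instead of free-form text
json_llm = LLM(
    model="gpt-4o-mini",
    response_format={"type": "json_object"},
)
```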
## Using Ollama (Local LLMs)
crewAI supports using Ollama for running open-source models locally:
1. Install Ollama: [ollama.ai](https://ollama.ai/)
2. Run a model: `ollama run llama3.1`
3. Configure agent:
```python
from crewai import Agent, LLM

agent = Agent(
llm=LLM(model="ollama/llama3.1", base_url="http://localhost:11434"),
...
)
```
## Changing the Base API URL
You can change the base API URL for any LLM provider by setting the `base_url` parameter:
```python
from crewai import Agent, LLM

llm = LLM(
model="custom-model-name",
base_url="https://api.your-provider.com/v1",
api_key="your-api-key"
)
agent = Agent(llm=llm, ...)
```
This is particularly useful when working with OpenAI-compatible APIs or when you need to specify a different endpoint for your chosen provider.
## Best Practices
1. **Choose the right model**: Balance capability and cost.
2. **Optimize prompts**: Clear, concise instructions improve output.
3. **Manage tokens**: Monitor and limit token usage for efficiency.
4. **Use appropriate temperature**: Lower for factual tasks, higher for creative ones.
5. **Implement error handling**: Gracefully manage API errors and rate limits, as in the retry sketch below.
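As one illustration, here is a minimal retry-with-backoff sketch around `crew.kickoff()`; the broad `except` is deliberate and should be narrowed to your provider's error classes:
```python
import time

def kickoff_with_retries(crew, max_retries=3):
    """Run a crew, retrying transient API failures with exponential backoff."""
    for attempt in range(1, max_retries + 1):
        try:
            return crew.kickoff()
        except Exception:  # narrow this to your provider's error types
            if attempt == max_retries:
                raise
            time.sleep(2 ** attempt)  # back off before the next attempt
```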
## Troubleshooting
- **API Errors**: Check your API key, network connection, and rate limits.
- **Unexpected Outputs**: Refine your prompts and adjust temperature or top_p.
- **Performance Issues**: Consider using a more powerful model or optimizing your queries.
- **Timeout Errors**: Increase the `timeout` parameter (see the snippet below) or optimize your input.
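For timeouts in particular, a more generous limit can be set when constructing the LLM:
```python
from crewai import LLM

# Allow up to two minutes per response before a timeout error is raised
llm = LLM(model="gpt-4", timeout=120)
```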


@@ -28,7 +28,7 @@ description: Leveraging memory systems in the crewAI framework to enhance agent
## Implementing Memory in Your Crew
When configuring a crew, you can enable and customize each memory component to suit the crew's objectives and the nature of tasks it will perform.
By default, the memory system is disabled; you can activate it by setting `memory=True` in the crew configuration. Memory uses OpenAI embeddings by default, but you can change this by setting `embedder` to a different model. It's also possible to supply your own memory instances.
The `embedder` only applies to **Short-Term Memory**, which uses Chroma for RAG via the EmbedChain package.
The **Long-Term Memory** uses SQLite3 to store task results. These storage implementations can be overridden by passing custom memory instances, as shown in the example below.
@@ -50,6 +50,45 @@ my_crew = Crew(
)
```
### Example: Using Custom Memory Instances, e.g., FAISS as the VectorDB
```python
from crewai import Crew, Agent, Task, Process
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage

# EnhanceLongTermMemory, EnhanceShortTermMemory, EnhanceEntityMemory and
# CustomRAGStorage are your own implementations, not classes shipped with crewAI.
# `embedder` is assumed to be your embedding configuration, e.g.:
# embedder = {"model": "your-embedding-model", "dimension": 1536}

# Assemble your crew with memory capabilities
my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    long_term_memory=EnhanceLongTermMemory(
        storage=LTMSQLiteStorage(
            db_path="/my_data_dir/my_crew1/long_term_memory_storage.db"
        )
    ),
    short_term_memory=EnhanceShortTermMemory(
        storage=CustomRAGStorage(
            crew_name="my_crew",
            storage_type="short_term",
            data_dir="/my_data_dir",
            model=embedder["model"],
            dimension=embedder["dimension"],
        ),
    ),
    entity_memory=EnhanceEntityMemory(
        storage=CustomRAGStorage(
            crew_name="my_crew",
            storage_type="entities",
            data_dir="/my_data_dir",
            model=embedder["model"],
            dimension=embedder["dimension"],
        ),
    ),
    verbose=True,
)
```
## Additional Embedding Providers
### Using OpenAI embeddings (already default)