---
title: Memory
description: Leveraging memory systems in the CrewAI framework to enhance agent capabilities.
icon: database
---
## Introduction to Memory Systems in CrewAI

The CrewAI framework introduces a sophisticated memory system designed to significantly enhance the capabilities of AI agents.
This system comprises `short-term memory`, `long-term memory`, `entity memory`, `contextual memory`, and `user memory`, each serving a unique purpose in helping agents remember, reason, and learn from past interactions.
## Memory System Components
|
|
|
|
| Component | Description |
|
|
| :------------------- | :---------------------------------------------------------------------------------------------------------------------- |
|
|
| **Short-Term Memory**| Temporarily stores recent interactions and outcomes using `RAG`, enabling agents to recall and utilize information relevant to their current context during the current executions.|
|
|
| **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. |
|
|
| **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses `RAG` for storing entity information. |
|
|
| **Contextual Memory**| Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, and `EntityMemory`, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |
|
|
| **User Memory** | Stores user-specific information and preferences, enhancing personalization and user experience. |
|
|
|
|
## How Memory Systems Empower Agents

1. **Contextual Awareness**: With short-term and contextual memory, agents gain the ability to maintain context over a conversation or task sequence, leading to more coherent and relevant responses.

2. **Experience Accumulation**: Long-term memory allows agents to accumulate experiences, learning from past actions to improve future decision-making and problem-solving.

3. **Entity Understanding**: By maintaining entity memory, agents can recognize and remember key entities, enhancing their ability to process and interact with complex information.
## Implementing Memory in Your Crew

When configuring a crew, you can enable and customize each memory component to suit the crew's objectives and the nature of the tasks it will perform.
By default, the memory system is disabled; activate it by setting `memory=True` in the crew configuration.
The memory uses OpenAI embeddings by default, but you can change this by setting `embedder` to a different model.
It's also possible to initialize the memory components with instances of your own.

The `embedder` setting only applies to **Short-Term Memory**, which uses Chroma for RAG.
**Long-Term Memory** uses SQLite3 to store task results. Currently, there is no way to override these storage implementations.
The data storage files are saved into a platform-specific location found using the `appdirs` package,
and the name of the project folder can be overridden with the **CREWAI_STORAGE_DIR** environment variable.
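For example, here is a minimal, illustrative sketch of grouping the default memory files under a project name of your choosing (the `my_crew_project` name and the `os.environ` call are assumptions for illustration; any standard way of setting the environment variable works):

```python Code
import os

from crewai import Crew, Agent, Task, Process

# Illustrative: choose the directory name used for this project's default memory files.
# Set the variable before the crew (and its memory storage) is created.
os.environ["CREWAI_STORAGE_DIR"] = "my_crew_project"

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
)
```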
### Example: Configuring Memory for a Crew

```python Code
from crewai import Crew, Agent, Task, Process

# Assemble your crew with memory capabilities
my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True
)
```
### Example: Use Custom Memory Instances, e.g., FAISS as the VectorDB

```python Code
from crewai import Crew, Process
from crewai.memory import LongTermMemory, ShortTermMemory, EntityMemory
from crewai.memory.storage import LTMSQLiteStorage, RAGStorage

# Assemble your crew with memory capabilities
my_crew: Crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    # Long-term memory for persistent storage across sessions
    long_term_memory=LongTermMemory(
        storage=LTMSQLiteStorage(
            db_path="./my_crew1/long_term_memory_storage.db"
        )
    ),
    # Short-term memory for current context using RAG
    short_term_memory=ShortTermMemory(
        storage=RAGStorage(
            embedder_config={
                "provider": "openai",
                "config": {
                    "model": "text-embedding-3-small"
                }
            },
            type="short_term",
            path="./my_crew1/"
        )
    ),
    # Entity memory for tracking key information about entities
    entity_memory=EntityMemory(
        storage=RAGStorage(
            embedder_config={
                "provider": "openai",
                "config": {
                    "model": "text-embedding-3-small"
                }
            },
            type="entities",
            path="./my_crew1/"
        )
    ),
    verbose=True,
)
```
## Security Considerations

When configuring memory storage:

- Use environment variables for storage paths (e.g., `CREWAI_STORAGE_DIR`)
- Never hardcode sensitive information like database credentials
- Consider access permissions for storage directories
- Use relative paths when possible to maintain portability

Example using environment variables:

```python
import os
from crewai import Crew
from crewai.memory import LongTermMemory
from crewai.memory.storage import LTMSQLiteStorage

# Configure the storage path using an environment variable
storage_path = os.getenv("CREWAI_STORAGE_DIR", "./storage")
crew = Crew(
    memory=True,
    long_term_memory=LongTermMemory(
        storage=LTMSQLiteStorage(
            db_path=f"{storage_path}/memory.db"
        )
    )
)
```
## Configuration Examples

### Basic Memory Configuration

```python
from crewai import Crew

# Simple memory configuration
crew = Crew(memory=True)  # Uses default storage locations
```

### Custom Storage Configuration

```python
from crewai import Crew
from crewai.memory import LongTermMemory
from crewai.memory.storage import LTMSQLiteStorage

# Configure custom storage paths
crew = Crew(
    memory=True,
    long_term_memory=LongTermMemory(
        storage=LTMSQLiteStorage(db_path="./memory.db")
    )
)
```
## Integrating Mem0 for Enhanced User Memory

[Mem0](https://mem0.ai/) is a self-improving memory layer for LLM applications, enabling personalized AI experiences.

To include user-specific memory, get your API key [here](https://app.mem0.ai/dashboard/api-keys) and refer to the [docs](https://docs.mem0.ai/platform/quickstart#4-1-create-memories) for adding user preferences.
```python Code
import os
from crewai import Crew, Process
from mem0 import MemoryClient

# Set environment variables for Mem0
os.environ["MEM0_API_KEY"] = "m0-xx"

# Step 1: Record preferences based on past conversation or user input
client = MemoryClient()
messages = [
    {"role": "user", "content": "Hi there! I'm planning a vacation and could use some advice."},
    {"role": "assistant", "content": "Hello! I'd be happy to help with your vacation planning. What kind of destination do you prefer?"},
    {"role": "user", "content": "I am more of a beach person than a mountain person."},
    {"role": "assistant", "content": "That's interesting. Do you like hotels or Airbnb?"},
    {"role": "user", "content": "I like Airbnb more."},
]
client.add(messages, user_id="john")

# Step 2: Create a Crew with User Memory
crew = Crew(
    agents=[...],
    tasks=[...],
    verbose=True,
    process=Process.sequential,
    memory=True,
    memory_config={
        "provider": "mem0",
        "config": {"user_id": "john"},
    },
)
```
## Memory Configuration Options

If you want to access a specific Mem0 organization and project, you can set the `org_id` and `project_id` parameters in the memory configuration.
```python Code
from crewai import Crew

crew = Crew(
    agents=[...],
    tasks=[...],
    verbose=True,
    memory=True,
    memory_config={
        "provider": "mem0",
        "config": {"user_id": "john", "org_id": "my_org_id", "project_id": "my_project_id"},
    },
)
```
## Additional Embedding Providers

### Using OpenAI embeddings (the default)
```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "openai",
        "config": {
            "model": "text-embedding-3-small"
        }
    }
)
```
Alternatively, you can pass a ChromaDB `OpenAIEmbeddingFunction` instance directly through the `custom` provider.

Example:
```python Code
from crewai import Crew, Agent, Task, Process
from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "custom",
        "config": {
            "embedder": OpenAIEmbeddingFunction(
                api_key="YOUR_API_KEY",
                model_name="text-embedding-3-small"
            )
        }
    }
)
```
### Using Ollama embeddings

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "ollama",
        "config": {
            "model": "mxbai-embed-large"
        }
    }
)
```
### Using Google AI embeddings

#### Prerequisites
Before using Google AI embeddings, ensure you have:
- Access to the Gemini API
- The necessary API keys and permissions

You will need to update your *pyproject.toml* dependencies:
```toml
dependencies = [
    "google-generativeai>=0.8.4",  # main version in January/2025 - crewai v0.100.0 and crewai-tools 0.33.0
    "crewai[tools]>=0.100.0,<1.0.0"
]
```

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "google",
        "config": {
            "api_key": "<YOUR_API_KEY>",
            "model": "<model_name>"
        }
    }
)
```
### Using Azure OpenAI embeddings

```python Code
from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "openai",
        "config": {
            "api_key": "YOUR_API_KEY",
            "api_base": "YOUR_API_BASE_PATH",
            "api_version": "YOUR_API_VERSION",
            "model_name": "text-embedding-3-small"
        }
    }
)
```
### Using Vertex AI embeddings

```python Code
from chromadb.utils.embedding_functions import GoogleVertexEmbeddingFunction
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "vertexai",
        "config": {
            "project_id": "YOUR_PROJECT_ID",
            "region": "YOUR_REGION",
            "api_key": "YOUR_API_KEY",
            "model_name": "textembedding-gecko"
        }
    }
)
```
### Using Cohere embeddings

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "cohere",
        "config": {
            "api_key": "YOUR_API_KEY",
            "model": "<model_name>"
        }
    }
)
```
### Using VoyageAI embeddings

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "voyageai",
        "config": {
            "api_key": "YOUR_API_KEY",
            "model": "<model_name>"
        }
    }
)
```
### Using HuggingFace embeddings

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "huggingface",
        "config": {
            "api_url": "<api_url>"
        }
    }
)
```
### Using Watson embeddings

```python Code
from crewai import Crew, Agent, Task, Process

# Note: Ensure you have installed and imported `ibm_watsonx_ai` for Watson embeddings to work.

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "watson",
        "config": {
            "model": "<model_name>",
            "api_url": "<api_url>",
            "api_key": "<YOUR_API_KEY>",
            "project_id": "<YOUR_PROJECT_ID>",
        }
    }
)
```
### Using Amazon Bedrock embeddings

```python Code
# Note: Ensure you have installed `boto3` for Bedrock embeddings to work.

import os
import boto3
from crewai import Crew, Agent, Task, Process

boto3_session = boto3.Session(
    region_name=os.environ.get("AWS_REGION_NAME"),
    aws_access_key_id=os.environ.get("AWS_ACCESS_KEY_ID"),
    aws_secret_access_key=os.environ.get("AWS_SECRET_ACCESS_KEY")
)

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    embedder={
        "provider": "bedrock",
        "config": {
            "session": boto3_session,
            "model": "amazon.titan-embed-text-v2:0",
            "vector_dimension": 1024
        }
    },
    verbose=True
)
```
### Adding Custom Embedding Function

```python Code
from crewai import Crew, Agent, Task, Process
from chromadb import Documents, EmbeddingFunction, Embeddings

# Create a custom embedding function
class CustomEmbedder(EmbeddingFunction):
    def __call__(self, input: Documents) -> Embeddings:
        # generate one embedding per input document
        return [[1.0, 2.0, 3.0] for _ in input]  # dummy embeddings

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "custom",
        "config": {
            "embedder": CustomEmbedder()
        }
    }
)
```
### Resetting Memory

```shell
crewai reset-memories [OPTIONS]
```

#### Resetting Memory Options
| Option                    | Description                        | Type           | Default |
| :------------------------ | :--------------------------------- | :------------- | :------ |
| `-l`, `--long`            | Reset LONG TERM memory.            | Flag (boolean) | False   |
| `-s`, `--short`           | Reset SHORT TERM memory.           | Flag (boolean) | False   |
| `-e`, `--entities`        | Reset ENTITIES memory.             | Flag (boolean) | False   |
| `-k`, `--kickoff-outputs` | Reset LATEST KICKOFF TASK OUTPUTS. | Flag (boolean) | False   |
| `-a`, `--all`             | Reset ALL memories.                | Flag (boolean) | False   |
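For example, an illustrative pair of invocations combining the flags listed above:

```shell
# Clear only the short-term and entity stores
crewai reset-memories --short --entities

# Clear everything, including long-term memory and the latest kickoff outputs
crewai reset-memories --all
```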
## Benefits of Using CrewAI's Memory System

- 🦾 **Adaptive Learning:** Crews become more efficient over time, adapting to new information and refining their approach to tasks.
- 🫡 **Enhanced Personalization:** Memory enables agents to remember user preferences and historical interactions, leading to personalized experiences.
- 🧠 **Improved Problem Solving:** Access to a rich memory store aids agents in making more informed decisions, drawing on past learnings and contextual insights.
## Conclusion

Integrating CrewAI's memory system into your projects is straightforward. By leveraging the provided memory components and configurations,
you can quickly empower your agents with the ability to remember, reason, and learn from their interactions, unlocking new levels of intelligence and capability.