---
title: Memory
description: Leveraging memory systems in the CrewAI framework to enhance agent capabilities.
icon: database
---

## Introduction to Memory Systems in CrewAI

The CrewAI framework introduces a sophisticated memory system designed to significantly enhance the capabilities of AI agents.
This system comprises `short-term memory`, `long-term memory`, `entity memory`, `contextual memory`, and `user memory`, each serving a unique purpose in aiding agents to remember, reason, and learn from past interactions.

## Memory System Components

| Component             | Description                                                                                                                      |
| :-------------------- | :------------------------------------------------------------------------------------------------------------------------------ |
| **Short-Term Memory** | Temporarily stores recent interactions and outcomes using `RAG`, enabling agents to recall and utilize information relevant to their current context during current executions. |
| **Long-Term Memory**  | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time.   |
| **Entity Memory**     | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses `RAG` for storing entity information. |
| **Contextual Memory** | Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, and `EntityMemory`, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |
| **User Memory**       | Stores user-specific information and preferences, enhancing personalization and user experience.                                 |

## How Memory Systems Empower Agents

1. **Contextual Awareness**: With short-term and contextual memory, agents gain the ability to maintain context over a conversation or task sequence, leading to more coherent and relevant responses.

2. **Experience Accumulation**: Long-term memory allows agents to accumulate experiences, learning from past actions to improve future decision-making and problem-solving.

3. **Entity Understanding**: By maintaining entity memory, agents can recognize and remember key entities, enhancing their ability to process and interact with complex information.

## Implementing Memory in Your Crew

When configuring a crew, you can enable and customize each memory component to suit the crew's objectives and the nature of tasks it will perform.
By default, the memory system is disabled, and you can activate it by setting `memory=True` in the crew configuration.
The memory will use OpenAI embeddings by default, but you can change this by setting `embedder` to a different model.
It's also possible to initialize the memory with your own storage instances.

The `embedder` only applies to **Short-Term Memory**, which uses Chroma for RAG.
The **Long-Term Memory** uses SQLite3 to store task results. Currently, there is no way to override these storage implementations.
The data storage files are saved into a platform-specific location found using the `appdirs` package,
and the storage location can be overridden using the **CREWAI_STORAGE_DIR** environment variable, as shown in the sketch below.

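For example, here is a minimal sketch of pointing memory storage at a custom directory (the path shown is only illustrative); set the variable before the crew and its memory are created:

```python Code
import os

# Illustrative path: override where CrewAI saves its memory storage files.
# This must be set before the Crew (and its memory) is instantiated.
os.environ["CREWAI_STORAGE_DIR"] = "./my_crew_storage"
```
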
### Example: Configuring Memory for a Crew

```python Code
from crewai import Crew, Agent, Task, Process

# Assemble your crew with memory capabilities
my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True
)
```

### Example: Use Custom Memory Instances, e.g., FAISS as the VectorDB

```python Code
from crewai import Crew, Agent, Task, Process
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage

# Note: EnhanceLongTermMemory, EnhanceShortTermMemory, EnhanceEntityMemory,
# CustomRAGStorage, and the `embedder` config dict are custom implementations
# you provide; they are not shipped with CrewAI.

# Assemble your crew with memory capabilities
my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    long_term_memory=EnhanceLongTermMemory(
        storage=LTMSQLiteStorage(
            db_path="/my_data_dir/my_crew1/long_term_memory_storage.db"
        )
    ),
    short_term_memory=EnhanceShortTermMemory(
        storage=CustomRAGStorage(
            crew_name="my_crew",
            storage_type="short_term",
            data_dir="//my_data_dir",
            model=embedder["model"],
            dimension=embedder["dimension"],
        ),
    ),
    entity_memory=EnhanceEntityMemory(
        storage=CustomRAGStorage(
            crew_name="my_crew",
            storage_type="entities",
            data_dir="//my_data_dir",
            model=embedder["model"],
            dimension=embedder["dimension"],
        ),
    ),
    verbose=True,
)
```

## Integrating Mem0 for Enhanced User Memory

[Mem0](https://mem0.ai/) is a self-improving memory layer for LLM applications, enabling personalized AI experiences.

To include user-specific memory, you can get your API key [here](https://app.mem0.ai/dashboard/api-keys) and refer to the [docs](https://docs.mem0.ai/platform/quickstart#4-1-create-memories) for adding user preferences.

```python Code
import os
from crewai import Crew, Process
from mem0 import MemoryClient

# Set environment variables for Mem0
os.environ["MEM0_API_KEY"] = "m0-xx"

# Step 1: Record preferences based on past conversation or user input
client = MemoryClient()
messages = [
    {"role": "user", "content": "Hi there! I'm planning a vacation and could use some advice."},
    {"role": "assistant", "content": "Hello! I'd be happy to help with your vacation planning. What kind of destination do you prefer?"},
    {"role": "user", "content": "I am more of a beach person than a mountain person."},
    {"role": "assistant", "content": "That's interesting. Do you like hotels or Airbnb?"},
    {"role": "user", "content": "I like Airbnb more."},
]
client.add(messages, user_id="john")

# Step 2: Create a Crew with User Memory
crew = Crew(
    agents=[...],
    tasks=[...],
    verbose=True,
    process=Process.sequential,
    memory=True,
    memory_config={
        "provider": "mem0",
        "config": {"user_id": "john"},
    },
)
```

## Memory Configuration Options

If you want to access a specific Mem0 organization and project, you can set the `org_id` and `project_id` parameters in the memory configuration.

```python Code
from crewai import Crew

crew = Crew(
    agents=[...],
    tasks=[...],
    verbose=True,
    memory=True,
    memory_config={
        "provider": "mem0",
        "config": {"user_id": "john", "org_id": "my_org_id", "project_id": "my_project_id"},
    },
)
```

## Additional Embedding Providers
### Using OpenAI embeddings (already default)

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "openai",
        "config": {
            "model": "text-embedding-3-small"
        }
    }
)
```

Alternatively, you can directly pass the `OpenAIEmbeddingFunction` to the `embedder` parameter.

Example:

```python Code
import os

from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder=OpenAIEmbeddingFunction(api_key=os.getenv("OPENAI_API_KEY"), model_name="text-embedding-3-small"),
)
```

### Using Ollama embeddings

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "ollama",
        "config": {
            "model": "mxbai-embed-large"
        }
    }
)
```

### Using Google AI embeddings

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "google",
        "config": {
            "api_key": "<YOUR_API_KEY>",
            "model_name": "<model_name>"
        }
    }
)
```

### Using Azure OpenAI embeddings

```python Code
from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder=OpenAIEmbeddingFunction(
        api_key="YOUR_API_KEY",
        api_base="YOUR_API_BASE_PATH",
        api_type="azure",
        api_version="YOUR_API_VERSION",
        model_name="text-embedding-3-small"
    )
)
```

### Using Vertex AI embeddings

```python Code
from chromadb.utils.embedding_functions import GoogleVertexEmbeddingFunction
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder=GoogleVertexEmbeddingFunction(
        project_id="YOUR_PROJECT_ID",
        region="YOUR_REGION",
        api_key="YOUR_API_KEY",
        model_name="textembedding-gecko"
    )
)
```

### Using Cohere embeddings

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "cohere",
        "config": {
            "api_key": "YOUR_API_KEY",
            "model_name": "<model_name>"
        }
    }
)
```

### Using HuggingFace embeddings

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "huggingface",
        "config": {
            "api_url": "<api_url>",
        }
    }
)
```

### Using Watson embeddings

```python Code
from crewai import Crew, Agent, Task, Process

# Note: Ensure you have installed and imported `ibm_watsonx_ai` for Watson embeddings to work.

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "watson",
        "config": {
            "model": "<model_name>",
            "api_url": "<api_url>",
            "api_key": "<YOUR_API_KEY>",
            "project_id": "<YOUR_PROJECT_ID>",
        }
    }
)
```

### Resetting Memory

```shell
crewai reset-memories [OPTIONS]
```

#### Resetting Memory Options

| Option                    | Description                        | Type           | Default |
| :------------------------ | :--------------------------------- | :------------- | :------ |
| `-l`, `--long`            | Reset LONG TERM memory.            | Flag (boolean) | False   |
| `-s`, `--short`           | Reset SHORT TERM memory.           | Flag (boolean) | False   |
| `-e`, `--entities`        | Reset ENTITIES memory.             | Flag (boolean) | False   |
| `-k`, `--kickoff-outputs` | Reset LATEST KICKOFF TASK OUTPUTS. | Flag (boolean) | False   |
| `-a`, `--all`             | Reset ALL memories.                | Flag (boolean) | False   |

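For example, to clear only the short-term and entity stores, combine the corresponding flags from the table above:

```shell
crewai reset-memories --short --entities
```

To wipe everything at once, use `crewai reset-memories --all`.
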
## Benefits of Using CrewAI's Memory System

- 🦾 **Adaptive Learning:** Crews become more efficient over time, adapting to new information and refining their approach to tasks.
- 🫡 **Enhanced Personalization:** Memory enables agents to remember user preferences and historical interactions, leading to personalized experiences.
- 🧠 **Improved Problem Solving:** Access to a rich memory store aids agents in making more informed decisions, drawing on past learnings and contextual insights.

## Conclusion

Integrating CrewAI's memory system into your projects is straightforward. By leveraging the provided memory components and configurations,
you can quickly empower your agents with the ability to remember, reason, and learn from their interactions, unlocking new levels of intelligence and capability.