---
title: Memory
description: Leveraging memory systems in the CrewAI framework to enhance agent capabilities.
icon: database
---

## Introduction to Memory Systems in CrewAI

The CrewAI framework introduces a sophisticated memory system designed to significantly enhance the capabilities of AI agents.
This system comprises `short-term memory`, `long-term memory`, `entity memory`, and `contextual memory`, each serving a unique purpose in helping agents remember,
reason, and learn from past interactions.

## Memory System Components

| Component | Description |
| :------------------- | :---------------------------------------------------------------------------------------------------------------------- |
| **Short-Term Memory**| Temporarily stores recent interactions and outcomes using `RAG`, enabling agents to recall and utilize information relevant to their current context during the current execution. |
| **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. |
| **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses `RAG` for storing entity information. |
| **Contextual Memory**| Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, and `EntityMemory`, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |
| **User Memory** | Stores user-specific information and preferences, enhancing personalization and user experience. |

## How Memory Systems Empower Agents

1. **Contextual Awareness**: With short-term and contextual memory, agents gain the ability to maintain context over a conversation or task sequence, leading to more coherent and relevant responses.

2. **Experience Accumulation**: Long-term memory allows agents to accumulate experiences, learning from past actions to improve future decision-making and problem-solving.

3. **Entity Understanding**: By maintaining entity memory, agents can recognize and remember key entities, enhancing their ability to process and interact with complex information.

## Implementing Memory in Your Crew

When configuring a crew, you can enable and customize each memory component to suit the crew's objectives and the nature of tasks it will perform.
The memory system is disabled by default; activate it by setting `memory=True` in the crew configuration.
Memory uses OpenAI embeddings by default, but you can change this by setting `embedder` to a different model.
You can also supply your own memory instances when initializing the crew.

Each memory type uses a different storage implementation:

- **Short-Term Memory**: Uses Chroma for RAG (Retrieval-Augmented Generation) with configurable embeddings
- **Long-Term Memory**: Uses SQLite3 for persistent storage of task results and metadata
- **Entity Memory**: Uses either RAG storage (default) or Mem0 for entity information
- **User Memory**: Available through Mem0 integration for personalized experiences

The data storage files are saved in a platform-specific location determined by the appdirs package.
You can override the storage location with the **CREWAI_STORAGE_DIR** environment variable, as shown below.

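For example, to keep all memory databases under a project-local directory (the path is illustrative), export the variable before running your crew:

```bash
# Store all CrewAI memory files under ./my_crew_storage instead of the appdirs default
export CREWAI_STORAGE_DIR="./my_crew_storage"
```
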
### Storage Implementation Details

#### Short-Term Memory
- Default: ChromaDB with RAG
- Configurable embeddings (OpenAI, Ollama, Google AI, etc.)
- Supports custom embedding functions
- Optional Mem0 integration for enhanced capabilities

#### Long-Term Memory
- SQLite3 storage with a structured schema
- Stores task descriptions, metadata, timestamps, and quality scores
- Supports querying by task description with configurable limits
- Includes error handling and reset capabilities

#### Entity Memory
- Default: RAG storage with ChromaDB
- Optional Mem0 integration
- Structured entity storage (name, type, description)
- Supports metadata and relationship mapping

#### User Memory
- Requires Mem0 integration
- Stores user preferences and interaction history
- Supports personalized context building
- Configurable through `memory_config`

### Example: Configuring Memory for a Crew

```python Code
from crewai import Crew, Agent, Task, Process

# Assemble your crew with memory capabilities
my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True
)
```

### Example: Use Custom Memory Instances, e.g., FAISS as the VectorDB

```python Code
from crewai import Crew, Agent, Task, Process

# Assemble your crew with memory capabilities
my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    long_term_memory=EnhanceLongTermMemory(
        storage=LTMSQLiteStorage(
            db_path="/my_data_dir/my_crew1/long_term_memory_storage.db"
        )
    ),
    short_term_memory=EnhanceShortTermMemory(
        storage=CustomRAGStorage(
            crew_name="my_crew",
            storage_type="short_term",
            data_dir="//my_data_dir",
            model=embedder["model"],
            dimension=embedder["dimension"],
        ),
    ),
    entity_memory=EnhanceEntityMemory(
        storage=CustomRAGStorage(
            crew_name="my_crew",
            storage_type="entities",
            data_dir="//my_data_dir",
            model=embedder["model"],
            dimension=embedder["dimension"],
        ),
    ),
    verbose=True,
)
```

## Integrating Mem0 Provider

[Mem0](https://mem0.ai/) is a self-improving memory layer for LLM applications that can enhance all memory types in CrewAI. It provides advanced features for storing and retrieving contextual information.

### Configuration

To use Mem0, you'll need:

1. An API key from the [Mem0 Dashboard](https://app.mem0.ai/dashboard/api-keys)
2. The `mem0ai` package installed: `pip install mem0ai`

You can configure Mem0 in two ways:

1. **Environment Variable**:
```bash
export MEM0_API_KEY="your-api-key"
```

2. **Memory Config**:
```python
memory_config = {
    "provider": "mem0",
    "config": {
        "api_key": "your-api-key",
        "user_id": "user123"  # Required for user memory
    }
}
```

### Memory Type Support

Mem0 can be used with all memory types:
- **Short-Term Memory**: Enhanced context retention
- **Long-Term Memory**: Improved task history storage
- **Entity Memory**: Better entity relationship tracking
- **User Memory**: Personalized user preferences and history

```python Code
import os

from crewai import Crew, Process
from mem0 import MemoryClient

# Set environment variables for Mem0
os.environ["MEM0_API_KEY"] = "m0-xx"

# Step 1: Record preferences based on past conversation or user input
client = MemoryClient()
messages = [
    {"role": "user", "content": "Hi there! I'm planning a vacation and could use some advice."},
    {"role": "assistant", "content": "Hello! I'd be happy to help with your vacation planning. What kind of destination do you prefer?"},
    {"role": "user", "content": "I am more of a beach person than a mountain person."},
    {"role": "assistant", "content": "That's interesting. Do you like hotels or Airbnb?"},
    {"role": "user", "content": "I like Airbnb more."},
]
client.add(messages, user_id="john")

# Step 2: Create a Crew with User Memory
crew = Crew(
    agents=[...],
    tasks=[...],
    verbose=True,
    process=Process.sequential,
    memory=True,
    memory_config={
        "provider": "mem0",
        "config": {"user_id": "john"},
    },
)
```

## Memory Interface Details

When implementing custom memory storage, be aware of these interface requirements:

### Base Memory Class
```python
class Memory:
    def save(
        self,
        value: Any,
        metadata: Optional[Dict[str, Any]] = None,
        agent: Optional[str] = None,
    ) -> None:
        """Save data to memory with optional metadata and agent information."""
        pass

    def search(
        self,
        query: str,
        limit: int = 3,
        score_threshold: float = 0.35,
    ) -> List[Any]:
        """Search memory with a configurable limit and relevance threshold."""
        pass
```

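As a rough illustration of the interface shape above (not CrewAI's actual base class), the toy storage below keeps records in a plain Python list and substitutes a naive substring match for embedding-based similarity; the class name and internal fields are made up for the sketch.

```python
from typing import Any, Dict, List, Optional


class InMemoryStorage:
    """Toy example that mirrors the save/search interface described above."""

    def __init__(self) -> None:
        self._records: List[Dict[str, Any]] = []

    def save(
        self,
        value: Any,
        metadata: Optional[Dict[str, Any]] = None,
        agent: Optional[str] = None,
    ) -> None:
        # Keep the raw value plus any metadata/agent info for later retrieval.
        self._records.append({"value": value, "metadata": metadata or {}, "agent": agent})

    def search(
        self,
        query: str,
        limit: int = 3,
        score_threshold: float = 0.35,
    ) -> List[Any]:
        # A real implementation would rank by embedding similarity and apply
        # score_threshold; here a case-insensitive substring match stands in.
        matches = [r for r in self._records if query.lower() in str(r["value"]).lower()]
        return matches[:limit]
```
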
### Memory Type Specifics

1. **LongTermMemory**:
```python
class LongTermMemoryItem:
    task: str                      # Task description
    expected_output: str           # Expected task output
    metadata: Dict[str, Any]       # Additional metadata
    agent: Optional[str] = None    # Associated agent
    datetime: str                  # Timestamp
    quality: float                 # Task quality score (0-1)
```
- Saves task results with quality scores and timestamps
- Search returns historical task data ordered by date
- Note: Implementation has type hint differences from the base Memory class

2. **EntityMemory**:
```python
class EntityMemoryItem:
    name: str                      # Entity name
    type: str                      # Entity type
    description: str               # Entity description
    metadata: Dict[str, Any]       # Additional metadata
    agent: Optional[str] = None    # Associated agent
```
- Saves entity information with type and description
- Search supports entity relationship queries
- Note: Implementation has type hint differences from the base Memory class

3. **ShortTermMemory**:
```python
class ShortTermMemoryItem:
    data: Any                      # Memory content
    metadata: Dict[str, Any]       # Additional metadata
    agent: Optional[str] = None    # Associated agent
```
- Saves recent interactions with metadata
- Search supports semantic similarity
- Follows the base Memory class interface exactly

### Error Handling and Reset

Each memory type includes error handling and reset capabilities:

```python
# Reset short-term memory
try:
    crew.short_term_memory.reset()
except Exception as e:
    print(f"Error resetting short-term memory: {e}")

# Reset entity memory
try:
    crew.entity_memory.reset()
except Exception as e:
    print(f"Error resetting entity memory: {e}")

# Reset long-term memory
try:
    crew.long_term_memory.reset()
except Exception as e:
    print(f"Error resetting long-term memory: {e}")
```

Common error scenarios:
- Database connection issues
- File permission errors
- Storage initialization failures
- Embedding generation errors

### Implementation Notes

1. **Type Hint Considerations**:
   - `LongTermMemory.save()` expects a `LongTermMemoryItem`
   - `EntityMemory.save()` expects an `EntityMemoryItem`
   - `ShortTermMemory.save()` follows the base Memory interface

2. **Storage Reset Behavior**:
   - Short-term: Clears the ChromaDB collection
   - Long-term: Truncates the SQLite table
   - Entity: Clears entity storage
   - Mem0: Provider-specific reset

## Embedding Providers

CrewAI supports multiple embedding providers for RAG-based memory types:

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "openai",
        "config": {
            "model": "text-embedding-3-small"
        }
    }
)
```

Alternatively, you can pass an `OpenAIEmbeddingFunction` directly to the `embedder` parameter.

Example:
```python Code
import os

from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder=OpenAIEmbeddingFunction(api_key=os.getenv("OPENAI_API_KEY"), model_name="text-embedding-3-small"),
)
```

### Using Ollama embeddings

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "ollama",
        "config": {
            "model": "mxbai-embed-large"
        }
    }
)
```

### Using Google AI embeddings

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "google",
        "config": {
            "api_key": "<YOUR_API_KEY>",
            "model_name": "<model_name>"
        }
    }
)
```

### Using Azure OpenAI embeddings

```python Code
from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder=OpenAIEmbeddingFunction(
        api_key="YOUR_API_KEY",
        api_base="YOUR_API_BASE_PATH",
        api_type="azure",
        api_version="YOUR_API_VERSION",
        model_name="text-embedding-3-small"
    )
)
```

### Using Vertex AI embeddings

```python Code
from chromadb.utils.embedding_functions import GoogleVertexEmbeddingFunction
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder=GoogleVertexEmbeddingFunction(
        project_id="YOUR_PROJECT_ID",
        region="YOUR_REGION",
        api_key="YOUR_API_KEY",
        model_name="textembedding-gecko"
    )
)
```

### Using Cohere embeddings

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "cohere",
        "config": {
            "api_key": "YOUR_API_KEY",
            "model_name": "<model_name>"
        }
    }
)
```

### Using HuggingFace embeddings

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "huggingface",
        "config": {
            "api_url": "<api_url>",
        }
    }
)
```

### Using Watson embeddings

```python Code
from crewai import Crew, Agent, Task, Process

# Note: Ensure you have installed and imported `ibm_watsonx_ai` for Watson embeddings to work.

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "watson",
        "config": {
            "model": "<model_name>",
            "api_url": "<api_url>",
            "api_key": "<YOUR_API_KEY>",
            "project_id": "<YOUR_PROJECT_ID>",
        }
    }
)
```

### Resetting Memory

```shell
crewai reset-memories [OPTIONS]
```

#### Resetting Memory Options

| Option | Description | Type | Default |
| :----------------- | :------------------------------- | :------------- | :------ |
| `-l`, `--long` | Reset LONG TERM memory. | Flag (boolean) | False |
| `-s`, `--short` | Reset SHORT TERM memory. | Flag (boolean) | False |
| `-e`, `--entities` | Reset ENTITIES memory. | Flag (boolean) | False |
| `-k`, `--kickoff-outputs` | Reset LATEST KICKOFF TASK OUTPUTS. | Flag (boolean) | False |
| `-a`, `--all` | Reset ALL memories. | Flag (boolean) | False |

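For example (the flag combinations shown are illustrative), you can clear only specific stores or everything at once:

```shell
# Clear short-term and entity memories only
crewai reset-memories --short --entities

# Clear all memories, including the latest kickoff task outputs
crewai reset-memories --all
```
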
## Benefits of Using CrewAI's Memory System

- 🦾 **Adaptive Learning:** Crews become more efficient over time, adapting to new information and refining their approach to tasks.
- 🫡 **Enhanced Personalization:** Memory enables agents to remember user preferences and historical interactions, leading to personalized experiences.
- 🧠 **Improved Problem Solving:** Access to a rich memory store aids agents in making more informed decisions, drawing on past learnings and contextual insights.

## Conclusion

Integrating CrewAI's memory system into your projects is straightforward. By leveraging the provided memory components and configurations,
you can quickly empower your agents with the ability to remember, reason, and learn from their interactions, unlocking new levels of intelligence and capability.