---
title: Memory
description: Leveraging memory systems in the CrewAI framework to enhance agent capabilities.
icon: database
---

## Introduction to Memory Systems in CrewAI

The CrewAI framework includes a sophisticated memory system designed to significantly enhance the capabilities of AI agents.
This system comprises `short-term memory`, `long-term memory`, `entity memory`, and `contextual memory`, each serving a unique purpose in helping agents remember,
reason, and learn from past interactions.

## Memory System Components

| Component | Description |
| :------------------- | :---------------------------------------------------------------------------------------------------------------------- |
| **Short-Term Memory**| Temporarily stores recent interactions and outcomes using `RAG`, enabling agents to recall and use information relevant to their current context during the current execution. |
| **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. |
| **Entity Memory**    | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses `RAG` for storing entity information. |
| **Contextual Memory**| Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, and `EntityMemory`, aiding the coherence and relevance of agent responses over a sequence of tasks or a conversation. |
| **External Memory**  | Enables integration with external memory systems and providers (such as Mem0), allowing specialized memory storage and retrieval across different applications. Supports custom storage implementations for flexible memory management. |
| **User Memory**      | ⚠️ **DEPRECATED**: This component is deprecated and will be removed in a future version. Please use [External Memory](#using-external-memory) instead. |

## How Memory Systems Empower Agents

1. **Contextual Awareness**: With short-term and contextual memory, agents gain the ability to maintain context over a conversation or task sequence, leading to more coherent and relevant responses.

2. **Experience Accumulation**: Long-term memory allows agents to accumulate experiences, learning from past actions to improve future decision-making and problem-solving.

3. **Entity Understanding**: By maintaining entity memory, agents can recognize and remember key entities, enhancing their ability to process and interact with complex information.

## Implementing Memory in Your Crew

When configuring a crew, you can enable and customize each memory component to suit the crew's objectives and the nature of the tasks it will perform.
By default, the memory system is disabled; activate it by setting `memory=True` in the crew configuration.
The memory uses OpenAI embeddings by default, but you can change this by setting `embedder` to a different model.
You can also initialize the memory components with your own instances.

The `embedder` only applies to **Short-Term Memory**, which uses Chroma for RAG.
**Long-Term Memory** uses SQLite3 to store task results. Currently, there is no way to override these storage implementations.
The data storage files are saved to a platform-specific location found using the appdirs package,
and the storage location can be overridden using the **CREWAI_STORAGE_DIR** environment variable.

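For example, you can point all memory files at a directory of your choosing before the crew is created (the path below is illustrative):

```python
import os

# Redirect CrewAI's memory storage files to a custom directory.
# Set this before constructing the Crew; the path is illustrative.
os.environ["CREWAI_STORAGE_DIR"] = "./my_crew_storage"
```
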
### Example: Configuring Memory for a Crew

```python Code
from crewai import Crew, Agent, Task, Process

# Assemble your crew with memory capabilities
my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True
)
```

### Example: Using Custom Memory Instances (e.g., FAISS as the vector DB)

```python Code
from crewai import Crew, Process
from crewai.memory import LongTermMemory, ShortTermMemory, EntityMemory
from crewai.memory.storage.rag_storage import RAGStorage
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage

# Assemble your crew with memory capabilities
my_crew: Crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    # Long-term memory for persistent storage across sessions
    long_term_memory=LongTermMemory(
        storage=LTMSQLiteStorage(
            db_path="/my_crew1/long_term_memory_storage.db"
        )
    ),
    # Short-term memory for current context using RAG
    short_term_memory=ShortTermMemory(
        storage=RAGStorage(
            embedder_config={
                "provider": "openai",
                "config": {
                    "model": "text-embedding-3-small"
                }
            },
            type="short_term",
            path="/my_crew1/"
        )
    ),
    # Entity memory for tracking key information about entities
    entity_memory=EntityMemory(
        storage=RAGStorage(
            embedder_config={
                "provider": "openai",
                "config": {
                    "model": "text-embedding-3-small"
                }
            },
            type="entities",
            path="/my_crew1/"
        )
    ),
    verbose=True,
)
```

## Security Considerations

When configuring memory storage:
- Use environment variables for storage paths (e.g., `CREWAI_STORAGE_DIR`)
- Never hardcode sensitive information such as database credentials
- Consider access permissions for storage directories
- Use relative paths where possible to maintain portability

Example using environment variables:
```python
import os
from crewai import Crew
from crewai.memory import LongTermMemory
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage

# Configure the storage path using an environment variable
storage_path = os.getenv("CREWAI_STORAGE_DIR", "./storage")
crew = Crew(
    memory=True,
    long_term_memory=LongTermMemory(
        storage=LTMSQLiteStorage(
            db_path=f"{storage_path}/memory.db"
        )
    )
)
```

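If you prefer `pathlib` over string formatting, the same path can be built portably and the directory created up front (a standalone sketch, independent of CrewAI):

```python
import os
from pathlib import Path

# Resolve the storage directory, falling back to ./storage
storage_dir = Path(os.getenv("CREWAI_STORAGE_DIR", "./storage"))
storage_dir.mkdir(parents=True, exist_ok=True)  # ensure it exists
db_path = storage_dir / "memory.db"  # pass str(db_path) as db_path
```
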
## Configuration Examples

### Basic Memory Configuration
```python
from crewai import Crew

# Simple memory configuration
crew = Crew(memory=True)  # Uses default storage locations
```
Note that External Memory is not configured when `memory=True` is set, as CrewAI cannot infer which external memory provider suits your use case.

### Custom Storage Configuration
```python
from crewai import Crew
from crewai.memory import LongTermMemory
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage

# Configure custom storage paths
crew = Crew(
    memory=True,
    long_term_memory=LongTermMemory(
        storage=LTMSQLiteStorage(db_path="./memory.db")
    )
)
```

## Integrating Mem0 for Enhanced User Memory

[Mem0](https://mem0.ai/) is a self-improving memory layer for LLM applications, enabling personalized AI experiences.

### Using the Mem0 API platform

To include user-specific memory, get your API key [here](https://app.mem0.ai/dashboard/api-keys) and refer to the [docs](https://docs.mem0.ai/platform/quickstart#4-1-create-memories) for adding user preferences. In this case, `user_memory` is set to `MemoryClient` from Mem0.

```python Code
import os
from crewai import Crew, Process
from mem0 import MemoryClient

# Set environment variables for Mem0
os.environ["MEM0_API_KEY"] = "m0-xx"

# Step 1: Create a crew with user memory
crew = Crew(
    agents=[...],
    tasks=[...],
    verbose=True,
    process=Process.sequential,
    memory=True,
    memory_config={
        "provider": "mem0",
        "config": {"user_id": "john"},
        "user_memory": {}  # Set user_memory explicitly to a dictionary; a fix for this is in progress
    },
)
```

#### Additional Memory Configuration Options

If you want to access a specific organization and project, set the `org_id` and `project_id` parameters in the memory configuration.

```python Code
from crewai import Crew

crew = Crew(
    agents=[...],
    tasks=[...],
    verbose=True,
    memory=True,
    memory_config={
        "provider": "mem0",
        "config": {"user_id": "john", "org_id": "my_org_id", "project_id": "my_project_id"},
        "user_memory": {}  # Set user_memory explicitly to a dictionary; a fix for this is in progress
    },
)
```

### Using Local Mem0 Memory

To use local Mem0 memory with a custom configuration, set the `local_mem0_config` parameter in the config itself.
If both the `MEM0_API_KEY` environment variable and `local_mem0_config` are provided, the API platform takes priority over the local configuration.
See the Mem0 [local configuration docs](https://docs.mem0.ai/open-source/python-quickstart#run-mem0-locally) for details.
In this case, `user_memory` is set to `Memory` from Mem0.

```python Code
from crewai import Crew

# Local Mem0 configuration
config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "host": "localhost",
            "port": 6333
        }
    },
    "llm": {
        "provider": "openai",
        "config": {
            "api_key": "your-api-key",
            "model": "gpt-4"
        }
    },
    "embedder": {
        "provider": "openai",
        "config": {
            "api_key": "your-api-key",
            "model": "text-embedding-3-small"
        }
    },
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "neo4j+s://your-instance",
            "username": "neo4j",
            "password": "password"
        }
    },
    "history_db_path": "/path/to/history.db",
    "version": "v1.1",
    "custom_fact_extraction_prompt": "Optional custom prompt for fact extraction for memory",
    "custom_update_memory_prompt": "Optional custom prompt for update memory"
}

crew = Crew(
    agents=[...],
    tasks=[...],
    verbose=True,
    memory=True,
    memory_config={
        "provider": "mem0",
        "config": {"user_id": "john", "local_mem0_config": config},
        "user_memory": {}  # Set user_memory explicitly to a dictionary; a fix for this is in progress
    },
)
```

### Using External Memory

External Memory allows you to integrate external memory systems with your CrewAI applications. This is particularly useful when you want to use specialized memory providers or share memory across different applications.
Because the memory lives outside CrewAI, no default value can be provided for it, unlike Long-Term and Short-Term Memory.

#### Basic Usage with Mem0

The most common way to use External Memory is with Mem0 as the provider:

```python
import os
from crewai import Agent, Crew, Process, Task
from crewai.memory.external.external_memory import ExternalMemory

os.environ["MEM0_API_KEY"] = "YOUR-API-KEY"

agent = Agent(
    role="You are a helpful assistant",
    goal="Plan a vacation for the user",
    backstory="You are a helpful assistant that can plan a vacation for the user",
    verbose=True,
)
task = Task(
    description="Give things related to the user's vacation",
    expected_output="A plan for the vacation",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,
    process=Process.sequential,
    external_memory=ExternalMemory(
        embedder_config={"provider": "mem0", "config": {"user_id": "U-123"}}  # you can provide an entire Mem0 configuration
    ),
)

crew.kickoff(
    inputs={"question": "which destination is better for a beach vacation?"}
)
```

#### Using External Memory with Custom Storage

You can also create custom storage implementations for External Memory. Here's an example:

```python
from crewai import Agent, Crew, Process, Task
from crewai.memory.external.external_memory import ExternalMemory
from crewai.memory.storage.interface import Storage


class CustomStorage(Storage):
    def __init__(self):
        self.memories = []

    def save(self, value, metadata=None, agent=None):
        self.memories.append({"value": value, "metadata": metadata, "agent": agent})

    def search(self, query, limit=10, score_threshold=0.5):
        # Implement your search logic here
        return []

    def reset(self):
        self.memories = []


# Create external memory with custom storage
external_memory = ExternalMemory(
    storage=CustomStorage(),
    embedder_config={"provider": "mem0", "config": {"user_id": "U-123"}},
)

agent = Agent(
    role="You are a helpful assistant",
    goal="Plan a vacation for the user",
    backstory="You are a helpful assistant that can plan a vacation for the user",
    verbose=True,
)
task = Task(
    description="Give things related to the user's vacation",
    expected_output="A plan for the vacation",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,
    process=Process.sequential,
    external_memory=external_memory,
)

crew.kickoff(
    inputs={"question": "which destination is better for a beach vacation?"}
)
```

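The `search` stub above is where real retrieval logic would go. As a minimal, framework-free sketch (a plain class mirroring the save/search/reset interface shown above, not CrewAI's actual `Storage` base class), a naive keyword match could look like this:

```python
class InMemoryKeywordStorage:
    """Toy storage mirroring the save/search/reset interface above.

    Illustrative only: real implementations would use embeddings,
    not substring matching.
    """

    def __init__(self):
        self.memories = []

    def save(self, value, metadata=None, agent=None):
        self.memories.append({"value": value, "metadata": metadata, "agent": agent})

    def search(self, query, limit=10, score_threshold=0.5):
        # Score by the fraction of query words found in the stored value
        words = query.lower().split()
        scored = []
        for memory in self.memories:
            text = str(memory["value"]).lower()
            score = sum(word in text for word in words) / len(words)
            if score >= score_threshold:
                scored.append((score, memory))
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [memory for _, memory in scored[:limit]]

    def reset(self):
        self.memories = []


storage = InMemoryKeywordStorage()
storage.save("Beach vacations in Portugal are great in June")
storage.save("Ski trips need winter gear")
results = storage.search("beach vacation")  # matches only the first memory
```
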
## Additional Embedding Providers

### Using OpenAI embeddings (already the default)

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "openai",
        "config": {
            "model": "text-embedding-3-small"
        }
    }
)
```
Alternatively, you can pass an `OpenAIEmbeddingFunction` directly via the `custom` provider.

Example:
```python Code
from crewai import Crew, Agent, Task, Process
from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "custom",
        "config": {
            "embedder": OpenAIEmbeddingFunction(
                api_key="YOUR_API_KEY",
                model_name="text-embedding-3-small"
            )
        }
    }
)
```

### Using Ollama embeddings

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "ollama",
        "config": {
            "model": "mxbai-embed-large"
        }
    }
)
```

### Using Google AI embeddings

#### Prerequisites
Before using Google AI embeddings, ensure you have:
- Access to the Gemini API
- The necessary API keys and permissions

You will need to update your *pyproject.toml* dependencies:
```toml
dependencies = [
    "google-generativeai>=0.8.4",  # main version in January/2025 - crewai v0.100.0 and crewai-tools 0.33.0
    "crewai[tools]>=0.100.0,<1.0.0"
]
```

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "google",
        "config": {
            "api_key": "<YOUR_API_KEY>",
            "model": "<model_name>"
        }
    }
)
```

### Using Azure OpenAI embeddings

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "openai",
        "config": {
            "api_key": "YOUR_API_KEY",
            "api_base": "YOUR_API_BASE_PATH",
            "api_version": "YOUR_API_VERSION",
            "model_name": "text-embedding-3-small"
        }
    }
)
```

### Using Vertex AI embeddings

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "vertexai",
        "config": {
            "project_id": "YOUR_PROJECT_ID",
            "region": "YOUR_REGION",
            "api_key": "YOUR_API_KEY",
            "model_name": "textembedding-gecko"
        }
    }
)
```

### Using Cohere embeddings

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "cohere",
        "config": {
            "api_key": "YOUR_API_KEY",
            "model": "<model_name>"
        }
    }
)
```

### Using VoyageAI embeddings

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "voyageai",
        "config": {
            "api_key": "YOUR_API_KEY",
            "model": "<model_name>"
        }
    }
)
```

### Using HuggingFace embeddings

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "huggingface",
        "config": {
            "api_url": "<api_url>"
        }
    }
)
```

### Using Watson embeddings

```python Code
from crewai import Crew, Agent, Task, Process

# Note: Ensure you have installed and imported `ibm_watsonx_ai` for Watson embeddings to work.

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "watson",
        "config": {
            "model": "<model_name>",
            "api_url": "<api_url>",
            "api_key": "<YOUR_API_KEY>",
            "project_id": "<YOUR_PROJECT_ID>"
        }
    }
)
```

### Using Amazon Bedrock embeddings

```python Code
# Note: Ensure you have installed `boto3` for Bedrock embeddings to work.

import os
import boto3
from crewai import Crew, Agent, Task, Process

boto3_session = boto3.Session(
    region_name=os.environ.get("AWS_REGION_NAME"),
    aws_access_key_id=os.environ.get("AWS_ACCESS_KEY_ID"),
    aws_secret_access_key=os.environ.get("AWS_SECRET_ACCESS_KEY")
)

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    embedder={
        "provider": "bedrock",
        "config": {
            "session": boto3_session,
            "model": "amazon.titan-embed-text-v2:0",
            "vector_dimension": 1024
        }
    },
    verbose=True
)
```

### Adding a Custom Embedding Function

```python Code
from crewai import Crew, Agent, Task, Process
from chromadb import Documents, EmbeddingFunction, Embeddings

# Create a custom embedding function
class CustomEmbedder(EmbeddingFunction):
    def __call__(self, input: Documents) -> Embeddings:
        # Generate one embedding per input document
        return [[1.0, 2.0, 3.0] for _ in input]  # dummy embeddings

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "custom",
        "config": {
            "embedder": CustomEmbedder()
        }
    }
)
```

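Independent of CrewAI, the contract here is simply "list of documents in, one vector per document out". A deterministic, dependency-free stand-in (illustrative only; a real embedder captures semantics, this just hashes bytes) makes the expected shape concrete:

```python
import hashlib

def toy_embed(documents, dim=8):
    """Map each document to a deterministic pseudo-embedding.

    Illustrative only: output has the right shape (one fixed-length
    vector per document) but carries no semantic meaning.
    """
    vectors = []
    for doc in documents:
        digest = hashlib.sha256(doc.encode("utf-8")).digest()
        # Scale bytes into [0, 1) floats to mimic an embedding vector
        vectors.append([b / 256 for b in digest[:dim]])
    return vectors

embeddings = toy_embed(["hello", "world"])  # two vectors of length 8
```
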
### Resetting Memory via CLI

```shell
crewai reset-memories [OPTIONS]
```

#### Resetting Memory Options

| Option | Description | Type | Default |
| :----------------- | :------------------------------- | :------------- | :------ |
| `-l`, `--long` | Reset LONG TERM memory. | Flag (boolean) | False |
| `-s`, `--short` | Reset SHORT TERM memory. | Flag (boolean) | False |
| `-e`, `--entities` | Reset ENTITIES memory. | Flag (boolean) | False |
| `-k`, `--kickoff-outputs` | Reset LATEST KICKOFF TASK OUTPUTS. | Flag (boolean) | False |
| `-kn`, `--knowledge` | Reset KNOWLEDGE storage. | Flag (boolean) | False |
| `-akn`, `--agent-knowledge` | Reset AGENT KNOWLEDGE storage. | Flag (boolean) | False |
| `-a`, `--all` | Reset ALL memories. | Flag (boolean) | False |

Note: To use the CLI command, your crew must be defined in a file called `crew.py` in the same directory.

### Resetting Memory via the Crew Object

```python
from crewai import Crew, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
)

my_crew.reset_memories(command_type="all")  # Resets all the memory
```

#### Resetting Memory Options

| Command Type | Description |
| :----------------- | :------------------------------- |
| `long` | Reset LONG TERM memory. |
| `short` | Reset SHORT TERM memory. |
| `entities` | Reset ENTITIES memory. |
| `kickoff_outputs` | Reset LATEST KICKOFF TASK OUTPUTS. |
| `knowledge` | Reset KNOWLEDGE memory. |
| `agent_knowledge` | Reset AGENT KNOWLEDGE memory. |
| `all` | Reset ALL memories. |

## Benefits of Using CrewAI's Memory System

- 🦾 **Adaptive Learning:** Crews become more efficient over time, adapting to new information and refining their approach to tasks.
- 🫡 **Enhanced Personalization:** Memory enables agents to remember user preferences and historical interactions, leading to personalized experiences.
- 🧠 **Improved Problem Solving:** Access to a rich memory store aids agents in making more informed decisions, drawing on past learnings and contextual insights.

## Conclusion

Integrating CrewAI's memory system into your projects is straightforward. By leveraging the provided memory components and configurations,
you can quickly empower your agents with the ability to remember, reason, and learn from their interactions, unlocking new levels of intelligence and capability.