diff --git a/README.md b/README.md index 57cb26e39..279c35017 100644 --- a/README.md +++ b/README.md @@ -1,7 +1,6 @@
-![Logo of CrewAI](./docs/crewai_logo.png) - +![Logo of CrewAI](./docs/images/crewai_logo.png)
@@ -27,6 +26,7 @@ that require secure, scalable, and easy-to-manage agent-driven automation. You can try one part of the suite, the [Crew Control Plane](https://app.crewai.com), for free. ## Crew Control Plane Key Features: + - **Tracing & Observability**: Monitor and track your AI agents and workflows in real-time, including metrics, logs, and traces. - **Unified Control Plane**: A centralized platform for managing, monitoring, and scaling your AI agents and workflows. - **Seamless Integrations**: Easily connect with existing enterprise systems, data sources, and cloud infrastructure. @@ -73,7 +73,7 @@ intelligent automations. ## Why CrewAI?
- CrewAI Logo + CrewAI Logo
CrewAI unlocks the true potential of multi-agent automation, delivering the best-in-class combination of speed, flexibility, and control with either Crews of AI Agents or Flows of Events: @@ -91,6 +91,7 @@ CrewAI empowers developers and enterprises to confidently build intelligent auto ### Learning Resources Learn CrewAI through our comprehensive courses: + - [Multi AI Agent Systems with CrewAI](https://www.deeplearning.ai/short-courses/multi-ai-agent-systems-with-crewai/) - Master the fundamentals of multi-agent systems - [Practical Multi AI Agents and Advanced Use Cases](https://www.deeplearning.ai/short-courses/practical-multi-ai-agents-and-advanced-use-cases-with-crewai/) - Deep dive into advanced implementations @@ -99,18 +100,20 @@ Learn CrewAI through our comprehensive courses: CrewAI offers two powerful, complementary approaches that work seamlessly together to build sophisticated AI applications: 1. **Crews**: Teams of AI agents with true autonomy and agency, working together to accomplish complex tasks through role-based collaboration. Crews enable: + - Natural, autonomous decision-making between agents - Dynamic task delegation and collaboration - Specialized roles with defined goals and expertise - Flexible problem-solving approaches - 2. **Flows**: Production-ready, event-driven workflows that deliver precise control over complex automations. Flows provide: + - Fine-grained control over execution paths for real-world scenarios - Secure, consistent state management between tasks - Clean integration of AI agents with production Python code - Conditional branching for complex business logic The true power of CrewAI emerges when combining Crews and Flows. 
This synergy allows you to: + - Build complex, production-grade applications - Balance autonomy with precise control - Handle sophisticated real-world scenarios @@ -129,11 +132,13 @@ First, install CrewAI: ```shell pip install crewai ``` + If you want to install the 'crewai' package along with its optional features that include additional tools for agents, you can do so by using the following command: ```shell pip install 'crewai[tools]' ``` + The command above installs the basic package and also adds extra components which require more dependencies to function. ### Troubleshooting Dependencies @@ -143,10 +148,11 @@ If you encounter issues during installation or usage, here are some common solut #### Common Issues 1. **ModuleNotFoundError: No module named 'tiktoken'** + - Install tiktoken explicitly: `pip install 'crewai[embeddings]'` - If using embedchain or other tools: `pip install 'crewai[tools]'` - 2. **Failed building wheel for tiktoken** + - Ensure Rust compiler is installed (see installation steps above) - For Windows: Verify Visual C++ Build Tools are installed - Try upgrading pip: `pip install --upgrade pip` @@ -407,7 +413,8 @@ You can test different real-life examples of AI crews in the [CrewAI-examples re CrewAI's power truly shines when combining Crews with Flows to create sophisticated automation pipelines. CrewAI flows support logical operators like `or_` and `and_` to combine multiple conditions. This can be used with `@start`, `@listen`, or `@router` decorators to create complex triggering conditions. -- `or_`: Triggers when any of the specified conditions are met. + +- `or_`: Triggers when any of the specified conditions are met. - `and_`: Triggers when all of the specified conditions are met. Here's how you can orchestrate multiple Crews within a Flow: @@ -495,6 +502,7 @@ class AdvancedAnalysisFlow(Flow[MarketState]): ``` This example demonstrates how to: + 1. Use Python code for basic data operations 2. 
Create and execute Crews as steps in your workflow 3. Use Flow decorators to manage the sequence of operations @@ -515,7 +523,6 @@ Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM- *P.S. CrewAI demonstrates significant performance advantages over LangGraph, executing 5.76x faster in certain cases like this QA task example ([see comparison](https://github.com/crewAIInc/crewAI-examples/tree/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/QA%20Agent)) while achieving higher evaluation scores with faster completion times in certain coding tasks, like in this example ([detailed analysis](https://github.com/crewAIInc/crewAI-examples/blob/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/Coding%20Assistant/coding_assistant_eval.ipynb)).* - **Autogen**: While Autogen excels at creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows. - - **ChatDev**: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications. ## Contribution @@ -606,10 +613,10 @@ Users can opt-in to Further Telemetry, sharing the complete telemetry data by se CrewAI is released under the [MIT License](https://github.com/crewAIInc/crewAI/blob/main/LICENSE). 
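The `or_`/`and_` trigger semantics described earlier can be illustrated with a small, framework-free sketch. This is plain Python for illustration only, not CrewAI's actual implementation (the real helpers live alongside the `@start`/`@listen`/`@router` decorators, and the event names here are hypothetical):

```python
# Plain-Python sketch of the or_/and_ trigger semantics:
# or_ fires when ANY named event has occurred, and_ only when ALL have.

def or_(*events):
    """Return a predicate that is true when any of the events has fired."""
    return lambda fired: any(event in fired for event in events)

def and_(*events):
    """Return a predicate that is true only when all of the events have fired."""
    return lambda fired: all(event in fired for event in events)

# Hypothetical event names; so far only one upstream step has finished.
fired_events = {"fetch_market_data"}

either = or_("fetch_market_data", "fetch_user_config")
both = and_("fetch_market_data", "fetch_user_config")

print(either(fired_events))  # True: at least one of the two events fired
print(both(fired_events))    # False: not all events have fired yet
```

In a real Flow, the same conditions are attached to a step by passing them to a decorator, e.g. `@listen(or_(step_a, step_b))`, so the decorated method runs as soon as either upstream step completes.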
- ## Frequently Asked Questions (FAQ) ### General + - [What exactly is CrewAI?](#q-what-exactly-is-crewai) - [How do I install CrewAI?](#q-how-do-i-install-crewai) - [Does CrewAI depend on LangChain?](#q-does-crewai-depend-on-langchain) @@ -617,6 +624,7 @@ CrewAI is released under the [MIT License](https://github.com/crewAIInc/crewAI/b - [Does CrewAI collect data from users?](#q-does-crewai-collect-data-from-users) ### Features and Capabilities + - [Can CrewAI handle complex use cases?](#q-can-crewai-handle-complex-use-cases) - [Can I use CrewAI with local AI models?](#q-can-i-use-crewai-with-local-ai-models) - [What makes Crews different from Flows?](#q-what-makes-crews-different-from-flows) @@ -624,84 +632,110 @@ CrewAI is released under the [MIT License](https://github.com/crewAIInc/crewAI/b - [Does CrewAI support fine-tuning or training custom models?](#q-does-crewai-support-fine-tuning-or-training-custom-models) ### Resources and Community + - [Where can I find real-world CrewAI examples?](#q-where-can-i-find-real-world-crewai-examples) - [How can I contribute to CrewAI?](#q-how-can-i-contribute-to-crewai) ### Enterprise Features + - [What additional features does CrewAI Enterprise offer?](#q-what-additional-features-does-crewai-enterprise-offer) - [Is CrewAI Enterprise available for cloud and on-premise deployments?](#q-is-crewai-enterprise-available-for-cloud-and-on-premise-deployments) - [Can I try CrewAI Enterprise for free?](#q-can-i-try-crewai-enterprise-for-free) - - ### Q: What exactly is CrewAI? + A: CrewAI is a standalone, lean, and fast Python framework built specifically for orchestrating autonomous AI agents. Unlike frameworks like LangChain, CrewAI does not rely on external dependencies, making it leaner, faster, and simpler. ### Q: How do I install CrewAI? + A: Install CrewAI using pip: + ```shell pip install crewai ``` + For additional tools, use: + ```shell pip install 'crewai[tools]' ``` + ### Q: Does CrewAI depend on LangChain? + A: No. 
CrewAI is built entirely from the ground up, with no dependencies on LangChain or other agent frameworks. This ensures a lean, fast, and flexible experience. ### Q: Can CrewAI handle complex use cases? + A: Yes. CrewAI excels at both simple and highly complex real-world scenarios, offering deep customization options at both high and low levels, from internal prompts to sophisticated workflow orchestration. ### Q: Can I use CrewAI with local AI models? + A: Absolutely! CrewAI supports various language models, including local ones. Tools like Ollama and LM Studio allow seamless integration. Check the [LLM Connections documentation](https://docs.crewai.com/how-to/LLM-Connections/) for more details. ### Q: What makes Crews different from Flows? + A: Crews provide autonomous agent collaboration, ideal for tasks requiring flexible decision-making and dynamic interaction. Flows offer precise, event-driven control, ideal for managing detailed execution paths and secure state management. You can seamlessly combine both for maximum effectiveness. ### Q: How is CrewAI better than LangChain? + A: CrewAI provides simpler, more intuitive APIs, faster execution speeds, more reliable and consistent results, robust documentation, and an active community—addressing common criticisms and limitations associated with LangChain. ### Q: Is CrewAI open-source? + A: Yes, CrewAI is open-source and actively encourages community contributions and collaboration. ### Q: Does CrewAI collect data from users? + A: CrewAI collects anonymous telemetry data strictly for improvement purposes. Sensitive data such as prompts, tasks, or API responses are never collected unless explicitly enabled by the user. ### Q: Where can I find real-world CrewAI examples? + A: Check out practical examples in the [CrewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), covering use cases like trip planners, stock analysis, and job postings. ### Q: How can I contribute to CrewAI? 
+ A: Contributions are warmly welcomed! Fork the repository, create your branch, implement your changes, and submit a pull request. See the Contribution section of the README for detailed guidelines. ### Q: What additional features does CrewAI Enterprise offer? + A: CrewAI Enterprise provides advanced features such as a unified control plane, real-time observability, secure integrations, advanced security, actionable insights, and dedicated 24/7 enterprise support. ### Q: Is CrewAI Enterprise available for cloud and on-premise deployments? + A: Yes, CrewAI Enterprise supports both cloud-based and on-premise deployment options, allowing enterprises to meet their specific security and compliance requirements. ### Q: Can I try CrewAI Enterprise for free? + A: Yes, you can explore part of the CrewAI Enterprise Suite by accessing the [Crew Control Plane](https://app.crewai.com) for free. ### Q: Does CrewAI support fine-tuning or training custom models? + A: Yes, CrewAI can integrate with custom-trained or fine-tuned models, allowing you to enhance your agents with domain-specific knowledge and accuracy. ### Q: Can CrewAI agents interact with external tools and APIs? + A: Absolutely! CrewAI agents can easily integrate with external tools, APIs, and databases, empowering them to leverage real-world data and resources. ### Q: Is CrewAI suitable for production environments? + A: Yes, CrewAI is explicitly designed with production-grade standards, ensuring reliability, stability, and scalability for enterprise deployments. ### Q: How scalable is CrewAI? + A: CrewAI is highly scalable, supporting simple automations and large-scale enterprise workflows involving numerous agents and complex tasks simultaneously. ### Q: Does CrewAI offer debugging and monitoring tools? + A: Yes, CrewAI Enterprise includes advanced debugging, tracing, and real-time observability features, simplifying the management and troubleshooting of your automations. 
### Q: What programming languages does CrewAI support? + A: CrewAI is primarily Python-based but easily integrates with services and APIs written in any programming language through its flexible API integration capabilities. ### Q: Does CrewAI offer educational resources for beginners? + A: Yes, CrewAI provides extensive beginner-friendly tutorials, courses, and documentation through learn.crewai.com, supporting developers at all skill levels. ### Q: Can CrewAI automate human-in-the-loop workflows? + A: Yes, CrewAI fully supports human-in-the-loop workflows, allowing seamless collaboration between human experts and AI agents for enhanced decision-making. diff --git a/docs/concepts/memory.mdx b/docs/concepts/memory.mdx index df9e34ac9..ec9e76cae 100644 --- a/docs/concepts/memory.mdx +++ b/docs/concepts/memory.mdx @@ -6,9 +6,11 @@ icon: database ## Overview -The crewAI framework introduces a sophisticated memory system designed to significantly enhance the capabilities of AI agents. -This system comprises `short-term memory`, `long-term memory`, `entity memory`, and `contextual memory`, each serving a unique purpose in aiding agents to remember, -reason, and learn from past interactions. +The CrewAI framework provides a sophisticated memory system designed to significantly enhance AI agent capabilities. CrewAI offers **three distinct memory approaches** that serve different use cases: + +1. **Basic Memory System** - Built-in short-term, long-term, and entity memory +2. **User Memory** - User-specific memory with Mem0 integration (legacy approach) +3. **External Memory** - Standalone external memory providers (new approach) ## Memory System Components @@ -18,709 +20,360 @@ reason, and learn from past interactions. | **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. 
| | **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses `RAG` for storing entity information. | | **Contextual Memory**| Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, and `EntityMemory`, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. | -| **External Memory** | Enables integration with external memory systems and providers (like Mem0), allowing for specialized memory storage and retrieval across different applications. Supports custom storage implementations for flexible memory management. | -| **User Memory** | ⚠️ **DEPRECATED**: This component is deprecated and will be removed in a future version. Please use [External Memory](#using-external-memory) instead. | -## How Memory Systems Empower Agents +## 1. Basic Memory System (Recommended) -1. **Contextual Awareness**: With short-term and contextual memory, agents gain the ability to maintain context over a conversation or task sequence, leading to more coherent and relevant responses. +The simplest and most commonly used approach. Enable memory for your crew with a single parameter: -2. **Experience Accumulation**: Long-term memory allows agents to accumulate experiences, learning from past actions to improve future decision-making and problem-solving. - -3. **Entity Understanding**: By maintaining entity memory, agents can recognize and remember key entities, enhancing their ability to process and interact with complex information. - -## Implementing Memory in Your Crew - -When configuring a crew, you can enable and customize each memory component to suit the crew's objectives and the nature of tasks it will perform. -By default, the memory system is disabled, and you can ensure it is active by setting `memory=True` in the crew configuration. 
-The memory will use OpenAI embeddings by default, but you can change it by setting `embedder` to a different model. -It's also possible to initialize the memory instance with your own instance. - -The 'embedder' only applies to **Short-Term Memory** which uses Chroma for RAG. -The **Long-Term Memory** uses SQLite3 to store task results. Currently, there is no way to override these storage implementations. -The data storage files are saved into a platform-specific location found using the appdirs package, -and the name of the project can be overridden using the **CREWAI_STORAGE_DIR** environment variable. - -### Example: Configuring Memory for a Crew - -```python Code +### Quick Start +```python from crewai import Crew, Agent, Task, Process -# Assemble your crew with memory capabilities -my_crew = Crew( +# Enable basic memory system +crew = Crew( agents=[...], tasks=[...], process=Process.sequential, - memory=True, + memory=True, # Enables short-term, long-term, and entity memory verbose=True ) ``` -### Example: Use Custom Memory Instances e.g FAISS as the VectorDB +### How It Works +- **Short-Term Memory**: Uses ChromaDB with RAG for current context +- **Long-Term Memory**: Uses SQLite3 to store task results across sessions +- **Entity Memory**: Uses RAG to track entities (people, places, concepts) +- **Storage Location**: Platform-specific location via `appdirs` package +- **Custom Storage Directory**: Set `CREWAI_STORAGE_DIR` environment variable -```python Code -from crewai import Crew, Process -from crewai.memory import LongTermMemory, ShortTermMemory, EntityMemory -from crewai.memory.storage.rag_storage import RAGStorage -from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage -from typing import List, Optional - -# Assemble your crew with memory capabilities -my_crew: Crew = Crew( - agents = [...], - tasks = [...], - process = Process.sequential, - memory = True, - # Long-term memory for persistent storage across sessions - long_term_memory = 
LongTermMemory( - storage=LTMSQLiteStorage( - db_path="/my_crew1/long_term_memory_storage.db" - ) - ), - # Short-term memory for current context using RAG - short_term_memory = ShortTermMemory( - storage = RAGStorage( - embedder_config={ - "provider": "openai", - "config": { - "model": 'text-embedding-3-small' - } - }, - type="short_term", - path="/my_crew1/" - ) - ), - ), - # Entity memory for tracking key information about entities - entity_memory = EntityMemory( - storage=RAGStorage( - embedder_config={ - "provider": "openai", - "config": { - "model": 'text-embedding-3-small' - } - }, - type="short_term", - path="/my_crew1/" - ) - ), - verbose=True, +### Custom Embedder Configuration +```python +crew = Crew( + agents=[...], + tasks=[...], + memory=True, + embedder={ + "provider": "openai", + "config": { + "model": "text-embedding-3-small" + } + } ) ``` -## Security Considerations - -When configuring memory storage: -- Use environment variables for storage paths (e.g., `CREWAI_STORAGE_DIR`) -- Never hardcode sensitive information like database credentials -- Consider access permissions for storage directories -- Use relative paths when possible to maintain portability - -Example using environment variables: +### Custom Storage Paths ```python import os from crewai import Crew from crewai.memory import LongTermMemory from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage -# Configure storage path using environment variable -storage_path = os.getenv("CREWAI_STORAGE_DIR", "./storage") +# Configure custom storage location crew = Crew( memory=True, long_term_memory=LongTermMemory( storage=LTMSQLiteStorage( - db_path="{storage_path}/memory.db".format(storage_path=storage_path) + db_path=os.getenv("CREWAI_STORAGE_DIR", "./storage") + "/memory.db" ) ) ) ``` -## Configuration Examples +## 2. 
User Memory with Mem0 (Legacy) -### Basic Memory Configuration -```python -from crewai import Crew -from crewai.memory import LongTermMemory + +**Legacy Approach**: While fully functional, this approach is considered legacy. For new projects requiring user-specific memory, consider using External Memory instead. + -# Simple memory configuration -crew = Crew(memory=True) # Uses default storage locations -``` -Note that External Memory won’t be defined when `memory=True` is set, as we can’t infer which external memory would be suitable for your case +User Memory integrates with [Mem0](https://mem0.ai/) to provide user-specific memory that persists across sessions and integrates with the crew's contextual memory system. -### Custom Storage Configuration -```python -from crewai import Crew -from crewai.memory import LongTermMemory -from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage - -# Configure custom storage paths -crew = Crew( - memory=True, - long_term_memory=LongTermMemory( - storage=LTMSQLiteStorage(db_path="./memory.db") - ) -) +### Prerequisites +```bash +pip install mem0ai ``` -## Integrating Mem0 for Enhanced User Memory - -[Mem0](https://mem0.ai/) is a self-improving memory layer for LLM applications, enabling personalized AI experiences. - - -### Using Mem0 API platform - -To include user-specific memory you can get your API key [here](https://app.mem0.ai/dashboard/api-keys) and refer the [docs](https://docs.mem0.ai/platform/quickstart#4-1-create-memories) for adding user preferences. In this case `user_memory` is set to `MemoryClient` from mem0. 
- - -```python Code +### Mem0 Cloud Configuration +```python import os from crewai import Crew, Process -from mem0 import MemoryClient -# Set environment variables for Mem0 -os.environ["MEM0_API_KEY"] = "m0-xx" - -# Step 1: Create a Crew with User Memory +# Set your Mem0 API key +os.environ["MEM0_API_KEY"] = "m0-your-api-key" crew = Crew( agents=[...], tasks=[...], - verbose=True, - process=Process.sequential, - memory=True, + memory=True, # Required for contextual memory integration memory_config={ "provider": "mem0", "config": {"user_id": "john"}, - "user_memory" : {} #Set user_memory explicitly to a dictionary, we are working on this issue. + "user_memory": {} # Required - triggers user memory initialization }, + process=Process.sequential, + verbose=True ) ``` -#### Additional Memory Configuration Options -If you want to access a specific organization and project, you can set the `org_id` and `project_id` parameters in the memory configuration. - -```python Code -from crewai import Crew - +### Advanced Mem0 Configuration +```python crew = Crew( agents=[...], tasks=[...], - verbose=True, memory=True, memory_config={ "provider": "mem0", - "config": {"user_id": "john", "org_id": "my_org_id", "project_id": "my_project_id"}, - "user_memory" : {} #Set user_memory explicitly to a dictionary, we are working on this issue. - }, + "config": { + "user_id": "john", + "org_id": "my_org_id", # Optional + "project_id": "my_project_id", # Optional + "api_key": "custom-api-key" # Optional - overrides env var + }, + "user_memory": {} + } ) ``` -### Using Local Mem0 memory -If you want to use local mem0 memory, with a custom configuration, you can set a parameter `local_mem0_config` in the config itself. -If both os environment key is set and local_mem0_config is given, the API platform takes higher priority over the local configuration. -Check [this](https://docs.mem0.ai/open-source/python-quickstart#run-mem0-locally) mem0 local configuration docs for more understanding. 
-In this case `user_memory` is set to `Memory` from mem0. - - -```python Code -from crewai import Crew - - -#local mem0 config -config = { - "vector_store": { - "provider": "qdrant", - "config": { - "host": "localhost", - "port": 6333 - } - }, - "llm": { - "provider": "openai", - "config": { - "api_key": "your-api-key", - "model": "gpt-4" - } - }, - "embedder": { - "provider": "openai", - "config": { - "api_key": "your-api-key", - "model": "text-embedding-3-small" - } - }, - "graph_store": { - "provider": "neo4j", - "config": { - "url": "neo4j+s://your-instance", - "username": "neo4j", - "password": "password" - } - }, - "history_db_path": "/path/to/history.db", - "version": "v1.1", - "custom_fact_extraction_prompt": "Optional custom prompt for fact extraction for memory", - "custom_update_memory_prompt": "Optional custom prompt for update memory" -} - +### Local Mem0 Configuration +```python crew = Crew( agents=[...], tasks=[...], - verbose=True, memory=True, memory_config={ "provider": "mem0", - "config": {"user_id": "john", 'local_mem0_config': config}, - "user_memory" : {} #Set user_memory explicitly to a dictionary, we are working on this issue. - }, + "config": { + "user_id": "john", + "local_mem0_config": { + "vector_store": { + "provider": "qdrant", + "config": {"host": "localhost", "port": 6333} + }, + "llm": { + "provider": "openai", + "config": {"api_key": "your-api-key", "model": "gpt-4"} + }, + "embedder": { + "provider": "openai", + "config": {"api_key": "your-api-key", "model": "text-embedding-3-small"} + } + } + }, + "user_memory": {} + } ) ``` -### Using External Memory +## 3. External Memory (New Approach) -External Memory is a powerful feature that allows you to integrate external memory systems with your CrewAI applications. This is particularly useful when you want to use specialized memory providers or maintain memory across different applications. 
-Since it’s an external memory, we’re not able to add a default value to it - unlike with Long Term and Short Term memory. - -#### Basic Usage with Mem0 - -The most common way to use External Memory is with Mem0 as the provider: +External Memory provides a standalone memory system that operates independently from the crew's built-in memory. This is ideal for specialized memory providers or cross-application memory sharing. +### Basic External Memory with Mem0 ```python import os from crewai import Agent, Crew, Process, Task from crewai.memory.external.external_memory import ExternalMemory -os.environ["MEM0_API_KEY"] = "YOUR-API-KEY" +os.environ["MEM0_API_KEY"] = "your-api-key" -agent = Agent( - role="You are a helpful assistant", - goal="Plan a vacation for the user", - backstory="You are a helpful assistant that can plan a vacation for the user", - verbose=True, -) -task = Task( - description="Give things related to the user's vacation", - expected_output="A plan for the vacation", - agent=agent, +# Create external memory instance +external_memory = ExternalMemory( + embedder_config={ + "provider": "mem0", + "config": {"user_id": "U-123"} + } ) crew = Crew( - agents=[agent], - tasks=[task], - verbose=True, + agents=[...], + tasks=[...], + external_memory=external_memory, # Separate from basic memory process=Process.sequential, - external_memory=ExternalMemory( - embedder_config={"provider": "mem0", "config": {"user_id": "U-123"}} # you can provide an entire Mem0 configuration - ), -) - -crew.kickoff( - inputs={"question": "which destination is better for a beach vacation?"} + verbose=True ) ``` -#### Using External Memory with Custom Storage - -You can also create custom storage implementations for External Memory. 
Here's an example of how to create a custom storage: - +### Custom Storage Implementation ```python -from crewai import Agent, Crew, Process, Task from crewai.memory.external.external_memory import ExternalMemory from crewai.memory.storage.interface import Storage - class CustomStorage(Storage): def __init__(self): self.memories = [] def save(self, value, metadata=None, agent=None): - self.memories.append({"value": value, "metadata": metadata, "agent": agent}) + self.memories.append({ + "value": value, + "metadata": metadata, + "agent": agent + }) def search(self, query, limit=10, score_threshold=0.5): # Implement your search logic here - return [] + return [m for m in self.memories if query.lower() in str(m["value"]).lower()] def reset(self): self.memories = [] - -# Create external memory with custom storage -external_memory = ExternalMemory( - storage=CustomStorage(), - embedder_config={"provider": "mem0", "config": {"user_id": "U-123"}}, -) - -agent = Agent( - role="You are a helpful assistant", - goal="Plan a vacation for the user", - backstory="You are a helpful assistant that can plan a vacation for the user", - verbose=True, -) -task = Task( - description="Give things related to the user's vacation", - expected_output="A plan for the vacation", - agent=agent, -) +# Use custom storage +external_memory = ExternalMemory(storage=CustomStorage()) crew = Crew( - agents=[agent], - tasks=[task], - verbose=True, - process=Process.sequential, - external_memory=external_memory, -) - -crew.kickoff( - inputs={"question": "which destination is better for a beach vacation?"} + agents=[...], + tasks=[...], + external_memory=external_memory ) ``` +## Memory System Comparison -## Additional Embedding Providers +| Feature | Basic Memory | User Memory (Legacy) | External Memory | +|---------|-------------|---------------------|----------------| +| **Setup Complexity** | Simple | Medium | Medium | +| **Integration** | Built-in contextual | Contextual + User-specific | Standalone 
|
+| **Storage** | Local files | Mem0 Cloud/Local | Custom/Mem0 |
+| **Cross-session** | ✅ | ✅ | ✅ |
+| **User-specific** | ❌ | ✅ | ✅ |
+| **Custom providers** | Limited | Mem0 only | Any provider |
+| **Recommended for** | Most use cases | Legacy projects | Specialized needs |
-### Using OpenAI embeddings (already default)
-```python Code
-from crewai import Crew, Agent, Task, Process
+## Supported Embedding Providers
-my_crew = Crew(
-    agents=[...],
-    tasks=[...],
-    process=Process.sequential,
+### OpenAI (Default)
+```python
+crew = Crew(
     memory=True,
-    verbose=True,
     embedder={
         "provider": "openai",
-        "config": {
-            "model": 'text-embedding-3-small'
-        }
-    }
-)
-```
-Alternatively, you can directly pass the OpenAIEmbeddingFunction to the embedder parameter.
-
-Example:
-```python Code
-from crewai import Crew, Agent, Task, Process
-from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction
-
-my_crew = Crew(
-    agents=[...],
-    tasks=[...],
-    process=Process.sequential,
-    memory=True,
-    verbose=True,
-    embedder={
-        "provider": "openai",
-        "config": {
-            "model": 'text-embedding-3-small'
-        }
+        "config": {"model": "text-embedding-3-small"}
     }
 )
 ```
-### Using Ollama embeddings
-
-```python Code
-from crewai import Crew, Agent, Task, Process
-
-my_crew = Crew(
-    agents=[...],
-    tasks=[...],
-    process=Process.sequential,
+### Ollama
+```python
+crew = Crew(
     memory=True,
-    verbose=True,
     embedder={
         "provider": "ollama",
-        "config": {
-            "model": "mxbai-embed-large"
-        }
+        "config": {"model": "mxbai-embed-large"}
     }
 )
 ```
-### Using Google AI embeddings
-
-#### Prerequisites
-Before using Google AI embeddings, ensure you have:
-- Access to the Gemini API
-- The necessary API keys and permissions
-
-You will need to update your *pyproject.toml* dependencies:
-```YAML
-dependencies = [
-    "google-generativeai>=0.8.4", #main version in January/2025 - crewai v.0.100.0 and crewai-tools 0.33.0
-    "crewai[tools]>=0.100.0,<1.0.0"
-]
-```
-
-```python Code
-from crewai import Crew, Agent, Task, Process
-
-my_crew = Crew(
-    agents=[...],
-    tasks=[...],
-    process=Process.sequential,
+### Google AI
+```python
+crew = Crew(
     memory=True,
-    verbose=True,
     embedder={
         "provider": "google",
         "config": {
-            "api_key": "",
-            "model": ""
+            "api_key": "your-api-key",
+            "model": "text-embedding-004"
         }
     }
 )
 ```
-### Using Azure OpenAI embeddings
-
-```python Code
-from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction
-from crewai import Crew, Agent, Task, Process
-
-my_crew = Crew(
-    agents=[...],
-    tasks=[...],
-    process=Process.sequential,
+### Azure OpenAI
+```python
+crew = Crew(
     memory=True,
-    verbose=True,
     embedder={
         "provider": "openai",
         "config": {
-            "api_key": "YOUR_API_KEY",
-            "api_base": "YOUR_API_BASE_PATH",
-            "api_version": "YOUR_API_VERSION",
-            "model_name": 'text-embedding-3-small'
+            "api_key": "your-api-key",
+            "api_base": "https://your-resource.openai.azure.com/",
+            "api_version": "2023-05-15",
+            "model_name": "text-embedding-3-small"
         }
     }
 )
 ```
-### Using Vertex AI embeddings
-
-```python Code
-from chromadb.utils.embedding_functions import GoogleVertexEmbeddingFunction
-from crewai import Crew, Agent, Task, Process
-
-my_crew = Crew(
-    agents=[...],
-    tasks=[...],
-    process=Process.sequential,
+### Vertex AI
+```python
+crew = Crew(
     memory=True,
-    verbose=True,
     embedder={
         "provider": "vertexai",
         "config": {
-            "project_id"="YOUR_PROJECT_ID",
-            "region"="YOUR_REGION",
-            "api_key"="YOUR_API_KEY",
-            "model_name"="textembedding-gecko"
+            "project_id": "your-project-id",
+            "region": "your-region",
+            "api_key": "your-api-key",
+            "model_name": "textembedding-gecko"
         }
     }
 )
 ```
-### Using Cohere embeddings
-
-```python Code
-from crewai import Crew, Agent, Task, Process
-
-my_crew = Crew(
-    agents=[...],
-    tasks=[...],
-    process=Process.sequential,
-    memory=True,
-    verbose=True,
-    embedder={
-        "provider": "cohere",
-        "config": {
-            "api_key": "YOUR_API_KEY",
-            "model": ""
-        }
-    }
-)
-```
-### Using VoyageAI embeddings
-
-```python Code
-from crewai import Crew, Agent, Task, Process
-
-my_crew = Crew(
-    agents=[...],
-    tasks=[...],
-    process=Process.sequential,
-    memory=True,
-    verbose=True,
-    embedder={
-        "provider": "voyageai",
-        "config": {
-            "api_key": "YOUR_API_KEY",
-            "model": ""
-        }
-    }
-)
-```
-### Using HuggingFace embeddings
-
-```python Code
-from crewai import Crew, Agent, Task, Process
-
-my_crew = Crew(
-    agents=[...],
-    tasks=[...],
-    process=Process.sequential,
-    memory=True,
-    verbose=True,
-    embedder={
-        "provider": "huggingface",
-        "config": {
-            "api_url": "",
-        }
-    }
-)
-```
-
-### Using Watson embeddings
-
-```python Code
-from crewai import Crew, Agent, Task, Process
-
-# Note: Ensure you have installed and imported `ibm_watsonx_ai` for Watson embeddings to work.
-
-my_crew = Crew(
-    agents=[...],
-    tasks=[...],
-    process=Process.sequential,
-    memory=True,
-    verbose=True,
-    embedder={
-        "provider": "watson",
-        "config": {
-            "model": "",
-            "api_url": "",
-            "api_key": "",
-            "project_id": "",
-        }
-    }
-)
-```
-
-### Using Amazon Bedrock embeddings
-
-```python Code
-# Note: Ensure you have installed `boto3` for Bedrock embeddings to work.
-
-import os
-import boto3
-from crewai import Crew, Agent, Task, Process
-
-boto3_session = boto3.Session(
-    region_name=os.environ.get("AWS_REGION_NAME"),
-    aws_access_key_id=os.environ.get("AWS_ACCESS_KEY_ID"),
-    aws_secret_access_key=os.environ.get("AWS_SECRET_ACCESS_KEY")
-)
-
-my_crew = Crew(
-    agents=[...],
-    tasks=[...],
-    process=Process.sequential,
-    memory=True,
-    embedder={
-        "provider": "bedrock",
-        "config":{
-            "session": boto3_session,
-            "model": "amazon.titan-embed-text-v2:0",
-            "vector_dimension": 1024
-        }
-    }
-    verbose=True
-)
-```
-
-### Adding Custom Embedding Function
-
-```python Code
-from crewai import Crew, Agent, Task, Process
-from chromadb import Documents, EmbeddingFunction, Embeddings
-
-# Create a custom embedding function
-class CustomEmbedder(EmbeddingFunction):
-    def __call__(self, input: Documents) -> Embeddings:
-        # generate embeddings
-        return [1, 2, 3] # this is a dummy embedding
-my_crew = Crew(
-    agents=[...],
-    tasks=[...],
-    process=Process.sequential,
-    memory=True,
-    verbose=True,
-    embedder={
-        "provider": "custom",
-        "config": {
-            "embedder": CustomEmbedder()
-        }
-    }
-)
-```
-
-### Resetting Memory via cli
-
-```shell
-crewai reset-memories [OPTIONS]
-```
-
-#### Resetting Memory Options
-
-| Option | Description | Type | Default |
-| :----------------- | :------------------------------- | :------------- | :------ |
-| `-l`, `--long` | Reset LONG TERM memory. | Flag (boolean) | False |
-| `-s`, `--short` | Reset SHORT TERM memory. | Flag (boolean) | False |
-| `-e`, `--entities` | Reset ENTITIES memory. | Flag (boolean) | False |
-| `-k`, `--kickoff-outputs` | Reset LATEST KICKOFF TASK OUTPUTS. | Flag (boolean) | False |
-| `-kn`, `--knowledge` | Reset KNOWLEDEGE storage | Flag (boolean) | False |
-| `-akn`, `--agent-knowledge` | Reset AGENT KNOWLEDGE storage | Flag (boolean) | False |
-| `-a`, `--all` | Reset ALL memories. | Flag (boolean) | False |
-
-Note: To use the cli command you need to have your crew in a file called crew.py in the same directory.
-
-
-
-
-### Resetting Memory via crew object
+## Security Best Practices
+### Environment Variables
 ```python
+import os
+from crewai import Crew
-my_crew = Crew(
-    agents=[...],
-    tasks=[...],
-    process=Process.sequential,
+# Store sensitive data in environment variables
+crew = Crew(
     memory=True,
-    verbose=True,
     embedder={
-        "provider": "custom",
+        "provider": "openai",
         "config": {
-            "embedder": CustomEmbedder()
+            "api_key": os.getenv("OPENAI_API_KEY"),
+            "model": "text-embedding-3-small"
         }
     }
 )
-
-my_crew.reset_memories(command_type = 'all') # Resets all the memory
 ```
-#### Resetting Memory Options
+### Storage Security
+```python
+import os
+from crewai import Crew
+from crewai.memory import LongTermMemory
+from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
-
-| Command Type | Description |
-| :----------------- | :------------------------------- |
-| `long` | Reset LONG TERM memory. |
-| `short` | Reset SHORT TERM memory. |
-| `entities` | Reset ENTITIES memory. |
-| `kickoff_outputs` | Reset LATEST KICKOFF TASK OUTPUTS. |
-| `knowledge` | Reset KNOWLEDGE memory. |
-| `agent_knowledge` | Reset AGENT KNOWLEDGE memory. |
-| `all` | Reset ALL memories. |
+# Use secure storage paths
+storage_path = os.getenv("CREWAI_STORAGE_DIR", "./storage")
+os.makedirs(storage_path, mode=0o700, exist_ok=True) # Restricted permissions
+crew = Crew(
+    memory=True,
+    long_term_memory=LongTermMemory(
+        storage=LTMSQLiteStorage(
+            db_path=f"{storage_path}/memory.db"
+        )
+    )
+)
+```
+## Troubleshooting
+
+### Common Issues
+
+**Memory not persisting between sessions?**
+- Check `CREWAI_STORAGE_DIR` environment variable
+- Ensure write permissions to storage directory
+- Verify memory is enabled with `memory=True`
+
+**Mem0 authentication errors?**
+- Verify `MEM0_API_KEY` environment variable is set
+- Check API key permissions on Mem0 dashboard
+- Ensure `mem0ai` package is installed
+
+**High memory usage with large datasets?**
+- Consider using External Memory with custom storage
+- Implement pagination in custom storage search methods
+- Use smaller embedding models for reduced memory footprint
+
+### Performance Tips
+
+- Use `memory=True` for most use cases (simplest and fastest)
+- Only use User Memory if you need user-specific persistence
+- Consider External Memory for high-scale or specialized requirements
+- Choose smaller embedding models for faster processing
+- Set appropriate search limits to control memory retrieval size
 ## Benefits of Using CrewAI's Memory System