Compare commits

...

14 Commits

Author SHA1 Message Date
Devin AI
1eb09f7e96 Add type ignore comment to fix type checking issue
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-04-09 03:11:21 +00:00
Devin AI
474a584312 Fix type checking issue in LLM class
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-04-09 03:09:51 +00:00
Devin AI
1a53894cd9 Fix import order with ruff auto-fix
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-04-09 03:08:37 +00:00
Devin AI
3a85b442fb Fix import order in test file
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-04-09 03:07:36 +00:00
Devin AI
cdd5ebfb1a Fix #2541: Add support for multimodal content format in qwen2.5-vl model
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-04-09 03:06:24 +00:00
João Moura
b992ee9d6b small comments 2025-04-08 10:27:02 -07:00
Lucas Gomide
d7fa8464c7 Add support for External Memory (the future replacement for UserMemory) (#2510)
* fix: surfacing properly supported types by Mem0Storage

* feat: prepare Mem0Storage to accept a config parameter

We're planning to remove `memory_config` soon. This commit prepares the storage to accept the config provided directly

* feat: add external memory

* fix: cleanup Mem0 warning while adding messages to the memory

* feat: support setting the current crew in memory

This can be useful when a memory is initialized before the crew, but the crew might still be a very relevant attribute

* fix: allow resetting only the external_memory from a crew

* test: add external memory test

* test: ensure the config takes precedence over memory_config when setting mem0

* fix: support providing a custom storage to External Memory

* docs: add docs about external memory

* chore: add warning messages about the deprecation of UserMemory

* fix: fix typing check

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-04-07 10:40:35 -07:00
João Moura
918c0589eb adding new docs 2025-04-07 02:46:40 -04:00
Tony Kipkemboi
d216edb022 Merge pull request #2520 from exiao/main
Fix title and position in docs for Arize Phoenix
2025-04-05 18:01:20 -04:00
exiao
afa8783750 Update arize-phoenix-observability.mdx 2025-04-03 13:03:39 -04:00
exiao
a661050464 Merge branch 'crewAIInc:main' into main 2025-04-03 11:34:29 -04:00
exiao
c14f990098 Update docs.json 2025-04-03 11:33:51 -04:00
exiao
26ccaf78ec Update arize-phoenix-observability.mdx 2025-04-03 11:33:18 -04:00
exiao
12e98e1f3c Update and rename phoenix-observability.mdx to arize-phoenix-observability.mdx 2025-04-03 11:32:56 -04:00
32 changed files with 4034 additions and 98 deletions

3
.gitignore vendored
View File

@@ -25,4 +25,5 @@ agentops.log
test_flow.html
crewairules.mdc
plan.md
conceptual_plan.md
conceptual_plan.md
build_image

View File

@@ -18,6 +18,18 @@ In the CrewAI framework, an `Agent` is an autonomous unit that can:
Think of an agent as a specialized team member with specific skills, expertise, and responsibilities. For example, a `Researcher` agent might excel at gathering and analyzing information, while a `Writer` agent might be better at creating content.
</Tip>
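As a minimal sketch (the role, goal, and backstory strings below are illustrative placeholders, not part of this change), defining such an agent looks like:
```python
from crewai import Agent

# A minimal "Researcher" agent; all strings are illustrative placeholders
researcher = Agent(
    role="Researcher",
    goal="Gather and analyze information on a given topic",
    backstory="You are an analyst at a leading tech think tank.",
    verbose=True,
)
```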
<Note type="info" title="Enterprise Enhancement: Visual Agent Builder">
CrewAI Enterprise includes a Visual Agent Builder that simplifies agent creation and configuration without writing code. Design your agents visually and test them in real-time.
![Visual Agent Builder Screenshot](../images/enterprise/crew-studio-quickstart)
The Visual Agent Builder enables:
- Intuitive agent configuration with form-based interfaces
- Real-time testing and validation
- Template library with pre-configured agent types
- Easy customization of agent attributes and behaviors
</Note>
## Agent Attributes
| Attribute | Parameter | Type | Description |
@@ -233,7 +245,7 @@ custom_agent = Agent(
#### Code Execution
- `allow_code_execution`: Must be True to run code
- `code_execution_mode`:
- `code_execution_mode`:
- `"safe"`: Uses Docker (recommended for production)
- `"unsafe"`: Direct execution (use only in trusted environments)

View File

@@ -18,6 +18,20 @@ CrewAI uses an event bus architecture to emit events throughout the execution li
When specific actions occur in CrewAI (like a Crew starting execution, an Agent completing a task, or a tool being used), the system emits corresponding events. You can register handlers for these events to execute custom code when they occur.
<Note type="info" title="Enterprise Enhancement: Prompt Tracing">
CrewAI Enterprise provides a built-in Prompt Tracing feature that leverages the event system to track, store, and visualize all prompts, completions, and associated metadata. This provides powerful debugging capabilities and transparency into your agent operations.
![Prompt Tracing Dashboard](../images/enterprise/prompt-tracing.png)
With Prompt Tracing you can:
- View the complete history of all prompts sent to your LLM
- Track token usage and costs
- Debug agent reasoning failures
- Share prompt sequences with your team
- Compare different prompt strategies
- Export traces for compliance and auditing
</Note>
## Creating a Custom Event Listener
To create a custom event listener, you need to:
@@ -40,17 +54,17 @@ from crewai.utilities.events.base_event_listener import BaseEventListener
class MyCustomListener(BaseEventListener):
def __init__(self):
super().__init__()
def setup_listeners(self, crewai_event_bus):
@crewai_event_bus.on(CrewKickoffStartedEvent)
def on_crew_started(source, event):
print(f"Crew '{event.crew_name}' has started execution!")
@crewai_event_bus.on(CrewKickoffCompletedEvent)
def on_crew_completed(source, event):
print(f"Crew '{event.crew_name}' has completed execution!")
print(f"Output: {event.output}")
@crewai_event_bus.on(AgentExecutionCompletedEvent)
def on_agent_execution_completed(source, event):
print(f"Agent '{event.agent.role}' completed task")
@@ -83,7 +97,7 @@ my_listener = MyCustomListener()
class MyCustomCrew:
# Your crew implementation...
def crew(self):
return Crew(
agents=[...],
@@ -106,7 +120,7 @@ my_listener = MyCustomListener()
class MyCustomFlow(Flow):
# Your flow implementation...
@start()
def first_step(self):
# ...
@@ -324,9 +338,9 @@ with crewai_event_bus.scoped_handlers():
@crewai_event_bus.on(CrewKickoffStartedEvent)
def temp_handler(source, event):
print("This handler only exists within this context")
# Do something that emits events
# Outside the context, the temporary handler is removed
```

View File

@@ -18,7 +18,8 @@ reason, and learn from past interactions.
| **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. |
| **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses `RAG` for storing entity information. |
| **Contextual Memory**| Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, and `EntityMemory`, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |
| **User Memory** | Stores user-specific information and preferences, enhancing personalization and user experience. |
| **External Memory** | Enables integration with external memory systems and providers (like Mem0), allowing for specialized memory storage and retrieval across different applications. Supports custom storage implementations for flexible memory management. |
| **User Memory** | ⚠️ **DEPRECATED**: This component is deprecated and will be removed in a future version. Please use [External Memory](#using-external-memory) instead. |
## How Memory Systems Empower Agents
@@ -274,6 +275,102 @@ crew = Crew(
)
```
### Using External Memory
External Memory is a powerful feature that allows you to integrate external memory systems with your CrewAI applications. This is particularly useful when you want to use specialized memory providers or maintain memory across different applications.
#### Basic Usage with Mem0
The most common way to use External Memory is with Mem0 as the provider:
```python
from crewai import Agent, Crew, Process, Task
from crewai.memory.external.external_memory import ExternalMemory
agent = Agent(
role="You are a helpful assistant",
goal="Plan a vacation for the user",
backstory="You are a helpful assistant that can plan a vacation for the user",
verbose=True,
)
task = Task(
description="Give things related to the user's vacation",
expected_output="A plan for the vacation",
agent=agent,
)
crew = Crew(
agents=[agent],
tasks=[task],
verbose=True,
process=Process.sequential,
memory=True,
external_memory=ExternalMemory(
embedder_config={"provider": "mem0", "config": {"user_id": "U-123"}} # you can provide an entire Mem0 configuration
),
)
crew.kickoff(
inputs={"question": "which destination is better for a beach vacation?"}
)
```
#### Using External Memory with Custom Storage
You can also create custom storage implementations for External Memory. Here's an example of how to create a custom storage:
```python
from crewai import Agent, Crew, Process, Task
from crewai.memory.external.external_memory import ExternalMemory
from crewai.memory.storage.interface import Storage
class CustomStorage(Storage):
def __init__(self):
self.memories = []
def save(self, value, metadata=None, agent=None):
self.memories.append({"value": value, "metadata": metadata, "agent": agent})
def search(self, query, limit=10, score_threshold=0.5):
# Implement your search logic here
return []
def reset(self):
self.memories = []
# Create external memory with custom storage
external_memory = ExternalMemory(
storage=CustomStorage(),
embedder_config={"provider": "mem0", "config": {"user_id": "U-123"}},
)
agent = Agent(
role="You are a helpful assistant",
goal="Plan a vacation for the user",
backstory="You are a helpful assistant that can plan a vacation for the user",
verbose=True,
)
task = Task(
description="Give things related to the user's vacation",
expected_output="A plan for the vacation",
agent=agent,
)
crew = Crew(
agents=[agent],
tasks=[task],
verbose=True,
process=Process.sequential,
memory=True,
external_memory=external_memory,
)
crew.kickoff(
inputs={"question": "which destination is better for a beach vacation?"}
)
```
## Additional Embedding Providers

View File

@@ -12,6 +12,18 @@ Tasks provide all necessary details for execution, such as a description, the ag
Tasks within CrewAI can be collaborative, requiring multiple agents to work together. This is managed through the task properties and orchestrated by the Crew's process, enhancing teamwork and efficiency.
<Note type="info" title="Enterprise Enhancement: Visual Task Builder">
CrewAI Enterprise includes a Visual Task Builder in Crew Studio that simplifies complex task creation and chaining. Design your task flows visually and test them in real-time without writing code.
![Task Builder Screenshot](../images/enterprise/crew-studio-quickstart.png)
The Visual Task Builder enables:
- Drag-and-drop task creation
- Visual task dependencies and flow
- Real-time testing and validation
- Easy sharing and collaboration
</Note>
### Task Execution Flow
Tasks can be executed in two ways:
@@ -414,7 +426,7 @@ It's also important to note that the output of the final task of a crew becomes
### Using `output_pydantic`
The `output_pydantic` property allows you to define a Pydantic model that the task output should conform to. This ensures that the output is not only structured but also validated according to the Pydantic model.
Heres an example demonstrating how to use output_pydantic:
Here's an example demonstrating how to use output_pydantic:
```python Code
import json
@@ -495,7 +507,7 @@ In this example:
### Using `output_json`
The `output_json` property allows you to define the expected output in JSON format. This ensures that the task's output is a valid JSON structure that can be easily parsed and used in your application.
Heres an example demonstrating how to use `output_json`:
Here's an example demonstrating how to use `output_json`:
```python Code
import json

View File

@@ -15,6 +15,18 @@ A tool in CrewAI is a skill or function that agents can utilize to perform vario
This includes tools from the [CrewAI Toolkit](https://github.com/joaomdmoura/crewai-tools) and [LangChain Tools](https://python.langchain.com/docs/integrations/tools),
enabling everything from simple searches to complex interactions and effective teamwork among agents.
<Note type="info" title="Enterprise Enhancement: Tools Repository">
CrewAI Enterprise provides a comprehensive Tools Repository with pre-built integrations for common business systems and APIs. Deploy agents with enterprise tools in minutes instead of days.
![Tools Repository Screenshot](../images/enterprise/tools-repository.png)
The Enterprise Tools Repository includes:
- Pre-built connectors for popular enterprise systems
- Custom tool creation interface
- Version control and sharing capabilities
- Security and compliance features
</Note>
## Key Characteristics of Tools
- **Utility**: Crafted for tasks such as web searching, data analysis, content generation, and agent collaboration.
@@ -79,7 +91,7 @@ research = Task(
)
write = Task(
description='Write an engaging blog post about the AI industry, based on the research analysts summary. Draw inspiration from the latest blog posts in the directory.',
description='Write an engaging blog post about the AI industry, based on the research analyst's summary. Draw inspiration from the latest blog posts in the directory.',
expected_output='A 4-paragraph blog post formatted in markdown with engaging, informative, and accessible content, avoiding complex jargon.',
agent=writer,
output_file='blog-posts/new_post.md' # The final blog post will be saved here
@@ -141,7 +153,7 @@ Here is a list of the available tools and their descriptions:
## Creating your own Tools
<Tip>
Developers can craft `custom tools` tailored for their agents needs or
Developers can craft `custom tools` tailored for their agent's needs or
utilize pre-built options.
</Tip>
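As a rough sketch of a custom tool (assuming `BaseTool` is importable from `crewai.tools`; adjust the import to match your installed version):
```python
from crewai.tools import BaseTool

class WordCountTool(BaseTool):
    name: str = "Word Counter"
    description: str = "Counts the number of words in a piece of text."

    def _run(self, text: str) -> str:
        # Called by the agent with the text it wants analyzed
        return f"The text contains {len(text.split())} words."
```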

View File

@@ -105,12 +105,12 @@
"group": "Agent Monitoring & Observability",
"pages": [
"how-to/agentops-observability",
"how-to/arize-phoenix-observability",
"how-to/langfuse-observability",
"how-to/langtrace-observability",
"how-to/mlflow-observability",
"how-to/openlit-observability",
"how-to/opik-observability",
"how-to/phoenix-observability",
"how-to/portkey-observability",
"how-to/weave-integration"
]

View File

@@ -1,10 +1,10 @@
---
title: Agent Monitoring with Arize Phoenix
description: Learn how to integrate Arize Phoenix with CrewAI via OpenTelemetry using OpenInference
title: Arize Phoenix
description: Arize Phoenix integration for CrewAI with OpenTelemetry and OpenInference
icon: magnifying-glass-chart
---
# Integrate Arize Phoenix with CrewAI
# Arize Phoenix Integration
This guide demonstrates how to integrate **Arize Phoenix** with **CrewAI** using OpenTelemetry via the [OpenInference](https://github.com/openinference/openinference) SDK. By the end of this guide, you will be able to trace your CrewAI agents and easily debug your agents.
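As a rough setup sketch (the package names and the `register` helper below are assumptions based on the Phoenix and OpenInference documentation, not part of this change):
```python
# pip install arize-phoenix-otel openinference-instrumentation-crewai
from phoenix.otel import register
from openinference.instrumentation.crewai import CrewAIInstrumentor

# Route CrewAI's OpenTelemetry traces to a running Phoenix instance
tracer_provider = register(project_name="crewai-tracing-demo")
CrewAIInstrumentor().instrument(tracer_provider=tracer_provider)
```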

View File

@@ -6,12 +6,12 @@ icon: wrench
<Note>
**Python Version Requirements**
CrewAI requires `Python >=3.10 and <3.13`. Here's how to check your version:
```bash
python3 --version
```
If you need to update Python, visit [python.org/downloads](https://python.org/downloads)
</Note>
@@ -140,6 +140,27 @@ We recommend using the `YAML` template scaffolding for a structured approach to
</Step>
</Steps>
## Enterprise Installation Options
<Note type="info">
For teams and organizations, CrewAI offers enterprise deployment options that eliminate setup complexity:
### CrewAI Enterprise (SaaS)
- Zero installation required - just sign up for free at [app.crewai.com](https://app.crewai.com)
- Automatic updates and maintenance
- Managed infrastructure and scaling
- Build Crews with no Code
### CrewAI Factory (Self-hosted)
- Containerized deployment for your infrastructure
- Supports any hyperscaler, including on-prem deployments
- Integration with your existing security systems
<Card title="Explore Enterprise Options" icon="building" href="https://crewai.com/enterprise">
Learn about CrewAI's enterprise offerings and schedule a demo
</Card>
</Note>
## Next Steps
<CardGroup cols={2}>

View File

@@ -15,6 +15,7 @@ CrewAI empowers developers with both high-level simplicity and precise low-level
With over 100,000 developers certified through our community courses, CrewAI is rapidly becoming the standard for enterprise-ready AI automation.
## How Crews Work
<Note>

View File

@@ -200,6 +200,22 @@ Follow the steps below to get Crewing! 🚣‍♂️
```
</CodeGroup>
</Step>
<Step title="Enterprise Alternative: Create in Crew Studio">
For CrewAI Enterprise users, you can create the same crew without writing code:
1. Log in to your CrewAI Enterprise account (create a free account at [app.crewai.com](https://app.crewai.com))
2. Open Crew Studio
3. Describe the automation you're trying to build
4. Create your tasks visually and connect them in sequence
5. Configure your inputs and click "Download Code" or "Deploy"
![Crew Studio Quickstart](../images/enterprise/crew-studio-quickstart.png)
<Card title="Try CrewAI Enterprise" icon="rocket" href="https://app.crewai.com">
Start your free account at CrewAI Enterprise
</Card>
</Step>
<Step title="View your final report">
You should see the output in the console and the `report.md` file should be created in the root of your project with the final report.
@@ -271,7 +287,7 @@ Follow the steps below to get Crewing! 🚣‍♂️
</Steps>
<Check>
Congratulations!
Congratulations!
You have successfully set up your crew project and are ready to start building your own agentic workflows!
</Check>

View File

@@ -207,6 +207,7 @@ class Agent(BaseAgent):
self.crew._long_term_memory,
self.crew._entity_memory,
self.crew._user_memory,
self.crew._external_memory,
)
memory = contextual_memory.build_context_for_task(task, context)
if memory.strip() != "":

View File

@@ -47,6 +47,27 @@ class CrewAgentExecutorMixin:
print(f"Failed to add to short term memory: {e}")
pass
def _create_external_memory(self, output) -> None:
"""Create and save a external-term memory item if conditions are met."""
if (
self.crew
and self.agent
and self.task
and hasattr(self.crew, "_external_memory")
and self.crew._external_memory
):
try:
self.crew._external_memory.save(
value=output.text,
metadata={
"description": self.task.description,
},
agent=self.agent.role,
)
except Exception as e:
print(f"Failed to add to external memory: {e}")
pass
def _create_long_term_memory(self, output) -> None:
"""Create and save long-term and entity memory items based on evaluation."""
if (

View File

@@ -129,6 +129,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self._create_short_term_memory(formatted_answer)
self._create_long_term_memory(formatted_answer)
self._create_external_memory(formatted_answer)
return {"output": formatted_answer.output}
def _invoke_loop(self) -> AgentFinish:

View File

@@ -3,6 +3,10 @@ import subprocess
import click
# Be mindful about changing this.
# In some environments we don't use this command but instead run `uv sync` directly,
# so if you expect this to support more things you will need to replicate it there.
# Ask @joaomdmoura if you are unsure.
def install_crew(proxy_options: list[str]) -> None:
"""
Install the crew by running the UV command to lock and install.

View File

@@ -28,6 +28,7 @@ from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.llm import LLM, BaseLLM
from crewai.memory.entity.entity_memory import EntityMemory
from crewai.memory.external.external_memory import ExternalMemory
from crewai.memory.long_term.long_term_memory import LongTermMemory
from crewai.memory.short_term.short_term_memory import ShortTermMemory
from crewai.memory.user.user_memory import UserMemory
@@ -105,6 +106,7 @@ class Crew(BaseModel):
_long_term_memory: Optional[InstanceOf[LongTermMemory]] = PrivateAttr()
_entity_memory: Optional[InstanceOf[EntityMemory]] = PrivateAttr()
_user_memory: Optional[InstanceOf[UserMemory]] = PrivateAttr()
_external_memory: Optional[InstanceOf[ExternalMemory]] = PrivateAttr()
_train: Optional[bool] = PrivateAttr(default=False)
_train_iteration: Optional[int] = PrivateAttr()
_inputs: Optional[Dict[str, Any]] = PrivateAttr(default=None)
@@ -145,6 +147,10 @@ class Crew(BaseModel):
default=None,
description="An instance of the UserMemory to be used by the Crew to store/fetch memories of a specific user.",
)
external_memory: Optional[InstanceOf[ExternalMemory]] = Field(
default=None,
description="An Instance of the ExternalMemory to be used by the Crew",
)
embedder: Optional[dict] = Field(
default=None,
description="Configuration for the embedder to be used for the crew.",
@@ -289,6 +295,12 @@ class Crew(BaseModel):
if self.entity_memory
else EntityMemory(crew=self, embedder_config=self.embedder)
)
self._external_memory = (
# External memory doesn't have a default value since it is designed to be managed entirely externally
self.external_memory.set_crew(self)
if self.external_memory
else None
)
if (
self.memory_config
and "user_memory" in self.memory_config
@@ -1167,6 +1179,7 @@ class Crew(BaseModel):
"_short_term_memory",
"_long_term_memory",
"_entity_memory",
"_external_memory",
"_telemetry",
"agents",
"tasks",
@@ -1313,7 +1326,15 @@ class Crew(BaseModel):
RuntimeError: If memory reset operation fails.
"""
VALID_TYPES = frozenset(
["long", "short", "entity", "knowledge", "kickoff_outputs", "all"]
[
"long",
"short",
"entity",
"knowledge",
"kickoff_outputs",
"all",
"external",
]
)
if command_type not in VALID_TYPES:
@@ -1339,6 +1360,7 @@ class Crew(BaseModel):
memory_systems = [
("short term", getattr(self, "_short_term_memory", None)),
("entity", getattr(self, "_entity_memory", None)),
("external", getattr(self, "_external_memory", None)),
("long term", getattr(self, "_long_term_memory", None)),
("task output", getattr(self, "_task_output_handler", None)),
("knowledge", getattr(self, "knowledge", None)),
@@ -1366,6 +1388,7 @@ class Crew(BaseModel):
"entity": (self._entity_memory, "entity"),
"knowledge": (self.knowledge, "knowledge"),
"kickoff_outputs": (self._task_output_handler, "task output"),
"external": (self._external_memory, "external"),
}
memory_system, name = reset_functions[memory_type]

View File

@@ -839,9 +839,13 @@ class LLM(BaseLLM):
# Validate message format first
for msg in messages:
if not isinstance(msg, dict) or "role" not in msg or "content" not in msg:
if not isinstance(msg, dict) or "role" not in msg:
raise TypeError(
"Invalid message format. Each message must be a dict with 'role' and 'content' keys"
"Invalid message format. Each message must be a dict with 'role' key"
)
if "content" not in msg and msg["role"] != "system":
raise TypeError(
"Invalid message format. Each non-system message must have a 'content' key"
)
# Handle O1 models specially
@@ -868,6 +872,19 @@ class LLM(BaseLLM):
messages.append({"role": "user", "content": "Please continue."})
return messages
if "qwen" in self.model.lower():
formatted_messages = []
for msg in messages:
if not isinstance(msg.get("content"), str):
formatted_messages.append(msg)
continue
formatted_messages.append({
"role": msg["role"],
"content": [{"type": "text", "text": msg["content"]}] # type: ignore
})
return formatted_messages
# Handle Anthropic models
if not self.is_anthropic:
return messages

View File

@@ -2,5 +2,12 @@ from .entity.entity_memory import EntityMemory
from .long_term.long_term_memory import LongTermMemory
from .short_term.short_term_memory import ShortTermMemory
from .user.user_memory import UserMemory
from .external.external_memory import ExternalMemory
__all__ = ["UserMemory", "EntityMemory", "LongTermMemory", "ShortTermMemory"]
__all__ = [
"UserMemory",
"EntityMemory",
"LongTermMemory",
"ShortTermMemory",
"ExternalMemory",
]

View File

@@ -1,6 +1,12 @@
from typing import Any, Dict, Optional
from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory, UserMemory
from crewai.memory import (
EntityMemory,
ExternalMemory,
LongTermMemory,
ShortTermMemory,
UserMemory,
)
class ContextualMemory:
@@ -11,6 +17,7 @@ class ContextualMemory:
ltm: LongTermMemory,
em: EntityMemory,
um: UserMemory,
exm: ExternalMemory,
):
if memory_config is not None:
self.memory_provider = memory_config.get("provider")
@@ -20,6 +27,7 @@ class ContextualMemory:
self.ltm = ltm
self.em = em
self.um = um
self.exm = exm
def build_context_for_task(self, task, context) -> str:
"""
@@ -35,6 +43,7 @@ class ContextualMemory:
context.append(self._fetch_ltm_context(task.description))
context.append(self._fetch_stm_context(query))
context.append(self._fetch_entity_context(query))
context.append(self._fetch_external_context(query))
if self.memory_provider == "mem0":
context.append(self._fetch_user_context(query))
return "\n".join(filter(None, context))
@@ -106,3 +115,24 @@ class ContextualMemory:
f"- {result['memory']}" for result in user_memories
)
return f"User memories/preferences:\n{formatted_memories}"
def _fetch_external_context(self, query: str) -> str:
"""
Fetches and formats relevant information from External Memory.
Args:
query (str): The search query to find relevant information.
Returns:
str: Formatted information as bullet points, or an empty string if none found.
"""
if self.exm is None:
return ""
external_memories = self.exm.search(query)
if not external_memories:
return ""
formatted_memories = "\n".join(
f"- {result['memory']}" for result in external_memories
)
return f"External memories:\n{formatted_memories}"

View File

View File

@@ -0,0 +1,61 @@
from typing import TYPE_CHECKING, Any, Dict, Optional, Self
from crewai.memory.external.external_memory_item import ExternalMemoryItem
from crewai.memory.memory import Memory
from crewai.memory.storage.interface import Storage
if TYPE_CHECKING:
from crewai.memory.storage.mem0_storage import Mem0Storage
class ExternalMemory(Memory):
def __init__(self, storage: Optional[Storage] = None, **data: Any):
super().__init__(storage=storage, **data)
@staticmethod
def _configure_mem0(crew: Any, config: Dict[str, Any]) -> "Mem0Storage":
from crewai.memory.storage.mem0_storage import Mem0Storage
return Mem0Storage(type="external", crew=crew, config=config)
@staticmethod
def external_supported_storages() -> Dict[str, Any]:
return {
"mem0": ExternalMemory._configure_mem0,
}
@staticmethod
def create_storage(crew: Any, embedder_config: Optional[Dict[str, Any]]) -> Storage:
if not embedder_config:
raise ValueError("embedder_config is required")
if "provider" not in embedder_config:
raise ValueError("embedder_config must include a 'provider' key")
provider = embedder_config["provider"]
supported_storages = ExternalMemory.external_supported_storages()
if provider not in supported_storages:
raise ValueError(f"Provider {provider} not supported")
return supported_storages[provider](crew, embedder_config.get("config", {}))
def save(
self,
value: Any,
metadata: Optional[Dict[str, Any]] = None,
agent: Optional[str] = None,
) -> None:
"""Saves a value into the external storage."""
item = ExternalMemoryItem(value=value, metadata=metadata, agent=agent)
super().save(value=item.value, metadata=item.metadata, agent=item.agent)
def reset(self) -> None:
self.storage.reset()
def set_crew(self, crew: Any) -> Self:
super().set_crew(crew)
if not self.storage:
self.storage = self.create_storage(crew, self.embedder_config)
return self

View File

@@ -0,0 +1,13 @@
from typing import Any, Dict, Optional
class ExternalMemoryItem:
def __init__(
self,
value: Any,
metadata: Optional[Dict[str, Any]] = None,
agent: Optional[str] = None,
):
self.value = value
self.metadata = metadata
self.agent = agent

View File

@@ -1,4 +1,4 @@
from typing import Any, Dict, List, Optional
from typing import Any, Dict, List, Optional, Self
from pydantic import BaseModel
@@ -9,6 +9,7 @@ class Memory(BaseModel):
"""
embedder_config: Optional[Dict[str, Any]] = None
crew: Optional[Any] = None
storage: Any
@@ -36,3 +37,7 @@ class Memory(BaseModel):
return self.storage.search(
query=query, limit=limit, score_threshold=score_threshold
)
def set_crew(self, crew: Any) -> Self:
self.crew = crew
return self

View File

@@ -11,15 +11,20 @@ class Mem0Storage(Storage):
Extends Storage to handle embedding and searching across entities using Mem0.
"""
def __init__(self, type, crew=None):
def __init__(self, type, crew=None, config=None):
super().__init__()
if type not in ["user", "short_term", "long_term", "entities"]:
raise ValueError("Invalid type for Mem0Storage. Must be 'user' or 'agent'.")
supported_types = ["user", "short_term", "long_term", "entities", "external"]
if type not in supported_types:
raise ValueError(
f"Invalid type '{type}' for Mem0Storage. Must be one of: "
+ ", ".join(supported_types)
)
self.memory_type = type
self.crew = crew
self.memory_config = crew.memory_config
self.config = config or {}
# TODO: memory_config will be removed in the future; the config will be passed as a parameter instead
self.memory_config = self.config or getattr(crew, "memory_config", {}) or {}
# User ID is required for user memory type "user" since it's used as a unique identifier for the user.
user_id = self._get_user_id()
@@ -27,7 +32,7 @@ class Mem0Storage(Storage):
raise ValueError("User ID is required for user memory type")
# API key in memory config overrides the environment variable
config = self.memory_config.get("config", {})
config = self._get_config()
mem0_api_key = config.get("api_key") or os.getenv("MEM0_API_KEY")
mem0_org_id = config.get("org_id")
mem0_project_id = config.get("project_id")
@@ -56,26 +61,34 @@ class Mem0Storage(Storage):
def save(self, value: Any, metadata: Dict[str, Any]) -> None:
user_id = self._get_user_id()
agent_name = self._get_agent_name()
if self.memory_type == "user":
self.memory.add(value, user_id=user_id, metadata={**metadata})
elif self.memory_type == "short_term":
agent_name = self._get_agent_name()
self.memory.add(
value, agent_id=agent_name, metadata={"type": "short_term", **metadata}
)
params = None
if self.memory_type == "short_term":
params = {
"agent_id": agent_name,
"infer": False,
"metadata": {"type": "short_term", **metadata},
}
elif self.memory_type == "long_term":
agent_name = self._get_agent_name()
self.memory.add(
value,
agent_id=agent_name,
infer=False,
metadata={"type": "long_term", **metadata},
)
params = {
"agent_id": agent_name,
"infer": False,
"metadata": {"type": "long_term", **metadata},
}
elif self.memory_type == "entities":
entity_name = self._get_agent_name()
self.memory.add(
value, user_id=entity_name, metadata={"type": "entity", **metadata}
)
params = {
"agent_id": agent_name,
"infer": False,
"metadata": {"type": "entity", **metadata},
}
elif self.memory_type == "external":
params = {
"user_id": user_id,
"agent_id": agent_name,
"metadata": {"type": "external", **metadata},
}
if params:
self.memory.add(value, **params | {"output_format": "v1.1"})
def search(
self,
@@ -84,41 +97,43 @@ class Mem0Storage(Storage):
score_threshold: float = 0.35,
) -> List[Any]:
params = {"query": query, "limit": limit}
if self.memory_type == "user":
user_id = self._get_user_id()
if user_id := self._get_user_id():
params["user_id"] = user_id
elif self.memory_type == "short_term":
agent_name = self._get_agent_name()
agent_name = self._get_agent_name()
if self.memory_type == "short_term":
params["agent_id"] = agent_name
params["metadata"] = {"type": "short_term"}
elif self.memory_type == "long_term":
agent_name = self._get_agent_name()
params["agent_id"] = agent_name
params["metadata"] = {"type": "long_term"}
elif self.memory_type == "entities":
agent_name = self._get_agent_name()
params["agent_id"] = agent_name
params["metadata"] = {"type": "entity"}
elif self.memory_type == "external":
params["agent_id"] = agent_name
params["metadata"] = {"type": "external"}
# Discard the filters for now since we create the filters
# automatically when the crew is created.
results = self.memory.search(**params)
return [r for r in results if r["score"] >= score_threshold]
def _get_user_id(self):
if self.memory_type == "user":
if hasattr(self, "memory_config") and self.memory_config is not None:
return self.memory_config.get("config", {}).get("user_id")
else:
return None
return None
def _get_user_id(self) -> str:
return self._get_config().get("user_id", "")
def _get_agent_name(self):
agents = self.crew.agents if self.crew else []
def _get_agent_name(self) -> str:
if not self.crew:
return ""
agents = self.crew.agents
agents = [self._sanitize_role(agent.role) for agent in agents]
agents = "_".join(agents)
return agents
def _get_config(self) -> Dict[str, Any]:
return self.config or getattr(self, "memory_config", {}).get("config", {}) or {}
def reset(self):
if self.memory:
self.memory.reset()

View File

@@ -1,3 +1,4 @@
import warnings
from typing import Any, Dict, Optional
from crewai.memory.memory import Memory
@@ -12,6 +13,12 @@ class UserMemory(Memory):
"""
def __init__(self, crew=None):
warnings.warn(
"UserMemory is deprecated and will be removed in a future version. "
"Please use ExternalMemory instead.",
DeprecationWarning,
stacklevel=2,
)
try:
from crewai.memory.storage.mem0_storage import Mem0Storage
except ImportError:
@@ -48,6 +55,4 @@ class UserMemory(Memory):
try:
self.storage.reset()
except Exception as e:
raise Exception(
f"An error occurred while resetting the user memory: {e}"
)
raise Exception(f"An error occurred while resetting the user memory: {e}")

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,180 @@
from unittest.mock import MagicMock, patch
import pytest
from mem0.memory.main import Memory
from crewai.agent import Agent
from crewai.crew import Crew, Process
from crewai.memory.external.external_memory import ExternalMemory
from crewai.memory.external.external_memory_item import ExternalMemoryItem
from crewai.memory.storage.interface import Storage
from crewai.task import Task
@pytest.fixture
def mock_mem0_memory():
mock_memory = MagicMock(spec=Memory)
return mock_memory
@pytest.fixture
def patch_configure_mem0(mock_mem0_memory):
with patch(
"crewai.memory.external.external_memory.ExternalMemory._configure_mem0",
return_value=mock_mem0_memory,
) as mocked:
yield mocked
@pytest.fixture
def external_memory_with_mocked_config(patch_configure_mem0):
embedder_config = {"provider": "mem0"}
external_memory = ExternalMemory(embedder_config=embedder_config)
return external_memory
@pytest.fixture
def crew_with_external_memory(external_memory_with_mocked_config, patch_configure_mem0):
agent = Agent(
role="Researcher",
goal="Search relevant data and provide results",
backstory="You are a researcher at a leading tech think tank.",
tools=[],
verbose=True,
)
task = Task(
description="Perform a search on specific topics.",
expected_output="A list of relevant URLs based on the search query.",
agent=agent,
)
crew = Crew(
agents=[agent],
tasks=[task],
verbose=True,
process=Process.sequential,
memory=True,
external_memory=external_memory_with_mocked_config,
)
return crew
def test_external_memory_initialization(external_memory_with_mocked_config):
assert external_memory_with_mocked_config is not None
assert isinstance(external_memory_with_mocked_config, ExternalMemory)
def test_external_memory_save(external_memory_with_mocked_config):
memory_item = ExternalMemoryItem(
value="test value", metadata={"task": "test_task"}, agent="test_agent"
)
with patch.object(ExternalMemory, "save") as mock_save:
external_memory_with_mocked_config.save(
value=memory_item.value,
metadata=memory_item.metadata,
agent=memory_item.agent,
)
mock_save.assert_called_once_with(
value=memory_item.value,
metadata=memory_item.metadata,
agent=memory_item.agent,
)
def test_external_memory_reset(external_memory_with_mocked_config):
with patch(
"crewai.memory.external.external_memory.ExternalMemory.reset"
) as mock_reset:
external_memory_with_mocked_config.reset()
mock_reset.assert_called_once()
def test_external_memory_supported_storages():
supported_storages = ExternalMemory.external_supported_storages()
assert "mem0" in supported_storages
assert callable(supported_storages["mem0"])
def test_external_memory_create_storage_invalid_provider():
embedder_config = {"provider": "invalid_provider", "config": {}}
with pytest.raises(ValueError, match="Provider invalid_provider not supported"):
ExternalMemory.create_storage(None, embedder_config)
def test_external_memory_create_storage_missing_provider():
embedder_config = {"config": {}}
with pytest.raises(
ValueError, match="embedder_config must include a 'provider' key"
):
ExternalMemory.create_storage(None, embedder_config)
def test_external_memory_create_storage_missing_config():
with pytest.raises(ValueError, match="embedder_config is required"):
ExternalMemory.create_storage(None, None)
def test_crew_with_external_memory_initialization(crew_with_external_memory):
assert crew_with_external_memory._external_memory is not None
assert isinstance(crew_with_external_memory._external_memory, ExternalMemory)
assert crew_with_external_memory._external_memory.crew == crew_with_external_memory
@pytest.mark.parametrize("mem_type", ["external", "all"])
def test_crew_external_memory_reset(mem_type, crew_with_external_memory):
with patch(
"crewai.memory.external.external_memory.ExternalMemory.reset"
) as mock_reset:
crew_with_external_memory.reset_memories(mem_type)
mock_reset.assert_called_once()
@pytest.mark.parametrize("mem_method", ["search", "save"])
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_external_memory_save(mem_method, crew_with_external_memory):
with patch(
f"crewai.memory.external.external_memory.ExternalMemory.{mem_method}"
) as mock_method:
crew_with_external_memory.kickoff()
assert mock_method.call_count > 0
def test_external_memory_custom_storage(crew_with_external_memory):
class CustomStorage(Storage):
def __init__(self):
self.memories = []
def save(self, value, metadata=None, agent=None):
self.memories.append({"value": value, "metadata": metadata, "agent": agent})
def search(self, query, limit=10, score_threshold=0.5):
return self.memories
def reset(self):
self.memories = []
custom_storage = CustomStorage()
external_memory = ExternalMemory(storage=custom_storage)
# by ensuring the crew is set, we can test that the storage is used
external_memory.set_crew(crew_with_external_memory)
test_value = "test value"
test_metadata = {"source": "test"}
test_agent = "test_agent"
external_memory.save(value=test_value, metadata=test_metadata, agent=test_agent)
results = external_memory.search("test")
assert len(results) == 1
assert results[0]["value"] == test_value
assert results[0]["metadata"] == test_metadata | {"agent": test_agent}
external_memory.reset()
results = external_memory.search("test")
assert len(results) == 0

View File

@@ -29,41 +29,32 @@ def mem0_storage_with_mocked_config(mock_mem0_memory):
"""Fixture to create a Mem0Storage instance with mocked dependencies"""
# Patch the Memory class to return our mock
with patch('mem0.memory.main.Memory.from_config', return_value=mock_mem0_memory):
with patch("mem0.memory.main.Memory.from_config", return_value=mock_mem0_memory):
config = {
"vector_store": {
"provider": "mock_vector_store",
"config": {
"host": "localhost",
"port": 6333
}
"config": {"host": "localhost", "port": 6333},
},
"llm": {
"provider": "mock_llm",
"config": {
"api_key": "mock-api-key",
"model": "mock-model"
}
"config": {"api_key": "mock-api-key", "model": "mock-model"},
},
"embedder": {
"provider": "mock_embedder",
"config": {
"api_key": "mock-api-key",
"model": "mock-model"
}
"config": {"api_key": "mock-api-key", "model": "mock-model"},
},
"graph_store": {
"provider": "mock_graph_store",
"config": {
"url": "mock-url",
"username": "mock-user",
"password": "mock-password"
}
"password": "mock-password",
},
},
"history_db_path": "/mock/path",
"version": "test-version",
"custom_fact_extraction_prompt": "mock prompt 1",
"custom_update_memory_prompt": "mock prompt 2"
"custom_update_memory_prompt": "mock prompt 2",
}
# Instantiate the class with memory_config
@@ -92,23 +83,73 @@ def mock_mem0_memory_client():
@pytest.fixture
def mem0_storage_with_memory_client(mock_mem0_memory_client):
def mem0_storage_with_memory_client_using_config_from_crew(mock_mem0_memory_client):
"""Fixture to create a Mem0Storage instance with mocked dependencies"""
# We need to patch the MemoryClient before it's instantiated
with patch.object(MemoryClient, '__new__', return_value=mock_mem0_memory_client):
crew = MockCrew(
memory_config={
"provider": "mem0",
"config": {"user_id": "test_user", "api_key": "ABCDEFGH", "org_id": "my_org_id", "project_id": "my_project_id"},
}
)
with patch.object(MemoryClient, "__new__", return_value=mock_mem0_memory_client):
crew = MockCrew(
memory_config={
"provider": "mem0",
"config": {
"user_id": "test_user",
"api_key": "ABCDEFGH",
"org_id": "my_org_id",
"project_id": "my_project_id",
},
}
)
mem0_storage = Mem0Storage(type="short_term", crew=crew)
return mem0_storage
mem0_storage = Mem0Storage(type="short_term", crew=crew)
return mem0_storage
def test_mem0_storage_with_memory_client_initialization(mem0_storage_with_memory_client, mock_mem0_memory_client):
@pytest.fixture
def mem0_storage_with_memory_client_using_explictly_config(mock_mem0_memory_client):
"""Fixture to create a Mem0Storage instance with mocked dependencies"""
# We need to patch the MemoryClient before it's instantiated
with patch.object(MemoryClient, "__new__", return_value=mock_mem0_memory_client):
crew = MockCrew(
memory_config={
"provider": "mem0",
"config": {
"user_id": "test_user",
"api_key": "ABCDEFGH",
"org_id": "my_org_id",
"project_id": "my_project_id",
},
}
)
new_config = {"provider": "mem0", "config": {"api_key": "new-api-key"}}
mem0_storage = Mem0Storage(type="short_term", crew=crew, config=new_config)
return mem0_storage
def test_mem0_storage_with_memory_client_initialization(
mem0_storage_with_memory_client_using_config_from_crew, mock_mem0_memory_client
):
"""Test Mem0Storage initialization with MemoryClient"""
assert mem0_storage_with_memory_client.memory_type == "short_term"
assert mem0_storage_with_memory_client.memory is mock_mem0_memory_client
assert (
mem0_storage_with_memory_client_using_config_from_crew.memory_type
== "short_term"
)
assert (
mem0_storage_with_memory_client_using_config_from_crew.memory
is mock_mem0_memory_client
)
def test_mem0_storage_with_explict_config(
mem0_storage_with_memory_client_using_explictly_config,
):
expected_config = {"provider": "mem0", "config": {"api_key": "new-api-key"}}
assert (
mem0_storage_with_memory_client_using_explictly_config.config == expected_config
)
assert (
mem0_storage_with_memory_client_using_explictly_config.memory_config
== expected_config
)

View File

@@ -0,0 +1,32 @@
import pytest
from crewai.llm import LLM
@pytest.mark.vcr(filter_headers=["authorization"])
def test_qwen_multimodal_content_formatting():
"""Test that multimodal content is properly formatted for Qwen models."""
llm = LLM(model="sambanova/Qwen2.5-72B-Instruct", temperature=0.7)
message = {"role": "user", "content": "Describe this image"}
formatted = llm._format_messages_for_provider([message])
assert isinstance(formatted[0]["content"], list)
assert formatted[0]["content"][0]["type"] == "text"
assert formatted[0]["content"][0]["text"] == "Describe this image"
multimodal_content = [
{"type": "text", "text": "What's in this image?"},
{"type": "image_url", "image_url": "https://example.com/image.jpg"}
]
message = {"role": "user", "content": multimodal_content}
formatted = llm._format_messages_for_provider([message])
assert formatted[0]["content"] == multimodal_content
messages = [
{"role": "system", "content": "You are a visual analysis assistant."},
{"role": "user", "content": multimodal_content}
]
formatted = llm._format_messages_for_provider(messages)
assert isinstance(formatted[0]["content"], list)
assert formatted[1]["content"] == multimodal_content