mirror of
https://github.com/crewAIInc/crewAI.git
synced 2026-01-08 07:38:29 +00:00
Compare commits
8 Commits
devin/1746
| Author | SHA1 | Date |
|---|---|---|
| | 789d5f985c | |
| | 5deea31ebd | |
| | c8ca9d7747 | |
| | cb1a98cabf | |
| | 369e6d109c | |
| | 2c011631f9 | |
| | d3fc2b4477 | |
| | 516d45deaa | |
38 .github/security.md vendored
@@ -1,19 +1,27 @@
CrewAI takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organization.
If you believe you have found a security vulnerability in any CrewAI product or service, please report it to us as described below.
## CrewAI Security Vulnerability Reporting Policy
## Reporting a Vulnerability
Please do not report security vulnerabilities through public GitHub issues.
To report a vulnerability, please email us at security@crewai.com.
Please include the requested information listed below so that we can triage your report more quickly.
CrewAI prioritizes the security of our software products, services, and GitHub repositories. To promptly address vulnerabilities, follow these steps for reporting security issues:
- Type of issue (e.g. SQL injection, cross-site scripting, etc.)
- Full paths of source file(s) related to the manifestation of the issue
- The location of the affected source code (tag/branch/commit or direct URL)
- Any special configuration required to reproduce the issue
- Step-by-step instructions to reproduce the issue (please include screenshots if needed)
- Proof-of-concept or exploit code (if possible)
- Impact of the issue, including how an attacker might exploit the issue
### Reporting Process
Do **not** report vulnerabilities via public GitHub issues.
Once we have received your report, we will respond to you at the email address you provide. If the issue is confirmed, we will release a patch as soon as possible depending on the complexity of the issue.
Email all vulnerability reports directly to:
**security@crewai.com**
At this time, we are not offering a bug bounty program. Any rewards will be at our discretion.
### Required Information
To help us quickly validate and remediate the issue, your report must include:
- **Vulnerability Type:** Clearly state the vulnerability type (e.g., SQL injection, XSS, privilege escalation).
- **Affected Source Code:** Provide full file paths and direct URLs (branch, tag, or commit).
- **Reproduction Steps:** Include detailed, step-by-step instructions. Screenshots are recommended.
- **Special Configuration:** Document any special settings or configurations required to reproduce.
- **Proof-of-Concept (PoC):** Provide exploit or PoC code (if available).
- **Impact Assessment:** Clearly explain the severity and potential exploitation scenarios.
### Our Response
- We will acknowledge receipt of your report promptly via your provided email.
- Confirmed vulnerabilities will receive priority remediation based on severity.
- Patches will be released as swiftly as possible following verification.
### Reward Notice
Currently, we do not offer a bug bounty program. Rewards, if issued, are discretionary.
@@ -169,19 +169,55 @@ In this section, you'll find detailed examples that help you select, configure,
```
</Accordion>

<Accordion title="Google">
Set the following environment variables in your `.env` file:
<Accordion title="Google (Gemini API)">
Set your API key in your `.env` file. If you need a key, or need to find an existing key, check [AI Studio](https://aistudio.google.com/apikey).

```toml Code
# Option 1: Gemini accessed with an API key.
```toml .env
# https://ai.google.dev/gemini-api/docs/api-key
GEMINI_API_KEY=<your-api-key>

# Option 2: Vertex AI IAM credentials for Gemini, Anthropic, and Model Garden.
# https://cloud.google.com/vertex-ai/generative-ai/docs/overview
```

Get credentials from your Google Cloud Console and save them to a JSON file with the following code:
Example usage in your CrewAI project:
```python Code
from crewai import LLM

llm = LLM(
    model="gemini/gemini-2.0-flash",
    temperature=0.7,
)
```

### Gemini models

Google offers a range of powerful models optimized for different use cases.

| Model | Context Window | Best For |
|--------------------------------|----------------|-------------------------------------------------------------------|
| gemini-2.5-flash-preview-04-17 | 1M tokens | Adaptive thinking, cost efficiency |
| gemini-2.5-pro-preview-05-06 | 1M tokens | Enhanced thinking and reasoning, multimodal understanding, advanced coding, and more |
| gemini-2.0-flash | 1M tokens | Next generation features, speed, thinking, and realtime streaming |
| gemini-2.0-flash-lite | 1M tokens | Cost efficiency and low latency |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |

The full list of models is available in the [Gemini model docs](https://ai.google.dev/gemini-api/docs/models).

### Gemma

The Gemini API also allows you to use your API key to access [Gemma models](https://ai.google.dev/gemma/docs) hosted on Google infrastructure.

| Model | Context Window |
|----------------|----------------|
| gemma-3-1b-it | 32k tokens |
| gemma-3-4b-it | 32k tokens |
| gemma-3-12b-it | 32k tokens |
| gemma-3-27b-it | 128k tokens |

</Accordion>
<Accordion title="Google (Vertex AI)">
Get credentials from your Google Cloud Console and save them to a JSON file, then load it with the following code:
```python Code
import json

@@ -205,14 +241,18 @@ In this section, you'll find detailed examples that help you select, configure,
    vertex_credentials=vertex_credentials_json
)
```

Google offers a range of powerful models optimized for different use cases:

| Model | Context Window | Best For |
|-----------------------|----------------|------------------------------------------------------------------|
| gemini-2.0-flash-exp | 1M tokens | Higher quality at faster speed, multimodal model, good for most tasks |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
| Model | Context Window | Best For |
|--------------------------------|----------------|-------------------------------------------------------------------|
| gemini-2.5-flash-preview-04-17 | 1M tokens | Adaptive thinking, cost efficiency |
| gemini-2.5-pro-preview-05-06 | 1M tokens | Enhanced thinking and reasoning, multimodal understanding, advanced coding, and more |
| gemini-2.0-flash | 1M tokens | Next generation features, speed, thinking, and realtime streaming |
| gemini-2.0-flash-lite | 1M tokens | Cost efficiency and low latency |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
</Accordion>

<Accordion title="Azure">
@@ -68,7 +68,13 @@ We'll create a CrewAI application where two agents collaborate to research and w
```python
from crewai import Agent, Crew, Process, Task
from crewai_tools import SerperDevTool
from openinference.instrumentation.crewai import CrewAIInstrumentor
from phoenix.otel import register

# setup monitoring for your crew
tracer_provider = register(
    endpoint="http://localhost:6006/v1/traces")
CrewAIInstrumentor().instrument(skip_dep_check=True, tracer_provider=tracer_provider)
search_tool = SerperDevTool()

# Define your agents with roles and goals
@@ -1,6 +1,6 @@
[project]
name = "crewai"
version = "0.118.0"
version = "0.119.0"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md"
requires-python = ">=3.10,<3.13"
@@ -45,7 +45,7 @@ Documentation = "https://docs.crewai.com"
Repository = "https://github.com/crewAIInc/crewAI"

[project.optional-dependencies]
tools = ["crewai-tools~=0.42.2"]
tools = ["crewai-tools~=0.44.0"]
embeddings = [
    "tiktoken~=0.7.0"
]
@@ -17,7 +17,7 @@ warnings.filterwarnings(
    category=UserWarning,
    module="pydantic.main",
)
__version__ = "0.118.0"
__version__ = "0.119.0"
__all__ = [
    "Agent",
    "Crew",
@@ -173,6 +173,7 @@ class Agent(BaseAgent):
                collection_name=self.role,
                storage=self.knowledge_storage or None,
            )
            self.knowledge.add_sources()
        except (TypeError, ValueError) as e:
            raise ValueError(f"Invalid Knowledge Configuration: {str(e)}")
@@ -316,6 +317,45 @@ class Agent(BaseAgent):
                        error=str(e),
                    ),
                )
        elif self.crew and hasattr(self.crew, "knowledge") and self.crew.knowledge:
            crewai_event_bus.emit(
                self,
                event=KnowledgeRetrievalStartedEvent(
                    agent=self,
                ),
            )
            try:
                self.knowledge_search_query = self._get_knowledge_search_query(
                    task_prompt
                )
                if self.knowledge_search_query:
                    knowledge_snippets = self.crew.query_knowledge(
                        [self.knowledge_search_query], **knowledge_config
                    )
                    if knowledge_snippets:
                        self.crew_knowledge_context = extract_knowledge_context(
                            knowledge_snippets
                        )
                        if self.crew_knowledge_context:
                            task_prompt += self.crew_knowledge_context

                crewai_event_bus.emit(
                    self,
                    event=KnowledgeRetrievalCompletedEvent(
                        query=self.knowledge_search_query,
                        agent=self,
                        retrieved_knowledge=self.crew_knowledge_context or "",
                    ),
                )
            except Exception as e:
                crewai_event_bus.emit(
                    self,
                    event=KnowledgeSearchQueryFailedEvent(
                        query=self.knowledge_search_query or "",
                        agent=self,
                        error=str(e),
                    ),
                )

        tools = tools or self.tools or []
        self.create_agent_executor(tools=tools, task=task)
@@ -13,7 +13,7 @@ ENV_VARS = {
    ],
    "gemini": [
        {
            "prompt": "Enter your GEMINI API key (press Enter to skip)",
            "prompt": "Enter your GEMINI API key from https://ai.dev/apikey (press Enter to skip)",
            "key_name": "GEMINI_API_KEY",
        }
    ],
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13"
dependencies = [
    "crewai[tools]>=0.118.0,<1.0.0"
    "crewai[tools]>=0.119.0,<1.0.0"
]

[project.scripts]

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13"
dependencies = [
    "crewai[tools]>=0.118.0,<1.0.0",
    "crewai[tools]>=0.119.0,<1.0.0",
]

[project.scripts]

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.13"
dependencies = [
    "crewai[tools]>=0.118.0"
    "crewai[tools]>=0.119.0"
]

[tool.crewai]
@@ -9,29 +9,8 @@ from crewai.memory.storage.interface import Storage
class Mem0Storage(Storage):
    """
    Extends Storage to handle embedding and searching across entities using Mem0.

    Supports Mem0 v2 API with run_id for associating memories with specific conversation
    sessions. By default, uses v2 API which is recommended for better context management.

    Args:
        type: The type of memory storage ("user", "short_term", "long_term", "entities", "external")
        crew: The crew instance this storage is associated with
        config: Optional configuration dictionary that overrides crew.memory_config

    Configuration options:
        version: API version to use ("v1.1" or "v2", defaults to "v2")
        run_id: Optional session identifier for associating memories with specific conversations
        api_key: Mem0 API key (defaults to MEM0_API_KEY environment variable)
        user_id: User identifier (required for "user" memory type)
        org_id: Optional organization ID for Mem0 API
        project_id: Optional project ID for Mem0 API
        local_mem0_config: Optional configuration for local Mem0 instance
    """
    SUPPORTED_VERSIONS = ["v1.1", "v2"]

    DEFAULT_VERSION = "v2"

    def __init__(self, type, crew=None, config=None):
        super().__init__()
        supported_types = ["user", "short_term", "long_term", "entities", "external"]
@@ -46,12 +25,6 @@ class Mem0Storage(Storage):
        self.config = config or {}
        # TODO: Memory config will be removed in the future the config will be passed as a parameter
        self.memory_config = self.config or getattr(crew, "memory_config", {}) or {}

        config = self._get_config()
        self.version = config.get("version", self.DEFAULT_VERSION)
        self.run_id = config.get("run_id")

        self._validate_config()

        # User ID is required for user memory type "user" since it's used as a unique identifier for the user.
        user_id = self._get_user_id()
@@ -79,70 +52,16 @@ class Mem0Storage(Storage):
        else:
            self.memory = Memory()

    def _validate_config(self) -> None:
        """
        Validate configuration parameters.

        Raises:
            ValueError: If the version is not supported
        """
        if self.version not in self.SUPPORTED_VERSIONS:
            raise ValueError(
                f"Unsupported version: {self.version}. "
                f"Please use one of: {', '.join(self.SUPPORTED_VERSIONS)}"
            )

        if self.run_id is not None and not isinstance(self.run_id, str):
            raise ValueError("run_id must be a string")

    def _build_params(self, base_params: Dict[str, Any], method: str = "add") -> Dict[str, Any]:
        """
        Centralize parameter building for API calls.

        Args:
            base_params: Base parameters to build upon
            method: The method being called ("add" or "search")

        Returns:
            Dict[str, Any]: Complete parameters for API call
        """
        params = base_params.copy()

        # Add version and run_id for MemoryClient
        if isinstance(self.memory, MemoryClient):
            params["version"] = self.version

            if self.run_id:
                params["run_id"] = self.run_id
        elif isinstance(self.memory, Memory) and method == "search" and "metadata" in params:
            del params["metadata"]

        return params

    def _sanitize_role(self, role: str) -> str:
        """
        Sanitizes agent roles to ensure valid directory names.

        Args:
            role: The role name to sanitize

        Returns:
            str: Sanitized role name
        """
        return role.replace("\n", "").replace(" ", "_").replace("/", "_")
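The sanitization rule above is self-contained and easy to check on its own. The sketch below restates it outside the class (the role string is a made-up example):

```python
def sanitize_role(role: str) -> str:
    # Mirror of _sanitize_role above: strip newlines, then replace spaces
    # and slashes with underscores so the role is a valid directory name.
    return role.replace("\n", "").replace(" ", "_").replace("/", "_")

print(sanitize_role("Senior QA/Test Agent\n"))  # -> Senior_QA_Test_Agent
```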
    def save(self, value: Any, metadata: Dict[str, Any]) -> None:
        """
        Save a memory item.

        Args:
            value: The memory content to save
            metadata: Additional metadata for the memory
        """
        user_id = self._get_user_id()
        agent_name = self._get_agent_name()
        params = None

        if self.memory_type == "short_term":
            params = {
                "agent_id": agent_name,
@@ -169,7 +88,8 @@ class Mem0Storage(Storage):
            }

        if params:
            params = self._build_params(params, method="add")
            if isinstance(self.memory, MemoryClient):
                params["output_format"] = "v1.1"
            self.memory.add(value, **params)

    def search(
@@ -178,61 +98,36 @@ class Mem0Storage(Storage):
        limit: int = 3,
        score_threshold: float = 0.35,
    ) -> List[Any]:
        """
        Search for memories.

        Args:
            query: The search query
            limit: Maximum number of results to return
            score_threshold: Minimum score for results to be included

        Returns:
            List[Any]: List of memory items that match the query
        """
        base_params = {"query": query, "limit": limit}
        params = {"query": query, "limit": limit, "output_format": "v1.1"}
        if user_id := self._get_user_id():
            base_params["user_id"] = user_id
            params["user_id"] = user_id

        agent_name = self._get_agent_name()
        if self.memory_type == "short_term":
            base_params["agent_id"] = agent_name
            base_params["metadata"] = {"type": "short_term"}
            params["agent_id"] = agent_name
            params["metadata"] = {"type": "short_term"}
        elif self.memory_type == "long_term":
            base_params["agent_id"] = agent_name
            base_params["metadata"] = {"type": "long_term"}
            params["agent_id"] = agent_name
            params["metadata"] = {"type": "long_term"}
        elif self.memory_type == "entities":
            base_params["agent_id"] = agent_name
            base_params["metadata"] = {"type": "entity"}
            params["agent_id"] = agent_name
            params["metadata"] = {"type": "entity"}
        elif self.memory_type == "external":
            base_params["agent_id"] = agent_name
            base_params["metadata"] = {"type": "external"}
            params["agent_id"] = agent_name
            params["metadata"] = {"type": "external"}

        params = self._build_params(base_params, method="search")
        # Discard the filters for now since we create the filters
        # automatically when the crew is created.
        if isinstance(self.memory, Memory):
            del params["metadata"], params["output_format"]

        results = self.memory.search(**params)

        if isinstance(results, dict) and "results" in results:
            return [r for r in results["results"] if r["score"] >= score_threshold]
        elif isinstance(results, list):
            return [r for r in results if r["score"] >= score_threshold]
        else:
            return []
        return [r for r in results["results"] if r["score"] >= score_threshold]
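The result-shape handling this hunk touches (dict with a `"results"` key, bare list, or anything else) can be sketched standalone; the threshold semantics match the `score_threshold` parameter above. This is an illustrative restatement, not the library's code:

```python
from typing import Any, List

def filter_results(results: Any, score_threshold: float) -> List[Any]:
    # Mem0 may return {"results": [...]}, a bare list, or something
    # unexpected; keep only hits at or above the score threshold.
    if isinstance(results, dict) and "results" in results:
        items = results["results"]
    elif isinstance(results, list):
        items = results
    else:
        return []
    return [r for r in items if r["score"] >= score_threshold]

print(filter_results({"results": [{"score": 0.9}, {"score": 0.2}]}, 0.35))  # -> [{'score': 0.9}]
```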
    def _get_user_id(self) -> str:
        """
        Get the user ID from configuration.

        Returns:
            str: User ID or empty string if not found
        """
        return self._get_config().get("user_id", "")

    def _get_agent_name(self) -> str:
        """
        Get the agent name from the crew.

        Returns:
            str: Agent name or empty string if not found
        """
        if not self.crew:
            return ""

@@ -242,17 +137,8 @@ class Mem0Storage(Storage):
        return agents

    def _get_config(self) -> Dict[str, Any]:
        """
        Get the configuration from either config or memory_config.

        Returns:
            Dict[str, Any]: Configuration dictionary
        """
        return self.config or getattr(self, "memory_config", {}).get("config", {}) or {}

    def reset(self) -> None:
        """
        Reset the memory.
        """
    def reset(self):
        if self.memory:
            self.memory.reset()
@@ -195,7 +195,7 @@ def test_save_method_with_memory_client(mem0_storage_with_memory_client_using_co
        agent_id="Test_Agent",
        infer=False,
        metadata={"type": "short_term", "key": "value"},
        version="v2"
        output_format="v1.1"
    )

@@ -232,7 +232,7 @@ def test_search_method_with_memory_client(mem0_storage_with_memory_client_using_
        agent_id="Test_Agent",
        metadata={"type": "short_term"},
        user_id="test_user",
        version='v2'
        output_format='v1.1'
    )

    assert len(results) == 1
@@ -1,290 +0,0 @@
import os
from unittest.mock import MagicMock, patch

import pytest
from mem0.client.main import MemoryClient
from mem0.memory.main import Memory

from crewai.agent import Agent
from crewai.crew import Crew
from crewai.memory.storage.mem0_storage import Mem0Storage
from crewai.task import Task


class MockCrew:
    def __init__(self, memory_config):
        self.memory_config = memory_config
        self.agents = [MagicMock(role="Test Agent")]


@pytest.fixture
def mock_mem0_memory_client():
    """Fixture to create a mock MemoryClient instance"""
    mock_memory = MagicMock(spec=MemoryClient)
    return mock_memory


@pytest.fixture
def mem0_storage_with_v2_api(mock_mem0_memory_client):
    """Fixture to create a Mem0Storage instance with v2 API configuration"""

    with patch.object(MemoryClient, "__new__", return_value=mock_mem0_memory_client):
        crew = MockCrew(
            memory_config={
                "provider": "mem0",
                "config": {
                    "user_id": "test_user",
                    "api_key": "ABCDEFGH",
                    "version": "v2",  # Explicitly set to v2
                },
            }
        )

        mem0_storage = Mem0Storage(type="short_term", crew=crew)
        return mem0_storage, mock_mem0_memory_client


@pytest.fixture
def mem0_storage_with_run_id(mock_mem0_memory_client):
    """Fixture to create a Mem0Storage instance with run_id configuration"""

    with patch.object(MemoryClient, "__new__", return_value=mock_mem0_memory_client):
        crew = MockCrew(
            memory_config={
                "provider": "mem0",
                "config": {
                    "user_id": "test_user",
                    "api_key": "ABCDEFGH",
                    "version": "v2",
                    "run_id": "test-session-123",  # Set run_id
                },
            }
        )

        mem0_storage = Mem0Storage(type="short_term", crew=crew)
        return mem0_storage, mock_mem0_memory_client


@pytest.fixture
def mem0_storage_with_v1_api(mock_mem0_memory_client):
    """Fixture to create a Mem0Storage instance with v1.1 API configuration"""

    with patch.object(MemoryClient, "__new__", return_value=mock_mem0_memory_client):
        crew = MockCrew(
            memory_config={
                "provider": "mem0",
                "config": {
                    "user_id": "test_user",
                    "api_key": "ABCDEFGH",
                    "version": "v1.1",  # Explicitly set to v1.1
                },
            }
        )

        mem0_storage = Mem0Storage(type="short_term", crew=crew)
        return mem0_storage, mock_mem0_memory_client


@pytest.mark.v2_api
def test_mem0_storage_v2_initialization(mem0_storage_with_v2_api):
    """Test that Mem0Storage initializes correctly with v2 API configuration"""
    mem0_storage, _ = mem0_storage_with_v2_api

    assert mem0_storage.version == "v2"
    assert mem0_storage.run_id is None


@pytest.mark.v2_api
def test_mem0_storage_with_run_id_initialization(mem0_storage_with_run_id):
    """Test that Mem0Storage initializes correctly with run_id configuration"""
    mem0_storage, _ = mem0_storage_with_run_id

    assert mem0_storage.version == "v2"
    assert mem0_storage.run_id == "test-session-123"


@pytest.mark.v1_api
def test_mem0_storage_v1_initialization(mem0_storage_with_v1_api):
    """Test that Mem0Storage initializes correctly with v1.1 API configuration"""
    mem0_storage, _ = mem0_storage_with_v1_api

    assert mem0_storage.version == "v1.1"
    assert mem0_storage.run_id is None


@pytest.mark.v2_api
def test_save_method_with_v2_api(mem0_storage_with_v2_api):
    """Test save method with v2 API"""
    mem0_storage, mock_memory_client = mem0_storage_with_v2_api
    mock_memory_client.add = MagicMock()

    test_value = "This is a test memory"
    test_metadata = {"key": "value"}

    mem0_storage.save(test_value, test_metadata)

    mock_memory_client.add.assert_called_once()
    call_args = mock_memory_client.add.call_args[1]

    assert call_args["version"] == "v2"
    assert "run_id" not in call_args
    assert call_args["agent_id"] == "Test_Agent"
    assert call_args["metadata"] == {"type": "short_term", "key": "value"}


@pytest.mark.v2_api
def test_save_method_with_run_id(mem0_storage_with_run_id):
    """Test save method with run_id"""
    mem0_storage, mock_memory_client = mem0_storage_with_run_id
    mock_memory_client.add = MagicMock()

    test_value = "This is a test memory"
    test_metadata = {"key": "value"}

    mem0_storage.save(test_value, test_metadata)

    mock_memory_client.add.assert_called_once()
    call_args = mock_memory_client.add.call_args[1]

    assert call_args["version"] == "v2"
    assert call_args["run_id"] == "test-session-123"
    assert call_args["agent_id"] == "Test_Agent"
    assert call_args["metadata"] == {"type": "short_term", "key": "value"}


@pytest.mark.v2_api
def test_search_method_with_v2_api(mem0_storage_with_v2_api):
    """Test search method with v2 API"""
    mem0_storage, mock_memory_client = mem0_storage_with_v2_api
    mock_results = {"results": [{"score": 0.9, "content": "Result 1"}, {"score": 0.4, "content": "Result 2"}]}
    mock_memory_client.search = MagicMock(return_value=mock_results)

    results = mem0_storage.search("test query", limit=5, score_threshold=0.5)

    mock_memory_client.search.assert_called_once()
    call_args = mock_memory_client.search.call_args[1]

    assert call_args["version"] == "v2"
    assert "run_id" not in call_args
    assert call_args["query"] == "test query"
    assert call_args["limit"] == 5

    assert len(results) == 1
    assert results[0]["content"] == "Result 1"


@pytest.mark.v2_api
def test_search_method_with_run_id(mem0_storage_with_run_id):
    """Test search method with run_id"""
    mem0_storage, mock_memory_client = mem0_storage_with_run_id
    mock_results = {"results": [{"score": 0.9, "content": "Result 1"}, {"score": 0.4, "content": "Result 2"}]}
    mock_memory_client.search = MagicMock(return_value=mock_results)

    results = mem0_storage.search("test query", limit=5, score_threshold=0.5)

    mock_memory_client.search.assert_called_once()
    call_args = mock_memory_client.search.call_args[1]

    assert call_args["version"] == "v2"
    assert call_args["run_id"] == "test-session-123"
    assert call_args["query"] == "test query"
    assert call_args["limit"] == 5

    assert len(results) == 1
    assert results[0]["content"] == "Result 1"


@pytest.mark.v2_api
def test_search_method_with_different_result_formats(mem0_storage_with_v2_api):
    """Test search method with different result formats"""
    mem0_storage, mock_memory_client = mem0_storage_with_v2_api

    mock_results_dict = {"results": [{"score": 0.9, "content": "Result 1"}, {"score": 0.4, "content": "Result 2"}]}
    mock_memory_client.search = MagicMock(return_value=mock_results_dict)

    results = mem0_storage.search("test query", limit=5, score_threshold=0.5)
    assert len(results) == 1
    assert results[0]["content"] == "Result 1"

    mock_results_list = [{"score": 0.9, "content": "Result 3"}, {"score": 0.4, "content": "Result 4"}]
    mock_memory_client.search = MagicMock(return_value=mock_results_list)

    results = mem0_storage.search("test query", limit=5, score_threshold=0.5)
    assert len(results) == 1
    assert results[0]["content"] == "Result 3"

    mock_memory_client.search = MagicMock(return_value="unexpected format")

    results = mem0_storage.search("test query", limit=5, score_threshold=0.5)
    assert len(results) == 0


@pytest.mark.parametrize("run_id", [None, "", "test-123", "a" * 256])
@pytest.mark.v2_api
def test_run_id_edge_cases(mock_mem0_memory_client, run_id):
    """Test edge cases for run_id parameter"""
    with patch.object(MemoryClient, "__new__", return_value=mock_mem0_memory_client):
        crew = MockCrew(
            memory_config={
                "provider": "mem0",
                "config": {
                    "user_id": "test_user",
                    "api_key": "ABCDEFGH",
                    "version": "v2",
                    "run_id": run_id,
                },
            }
        )

        if run_id == "":
            mem0_storage = Mem0Storage(type="short_term", crew=crew)
            assert mem0_storage.run_id == ""

            mock_mem0_memory_client.add = MagicMock()
            mem0_storage.save("test", {})
            assert "run_id" not in mock_mem0_memory_client.add.call_args[1]
        else:
            mem0_storage = Mem0Storage(type="short_term", crew=crew)
            assert mem0_storage.run_id == run_id

            if run_id is not None:
                mock_mem0_memory_client.add = MagicMock()
                mem0_storage.save("test", {})
                assert mock_mem0_memory_client.add.call_args[1].get("run_id") == run_id


def test_invalid_version_handling(mock_mem0_memory_client):
    """Test handling of invalid version"""
    with patch.object(MemoryClient, "__new__", return_value=mock_mem0_memory_client):
        crew = MockCrew(
            memory_config={
                "provider": "mem0",
                "config": {
                    "user_id": "test_user",
                    "api_key": "ABCDEFGH",
                    "version": "invalid",
                },
            }
        )

        with pytest.raises(ValueError, match="Unsupported version"):
            Mem0Storage(type="short_term", crew=crew)


def test_invalid_run_id_type(mock_mem0_memory_client):
    """Test handling of invalid run_id type"""
    with patch.object(MemoryClient, "__new__", return_value=mock_mem0_memory_client):
        crew = MockCrew(
            memory_config={
                "provider": "mem0",
                "config": {
                    "user_id": "test_user",
                    "api_key": "ABCDEFGH",
                    "version": "v2",
                    "run_id": 123,  # Not a string
                },
            }
        )

        with pytest.raises(ValueError, match="run_id must be a string"):
            Mem0Storage(type="short_term", crew=crew)
46 tests/test_agent_without_knowledge_uses_crew_knowledge.py Normal file
@@ -0,0 +1,46 @@
|
||||
from unittest.mock import MagicMock, patch

import pytest

from crewai import LLM, Agent, Crew, Process, Task
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource


def test_agent_without_knowledge_uses_crew_knowledge():
    """Test that an agent without knowledge sources can use crew knowledge sources."""
    content = "John is 30 years old and lives in San Francisco."
    string_source = StringKnowledgeSource(content=content)

    agent = Agent(
        role="Information Agent",
        goal="Provide information based on knowledge sources",
        backstory="You have access to specific knowledge sources.",
        llm=LLM(model="gpt-4o-mini"),
    )

    task = Task(
        description="How old is John and where does he live?",
        expected_output="John's age and location.",
        agent=agent,
    )

    with patch('crewai.knowledge.knowledge.Knowledge.query', return_value=[{"context": content}]) as mock_query:
        crew = Crew(
            agents=[agent],
            tasks=[task],
            process=Process.sequential,
            knowledge_sources=[string_source],
        )

        agent.crew = crew

        with patch.object(Agent, '_get_knowledge_search_query', return_value="test query"):
            with patch.object(Agent, '_execute_without_timeout', return_value="John is 30 years old and lives in San Francisco."):
                result = agent.execute_task(task)

        assert mock_query.called

        assert hasattr(agent, 'crew_knowledge_context')

        assert "John" in result
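The test above verifies a fallback: an agent with no knowledge sources of its own should query the crew's knowledge instead. A minimal, self-contained sketch of that fallback logic (the `FakeKnowledge` class and `gather_knowledge_context` helper are hypothetical illustrations, not the actual Agent implementation):

```python
# Hypothetical sketch of the agent-to-crew knowledge fallback the test
# exercises. Names here are illustrative, not CrewAI's real internals.

class FakeKnowledge:
    """Stand-in knowledge store returning every snippet for any query."""

    def __init__(self, docs):
        self.docs = docs

    def query(self, query):
        # Naive retrieval: return all stored snippets as context dicts.
        return [{"context": d} for d in self.docs]


def gather_knowledge_context(agent_knowledge, crew_knowledge, query):
    """Prefer the agent's own knowledge; fall back to the crew's if absent."""
    knowledge = agent_knowledge or crew_knowledge
    if knowledge is None:
        return ""
    results = knowledge.query(query)
    return "\n".join(r["context"] for r in results)
```

With this shape, an agent constructed without knowledge sources still receives the crew-level context in its prompt, which is exactly what the `mock_query.called` and `crew_knowledge_context` assertions confirm.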
10  uv.lock  generated
@@ -738,7 +738,7 @@ wheels = [

 [[package]]
 name = "crewai"
-version = "0.118.0"
+version = "0.119.0"
 source = { editable = "." }
 dependencies = [
     { name = "appdirs" },
@@ -828,7 +828,7 @@ requires-dist = [
     { name = "blinker", specifier = ">=1.9.0" },
     { name = "chromadb", specifier = ">=0.5.23" },
     { name = "click", specifier = ">=8.1.7" },
-    { name = "crewai-tools", marker = "extra == 'tools'", specifier = "~=0.42.2" },
+    { name = "crewai-tools", marker = "extra == 'tools'", specifier = "~=0.44.0" },
     { name = "docling", marker = "extra == 'docling'", specifier = ">=2.12.0" },
     { name = "fastembed", marker = "extra == 'fastembed'", specifier = ">=0.4.1" },
     { name = "instructor", specifier = ">=1.3.3" },
@@ -879,7 +879,7 @@ dev = [

 [[package]]
 name = "crewai-tools"
-version = "0.42.2"
+version = "0.44.0"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "chromadb" },
@@ -894,9 +894,9 @@ dependencies = [
     { name = "pytube" },
     { name = "requests" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/17/34/9e63e2db53d8f5c30353f271a3240687a48e55204bbd176a057c0b7658c8/crewai_tools-0.42.2.tar.gz", hash = "sha256:69365ffb168cccfea970e09b308905aa5007cfec60024d731ffac1362a0153c0", size = 754967 }
+sdist = { url = "https://files.pythonhosted.org/packages/b8/1f/2977dc72628c1225bf5788ae22a65e5a53df384d19b197646d2c4760684e/crewai_tools-0.44.0.tar.gz", hash = "sha256:44e0c26079396503a326efdd9ff34bf369d410cbf95c362cc523db65b18f3c3a", size = 892004 }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/4e/43/0f70b95350084e5cb1e1d74e9acb9e18a89ba675b1d579c787c2662baba7/crewai_tools-0.42.2-py3-none-any.whl", hash = "sha256:13727fb68f0efefd21edeb281be3d66ff2f5a3b5029d4e6adef388b11fd5846a", size = 583933 },
+    { url = "https://files.pythonhosted.org/packages/ba/80/b91aa837d06edbb472445ea3c92d7619518894fd3049d480e5fffbf0c21b/crewai_tools-0.44.0-py3-none-any.whl", hash = "sha256:119e2365fe66ee16e18a5e8e222994b19f76bafcc8c1bb87f61609c1e39b2463", size = 583462 },
 ]

 [[package]]