Compare commits

..

2 Commits

Author | SHA1 | Message | Date
Devin AI | eee7439610 | Fix lint: Sort imports in test_llm.py (Co-Authored-By: Joe Moura <joao@crewai.com>) | 2025-05-01 19:59:44 +00:00
Devin AI | a0a536e737 | Fix issue #2738: Exclude stop parameter for o3 model (Co-Authored-By: Joe Moura <joao@crewai.com>) | 2025-05-01 19:57:17 +00:00
25 changed files with 266 additions and 838 deletions

View File

@@ -27,19 +27,23 @@ Large Language Models (LLMs) are the core intelligence behind CrewAI agents. The
</Card>
</CardGroup>
## Setting up your LLM
## Setting Up Your LLM
There are different places in CrewAI code where you can specify the model to use. Once you specify the model you are using, you will need to provide the configuration (like an API key) for each of the model providers you use. See the [provider configuration examples](#provider-configuration-examples) section for your provider.
There are three ways to configure LLMs in CrewAI. Choose the method that best fits your workflow:
<Tabs>
<Tab title="1. Environment Variables">
The simplest way to get started. Set the model in your environment directly, through an `.env` file or in your app code. If you used `crewai create` to bootstrap your project, it will be set already.
The simplest way to get started. Set these variables in your environment:
```bash .env
MODEL=model-id # e.g. gpt-4o, gemini-2.0-flash, claude-3-sonnet-...
```bash
# Required: Your API key for authentication
OPENAI_API_KEY=<your-api-key>
# Be sure to set your API keys here too. See the Provider
# section below.
# Optional: Default model selection
OPENAI_MODEL_NAME=gpt-4o-mini # Default if not set
# Optional: Organization ID (if applicable)
OPENAI_ORGANIZATION_ID=<your-org-id>
```
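Once those variables are exported, the model can be referenced from code without hard-coding credentials. A minimal sketch, assuming the standard `LLM` and `Agent` constructors (the model id and agent fields are illustrative):
```python
from crewai import Agent, LLM

# Reads OPENAI_API_KEY from the environment; no key is passed in code
llm = LLM(model="gpt-4o-mini")

researcher = Agent(
    role="Research Specialist",
    goal="Conduct comprehensive research and analysis",
    backstory="A dedicated research professional with years of experience",
    llm=llm,
)
```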
<Warning>
@@ -49,13 +53,13 @@ There are different places in CrewAI code where you can specify the model to use
<Tab title="2. YAML Configuration">
Create a YAML file to define your agent configurations. This method is great for version control and team collaboration:
```yaml agents.yaml {6}
```yaml
researcher:
role: Research Specialist
goal: Conduct comprehensive research and analysis
backstory: A dedicated research professional with years of experience
verbose: true
llm: provider/model-id # e.g. openai/gpt-4o, google/gemini-2.0-flash, anthropic/claude...
llm: openai/gpt-4o-mini # your model here
# (see provider configuration examples below for more)
```
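A minimal sketch of how this YAML is typically consumed, assuming the project layout produced by `crewai create` and the `@CrewBase` decorator:
```python
from crewai import Agent
from crewai.project import CrewBase, agent

@CrewBase
class ResearchCrew:
    # Conventional location in a `crewai create` project
    agents_config = "config/agents.yaml"

    @agent
    def researcher(self) -> Agent:
        # The `llm` field from agents.yaml is applied automatically
        return Agent(config=self.agents_config["researcher"])
```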
@@ -70,23 +74,23 @@ There are different places in CrewAI code where you can specify the model to use
<Tab title="3. Direct Code">
For maximum flexibility, configure LLMs directly in your Python code:
```python {4,8}
```python
from crewai import LLM
# Basic configuration
llm = LLM(model="model-id-here") # gpt-4o, gemini-2.0-flash, anthropic/claude...
llm = LLM(model="gpt-4")
# Advanced configuration with detailed parameters
llm = LLM(
model="model-id-here", # gpt-4o, gemini-2.0-flash, anthropic/claude...
model="gpt-4o-mini",
temperature=0.7, # Higher for more creative outputs
timeout=120, # Seconds to wait for response
max_tokens=4000, # Maximum length of response
top_p=0.9, # Nucleus sampling parameter
frequency_penalty=0.1 , # Reduce repetition
presence_penalty=0.1, # Encourage topic diversity
timeout=120, # Seconds to wait for response
max_tokens=4000, # Maximum length of response
top_p=0.9, # Nucleus sampling parameter
frequency_penalty=0.1, # Reduce repetition
presence_penalty=0.1, # Encourage topic diversity
response_format={"type": "json"}, # For structured outputs
seed=42 # For reproducible results
seed=42 # For reproducible results
)
```
@@ -106,6 +110,7 @@ There are different places in CrewAI code where you can specify the model to use
## Provider Configuration Examples
CrewAI supports a multitude of LLM providers, each offering unique features, authentication methods, and model capabilities.
In this section, you'll find detailed examples that help you select, configure, and optimize the LLM that best fits your project's needs.
@@ -378,7 +383,7 @@ In this section, you'll find detailed examples that help you select, configure,
| microsoft/phi-3-medium-4k-instruct | 4,096 tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-medium-128k-instruct | 128K tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3.5-mini-instruct | 128K tokens | Lightweight multilingual LLM powering AI applications in latency bound, memory/compute constrained environments |
| microsoft/phi-3.5-moe-instruct | 128K tokens | Advanced LLM based on Mixture of Experts architecture to deliver compute efficient content generation |
| microsoft/phi-3.5-moe-instruct | 128K tokens | Advanced LLM based on Mixture of Experts architecure to deliver compute efficient content generation |
| microsoft/kosmos-2 | 1,024 tokens | Groundbreaking multimodal model designed to understand and reason about visual elements in images. |
| microsoft/phi-3-vision-128k-instruct | 128k tokens | Cutting-edge open multimodal model excelling in high-quality reasoning from images. |
| microsoft/phi-3.5-vision-instruct | 128k tokens | Cutting-edge open multimodal model excelling in high-quality reasoning from images. |
@@ -402,19 +407,19 @@ In this section, you'll find detailed examples that help you select, configure,
</Accordion>
<Accordion title="Local NVIDIA NIM Deployed using WSL2">
NVIDIA NIM enables you to run powerful LLMs locally on your Windows machine using WSL2 (Windows Subsystem for Linux).
This approach allows you to leverage your NVIDIA GPU for private, secure, and cost-effective AI inference without relying on cloud services.
NVIDIA NIM enables you to run powerful LLMs locally on your Windows machine using WSL2 (Windows Subsystem for Linux).
This approach allows you to leverage your NVIDIA GPU for private, secure, and cost-effective AI inference without relying on cloud services.
Perfect for development, testing, or production scenarios where data privacy or offline capabilities are required.
Here is a step-by-step guide to setting up a local NVIDIA NIM model:
1. Follow installation instructions from [NVIDIA Website](https://docs.nvidia.com/nim/wsl2/latest/getting-started.html)
2. Install the local model. For Llama 3.1-8b follow [instructions](https://build.nvidia.com/meta/llama-3_1-8b-instruct/deploy)
3. Configure your crewai local models:
```python Code
from crewai.llm import LLM
@@ -436,7 +441,7 @@ In this section, you'll find detailed examples that help you select, configure,
config=self.agents_config['researcher'], # type: ignore[index]
llm=local_nvidia_nim_llm
)
# ...
```
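The elided lines above define `local_nvidia_nim_llm`. A sketch of such a configuration, where the endpoint and model name are assumptions to adjust for your own NIM deployment:
```python
from crewai.llm import LLM

local_nvidia_nim_llm = LLM(
    model="openai/meta/llama-3.1-8b-instruct",  # served via NIM's OpenAI-compatible API
    base_url="http://localhost:8000/v1",        # default local NIM endpoint; change if needed
    api_key="not-needed",                       # local deployments typically ignore the key
)
```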
</Accordion>
@@ -632,19 +637,19 @@ CrewAI supports streaming responses from LLMs, allowing your application to rece
When streaming is enabled, responses are delivered in chunks as they're generated, creating a more responsive user experience.
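Enabling it is a one-line change on the `LLM` instance; this sketch assumes the `stream` flag available in recent CrewAI releases:
```python
from crewai import LLM

# Chunks are emitted as LLMStreamChunkEvent while the response is generated
llm = LLM(model="gpt-4o", stream=True)
```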
</Tab>
<Tab title="Event Handling">
CrewAI emits events for each chunk received during streaming:
```python
from crewai import LLM
from crewai.utilities.events import EventHandler, LLMStreamChunkEvent
class MyEventHandler(EventHandler):
def on_llm_stream_chunk(self, event: LLMStreamChunkEvent):
# Process each chunk as it arrives
print(f"Received chunk: {event.chunk}")
# Register the event handler
from crewai.utilities.events import crewai_event_bus
crewai_event_bus.register_handler(MyEventHandler())
@@ -780,7 +785,7 @@ Learn how to get the most out of your LLM configuration:
<Tip>
Use larger context models for extensive tasks
</Tip>
```python
# Large context model
llm = LLM(model="openai/gpt-4o") # 128K tokens

View File

@@ -71,10 +71,6 @@ If you haven't installed `uv` yet, follow **step 1** to quickly get it set up on
```
</Warning>
<Warning>
If you encounter the `chroma-hnswlib==0.7.6` build error (`fatal error C1083: Cannot open include file: 'float.h'`) on Windows, install [Visual Studio Build Tools](https://visualstudio.microsoft.com/downloads/) with *Desktop development with C++*.
</Warning>
- To verify that `crewai` is installed, run:
```shell
uv tool list

View File

@@ -22,7 +22,7 @@ streamlining the process of finding specific information within large document c
Install the crewai_tools package by running the following command in your terminal:
```shell
uv pip install docx2txt 'crewai[tools]'
pip install 'crewai[tools]'
```
## Example
@@ -76,4 +76,4 @@ tool = DOCXSearchTool(
),
)
)
```
```
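For comparison with the customized configuration above, a minimal sketch that scopes the search to a single document (the path is illustrative):
```python
from crewai_tools import DOCXSearchTool

# Semantic search restricted to one DOCX file
tool = DOCXSearchTool(docx="path/to/your/document.docx")
```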

View File

@@ -143,30 +143,12 @@ config = {
"config": {
"model": "text-embedding-ada-002"
}
},
"vectordb": {
"provider": "elasticsearch",
"config": {
"collection_name": "my-collection",
"cloud_id": "deployment-name:xxxx",
"api_key": "your-key",
"verify_certs": False
}
},
"chunker": {
"chunk_size": 400,
"chunk_overlap": 100,
"length_function": "len",
"min_chunk_size": 0
}
}
rag_tool = RagTool(config=config, summarize=True)
```
The internal RAG tool utilizes the Embedchain adapter, allowing you to pass any configuration options that are supported by Embedchain.
You can refer to the [Embedchain documentation](https://docs.embedchain.ai/components/introduction) for details.
Make sure to review the configuration options available in the .yaml file.
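As a quick-start sketch with the default configuration, where the source path and data type are illustrative:
```python
from crewai_tools import RagTool

rag_tool = RagTool(summarize=True)
# Ingest a source; the tool can then answer queries about its contents
rag_tool.add("./docs/handbook.pdf", data_type="pdf_file")
```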
## Conclusion
The `RagTool` provides a powerful way to create and query knowledge bases from various data sources. By leveraging Retrieval-Augmented Generation, it enables agents to access and retrieve relevant information efficiently, enhancing their ability to provide accurate and contextually appropriate responses.

View File

@@ -11,7 +11,7 @@ dependencies = [
# Core Dependencies
"pydantic>=2.4.2",
"openai>=1.13.3",
"litellm==1.68.0",
"litellm==1.67.1",
"instructor>=1.3.3",
# Text Processing
"pdfplumber>=0.11.4",

View File

@@ -4,7 +4,7 @@ import click
# Be mindful about changing this.
# on some environments we don't use this command but instead uv sync directly
# on some enviorments we don't use this command but instead uv sync directly
# so if you expect this to support more things you will need to replicate it there
# ask @joaomdmoura if you are unsure
def install_crew(proxy_options: list[str]) -> None:

View File

@@ -2,7 +2,7 @@ import subprocess
import click
from crewai.cli.utils import get_crews
from crewai.cli.utils import get_crew
def reset_memories_command(
@@ -26,47 +26,35 @@ def reset_memories_command(
"""
try:
if not any([long, short, entity, kickoff_outputs, knowledge, all]):
crew = get_crew()
if not crew:
raise ValueError("No crew found.")
if all:
crew.reset_memories(command_type="all")
click.echo("All memories have been reset.")
return
if not any([long, short, entity, kickoff_outputs, knowledge]):
click.echo(
"No memory type specified. Please specify at least one type to reset."
)
return
crews = get_crews()
if not crews:
raise ValueError("No crew found.")
for crew in crews:
if all:
crew.reset_memories(command_type="all")
click.echo(
f"[Crew ({crew.name if crew.name else crew.id})] Reset memories command has been completed."
)
continue
if long:
crew.reset_memories(command_type="long")
click.echo(
f"[Crew ({crew.name if crew.name else crew.id})] Long term memory has been reset."
)
if short:
crew.reset_memories(command_type="short")
click.echo(
f"[Crew ({crew.name if crew.name else crew.id})] Short term memory has been reset."
)
if entity:
crew.reset_memories(command_type="entity")
click.echo(
f"[Crew ({crew.name if crew.name else crew.id})] Entity memory has been reset."
)
if kickoff_outputs:
crew.reset_memories(command_type="kickoff_outputs")
click.echo(
f"[Crew ({crew.name if crew.name else crew.id})] Latest Kickoff outputs stored has been reset."
)
if knowledge:
crew.reset_memories(command_type="knowledge")
click.echo(
f"[Crew ({crew.name if crew.name else crew.id})] Knowledge has been reset."
)
if long:
crew.reset_memories(command_type="long")
click.echo("Long term memory has been reset.")
if short:
crew.reset_memories(command_type="short")
click.echo("Short term memory has been reset.")
if entity:
crew.reset_memories(command_type="entity")
click.echo("Entity memory has been reset.")
if kickoff_outputs:
crew.reset_memories(command_type="kickoff_outputs")
click.echo("Latest Kickoff outputs stored has been reset.")
if knowledge:
crew.reset_memories(command_type="knowledge")
click.echo("Knowledge has been reset.")
except subprocess.CalledProcessError as e:
click.echo(f"An error occurred while resetting the memories: {e}", err=True)

View File

@@ -2,8 +2,7 @@ import os
import shutil
import sys
from functools import reduce
from inspect import isfunction, ismethod
from typing import Any, Dict, List, get_type_hints
from typing import Any, Dict, List
import click
import tomli
@@ -11,7 +10,6 @@ from rich.console import Console
from crewai.cli.constants import ENV_VARS
from crewai.crew import Crew
from crewai.flow import Flow
if sys.version_info >= (3, 11):
import tomllib
@@ -252,11 +250,11 @@ def write_env_file(folder_path, env_vars):
file.write(f"{key}={value}\n")
def get_crews(crew_path: str = "crew.py", require: bool = False) -> list[Crew]:
"""Get the crew instances from the a file."""
crew_instances = []
def get_crew(crew_path: str = "crew.py", require: bool = False) -> Crew | None:
"""Get the crew instance from the crew.py file."""
try:
import importlib.util
import os
for root, _, files in os.walk("."):
if crew_path in files:
@@ -273,10 +271,12 @@ def get_crews(crew_path: str = "crew.py", require: bool = False) -> list[Crew]:
spec.loader.exec_module(module)
for attr_name in dir(module):
module_attr = getattr(module, attr_name)
attr = getattr(module, attr_name)
try:
crew_instances.extend(fetch_crews(module_attr))
if callable(attr) and hasattr(attr, "crew"):
crew_instance = attr().crew()
return crew_instance
except Exception as e:
print(f"Error processing attribute {attr_name}: {e}")
continue
@@ -286,6 +286,7 @@ def get_crews(crew_path: str = "crew.py", require: bool = False) -> list[Crew]:
import traceback
print(f"Traceback: {traceback.format_exc()}")
except (ImportError, AttributeError) as e:
if require:
console.print(
@@ -299,6 +300,7 @@ def get_crews(crew_path: str = "crew.py", require: bool = False) -> list[Crew]:
if require:
console.print("No valid Crew instance found in crew.py", style="bold red")
raise SystemExit
return None
except Exception as e:
if require:
@@ -306,36 +308,4 @@ def get_crews(crew_path: str = "crew.py", require: bool = False) -> list[Crew]:
f"Unexpected error while loading crew: {str(e)}", style="bold red"
)
raise SystemExit
return crew_instances
def get_crew_instance(module_attr) -> Crew | None:
if (
callable(module_attr)
and hasattr(module_attr, "is_crew_class")
and module_attr.is_crew_class
):
return module_attr().crew()
if (ismethod(module_attr) or isfunction(module_attr)) and get_type_hints(
module_attr
).get("return") is Crew:
return module_attr()
elif isinstance(module_attr, Crew):
return module_attr
else:
return None
def fetch_crews(module_attr) -> list[Crew]:
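# Collect a Crew from the attribute itself and, when the attribute is a Flow subclass, from the crews its instance exposes.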
crew_instances: list[Crew] = []
if crew_instance := get_crew_instance(module_attr):
crew_instances.append(crew_instance)
if isinstance(module_attr, type) and issubclass(module_attr, Flow):
instance = module_attr()
for attr_name in dir(instance):
attr = getattr(instance, attr_name)
if crew_instance := get_crew_instance(attr):
crew_instances.append(crew_instance)
return crew_instances

View File

@@ -6,17 +6,7 @@ import warnings
from concurrent.futures import Future
from copy import copy as shallow_copy
from hashlib import md5
from typing import (
Any,
Callable,
Dict,
List,
Optional,
Set,
Tuple,
Union,
cast,
)
from typing import Any, Callable, Dict, List, Optional, Set, Tuple, Union, cast
from pydantic import (
UUID4,
@@ -34,7 +24,6 @@ from crewai.agent import Agent
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.agents.cache import CacheHandler
from crewai.crews.crew_output import CrewOutput
from crewai.flow.flow_trackable import FlowTrackable
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.llm import LLM, BaseLLM
@@ -80,7 +69,7 @@ from crewai.utilities.training_handler import CrewTrainingHandler
warnings.filterwarnings("ignore", category=SyntaxWarning, module="pysbd")
class Crew(FlowTrackable, BaseModel):
class Crew(BaseModel):
"""
Represents a group of agents, defining how they should collaborate and the tasks they should perform.
@@ -315,9 +304,7 @@ class Crew(FlowTrackable, BaseModel):
"""Initialize private memory attributes."""
self._external_memory = (
# External memory doesn't support a default value since it was designed to be managed entirely externally
self.external_memory.set_crew(self)
if self.external_memory
else None
self.external_memory.set_crew(self) if self.external_memory else None
)
self._long_term_memory = self.long_term_memory
@@ -346,7 +333,6 @@ class Crew(FlowTrackable, BaseModel):
embedder=self.embedder,
collection_name="crew",
)
self.knowledge.add_sources()
except Exception as e:
self._logger.log(
@@ -1383,6 +1369,8 @@ class Crew(FlowTrackable, BaseModel):
else:
self._reset_specific_memory(command_type)
self._logger.log("info", f"{command_type} memory has been reset")
except Exception as e:
error_msg = f"Failed to reset {command_type} memory: {str(e)}"
self._logger.log("error", error_msg)
@@ -1403,14 +1391,8 @@ class Crew(FlowTrackable, BaseModel):
if system is not None:
try:
system.reset()
self._logger.log(
"info",
f"[Crew ({self.name if self.name else self.id})] {name} memory has been reset",
)
except Exception as e:
raise RuntimeError(
f"[Crew ({self.name if self.name else self.id})] Failed to reset {name} memory: {str(e)}"
) from e
raise RuntimeError(f"Failed to reset {name} memory") from e
def _reset_specific_memory(self, memory_type: str) -> None:
"""Reset a specific memory system.
@@ -1439,11 +1421,5 @@ class Crew(FlowTrackable, BaseModel):
try:
memory_system.reset()
self._logger.log(
"info",
f"[Crew ({self.name if self.name else self.id})] {name} memory has been reset",
)
except Exception as e:
raise RuntimeError(
f"[Crew ({self.name if self.name else self.id})] Failed to reset {name} memory: {str(e)}"
) from e
raise RuntimeError(f"Failed to reset {name} memory") from e

View File

@@ -1,44 +0,0 @@
import inspect
from typing import Optional
from pydantic import BaseModel, Field, InstanceOf, model_validator
from crewai.flow import Flow
class FlowTrackable(BaseModel):
"""Mixin that tracks the Flow instance that instantiated the object, e.g. a
Flow instance that created a Crew or Agent.
Automatically finds and stores a reference to the parent Flow instance by
inspecting the call stack.
"""
parent_flow: Optional[InstanceOf[Flow]] = Field(
default=None,
description="The parent flow of the instance, if it was created inside a flow.",
)
@model_validator(mode="after")
def _set_parent_flow(self, max_depth: int = 5) -> "FlowTrackable":
frame = inspect.currentframe()
try:
if frame is None:
return self
frame = frame.f_back
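# Walk up at most max_depth caller frames; the first frame whose `self` is a Flow becomes the parent flow.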
for _ in range(max_depth):
if frame is None:
break
candidate = frame.f_locals.get("self")
if isinstance(candidate, Flow):
self.parent_flow = candidate
break
frame = frame.f_back
finally:
del frame
return self

View File

@@ -41,6 +41,7 @@ class Knowledge(BaseModel):
)
self.sources = sources
self.storage.initialize_knowledge_storage()
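# Ingest sources as part of construction; callers no longer need a separate add_sources() step.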
self._add_sources()
def query(
self, query: List[str], results_limit: int = 3, score_threshold: float = 0.35
@@ -62,7 +63,7 @@ class Knowledge(BaseModel):
)
return results
def add_sources(self):
def _add_sources(self):
try:
for source in self.sources:
source.storage = self.storage

View File

@@ -13,7 +13,6 @@ from crewai.agents.parser import (
AgentFinish,
OutputParserException,
)
from crewai.flow.flow_trackable import FlowTrackable
from crewai.llm import LLM
from crewai.tools.base_tool import BaseTool
from crewai.tools.structured_tool import CrewStructuredTool
@@ -81,7 +80,7 @@ class LiteAgentOutput(BaseModel):
return self.raw
class LiteAgent(FlowTrackable, BaseModel):
class LiteAgent(BaseModel):
"""
A lightweight agent that can process messages and use tools.
@@ -163,7 +162,7 @@ class LiteAgent(FlowTrackable, BaseModel):
_messages: List[Dict[str, str]] = PrivateAttr(default_factory=list)
_iterations: int = PrivateAttr(default=0)
_printer: Printer = PrivateAttr(default_factory=Printer)
@model_validator(mode="after")
def setup_llm(self):
"""Set up the LLM and other components after initialization."""

View File

@@ -351,7 +351,6 @@ class LLM(BaseLLM):
"temperature": self.temperature,
"top_p": self.top_p,
"n": self.n,
"stop": self.stop,
"max_tokens": self.max_tokens or self.max_completion_tokens,
"presence_penalty": self.presence_penalty,
"frequency_penalty": self.frequency_penalty,
@@ -369,6 +368,9 @@ class LLM(BaseLLM):
"reasoning_effort": self.reasoning_effort,
**self.additional_params,
}
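# Some models (e.g. OpenAI's o3) reject the `stop` parameter, so it is only forwarded when the model supports stop words (issue #2738).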
if self.stop and self.supports_stop_words():
params["stop"] = self.stop
# Remove None values from params
return {k: v for k, v in params.items() if v is not None}

View File

@@ -2,7 +2,6 @@ from __future__ import annotations
import asyncio
import json
import logging
import os
import platform
import warnings
@@ -15,8 +14,6 @@ from crewai.telemetry.constants import (
CREWAI_TELEMETRY_SERVICE_NAME,
)
logger = logging.getLogger(__name__)
@contextmanager
def suppress_warnings():
@@ -31,10 +28,7 @@ from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
)
from opentelemetry.sdk.resources import SERVICE_NAME, Resource # noqa: E402
from opentelemetry.sdk.trace import TracerProvider # noqa: E402
from opentelemetry.sdk.trace.export import ( # noqa: E402
BatchSpanProcessor,
SpanExportResult,
)
from opentelemetry.sdk.trace.export import BatchSpanProcessor # noqa: E402
from opentelemetry.trace import Span, Status, StatusCode # noqa: E402
if TYPE_CHECKING:
@@ -42,15 +36,6 @@ if TYPE_CHECKING:
from crewai.task import Task
class SafeOTLPSpanExporter(OTLPSpanExporter):
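# Wraps the OTLP exporter so a failed export is logged and reported as FAILURE instead of raising.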
def export(self, spans) -> SpanExportResult:
try:
return super().export(spans)
except Exception as e:
logger.error(e)
return SpanExportResult.FAILURE
class Telemetry:
"""A class to handle anonymous telemetry for the crewai package.
@@ -79,7 +64,7 @@ class Telemetry:
self.provider = TracerProvider(resource=self.resource)
processor = BatchSpanProcessor(
SafeOTLPSpanExporter(
OTLPSpanExporter(
endpoint=f"{CREWAI_TELEMETRY_BASE_URL}/v1/traces",
timeout=30,
)

View File

@@ -1,19 +0,0 @@
"""
Backward compatibility module for crewai.telemtry to handle typo in import statements.
This module allows older code that imports from `crewai.telemtry` (misspelled)
to continue working by re-exporting the Telemetry class from the correctly
spelled `crewai.telemetry` module.
"""
import warnings
from crewai.telemetry import Telemetry
warnings.warn(
"Importing from 'crewai.telemtry' is deprecated due to spelling issues. "
"Please use 'from crewai.telemetry import Telemetry' instead.",
DeprecationWarning,
stacklevel=2
)
__all__ = ["Telemetry"]

View File

@@ -1,49 +0,0 @@
import sys
import unittest
import warnings
class BackwardCompatibilityTest(unittest.TestCase):
def setUp(self):
if "crewai.telemtry" in sys.modules:
del sys.modules["crewai.telemtry"]
warnings.resetwarnings()
def test_deprecation_warning(self):
"""Test that importing from the misspelled module raises a deprecation warning."""
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always", DeprecationWarning)
import importlib
import crewai.telemtry
importlib.reload(crewai.telemtry)
self.assertGreaterEqual(len(w), 1)
warning_messages = [str(warning.message) for warning in w]
warning_categories = [warning.category for warning in w]
has_deprecation_warning = False
for msg, cat in zip(warning_messages, warning_categories):
if (issubclass(cat, DeprecationWarning) and
"crewai.telemtry" in msg and
"crewai.telemetry" in msg):
has_deprecation_warning = True
break
self.assertTrue(has_deprecation_warning,
f"No matching deprecation warning found. Warnings: {warning_messages}")
def test_telemtry_typo_compatibility(self):
"""Test that the backward compatibility for the telemtry typo works."""
from crewai.telemetry import Telemetry
from crewai.telemtry import Telemetry as MisspelledTelemetry
self.assertIs(MisspelledTelemetry, Telemetry)
def test_functionality_preservation(self):
"""Test that the re-exported Telemetry class preserves all functionality."""
from crewai.telemetry import Telemetry
from crewai.telemtry import Telemetry as MisspelledTelemetry
self.assertEqual(dir(MisspelledTelemetry), dir(Telemetry))

View File

@@ -16,7 +16,7 @@ interactions:
answer MUST contain all the information requested in the following format: {\n \"summary\":
str,\n \"confidence\": int\n}\n\nIMPORTANT: Ensure the final output does not
include any code block markers like ```json or ```python."}, {"role": "user",
"content": "What is the population of Tokyo? Return your structured output in
"content": "What is the population of Tokyo? Return your strucutred output in
JSON format with the following fields: summary, confidence"}], "model": "gpt-4o-mini",
"stop": []}'
headers:

File diff suppressed because one or more lines are too long

View File

@@ -18,7 +18,6 @@ from crewai.cli.cli import (
train,
version,
)
from crewai.crew import Crew
@pytest.fixture
@@ -56,133 +55,81 @@ def test_train_invalid_string_iterations(train_crew, runner):
)
@pytest.fixture
def mock_crew():
_mock = mock.Mock(spec=Crew, name="test_crew")
_mock.name = "test_crew"
return _mock
@pytest.fixture
def mock_get_crews(mock_crew):
with mock.patch(
"crewai.cli.reset_memories_command.get_crews", return_value=[mock_crew]
) as mock_get_crew:
yield mock_get_crew
def test_reset_all_memories(mock_get_crews, runner):
@mock.patch("crewai.cli.reset_memories_command.get_crew")
def test_reset_all_memories(mock_get_crew, runner):
mock_crew = mock.Mock()
mock_get_crew.return_value = mock_crew
result = runner.invoke(reset_memories, ["-a"])
call_count = 0
for crew in mock_get_crews.return_value:
crew.reset_memories.assert_called_once_with(command_type="all")
assert (
f"[Crew ({crew.name})] Reset memories command has been completed."
in result.output
)
call_count += 1
assert call_count == 1, "reset_memories should have been called once"
mock_crew.reset_memories.assert_called_once_with(command_type="all")
assert result.output == "All memories have been reset.\n"
def test_reset_short_term_memories(mock_get_crews, runner):
@mock.patch("crewai.cli.reset_memories_command.get_crew")
def test_reset_short_term_memories(mock_get_crew, runner):
mock_crew = mock.Mock()
mock_get_crew.return_value = mock_crew
result = runner.invoke(reset_memories, ["-s"])
call_count = 0
for crew in mock_get_crews.return_value:
crew.reset_memories.assert_called_once_with(command_type="short")
assert (
f"[Crew ({crew.name})] Short term memory has been reset." in result.output
)
call_count += 1
assert call_count == 1, "reset_memories should have been called once"
mock_crew.reset_memories.assert_called_once_with(command_type="short")
assert result.output == "Short term memory has been reset.\n"
def test_reset_entity_memories(mock_get_crews, runner):
@mock.patch("crewai.cli.reset_memories_command.get_crew")
def test_reset_entity_memories(mock_get_crew, runner):
mock_crew = mock.Mock()
mock_get_crew.return_value = mock_crew
result = runner.invoke(reset_memories, ["-e"])
call_count = 0
for crew in mock_get_crews.return_value:
crew.reset_memories.assert_called_once_with(command_type="entity")
assert f"[Crew ({crew.name})] Entity memory has been reset." in result.output
call_count += 1
assert call_count == 1, "reset_memories should have been called once"
mock_crew.reset_memories.assert_called_once_with(command_type="entity")
assert result.output == "Entity memory has been reset.\n"
def test_reset_long_term_memories(mock_get_crews, runner):
@mock.patch("crewai.cli.reset_memories_command.get_crew")
def test_reset_long_term_memories(mock_get_crew, runner):
mock_crew = mock.Mock()
mock_get_crew.return_value = mock_crew
result = runner.invoke(reset_memories, ["-l"])
call_count = 0
for crew in mock_get_crews.return_value:
crew.reset_memories.assert_called_once_with(command_type="long")
assert f"[Crew ({crew.name})] Long term memory has been reset." in result.output
call_count += 1
assert call_count == 1, "reset_memories should have been called once"
mock_crew.reset_memories.assert_called_once_with(command_type="long")
assert result.output == "Long term memory has been reset.\n"
def test_reset_kickoff_outputs(mock_get_crews, runner):
@mock.patch("crewai.cli.reset_memories_command.get_crew")
def test_reset_kickoff_outputs(mock_get_crew, runner):
mock_crew = mock.Mock()
mock_get_crew.return_value = mock_crew
result = runner.invoke(reset_memories, ["-k"])
call_count = 0
for crew in mock_get_crews.return_value:
crew.reset_memories.assert_called_once_with(command_type="kickoff_outputs")
assert (
f"[Crew ({crew.name})] Latest Kickoff outputs stored has been reset."
in result.output
)
call_count += 1
assert call_count == 1, "reset_memories should have been called once"
mock_crew.reset_memories.assert_called_once_with(command_type="kickoff_outputs")
assert result.output == "Latest Kickoff outputs stored has been reset.\n"
def test_reset_multiple_memory_flags(mock_get_crews, runner):
@mock.patch("crewai.cli.reset_memories_command.get_crew")
def test_reset_multiple_memory_flags(mock_get_crew, runner):
mock_crew = mock.Mock()
mock_get_crew.return_value = mock_crew
result = runner.invoke(reset_memories, ["-s", "-l"])
call_count = 0
for crew in mock_get_crews.return_value:
crew.reset_memories.assert_has_calls(
[mock.call(command_type="long"), mock.call(command_type="short")]
)
assert (
f"[Crew ({crew.name})] Long term memory has been reset.\n"
f"[Crew ({crew.name})] Short term memory has been reset.\n" in result.output
)
call_count += 1
assert call_count == 1, "reset_memories should have been called once"
# Check that reset_memories was called twice with the correct arguments
assert mock_crew.reset_memories.call_count == 2
mock_crew.reset_memories.assert_has_calls(
[mock.call(command_type="long"), mock.call(command_type="short")]
)
assert (
result.output
== "Long term memory has been reset.\nShort term memory has been reset.\n"
)
def test_reset_knowledge(mock_get_crews, runner):
result = runner.invoke(reset_memories, ["--knowledge"])
call_count = 0
for crew in mock_get_crews.return_value:
crew.reset_memories.assert_called_once_with(command_type="knowledge")
assert f"[Crew ({crew.name})] Knowledge has been reset." in result.output
call_count += 1
assert call_count == 1, "reset_memories should have been called once"
def test_reset_memory_from_many_crews(mock_get_crews, runner):
crews = []
for crew_id in ["id-1234", "id-5678"]:
mock_crew = mock.Mock(spec=Crew)
mock_crew.name = None
mock_crew.id = crew_id
crews.append(mock_crew)
mock_get_crews.return_value = crews
# Run the command
@mock.patch("crewai.cli.reset_memories_command.get_crew")
def test_reset_knowledge(mock_get_crew, runner):
mock_crew = mock.Mock()
mock_get_crew.return_value = mock_crew
result = runner.invoke(reset_memories, ["--knowledge"])
call_count = 0
for crew in crews:
call_count += 1
crew.reset_memories.assert_called_once_with(command_type="knowledge")
assert f"[Crew ({crew.id})] Knowledge has been reset." in result.output
assert call_count == 2, "reset_memories should have been called twice"
mock_crew.reset_memories.assert_called_once_with(command_type="knowledge")
assert result.output == "Knowledge has been reset.\n"
def test_reset_no_memory_flags(runner):

View File

@@ -3,13 +3,12 @@ import tempfile
import unittest
import unittest.mock
from contextlib import contextmanager
from io import StringIO
from unittest import mock
from unittest.mock import MagicMock, patch
import pytest
from pytest import raises
from crewai.cli.authentication.utils import TokenManager
from crewai.cli.tools.main import ToolCommand
@@ -24,20 +23,17 @@ def in_temp_dir():
os.chdir(original_dir)
@pytest.fixture
def tool_command():
TokenManager().save_tokens("test-token", 36000)
tool_command = ToolCommand()
with patch.object(tool_command, "login"):
yield tool_command
@patch("crewai.cli.tools.main.subprocess.run")
def test_create_success(mock_subprocess, capsys, tool_command):
def test_create_success(mock_subprocess):
with in_temp_dir():
tool_command.create("test-tool")
output = capsys.readouterr().out
assert "Creating custom tool test_tool..." in output
tool_command = ToolCommand()
with (
patch.object(tool_command, "login") as mock_login,
patch("sys.stdout", new=StringIO()) as fake_out,
):
tool_command.create("test-tool")
output = fake_out.getvalue()
assert os.path.isdir("test_tool")
assert os.path.isfile(os.path.join("test_tool", "README.md"))
@@ -51,12 +47,15 @@ def test_create_success(mock_subprocess, capsys, tool_command):
content = f.read()
assert "class TestTool" in content
mock_login.assert_called_once()
mock_subprocess.assert_called_once_with(["git", "init"], check=True)
assert "Creating custom tool test_tool..." in output
@patch("crewai.cli.tools.main.subprocess.run")
@patch("crewai.cli.plus_api.PlusAPI.get_tool")
def test_install_success(mock_get, mock_subprocess_run, capsys, tool_command):
def test_install_success(mock_get, mock_subprocess_run):
mock_get_response = MagicMock()
mock_get_response.status_code = 200
mock_get_response.json.return_value = {
@@ -66,9 +65,11 @@ def test_install_success(mock_get, mock_subprocess_run, capsys, tool_command):
mock_get.return_value = mock_get_response
mock_subprocess_run.return_value = MagicMock(stderr=None)
tool_command.install("sample-tool")
output = capsys.readouterr().out
assert "Successfully installed sample-tool" in output
tool_command = ToolCommand()
with patch("sys.stdout", new=StringIO()) as fake_out:
tool_command.install("sample-tool")
output = fake_out.getvalue()
mock_get.assert_has_calls([mock.call("sample-tool"), mock.call().json()])
mock_subprocess_run.assert_any_call(
@@ -85,42 +86,54 @@ def test_install_success(mock_get, mock_subprocess_run, capsys, tool_command):
env=unittest.mock.ANY,
)
assert "Successfully installed sample-tool" in output
@patch("crewai.cli.plus_api.PlusAPI.get_tool")
def test_install_tool_not_found(mock_get, capsys, tool_command):
def test_install_tool_not_found(mock_get):
mock_get_response = MagicMock()
mock_get_response.status_code = 404
mock_get.return_value = mock_get_response
with raises(SystemExit):
tool_command.install("non-existent-tool")
output = capsys.readouterr().out
assert "No tool found with this name" in output
tool_command = ToolCommand()
with patch("sys.stdout", new=StringIO()) as fake_out:
try:
tool_command.install("non-existent-tool")
except SystemExit:
pass
output = fake_out.getvalue()
mock_get.assert_called_once_with("non-existent-tool")
assert "No tool found with this name" in output
@patch("crewai.cli.plus_api.PlusAPI.get_tool")
def test_install_api_error(mock_get, capsys, tool_command):
def test_install_api_error(mock_get):
mock_get_response = MagicMock()
mock_get_response.status_code = 500
mock_get.return_value = mock_get_response
with raises(SystemExit):
tool_command.install("error-tool")
output = capsys.readouterr().out
assert "Failed to get tool details" in output
tool_command = ToolCommand()
with patch("sys.stdout", new=StringIO()) as fake_out:
try:
tool_command.install("error-tool")
except SystemExit:
pass
output = fake_out.getvalue()
mock_get.assert_called_once_with("error-tool")
assert "Failed to get tool details" in output
@patch("crewai.cli.tools.main.git.Repository.is_synced", return_value=False)
def test_publish_when_not_in_sync(mock_is_synced, capsys, tool_command):
with raises(SystemExit):
def test_publish_when_not_in_sync(mock_is_synced):
with patch("sys.stdout", new=StringIO()) as fake_out, raises(SystemExit):
tool_command = ToolCommand()
tool_command.publish(is_public=True)
output = capsys.readouterr().out
assert "Local changes need to be resolved before publishing" in output
assert "Local changes need to be resolved before publishing" in fake_out.getvalue()
@patch("crewai.cli.tools.main.get_project_name", return_value="sample-tool")
@@ -144,13 +157,13 @@ def test_publish_when_not_in_sync_and_force(
mock_get_project_description,
mock_get_project_version,
mock_get_project_name,
tool_command,
):
mock_publish_response = MagicMock()
mock_publish_response.status_code = 200
mock_publish_response.json.return_value = {"handle": "sample-tool"}
mock_publish.return_value = mock_publish_response
tool_command = ToolCommand()
tool_command.publish(is_public=True, force=True)
mock_get_project_name.assert_called_with(require=True)
@@ -192,13 +205,13 @@ def test_publish_success(
mock_get_project_description,
mock_get_project_version,
mock_get_project_name,
tool_command,
):
mock_publish_response = MagicMock()
mock_publish_response.status_code = 200
mock_publish_response.json.return_value = {"handle": "sample-tool"}
mock_publish.return_value = mock_publish_response
tool_command = ToolCommand()
tool_command.publish(is_public=True)
mock_get_project_name.assert_called_with(require=True)
@@ -238,21 +251,24 @@ def test_publish_failure(
mock_get_project_description,
mock_get_project_version,
mock_get_project_name,
capsys,
tool_command,
):
mock_publish_response = MagicMock()
mock_publish_response.status_code = 422
mock_publish_response.json.return_value = {"name": ["is already taken"]}
mock_publish.return_value = mock_publish_response
with raises(SystemExit):
tool_command.publish(is_public=True)
output = capsys.readouterr().out
assert "Failed to complete operation" in output
assert "Name is already taken" in output
tool_command = ToolCommand()
with patch("sys.stdout", new=StringIO()) as fake_out:
try:
tool_command.publish(is_public=True)
except SystemExit:
pass
output = fake_out.getvalue()
mock_publish.assert_called_once()
assert "Failed to complete operation" in output
assert "Name is already taken" in output
@patch("crewai.cli.tools.main.get_project_name", return_value="sample-tool")
@@ -274,8 +290,6 @@ def test_publish_api_error(
mock_get_project_description,
mock_get_project_version,
mock_get_project_name,
capsys,
tool_command,
):
mock_response = MagicMock()
mock_response.status_code = 500
@@ -283,9 +297,14 @@ def test_publish_api_error(
mock_response.ok = False
mock_publish.return_value = mock_response
with raises(SystemExit):
tool_command.publish(is_public=True)
output = capsys.readouterr().out
assert "Request to Enterprise API failed" in output
tool_command = ToolCommand()
with patch("sys.stdout", new=StringIO()) as fake_out:
try:
tool_command.publish(is_public=True)
except SystemExit:
pass
output = fake_out.getvalue()
mock_publish.assert_called_once()
assert "Request to Enterprise API failed" in output

View File

@@ -17,7 +17,6 @@ from crewai.agents.cache import CacheHandler
from crewai.agents.crew_agent_executor import CrewAgentExecutor
from crewai.crew import Crew
from crewai.crews.crew_output import CrewOutput
from crewai.flow import Flow, listen, start
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
from crewai.llm import LLM
from crewai.memory.contextual.contextual_memory import ContextualMemory
@@ -2165,6 +2164,7 @@ def test_tools_with_custom_caching():
with patch.object(
CacheHandler, "add", wraps=crew._cache_handler.add
) as add_to_cache:
result = crew.kickoff()
# Check that add_to_cache was called exactly twice
@@ -4351,35 +4351,3 @@ def test_crew_copy_with_memory():
raise e # Re-raise other validation errors
except Exception as e:
pytest.fail(f"Copying crew raised an unexpected exception: {e}")
def test_sets_parent_flow_when_outside_flow(researcher, writer):
crew = Crew(
agents=[researcher, writer],
process=Process.sequential,
tasks=[
Task(description="Task 1", expected_output="output", agent=researcher),
Task(description="Task 2", expected_output="output", agent=writer),
],
)
assert crew.parent_flow is None
def test_sets_parent_flow_when_inside_flow(researcher, writer):
class MyFlow(Flow):
@start()
def start(self):
return Crew(
agents=[researcher, writer],
process=Process.sequential,
tasks=[
Task(
description="Task 1", expected_output="output", agent=researcher
),
Task(description="Task 2", expected_output="output", agent=writer),
],
)
flow = MyFlow()
result = flow.kickoff()
assert result.parent_flow is flow

View File

@@ -1,69 +0,0 @@
import os
from unittest.mock import patch
import pytest
from crewai import Agent, Crew, Task
from crewai.telemetry import Telemetry
@pytest.mark.parametrize(
"env_var,value,expected_ready",
[
("OTEL_SDK_DISABLED", "true", False),
("OTEL_SDK_DISABLED", "TRUE", False),
("CREWAI_DISABLE_TELEMETRY", "true", False),
("CREWAI_DISABLE_TELEMETRY", "TRUE", False),
("OTEL_SDK_DISABLED", "false", True),
("CREWAI_DISABLE_TELEMETRY", "false", True),
],
)
def test_telemetry_environment_variables(env_var, value, expected_ready):
"""Test telemetry state with different environment variable configurations."""
with patch.dict(os.environ, {env_var: value}):
with patch("crewai.telemetry.telemetry.TracerProvider"):
telemetry = Telemetry()
assert telemetry.ready is expected_ready
def test_telemetry_enabled_by_default():
"""Test that telemetry is enabled by default."""
with patch.dict(os.environ, {}, clear=True):
with patch("crewai.telemetry.telemetry.TracerProvider"):
telemetry = Telemetry()
assert telemetry.ready is True
from opentelemetry import trace
@patch("crewai.telemetry.telemetry.logger.error")
@patch(
"opentelemetry.exporter.otlp.proto.http.trace_exporter.OTLPSpanExporter.export",
side_effect=Exception("Test exception"),
)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_telemetry_fails_due_connect_timeout(export_mock, logger_mock):
error = Exception("Test exception")
export_mock.side_effect = error
tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("test-span"):
agent = Agent(
role="agent",
llm="gpt-4o-mini",
goal="Just say hi",
backstory="You are a helpful assistant that just says hi",
)
task = Task(
description="Just say hi",
expected_output="hi",
agent=agent,
)
crew = Crew(agents=[agent], tasks=[task], name="TestCrew")
crew.kickoff()
trace.get_tracer_provider().force_flush()
export_mock.assert_called_once()
logger_mock.assert_called_once_with(error)

View File

@@ -1,16 +1,13 @@
import asyncio
from typing import cast
from unittest.mock import Mock
import pytest
from pydantic import BaseModel, Field
from crewai import LLM, Agent
from crewai.flow import Flow, start
from crewai.lite_agent import LiteAgent, LiteAgentOutput
from crewai.tools import BaseTool
from crewai.utilities.events import crewai_event_bus
from crewai.utilities.events.agent_events import LiteAgentExecutionStartedEvent
from crewai.utilities.events.tool_usage_events import ToolUsageStartedEvent
@@ -258,60 +255,3 @@ async def test_lite_agent_returns_usage_metrics_async():
assert "21 million" in result.raw or "37 million" in result.raw
assert result.usage_metrics is not None
assert result.usage_metrics["total_tokens"] > 0
class TestFlow(Flow):
"""A test flow that creates and runs an agent."""
def __init__(self, llm, tools):
self.llm = llm
self.tools = tools
super().__init__()
@start()
def start(self):
agent = Agent(
role="Test Agent",
goal="Test Goal",
backstory="Test Backstory",
llm=self.llm,
tools=self.tools,
)
return agent.kickoff("Test query")
def verify_agent_parent_flow(result, agent, flow):
"""Verify that both the result and agent have the correct parent flow."""
assert result.parent_flow is flow
assert agent is not None
assert agent.parent_flow is flow
def test_sets_parent_flow_when_inside_flow():
captured_agent = None
mock_llm = Mock(spec=LLM)
mock_llm.call.return_value = "Test response"
class MyFlow(Flow):
@start()
def start(self):
agent = Agent(
role="Test Agent",
goal="Test Goal",
backstory="Test Backstory",
llm=mock_llm,
tools=[WebSearchTool()],
)
return agent.kickoff("Test query")
flow = MyFlow()
with crewai_event_bus.scoped_handlers():
@crewai_event_bus.on(LiteAgentExecutionStartedEvent)
def capture_agent(source, event):
nonlocal captured_agent
captured_agent = source
result = flow.kickoff()
assert captured_agent.parent_flow is flow

tests/unit/test_llm.py (new file, 52 lines)
View File

@@ -0,0 +1,52 @@
import unittest
from types import SimpleNamespace
from unittest.mock import MagicMock, patch
from crewai.llm import LLM
class TestLLM(unittest.TestCase):
@patch("crewai.llm.litellm.completion")
@patch("crewai.llm.LLM.supports_stop_words")
def test_call_with_supported_stop_words(self, mock_supports_stop_words, mock_completion):
mock_supports_stop_words.return_value = True
message = SimpleNamespace(content="Hello, World!")
choice = SimpleNamespace(message=message)
response = SimpleNamespace(choices=[choice])
mock_completion.return_value = response
llm = LLM(model="gpt-4", stop=["STOP"])
messages = [{"role": "user", "content": "Say Hello"}]
result = llm.call(messages)
mock_completion.assert_called_once()
call_args = mock_completion.call_args[1]
self.assertIn("stop", call_args)
self.assertEqual(call_args["stop"], ["STOP"])
self.assertEqual(result, "Hello, World!")
@patch("crewai.llm.litellm.completion")
@patch("crewai.llm.LLM.supports_stop_words")
def test_call_with_unsupported_stop_words(self, mock_supports_stop_words, mock_completion):
mock_supports_stop_words.return_value = False
message = SimpleNamespace(content="Hello, World!")
choice = SimpleNamespace(message=message)
response = SimpleNamespace(choices=[choice])
mock_completion.return_value = response
llm = LLM(model="o3", stop=["STOP"])
messages = [{"role": "user", "content": "Say Hello"}]
result = llm.call(messages)
mock_completion.assert_called_once()
call_args = mock_completion.call_args[1]
self.assertNotIn("stop", call_args)
self.assertEqual(result, "Hello, World!")
if __name__ == "__main__":
unittest.main()

uv.lock (generated, 8 changed lines)
View File

@@ -835,7 +835,7 @@ requires-dist = [
{ name = "json-repair", specifier = ">=0.25.2" },
{ name = "json5", specifier = ">=0.10.0" },
{ name = "jsonref", specifier = ">=1.1.0" },
{ name = "litellm", specifier = "==1.68.0" },
{ name = "litellm", specifier = "==1.67.1" },
{ name = "mem0ai", marker = "extra == 'mem0'", specifier = ">=0.1.94" },
{ name = "openai", specifier = ">=1.13.3" },
{ name = "openpyxl", specifier = ">=3.1.5" },
@@ -2387,7 +2387,7 @@ wheels = [
[[package]]
name = "litellm"
version = "1.68.0"
version = "1.67.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "aiohttp" },
@@ -2402,9 +2402,9 @@ dependencies = [
{ name = "tiktoken" },
{ name = "tokenizers" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ba/22/138545b646303ca3f4841b69613c697b9d696322a1386083bb70bcbba60b/litellm-1.68.0.tar.gz", hash = "sha256:9fb24643db84dfda339b64bafca505a2eef857477afbc6e98fb56512c24dbbfa", size = 7314051 }
sdist = { url = "https://files.pythonhosted.org/packages/54/a4/bb3e9ae59e5a9857443448de7c04752630dc84cddcbd8cee037c0976f44f/litellm-1.67.1.tar.gz", hash = "sha256:78eab1bd3d759ec13aa4a05864356a4a4725634e78501db609d451bf72150ee7", size = 7242044 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/10/af/1e344bc8aee41445272e677d802b774b1f8b34bdc3bb5697ba30f0fb5d52/litellm-1.68.0-py3-none-any.whl", hash = "sha256:3bca38848b1a5236b11aa6b70afa4393b60880198c939e582273f51a542d4759", size = 7684460 },
{ url = "https://files.pythonhosted.org/packages/88/86/c14d3c24ae13c08296d068e6f79fd4bd17a0a07bddbda94990b87c35d20e/litellm-1.67.1-py3-none-any.whl", hash = "sha256:8fff5b2a16b63bb594b94d6c071ad0f27d3d8cd4348bd5acea2fd40c8e0c11e8", size = 7607266 },
]
[[package]]