Compare commits

...

17 Commits

Author SHA1 Message Date
Lucas Gomide
1f37d2d26d Merge branch 'main' into lg-improve-tranning 2025-07-01 12:25:52 -03:00
Tony Kipkemboi
2ab002a5bf Add Reo.dev tracking script to documentation (#3094)
2025-07-01 10:29:28 -04:00
Lucas Gomide
59e3f5df6d Merge branch 'main' into lg-improve-tranning 2025-07-01 10:33:32 -03:00
Lucas Gomide
b7bf15681e feat: add capability to track LLM calls by task and agent (#3087)
* feat: add capability to track LLM calls by task and agent

This makes it possible to filter or scope LLM events by specific agents or tasks, which can be very useful for debugging or analytics in real-time applications.

* feat: add docs about LLM tracking by Agents and Tasks

* fix incompatible BaseLLM.call method signature

* feat: support filtering LLM events from Lite Agent
2025-07-01 09:30:16 -04:00
Lucas Gomide
df869c5657 Merge branch 'main' into lg-improve-tranning 2025-07-01 10:21:37 -03:00
Tony Kipkemboi
af9c01f5d3 Add Scarf analytics tracking (#3086)
* Add Scarf analytics tracking

* Fix bandit security warning for urlopen

* Fix linting errors

* Refactor telemetry: reuse existing logic and simplify exceptions
2025-06-30 17:48:45 -04:00
Lucas Gomide
c66fbf5e5f Merge branch 'main' into lg-improve-tranning 2025-06-30 16:11:11 -03:00
Irineu Brito
5a12b51ba2 fix: Correct typo 'depployments' to 'deployments' in documentation 'instalation' (#3081) 2025-06-30 12:19:31 -04:00
Lucas Gomide
00c8fad257 docs: add training considerations for small models to the documentation 2025-06-30 11:11:10 -03:00
Lucas Gomide
994f0e1403 feat: improve training data evaluation for models up to 7B parameters. 2025-06-30 11:11:10 -03:00
Michael Juliano
576b8ff836 Updated LiteLLM dependency. (#3047)
* Updated LiteLLM dependency.

This moves to the latest stable release. Critically, this includes a fix
from https://github.com/BerriAI/litellm/pull/11563 which is required to
use grok-3-mini with crewAI.

* Ran `uv sync` as requested.
2025-06-27 09:54:12 -04:00
Lucas Gomide
b35c3e8024 fix: ensure env-vars are written in upper case (#3072)
When creating a Crew via the CLI and selecting the Azure provider, the generated .env file had environment variables in lowercase.
This commit ensures that all environment variables are written in uppercase.
2025-06-26 12:29:06 -04:00
Mr. Ånand
b09796cd3f Adding Nebius to docs (#3070)
* Adding Nebius to docs

Submitting this PR on behalf of Nebius AI Studio to add Nebius models to the CrewAI documentation.

I tested with the latest CrewAI + Nebius setup to ensure compatibility.

cc @tonykipkemboi

* updated LiteLLM page

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-06-26 11:10:19 -04:00
devin-ai-integration[bot]
e0b46492fa Fix: Normalize project names by stripping trailing slashes in crew creation (#3060)
* fix: normalize project names by stripping trailing slashes in crew creation

- Strip trailing slashes from project names in create_folder_structure
- Add comprehensive tests for trailing slash scenarios
- Fixes #3059

The issue occurred because trailing slashes in project names like 'hello/'
were directly incorporated into pyproject.toml, creating invalid package
names and script entries. This fix silently normalizes project names by
stripping trailing slashes before processing, maintaining backward
compatibility while fixing the invalid template generation.
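As an illustration of the normalization described above, a minimal sketch of how the folder and class names are derived (simplified from the `create_folder_structure` change later in this diff; the real function also validates identifiers and Python keywords):

```python
# Simplified sketch of the normalization described above; the real
# create_folder_structure in crewai/cli/create_crew.py adds identifier
# and keyword validation on top of this.
def normalize_project_name(name: str) -> tuple[str, str]:
    name = name.rstrip("/")  # "hello/" -> "hello"
    if not name.strip():
        raise ValueError("Project name cannot be empty or contain only whitespace")
    folder_name = name.replace(" ", "_").replace("-", "_").lower()
    class_name = name.replace("_", " ").replace("-", " ").title().replace(" ", "")
    return folder_name, class_name

assert normalize_project_name("hello/") == ("hello", "Hello")
assert normalize_project_name("My Cool-Project/") == ("my_cool_project", "MyCoolProject")
```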

Co-Authored-By: João <joao@crewai.com>

* trigger CI re-run to check for flaky test issue

Co-Authored-By: João <joao@crewai.com>

* fix: resolve circular import in CLI authentication module

- Move ToolCommand import to be local inside _poll_for_token method
- Update test mock to patch ToolCommand at correct location
- Resolves Python 3.11 test collection failure in CI

Co-Authored-By: João <joao@crewai.com>

* feat: add comprehensive class name validation for Python identifiers

- Ensure generated class names are always valid Python identifiers
- Handle edge cases: names starting with numbers, special characters, keywords, built-ins
- Add sanitization logic to remove invalid characters and prefix with 'Crew' when needed
- Add comprehensive test coverage for class name validation edge cases
- Addresses GitHub PR comment from lucasgomide about class name validity

Fixes include:
- Names starting with numbers: '123project' -> 'Crew123Project'
- Python built-ins: 'True' -> 'TrueCrew', 'False' -> 'FalseCrew'
- Special characters: 'hello@world' -> 'HelloWorld'
- Empty/whitespace: '   ' -> 'DefaultCrew'
- All generated class names pass isidentifier() and keyword checks

Co-Authored-By: João <joao@crewai.com>

* refactor: change class name validation to raise errors instead of generating defaults

- Remove default value generation (Crew prefix/suffix, DefaultCrew fallback)
- Raise ValueError with descriptive messages for invalid class names
- Update tests to expect validation errors instead of default corrections
- Addresses GitHub comment feedback from lucasgomide about strict validation

Co-Authored-By: João <joao@crewai.com>

* fix: add working directory safety checks to prevent test interference

Co-Authored-By: João <joao@crewai.com>

* fix: standardize working directory handling in tests to prevent corruption

Co-Authored-By: João <joao@crewai.com>

* fix: eliminate os.chdir() usage in tests to prevent working directory corruption

- Replace os.chdir() with parent_folder parameter for create_folder_structure tests
- Mock create_folder_structure directly for create_crew tests to avoid directory changes
- All 12 tests now pass locally without working directory corruption
- Should resolve the 103 failing tests in Python 3.12 CI

Co-Authored-By: João <joao@crewai.com>

* fix: remove unused os import to resolve lint failure

- Remove unused 'import os' statement from test_create_crew.py
- All tests still pass locally after removing unused import
- Should resolve F401 lint error in CI

Co-Authored-By: João <joao@crewai.com>

* feat: add folder name validation for Python module names

- Implement validation to ensure folder_name is valid Python identifier
- Check that folder names don't start with digits
- Validate folder names are not Python keywords
- Sanitize invalid characters from folder names
- Raise ValueError with descriptive messages for invalid cases
- Update tests to validate both folder and class name requirements
- Addresses GitHub comment requiring folder names to be valid Python module names

Co-Authored-By: João <joao@crewai.com>

* fix: correct folder name validation logic to match test expectations

- Fix validation regex to catch names starting with invalid characters like '@#/'
- Ensure validation properly raises ValueError for cases expected by tests
- Maintain support for valid cases like 'my.project/' -> 'myproject'
- Address lucasgomide's comment about valid Python module names

Co-Authored-By: João <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: João <joao@crewai.com>
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-06-26 10:11:16 -04:00
Greyson LaLonde
ece13fbda0 refactor: implement PEP 621 dynamic versioning (#3068) 2025-06-26 10:02:26 -04:00
kilavvy
94a62d84e1 Update test_lite_agent.py (#3040)
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-06-26 09:55:53 -04:00
Lucas Gomide
cdf8388b18 docs: update CLI LLM's documentation (#3071)
This change makes the docs more generic, so we don't have to constantly mirror every LLM option the CLI suggests when creating a crew.
2025-06-26 09:31:43 -04:00
34 changed files with 4210 additions and 2902 deletions

View File

@@ -285,13 +285,13 @@ Watch this video tutorial for a step-by-step demonstration of deploying your cre
### 11. API Keys
When running ```crewai create crew``` command, the CLI will first show you the top 5 most common LLM providers and ask you to select one.
When running ```crewai create crew``` command, the CLI will show you a list of available LLM providers to choose from, followed by model selection for your chosen provider.
Once you've selected an LLM provider, you will be prompted for API keys.
Once you've selected an LLM provider and model, you will be prompted for API keys.
#### Initial API key providers
#### Available LLM Providers
The CLI will initially prompt for API keys for the following services:
Here's a list of the most popular LLM providers suggested by the CLI:
* OpenAI
* Groq
@@ -299,17 +299,14 @@ The CLI will initially prompt for API keys for the following services:
* Google Gemini
* SambaNova
When you select a provider, the CLI will prompt you to enter your API key.
When you select a provider, the CLI will then show you available models for that provider and prompt you to enter your API key.
#### Other Options
If you select option 6, you will be able to select from a list of LiteLLM supported providers.
If you select "other", you will be able to select from a list of LiteLLM supported providers.
When you select a provider, the CLI will prompt you to enter the Key name and the API key.
See the following link for each provider's key name:
* [LiteLLM Providers](https://docs.litellm.ai/docs/providers)

View File

@@ -684,6 +684,28 @@ In this section, you'll find detailed examples that help you select, configure,
- openrouter/deepseek/deepseek-chat
</Info>
</Accordion>
<Accordion title="Nebius AI Studio">
Set the following environment variables in your `.env` file:
```toml Code
NEBIUS_API_KEY=<your-api-key>
```
Example usage in your CrewAI project:
```python Code
llm = LLM(
model="nebius/Qwen/Qwen3-30B-A3B"
)
```
<Info>
Nebius AI Studio features:
- Large collection of open source models
- Higher rate limits
- Competitive pricing
- Good balance of speed and quality
</Info>
</Accordion>
</AccordionGroup>
## Streaming Responses
@@ -727,9 +749,58 @@ CrewAI supports streaming responses from LLMs, allowing your application to rece
```
<Tip>
[Click here](https://docs.crewai.com/concepts/event-listener#event-listeners) for more details
[Click here](https://docs.crewai.com/concepts/event-listener#event-listeners) for more details
</Tip>
</Tab>
<Tab title="Agent & Task Tracking">
All LLM events in CrewAI include agent and task information, allowing you to track and filter LLM interactions by specific agents or tasks:
```python
from crewai import LLM, Agent, Task, Crew
from crewai.utilities.events import LLMStreamChunkEvent
from crewai.utilities.events.base_event_listener import BaseEventListener
class MyCustomListener(BaseEventListener):
def setup_listeners(self, crewai_event_bus):
@crewai_event_bus.on(LLMStreamChunkEvent)
def on_llm_stream_chunk(source, event):
if researcher.id == event.agent_id:
print("\n==============\n Got event:", event, "\n==============\n")
my_listener = MyCustomListener()
llm = LLM(model="gpt-4o-mini", temperature=0, stream=True)
researcher = Agent(
role="About User",
goal="You know everything about the user.",
backstory="""You are a master at understanding people and their preferences.""",
llm=llm,
)
search = Task(
description="Answer the following questions about the user: {question}",
expected_output="An answer to the question.",
agent=researcher,
)
crew = Crew(agents=[researcher], tasks=[search])
result = crew.kickoff(
inputs={"question": "..."}
)
```
<Info>
This feature is particularly useful for:
- Debugging specific agent behaviors
- Logging LLM usage by task type
- Auditing which agents are making what types of LLM calls
- Performance monitoring of specific tasks
</Info>
</Tab>
</Tabs>
## Structured LLM Calls
@@ -825,7 +896,7 @@ Learn how to get the most out of your LLM configuration:
Remember to regularly monitor your token usage and adjust your configuration as needed to optimize costs and performance.
</Info>
</Accordion>
<Accordion title="Drop Additional Parameters">
CrewAI internally uses Litellm for LLM calls, which allows you to drop additional parameters that are not needed for your specific use case. This can help simplify your code and reduce the complexity of your LLM configuration.
For example, if you don't need to send the <code>stop</code> parameter, you can simply omit it from your LLM call:
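The concrete example sits in the part of the file outside this hunk; a minimal sketch of the idea, with illustrative parameter values:

```python
from crewai import LLM

# Minimal sketch: parameters you don't need (such as `stop`) can simply be
# left out of the LLM configuration; LiteLLM only forwards what you set.
llm = LLM(
    model="gpt-4o",
    temperature=0.7,
    # no `stop` parameter passed
)
```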

View File

@@ -6,10 +6,10 @@ icon: dumbbell
## Overview
The training feature in CrewAI allows you to train your AI agents using the command-line interface (CLI).
The training feature in CrewAI allows you to train your AI agents using the command-line interface (CLI).
By running the command `crewai train -n <n_iterations>`, you can specify the number of iterations for the training process.
During training, CrewAI utilizes techniques to optimize the performance of your agents along with human feedback.
During training, CrewAI utilizes techniques to optimize the performance of your agents along with human feedback.
This helps the agents improve their understanding, decision-making, and problem-solving abilities.
### Training Your Crew Using the CLI
@@ -42,8 +42,8 @@ filename = "your_model.pkl"
try:
YourCrewName_Crew().crew().train(
n_iterations=n_iterations,
inputs=inputs,
n_iterations=n_iterations,
inputs=inputs,
filename=filename
)
@@ -64,4 +64,68 @@ Once the training is complete, your agents will be equipped with enhanced capabi
Remember to regularly update and retrain your agents to ensure they stay up-to-date with the latest information and advancements in the field.
Happy training with CrewAI! 🚀
## Small Language Model Considerations
<Warning>
When using smaller language models (≤7B parameters) for training data evaluation, be aware that they may face challenges with generating structured outputs and following complex instructions.
</Warning>
### Limitations of Small Models in Training Evaluation
<CardGroup cols={2}>
<Card title="JSON Output Accuracy" icon="triangle-exclamation">
Smaller models often struggle with producing valid JSON responses needed for structured training evaluations, leading to parsing errors and incomplete data.
</Card>
<Card title="Evaluation Quality" icon="chart-line">
Models under 7B parameters may provide less nuanced evaluations with limited reasoning depth compared to larger models.
</Card>
<Card title="Instruction Following" icon="list-check">
Complex training evaluation criteria may not be fully followed or considered by smaller models.
</Card>
<Card title="Consistency" icon="rotate">
Evaluations across multiple training iterations may lack consistency with smaller models.
</Card>
</CardGroup>
### Recommendations for Training
<Tabs>
<Tab title="Best Practice">
For optimal training quality and reliable evaluations, we strongly recommend using models with at least 7B parameters or larger:
```python
from crewai import Agent, Crew, Task, LLM
# Recommended minimum for training evaluation
llm = LLM(model="mistral/open-mistral-7b")
# Better options for reliable training evaluation
llm = LLM(model="anthropic/claude-3-sonnet-20240229-v1:0")
llm = LLM(model="gpt-4o")
# Use this LLM with your agents
agent = Agent(
role="Training Evaluator",
goal="Provide accurate training feedback",
llm=llm
)
```
<Tip>
More powerful models provide higher quality feedback with better reasoning, leading to more effective training iterations.
</Tip>
</Tab>
<Tab title="Small Model Usage">
If you must use smaller models for training evaluation, be aware of these constraints:
```python
# Using a smaller model (expect some limitations)
llm = LLM(model="huggingface/microsoft/Phi-3-mini-4k-instruct")
```
<Warning>
While CrewAI includes optimizations for small models, expect less reliable and less nuanced evaluation results that may require more human intervention during training.
</Warning>
</Tab>
</Tabs>

View File

@@ -172,7 +172,7 @@ For teams and organizations, CrewAI offers enterprise deployment options that el
### CrewAI Factory (Self-hosted)
- Containerized deployment for your infrastructure
- Supports any hyperscaler including on prem depployments
- Supports any hyperscaler including on prem deployments
- Integration with your existing security systems
<Card title="Explore Enterprise Options" icon="building" href="https://crewai.com/enterprise">

View File

@@ -34,6 +34,7 @@ LiteLLM supports a wide range of providers, including but not limited to:
- DeepInfra
- Groq
- SambaNova
- Nebius AI Studio
- [NVIDIA NIMs](https://docs.api.nvidia.com/nim/reference/models-1)
- And many more!

16
docs/reo-tracking.js Normal file
View File

@@ -0,0 +1,16 @@
(function() {
var clientID = 'e1256ea7e23318f';
var initReo = function() {
Reo.init({
clientID: clientID
});
};
var script = document.createElement('script');
script.src = 'https://static.reo.dev/' + clientID + '/reo.js';
script.defer = true;
script.onload = initReo;
document.head.appendChild(script);
})();

View File

@@ -1,6 +1,6 @@
[project]
name = "crewai"
version = "0.134.0"
dynamic = ["version"]
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md"
requires-python = ">=3.10,<3.14"
@@ -11,7 +11,7 @@ dependencies = [
# Core Dependencies
"pydantic>=2.4.2",
"openai>=1.13.3",
"litellm==1.72.0",
"litellm==1.72.6",
"instructor>=1.3.3",
# Text Processing
"pdfplumber>=0.11.4",
@@ -117,6 +117,9 @@ torchvision = [
{ index = "pytorch", marker = "python_version < '3.13'" },
]
[tool.hatch.version]
path = "src/crewai/__init__.py"
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

View File

@@ -1,4 +1,6 @@
import warnings
import threading
import urllib.request
from crewai.agent import Agent
from crewai.crew import Crew
@@ -11,6 +13,7 @@ from crewai.process import Process
from crewai.task import Task
from crewai.tasks.llm_guardrail import LLMGuardrail
from crewai.tasks.task_output import TaskOutput
from crewai.telemetry.telemetry import Telemetry
warnings.filterwarnings(
"ignore",
@@ -18,6 +21,39 @@ warnings.filterwarnings(
category=UserWarning,
module="pydantic.main",
)
_telemetry_submitted = False
def _track_install():
"""Track package installation/first-use via Scarf analytics."""
global _telemetry_submitted
if _telemetry_submitted or Telemetry._is_telemetry_disabled():
return
try:
pixel_url = "https://api.scarf.sh/v2/packages/CrewAI/crewai/docs/00f2dad1-8334-4a39-934e-003b2e1146db"
req = urllib.request.Request(pixel_url)
req.add_header('User-Agent', f'CrewAI-Python/{__version__}')
with urllib.request.urlopen(req, timeout=2): # nosec B310
_telemetry_submitted = True
except Exception:
pass
def _track_install_async():
"""Track installation in background thread to avoid blocking imports."""
if not Telemetry._is_telemetry_disabled():
thread = threading.Thread(target=_track_install, daemon=True)
thread.start()
_track_install_async()
__version__ = "0.134.0"
__all__ = [
"Agent",
@@ -31,4 +67,5 @@ __all__ = [
"Knowledge",
"TaskOutput",
"LLMGuardrail",
"__version__",
]

View File

@@ -775,6 +775,7 @@ class Agent(BaseAgent):
LiteAgentOutput: The result of the agent execution.
"""
lite_agent = LiteAgent(
id=self.id,
role=self.role,
goal=self.goal,
backstory=self.backstory,

View File

@@ -159,6 +159,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
messages=self.messages,
callbacks=self.callbacks,
printer=self._printer,
from_task=self.task
)
formatted_answer = process_llm_response(answer, self.use_stop_words)

View File

@@ -5,8 +5,6 @@ from typing import Any, Dict
import requests
from rich.console import Console
from crewai.cli.tools.main import ToolCommand
from .constants import AUTH0_AUDIENCE, AUTH0_CLIENT_ID, AUTH0_DOMAIN
from .utils import TokenManager, validate_token
@@ -67,6 +65,7 @@ class AuthenticationCommand:
self.token_manager.save_tokens(token_data["access_token"], expires_in)
try:
from crewai.cli.tools.main import ToolCommand
ToolCommand().login()
except Exception:
console.print(

View File

@@ -14,8 +14,50 @@ from crewai.cli.utils import copy_template, load_env_vars, write_env_file
def create_folder_structure(name, parent_folder=None):
import keyword
import re
name = name.rstrip('/')
if not name.strip():
raise ValueError("Project name cannot be empty or contain only whitespace")
folder_name = name.replace(" ", "_").replace("-", "_").lower()
folder_name = re.sub(r'[^a-zA-Z0-9_]', '', folder_name)
# Check if the name starts with invalid characters or is primarily invalid
if re.match(r'^[^a-zA-Z0-9_-]+', name):
raise ValueError(f"Project name '{name}' contains no valid characters for a Python module name")
if not folder_name:
raise ValueError(f"Project name '{name}' contains no valid characters for a Python module name")
if folder_name[0].isdigit():
raise ValueError(f"Project name '{name}' would generate folder name '{folder_name}' which cannot start with a digit (invalid Python module name)")
if keyword.iskeyword(folder_name):
raise ValueError(f"Project name '{name}' would generate folder name '{folder_name}' which is a reserved Python keyword")
if not folder_name.isidentifier():
raise ValueError(f"Project name '{name}' would generate invalid Python module name '{folder_name}'")
class_name = name.replace("_", " ").replace("-", " ").title().replace(" ", "")
class_name = re.sub(r'[^a-zA-Z0-9_]', '', class_name)
if not class_name:
raise ValueError(f"Project name '{name}' contains no valid characters for a Python class name")
if class_name[0].isdigit():
raise ValueError(f"Project name '{name}' would generate class name '{class_name}' which cannot start with a digit")
# Check if the original name (before title casing) is a keyword
original_name_clean = re.sub(r'[^a-zA-Z0-9_]', '', name.replace("_", "").replace("-", "").lower())
if keyword.iskeyword(original_name_clean) or keyword.iskeyword(class_name) or class_name in ('True', 'False', 'None'):
raise ValueError(f"Project name '{name}' would generate class name '{class_name}' which is a reserved Python keyword")
if not class_name.isidentifier():
raise ValueError(f"Project name '{name}' would generate invalid Python class name '{class_name}'")
if parent_folder:
folder_path = Path(parent_folder) / folder_name

View File

@@ -252,7 +252,7 @@ def write_env_file(folder_path, env_vars):
env_file_path = folder_path / ".env"
with open(env_file_path, "w") as file:
for key, value in env_vars.items():
file.write(f"{key}={value}\n")
file.write(f"{key.upper()}={value}\n")
def get_crews(crew_path: str = "crew.py", require: bool = False) -> list[Crew]:
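A quick illustration of the effect of this one-line change, using the `write_env_file` helper imported earlier in this diff (values are hypothetical):

```python
from pathlib import Path

from crewai.cli.utils import write_env_file

# Hypothetical values; keys are now written upper-cased regardless of how the
# CLI provider flow supplies them. Assumes the my_crew/ directory exists.
write_env_file(Path("my_crew"), {"model": "azure/gpt-4", "azure_api_key": "..."})
# my_crew/.env now contains:
#   MODEL=azure/gpt-4
#   AZURE_API_KEY=...
```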

View File

@@ -15,12 +15,14 @@ from typing import (
get_origin,
)
try:
from typing import Self
except ImportError:
from typing_extensions import Self
from pydantic import (
UUID4,
BaseModel,
Field,
InstanceOf,
@@ -129,6 +131,7 @@ class LiteAgent(FlowTrackable, BaseModel):
model_config = {"arbitrary_types_allowed": True}
# Core Agent Properties
id: UUID4 = Field(default_factory=uuid.uuid4, frozen=True)
role: str = Field(description="Role of the agent")
goal: str = Field(description="Goal of the agent")
backstory: str = Field(description="Backstory of the agent")
@@ -517,6 +520,7 @@ class LiteAgent(FlowTrackable, BaseModel):
messages=self._messages,
tools=None,
callbacks=self._callbacks,
from_agent=self,
),
)
@@ -526,6 +530,7 @@ class LiteAgent(FlowTrackable, BaseModel):
messages=self._messages,
callbacks=self._callbacks,
printer=self._printer,
from_agent=self,
)
# Emit LLM call completed event
@@ -534,13 +539,14 @@ class LiteAgent(FlowTrackable, BaseModel):
event=LLMCallCompletedEvent(
response=answer,
call_type=LLMCallType.LLM_CALL,
from_agent=self,
),
)
except Exception as e:
# Emit LLM call failed event
crewai_event_bus.emit(
self,
event=LLMCallFailedEvent(error=str(e)),
event=LLMCallFailedEvent(error=str(e), from_agent=self),
)
raise e

View File

@@ -419,6 +419,8 @@ class LLM(BaseLLM):
params: Dict[str, Any],
callbacks: Optional[List[Any]] = None,
available_functions: Optional[Dict[str, Any]] = None,
from_task: Optional[Any] = None,
from_agent: Optional[Any] = None,
) -> str:
"""Handle a streaming response from the LLM.
@@ -426,6 +428,8 @@ class LLM(BaseLLM):
params: Parameters for the completion call
callbacks: Optional list of callback functions
available_functions: Dict of available functions
from_task: Optional task object
from_agent: Optional agent object
Returns:
str: The complete response text
@@ -510,6 +514,8 @@ class LLM(BaseLLM):
tool_calls=tool_calls,
accumulated_tool_args=accumulated_tool_args,
available_functions=available_functions,
from_task=from_task,
from_agent=from_agent,
)
if result is not None:
chunk_content = result
@@ -527,7 +533,7 @@ class LLM(BaseLLM):
assert hasattr(crewai_event_bus, "emit")
crewai_event_bus.emit(
self,
event=LLMStreamChunkEvent(chunk=chunk_content),
event=LLMStreamChunkEvent(chunk=chunk_content, from_task=from_task, from_agent=from_agent),
)
# --- 4) Fallback to non-streaming if no content received
if not full_response.strip() and chunk_count == 0:
@@ -540,7 +546,7 @@ class LLM(BaseLLM):
"stream_options", None
) # Remove stream_options for non-streaming call
return self._handle_non_streaming_response(
non_streaming_params, callbacks, available_functions
non_streaming_params, callbacks, available_functions, from_task, from_agent
)
# --- 5) Handle empty response with chunks
@@ -625,7 +631,7 @@ class LLM(BaseLLM):
# Log token usage if available in streaming mode
self._handle_streaming_callbacks(callbacks, usage_info, last_chunk)
# Emit completion event and return response
self._handle_emit_call_events(full_response, LLMCallType.LLM_CALL)
self._handle_emit_call_events(full_response, LLMCallType.LLM_CALL, from_task, from_agent)
return full_response
# --- 9) Handle tool calls if present
@@ -637,7 +643,7 @@ class LLM(BaseLLM):
self._handle_streaming_callbacks(callbacks, usage_info, last_chunk)
# --- 11) Emit completion event and return response
self._handle_emit_call_events(full_response, LLMCallType.LLM_CALL)
self._handle_emit_call_events(full_response, LLMCallType.LLM_CALL, from_task, from_agent)
return full_response
except ContextWindowExceededError as e:
@@ -649,14 +655,14 @@ class LLM(BaseLLM):
logging.error(f"Error in streaming response: {str(e)}")
if full_response.strip():
logging.warning(f"Returning partial response despite error: {str(e)}")
self._handle_emit_call_events(full_response, LLMCallType.LLM_CALL)
self._handle_emit_call_events(full_response, LLMCallType.LLM_CALL, from_task, from_agent)
return full_response
# Emit failed event and re-raise the exception
assert hasattr(crewai_event_bus, "emit")
crewai_event_bus.emit(
self,
event=LLMCallFailedEvent(error=str(e)),
event=LLMCallFailedEvent(error=str(e), from_task=from_task, from_agent=from_agent),
)
raise Exception(f"Failed to get streaming response: {str(e)}")
@@ -665,6 +671,8 @@ class LLM(BaseLLM):
tool_calls: List[ChatCompletionDeltaToolCall],
accumulated_tool_args: DefaultDict[int, AccumulatedToolArgs],
available_functions: Optional[Dict[str, Any]] = None,
from_task: Optional[Any] = None,
from_agent: Optional[Any] = None,
) -> None | str:
for tool_call in tool_calls:
current_tool_accumulator = accumulated_tool_args[tool_call.index]
@@ -682,6 +690,8 @@ class LLM(BaseLLM):
event=LLMStreamChunkEvent(
tool_call=tool_call.to_dict(),
chunk=tool_call.function.arguments,
from_task=from_task,
from_agent=from_agent,
),
)
@@ -748,6 +758,8 @@ class LLM(BaseLLM):
params: Dict[str, Any],
callbacks: Optional[List[Any]] = None,
available_functions: Optional[Dict[str, Any]] = None,
from_task: Optional[Any] = None,
from_agent: Optional[Any] = None,
) -> str:
"""Handle a non-streaming response from the LLM.
@@ -755,6 +767,8 @@ class LLM(BaseLLM):
params: Parameters for the completion call
callbacks: Optional list of callback functions
available_functions: Dict of available functions
from_task: Optional Task that invoked the LLM
from_agent: Optional Agent that invoked the LLM
Returns:
str: The response text
@@ -795,7 +809,7 @@ class LLM(BaseLLM):
# --- 5) If no tool calls or no available functions, return the text response directly
if not tool_calls or not available_functions:
self._handle_emit_call_events(text_response, LLMCallType.LLM_CALL)
self._handle_emit_call_events(text_response, LLMCallType.LLM_CALL, from_task, from_agent)
return text_response
# --- 6) Handle tool calls if present
@@ -804,7 +818,7 @@ class LLM(BaseLLM):
return tool_result
# --- 7) If tool call handling didn't return a result, emit completion event and return text response
self._handle_emit_call_events(text_response, LLMCallType.LLM_CALL)
self._handle_emit_call_events(text_response, LLMCallType.LLM_CALL, from_task, from_agent)
return text_response
def _handle_tool_call(
@@ -889,6 +903,8 @@ class LLM(BaseLLM):
tools: Optional[List[dict]] = None,
callbacks: Optional[List[Any]] = None,
available_functions: Optional[Dict[str, Any]] = None,
from_task: Optional[Any] = None,
from_agent: Optional[Any] = None,
) -> Union[str, Any]:
"""High-level LLM call method.
@@ -903,6 +919,8 @@ class LLM(BaseLLM):
during and after the LLM call.
available_functions: Optional dict mapping function names to callables
that can be invoked by the LLM.
from_task: Optional Task that invoked the LLM
from_agent: Optional Agent that invoked the LLM
Returns:
Union[str, Any]: Either a text response from the LLM (str) or
@@ -922,6 +940,8 @@ class LLM(BaseLLM):
tools=tools,
callbacks=callbacks,
available_functions=available_functions,
from_task=from_task,
from_agent=from_agent,
),
)
@@ -950,11 +970,11 @@ class LLM(BaseLLM):
# --- 7) Make the completion call and handle response
if self.stream:
return self._handle_streaming_response(
params, callbacks, available_functions
params, callbacks, available_functions, from_task, from_agent
)
else:
return self._handle_non_streaming_response(
params, callbacks, available_functions
params, callbacks, available_functions, from_task, from_agent
)
except LLMContextLengthExceededException:
@@ -966,12 +986,12 @@ class LLM(BaseLLM):
assert hasattr(crewai_event_bus, "emit")
crewai_event_bus.emit(
self,
event=LLMCallFailedEvent(error=str(e)),
event=LLMCallFailedEvent(error=str(e), from_task=from_task, from_agent=from_agent),
)
logging.error(f"LiteLLM call failed: {str(e)}")
raise
def _handle_emit_call_events(self, response: Any, call_type: LLMCallType):
def _handle_emit_call_events(self, response: Any, call_type: LLMCallType, from_task: Optional[Any] = None, from_agent: Optional[Any] = None):
"""Handle the events for the LLM call.
Args:
@@ -981,7 +1001,7 @@ class LLM(BaseLLM):
assert hasattr(crewai_event_bus, "emit")
crewai_event_bus.emit(
self,
event=LLMCallCompletedEvent(response=response, call_type=call_type),
event=LLMCallCompletedEvent(response=response, call_type=call_type, from_task=from_task, from_agent=from_agent),
)
def _format_messages_for_provider(

View File

@@ -1,5 +1,5 @@
from abc import ABC, abstractmethod
from typing import Any, Callable, Dict, List, Optional, Union
from typing import Any, Dict, List, Optional, Union
class BaseLLM(ABC):
@@ -47,6 +47,8 @@ class BaseLLM(ABC):
tools: Optional[List[dict]] = None,
callbacks: Optional[List[Any]] = None,
available_functions: Optional[Dict[str, Any]] = None,
from_task: Optional[Any] = None,
from_agent: Optional[Any] = None,
) -> Union[str, Any]:
"""Call the LLM with the given messages.
@@ -61,6 +63,7 @@ class BaseLLM(ABC):
during and after the LLM call.
available_functions: Optional dict mapping function names to callables
that can be invoked by the LLM.
from_task: Optional task caller to be used for the LLM call.
Returns:
Either a text response from the LLM (str) or

View File

@@ -16,6 +16,8 @@ class AISuiteLLM(BaseLLM):
tools: Optional[List[dict]] = None,
callbacks: Optional[List[Any]] = None,
available_functions: Optional[Dict[str, Any]] = None,
from_task: Optional[Any] = None,
from_agent: Optional[Any] = None,
) -> Union[str, Any]:
completion_params = self._prepare_completion_params(messages, tools)
response = self.client.chat.completions.create(**completion_params)

View File

@@ -111,11 +111,13 @@ class Telemetry:
raise # Re-raise the exception to not interfere with system signals
self.ready = False
def _is_telemetry_disabled(self) -> bool:
@classmethod
def _is_telemetry_disabled(cls) -> bool:
"""Check if telemetry should be disabled based on environment variables."""
return (
os.getenv("OTEL_SDK_DISABLED", "false").lower() == "true"
or os.getenv("CREWAI_DISABLE_TELEMETRY", "false").lower() == "true"
or os.getenv("CREWAI_DISABLE_TRACKING", "false").lower() == "true"
)
def _should_execute_telemetry(self) -> bool:
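Since `_is_telemetry_disabled` is now a classmethod checked at import time by the Scarf tracking added above, opting out remains an environment-variable toggle; a minimal sketch using the variables listed in this hunk:

```python
import os

# Set any of the variables checked above before importing crewai;
# _track_install_async() then never starts its background thread.
os.environ["CREWAI_DISABLE_TELEMETRY"] = "true"

import crewai  # noqa: E402  (imported after the env var is set on purpose)
```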

View File

@@ -145,12 +145,16 @@ def get_llm_response(
messages: List[Dict[str, str]],
callbacks: List[Any],
printer: Printer,
from_task: Optional[Any] = None,
from_agent: Optional[Any] = None,
) -> str:
"""Call the LLM and return the response, handling any invalid responses."""
try:
answer = llm.call(
messages,
callbacks=callbacks,
from_task=from_task,
from_agent=from_agent,
)
except Exception as e:
printer.print(

View File

@@ -5,6 +5,7 @@ from pydantic import BaseModel, Field
from crewai.utilities import Converter
from crewai.utilities.events import TaskEvaluationEvent, crewai_event_bus
from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser
from crewai.utilities.training_converter import TrainingConverter
class Entity(BaseModel):
@@ -133,7 +134,7 @@ class TaskEvaluator:
).get_schema()
instructions = f"{instructions}\n\nThe json should have the following structure, with the following keys:\n{model_schema}"
converter = Converter(
converter = TrainingConverter(
llm=self.llm,
text=evaluation_query,
model=TrainingTaskEvaluation,

View File

@@ -5,6 +5,32 @@ from pydantic import BaseModel
from crewai.utilities.events.base_events import BaseEvent
class LLMEventBase(BaseEvent):
task_name: Optional[str] = None
task_id: Optional[str] = None
agent_id: Optional[str] = None
agent_role: Optional[str] = None
def __init__(self, **data):
super().__init__(**data)
self._set_agent_params(data)
self._set_task_params(data)
def _set_agent_params(self, data: Dict[str, Any]):
task = data.get("from_task", None)
agent = task.agent if task else data.get("from_agent", None)
if not agent:
return
self.agent_id = agent.id
self.agent_role = agent.role
def _set_task_params(self, data: Dict[str, Any]):
if "from_task" in data and (task := data["from_task"]):
self.task_id = task.id
self.task_name = task.name
class LLMCallType(Enum):
"""Type of LLM call being made"""
@@ -13,7 +39,7 @@ class LLMCallType(Enum):
LLM_CALL = "llm_call"
class LLMCallStartedEvent(BaseEvent):
class LLMCallStartedEvent(LLMEventBase):
"""Event emitted when a LLM call starts
Attributes:
@@ -28,7 +54,7 @@ class LLMCallStartedEvent(BaseEvent):
available_functions: Optional[Dict[str, Any]] = None
class LLMCallCompletedEvent(BaseEvent):
class LLMCallCompletedEvent(LLMEventBase):
"""Event emitted when a LLM call completes"""
type: str = "llm_call_completed"
@@ -36,7 +62,7 @@ class LLMCallCompletedEvent(BaseEvent):
call_type: LLMCallType
class LLMCallFailedEvent(BaseEvent):
class LLMCallFailedEvent(LLMEventBase):
"""Event emitted when a LLM call fails"""
error: str
@@ -55,7 +81,7 @@ class ToolCall(BaseModel):
index: int
class LLMStreamChunkEvent(BaseEvent):
class LLMStreamChunkEvent(LLMEventBase):
"""Event emitted when a streaming chunk is received"""
type: str = "llm_stream_chunk"

View File

@@ -0,0 +1,89 @@
import json
import re
from typing import Any, get_origin
from pydantic import BaseModel, ValidationError
from crewai.utilities.converter import Converter, ConverterError
class TrainingConverter(Converter):
"""
A specialized converter for smaller LLMs (up to 7B parameters) that handles validation errors
by breaking down the model into individual fields and querying the LLM for each field separately.
"""
def to_pydantic(self, current_attempt=1) -> BaseModel:
try:
return super().to_pydantic(current_attempt)
except ConverterError:
return self._convert_field_by_field()
def _convert_field_by_field(self) -> BaseModel:
field_values = {}
for field_name, field_info in self.model.model_fields.items():
field_description = field_info.description
field_type = field_info.annotation
response = self._ask_llm_for_field(field_name, field_description)
value = self._process_field_value(response, field_type)
field_values[field_name] = value
try:
return self.model(**field_values)
except ValidationError as e:
raise ConverterError(f"Failed to create model from individually collected fields: {e}")
def _ask_llm_for_field(self, field_name: str, field_description: str) -> str:
prompt = f"""
Based on the following information:
{self.text}
Please provide ONLY the {field_name} field value as described:
"{field_description}"
Respond with ONLY the requested information, nothing else.
"""
return self.llm.call([
{"role": "system", "content": f"Extract the {field_name} from the previous information."},
{"role": "user", "content": prompt}
])
def _process_field_value(self, response: str, field_type: Any) -> Any:
response = response.strip()
origin = get_origin(field_type)
if origin is list:
return self._parse_list(response)
if field_type is float:
return self._parse_float(response)
if field_type is str:
return response
return response
def _parse_list(self, response: str) -> list:
try:
if response.startswith('['):
return json.loads(response)
items = [item.strip() for item in response.split('\n') if item.strip()]
return [self._strip_bullet(item) for item in items]
except json.JSONDecodeError:
return [response]
def _parse_float(self, response: str) -> float:
try:
match = re.search(r'(\d+(\.\d+)?)', response)
return float(match.group(1)) if match else 0.0
except Exception:
return 0.0
def _strip_bullet(self, item: str) -> str:
if item.startswith(('- ', '* ')):
return item[2:].strip()
return item.strip()
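A hedged usage sketch of this fallback, mirroring the `TaskEvaluator` wiring earlier in this diff (the output schema and the `instructions` argument are illustrative assumptions, not taken from the file):

```python
from pydantic import BaseModel, Field

from crewai import LLM
from crewai.utilities.training_converter import TrainingConverter


# Illustrative schema; the real call site uses TrainingTaskEvaluation.
class EvaluationOutput(BaseModel):
    suggestions: list[str] = Field(description="Suggestions to improve future outputs")
    quality: float = Field(description="Score from 0 to 10 for the output quality")


llm = LLM(model="mistral/open-mistral-7b")  # small model that may emit invalid JSON
converter = TrainingConverter(
    llm=llm,
    text="Agent output and human feedback collected during training ...",
    model=EvaluationOutput,
    instructions="Return the evaluation as valid JSON",  # assumed parameter
)
result = converter.to_pydantic()  # falls back to one prompt per field on ConverterError
```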

View File

@@ -44,7 +44,7 @@ class TestAuthenticationCommand(unittest.TestCase):
mock_print.assert_any_call("2. Enter the following code: ", "ABCDEF")
mock_open.assert_called_once_with("https://example.com")
@patch("crewai.cli.authentication.main.ToolCommand")
@patch("crewai.cli.tools.main.ToolCommand")
@patch("crewai.cli.authentication.main.requests.post")
@patch("crewai.cli.authentication.main.validate_token")
@patch("crewai.cli.authentication.main.console.print")

View File

@@ -0,0 +1,278 @@
import keyword
import shutil
import tempfile
from pathlib import Path
from unittest import mock
import pytest
from click.testing import CliRunner
from crewai.cli.create_crew import create_crew, create_folder_structure
@pytest.fixture
def runner():
return CliRunner()
@pytest.fixture
def temp_dir():
temp_path = tempfile.mkdtemp()
yield temp_path
shutil.rmtree(temp_path)
def test_create_folder_structure_strips_single_trailing_slash():
with tempfile.TemporaryDirectory() as temp_dir:
folder_path, folder_name, class_name = create_folder_structure("hello/", parent_folder=temp_dir)
assert folder_name == "hello"
assert class_name == "Hello"
assert folder_path.name == "hello"
assert folder_path.exists()
assert folder_path.parent == Path(temp_dir)
def test_create_folder_structure_strips_multiple_trailing_slashes():
with tempfile.TemporaryDirectory() as temp_dir:
folder_path, folder_name, class_name = create_folder_structure("hello///", parent_folder=temp_dir)
assert folder_name == "hello"
assert class_name == "Hello"
assert folder_path.name == "hello"
assert folder_path.exists()
assert folder_path.parent == Path(temp_dir)
def test_create_folder_structure_handles_complex_name_with_trailing_slash():
with tempfile.TemporaryDirectory() as temp_dir:
folder_path, folder_name, class_name = create_folder_structure("my-awesome_project/", parent_folder=temp_dir)
assert folder_name == "my_awesome_project"
assert class_name == "MyAwesomeProject"
assert folder_path.name == "my_awesome_project"
assert folder_path.exists()
assert folder_path.parent == Path(temp_dir)
def test_create_folder_structure_normal_name_unchanged():
with tempfile.TemporaryDirectory() as temp_dir:
folder_path, folder_name, class_name = create_folder_structure("hello", parent_folder=temp_dir)
assert folder_name == "hello"
assert class_name == "Hello"
assert folder_path.name == "hello"
assert folder_path.exists()
assert folder_path.parent == Path(temp_dir)
def test_create_folder_structure_with_parent_folder():
with tempfile.TemporaryDirectory() as temp_dir:
parent_path = Path(temp_dir) / "parent"
parent_path.mkdir()
folder_path, folder_name, class_name = create_folder_structure("child/", parent_folder=parent_path)
assert folder_name == "child"
assert class_name == "Child"
assert folder_path.name == "child"
assert folder_path.parent == parent_path
assert folder_path.exists()
@mock.patch("crewai.cli.create_crew.copy_template")
@mock.patch("crewai.cli.create_crew.write_env_file")
@mock.patch("crewai.cli.create_crew.load_env_vars")
def test_create_crew_with_trailing_slash_creates_valid_project(mock_load_env, mock_write_env, mock_copy_template, temp_dir):
mock_load_env.return_value = {}
with tempfile.TemporaryDirectory() as work_dir:
with mock.patch("crewai.cli.create_crew.create_folder_structure") as mock_create_folder:
mock_folder_path = Path(work_dir) / "test_project"
mock_create_folder.return_value = (mock_folder_path, "test_project", "TestProject")
create_crew("test-project/", skip_provider=True)
mock_create_folder.assert_called_once_with("test-project/", None)
mock_copy_template.assert_called()
copy_calls = mock_copy_template.call_args_list
for call in copy_calls:
args = call[0]
if len(args) >= 5:
folder_name_arg = args[4]
assert not folder_name_arg.endswith("/"), f"folder_name should not end with slash: {folder_name_arg}"
@mock.patch("crewai.cli.create_crew.copy_template")
@mock.patch("crewai.cli.create_crew.write_env_file")
@mock.patch("crewai.cli.create_crew.load_env_vars")
def test_create_crew_with_multiple_trailing_slashes(mock_load_env, mock_write_env, mock_copy_template, temp_dir):
mock_load_env.return_value = {}
with tempfile.TemporaryDirectory() as work_dir:
with mock.patch("crewai.cli.create_crew.create_folder_structure") as mock_create_folder:
mock_folder_path = Path(work_dir) / "test_project"
mock_create_folder.return_value = (mock_folder_path, "test_project", "TestProject")
create_crew("test-project///", skip_provider=True)
mock_create_folder.assert_called_once_with("test-project///", None)
@mock.patch("crewai.cli.create_crew.copy_template")
@mock.patch("crewai.cli.create_crew.write_env_file")
@mock.patch("crewai.cli.create_crew.load_env_vars")
def test_create_crew_normal_name_still_works(mock_load_env, mock_write_env, mock_copy_template, temp_dir):
mock_load_env.return_value = {}
with tempfile.TemporaryDirectory() as work_dir:
with mock.patch("crewai.cli.create_crew.create_folder_structure") as mock_create_folder:
mock_folder_path = Path(work_dir) / "normal_project"
mock_create_folder.return_value = (mock_folder_path, "normal_project", "NormalProject")
create_crew("normal-project", skip_provider=True)
mock_create_folder.assert_called_once_with("normal-project", None)
def test_create_folder_structure_handles_spaces_and_dashes_with_slash():
with tempfile.TemporaryDirectory() as temp_dir:
folder_path, folder_name, class_name = create_folder_structure("My Cool-Project/", parent_folder=temp_dir)
assert folder_name == "my_cool_project"
assert class_name == "MyCoolProject"
assert folder_path.name == "my_cool_project"
assert folder_path.exists()
assert folder_path.parent == Path(temp_dir)
def test_create_folder_structure_raises_error_for_invalid_names():
with tempfile.TemporaryDirectory() as temp_dir:
invalid_cases = [
("123project/", "cannot start with a digit"),
("True/", "reserved Python keyword"),
("False/", "reserved Python keyword"),
("None/", "reserved Python keyword"),
("class/", "reserved Python keyword"),
("def/", "reserved Python keyword"),
(" /", "empty or contain only whitespace"),
("", "empty or contain only whitespace"),
("@#$/", "contains no valid characters"),
]
for invalid_name, expected_error in invalid_cases:
with pytest.raises(ValueError, match=expected_error):
create_folder_structure(invalid_name, parent_folder=temp_dir)
def test_create_folder_structure_validates_names():
with tempfile.TemporaryDirectory() as temp_dir:
valid_cases = [
("hello/", "hello", "Hello"),
("my-project/", "my_project", "MyProject"),
("hello_world/", "hello_world", "HelloWorld"),
("valid123/", "valid123", "Valid123"),
("hello.world/", "helloworld", "HelloWorld"),
("hello@world/", "helloworld", "HelloWorld"),
]
for valid_name, expected_folder, expected_class in valid_cases:
folder_path, folder_name, class_name = create_folder_structure(valid_name, parent_folder=temp_dir)
assert folder_name == expected_folder
assert class_name == expected_class
assert folder_name.isidentifier(), f"folder_name '{folder_name}' should be valid Python identifier"
assert not keyword.iskeyword(folder_name), f"folder_name '{folder_name}' should not be Python keyword"
assert not folder_name[0].isdigit(), f"folder_name '{folder_name}' should not start with digit"
assert class_name.isidentifier(), f"class_name '{class_name}' should be valid Python identifier"
assert not keyword.iskeyword(class_name), f"class_name '{class_name}' should not be Python keyword"
assert folder_path.parent == Path(temp_dir)
if folder_path.exists():
shutil.rmtree(folder_path)
@mock.patch("crewai.cli.create_crew.copy_template")
@mock.patch("crewai.cli.create_crew.write_env_file")
@mock.patch("crewai.cli.create_crew.load_env_vars")
def test_create_crew_with_parent_folder_and_trailing_slash(mock_load_env, mock_write_env, mock_copy_template, temp_dir):
mock_load_env.return_value = {}
with tempfile.TemporaryDirectory() as work_dir:
parent_path = Path(work_dir) / "parent"
parent_path.mkdir()
create_crew("child-crew/", skip_provider=True, parent_folder=parent_path)
crew_path = parent_path / "child_crew"
assert crew_path.exists()
assert not (crew_path / "src").exists()
def test_create_folder_structure_folder_name_validation():
"""Test that folder names are validated as valid Python module names"""
with tempfile.TemporaryDirectory() as temp_dir:
folder_invalid_cases = [
("123invalid/", "cannot start with a digit.*invalid Python module name"),
("import/", "reserved Python keyword"),
("class/", "reserved Python keyword"),
("for/", "reserved Python keyword"),
("@#$invalid/", "contains no valid characters.*Python module name"),
]
for invalid_name, expected_error in folder_invalid_cases:
with pytest.raises(ValueError, match=expected_error):
create_folder_structure(invalid_name, parent_folder=temp_dir)
valid_cases = [
("hello-world/", "hello_world"),
("my.project/", "myproject"),
("test@123/", "test123"),
("valid_name/", "valid_name"),
]
for valid_name, expected_folder in valid_cases:
folder_path, folder_name, class_name = create_folder_structure(valid_name, parent_folder=temp_dir)
assert folder_name == expected_folder
assert folder_name.isidentifier()
assert not keyword.iskeyword(folder_name)
if folder_path.exists():
shutil.rmtree(folder_path)
@mock.patch("crewai.cli.create_crew.create_folder_structure")
@mock.patch("crewai.cli.create_crew.copy_template")
@mock.patch("crewai.cli.create_crew.load_env_vars")
@mock.patch("crewai.cli.create_crew.get_provider_data")
@mock.patch("crewai.cli.create_crew.select_provider")
@mock.patch("crewai.cli.create_crew.select_model")
@mock.patch("click.prompt")
def test_env_vars_are_uppercased_in_env_file(
mock_prompt,
mock_select_model,
mock_select_provider,
mock_get_provider_data,
mock_load_env_vars,
mock_copy_template,
mock_create_folder_structure,
tmp_path
):
crew_path = tmp_path / "test_crew"
crew_path.mkdir()
mock_create_folder_structure.return_value = (crew_path, "test_crew", "TestCrew")
mock_load_env_vars.return_value = {}
mock_get_provider_data.return_value = {"openai": ["gpt-4"]}
mock_select_provider.return_value = "azure"
mock_select_model.return_value = "azure/openai"
mock_prompt.return_value = "fake-api-key"
create_crew("Test Crew")
env_file_path = crew_path / ".env"
content = env_file_path.read_text()
assert "MODEL=" in content

17
tests/cli/test_version.py Normal file
View File

@@ -0,0 +1,17 @@
"""Test for version management."""
from crewai import __version__
from crewai.cli.version import get_crewai_version
def test_dynamic_versioning_consistency():
"""Test that dynamic versioning provides consistent version across all access methods."""
cli_version = get_crewai_version()
package_version = __version__
# Both should return the same version string
assert cli_version == package_version
# Version should not be empty
assert package_version is not None
assert len(package_version.strip()) > 0

View File

@@ -1,5 +1,4 @@
from typing import Any, Dict, List, Optional, Union
from unittest.mock import Mock
import pytest
@@ -31,6 +30,8 @@ class CustomLLM(BaseLLM):
tools=None,
callbacks=None,
available_functions=None,
from_task=None,
from_agent=None,
):
"""
Mock LLM call that returns a predefined response.

View File

@@ -192,7 +192,7 @@ def test_lite_agent_structured_output():
)
result = agent.kickoff(
"What is the population of Tokyo? Return your strucutred output in JSON format with the following fields: summary, confidence",
"What is the population of Tokyo? Return your structured output in JSON format with the following fields: summary, confidence",
response_format=SimpleOutput,
)
@@ -230,7 +230,7 @@ def test_lite_agent_returns_usage_metrics():
)
result = agent.kickoff(
"What is the population of Tokyo? Return your strucutred output in JSON format with the following fields: summary, confidence"
"What is the population of Tokyo? Return your structured output in JSON format with the following fields: summary, confidence"
)
assert result.usage_metrics is not None
@@ -252,7 +252,7 @@ async def test_lite_agent_returns_usage_metrics_async():
)
result = await agent.kickoff_async(
"What is the population of Tokyo? Return your strucutred output in JSON format with the following fields: summary, confidence"
"What is the population of Tokyo? Return your structured output in JSON format with the following fields: summary, confidence"
)
assert isinstance(result, LiteAgentOutput)
assert "21 million" in result.raw or "37 million" in result.raw

View File

@@ -0,0 +1,171 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Speaker. You are a
helpful assistant that just says hi\nYour personal goal is: Just say hi\n\nTo
give my best complete final answer to the task respond using the exact following
format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
answer must be the great and the most complete as possible, it must be outcome
described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
"content": "say hi!"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"],
"stream": true, "stream_options": {"include_usage": true}}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '602'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.78.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.78.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.12
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-BoGFzpBc0nuAKcVrYlEEztNwzrUG6","object":"chat.completion.chunk","created":1751318591,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"role":"assistant","content":"","refusal":null},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoGFzpBc0nuAKcVrYlEEztNwzrUG6","object":"chat.completion.chunk","created":1751318591,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoGFzpBc0nuAKcVrYlEEztNwzrUG6","object":"chat.completion.chunk","created":1751318591,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoGFzpBc0nuAKcVrYlEEztNwzrUG6","object":"chat.completion.chunk","created":1751318591,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoGFzpBc0nuAKcVrYlEEztNwzrUG6","object":"chat.completion.chunk","created":1751318591,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":"
now"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoGFzpBc0nuAKcVrYlEEztNwzrUG6","object":"chat.completion.chunk","created":1751318591,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":"
can"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoGFzpBc0nuAKcVrYlEEztNwzrUG6","object":"chat.completion.chunk","created":1751318591,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":"
give"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoGFzpBc0nuAKcVrYlEEztNwzrUG6","object":"chat.completion.chunk","created":1751318591,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoGFzpBc0nuAKcVrYlEEztNwzrUG6","object":"chat.completion.chunk","created":1751318591,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":"
great"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoGFzpBc0nuAKcVrYlEEztNwzrUG6","object":"chat.completion.chunk","created":1751318591,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoGFzpBc0nuAKcVrYlEEztNwzrUG6","object":"chat.completion.chunk","created":1751318591,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":" \n"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoGFzpBc0nuAKcVrYlEEztNwzrUG6","object":"chat.completion.chunk","created":1751318591,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoGFzpBc0nuAKcVrYlEEztNwzrUG6","object":"chat.completion.chunk","created":1751318591,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoGFzpBc0nuAKcVrYlEEztNwzrUG6","object":"chat.completion.chunk","created":1751318591,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoGFzpBc0nuAKcVrYlEEztNwzrUG6","object":"chat.completion.chunk","created":1751318591,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":"
Hi"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoGFzpBc0nuAKcVrYlEEztNwzrUG6","object":"chat.completion.chunk","created":1751318591,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":"!"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoGFzpBc0nuAKcVrYlEEztNwzrUG6","object":"chat.completion.chunk","created":1751318591,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}],"usage":null}
data: {"id":"chatcmpl-BoGFzpBc0nuAKcVrYlEEztNwzrUG6","object":"chat.completion.chunk","created":1751318591,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[],"usage":{"prompt_tokens":99,"completion_tokens":15,"total_tokens":114,"prompt_tokens_details":{"cached_tokens":0,"audio_tokens":0},"completion_tokens_details":{"reasoning_tokens":0,"audio_tokens":0,"accepted_prediction_tokens":0,"rejected_prediction_tokens":0}}}
data: [DONE]
'
headers:
CF-RAY:
- 9580b92adce5e838-GRU
Connection:
- keep-alive
Content-Type:
- text/event-stream; charset=utf-8
Date:
- Mon, 30 Jun 2025 21:23:12 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=nhFmL5HNobQWdbf2Sd9Z8X9ad5zXKG7Ln7MlzuiuwP8-1751318592-1.0.1.1-5qDyF6nVC5d8PDerEmHSOgyWEYdzMdgyFRXqgiJB3FSyWWnvzL4PyVp6LGx9z0P5iTX8PNbxfUOEOYX.7bFaK6p.CyxLaXK7WpnQ3zeasG8;
path=/; expires=Mon, 30-Jun-25 21:53:12 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=APKo781sOKEk.HlN5nFBT1Mkid8Lj04kw6JPleI78bU-1751318592001-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '321'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '326'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999896'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_0b0f668953604810c182b1e83e9709fe
status:
code: 200
message: OK
version: 1
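
The cassette above records the model's streamed reply as Server-Sent Events: each data: line carries a JSON chunk whose choices[0].delta.content holds one token, and the stream terminates with data: [DONE]. As a rough illustration (not part of the diff; the helper name is made up), a recorded body like this one can be reassembled into the full completion text:

import json

def reassemble_stream(body: str) -> str:
    # Illustrative helper: concatenate the delta tokens from an SSE cassette
    # body into the complete completion text.
    pieces = []
    for raw_line in body.splitlines():
        line = raw_line.strip()
        if not line.startswith("data:"):
            continue
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        for choice in chunk.get("choices", []):
            content = choice.get("delta", {}).get("content")
            if content:
                pieces.append(content)
    return "".join(pieces)

# For the stream recorded above this yields the agent's full reply,
# ending in "Final Answer: Hi!".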


@@ -0,0 +1,118 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are base_agent. You are
a helpful assistant that just says hi\nYour personal goal is: Just say hi\nTo
give my best complete final answer to the task respond using the exact following
format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
answer must be the great and the most complete as possible, it must be outcome
described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
"content": "\nCurrent Task: Just say hi\n\nThis is the expected criteria for
your final answer: hi\nyou MUST return the actual complete content as the final
answer, not a summary.\n\nBegin! This is VERY important to you, use the tools
available and give your best Final Answer, your job depends on it!\n\nThought:"}],
"model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '838'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.78.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.78.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.12
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAA4xSTW/bMAy9+1cQOsdDnI9m861ZMGAbBuyyHbYWBiMxtjaZEiS5aVHkvw+y09jd
OqAXA+bje3qP5GMGILQSJQjZYJStM/nW7j6z+vb9R739+mm707vj7sv2gffq/v2VEbPEsPtfJOMT
6420rTMUteUBlp4wUlItNutiOd8s15seaK0ik2i1i/nK5q1mnS/mi1U+3+TF2zO7sVpSECX8zAAA
Hvtv8smK7kUJ89lTpaUQsCZRXpoAhLcmVQSGoENEjmI2gtJyJO6tfwS2R5DIUOs7AoQ62QbkcCQP
cMMfNKOB6/6/hEZPdTwduoApC3fGTABkthHTLPoEt2fkdPFsbO283Ye/qOKgWYem8oTBcvIXonWi
R08ZwG0/m+5ZXOG8bV2sov1N/XPFVTHoiXElE3RxBqONaCb1zXL2gl6lKKI2YTJdIVE2pEbquArs
lLYTIJuk/tfNS9pDcs31a+RHQEpykVTlPCktnyce2zyli/1f22XKvWERyN9pSVXU5NMmFB2wM8Md
ifAQIrXVQXNN3nk9HNPBVcsVrldI75ZSZKfsDwAAAP//AwBulbOoWgMAAA==
headers:
CF-RAY:
- 957fa6e91a22023d-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Mon, 30 Jun 2025 18:15:58 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=9WXNY0u6p0Nlyb1G36cXHDgtwb1538JzaUNoS4tgrpo-1751307358-1.0.1.1-BAvg6Fgqsv3ITFxrC3z3E42AqgSZcGq4Gr1Wrjx56TOsljYynqCePNzQ79oAncT9KXehFnUMxA6JSf2lAfQOeSLVREY3_P6GjPkbcwIsVXw;
path=/; expires=Mon, 30-Jun-25 18:45:58 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=N5kr6p8e26f8scPW5s2uVOatzh2RYjlQb13eQUBsrts-1751307358295-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '308'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '310'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999823'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_78bb0375ac6e0939c8e05f66869e0137
status:
code: 200
message: OK
version: 1


@@ -0,0 +1,165 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are TestAgent. You are
a helpful assistant that just says hi\nYour personal goal is: Just say hi\nTo
give my best complete final answer to the task respond using the exact following
format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
answer must be the great and the most complete as possible, it must be outcome
described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
"content": "\nCurrent Task: Just say hi\n\nThis is the expected criteria for
your final answer: hi\nyou MUST return the actual complete content as the final
answer, not a summary.\n\nBegin! This is VERY important to you, use the tools
available and give your best Final Answer, your job depends on it!\n\nThought:"}],
"model": "gpt-4o-mini", "stop": ["\nObservation:"], "stream": true, "stream_options":
{"include_usage": true}}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '896'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.78.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.78.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.12
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-BoFTMrRBBaVsju5o285dR7JBeVVvS","object":"chat.completion.chunk","created":1751315576,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"role":"assistant","content":"","refusal":null},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoFTMrRBBaVsju5o285dR7JBeVVvS","object":"chat.completion.chunk","created":1751315576,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":"I"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoFTMrRBBaVsju5o285dR7JBeVVvS","object":"chat.completion.chunk","created":1751315576,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":"
now"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoFTMrRBBaVsju5o285dR7JBeVVvS","object":"chat.completion.chunk","created":1751315576,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":"
can"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoFTMrRBBaVsju5o285dR7JBeVVvS","object":"chat.completion.chunk","created":1751315576,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":"
give"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoFTMrRBBaVsju5o285dR7JBeVVvS","object":"chat.completion.chunk","created":1751315576,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoFTMrRBBaVsju5o285dR7JBeVVvS","object":"chat.completion.chunk","created":1751315576,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":"
great"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoFTMrRBBaVsju5o285dR7JBeVVvS","object":"chat.completion.chunk","created":1751315576,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoFTMrRBBaVsju5o285dR7JBeVVvS","object":"chat.completion.chunk","created":1751315576,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":" \n"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoFTMrRBBaVsju5o285dR7JBeVVvS","object":"chat.completion.chunk","created":1751315576,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoFTMrRBBaVsju5o285dR7JBeVVvS","object":"chat.completion.chunk","created":1751315576,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoFTMrRBBaVsju5o285dR7JBeVVvS","object":"chat.completion.chunk","created":1751315576,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoFTMrRBBaVsju5o285dR7JBeVVvS","object":"chat.completion.chunk","created":1751315576,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{"content":"
hi"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BoFTMrRBBaVsju5o285dR7JBeVVvS","object":"chat.completion.chunk","created":1751315576,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}],"usage":null}
data: {"id":"chatcmpl-BoFTMrRBBaVsju5o285dR7JBeVVvS","object":"chat.completion.chunk","created":1751315576,"model":"gpt-4o-mini-2024-07-18","service_tier":"default","system_fingerprint":"fp_34a54ae93c","choices":[],"usage":{"prompt_tokens":161,"completion_tokens":12,"total_tokens":173,"prompt_tokens_details":{"cached_tokens":0,"audio_tokens":0},"completion_tokens_details":{"reasoning_tokens":0,"audio_tokens":0,"accepted_prediction_tokens":0,"rejected_prediction_tokens":0}}}
data: [DONE]
'
headers:
CF-RAY:
- 95806f8e3fc2f213-GRU
Connection:
- keep-alive
Content-Type:
- text/event-stream; charset=utf-8
Date:
- Mon, 30 Jun 2025 20:32:56 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=dxb.Rn1CsTQLjW9eU0KWonEuID9KkkVRJ1FaBBsW5gQ-1751315576-1.0.1.1-R7bLnCfrjJwtHYzbKEE9in7ilYfymelWgYg1OcPqSEllAFA9_R2ctsY0f7Mrv7i0dXaynAooDpLs9hGIzfgyBR9EgkRjqoaHbByPXjxy_5s;
path=/; expires=Mon, 30-Jun-25 21:02:56 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=MwdjLsfFXJDWzbJseVfA4MIpVAqLa7envAu7EAkdK4o-1751315576696-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '238'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '241'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999824'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_394b055696dfcdc221b5ecd0fba49e97
status:
code: 200
message: OK
version: 1


@@ -1,13 +1,15 @@
from unittest import mock
from unittest.mock import MagicMock, patch
from crewai.utilities.evaluators.task_evaluator import (
TaskEvaluator,
TrainingTaskEvaluation,
)
from crewai.utilities.converter import ConverterError
@patch("crewai.utilities.evaluators.task_evaluator.Converter")
@patch("crewai.utilities.evaluators.task_evaluator.TrainingConverter")
def test_evaluate_training_data(converter_mock):
training_data = {
"agent_id": {
@@ -63,3 +65,39 @@ def test_evaluate_training_data(converter_mock):
mock.call().to_pydantic(),
]
)
@patch("crewai.utilities.converter.Converter.to_pydantic")
@patch("crewai.utilities.training_converter.TrainingConverter._convert_field_by_field")
def test_training_converter_fallback_mechanism(convert_field_by_field_mock, to_pydantic_mock):
training_data = {
"agent_id": {
"data1": {
"initial_output": "Initial output 1",
"human_feedback": "Human feedback 1",
"improved_output": "Improved output 1",
},
"data2": {
"initial_output": "Initial output 2",
"human_feedback": "Human feedback 2",
"improved_output": "Improved output 2",
},
}
}
agent_id = "agent_id"
to_pydantic_mock.side_effect = ConverterError("Failed to convert directly")
expected_result = TrainingTaskEvaluation(
suggestions=["Fallback suggestion"],
quality=6.5,
final_summary="Fallback summary"
)
convert_field_by_field_mock.return_value = expected_result
original_agent = MagicMock()
result = TaskEvaluator(original_agent=original_agent).evaluate_training_data(
training_data, agent_id
)
assert result == expected_result
to_pydantic_mock.assert_called_once()
convert_field_by_field_mock.assert_called_once()


@@ -57,23 +57,28 @@ def vcr_config(request) -> dict:
}
base_agent = Agent(
role="base_agent",
llm="gpt-4o-mini",
goal="Just say hi",
backstory="You are a helpful assistant that just says hi",
@pytest.fixture(scope="module")
def base_agent():
return Agent(
role="base_agent",
llm="gpt-4o-mini",
goal="Just say hi",
backstory="You are a helpful assistant that just says hi",
)
base_task = Task(
description="Just say hi",
expected_output="hi",
agent=base_agent,
)
@pytest.fixture(scope="module")
def base_task(base_agent):
return Task(
description="Just say hi",
expected_output="hi",
agent=base_agent,
)
event_listener = EventListener()
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_emits_start_kickoff_event():
def test_crew_emits_start_kickoff_event(base_agent, base_task):
received_events = []
mock_span = Mock()
@@ -101,7 +106,7 @@ def test_crew_emits_start_kickoff_event():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_emits_end_kickoff_event():
def test_crew_emits_end_kickoff_event(base_agent, base_task):
received_events = []
@crewai_event_bus.on(CrewKickoffCompletedEvent)
@@ -119,7 +124,7 @@ def test_crew_emits_end_kickoff_event():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_emits_test_kickoff_type_event():
def test_crew_emits_test_kickoff_type_event(base_agent, base_task):
received_events = []
mock_span = Mock()
@@ -165,7 +170,7 @@ def test_crew_emits_test_kickoff_type_event():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_emits_kickoff_failed_event():
def test_crew_emits_kickoff_failed_event(base_agent, base_task):
received_events = []
with crewai_event_bus.scoped_handlers():
@@ -190,7 +195,7 @@ def test_crew_emits_kickoff_failed_event():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_emits_start_task_event():
def test_crew_emits_start_task_event(base_agent, base_task):
received_events = []
@crewai_event_bus.on(TaskStartedEvent)
@@ -207,7 +212,7 @@ def test_crew_emits_start_task_event():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_emits_end_task_event():
def test_crew_emits_end_task_event(base_agent, base_task):
received_events = []
@crewai_event_bus.on(TaskCompletedEvent)
@@ -235,7 +240,7 @@ def test_crew_emits_end_task_event():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_task_emits_failed_event_on_execution_error():
def test_task_emits_failed_event_on_execution_error(base_agent, base_task):
received_events = []
received_sources = []
@@ -272,7 +277,7 @@ def test_task_emits_failed_event_on_execution_error():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_emits_execution_started_and_completed_events():
def test_agent_emits_execution_started_and_completed_events(base_agent, base_task):
received_events = []
@crewai_event_bus.on(AgentExecutionStartedEvent)
@@ -301,7 +306,7 @@ def test_agent_emits_execution_started_and_completed_events():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_emits_execution_error_event():
def test_agent_emits_execution_error_event(base_agent, base_task):
received_events = []
@crewai_event_bus.on(AgentExecutionErrorEvent)
@@ -501,7 +506,7 @@ def test_flow_emits_method_execution_started_event():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_register_handler_adds_new_handler():
def test_register_handler_adds_new_handler(base_agent, base_task):
received_events = []
def custom_handler(source, event):
@@ -519,7 +524,7 @@ def test_register_handler_adds_new_handler():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_multiple_handlers_for_same_event():
def test_multiple_handlers_for_same_event(base_agent, base_task):
received_events_1 = []
received_events_2 = []
@@ -613,6 +618,11 @@ def test_llm_emits_call_started_event():
assert received_events[0].type == "llm_call_started"
assert received_events[1].type == "llm_call_completed"
assert received_events[0].task_name is None
assert received_events[0].agent_role is None
assert received_events[0].agent_id is None
assert received_events[0].task_id is None
@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_emits_call_failed_event():
@@ -632,6 +642,10 @@ def test_llm_emits_call_failed_event():
assert len(received_events) == 1
assert received_events[0].type == "llm_call_failed"
assert received_events[0].error == error_message
assert received_events[0].task_name is None
assert received_events[0].agent_role is None
assert received_events[0].agent_id is None
assert received_events[0].task_id is None
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -742,7 +756,6 @@ def test_streaming_empty_response_handling():
received_chunks = []
with crewai_event_bus.scoped_handlers():
@crewai_event_bus.on(LLMStreamChunkEvent)
def handle_stream_chunk(source, event):
received_chunks.append(event.chunk)
@@ -779,3 +792,167 @@ def test_streaming_empty_response_handling():
finally:
# Restore the original method
llm.call = original_call
@pytest.mark.vcr(filter_headers=["authorization"])
def test_stream_llm_emits_event_with_task_and_agent_info():
completed_event = []
failed_event = []
started_event = []
stream_event = []
with crewai_event_bus.scoped_handlers():
@crewai_event_bus.on(LLMCallFailedEvent)
def handle_llm_failed(source, event):
failed_event.append(event)
@crewai_event_bus.on(LLMCallStartedEvent)
def handle_llm_started(source, event):
started_event.append(event)
@crewai_event_bus.on(LLMCallCompletedEvent)
def handle_llm_completed(source, event):
completed_event.append(event)
@crewai_event_bus.on(LLMStreamChunkEvent)
def handle_llm_stream_chunk(source, event):
stream_event.append(event)
agent = Agent(
role="TestAgent",
llm=LLM(model="gpt-4o-mini", stream=True),
goal="Just say hi",
backstory="You are a helpful assistant that just says hi",
)
task = Task(
description="Just say hi",
expected_output="hi",
llm=LLM(model="gpt-4o-mini", stream=True),
agent=agent
)
crew = Crew(agents=[agent], tasks=[task])
crew.kickoff()
assert len(completed_event) == 1
assert len(failed_event) == 0
assert len(started_event) == 1
assert len(stream_event) == 12
all_events = completed_event + failed_event + started_event + stream_event
all_agent_roles = [event.agent_role for event in all_events]
all_agent_id = [event.agent_id for event in all_events]
all_task_id = [event.task_id for event in all_events]
all_task_name = [event.task_name for event in all_events]
# ensure all events have the agent + task props set
assert len(all_agent_roles) == 14
assert len(all_agent_id) == 14
assert len(all_task_id) == 14
assert len(all_task_name) == 14
assert set(all_agent_roles) == {agent.role}
assert set(all_agent_id) == {agent.id}
assert set(all_task_id) == {task.id}
assert set(all_task_name) == {task.name}
@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_emits_event_with_task_and_agent_info(base_agent, base_task):
completed_event = []
failed_event = []
started_event = []
stream_event = []
with crewai_event_bus.scoped_handlers():
@crewai_event_bus.on(LLMCallFailedEvent)
def handle_llm_failed(source, event):
failed_event.append(event)
@crewai_event_bus.on(LLMCallStartedEvent)
def handle_llm_started(source, event):
started_event.append(event)
@crewai_event_bus.on(LLMCallCompletedEvent)
def handle_llm_completed(source, event):
completed_event.append(event)
@crewai_event_bus.on(LLMStreamChunkEvent)
def handle_llm_stream_chunk(source, event):
stream_event.append(event)
crew = Crew(agents=[base_agent], tasks=[base_task])
crew.kickoff()
assert len(completed_event) == 1
assert len(failed_event) == 0
assert len(started_event) == 1
assert len(stream_event) == 0
all_events = completed_event + failed_event + started_event + stream_event
all_agent_roles = [event.agent_role for event in all_events]
all_agent_id = [event.agent_id for event in all_events]
all_task_id = [event.task_id for event in all_events]
all_task_name = [event.task_name for event in all_events]
# ensure all events have the agent + task props set
assert len(all_agent_roles) == 2
assert len(all_agent_id) == 2
assert len(all_task_id) == 2
assert len(all_task_name) == 2
assert set(all_agent_roles) == {base_agent.role}
assert set(all_agent_id) == {base_agent.id}
assert set(all_task_id) == {base_task.id}
assert set(all_task_name) == {base_task.name}
@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_emits_event_with_lite_agent():
completed_event = []
failed_event = []
started_event = []
stream_event = []
with crewai_event_bus.scoped_handlers():
@crewai_event_bus.on(LLMCallFailedEvent)
def handle_llm_failed(source, event):
failed_event.append(event)
@crewai_event_bus.on(LLMCallStartedEvent)
def handle_llm_started(source, event):
started_event.append(event)
@crewai_event_bus.on(LLMCallCompletedEvent)
def handle_llm_completed(source, event):
completed_event.append(event)
@crewai_event_bus.on(LLMStreamChunkEvent)
def handle_llm_stream_chunk(source, event):
stream_event.append(event)
agent = Agent(
role="Speaker",
llm=LLM(model="gpt-4o-mini", stream=True),
goal="Just say hi",
backstory="You are a helpful assistant that just says hi",
)
agent.kickoff(messages=[{"role": "user", "content": "say hi!"}])
assert len(completed_event) == 2
assert len(failed_event) == 0
assert len(started_event) == 2
assert len(stream_event) == 15
all_events = completed_event + failed_event + started_event + stream_event
all_agent_roles = [event.agent_role for event in all_events]
all_agent_id = [event.agent_id for event in all_events]
all_task_id = [event.task_id for event in all_events if event.task_id]
all_task_name = [event.task_name for event in all_events if event.task_name]
# ensure all events have the agent + task props set
assert len(all_agent_roles) == 19
assert len(all_agent_id) == 19
assert len(all_task_id) == 0
assert len(all_task_name) == 0
assert set(all_agent_roles) == {agent.role}
assert set(all_agent_id) == {agent.id}
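
The assertions above cover the new task/agent metadata carried by LLM events. As a hedged usage sketch (the import paths below are assumptions, since the test file's imports are not shown in this hunk), the same metadata can be consumed outside the test suite:

from crewai import Agent, Crew, Task
from crewai.utilities.events import crewai_event_bus
from crewai.utilities.events.llm_events import LLMCallCompletedEvent, LLMCallStartedEvent

agent = Agent(
    role="base_agent",
    llm="gpt-4o-mini",
    goal="Just say hi",
    backstory="You are a helpful assistant that just says hi",
)
task = Task(description="Just say hi", expected_output="hi", agent=agent)

with crewai_event_bus.scoped_handlers():
    @crewai_event_bus.on(LLMCallStartedEvent)
    def on_llm_started(source, event):
        # For crew runs these fields identify the originating agent and task;
        # for bare LLM calls they stay None (see the assertions above).
        print("started:", event.agent_role, event.task_name)

    @crewai_event_bus.on(LLMCallCompletedEvent)
    def on_llm_completed(source, event):
        print("completed:", event.agent_id, event.task_id)

    Crew(agents=[agent], tasks=[task]).kickoff()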


@@ -0,0 +1,97 @@
from unittest.mock import MagicMock, patch
from pydantic import BaseModel, Field
from typing import List
from crewai.utilities.converter import ConverterError
from crewai.utilities.training_converter import TrainingConverter
class TestModel(BaseModel):
string_field: str = Field(description="A simple string field")
list_field: List[str] = Field(description="A list of strings")
number_field: float = Field(description="A number field")
class TestTrainingConverter:
def setup_method(self):
self.llm_mock = MagicMock()
self.test_text = "Sample text for evaluation"
self.test_instructions = "Convert to JSON format"
self.converter = TrainingConverter(
llm=self.llm_mock,
text=self.test_text,
model=TestModel,
instructions=self.test_instructions
)
@patch("crewai.utilities.converter.Converter.to_pydantic")
def test_fallback_to_field_by_field(self, parent_to_pydantic_mock):
parent_to_pydantic_mock.side_effect = ConverterError("Failed to convert directly")
llm_responses = {
"string_field": "test string value",
"list_field": "- item1\n- item2\n- item3",
"number_field": "8.5"
}
def llm_side_effect(messages):
prompt = messages[1]["content"]
if "string_field" in prompt:
return llm_responses["string_field"]
elif "list_field" in prompt:
return llm_responses["list_field"]
elif "number_field" in prompt:
return llm_responses["number_field"]
return "unknown field"
self.llm_mock.call.side_effect = llm_side_effect
result = self.converter.to_pydantic()
assert result.string_field == "test string value"
assert result.list_field == ["item1", "item2", "item3"]
assert result.number_field == 8.5
parent_to_pydantic_mock.assert_called_once()
assert self.llm_mock.call.call_count == 3
def test_ask_llm_for_field(self):
field_name = "test_field"
field_description = "This is a test field description"
expected_response = "Test response"
self.llm_mock.call.return_value = expected_response
response = self.converter._ask_llm_for_field(field_name, field_description)
assert response == expected_response
self.llm_mock.call.assert_called_once()
call_args = self.llm_mock.call.call_args[0][0]
assert call_args[0]["role"] == "system"
assert f"Extract the {field_name}" in call_args[0]["content"]
assert call_args[1]["role"] == "user"
assert field_name in call_args[1]["content"]
assert field_description in call_args[1]["content"]
def test_process_field_value_string(self):
response = " This is a string with extra whitespace "
result = self.converter._process_field_value(response, str)
assert result == "This is a string with extra whitespace"
def test_process_field_value_list_with_bullet_points(self):
response = "- Item 1\n- Item 2\n- Item 3"
result = self.converter._process_field_value(response, List[str])
assert result == ["Item 1", "Item 2", "Item 3"]
def test_process_field_value_list_with_json(self):
response = '["Item 1", "Item 2", "Item 3"]'
with patch("crewai.utilities.training_converter.json.loads") as json_mock:
json_mock.return_value = ["Item 1", "Item 2", "Item 3"]
result = self.converter._process_field_value(response, List[str])
assert result == ["Item 1", "Item 2", "Item 3"]
def test_process_field_value_float(self):
response = "The quality score is 8.5 out of 10"
result = self.converter._process_field_value(response, float)
assert result == 8.5
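
For context, a minimal usage sketch of the fallback path these tests exercise; the sample text and instructions are illustrative, and to_pydantic() is the only public entry point shown in the tests above:

from crewai import LLM
from crewai.utilities.evaluators.task_evaluator import TrainingTaskEvaluation
from crewai.utilities.training_converter import TrainingConverter

converter = TrainingConverter(
    llm=LLM(model="gpt-4o-mini"),
    text="Initial output ... Human feedback ... Improved output ...",  # illustrative input
    model=TrainingTaskEvaluation,
    instructions="Summarize the feedback as suggestions, a quality score and a final summary",
)

# to_pydantic() first tries a direct conversion of the whole response; if that
# raises ConverterError it falls back to asking the LLM for each field
# (suggestions, quality, final_summary) and assembling the model from the parts.
evaluation = converter.to_pydantic()
print(evaluation.quality, evaluation.suggestions, evaluation.final_summary)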

uv.lock generated

File diff suppressed because it is too large