Compare commits

...

13 Commits

Author SHA1 Message Date
Devin AI
6dcc7ac725 Fix: Replace SystemExit with LLMContextLengthExceededError in handle_context_length
- Changed handle_context_length to raise LLMContextLengthExceededError instead of SystemExit when respect_context_window=False
- This allows proper exception handling and prevents the entire application from terminating
- Added comprehensive unit tests to verify the fix
- Updated test imports to include LLMContextLengthExceededError
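A minimal caller-side sketch of what this change enables, assuming a crew configured with `respect_context_window=False` (the agent/task definitions below are illustrative; the exception import path matches the one used in the updated tests):

```python
from crewai import Agent, Crew, Task
from crewai.utilities.exceptions.context_window_exceeding_exception import (
    LLMContextLengthExceededError,
)

# Illustrative setup: any agent running with respect_context_window=False
# can now surface the overflow as a catchable exception.
researcher = Agent(
    role="Researcher",
    goal="Summarize a very large document",
    backstory="Handles long-form inputs",
    respect_context_window=False,
)
summary_task = Task(
    description="Summarize the provided corpus",
    expected_output="A short summary",
    agent=researcher,
)

try:
    Crew(agents=[researcher], tasks=[summary_task]).kickoff()
except LLMContextLengthExceededError:
    # Previously this path raised SystemExit and terminated the whole process.
    print("Context window exceeded; retry with smaller chunks or a RAG tool.")
```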

Fixes #3774

Co-Authored-By: João <joao@crewai.com>
2025-10-22 04:53:24 +00:00
Greyson LaLonde
4371cf5690 chore: remove aisuite
Little usage + blocking some features
2025-10-21 23:18:06 -04:00
Lorenze Jay
d28daa26cd feat: bump versions to 1.1.0 (#3770)
* feat: bump versions to 1.1.0

* chore: bump template versions

---------

Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>
2025-10-21 15:52:44 -07:00
Lorenze Jay
a850813f2b feat: enhance InternalInstructor to support multiple LLM providers (#3767)
* feat: enhance InternalInstructor to support multiple LLM providers

- Updated InternalInstructor to conditionally create an instructor client based on the LLM provider.
- Introduced a new method _create_instructor_client to handle client creation using the modern from_provider pattern.
- Added functionality to extract the provider from the LLM model name.
- Implemented tests for InternalInstructor with various LLM providers including OpenAI, Anthropic, Gemini, and Azure, ensuring robust integration and error handling.

This update improves flexibility and extensibility for different LLM integrations.
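For reference, a standalone sketch of the `from_provider` pattern this commit adopts (the provider/model string and the extracted schema are illustrative; assumes the `instructor` package is installed):

```python
from pydantic import BaseModel
import instructor


class Profile(BaseModel):
    name: str
    age: int


# Same "provider/model" shape CrewAI model names use; the provider prefix
# selects the native SDK instead of routing everything through litellm.
client = instructor.from_provider("openai/gpt-4o-mini")

profile = client.chat.completions.create(
    response_model=Profile,
    messages=[{"role": "user", "content": "Ada is a 36-year-old engineer."}],
)
print(profile.name, profile.age)
```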

* fix test
2025-10-21 15:24:59 -07:00
Cameron Warren
5944a39629 fix: correct broken integration documentation links
Fix navigation paths for two integration tool cards that were redirecting to the
introduction page instead of their intended documentation pages.

Fixes #3516

Co-authored-by: Cwarre33 <cwarre33@charlotte.edu>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-21 18:12:08 -04:00
Greyson LaLonde
c594859ed0 feat: mypy plugin base
* feat: base mypy plugin with CrewBase

* fix: add crew method to protocol
2025-10-21 17:36:08 -04:00
Daniel Barreto
2ee27efca7 feat: improvements on QdrantVectorSearchTool
* Implement improvements on QdrantVectorSearchTool

- Allow search filters to be set at the constructor level
- Fix issue that prevented multiple records from being returned

* Implement improvements on QdrantVectorSearchTool

- Allow search filters to be set at the constructor level
- Fix issue that prevented multiple records from being returned
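A usage sketch of the constructor-level filters, with field names taken from the new `QdrantConfig` model further down in this diff (the `QdrantConfig` import location, URL, and collection name are assumptions):

```python
from crewai_tools import QdrantVectorSearchTool, QdrantConfig  # QdrantConfig export path may differ

config = QdrantConfig(
    qdrant_url="https://my-qdrant.example.com",  # illustrative endpoint
    qdrant_api_key=None,                         # or an API key for hosted Qdrant
    collection_name="internal_docs",
    limit=5,                                     # multiple records are now returned
    score_threshold=0.35,
    filter_conditions=[("department", "engineering")],  # applied to every search
)

tool = QdrantVectorSearchTool(qdrant_config=config)

# Per-call filters are merged with the constructor-level conditions.
print(tool.run(query="onboarding checklist", filter_by="doc_type", filter_value="runbook"))
```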

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-21 16:50:08 -04:00
Greyson LaLonde
f6e13eb890 chore: update codeql config paths to new folders
* chore: update codeql config paths to new folders

* tests: use threading.Condition for event check

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-10-21 14:43:25 -04:00
Lorenze Jay
e7b3ce27ca docs: update LLM integration details and examples
* docs: update LLM integration details and examples

- Changed references from LiteLLM to native SDKs for LLM providers.
- Enhanced OpenAI and AWS Bedrock sections with new usage examples and advanced configuration options.
- Added structured output examples and supported environment variables for better clarity.
- Improved documentation on additional parameters and features for LLM configurations.

* drop this example - should use structured output from task instead

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-21 14:39:50 -04:00
Greyson LaLonde
dba27cf8b5 fix: fix double trace call; add types
* fix: fix double trace call; add types

* tests: skip long running uv install test, refactor in future

---------

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-10-21 14:15:39 -04:00
Greyson LaLonde
6469f224f6 chore: improve CrewBase typing 2025-10-21 13:58:35 -04:00
Greyson LaLonde
f3a63be215 tests: cassettes, threading for flow tests 2025-10-21 13:48:21 -04:00
Greyson LaLonde
01d8c189f0 fix: pin template versions to latest
2025-10-21 10:56:41 -04:00
35 changed files with 10627 additions and 4580 deletions

View File

@@ -2,20 +2,27 @@ name: "CodeQL Config"
paths-ignore:
# Ignore template files - these are boilerplate code that shouldn't be analyzed
- "src/crewai/cli/templates/**"
- "lib/crewai/src/crewai/cli/templates/**"
# Ignore test cassettes - these are test fixtures/recordings
- "tests/cassettes/**"
- "lib/crewai/tests/cassettes/**"
- "lib/crewai-tools/tests/cassettes/**"
# Ignore cache and build artifacts
- ".cache/**"
# Ignore documentation build artifacts
- "docs/.cache/**"
# Ignore experimental code
- "lib/crewai/src/crewai/experimental/a2a/**"
paths:
# Include all Python source code
- "src/**"
# Include tests (but exclude cassettes)
- "tests/**"
# Include all Python source code from workspace packages
- "lib/crewai/src/**"
- "lib/crewai-tools/src/**"
- "lib/devtools/src/**"
# Include tests (but exclude cassettes via paths-ignore)
- "lib/crewai/tests/**"
- "lib/crewai-tools/tests/**"
- "lib/devtools/tests/**"
# Configure specific queries or packs if needed
# queries:
# - uses: security-and-quality
# - uses: security-and-quality

View File

@@ -7,7 +7,7 @@ mode: "wide"
## Overview
CrewAI integrates with multiple LLM providers through LiteLLM, giving you the flexibility to choose the right model for your specific use case. This guide will help you understand how to configure and use different LLM providers in your CrewAI projects.
CrewAI integrates with multiple LLM providers through provider-native SDKs, giving you the flexibility to choose the right model for your specific use case. This guide will help you understand how to configure and use different LLM providers in your CrewAI projects.
## What are LLMs?
@@ -113,44 +113,104 @@ In this section, you'll find detailed examples that help you select, configure,
<AccordionGroup>
<Accordion title="OpenAI">
Set the following environment variables in your `.env` file:
CrewAI provides native integration with OpenAI through the OpenAI Python SDK.
```toml Code
# Required
OPENAI_API_KEY=sk-...
# Optional
OPENAI_API_BASE=<custom-base-url>
OPENAI_ORGANIZATION=<your-org-id>
OPENAI_BASE_URL=<custom-base-url>
```
Example usage in your CrewAI project:
**Basic Usage:**
```python Code
from crewai import LLM
llm = LLM(
model="openai/gpt-4", # call model by provider/model_name
temperature=0.8,
max_tokens=150,
model="openai/gpt-4o",
api_key="your-api-key", # Or set OPENAI_API_KEY
temperature=0.7,
max_tokens=4000
)
```
**Advanced Configuration:**
```python Code
from crewai import LLM
llm = LLM(
model="openai/gpt-4o",
api_key="your-api-key",
base_url="https://api.openai.com/v1", # Optional custom endpoint
organization="org-...", # Optional organization ID
project="proj_...", # Optional project ID
temperature=0.7,
max_tokens=4000,
max_completion_tokens=4000, # For newer models
top_p=0.9,
frequency_penalty=0.1,
presence_penalty=0.1,
stop=["END"],
seed=42
seed=42, # For reproducible outputs
stream=True, # Enable streaming
timeout=60.0, # Request timeout in seconds
max_retries=3, # Maximum retry attempts
logprobs=True, # Return log probabilities
top_logprobs=5, # Number of most likely tokens
reasoning_effort="medium" # For o1 models: low, medium, high
)
```
OpenAI is one of the leading providers of LLMs with a wide range of models and features.
**Structured Outputs:**
```python Code
from pydantic import BaseModel
from crewai import LLM
class ResponseFormat(BaseModel):
name: str
age: int
summary: str
llm = LLM(
model="openai/gpt-4o",
)
```
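As the follow-up commit note suggests, structured output is meant to come from the task rather than the LLM call; a minimal sketch of that approach (the agent and task fields here are illustrative):
```python Code
from pydantic import BaseModel
from crewai import Agent, Task

class ResponseFormat(BaseModel):
    name: str
    age: int
    summary: str

profiler = Agent(
    role="Profiler",
    goal="Extract structured facts about a person",
    backstory="Turns prose into clean records",
)

profile_task = Task(
    description="Extract name, age and a one-line summary from: {bio}",
    expected_output="A structured profile",
    agent=profiler,
    output_pydantic=ResponseFormat,  # structured output declared on the task
)
```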
**Supported Environment Variables:**
- `OPENAI_API_KEY`: Your OpenAI API key (required)
- `OPENAI_BASE_URL`: Custom base URL for OpenAI API (optional)
**Features:**
- Native function calling support (except o1 models)
- Structured outputs with JSON schema
- Streaming support for real-time responses
- Token usage tracking
- Stop sequences support (except o1 models)
- Log probabilities for token-level insights
- Reasoning effort control for o1 models
**Supported Models:**
| Model | Context Window | Best For |
|---------------------|------------------|-----------------------------------------------|
| GPT-4 | 8,192 tokens | High-accuracy tasks, complex reasoning |
| GPT-4 Turbo | 128,000 tokens | Long-form content, document analysis |
| GPT-4o & GPT-4o-mini | 128,000 tokens | Cost-effective large context processing |
| o3-mini | 200,000 tokens | Fast reasoning, complex reasoning |
| o1-mini | 128,000 tokens | Fast reasoning, complex reasoning |
| o1-preview | 128,000 tokens | Fast reasoning, complex reasoning |
| o1 | 200,000 tokens | Fast reasoning, complex reasoning |
| gpt-4.1 | 1M tokens | Latest model with enhanced capabilities |
| gpt-4.1-mini | 1M tokens | Efficient version with large context |
| gpt-4.1-nano | 1M tokens | Ultra-efficient variant |
| gpt-4o | 128,000 tokens | Optimized for speed and intelligence |
| gpt-4o-mini | 200,000 tokens | Cost-effective with large context |
| gpt-4-turbo | 128,000 tokens | Long-form content, document analysis |
| gpt-4 | 8,192 tokens | High-accuracy tasks, complex reasoning |
| o1 | 200,000 tokens | Advanced reasoning, complex problem-solving |
| o1-preview | 128,000 tokens | Preview of reasoning capabilities |
| o1-mini | 128,000 tokens | Efficient reasoning model |
| o3-mini | 200,000 tokens | Lightweight reasoning model |
| o4-mini | 200,000 tokens | Next-gen efficient reasoning |
**Note:** To use OpenAI, install the required dependencies:
```bash
uv add "crewai[openai]"
```
</Accordion>
<Accordion title="Meta-Llama">
@@ -187,69 +247,186 @@ In this section, you'll find detailed examples that help you select, configure,
</Accordion>
<Accordion title="Anthropic">
CrewAI provides native integration with Anthropic through the Anthropic Python SDK.
```toml Code
# Required
ANTHROPIC_API_KEY=sk-ant-...
# Optional
ANTHROPIC_API_BASE=<custom-base-url>
```
Example usage in your CrewAI project:
**Basic Usage:**
```python Code
from crewai import LLM
llm = LLM(
model="anthropic/claude-3-sonnet-20240229-v1:0",
temperature=0.7
model="anthropic/claude-3-5-sonnet-20241022",
api_key="your-api-key", # Or set ANTHROPIC_API_KEY
max_tokens=4096 # Required for Anthropic
)
```
**Advanced Configuration:**
```python Code
from crewai import LLM
llm = LLM(
model="anthropic/claude-3-5-sonnet-20241022",
api_key="your-api-key",
base_url="https://api.anthropic.com", # Optional custom endpoint
temperature=0.7,
max_tokens=4096, # Required parameter
top_p=0.9,
stop_sequences=["END", "STOP"], # Anthropic uses stop_sequences
stream=True, # Enable streaming
timeout=60.0, # Request timeout in seconds
max_retries=3 # Maximum retry attempts
)
```
**Supported Environment Variables:**
- `ANTHROPIC_API_KEY`: Your Anthropic API key (required)
**Features:**
- Native tool use support for Claude 3+ models
- Streaming support for real-time responses
- Automatic system message handling
- Stop sequences for controlled output
- Token usage tracking
- Multi-turn tool use conversations
**Important Notes:**
- `max_tokens` is a **required** parameter for all Anthropic models
- Claude uses `stop_sequences` instead of `stop`
- System messages are handled separately from conversation messages
- First message must be from the user (automatically handled)
- Messages must alternate between user and assistant
**Supported Models:**
| Model | Context Window | Best For |
|------------------------------|----------------|-----------------------------------------------|
| claude-3-7-sonnet | 200,000 tokens | Advanced reasoning and agentic tasks |
| claude-3-5-sonnet-20241022 | 200,000 tokens | Latest Sonnet with best performance |
| claude-3-5-haiku | 200,000 tokens | Fast, compact model for quick responses |
| claude-3-opus | 200,000 tokens | Most capable for complex tasks |
| claude-3-sonnet | 200,000 tokens | Balanced intelligence and speed |
| claude-3-haiku | 200,000 tokens | Fastest for simple tasks |
| claude-2.1 | 200,000 tokens | Extended context, reduced hallucinations |
| claude-2 | 100,000 tokens | Versatile model for various tasks |
| claude-instant | 100,000 tokens | Fast, cost-effective for everyday tasks |
**Note:** To use Anthropic, install the required dependencies:
```bash
uv add "crewai[anthropic]"
```
</Accordion>
<Accordion title="Google (Gemini API)">
Set your API key in your `.env` file. If you need a key, or need to find an
existing key, check [AI Studio](https://aistudio.google.com/apikey).
CrewAI provides native integration with Google Gemini through the Google Gen AI Python SDK.
Set your API key in your `.env` file. If you need a key, check [AI Studio](https://aistudio.google.com/apikey).
```toml .env
# https://ai.google.dev/gemini-api/docs/api-key
# Required (one of the following)
GOOGLE_API_KEY=<your-api-key>
GEMINI_API_KEY=<your-api-key>
# Optional - for Vertex AI
GOOGLE_CLOUD_PROJECT=<your-project-id>
GOOGLE_CLOUD_LOCATION=<location> # Defaults to us-central1
GOOGLE_GENAI_USE_VERTEXAI=true # Set to use Vertex AI
```
Example usage in your CrewAI project:
**Basic Usage:**
```python Code
from crewai import LLM
llm = LLM(
model="gemini/gemini-2.0-flash",
temperature=0.7,
api_key="your-api-key", # Or set GOOGLE_API_KEY/GEMINI_API_KEY
temperature=0.7
)
```
### Gemini models
**Advanced Configuration:**
```python Code
from crewai import LLM
llm = LLM(
model="gemini/gemini-2.5-flash",
api_key="your-api-key",
temperature=0.7,
top_p=0.9,
top_k=40, # Top-k sampling parameter
max_output_tokens=8192,
stop_sequences=["END", "STOP"],
stream=True, # Enable streaming
safety_settings={
"HARM_CATEGORY_HARASSMENT": "BLOCK_NONE",
"HARM_CATEGORY_HATE_SPEECH": "BLOCK_NONE"
}
)
```
**Vertex AI Configuration:**
```python Code
from crewai import LLM
llm = LLM(
model="gemini/gemini-1.5-pro",
project="your-gcp-project-id",
location="us-central1" # GCP region
)
```
**Supported Environment Variables:**
- `GOOGLE_API_KEY` or `GEMINI_API_KEY`: Your Google API key (required for Gemini API)
- `GOOGLE_CLOUD_PROJECT`: Google Cloud project ID (for Vertex AI)
- `GOOGLE_CLOUD_LOCATION`: GCP location (defaults to `us-central1`)
- `GOOGLE_GENAI_USE_VERTEXAI`: Set to `true` to use Vertex AI
**Features:**
- Native function calling support for Gemini 1.5+ and 2.x models
- Streaming support for real-time responses
- Multimodal capabilities (text, images, video)
- Safety settings configuration
- Support for both Gemini API and Vertex AI
- Automatic system instruction handling
- Token usage tracking
**Gemini Models:**
Google offers a range of powerful models optimized for different use cases.
| Model | Context Window | Best For |
|--------------------------------|----------------|-------------------------------------------------------------------|
| gemini-2.5-flash-preview-04-17 | 1M tokens | Adaptive thinking, cost efficiency |
| gemini-2.5-pro-preview-05-06 | 1M tokens | Enhanced thinking and reasoning, multimodal understanding, advanced coding, and more |
| gemini-2.0-flash | 1M tokens | Next generation features, speed, thinking, and realtime streaming |
| gemini-2.5-flash | 1M tokens | Adaptive thinking, cost efficiency |
| gemini-2.5-pro | 1M tokens | Enhanced thinking and reasoning, multimodal understanding |
| gemini-2.0-flash | 1M tokens | Next generation features, speed, thinking |
| gemini-2.0-flash-thinking | 32,768 tokens | Advanced reasoning with thinking process |
| gemini-2.0-flash-lite | 1M tokens | Cost efficiency and low latency |
| gemini-1.5-pro | 2M tokens | Best performing, logical reasoning, coding |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
| gemini-1.5-flash-8b | 1M tokens | Fastest, most cost-efficient |
| gemini-1.0-pro | 32,768 tokens | Earlier generation model |
**Gemma Models:**
The Gemini API also supports [Gemma models](https://ai.google.dev/gemma/docs) hosted on Google infrastructure.
| Model | Context Window | Best For |
|----------------|----------------|------------------------------------|
| gemma-3-1b | 32,000 tokens | Ultra-lightweight tasks |
| gemma-3-4b | 128,000 tokens | Efficient general-purpose tasks |
| gemma-3-12b | 128,000 tokens | Balanced performance and efficiency|
| gemma-3-27b | 128,000 tokens | High-performance tasks |
**Note:** To use Google Gemini, install the required dependencies:
```bash
uv add "crewai[google-genai]"
```
The full list of models is available in the [Gemini model docs](https://ai.google.dev/gemini-api/docs/models).
### Gemma
The Gemini API also allows you to use your API key to access [Gemma models](https://ai.google.dev/gemma/docs) hosted on Google infrastructure.
| Model | Context Window |
|----------------|----------------|
| gemma-3-1b-it | 32k tokens |
| gemma-3-4b-it | 32k tokens |
| gemma-3-12b-it | 32k tokens |
| gemma-3-27b-it | 128k tokens |
</Accordion>
<Accordion title="Google (Vertex AI)">
Get credentials from your Google Cloud Console and save them to a JSON file, then load it with the following code:
@@ -291,43 +468,146 @@ In this section, you'll find detailed examples that help you select, configure,
</Accordion>
<Accordion title="Azure">
CrewAI provides native integration with Azure AI Inference and Azure OpenAI through the Azure AI Inference Python SDK.
```toml Code
# Required
AZURE_API_KEY=<your-api-key>
AZURE_API_BASE=<your-resource-url>
AZURE_API_VERSION=<api-version>
AZURE_ENDPOINT=<your-endpoint-url>
# Optional
AZURE_AD_TOKEN=<your-azure-ad-token>
AZURE_API_TYPE=<your-azure-api-type>
AZURE_API_VERSION=<api-version> # Defaults to 2024-06-01
```
Example usage in your CrewAI project:
**Endpoint URL Formats:**
For Azure OpenAI deployments:
```
https://<resource-name>.openai.azure.com/openai/deployments/<deployment-name>
```
For Azure AI Inference endpoints:
```
https://<resource-name>.inference.azure.com
```
**Basic Usage:**
```python Code
llm = LLM(
model="azure/gpt-4",
api_version="2023-05-15"
api_key="<your-api-key>", # Or set AZURE_API_KEY
endpoint="<your-endpoint-url>",
api_version="2024-06-01"
)
```
**Advanced Configuration:**
```python Code
llm = LLM(
model="azure/gpt-4o",
temperature=0.7,
max_tokens=4000,
top_p=0.9,
frequency_penalty=0.0,
presence_penalty=0.0,
stop=["END"],
stream=True,
timeout=60.0,
max_retries=3
)
```
**Supported Environment Variables:**
- `AZURE_API_KEY`: Your Azure API key (required)
- `AZURE_ENDPOINT`: Your Azure endpoint URL (required, also checks `AZURE_OPENAI_ENDPOINT` and `AZURE_API_BASE`)
- `AZURE_API_VERSION`: API version (optional, defaults to `2024-06-01`)
**Features:**
- Native function calling support for Azure OpenAI models (gpt-4, gpt-4o, gpt-3.5-turbo, etc.)
- Streaming support for real-time responses
- Automatic endpoint URL validation and correction
- Comprehensive error handling with retry logic
- Token usage tracking
**Note:** To use Azure AI Inference, install the required dependencies:
```bash
uv add "crewai[azure-ai-inference]"
```
</Accordion>
<Accordion title="AWS Bedrock">
CrewAI provides native integration with AWS Bedrock through the boto3 SDK using the Converse API.
```toml Code
# Required
AWS_ACCESS_KEY_ID=<your-access-key>
AWS_SECRET_ACCESS_KEY=<your-secret-key>
AWS_DEFAULT_REGION=<your-region>
# Optional
AWS_SESSION_TOKEN=<your-session-token> # For temporary credentials
AWS_DEFAULT_REGION=<your-region> # Defaults to us-east-1
```
Example usage in your CrewAI project:
**Basic Usage:**
```python Code
from crewai import LLM
llm = LLM(
model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
region_name="us-east-1"
)
```
Before using Amazon Bedrock, make sure you have boto3 installed in your environment
[Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html) is a managed service that provides access to multiple foundation models from top AI companies through a unified API, enabling secure and responsible AI application development.
**Advanced Configuration:**
```python Code
from crewai import LLM
llm = LLM(
model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
aws_access_key_id="your-access-key", # Or set AWS_ACCESS_KEY_ID
aws_secret_access_key="your-secret-key", # Or set AWS_SECRET_ACCESS_KEY
aws_session_token="your-session-token", # For temporary credentials
region_name="us-east-1",
temperature=0.7,
max_tokens=4096,
top_p=0.9,
top_k=250, # For Claude models
stop_sequences=["END", "STOP"],
stream=True, # Enable streaming
guardrail_config={ # Optional content filtering
"guardrailIdentifier": "your-guardrail-id",
"guardrailVersion": "1"
},
additional_model_request_fields={ # Model-specific parameters
"top_k": 250
}
)
```
**Supported Environment Variables:**
- `AWS_ACCESS_KEY_ID`: AWS access key (required)
- `AWS_SECRET_ACCESS_KEY`: AWS secret key (required)
- `AWS_SESSION_TOKEN`: AWS session token for temporary credentials (optional)
- `AWS_DEFAULT_REGION`: AWS region (defaults to `us-east-1`)
**Features:**
- Native tool calling support via Converse API
- Streaming and non-streaming responses
- Comprehensive error handling with retry logic
- Guardrail configuration for content filtering
- Model-specific parameters via `additional_model_request_fields`
- Token usage tracking and stop reason logging
- Support for all Bedrock foundation models
- Automatic conversation format handling
**Important Notes:**
- Uses the modern Converse API for unified model access
- Automatic handling of model-specific conversation requirements
- System messages are handled separately from conversation
- First message must be from user (automatically handled)
- Some models (like Cohere) require conversation to end with user message
[Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html) is a managed service that provides access to multiple foundation models from top AI companies through a unified API.
| Model | Context Window | Best For |
|-------------------------|----------------------|-------------------------------------------------------------------|
@@ -357,7 +637,12 @@ In this section, you'll find detailed examples that help you select, configure,
| Jamba-Instruct | Up to 256k tokens | Model with extended context window optimized for cost-effective text generation, summarization, and Q&A. |
| Mistral 7B Instruct | Up to 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
| Mistral 8x7B Instruct | Up to 32k tokens | An MOE LLM that follows instructions, completes requests, and generates creative text. |
| DeepSeek R1 | 32,768 tokens | Advanced reasoning model |
**Note:** To use AWS Bedrock, install the required dependencies:
```bash
uv add "crewai[bedrock]"
```
</Accordion>
<Accordion title="Amazon SageMaker">
@@ -899,7 +1184,7 @@ Learn how to get the most out of your LLM configuration:
</Accordion>
<Accordion title="Drop Additional Parameters">
CrewAI internally uses Litellm for LLM calls, which allows you to drop additional parameters that are not needed for your specific use case. This can help simplify your code and reduce the complexity of your LLM configuration.
CrewAI internally uses native SDKs for LLM calls, which allows you to drop additional parameters that are not needed for your specific use case. This can help simplify your code and reduce the complexity of your LLM configuration.
For example, if you don't need to send the <code>stop</code> parameter, you can simply omit it from your LLM call:
```python

View File

@@ -11,7 +11,7 @@ mode: "wide"
<Card
title="Bedrock Invoke Agent Tool"
icon="cloud"
href="/en/tools/tool-integrations/bedrockinvokeagenttool"
href="/en/tools/integration/bedrockinvokeagenttool"
color="#0891B2"
>
Invoke Amazon Bedrock Agents from CrewAI to orchestrate actions across AWS services.
@@ -20,7 +20,7 @@ mode: "wide"
<Card
title="CrewAI Automation Tool"
icon="bolt"
href="/en/tools/tool-integrations/crewaiautomationtool"
href="/en/tools/integration/crewaiautomationtool"
color="#7C3AED"
>
Automate deployment and operations by integrating CrewAI with external platforms and workflows.

View File

@@ -12,7 +12,7 @@ dependencies = [
"pytube>=15.0.0",
"requests>=2.32.5",
"docker>=7.1.0",
"crewai==1.0.0",
"crewai==1.1.0",
"lancedb>=0.5.4",
"tiktoken>=0.8.0",
"beautifulsoup4>=4.13.4",

View File

@@ -287,4 +287,4 @@ __all__ = [
"ZapierActionTools",
]
__version__ = "1.0.0"
__version__ = "1.1.0"

View File

@@ -1,80 +1,42 @@
from collections.abc import Callable
from __future__ import annotations
import importlib
import json
import os
from collections.abc import Callable
from typing import Any
try:
from qdrant_client import QdrantClient
from qdrant_client.http.models import FieldCondition, Filter, MatchValue
QDRANT_AVAILABLE = True
except ImportError:
QDRANT_AVAILABLE = False
QdrantClient = Any # type: ignore[assignment,misc] # type placeholder
Filter = Any # type: ignore[assignment,misc]
FieldCondition = Any # type: ignore[assignment,misc]
MatchValue = Any # type: ignore[assignment,misc]
from crewai.tools import BaseTool, EnvVar
from pydantic import BaseModel, ConfigDict, Field
from pydantic import BaseModel, ConfigDict, Field, model_validator
from pydantic.types import ImportString
class QdrantToolSchema(BaseModel):
"""Input for QdrantTool."""
query: str = Field(..., description="Query to search in Qdrant DB.")
filter_by: str | None = None
filter_value: str | None = None
query: str = Field(
...,
description="The query to search retrieve relevant information from the Qdrant database. Pass only the query, not the question.",
)
filter_by: str | None = Field(
default=None,
description="Filter by properties. Pass only the properties, not the question.",
)
filter_value: str | None = Field(
default=None,
description="Filter by value. Pass only the value, not the question.",
)
class QdrantConfig(BaseModel):
"""All Qdrant connection and search settings."""
qdrant_url: str
qdrant_api_key: str | None = None
collection_name: str
limit: int = 3
score_threshold: float = 0.35
filter_conditions: list[tuple[str, Any]] = Field(default_factory=list)
class QdrantVectorSearchTool(BaseTool):
"""Tool to query and filter results from a Qdrant database.
This tool enables vector similarity search on internal documents stored in Qdrant,
with optional filtering capabilities.
Attributes:
client: Configured QdrantClient instance
collection_name: Name of the Qdrant collection to search
limit: Maximum number of results to return
score_threshold: Minimum similarity score threshold
qdrant_url: Qdrant server URL
qdrant_api_key: Authentication key for Qdrant
"""
"""Vector search tool for Qdrant."""
model_config = ConfigDict(arbitrary_types_allowed=True)
client: QdrantClient = None # type: ignore[assignment]
# --- Metadata ---
name: str = "QdrantVectorSearchTool"
description: str = "A tool to search the Qdrant database for relevant information on internal documents."
description: str = "Search Qdrant vector DB for relevant documents."
args_schema: type[BaseModel] = QdrantToolSchema
query: str | None = None
filter_by: str | None = None
filter_value: str | None = None
collection_name: str | None = None
limit: int | None = Field(default=3)
score_threshold: float = Field(default=0.35)
qdrant_url: str = Field(
...,
description="The URL of the Qdrant server",
)
qdrant_api_key: str | None = Field(
default=None,
description="The API key for the Qdrant server",
)
custom_embedding_fn: Callable | None = Field(
default=None,
description="A custom embedding function to use for vectorization. If not provided, the default model will be used.",
)
package_dependencies: list[str] = Field(default_factory=lambda: ["qdrant-client"])
env_vars: list[EnvVar] = Field(
default_factory=lambda: [
@@ -83,107 +45,81 @@ class QdrantVectorSearchTool(BaseTool):
)
]
)
qdrant_config: QdrantConfig
qdrant_package: ImportString[Any] = Field(
default="qdrant_client",
description="Base package path for Qdrant. Will dynamically import client and models.",
)
custom_embedding_fn: ImportString[Callable[[str], list[float]]] | None = Field(
default=None,
description="Optional embedding function or import path.",
)
client: Any | None = None
def __init__(self, **kwargs):
super().__init__(**kwargs)
if QDRANT_AVAILABLE:
self.client = QdrantClient(
url=self.qdrant_url,
api_key=self.qdrant_api_key if self.qdrant_api_key else None,
@model_validator(mode="after")
def _setup_qdrant(self) -> QdrantVectorSearchTool:
# Import the qdrant_package if it's a string
if isinstance(self.qdrant_package, str):
self.qdrant_package = importlib.import_module(self.qdrant_package)
if not self.client:
self.client = self.qdrant_package.QdrantClient(
url=self.qdrant_config.qdrant_url,
api_key=self.qdrant_config.qdrant_api_key or None,
)
else:
import click
if click.confirm(
"The 'qdrant-client' package is required to use the QdrantVectorSearchTool. "
"Would you like to install it?"
):
import subprocess
subprocess.run(["uv", "add", "qdrant-client"], check=True) # noqa: S607
else:
raise ImportError(
"The 'qdrant-client' package is required to use the QdrantVectorSearchTool. "
"Please install it with: uv add qdrant-client"
)
return self
def _run(
self,
query: str,
filter_by: str | None = None,
filter_value: str | None = None,
filter_value: Any | None = None,
) -> str:
"""Execute vector similarity search on Qdrant.
"""Perform vector similarity search."""
filter_ = self.qdrant_package.http.models.Filter
field_condition = self.qdrant_package.http.models.FieldCondition
match_value = self.qdrant_package.http.models.MatchValue
conditions = self.qdrant_config.filter_conditions.copy()
if filter_by and filter_value is not None:
conditions.append((filter_by, filter_value))
Args:
query: Search query to vectorize and match
filter_by: Optional metadata field to filter on
filter_value: Optional value to filter by
Returns:
JSON string containing search results with metadata and scores
Raises:
ImportError: If qdrant-client is not installed
ValueError: If Qdrant credentials are missing
"""
if not self.qdrant_url:
raise ValueError("QDRANT_URL is not set")
# Create filter if filter parameters are provided
search_filter = None
if filter_by and filter_value:
search_filter = Filter(
search_filter = (
filter_(
must=[
FieldCondition(key=filter_by, match=MatchValue(value=filter_value))
field_condition(key=k, match=match_value(value=v))
for k, v in conditions
]
)
# Search in Qdrant using the built-in query method
query_vector = (
self._vectorize_query(query, embedding_model="text-embedding-3-large")
if not self.custom_embedding_fn
else self.custom_embedding_fn(query)
if conditions
else None
)
search_results = self.client.query_points(
collection_name=self.collection_name, # type: ignore[arg-type]
query_vector = (
self.custom_embedding_fn(query)
if self.custom_embedding_fn
else (
lambda: __import__("openai")
.Client(api_key=os.getenv("OPENAI_API_KEY"))
.embeddings.create(input=[query], model="text-embedding-3-large")
.data[0]
.embedding
)()
)
results = self.client.query_points(
collection_name=self.qdrant_config.collection_name,
query=query_vector,
query_filter=search_filter,
limit=self.limit, # type: ignore[arg-type]
score_threshold=self.score_threshold,
limit=self.qdrant_config.limit,
score_threshold=self.qdrant_config.score_threshold,
)
# Format results similar to storage implementation
results = []
# Extract the list of ScoredPoint objects from the tuple
for point in search_results:
result = {
"metadata": point[1][0].payload.get("metadata", {}),
"context": point[1][0].payload.get("text", ""),
"distance": point[1][0].score,
}
results.append(result)
return json.dumps(results, indent=2)
def _vectorize_query(self, query: str, embedding_model: str) -> list[float]:
"""Default vectorization function with openai.
Args:
query (str): The query to vectorize
embedding_model (str): The embedding model to use
Returns:
list[float]: The vectorized query
"""
import openai
client = openai.Client(api_key=os.getenv("OPENAI_API_KEY"))
return (
client.embeddings.create(
input=[query],
model=embedding_model,
)
.data[0]
.embedding
return json.dumps(
[
{
"distance": p.score,
"metadata": p.payload.get("metadata", {}) if p.payload else {},
"context": p.payload.get("text", "") if p.payload else {},
}
for p in results.points
],
indent=2,
)

View File

@@ -31,6 +31,7 @@ def run_command(cmd, cwd):
return subprocess.run(cmd, cwd=cwd, capture_output=True, text=True)
@pytest.mark.skip(reason="Test takes too long in GitHub Actions (>30s timeout) due to dependency installation")
def test_no_optional_dependencies_in_init(temp_project):
"""
Test that crewai-tools can be imported without optional dependencies.

View File

@@ -49,7 +49,7 @@ Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = [
"crewai-tools==1.0.0",
"crewai-tools==1.1.0",
]
embeddings = [
"tiktoken~=0.8.0"
@@ -66,11 +66,6 @@ openpyxl = [
mem0 = ["mem0ai>=0.1.94"]
docling = [
"docling>=2.12.0",
]
aisuite = [
"aisuite>=0.1.11",
]
qdrant = [
"qdrant-client[fastembed]>=1.14.3",
@@ -137,13 +132,3 @@ build-backend = "hatchling.build"
[tool.hatch.version]
path = "src/crewai/__init__.py"
# Declare mutually exclusive extras due to conflicting httpx requirements
# a2a requires httpx>=0.28.1, while aisuite requires httpx>=0.27.0,<0.28.0
# [tool.uv]
# conflicts = [
# [
# { extra = "a2a" },
# { extra = "aisuite" },
# ],
# ]

View File

@@ -40,7 +40,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:
_suppress_pydantic_deprecation_warnings()
__version__ = "1.0.0"
__version__ = "1.1.0"
_telemetry_submitted = False

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]>=0.203.1,<1.0.0"
"crewai[tools]==1.1.0"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]>=0.203.1,<1.0.0",
"crewai[tools]==1.1.0"
]
[project.scripts]

View File

@@ -1,99 +0,0 @@
"""AI Suite LLM integration for CrewAI.
This module provides integration with AI Suite for LLM capabilities.
"""
from typing import Any
import aisuite as ai # type: ignore
from crewai.llms.base_llm import BaseLLM
class AISuiteLLM(BaseLLM):
"""AI Suite LLM implementation.
This class provides integration with AI Suite models through the BaseLLM interface.
"""
def __init__(
self,
model: str,
temperature: float | None = None,
stop: list[str] | None = None,
**kwargs: Any,
) -> None:
"""Initialize the AI Suite LLM.
Args:
model: The model identifier for AI Suite.
temperature: Optional temperature setting for response generation.
stop: Optional list of stop sequences for generation.
**kwargs: Additional keyword arguments passed to the AI Suite client.
"""
super().__init__(model=model, temperature=temperature, stop=stop)
self.client = ai.Client()
self.kwargs = kwargs
def call( # type: ignore[override]
self,
messages: str | list[dict[str, str]],
tools: list[dict] | None = None,
callbacks: list[Any] | None = None,
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
) -> str | Any:
"""Call the AI Suite LLM with the given messages.
Args:
messages: Input messages for the LLM.
tools: Optional list of tool schemas for function calling.
callbacks: Optional list of callback functions.
available_functions: Optional dict mapping function names to callables.
from_task: Optional task caller.
from_agent: Optional agent caller.
Returns:
The text response from the LLM.
"""
completion_params = self._prepare_completion_params(messages, tools)
response = self.client.chat.completions.create(**completion_params)
return response.choices[0].message.content
def _prepare_completion_params(
self,
messages: str | list[dict[str, str]],
tools: list[dict] | None = None,
) -> dict[str, Any]:
"""Prepare parameters for the AI Suite completion call.
Args:
messages: Input messages for the LLM.
tools: Optional list of tool schemas.
Returns:
Dictionary of parameters for the completion API.
"""
params: dict[str, Any] = {
"model": self.model,
"messages": messages,
"temperature": self.temperature,
"tools": tools,
**self.kwargs,
}
if self.stop:
params["stop"] = self.stop
return params
@staticmethod
def supports_function_calling() -> bool:
"""Check if the LLM supports function calling.
Returns:
False, as AI Suite does not currently support function calling.
"""
return False

View File

@@ -0,0 +1,60 @@
"""Mypy plugin for CrewAI decorator type checking.
This plugin informs mypy about attributes injected by the @CrewBase decorator.
"""
from collections.abc import Callable
from mypy.nodes import MDEF, SymbolTableNode, Var
from mypy.plugin import ClassDefContext, Plugin
from mypy.types import AnyType, TypeOfAny
class CrewAIPlugin(Plugin):
"""Mypy plugin that handles @CrewBase decorator attribute injection."""
def get_class_decorator_hook(
self, fullname: str
) -> Callable[[ClassDefContext], None] | None:
"""Return hook for class decorators.
Args:
fullname: Fully qualified name of the decorator.
Returns:
Hook function if this is a CrewBase decorator, None otherwise.
"""
if fullname in ("crewai.project.CrewBase", "crewai.project.crew_base.CrewBase"):
return self._crew_base_hook
return None
@staticmethod
def _crew_base_hook(ctx: ClassDefContext) -> None:
"""Add injected attributes to @CrewBase decorated classes.
Args:
ctx: Context for the class being decorated.
"""
any_type = AnyType(TypeOfAny.explicit)
str_type = ctx.api.named_type("builtins.str")
dict_type = ctx.api.named_type("builtins.dict", [str_type, any_type])
agents_config_var = Var("agents_config", dict_type)
agents_config_var.info = ctx.cls.info
agents_config_var._fullname = f"{ctx.cls.info.fullname}.agents_config"
ctx.cls.info.names["agents_config"] = SymbolTableNode(MDEF, agents_config_var)
tasks_config_var = Var("tasks_config", dict_type)
tasks_config_var.info = ctx.cls.info
tasks_config_var._fullname = f"{ctx.cls.info.fullname}.tasks_config"
ctx.cls.info.names["tasks_config"] = SymbolTableNode(MDEF, tasks_config_var)
def plugin(_: str) -> type[Plugin]:
"""Entry point for mypy plugin.
Args:
_: Mypy version string.
Returns:
Plugin class.
"""
return CrewAIPlugin
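A hedged sketch of what the plugin buys you once it is listed under mypy's `plugins` setting (the exact module path to register depends on how this file is shipped; the YAML-backed config below is assumed):

```python
from typing import Any

from crewai import Agent
from crewai.project import CrewBase, agent


@CrewBase
class ResearchCrew:
    """@CrewBase injects agents_config/tasks_config at runtime; the plugin
    declares them to mypy so this access type-checks without ignores."""

    @agent
    def researcher(self) -> Agent:
        # With the plugin enabled, mypy sees agents_config as dict[str, Any].
        cfg: dict[str, Any] = self.agents_config["researcher"]
        return Agent(**cfg)
```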

View File

@@ -4,7 +4,7 @@ from __future__ import annotations
from collections.abc import Callable
from functools import wraps
from typing import TYPE_CHECKING, Concatenate, ParamSpec, TypeVar
from typing import TYPE_CHECKING, Any, Concatenate, ParamSpec, TypeVar, overload
from crewai.project.utils import memoize
@@ -33,6 +33,7 @@ P2 = ParamSpec("P2")
R = TypeVar("R")
R2 = TypeVar("R2")
T = TypeVar("T")
SelfT = TypeVar("SelfT")
def before_kickoff(meth: Callable[P, R]) -> BeforeKickoffMethod[P, R]:
@@ -155,9 +156,17 @@ def cache_handler(meth: Callable[P, R]) -> CacheHandlerMethod[P, R]:
return CacheHandlerMethod(memoize(meth))
@overload
def crew(
meth: Callable[Concatenate[SelfT, P], Crew],
) -> Callable[Concatenate[SelfT, P], Crew]: ...
@overload
def crew(
meth: Callable[Concatenate[CrewInstance, P], Crew],
) -> Callable[Concatenate[CrewInstance, P], Crew]:
) -> Callable[Concatenate[CrewInstance, P], Crew]: ...
def crew(
meth: Callable[..., Crew],
) -> Callable[..., Crew]:
"""Marks a method as the main crew execution point.
Args:
@@ -168,7 +177,7 @@ def crew(
"""
@wraps(meth)
def wrapper(self: CrewInstance, *args: P.args, **kwargs: P.kwargs) -> Crew:
def wrapper(self: CrewInstance, *args: Any, **kwargs: Any) -> Crew:
"""Wrapper that sets up crew before calling the decorated method.
Args:

View File

@@ -6,7 +6,15 @@ from collections.abc import Callable
import inspect
import logging
from pathlib import Path
from typing import TYPE_CHECKING, Any, Literal, TypeGuard, TypeVar, TypedDict, cast
from typing import (
TYPE_CHECKING,
Any,
Literal,
TypeGuard,
TypeVar,
TypedDict,
cast,
)
from dotenv import load_dotenv
import yaml
@@ -320,14 +328,17 @@ def get_mcp_tools(self: CrewInstance, *tool_names: str) -> list[BaseTool]:
if not self.mcp_server_params:
return []
from crewai_tools import MCPServerAdapter # type: ignore[import-untyped]
from crewai_tools import MCPServerAdapter
if self._mcp_server_adapter is None:
self._mcp_server_adapter = MCPServerAdapter(
self.mcp_server_params, connect_timeout=self.mcp_connect_timeout
)
return self._mcp_server_adapter.tools.filter_by_names(tool_names or None)
return cast(
list[BaseTool],
self._mcp_server_adapter.tools.filter_by_names(tool_names or None),
)
def _load_config(
@@ -630,3 +641,17 @@ class CrewBase(metaclass=_CrewBaseType):
Note:
Reference: https://stackoverflow.com/questions/11091609/setting-a-class-metaclass-using-a-decorator
"""
# e
if TYPE_CHECKING:
def __init__(self, *args: Any, **kwargs: Any) -> None:
"""Type stub for decorator usage.
Args:
*args: Positional arguments accepted for decorator compatibility.
**kwargs: Keyword arguments accepted for decorator compatibility.
"""
...

View File

@@ -20,7 +20,7 @@ from typing_extensions import Self
if TYPE_CHECKING:
from crewai import Agent, Task
from crewai import Agent, Crew, Task
from crewai.crews.crew_output import CrewOutput
from crewai.tools import BaseTool
@@ -124,11 +124,12 @@ class CrewClass(Protocol):
get_mcp_tools: Callable[..., list[BaseTool]]
_load_config: Callable[..., dict[str, Any]]
load_configurations: Callable[..., None]
load_yaml: staticmethod
load_yaml: Callable[..., dict[str, Any]]
map_all_agent_variables: Callable[..., None]
_map_agent_variables: Callable[..., None]
map_all_task_variables: Callable[..., None]
_map_task_variables: Callable[..., None]
crew: Callable[..., Crew]
class DecoratedMethod(Generic[P, R]):

View File

@@ -29,6 +29,7 @@ from opentelemetry.sdk.trace.export import (
SpanExportResult,
)
from opentelemetry.trace import Span
from typing_extensions import Self
from crewai.telemetry.constants import (
CREWAI_TELEMETRY_BASE_URL,
@@ -86,7 +87,7 @@ class Telemetry:
_instance = None
_lock = threading.Lock()
def __new__(cls):
def __new__(cls) -> Self:
if cls._instance is None:
with cls._lock:
if cls._instance is None:
@@ -154,19 +155,24 @@ class Telemetry:
self.ready = False
self.trace_set = False
def _safe_telemetry_operation(self, operation: Callable[[], Any]) -> None:
def _safe_telemetry_operation(
self, operation: Callable[[], Span | None]
) -> Span | None:
"""Execute telemetry operation safely, checking both readiness and environment variables.
Args:
operation: A callable that performs telemetry operations. May return any value,
but the return value is not used by this method.
operation: A callable that performs telemetry operations.
Returns:
The return value from the operation, or None if telemetry is disabled or fails.
"""
if not self._should_execute_telemetry():
return
return None
try:
operation()
return operation()
except Exception as e:
logger.debug(f"Telemetry operation failed: {e}")
return None
def crew_creation(self, crew: Crew, inputs: dict[str, Any] | None) -> None:
"""Records the creation of a crew.
@@ -176,7 +182,7 @@ class Telemetry:
inputs: Optional input parameters for the crew.
"""
def _operation():
def _operation() -> None:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Created")
self._add_attribute(
@@ -386,7 +392,7 @@ class Telemetry:
The span tracking the task execution, or None if telemetry is disabled.
"""
def _operation():
def _operation() -> Span:
tracer = trace.get_tracer("crewai.telemetry")
created_span = tracer.start_span("Task Created")
@@ -445,11 +451,7 @@ class Telemetry:
return span
if not self._should_execute_telemetry():
return None
self._safe_telemetry_operation(_operation)
return _operation()
return self._safe_telemetry_operation(_operation)
def task_ended(self, span: Span, task: Task, crew: Crew) -> None:
"""Records the completion of a task execution in a crew.
@@ -463,7 +465,7 @@ class Telemetry:
If share_crew is enabled, this will also record the task output.
"""
def _operation():
def _operation() -> None:
# Ensure fingerprint data is present on completion span
if hasattr(task, "fingerprint") and task.fingerprint:
self._add_attribute(span, "task_fingerprint", task.fingerprint.uuid_str)
@@ -488,7 +490,7 @@ class Telemetry:
attempts: Number of attempts made with this tool.
"""
def _operation():
def _operation() -> None:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Repeated Usage")
self._add_attribute(
@@ -516,7 +518,7 @@ class Telemetry:
agent: The agent using the tool.
"""
def _operation():
def _operation() -> None:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Usage")
self._add_attribute(
@@ -546,7 +548,7 @@ class Telemetry:
tool_name: Name of the tool that caused the error.
"""
def _operation():
def _operation() -> None:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Usage Error")
self._add_attribute(
@@ -578,7 +580,7 @@ class Telemetry:
model_name: Name of the model used.
"""
def _operation():
def _operation() -> None:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Individual Test Result")
@@ -613,7 +615,7 @@ class Telemetry:
model_name: Name of the model used in testing.
"""
def _operation():
def _operation() -> None:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Test Execution")
@@ -640,7 +642,7 @@ class Telemetry:
def deploy_signup_error_span(self) -> None:
"""Records when an error occurs during the deployment signup process."""
def _operation():
def _operation() -> None:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Deploy Signup Error")
close_span(span)
@@ -654,7 +656,7 @@ class Telemetry:
uuid: Unique identifier for the deployment.
"""
def _operation():
def _operation() -> None:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Start Deployment")
if uuid:
@@ -666,7 +668,7 @@ class Telemetry:
def create_crew_deployment_span(self) -> None:
"""Records the creation of a new crew deployment."""
def _operation():
def _operation() -> None:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Create Crew Deployment")
close_span(span)
@@ -683,7 +685,7 @@ class Telemetry:
log_type: Type of logs being retrieved. Defaults to "deployment".
"""
def _operation():
def _operation() -> None:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Get Crew Logs")
self._add_attribute(span, "log_type", log_type)
@@ -700,7 +702,7 @@ class Telemetry:
uuid: Unique identifier for the crew being removed.
"""
def _operation():
def _operation() -> None:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Remove Crew")
if uuid:
@@ -725,7 +727,7 @@ class Telemetry:
"""
self.crew_creation(crew, inputs)
def _operation():
def _operation() -> Span:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Execution")
self._add_attribute(
@@ -793,8 +795,7 @@ class Telemetry:
return span
if crew.share_crew:
self._safe_telemetry_operation(_operation)
return _operation()
return self._safe_telemetry_operation(_operation)
return None
def end_crew(self, crew: Any, final_string_output: str) -> None:
@@ -805,7 +806,7 @@ class Telemetry:
final_string_output: The final output from the crew.
"""
def _operation():
def _operation() -> None:
self._add_attribute(
crew._execution_span,
"crewai_version",
@@ -842,7 +843,7 @@ class Telemetry:
value: The attribute value.
"""
def _operation():
def _operation() -> None:
return span.set_attribute(key, value)
self._safe_telemetry_operation(_operation)
@@ -854,7 +855,7 @@ class Telemetry:
flow_name: Name of the flow being created.
"""
def _operation():
def _operation() -> None:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Flow Creation")
self._add_attribute(span, "flow_name", flow_name)
@@ -870,7 +871,7 @@ class Telemetry:
node_names: List of node names in the flow.
"""
def _operation():
def _operation() -> None:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Flow Plotting")
self._add_attribute(span, "flow_name", flow_name)
@@ -887,7 +888,7 @@ class Telemetry:
node_names: List of nodes being executed in the flow.
"""
def _operation():
def _operation() -> None:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Flow Execution")
self._add_attribute(span, "flow_name", flow_name)

View File

@@ -419,7 +419,7 @@ def handle_context_length(
i18n: I18N instance for messages
Raises:
SystemExit: If context length is exceeded and user opts not to summarize
LLMContextLengthExceededError: If context length is exceeded and user opts not to summarize
"""
if respect_context_window:
printer.print(
@@ -432,7 +432,7 @@ def handle_context_length(
content="Context length exceeded. Consider using smaller text or RAG tools from crewai_tools.",
color="red",
)
raise SystemExit(
raise LLMContextLengthExceededError(
"Context length exceeded and user opted not to summarize. Consider using smaller text or RAG tools from crewai_tools."
)

View File

@@ -4,6 +4,8 @@ from typing import TYPE_CHECKING, Any, Generic, TypeGuard, TypeVar
from pydantic import BaseModel
from crewai.utilities.logger_utils import suppress_warnings
if TYPE_CHECKING:
from crewai.agent import Agent
@@ -11,9 +13,6 @@ if TYPE_CHECKING:
from crewai.llms.base_llm import BaseLLM
from crewai.utilities.types import LLMMessage
from crewai.utilities.logger_utils import suppress_warnings
T = TypeVar("T", bound=BaseModel)
@@ -62,9 +61,59 @@ class InternalInstructor(Generic[T]):
with suppress_warnings():
import instructor # type: ignore[import-untyped]
from litellm import completion
self._client = instructor.from_litellm(completion)
if (
self.llm is not None
and hasattr(self.llm, "is_litellm")
and self.llm.is_litellm
):
from litellm import completion
self._client = instructor.from_litellm(completion)
else:
self._client = self._create_instructor_client()
def _create_instructor_client(self) -> Any:
"""Create instructor client using the modern from_provider pattern.
Returns:
Instructor client configured for the LLM provider
Raises:
ValueError: If the provider is not supported
"""
import instructor
if isinstance(self.llm, str):
model_string = self.llm
elif self.llm is not None and hasattr(self.llm, "model"):
model_string = self.llm.model
else:
raise ValueError("LLM must be a string or have a model attribute")
if isinstance(self.llm, str):
provider = self._extract_provider()
elif self.llm is not None and hasattr(self.llm, "provider"):
provider = self.llm.provider
else:
provider = "openai" # Default fallback
return instructor.from_provider(f"{provider}/{model_string}")
def _extract_provider(self) -> str:
"""Extract provider from LLM model name.
Returns:
Provider name (e.g., 'openai', 'anthropic', etc.)
"""
if self.llm is not None and hasattr(self.llm, "provider") and self.llm.provider:
return self.llm.provider
if isinstance(self.llm, str):
return self.llm.partition("/")[0] or "openai"
if self.llm is not None and hasattr(self.llm, "model"):
return self.llm.model.partition("/")[0] or "openai"
return "openai"
def to_json(self) -> str:
"""Convert the structured output to JSON format.
@@ -96,6 +145,6 @@ class InternalInstructor(Generic[T]):
else:
model_name = self.llm.model
return self._client.chat.completions.create(
return self._client.chat.completions.create( # type: ignore[no-any-return]
model=model_name, response_model=self.model, messages=messages
)

View File

@@ -18,6 +18,9 @@ from crewai.process import Process
from crewai.tools.tool_calling import InstructorToolCalling
from crewai.tools.tool_usage import ToolUsage
from crewai.utilities.errors import AgentRepositoryError
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededError,
)
import pytest
from crewai import Agent, Crew, Task
@@ -186,17 +189,24 @@ def test_agent_execution_with_tools():
expected_output="The result of the multiplication.",
)
received_events = []
event_received = threading.Event()
condition = threading.Condition()
event_handled = False
@crewai_event_bus.on(ToolUsageFinishedEvent)
def handle_tool_end(source, event):
nonlocal event_handled
received_events.append(event)
event_received.set()
with condition:
event_handled = True
condition.notify()
output = agent.execute_task(task)
assert output == "The result of the multiplication is 12."
assert event_received.wait(timeout=5), "Timeout waiting for tool usage event"
with condition:
if not event_handled:
condition.wait(timeout=5)
assert event_handled, "Timeout waiting for tool usage event"
assert len(received_events) == 1
assert isinstance(received_events[0], ToolUsageFinishedEvent)
assert received_events[0].tool_name == "multiplier"
@@ -288,12 +298,16 @@ def test_cache_hitting():
'multiplier-{"first_number": 12, "second_number": 3}': 36,
}
received_events = []
event_received = threading.Event()
condition = threading.Condition()
event_handled = False
@crewai_event_bus.on(ToolUsageFinishedEvent)
def handle_tool_end(source, event):
nonlocal event_handled
received_events.append(event)
event_received.set()
with condition:
event_handled = True
condition.notify()
with (
patch.object(CacheHandler, "read") as read,
@@ -309,7 +323,10 @@ def test_cache_hitting():
read.assert_called_with(
tool="multiplier", input='{"first_number": 2, "second_number": 6}'
)
assert event_received.wait(timeout=5), "Timeout waiting for tool usage event"
with condition:
if not event_handled:
condition.wait(timeout=5)
assert event_handled, "Timeout waiting for tool usage event"
assert len(received_events) == 1
assert isinstance(received_events[0], ToolUsageFinishedEvent)
assert received_events[0].from_cache
@@ -888,7 +905,8 @@ def test_agent_step_callback():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_function_calling_llm():
llm = "gpt-4o"
from crewai.llm import LLM
llm = LLM(model="gpt-4o", is_litellm=True)
@tool
def learn_about_ai() -> str:

View File

@@ -987,4 +987,103 @@ interactions:
status:
code: 200
message: OK
- request:
body: '{"trace_id": "51f9439f-9497-420c-a908-4e33f01ffdfc", "execution_type":
"crew", "user_identifier": null, "execution_context": {"crew_fingerprint": null,
"crew_name": "crew", "flow_name": null, "crewai_version": "1.0.0", "privacy_level":
"standard"}, "execution_metadata": {"expected_duration_estimate": 300, "agent_count":
0, "task_count": 0, "flow_method_count": 0, "execution_started_at": "2025-10-21T18:21:13.954835+00:00"},
"ephemeral_trace_id": "51f9439f-9497-420c-a908-4e33f01ffdfc"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate, zstd
Connection:
- keep-alive
Content-Length:
- '488'
Content-Type:
- application/json
User-Agent:
- CrewAI-CLI/1.0.0
X-Crewai-Version:
- 1.0.0
method: POST
uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches
response:
body:
string: '{"id":"432de345-a45a-4a02-9259-2ed30a72a9c3","ephemeral_trace_id":"51f9439f-9497-420c-a908-4e33f01ffdfc","execution_type":"crew","crew_name":"crew","flow_name":null,"status":"running","duration_ms":null,"crewai_version":"1.0.0","total_events":0,"execution_context":{"crew_fingerprint":null,"crew_name":"crew","flow_name":null,"crewai_version":"1.0.0","privacy_level":"standard"},"created_at":"2025-10-21T18:21:14.911Z","updated_at":"2025-10-21T18:21:14.911Z","access_code":"TRACE-da9003bc8b","user_identifier":null}'
headers:
Connection:
- keep-alive
Content-Length:
- '515'
Content-Type:
- application/json; charset=utf-8
Date:
- Tue, 21 Oct 2025 18:21:14 GMT
cache-control:
- no-store
content-security-policy:
- 'default-src ''self'' *.app.crewai.com app.crewai.com; script-src ''self''
''unsafe-inline'' *.app.crewai.com app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts
https://www.gstatic.com https://run.pstmn.io https://apis.google.com https://apis.google.com/js/api.js
https://accounts.google.com https://accounts.google.com/gsi/client https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.min.css.map
https://*.google.com https://docs.google.com https://slides.google.com https://js.hs-scripts.com
https://js.sentry-cdn.com https://browser.sentry-cdn.com https://www.googletagmanager.com
https://js-na1.hs-scripts.com https://js.hubspot.com http://js-na1.hs-scripts.com
https://bat.bing.com https://cdn.amplitude.com https://cdn.segment.com https://d1d3n03t5zntha.cloudfront.net/
https://descriptusercontent.com https://edge.fullstory.com https://googleads.g.doubleclick.net
https://js.hs-analytics.net https://js.hs-banner.com https://js.hsadspixel.net
https://js.hscollectedforms.net https://js.usemessages.com https://snap.licdn.com
https://static.cloudflareinsights.com https://static.reo.dev https://www.google-analytics.com
https://share.descript.com/; style-src ''self'' ''unsafe-inline'' *.app.crewai.com
app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts; img-src ''self'' data:
*.app.crewai.com app.crewai.com https://zeus.tools.crewai.com https://dashboard.tools.crewai.com
https://cdn.jsdelivr.net https://forms.hsforms.com https://track.hubspot.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://www.google.com
https://www.google.com.br; font-src ''self'' data: *.app.crewai.com app.crewai.com;
connect-src ''self'' *.app.crewai.com app.crewai.com https://zeus.tools.crewai.com
https://connect.useparagon.com/ https://zeus.useparagon.com/* https://*.useparagon.com/*
https://run.pstmn.io https://connect.tools.crewai.com/ https://*.sentry.io
https://www.google-analytics.com https://edge.fullstory.com https://rs.fullstory.com
https://api.hubspot.com https://forms.hscollectedforms.net https://api.hubapi.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://google.com/pagead/form-data/16713662509
https://google.com/ccm/form-data/16713662509 https://www.google.com/ccm/collect
https://worker-actionkit.tools.crewai.com https://api.reo.dev; frame-src ''self''
*.app.crewai.com app.crewai.com https://connect.useparagon.com/ https://zeus.tools.crewai.com
https://zeus.useparagon.com/* https://connect.tools.crewai.com/ https://docs.google.com
https://drive.google.com https://slides.google.com https://accounts.google.com
https://*.google.com https://app.hubspot.com/ https://td.doubleclick.net https://www.googletagmanager.com/
https://www.youtube.com https://share.descript.com'
etag:
- W/"f377829f71702a4e2096c862a7d4c75e"
expires:
- '0'
permissions-policy:
- camera=(), microphone=(self), geolocation=()
pragma:
- no-cache
referrer-policy:
- strict-origin-when-cross-origin
strict-transport-security:
- max-age=63072000; includeSubDomains
vary:
- Accept
x-content-type-options:
- nosniff
x-frame-options:
- SAMEORIGIN
x-permitted-cross-domain-policies:
- none
x-request-id:
- b91de61f-e9cf-4748-8346-a7e7a3e43558
x-runtime:
- '0.674115'
x-xss-protection:
- 1; mode=block
status:
code: 201
message: Created
version: 1

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1032,4 +1032,621 @@ interactions:
- req_e1e95e8f654254ef093113417ba6ab00
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"trace_id": "c5146cc4-dcff-45cc-a71a-b82a83b7de73", "execution_type":
"crew", "user_identifier": null, "execution_context": {"crew_fingerprint": null,
"crew_name": "crew", "flow_name": null, "crewai_version": "1.0.0", "privacy_level":
"standard"}, "execution_metadata": {"expected_duration_estimate": 300, "agent_count":
0, "task_count": 0, "flow_method_count": 0, "execution_started_at": "2025-10-21T17:02:41.380299+00:00"},
"ephemeral_trace_id": "c5146cc4-dcff-45cc-a71a-b82a83b7de73"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate, zstd
Connection:
- keep-alive
Content-Length:
- '488'
Content-Type:
- application/json
User-Agent:
- CrewAI-CLI/1.0.0
X-Crewai-Version:
- 1.0.0
method: POST
uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches
response:
body:
string: '{"id":"ad4ac66f-7511-444c-aec3-a8c711ab4f54","ephemeral_trace_id":"c5146cc4-dcff-45cc-a71a-b82a83b7de73","execution_type":"crew","crew_name":"crew","flow_name":null,"status":"running","duration_ms":null,"crewai_version":"1.0.0","total_events":0,"execution_context":{"crew_fingerprint":null,"crew_name":"crew","flow_name":null,"crewai_version":"1.0.0","privacy_level":"standard"},"created_at":"2025-10-21T17:02:41.683Z","updated_at":"2025-10-21T17:02:41.683Z","access_code":"TRACE-41ea39cb70","user_identifier":null}'
headers:
Connection:
- keep-alive
Content-Length:
- '515'
Content-Type:
- application/json; charset=utf-8
Date:
- Tue, 21 Oct 2025 17:02:41 GMT
cache-control:
- no-store
content-security-policy:
- 'default-src ''self'' *.app.crewai.com app.crewai.com; script-src ''self''
''unsafe-inline'' *.app.crewai.com app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts
https://www.gstatic.com https://run.pstmn.io https://apis.google.com https://apis.google.com/js/api.js
https://accounts.google.com https://accounts.google.com/gsi/client https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.min.css.map
https://*.google.com https://docs.google.com https://slides.google.com https://js.hs-scripts.com
https://js.sentry-cdn.com https://browser.sentry-cdn.com https://www.googletagmanager.com
https://js-na1.hs-scripts.com https://js.hubspot.com http://js-na1.hs-scripts.com
https://bat.bing.com https://cdn.amplitude.com https://cdn.segment.com https://d1d3n03t5zntha.cloudfront.net/
https://descriptusercontent.com https://edge.fullstory.com https://googleads.g.doubleclick.net
https://js.hs-analytics.net https://js.hs-banner.com https://js.hsadspixel.net
https://js.hscollectedforms.net https://js.usemessages.com https://snap.licdn.com
https://static.cloudflareinsights.com https://static.reo.dev https://www.google-analytics.com
https://share.descript.com/; style-src ''self'' ''unsafe-inline'' *.app.crewai.com
app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts; img-src ''self'' data:
*.app.crewai.com app.crewai.com https://zeus.tools.crewai.com https://dashboard.tools.crewai.com
https://cdn.jsdelivr.net https://forms.hsforms.com https://track.hubspot.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://www.google.com
https://www.google.com.br; font-src ''self'' data: *.app.crewai.com app.crewai.com;
connect-src ''self'' *.app.crewai.com app.crewai.com https://zeus.tools.crewai.com
https://connect.useparagon.com/ https://zeus.useparagon.com/* https://*.useparagon.com/*
https://run.pstmn.io https://connect.tools.crewai.com/ https://*.sentry.io
https://www.google-analytics.com https://edge.fullstory.com https://rs.fullstory.com
https://api.hubspot.com https://forms.hscollectedforms.net https://api.hubapi.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://google.com/pagead/form-data/16713662509
https://google.com/ccm/form-data/16713662509 https://www.google.com/ccm/collect
https://worker-actionkit.tools.crewai.com https://api.reo.dev; frame-src ''self''
*.app.crewai.com app.crewai.com https://connect.useparagon.com/ https://zeus.tools.crewai.com
https://zeus.useparagon.com/* https://connect.tools.crewai.com/ https://docs.google.com
https://drive.google.com https://slides.google.com https://accounts.google.com
https://*.google.com https://app.hubspot.com/ https://td.doubleclick.net https://www.googletagmanager.com/
https://www.youtube.com https://share.descript.com'
etag:
- W/"b46640957517118b3255a25e8f00184d"
expires:
- '0'
permissions-policy:
- camera=(), microphone=(self), geolocation=()
pragma:
- no-cache
referrer-policy:
- strict-origin-when-cross-origin
strict-transport-security:
- max-age=63072000; includeSubDomains
vary:
- Accept
x-content-type-options:
- nosniff
x-frame-options:
- SAMEORIGIN
x-permitted-cross-domain-policies:
- none
x-request-id:
- 0590a968-276d-4342-85bb-0e488cf4f6bc
x-runtime:
- '0.073020'
x-xss-protection:
- 1; mode=block
status:
code: 201
message: Created
- request:
body: '{"events": [{"event_id": "ad62c6f4-6367-452c-bd91-5d3153e2e20a", "timestamp":
"2025-10-21T17:02:41.379061+00:00", "type": "crew_kickoff_started", "event_data":
{"timestamp": "2025-10-21T17:02:41.379061+00:00", "type": "crew_kickoff_started",
"source_fingerprint": null, "source_type": null, "fingerprint_metadata": null,
"task_id": null, "task_name": null, "agent_id": null, "agent_role": null, "crew_name":
"crew", "crew": null, "inputs": null}}, {"event_id": "19c1acad-fa5b-4dc8-933b-bfc9036ce2eb",
"timestamp": "2025-10-21T17:02:41.381894+00:00", "type": "task_started", "event_data":
{"task_description": "Research a topic to teach a kid aged 6 about math.", "expected_output":
"A topic, explanation, angle, and examples.", "task_name": "Research a topic
to teach a kid aged 6 about math.", "context": "", "agent_role": "Researcher",
"task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13"}}, {"event_id": "a9c2bbc4-778e-4a5d-bda5-148f015e5fbe",
"timestamp": "2025-10-21T17:02:41.382167+00:00", "type": "memory_query_started",
"event_data": {"timestamp": "2025-10-21T17:02:41.382167+00:00", "type": "memory_query_started",
"source_fingerprint": null, "source_type": "long_term_memory", "fingerprint_metadata":
null, "task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13", "task_name": "Research
a topic to teach a kid aged 6 about math.", "agent_id": "5b1ba567-c4c3-4327-9c2e-4215c53bffb6",
"agent_role": "Researcher", "from_task": null, "from_agent": null, "query":
"Research a topic to teach a kid aged 6 about math.", "limit": 2, "score_threshold":
null}}, {"event_id": "d946752e-87f1-496f-b26b-a4e1aaf58d49", "timestamp": "2025-10-21T17:02:41.382357+00:00",
"type": "memory_query_completed", "event_data": {"timestamp": "2025-10-21T17:02:41.382357+00:00",
"type": "memory_query_completed", "source_fingerprint": null, "source_type":
"long_term_memory", "fingerprint_metadata": null, "task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13",
"task_name": "Research a topic to teach a kid aged 6 about math.", "agent_id":
"5b1ba567-c4c3-4327-9c2e-4215c53bffb6", "agent_role": "Researcher", "from_task":
null, "from_agent": null, "query": "Research a topic to teach a kid aged 6 about
math.", "results": null, "limit": 2, "score_threshold": null, "query_time_ms":
0.1468658447265625}}, {"event_id": "fec95c3e-6020-4ca5-9c8a-76d8fe2e69fc", "timestamp":
"2025-10-21T17:02:41.382390+00:00", "type": "memory_query_started", "event_data":
{"timestamp": "2025-10-21T17:02:41.382390+00:00", "type": "memory_query_started",
"source_fingerprint": null, "source_type": "short_term_memory", "fingerprint_metadata":
null, "task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13", "task_name": "Research
a topic to teach a kid aged 6 about math.", "agent_id": "5b1ba567-c4c3-4327-9c2e-4215c53bffb6",
"agent_role": "Researcher", "from_task": null, "from_agent": null, "query":
"Research a topic to teach a kid aged 6 about math.", "limit": 5, "score_threshold":
0.6}}, {"event_id": "b4d9b241-3336-4e5b-902b-46ef4aff3a95", "timestamp": "2025-10-21T17:02:41.532761+00:00",
"type": "memory_query_completed", "event_data": {"timestamp": "2025-10-21T17:02:41.532761+00:00",
"type": "memory_query_completed", "source_fingerprint": null, "source_type":
"short_term_memory", "fingerprint_metadata": null, "task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13",
"task_name": "Research a topic to teach a kid aged 6 about math.", "agent_id":
"5b1ba567-c4c3-4327-9c2e-4215c53bffb6", "agent_role": "Researcher", "from_task":
null, "from_agent": null, "query": "Research a topic to teach a kid aged 6 about
math.", "results": [], "limit": 5, "score_threshold": 0.6, "query_time_ms":
150.346040725708}}, {"event_id": "ede0e589-9609-4b27-ac6d-f02ab5d118c0", "timestamp":
"2025-10-21T17:02:41.532803+00:00", "type": "memory_query_started", "event_data":
{"timestamp": "2025-10-21T17:02:41.532803+00:00", "type": "memory_query_started",
"source_fingerprint": null, "source_type": "entity_memory", "fingerprint_metadata":
null, "task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13", "task_name": "Research
a topic to teach a kid aged 6 about math.", "agent_id": "5b1ba567-c4c3-4327-9c2e-4215c53bffb6",
"agent_role": "Researcher", "from_task": null, "from_agent": null, "query":
"Research a topic to teach a kid aged 6 about math.", "limit": 5, "score_threshold":
0.6}}, {"event_id": "feca316d-4c1a-4502-bb73-e190b0ed3fee", "timestamp": "2025-10-21T17:02:41.539391+00:00",
"type": "memory_query_completed", "event_data": {"timestamp": "2025-10-21T17:02:41.539391+00:00",
"type": "memory_query_completed", "source_fingerprint": null, "source_type":
"entity_memory", "fingerprint_metadata": null, "task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13",
"task_name": "Research a topic to teach a kid aged 6 about math.", "agent_id":
"5b1ba567-c4c3-4327-9c2e-4215c53bffb6", "agent_role": "Researcher", "from_task":
null, "from_agent": null, "query": "Research a topic to teach a kid aged 6 about
math.", "results": [], "limit": 5, "score_threshold": 0.6, "query_time_ms":
6.557941436767578}}, {"event_id": "c1d5f664-11bd-4d53-a250-bf998f28feb1", "timestamp":
"2025-10-21T17:02:41.539868+00:00", "type": "agent_execution_started", "event_data":
{"agent_role": "Researcher", "agent_goal": "You research about math.", "agent_backstory":
"You''re an expert in research and you love to learn new things."}}, {"event_id":
"72160300-cf34-4697-92c5-e19f9bb7aced", "timestamp": "2025-10-21T17:02:41.540118+00:00",
"type": "llm_call_started", "event_data": {"timestamp": "2025-10-21T17:02:41.540118+00:00",
"type": "llm_call_started", "source_fingerprint": null, "source_type": null,
"fingerprint_metadata": null, "task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13",
"task_name": "Research a topic to teach a kid aged 6 about math.", "agent_id":
"5b1ba567-c4c3-4327-9c2e-4215c53bffb6", "agent_role": "Researcher", "from_task":
null, "from_agent": null, "model": "gpt-4o-mini", "messages": [{"role": "system",
"content": "You are Researcher. You''re an expert in research and you love to
learn new things.\nYour personal goal is: You research about math.\nTo give
my best complete final answer to the task respond using the exact following
format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
answer must be the great and the most complete as possible, it must be outcome
described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
"content": "\nCurrent Task: Research a topic to teach a kid aged 6 about math.\n\nThis
is the expected criteria for your final answer: A topic, explanation, angle,
and examples.\nyou MUST return the actual complete content as the final answer,
not a summary.\n\nYou MUST follow these instructions: \n - Incorporate specific
examples and case studies in initial outputs for clearer illustration of concepts.\n
- Engage more with current events or trends to enhance relevance, especially
in fields like remote work and decision-making.\n - Invite perspectives from
experts and stakeholders to add depth to discussions on ethical implications
and collaboration in creativity.\n - Use more precise language when discussing
topics, ensuring clarity and accessibility for readers.\n - Encourage exploration
of user experiences and testimonials to provide more relatable content, especially
in education and mental health contexts.\n\nBegin! This is VERY important to
you, use the tools available and give your best Final Answer, your job depends
on it!\n\nThought:"}], "tools": null, "callbacks": ["<crewai.utilities.token_counter_callback.TokenCalcHandler
object at 0x12e3e6d80>"], "available_functions": null}}, {"event_id": "83d91da9-2d3f-4638-9fdc-262371273149",
"timestamp": "2025-10-21T17:02:41.544497+00:00", "type": "llm_call_completed",
"event_data": {"timestamp": "2025-10-21T17:02:41.544497+00:00", "type": "llm_call_completed",
"source_fingerprint": null, "source_type": null, "fingerprint_metadata": null,
"task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13", "task_name": "Research a
topic to teach a kid aged 6 about math.", "agent_id": "5b1ba567-c4c3-4327-9c2e-4215c53bffb6",
"agent_role": "Researcher", "from_task": null, "from_agent": null, "messages":
[{"role": "system", "content": "You are Researcher. You''re an expert in research
and you love to learn new things.\nYour personal goal is: You research about
math.\nTo give my best complete final answer to the task respond using the exact
following format:\n\nThought: I now can give a great answer\nFinal Answer: Your
final answer must be the great and the most complete as possible, it must be
outcome described.\n\nI MUST use these formats, my job depends on it!"}, {"role":
"user", "content": "\nCurrent Task: Research a topic to teach a kid aged 6 about
math.\n\nThis is the expected criteria for your final answer: A topic, explanation,
angle, and examples.\nyou MUST return the actual complete content as the final
answer, not a summary.\n\nYou MUST follow these instructions: \n - Incorporate
specific examples and case studies in initial outputs for clearer illustration
of concepts.\n - Engage more with current events or trends to enhance relevance,
especially in fields like remote work and decision-making.\n - Invite perspectives
from experts and stakeholders to add depth to discussions on ethical implications
and collaboration in creativity.\n - Use more precise language when discussing
topics, ensuring clarity and accessibility for readers.\n - Encourage exploration
of user experiences and testimonials to provide more relatable content, especially
in education and mental health contexts.\n\nBegin! This is VERY important to
you, use the tools available and give your best Final Answer, your job depends
on it!\n\nThought:"}], "response": "I now can give a great answer \nFinal Answer:
\n\n**Topic: Introduction to Basic Addition**\n\n**Explanation:**\nBasic addition
is about combining two or more groups of things together to find out how many
there are in total. It''s one of the most fundamental concepts in math and is
a building block for all other math skills. Teaching addition to a 6-year-old
involves using simple numbers and relatable examples that help them visualize
and understand the concept of adding together.\n\n**Angle:**\nTo make the concept
of addition fun and engaging, we can use everyday objects that a child is familiar
with, such as toys, fruits, or drawing items. Incorporating visuals and interactive
elements will keep their attention and help reinforce the idea of combining
numbers.\n\n**Examples:**\n\n1. **Using Objects:**\n - **Scenario:** Let\u2019s
say you have 2 apples and your friend gives you 3 more apples.\n - **Visual**:
Arrange the apples in front of the child.\n - **Question:** \"How many apples
do you have now?\"\n - **Calculation:** 2 apples (your apples) + 3 apples
(friend''s apples) = 5 apples. \n - **Conclusion:** \"You now have 5 apples!\"\n\n2.
**Drawing Pictures:**\n - **Scenario:** Draw 4 stars on one side of the paper
and 2 stars on the other side.\n - **Activity:** Ask the child to count the
stars in the first group and then the second group.\n - **Question:** \"If
we put them together, how many stars do we have?\"\n - **Calculation:** 4
stars + 2 stars = 6 stars. \n - **Conclusion:** \"You drew 6 stars all together!\"\n\n3.
**Story Problems:**\n - **Scenario:** \"You have 5 toy cars, and you buy 3
more from the store. How many cars do you have?\"\n - **Interaction:** Create
a fun story around the toy cars (perhaps the cars are going on an adventure).\n -
**Calculation:** 5 toy cars + 3 toy cars = 8 toy cars. \n - **Conclusion:**
\"You now have a total of 8 toy cars for your adventure!\"\n\n4. **Games:**\n -
**Activity:** Play a simple game where you roll a pair of dice. Each die shows
a number.\n - **Task:** Ask the child to add the numbers on the dice together.\n -
**Example:** If one die shows 2 and the other shows 4, the child will say \u201c2
+ 4 = 6!\u201d\n - **Conclusion:** \u201cWhoever gets the highest number wins
a point!\u201d\n\nIn summary, when teaching a 6-year-old about basic addition,
it is essential to use simple numbers, real-life examples, visual aids, and
engaging activities. This ensures the child can grasp the concept while having
a fun learning experience. Making math relatable to their world helps build
a strong foundation for their future learning!", "call_type": "<LLMCallType.LLM_CALL:
''llm_call''>", "model": "gpt-4o-mini"}}, {"event_id": "7d008192-dc37-4798-99ca-d41b8674d085",
"timestamp": "2025-10-21T17:02:41.544571+00:00", "type": "memory_save_started",
"event_data": {"timestamp": "2025-10-21T17:02:41.544571+00:00", "type": "memory_save_started",
"source_fingerprint": null, "source_type": "short_term_memory", "fingerprint_metadata":
null, "task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13", "task_name": "Research
a topic to teach a kid aged 6 about math.", "agent_id": "5b1ba567-c4c3-4327-9c2e-4215c53bffb6",
"agent_role": "Researcher", "from_task": null, "from_agent": null, "value":
"I now can give a great answer \nFinal Answer: \n\n**Topic: Introduction to
Basic Addition**\n\n**Explanation:**\nBasic addition is about combining two
or more groups of things together to find out how many there are in total. It''s
one of the most fundamental concepts in math and is a building block for all
other math skills. Teaching addition to a 6-year-old involves using simple numbers
and relatable examples that help them visualize and understand the concept of
adding together.\n\n**Angle:**\nTo make the concept of addition fun and engaging,
we can use everyday objects that a child is familiar with, such as toys, fruits,
or drawing items. Incorporating visuals and interactive elements will keep their
attention and help reinforce the idea of combining numbers.\n\n**Examples:**\n\n1.
**Using Objects:**\n - **Scenario:** Let\u2019s say you have 2 apples and
your friend gives you 3 more apples.\n - **Visual**: Arrange the apples in
front of the child.\n - **Question:** \"How many apples do you have now?\"\n -
**Calculation:** 2 apples (your apples) + 3 apples (friend''s apples) = 5 apples. \n -
**Conclusion:** \"You now have 5 apples!\"\n\n2. **Drawing Pictures:**\n -
**Scenario:** Draw 4 stars on one side of the paper and 2 stars on the other
side.\n - **Activity:** Ask the child to count the stars in the first group
and then the second group.\n - **Question:** \"If we put them together, how
many stars do we have?\"\n - **Calculation:** 4 stars + 2 stars = 6 stars. \n -
**Conclusion:** \"You drew 6 stars all together!\"\n\n3. **Story Problems:**\n -
**Scenario:** \"You have 5 toy cars, and you buy 3 more from the store. How
many cars do you have?\"\n - **Interaction:** Create a fun story around the
toy cars (perhaps the cars are going on an adventure).\n - **Calculation:**
5 toy cars + 3 toy cars = 8 toy cars. \n - **Conclusion:** \"You now have
a total of 8 toy cars for your adventure!\"\n\n4. **Games:**\n - **Activity:**
Play a simple game where you roll a pair of dice. Each die shows a number.\n -
**Task:** Ask the child to add the numbers on the dice together.\n - **Example:**
If one die shows 2 and the other shows 4, the child will say \u201c2 + 4 = 6!\u201d\n -
**Conclusion:** \u201cWhoever gets the highest number wins a point!\u201d\n\nIn
summary, when teaching a 6-year-old about basic addition, it is essential to
use simple numbers, real-life examples, visual aids, and engaging activities.
This ensures the child can grasp the concept while having a fun learning experience.
Making math relatable to their world helps build a strong foundation for their
future learning!", "metadata": {"observation": "Research a topic to teach a
kid aged 6 about math."}}}, {"event_id": "6ec950dc-be30-43b0-a1e6-5ee4de464689",
"timestamp": "2025-10-21T17:02:41.556337+00:00", "type": "memory_save_completed",
"event_data": {"timestamp": "2025-10-21T17:02:41.556337+00:00", "type": "memory_save_completed",
"source_fingerprint": null, "source_type": "short_term_memory", "fingerprint_metadata":
null, "task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13", "task_name": "Research
a topic to teach a kid aged 6 about math.", "agent_id": "5b1ba567-c4c3-4327-9c2e-4215c53bffb6",
"agent_role": "Researcher", "from_task": null, "from_agent": null, "value":
"I now can give a great answer \nFinal Answer: \n\n**Topic: Introduction to
Basic Addition**\n\n**Explanation:**\nBasic addition is about combining two
or more groups of things together to find out how many there are in total. It''s
one of the most fundamental concepts in math and is a building block for all
other math skills. Teaching addition to a 6-year-old involves using simple numbers
and relatable examples that help them visualize and understand the concept of
adding together.\n\n**Angle:**\nTo make the concept of addition fun and engaging,
we can use everyday objects that a child is familiar with, such as toys, fruits,
or drawing items. Incorporating visuals and interactive elements will keep their
attention and help reinforce the idea of combining numbers.\n\n**Examples:**\n\n1.
**Using Objects:**\n - **Scenario:** Let\u2019s say you have 2 apples and
your friend gives you 3 more apples.\n - **Visual**: Arrange the apples in
front of the child.\n - **Question:** \"How many apples do you have now?\"\n -
**Calculation:** 2 apples (your apples) + 3 apples (friend''s apples) = 5 apples. \n -
**Conclusion:** \"You now have 5 apples!\"\n\n2. **Drawing Pictures:**\n -
**Scenario:** Draw 4 stars on one side of the paper and 2 stars on the other
side.\n - **Activity:** Ask the child to count the stars in the first group
and then the second group.\n - **Question:** \"If we put them together, how
many stars do we have?\"\n - **Calculation:** 4 stars + 2 stars = 6 stars. \n -
**Conclusion:** \"You drew 6 stars all together!\"\n\n3. **Story Problems:**\n -
**Scenario:** \"You have 5 toy cars, and you buy 3 more from the store. How
many cars do you have?\"\n - **Interaction:** Create a fun story around the
toy cars (perhaps the cars are going on an adventure).\n - **Calculation:**
5 toy cars + 3 toy cars = 8 toy cars. \n - **Conclusion:** \"You now have
a total of 8 toy cars for your adventure!\"\n\n4. **Games:**\n - **Activity:**
Play a simple game where you roll a pair of dice. Each die shows a number.\n -
**Task:** Ask the child to add the numbers on the dice together.\n - **Example:**
If one die shows 2 and the other shows 4, the child will say \u201c2 + 4 = 6!\u201d\n -
**Conclusion:** \u201cWhoever gets the highest number wins a point!\u201d\n\nIn
summary, when teaching a 6-year-old about basic addition, it is essential to
use simple numbers, real-life examples, visual aids, and engaging activities.
This ensures the child can grasp the concept while having a fun learning experience.
Making math relatable to their world helps build a strong foundation for their
future learning!", "metadata": {"observation": "Research a topic to teach a
kid aged 6 about math."}, "save_time_ms": 11.606931686401367}}, {"event_id":
"be3fce2b-9a2a-4222-b6b9-a03bae010470", "timestamp": "2025-10-21T17:02:41.688488+00:00",
"type": "memory_save_started", "event_data": {"timestamp": "2025-10-21T17:02:41.688488+00:00",
"type": "memory_save_started", "source_fingerprint": null, "source_type": "entity_memory",
"fingerprint_metadata": null, "task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13",
"task_name": "Research a topic to teach a kid aged 6 about math.", "agent_id":
"5b1ba567-c4c3-4327-9c2e-4215c53bffb6", "agent_role": "Researcher", "from_task":
null, "from_agent": null, "value": null, "metadata": {"entity_count": 3}}},
{"event_id": "06b57cdf-ddd2-485f-a64e-660cd6fd8318", "timestamp": "2025-10-21T17:02:41.723732+00:00",
"type": "memory_save_completed", "event_data": {"timestamp": "2025-10-21T17:02:41.723732+00:00",
"type": "memory_save_completed", "source_fingerprint": null, "source_type":
"entity_memory", "fingerprint_metadata": null, "task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13",
"task_name": "Research a topic to teach a kid aged 6 about math.", "agent_id":
"5b1ba567-c4c3-4327-9c2e-4215c53bffb6", "agent_role": "Researcher", "from_task":
null, "from_agent": null, "value": "Saved 3 entities", "metadata": {"entity_count":
3, "errors": []}, "save_time_ms": 35.18795967102051}}, {"event_id": "4598e3fd-0b62-4c2f-ab7d-f709646223b3",
"timestamp": "2025-10-21T17:02:41.723816+00:00", "type": "agent_execution_completed",
"event_data": {"agent_role": "Researcher", "agent_goal": "You research about
math.", "agent_backstory": "You''re an expert in research and you love to learn
new things."}}, {"event_id": "92695721-2c95-478e-9cce-cd058fb93df3", "timestamp":
"2025-10-21T17:02:41.723915+00:00", "type": "task_completed", "event_data":
{"task_description": "Research a topic to teach a kid aged 6 about math.", "task_name":
"Research a topic to teach a kid aged 6 about math.", "task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13",
"output_raw": "**Topic: Introduction to Basic Addition**\n\n**Explanation:**\nBasic
addition is about combining two or more groups of things together to find out
how many there are in total. It''s one of the most fundamental concepts in math
and is a building block for all other math skills. Teaching addition to a 6-year-old
involves using simple numbers and relatable examples that help them visualize
and understand the concept of adding together.\n\n**Angle:**\nTo make the concept
of addition fun and engaging, we can use everyday objects that a child is familiar
with, such as toys, fruits, or drawing items. Incorporating visuals and interactive
elements will keep their attention and help reinforce the idea of combining
numbers.\n\n**Examples:**\n\n1. **Using Objects:**\n - **Scenario:** Let\u2019s
say you have 2 apples and your friend gives you 3 more apples.\n - **Visual**:
Arrange the apples in front of the child.\n - **Question:** \"How many apples
do you have now?\"\n - **Calculation:** 2 apples (your apples) + 3 apples
(friend''s apples) = 5 apples. \n - **Conclusion:** \"You now have 5 apples!\"\n\n2.
**Drawing Pictures:**\n - **Scenario:** Draw 4 stars on one side of the paper
and 2 stars on the other side.\n - **Activity:** Ask the child to count the
stars in the first group and then the second group.\n - **Question:** \"If
we put them together, how many stars do we have?\"\n - **Calculation:** 4
stars + 2 stars = 6 stars. \n - **Conclusion:** \"You drew 6 stars all together!\"\n\n3.
**Story Problems:**\n - **Scenario:** \"You have 5 toy cars, and you buy 3
more from the store. How many cars do you have?\"\n - **Interaction:** Create
a fun story around the toy cars (perhaps the cars are going on an adventure).\n -
**Calculation:** 5 toy cars + 3 toy cars = 8 toy cars. \n - **Conclusion:**
\"You now have a total of 8 toy cars for your adventure!\"\n\n4. **Games:**\n -
**Activity:** Play a simple game where you roll a pair of dice. Each die shows
a number.\n - **Task:** Ask the child to add the numbers on the dice together.\n -
**Example:** If one die shows 2 and the other shows 4, the child will say \u201c2
+ 4 = 6!\u201d\n - **Conclusion:** \u201cWhoever gets the highest number wins
a point!\u201d\n\nIn summary, when teaching a 6-year-old about basic addition,
it is essential to use simple numbers, real-life examples, visual aids, and
engaging activities. This ensures the child can grasp the concept while having
a fun learning experience. Making math relatable to their world helps build
a strong foundation for their future learning!", "output_format": "OutputFormat.RAW",
"agent_role": "Researcher"}}, {"event_id": "1a254c34-e055-46d2-99cb-7dfdfdcefc74",
"timestamp": "2025-10-21T17:02:41.725000+00:00", "type": "crew_kickoff_completed",
"event_data": {"timestamp": "2025-10-21T17:02:41.725000+00:00", "type": "crew_kickoff_completed",
"source_fingerprint": null, "source_type": null, "fingerprint_metadata": null,
"task_id": null, "task_name": null, "agent_id": null, "agent_role": null, "crew_name":
"crew", "crew": null, "output": {"description": "Research a topic to teach a
kid aged 6 about math.", "name": "Research a topic to teach a kid aged 6 about
math.", "expected_output": "A topic, explanation, angle, and examples.", "summary":
"Research a topic to teach a kid aged 6 about...", "raw": "**Topic: Introduction
to Basic Addition**\n\n**Explanation:**\nBasic addition is about combining two
or more groups of things together to find out how many there are in total. It''s
one of the most fundamental concepts in math and is a building block for all
other math skills. Teaching addition to a 6-year-old involves using simple numbers
and relatable examples that help them visualize and understand the concept of
adding together.\n\n**Angle:**\nTo make the concept of addition fun and engaging,
we can use everyday objects that a child is familiar with, such as toys, fruits,
or drawing items. Incorporating visuals and interactive elements will keep their
attention and help reinforce the idea of combining numbers.\n\n**Examples:**\n\n1.
**Using Objects:**\n - **Scenario:** Let\u2019s say you have 2 apples and
your friend gives you 3 more apples.\n - **Visual**: Arrange the apples in
front of the child.\n - **Question:** \"How many apples do you have now?\"\n -
**Calculation:** 2 apples (your apples) + 3 apples (friend''s apples) = 5 apples. \n -
**Conclusion:** \"You now have 5 apples!\"\n\n2. **Drawing Pictures:**\n -
**Scenario:** Draw 4 stars on one side of the paper and 2 stars on the other
side.\n - **Activity:** Ask the child to count the stars in the first group
and then the second group.\n - **Question:** \"If we put them together, how
many stars do we have?\"\n - **Calculation:** 4 stars + 2 stars = 6 stars. \n -
**Conclusion:** \"You drew 6 stars all together!\"\n\n3. **Story Problems:**\n -
**Scenario:** \"You have 5 toy cars, and you buy 3 more from the store. How
many cars do you have?\"\n - **Interaction:** Create a fun story around the
toy cars (perhaps the cars are going on an adventure).\n - **Calculation:**
5 toy cars + 3 toy cars = 8 toy cars. \n - **Conclusion:** \"You now have
a total of 8 toy cars for your adventure!\"\n\n4. **Games:**\n - **Activity:**
Play a simple game where you roll a pair of dice. Each die shows a number.\n -
**Task:** Ask the child to add the numbers on the dice together.\n - **Example:**
If one die shows 2 and the other shows 4, the child will say \u201c2 + 4 = 6!\u201d\n -
**Conclusion:** \u201cWhoever gets the highest number wins a point!\u201d\n\nIn
summary, when teaching a 6-year-old about basic addition, it is essential to
use simple numbers, real-life examples, visual aids, and engaging activities.
This ensures the child can grasp the concept while having a fun learning experience.
Making math relatable to their world helps build a strong foundation for their
future learning!", "pydantic": null, "json_dict": null, "agent": "Researcher",
"output_format": "raw"}, "total_tokens": 796}}], "batch_metadata": {"events_count":
18, "batch_sequence": 1, "is_final_batch": false}}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate, zstd
Connection:
- keep-alive
Content-Length:
- '27251'
Content-Type:
- application/json
User-Agent:
- CrewAI-CLI/1.0.0
X-Crewai-Version:
- 1.0.0
method: POST
uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches/c5146cc4-dcff-45cc-a71a-b82a83b7de73/events
response:
body:
string: '{"events_created":18,"ephemeral_trace_batch_id":"ad4ac66f-7511-444c-aec3-a8c711ab4f54"}'
headers:
Connection:
- keep-alive
Content-Length:
- '87'
Content-Type:
- application/json; charset=utf-8
Date:
- Tue, 21 Oct 2025 17:02:42 GMT
cache-control:
- no-store
content-security-policy:
- 'default-src ''self'' *.app.crewai.com app.crewai.com; script-src ''self''
''unsafe-inline'' *.app.crewai.com app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts
https://www.gstatic.com https://run.pstmn.io https://apis.google.com https://apis.google.com/js/api.js
https://accounts.google.com https://accounts.google.com/gsi/client https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.min.css.map
https://*.google.com https://docs.google.com https://slides.google.com https://js.hs-scripts.com
https://js.sentry-cdn.com https://browser.sentry-cdn.com https://www.googletagmanager.com
https://js-na1.hs-scripts.com https://js.hubspot.com http://js-na1.hs-scripts.com
https://bat.bing.com https://cdn.amplitude.com https://cdn.segment.com https://d1d3n03t5zntha.cloudfront.net/
https://descriptusercontent.com https://edge.fullstory.com https://googleads.g.doubleclick.net
https://js.hs-analytics.net https://js.hs-banner.com https://js.hsadspixel.net
https://js.hscollectedforms.net https://js.usemessages.com https://snap.licdn.com
https://static.cloudflareinsights.com https://static.reo.dev https://www.google-analytics.com
https://share.descript.com/; style-src ''self'' ''unsafe-inline'' *.app.crewai.com
app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts; img-src ''self'' data:
*.app.crewai.com app.crewai.com https://zeus.tools.crewai.com https://dashboard.tools.crewai.com
https://cdn.jsdelivr.net https://forms.hsforms.com https://track.hubspot.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://www.google.com
https://www.google.com.br; font-src ''self'' data: *.app.crewai.com app.crewai.com;
connect-src ''self'' *.app.crewai.com app.crewai.com https://zeus.tools.crewai.com
https://connect.useparagon.com/ https://zeus.useparagon.com/* https://*.useparagon.com/*
https://run.pstmn.io https://connect.tools.crewai.com/ https://*.sentry.io
https://www.google-analytics.com https://edge.fullstory.com https://rs.fullstory.com
https://api.hubspot.com https://forms.hscollectedforms.net https://api.hubapi.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://google.com/pagead/form-data/16713662509
https://google.com/ccm/form-data/16713662509 https://www.google.com/ccm/collect
https://worker-actionkit.tools.crewai.com https://api.reo.dev; frame-src ''self''
*.app.crewai.com app.crewai.com https://connect.useparagon.com/ https://zeus.tools.crewai.com
https://zeus.useparagon.com/* https://connect.tools.crewai.com/ https://docs.google.com
https://drive.google.com https://slides.google.com https://accounts.google.com
https://*.google.com https://app.hubspot.com/ https://td.doubleclick.net https://www.googletagmanager.com/
https://www.youtube.com https://share.descript.com'
etag:
- W/"b64593afe178f1c8f741a9b67ffdcd3a"
expires:
- '0'
permissions-policy:
- camera=(), microphone=(self), geolocation=()
pragma:
- no-cache
referrer-policy:
- strict-origin-when-cross-origin
strict-transport-security:
- max-age=63072000; includeSubDomains
vary:
- Accept
x-content-type-options:
- nosniff
x-frame-options:
- SAMEORIGIN
x-permitted-cross-domain-policies:
- none
x-request-id:
- 65b0cea8-4eb3-4d77-a644-18bcce5cf785
x-runtime:
- '0.195421'
x-xss-protection:
- 1; mode=block
status:
code: 200
message: OK
- request:
body: '{"status": "completed", "duration_ms": 863, "final_event_count": 18}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate, zstd
Connection:
- keep-alive
Content-Length:
- '68'
Content-Type:
- application/json
User-Agent:
- CrewAI-CLI/1.0.0
X-Crewai-Version:
- 1.0.0
method: PATCH
uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches/c5146cc4-dcff-45cc-a71a-b82a83b7de73/finalize
response:
body:
string: '{"id":"ad4ac66f-7511-444c-aec3-a8c711ab4f54","ephemeral_trace_id":"c5146cc4-dcff-45cc-a71a-b82a83b7de73","execution_type":"crew","crew_name":"crew","flow_name":null,"status":"completed","duration_ms":863,"crewai_version":"1.0.0","total_events":18,"execution_context":{"crew_name":"crew","flow_name":null,"privacy_level":"standard","crewai_version":"1.0.0","crew_fingerprint":null},"created_at":"2025-10-21T17:02:41.683Z","updated_at":"2025-10-21T17:02:42.862Z","access_code":"TRACE-41ea39cb70","user_identifier":null}'
headers:
Connection:
- keep-alive
Content-Length:
- '517'
Content-Type:
- application/json; charset=utf-8
Date:
- Tue, 21 Oct 2025 17:02:42 GMT
cache-control:
- no-store
content-security-policy:
- 'default-src ''self'' *.app.crewai.com app.crewai.com; script-src ''self''
''unsafe-inline'' *.app.crewai.com app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts
https://www.gstatic.com https://run.pstmn.io https://apis.google.com https://apis.google.com/js/api.js
https://accounts.google.com https://accounts.google.com/gsi/client https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.min.css.map
https://*.google.com https://docs.google.com https://slides.google.com https://js.hs-scripts.com
https://js.sentry-cdn.com https://browser.sentry-cdn.com https://www.googletagmanager.com
https://js-na1.hs-scripts.com https://js.hubspot.com http://js-na1.hs-scripts.com
https://bat.bing.com https://cdn.amplitude.com https://cdn.segment.com https://d1d3n03t5zntha.cloudfront.net/
https://descriptusercontent.com https://edge.fullstory.com https://googleads.g.doubleclick.net
https://js.hs-analytics.net https://js.hs-banner.com https://js.hsadspixel.net
https://js.hscollectedforms.net https://js.usemessages.com https://snap.licdn.com
https://static.cloudflareinsights.com https://static.reo.dev https://www.google-analytics.com
https://share.descript.com/; style-src ''self'' ''unsafe-inline'' *.app.crewai.com
app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts; img-src ''self'' data:
*.app.crewai.com app.crewai.com https://zeus.tools.crewai.com https://dashboard.tools.crewai.com
https://cdn.jsdelivr.net https://forms.hsforms.com https://track.hubspot.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://www.google.com
https://www.google.com.br; font-src ''self'' data: *.app.crewai.com app.crewai.com;
connect-src ''self'' *.app.crewai.com app.crewai.com https://zeus.tools.crewai.com
https://connect.useparagon.com/ https://zeus.useparagon.com/* https://*.useparagon.com/*
https://run.pstmn.io https://connect.tools.crewai.com/ https://*.sentry.io
https://www.google-analytics.com https://edge.fullstory.com https://rs.fullstory.com
https://api.hubspot.com https://forms.hscollectedforms.net https://api.hubapi.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://google.com/pagead/form-data/16713662509
https://google.com/ccm/form-data/16713662509 https://www.google.com/ccm/collect
https://worker-actionkit.tools.crewai.com https://api.reo.dev; frame-src ''self''
*.app.crewai.com app.crewai.com https://connect.useparagon.com/ https://zeus.tools.crewai.com
https://zeus.useparagon.com/* https://connect.tools.crewai.com/ https://docs.google.com
https://drive.google.com https://slides.google.com https://accounts.google.com
https://*.google.com https://app.hubspot.com/ https://td.doubleclick.net https://www.googletagmanager.com/
https://www.youtube.com https://share.descript.com'
etag:
- W/"10c699106e5c1f4c4a75d76283291bbe"
expires:
- '0'
permissions-policy:
- camera=(), microphone=(self), geolocation=()
pragma:
- no-cache
referrer-policy:
- strict-origin-when-cross-origin
strict-transport-security:
- max-age=63072000; includeSubDomains
vary:
- Accept
x-content-type-options:
- nosniff
x-frame-options:
- SAMEORIGIN
x-permitted-cross-domain-policies:
- none
x-request-id:
- 249b4327-c151-4c5f-84b7-16d1465ca035
x-runtime:
- '0.357280'
x-xss-protection:
- 1; mode=block
status:
code: 200
message: OK
version: 1


@@ -0,0 +1,101 @@
interactions:
- request:
body: '{"trace_id": "a15aa156-bf49-4b5e-90aa-b1d749de00f7", "execution_type":
"crew", "user_identifier": null, "execution_context": {"crew_fingerprint": null,
"crew_name": "crew", "flow_name": null, "crewai_version": "1.0.0", "privacy_level":
"standard"}, "execution_metadata": {"expected_duration_estimate": 300, "agent_count":
0, "task_count": 0, "flow_method_count": 0, "execution_started_at": "2025-10-21T14:16:55.776884+00:00"},
"ephemeral_trace_id": "a15aa156-bf49-4b5e-90aa-b1d749de00f7"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate, zstd
Connection:
- keep-alive
Content-Length:
- '488'
Content-Type:
- application/json
User-Agent:
- CrewAI-CLI/1.0.0
X-Crewai-Version:
- 1.0.0
method: POST
uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches
response:
body:
string: '{"id":"802f2398-9dac-459b-95c5-83c1121b1c4a","ephemeral_trace_id":"a15aa156-bf49-4b5e-90aa-b1d749de00f7","execution_type":"crew","crew_name":"crew","flow_name":null,"status":"running","duration_ms":null,"crewai_version":"1.0.0","total_events":0,"execution_context":{"crew_fingerprint":null,"crew_name":"crew","flow_name":null,"crewai_version":"1.0.0","privacy_level":"standard"},"created_at":"2025-10-21T14:16:56.502Z","updated_at":"2025-10-21T14:16:56.502Z","access_code":"TRACE-a7c706bf5c","user_identifier":null}'
headers:
Connection:
- keep-alive
Content-Length:
- '515'
Content-Type:
- application/json; charset=utf-8
Date:
- Tue, 21 Oct 2025 14:16:56 GMT
cache-control:
- no-store
content-security-policy:
- 'default-src ''self'' *.app.crewai.com app.crewai.com; script-src ''self''
''unsafe-inline'' *.app.crewai.com app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts
https://www.gstatic.com https://run.pstmn.io https://apis.google.com https://apis.google.com/js/api.js
https://accounts.google.com https://accounts.google.com/gsi/client https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.min.css.map
https://*.google.com https://docs.google.com https://slides.google.com https://js.hs-scripts.com
https://js.sentry-cdn.com https://browser.sentry-cdn.com https://www.googletagmanager.com
https://js-na1.hs-scripts.com https://js.hubspot.com http://js-na1.hs-scripts.com
https://bat.bing.com https://cdn.amplitude.com https://cdn.segment.com https://d1d3n03t5zntha.cloudfront.net/
https://descriptusercontent.com https://edge.fullstory.com https://googleads.g.doubleclick.net
https://js.hs-analytics.net https://js.hs-banner.com https://js.hsadspixel.net
https://js.hscollectedforms.net https://js.usemessages.com https://snap.licdn.com
https://static.cloudflareinsights.com https://static.reo.dev https://www.google-analytics.com
https://share.descript.com/; style-src ''self'' ''unsafe-inline'' *.app.crewai.com
app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts; img-src ''self'' data:
*.app.crewai.com app.crewai.com https://zeus.tools.crewai.com https://dashboard.tools.crewai.com
https://cdn.jsdelivr.net https://forms.hsforms.com https://track.hubspot.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://www.google.com
https://www.google.com.br; font-src ''self'' data: *.app.crewai.com app.crewai.com;
connect-src ''self'' *.app.crewai.com app.crewai.com https://zeus.tools.crewai.com
https://connect.useparagon.com/ https://zeus.useparagon.com/* https://*.useparagon.com/*
https://run.pstmn.io https://connect.tools.crewai.com/ https://*.sentry.io
https://www.google-analytics.com https://edge.fullstory.com https://rs.fullstory.com
https://api.hubspot.com https://forms.hscollectedforms.net https://api.hubapi.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://google.com/pagead/form-data/16713662509
https://google.com/ccm/form-data/16713662509 https://www.google.com/ccm/collect
https://worker-actionkit.tools.crewai.com https://api.reo.dev; frame-src ''self''
*.app.crewai.com app.crewai.com https://connect.useparagon.com/ https://zeus.tools.crewai.com
https://zeus.useparagon.com/* https://connect.tools.crewai.com/ https://docs.google.com
https://drive.google.com https://slides.google.com https://accounts.google.com
https://*.google.com https://app.hubspot.com/ https://td.doubleclick.net https://www.googletagmanager.com/
https://www.youtube.com https://share.descript.com'
etag:
- W/"d71af423072b7e3a94f3fa25c73280e2"
expires:
- '0'
permissions-policy:
- camera=(), microphone=(self), geolocation=()
pragma:
- no-cache
referrer-policy:
- strict-origin-when-cross-origin
strict-transport-security:
- max-age=63072000; includeSubDomains
vary:
- Accept
x-content-type-options:
- nosniff
x-frame-options:
- SAMEORIGIN
x-permitted-cross-domain-policies:
- none
x-request-id:
- d36aca42-3eee-4eb9-a857-025197641016
x-runtime:
- '0.091579'
x-xss-protection:
- 1; mode=block
status:
code: 201
message: Created
version: 1


@@ -0,0 +1,101 @@
interactions:
- request:
body: '{"trace_id": "06a3540b-6fa4-4066-bcd0-7eb6f9370f19", "execution_type":
"crew", "user_identifier": null, "execution_context": {"crew_fingerprint": null,
"crew_name": "crew", "flow_name": null, "crewai_version": "1.0.0", "privacy_level":
"standard"}, "execution_metadata": {"expected_duration_estimate": 300, "agent_count":
0, "task_count": 0, "flow_method_count": 0, "execution_started_at": "2025-10-21T14:17:40.843919+00:00"},
"ephemeral_trace_id": "06a3540b-6fa4-4066-bcd0-7eb6f9370f19"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate, zstd
Connection:
- keep-alive
Content-Length:
- '488'
Content-Type:
- application/json
User-Agent:
- CrewAI-CLI/1.0.0
X-Crewai-Version:
- 1.0.0
method: POST
uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches
response:
body:
string: '{"id":"f01eb268-68a3-4711-b457-165dbfbf28d8","ephemeral_trace_id":"06a3540b-6fa4-4066-bcd0-7eb6f9370f19","execution_type":"crew","crew_name":"crew","flow_name":null,"status":"running","duration_ms":null,"crewai_version":"1.0.0","total_events":0,"execution_context":{"crew_fingerprint":null,"crew_name":"crew","flow_name":null,"crewai_version":"1.0.0","privacy_level":"standard"},"created_at":"2025-10-21T14:17:41.359Z","updated_at":"2025-10-21T14:17:41.359Z","access_code":"TRACE-91b88a5fb5","user_identifier":null}'
headers:
Connection:
- keep-alive
Content-Length:
- '515'
Content-Type:
- application/json; charset=utf-8
Date:
- Tue, 21 Oct 2025 14:17:41 GMT
cache-control:
- no-store
content-security-policy:
- 'default-src ''self'' *.app.crewai.com app.crewai.com; script-src ''self''
''unsafe-inline'' *.app.crewai.com app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts
https://www.gstatic.com https://run.pstmn.io https://apis.google.com https://apis.google.com/js/api.js
https://accounts.google.com https://accounts.google.com/gsi/client https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.min.css.map
https://*.google.com https://docs.google.com https://slides.google.com https://js.hs-scripts.com
https://js.sentry-cdn.com https://browser.sentry-cdn.com https://www.googletagmanager.com
https://js-na1.hs-scripts.com https://js.hubspot.com http://js-na1.hs-scripts.com
https://bat.bing.com https://cdn.amplitude.com https://cdn.segment.com https://d1d3n03t5zntha.cloudfront.net/
https://descriptusercontent.com https://edge.fullstory.com https://googleads.g.doubleclick.net
https://js.hs-analytics.net https://js.hs-banner.com https://js.hsadspixel.net
https://js.hscollectedforms.net https://js.usemessages.com https://snap.licdn.com
https://static.cloudflareinsights.com https://static.reo.dev https://www.google-analytics.com
https://share.descript.com/; style-src ''self'' ''unsafe-inline'' *.app.crewai.com
app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts; img-src ''self'' data:
*.app.crewai.com app.crewai.com https://zeus.tools.crewai.com https://dashboard.tools.crewai.com
https://cdn.jsdelivr.net https://forms.hsforms.com https://track.hubspot.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://www.google.com
https://www.google.com.br; font-src ''self'' data: *.app.crewai.com app.crewai.com;
connect-src ''self'' *.app.crewai.com app.crewai.com https://zeus.tools.crewai.com
https://connect.useparagon.com/ https://zeus.useparagon.com/* https://*.useparagon.com/*
https://run.pstmn.io https://connect.tools.crewai.com/ https://*.sentry.io
https://www.google-analytics.com https://edge.fullstory.com https://rs.fullstory.com
https://api.hubspot.com https://forms.hscollectedforms.net https://api.hubapi.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://google.com/pagead/form-data/16713662509
https://google.com/ccm/form-data/16713662509 https://www.google.com/ccm/collect
https://worker-actionkit.tools.crewai.com https://api.reo.dev; frame-src ''self''
*.app.crewai.com app.crewai.com https://connect.useparagon.com/ https://zeus.tools.crewai.com
https://zeus.useparagon.com/* https://connect.tools.crewai.com/ https://docs.google.com
https://drive.google.com https://slides.google.com https://accounts.google.com
https://*.google.com https://app.hubspot.com/ https://td.doubleclick.net https://www.googletagmanager.com/
https://www.youtube.com https://share.descript.com'
etag:
- W/"0af4d78caa068ff6a18bc41ed4176d51"
expires:
- '0'
permissions-policy:
- camera=(), microphone=(self), geolocation=()
pragma:
- no-cache
referrer-policy:
- strict-origin-when-cross-origin
strict-transport-security:
- max-age=63072000; includeSubDomains
vary:
- Accept
x-content-type-options:
- nosniff
x-frame-options:
- SAMEORIGIN
x-permitted-cross-domain-policies:
- none
x-request-id:
- a1283599-535d-4ae2-99d8-a5b4987bf951
x-runtime:
- '0.088882'
x-xss-protection:
- 1; mode=block
status:
code: 201
message: Created
version: 1


@@ -0,0 +1,101 @@
interactions:
- request:
body: '{"trace_id": "48baf84b-6f03-4e2e-b69a-36f2e6298da6", "execution_type":
"crew", "user_identifier": null, "execution_context": {"crew_fingerprint": null,
"crew_name": "crew", "flow_name": null, "crewai_version": "1.0.0", "privacy_level":
"standard"}, "execution_metadata": {"expected_duration_estimate": 300, "agent_count":
0, "task_count": 0, "flow_method_count": 0, "execution_started_at": "2025-10-21T14:19:57.504164+00:00"},
"ephemeral_trace_id": "48baf84b-6f03-4e2e-b69a-36f2e6298da6"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate, zstd
Connection:
- keep-alive
Content-Length:
- '488'
Content-Type:
- application/json
User-Agent:
- CrewAI-CLI/1.0.0
X-Crewai-Version:
- 1.0.0
method: POST
uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches
response:
body:
string: '{"id":"789714f7-716c-4a0c-a6f6-3142aac81766","ephemeral_trace_id":"48baf84b-6f03-4e2e-b69a-36f2e6298da6","execution_type":"crew","crew_name":"crew","flow_name":null,"status":"running","duration_ms":null,"crewai_version":"1.0.0","total_events":0,"execution_context":{"crew_fingerprint":null,"crew_name":"crew","flow_name":null,"crewai_version":"1.0.0","privacy_level":"standard"},"created_at":"2025-10-21T14:19:57.950Z","updated_at":"2025-10-21T14:19:57.950Z","access_code":"TRACE-2e6a3ca401","user_identifier":null}'
headers:
Connection:
- keep-alive
Content-Length:
- '515'
Content-Type:
- application/json; charset=utf-8
Date:
- Tue, 21 Oct 2025 14:19:57 GMT
cache-control:
- no-store
content-security-policy:
- 'default-src ''self'' *.app.crewai.com app.crewai.com; script-src ''self''
''unsafe-inline'' *.app.crewai.com app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts
https://www.gstatic.com https://run.pstmn.io https://apis.google.com https://apis.google.com/js/api.js
https://accounts.google.com https://accounts.google.com/gsi/client https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.min.css.map
https://*.google.com https://docs.google.com https://slides.google.com https://js.hs-scripts.com
https://js.sentry-cdn.com https://browser.sentry-cdn.com https://www.googletagmanager.com
https://js-na1.hs-scripts.com https://js.hubspot.com http://js-na1.hs-scripts.com
https://bat.bing.com https://cdn.amplitude.com https://cdn.segment.com https://d1d3n03t5zntha.cloudfront.net/
https://descriptusercontent.com https://edge.fullstory.com https://googleads.g.doubleclick.net
https://js.hs-analytics.net https://js.hs-banner.com https://js.hsadspixel.net
https://js.hscollectedforms.net https://js.usemessages.com https://snap.licdn.com
https://static.cloudflareinsights.com https://static.reo.dev https://www.google-analytics.com
https://share.descript.com/; style-src ''self'' ''unsafe-inline'' *.app.crewai.com
app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts; img-src ''self'' data:
*.app.crewai.com app.crewai.com https://zeus.tools.crewai.com https://dashboard.tools.crewai.com
https://cdn.jsdelivr.net https://forms.hsforms.com https://track.hubspot.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://www.google.com
https://www.google.com.br; font-src ''self'' data: *.app.crewai.com app.crewai.com;
connect-src ''self'' *.app.crewai.com app.crewai.com https://zeus.tools.crewai.com
https://connect.useparagon.com/ https://zeus.useparagon.com/* https://*.useparagon.com/*
https://run.pstmn.io https://connect.tools.crewai.com/ https://*.sentry.io
https://www.google-analytics.com https://edge.fullstory.com https://rs.fullstory.com
https://api.hubspot.com https://forms.hscollectedforms.net https://api.hubapi.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://google.com/pagead/form-data/16713662509
https://google.com/ccm/form-data/16713662509 https://www.google.com/ccm/collect
https://worker-actionkit.tools.crewai.com https://api.reo.dev; frame-src ''self''
*.app.crewai.com app.crewai.com https://connect.useparagon.com/ https://zeus.tools.crewai.com
https://zeus.useparagon.com/* https://connect.tools.crewai.com/ https://docs.google.com
https://drive.google.com https://slides.google.com https://accounts.google.com
https://*.google.com https://app.hubspot.com/ https://td.doubleclick.net https://www.googletagmanager.com/
https://www.youtube.com https://share.descript.com'
etag:
- W/"b8a7fe4a39ec1a557b05f0e3526227b0"
expires:
- '0'
permissions-policy:
- camera=(), microphone=(self), geolocation=()
pragma:
- no-cache
referrer-policy:
- strict-origin-when-cross-origin
strict-transport-security:
- max-age=63072000; includeSubDomains
vary:
- Accept
x-content-type-options:
- nosniff
x-frame-options:
- SAMEORIGIN
x-permitted-cross-domain-policies:
- none
x-request-id:
- c84a3ad3-2bf4-431c-8a1b-c117a015fa60
x-runtime:
- '0.087470'
x-xss-protection:
- 1; mode=block
status:
code: 201
message: Created
version: 1

@@ -0,0 +1,477 @@
interactions:
- request:
body: '{"trace_id": "fabb9e5b-b761-4b21-8514-cd4d3c937d09", "execution_type":
"flow", "user_identifier": null, "execution_context": {"crew_fingerprint": null,
"crew_name": null, "flow_name": "MyFlow", "crewai_version": "1.0.0", "privacy_level":
"standard"}, "execution_metadata": {"expected_duration_estimate": 300, "agent_count":
0, "task_count": 0, "flow_method_count": 0, "execution_started_at": "2025-10-21T14:26:37.988214+00:00"},
"ephemeral_trace_id": "fabb9e5b-b761-4b21-8514-cd4d3c937d09"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate, zstd
Connection:
- keep-alive
Content-Length:
- '490'
Content-Type:
- application/json
User-Agent:
- CrewAI-CLI/1.0.0
X-Crewai-Version:
- 1.0.0
method: POST
uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches
response:
body:
string: '{"id":"fe167fd8-d9b8-4faa-bb28-ed13f10e8d2e","ephemeral_trace_id":"fabb9e5b-b761-4b21-8514-cd4d3c937d09","execution_type":"flow","crew_name":null,"flow_name":"MyFlow","status":"running","duration_ms":null,"crewai_version":"1.0.0","total_events":0,"execution_context":{"crew_fingerprint":null,"crew_name":null,"flow_name":"MyFlow","crewai_version":"1.0.0","privacy_level":"standard"},"created_at":"2025-10-21T14:26:38.410Z","updated_at":"2025-10-21T14:26:38.410Z","access_code":"TRACE-8b1efdc7b5","user_identifier":null}'
headers:
Connection:
- keep-alive
Content-Length:
- '519'
Content-Type:
- application/json; charset=utf-8
Date:
- Tue, 21 Oct 2025 14:26:38 GMT
cache-control:
- no-store
content-security-policy:
- 'default-src ''self'' *.app.crewai.com app.crewai.com; script-src ''self''
''unsafe-inline'' *.app.crewai.com app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts
https://www.gstatic.com https://run.pstmn.io https://apis.google.com https://apis.google.com/js/api.js
https://accounts.google.com https://accounts.google.com/gsi/client https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.min.css.map
https://*.google.com https://docs.google.com https://slides.google.com https://js.hs-scripts.com
https://js.sentry-cdn.com https://browser.sentry-cdn.com https://www.googletagmanager.com
https://js-na1.hs-scripts.com https://js.hubspot.com http://js-na1.hs-scripts.com
https://bat.bing.com https://cdn.amplitude.com https://cdn.segment.com https://d1d3n03t5zntha.cloudfront.net/
https://descriptusercontent.com https://edge.fullstory.com https://googleads.g.doubleclick.net
https://js.hs-analytics.net https://js.hs-banner.com https://js.hsadspixel.net
https://js.hscollectedforms.net https://js.usemessages.com https://snap.licdn.com
https://static.cloudflareinsights.com https://static.reo.dev https://www.google-analytics.com
https://share.descript.com/; style-src ''self'' ''unsafe-inline'' *.app.crewai.com
app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts; img-src ''self'' data:
*.app.crewai.com app.crewai.com https://zeus.tools.crewai.com https://dashboard.tools.crewai.com
https://cdn.jsdelivr.net https://forms.hsforms.com https://track.hubspot.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://www.google.com
https://www.google.com.br; font-src ''self'' data: *.app.crewai.com app.crewai.com;
connect-src ''self'' *.app.crewai.com app.crewai.com https://zeus.tools.crewai.com
https://connect.useparagon.com/ https://zeus.useparagon.com/* https://*.useparagon.com/*
https://run.pstmn.io https://connect.tools.crewai.com/ https://*.sentry.io
https://www.google-analytics.com https://edge.fullstory.com https://rs.fullstory.com
https://api.hubspot.com https://forms.hscollectedforms.net https://api.hubapi.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://google.com/pagead/form-data/16713662509
https://google.com/ccm/form-data/16713662509 https://www.google.com/ccm/collect
https://worker-actionkit.tools.crewai.com https://api.reo.dev; frame-src ''self''
*.app.crewai.com app.crewai.com https://connect.useparagon.com/ https://zeus.tools.crewai.com
https://zeus.useparagon.com/* https://connect.tools.crewai.com/ https://docs.google.com
https://drive.google.com https://slides.google.com https://accounts.google.com
https://*.google.com https://app.hubspot.com/ https://td.doubleclick.net https://www.googletagmanager.com/
https://www.youtube.com https://share.descript.com'
etag:
- W/"3ccda5b1eea02b7d1edeeb99b7968370"
expires:
- '0'
permissions-policy:
- camera=(), microphone=(self), geolocation=()
pragma:
- no-cache
referrer-policy:
- strict-origin-when-cross-origin
strict-transport-security:
- max-age=63072000; includeSubDomains
vary:
- Accept
x-content-type-options:
- nosniff
x-frame-options:
- SAMEORIGIN
x-permitted-cross-domain-policies:
- none
x-request-id:
- b3682ed8-18b9-485a-8171-74e61d843589
x-runtime:
- '0.064552'
x-xss-protection:
- 1; mode=block
status:
code: 201
message: Created
- request:
body: '{"events": [{"event_id": "c6d1cb65-7e55-40a5-9319-766591bcd1ae", "timestamp":
"2025-10-21T14:26:37.986131+00:00", "type": "flow_started", "event_data": {"timestamp":
"2025-10-21T14:26:37.986131+00:00", "type": "flow_started", "source_fingerprint":
null, "source_type": null, "fingerprint_metadata": null, "task_id": null, "task_name":
null, "agent_id": null, "agent_role": null, "flow_name": "MyFlow", "inputs":
null}}, {"event_id": "17621b79-f10a-497f-9eb1-c567f8ca43f1", "timestamp": "2025-10-21T14:26:37.986545+00:00",
"type": "method_execution_started", "event_data": {"timestamp": "2025-10-21T14:26:37.986545+00:00",
"type": "method_execution_started", "source_fingerprint": null, "source_type":
null, "fingerprint_metadata": null, "task_id": null, "task_name": null, "agent_id":
null, "agent_role": null, "flow_name": "MyFlow", "method_name": "start", "state":
{"id": "c1c5fe64-dcf4-43b2-9081-c4a513770607"}, "params": {}}}, {"event_id":
"60654efd-c07b-4171-a75d-9b943863e4bf", "timestamp": "2025-10-21T14:26:37.989677+00:00",
"type": "method_execution_finished", "event_data": {"timestamp": "2025-10-21T14:26:37.989677+00:00",
"type": "method_execution_finished", "source_fingerprint": null, "source_type":
null, "fingerprint_metadata": null, "task_id": null, "task_name": null, "agent_id":
null, "agent_role": null, "flow_name": "MyFlow", "method_name": "start", "result":
{"parent_flow": "<tests.test_crew.test_sets_parent_flow_when_inside_flow.<locals>.MyFlow
object at 0x12b835520>", "name": "crew", "cache": true, "tasks": [{"used_tools":
"0", "tools_errors": "0", "delegations": "0", "i18n": "{''prompt_file'': None}",
"name": "None", "prompt_context": "None", "description": "''Task 1''", "expected_output":
"''output''", "config": "None", "callback": "None", "agent": "{''id'': UUID(''7fadf38e-4090-4643-9fcf-bd24b07e05e1''),
''role'': ''Researcher'', ''goal'': ''Make the best research and analysis on
content about AI and AI agents'', ''backstory'': \"You''re an expert researcher,
specialized in technology, software engineering, AI and startups. You work as
a freelancer and is now working on doing research and analysis for a new customer.\",
''cache'': True, ''verbose'': False, ''max_rpm'': None, ''allow_delegation'':
False, ''tools'': [], ''max_iter'': 25, ''agent_executor'': <crewai.agents.crew_agent_executor.CrewAgentExecutor
object at 0x12b973fe0>, ''llm'': <crewai.llms.providers.openai.completion.OpenAICompletion
object at 0x12b910290>, ''crew'': None, ''i18n'': {''prompt_file'': None}, ''cache_handler'':
{}, ''tools_handler'': <crewai.agents.tools_handler.ToolsHandler object at 0x12b9934d0>,
''tools_results'': [], ''max_tokens'': None, ''knowledge'': None, ''knowledge_sources'':
None, ''knowledge_storage'': None, ''security_config'': {''fingerprint'': {''metadata'':
{}}}, ''callbacks'': [], ''adapted_agent'': False, ''knowledge_config'': None,
''apps'': None, ''mcps'': None}", "context": "NOT_SPECIFIED", "async_execution":
"False", "output_json": "None", "output_pydantic": "None", "output_file": "None",
"create_directory": "True", "output": "None", "tools": "[]", "security_config":
"{''fingerprint'': {''metadata'': {}}}", "id": "UUID(''d8c03737-58d2-4c8e-929e-d9e0bf022a3b'')",
"human_input": "False", "markdown": "False", "converter_cls": "None", "processed_by_agents":
"set()", "guardrail": "None", "guardrails": "None", "max_retries": "None", "guardrail_max_retries":
"3", "retry_count": "0", "start_time": "None", "end_time": "None", "allow_crewai_trigger_context":
"None"}, {"used_tools": "0", "tools_errors": "0", "delegations": "0", "i18n":
"{''prompt_file'': None}", "name": "None", "prompt_context": "None", "description":
"''Task 2''", "expected_output": "''output''", "config": "None", "callback":
"None", "agent": "{''id'': UUID(''49785d1e-2cfa-4967-a503-da9f6fea46df''), ''role'':
''Senior Writer'', ''goal'': ''Write the best content about AI and AI agents.'',
''backstory'': \"You''re a senior writer, specialized in technology, software
engineering, AI and startups. You work as a freelancer and are now working on
writing content for a new customer.\", ''cache'': True, ''verbose'': False,
''max_rpm'': None, ''allow_delegation'': False, ''tools'': [], ''max_iter'':
25, ''agent_executor'': <crewai.agents.crew_agent_executor.CrewAgentExecutor
object at 0x12b7bbbf0>, ''llm'': <crewai.llms.providers.openai.completion.OpenAICompletion
object at 0x12b9903b0>, ''crew'': None, ''i18n'': {''prompt_file'': None}, ''cache_handler'':
{}, ''tools_handler'': <crewai.agents.tools_handler.ToolsHandler object at 0x12b631bb0>,
''tools_results'': [], ''max_tokens'': None, ''knowledge'': None, ''knowledge_sources'':
None, ''knowledge_storage'': None, ''security_config'': {''fingerprint'': {''metadata'':
{}}}, ''callbacks'': [], ''adapted_agent'': False, ''knowledge_config'': None,
''apps'': None, ''mcps'': None}", "context": "NOT_SPECIFIED", "async_execution":
"False", "output_json": "None", "output_pydantic": "None", "output_file": "None",
"create_directory": "True", "output": "None", "tools": "[]", "security_config":
"{''fingerprint'': {''metadata'': {}}}", "id": "UUID(''530f34b2-29dd-4ba8-a4e0-f68ba958890b'')",
"human_input": "False", "markdown": "False", "converter_cls": "None", "processed_by_agents":
"set()", "guardrail": "None", "guardrails": "None", "max_retries": "None", "guardrail_max_retries":
"3", "retry_count": "0", "start_time": "None", "end_time": "None", "allow_crewai_trigger_context":
"None"}], "agents": [{"id": "UUID(''7fadf38e-4090-4643-9fcf-bd24b07e05e1'')",
"role": "''Researcher''", "goal": "''Make the best research and analysis on
content about AI and AI agents''", "backstory": "\"You''re an expert researcher,
specialized in technology, software engineering, AI and startups. You work as
a freelancer and is now working on doing research and analysis for a new customer.\"",
"cache": "True", "verbose": "False", "max_rpm": "None", "allow_delegation":
"False", "tools": "[]", "max_iter": "25", "agent_executor": "<crewai.agents.crew_agent_executor.CrewAgentExecutor
object at 0x12b973fe0>", "llm": "<crewai.llms.providers.openai.completion.OpenAICompletion
object at 0x12b910290>", "crew": "None", "i18n": "{''prompt_file'': None}",
"cache_handler": "{}", "tools_handler": "<crewai.agents.tools_handler.ToolsHandler
object at 0x12b9934d0>", "tools_results": "[]", "max_tokens": "None", "knowledge":
"None", "knowledge_sources": "None", "knowledge_storage": "None", "security_config":
"{''fingerprint'': {''metadata'': {}}}", "callbacks": "[]", "adapted_agent":
"False", "knowledge_config": "None", "apps": "None", "mcps": "None"}, {"id":
"UUID(''49785d1e-2cfa-4967-a503-da9f6fea46df'')", "role": "''Senior Writer''",
"goal": "''Write the best content about AI and AI agents.''", "backstory": "\"You''re
a senior writer, specialized in technology, software engineering, AI and startups.
You work as a freelancer and are now working on writing content for a new customer.\"",
"cache": "True", "verbose": "False", "max_rpm": "None", "allow_delegation":
"False", "tools": "[]", "max_iter": "25", "agent_executor": "<crewai.agents.crew_agent_executor.CrewAgentExecutor
object at 0x12b7bbbf0>", "llm": "<crewai.llms.providers.openai.completion.OpenAICompletion
object at 0x12b9903b0>", "crew": "None", "i18n": "{''prompt_file'': None}",
"cache_handler": "{}", "tools_handler": "<crewai.agents.tools_handler.ToolsHandler
object at 0x12b631bb0>", "tools_results": "[]", "max_tokens": "None", "knowledge":
"None", "knowledge_sources": "None", "knowledge_storage": "None", "security_config":
"{''fingerprint'': {''metadata'': {}}}", "callbacks": "[]", "adapted_agent":
"False", "knowledge_config": "None", "apps": "None", "mcps": "None"}], "process":
"sequential", "verbose": false, "memory": false, "short_term_memory": null,
"long_term_memory": null, "entity_memory": null, "external_memory": null, "embedder":
null, "usage_metrics": null, "manager_llm": null, "manager_agent": null, "function_calling_llm":
null, "config": null, "id": "5a10d62b-4eab-47e7-b91f-6878b682acb5", "share_crew":
false, "step_callback": null, "task_callback": null, "before_kickoff_callbacks":
[], "after_kickoff_callbacks": [], "max_rpm": null, "prompt_file": null, "output_log_file":
null, "planning": false, "planning_llm": null, "task_execution_output_json_files":
null, "execution_logs": [], "knowledge_sources": null, "chat_llm": null, "knowledge":
null, "security_config": {"fingerprint": {"metadata": "{}"}}, "token_usage":
null, "tracing": false}, "state": {"id": "c1c5fe64-dcf4-43b2-9081-c4a513770607"}}},
{"event_id": "191d1d3e-8d9d-40cf-a7e4-48c44b05c2d3", "timestamp": "2025-10-21T14:26:37.989913+00:00",
"type": "flow_finished", "event_data": {"timestamp": "2025-10-21T14:26:37.989913+00:00",
"type": "flow_finished", "source_fingerprint": null, "source_type": null, "fingerprint_metadata":
null, "task_id": null, "task_name": null, "agent_id": null, "agent_role": null,
"flow_name": "MyFlow", "result": {"parent_flow": "<tests.test_crew.test_sets_parent_flow_when_inside_flow.<locals>.MyFlow
object at 0x12b835520>", "name": "crew", "cache": true, "tasks": [{"used_tools":
"0", "tools_errors": "0", "delegations": "0", "i18n": "{''prompt_file'': None}",
"name": "None", "prompt_context": "None", "description": "''Task 1''", "expected_output":
"''output''", "config": "None", "callback": "None", "agent": "{''id'': UUID(''7fadf38e-4090-4643-9fcf-bd24b07e05e1''),
''role'': ''Researcher'', ''goal'': ''Make the best research and analysis on
content about AI and AI agents'', ''backstory'': \"You''re an expert researcher,
specialized in technology, software engineering, AI and startups. You work as
a freelancer and is now working on doing research and analysis for a new customer.\",
''cache'': True, ''verbose'': False, ''max_rpm'': None, ''allow_delegation'':
False, ''tools'': [], ''max_iter'': 25, ''agent_executor'': <crewai.agents.crew_agent_executor.CrewAgentExecutor
object at 0x12b973fe0>, ''llm'': <crewai.llms.providers.openai.completion.OpenAICompletion
object at 0x12b910290>, ''crew'': None, ''i18n'': {''prompt_file'': None}, ''cache_handler'':
{}, ''tools_handler'': <crewai.agents.tools_handler.ToolsHandler object at 0x12b9934d0>,
''tools_results'': [], ''max_tokens'': None, ''knowledge'': None, ''knowledge_sources'':
None, ''knowledge_storage'': None, ''security_config'': {''fingerprint'': {''metadata'':
{}}}, ''callbacks'': [], ''adapted_agent'': False, ''knowledge_config'': None,
''apps'': None, ''mcps'': None}", "context": "NOT_SPECIFIED", "async_execution":
"False", "output_json": "None", "output_pydantic": "None", "output_file": "None",
"create_directory": "True", "output": "None", "tools": "[]", "security_config":
"{''fingerprint'': {''metadata'': {}}}", "id": "UUID(''d8c03737-58d2-4c8e-929e-d9e0bf022a3b'')",
"human_input": "False", "markdown": "False", "converter_cls": "None", "processed_by_agents":
"set()", "guardrail": "None", "guardrails": "None", "max_retries": "None", "guardrail_max_retries":
"3", "retry_count": "0", "start_time": "None", "end_time": "None", "allow_crewai_trigger_context":
"None"}, {"used_tools": "0", "tools_errors": "0", "delegations": "0", "i18n":
"{''prompt_file'': None}", "name": "None", "prompt_context": "None", "description":
"''Task 2''", "expected_output": "''output''", "config": "None", "callback":
"None", "agent": "{''id'': UUID(''49785d1e-2cfa-4967-a503-da9f6fea46df''), ''role'':
''Senior Writer'', ''goal'': ''Write the best content about AI and AI agents.'',
''backstory'': \"You''re a senior writer, specialized in technology, software
engineering, AI and startups. You work as a freelancer and are now working on
writing content for a new customer.\", ''cache'': True, ''verbose'': False,
''max_rpm'': None, ''allow_delegation'': False, ''tools'': [], ''max_iter'':
25, ''agent_executor'': <crewai.agents.crew_agent_executor.CrewAgentExecutor
object at 0x12b7bbbf0>, ''llm'': <crewai.llms.providers.openai.completion.OpenAICompletion
object at 0x12b9903b0>, ''crew'': None, ''i18n'': {''prompt_file'': None}, ''cache_handler'':
{}, ''tools_handler'': <crewai.agents.tools_handler.ToolsHandler object at 0x12b631bb0>,
''tools_results'': [], ''max_tokens'': None, ''knowledge'': None, ''knowledge_sources'':
None, ''knowledge_storage'': None, ''security_config'': {''fingerprint'': {''metadata'':
{}}}, ''callbacks'': [], ''adapted_agent'': False, ''knowledge_config'': None,
''apps'': None, ''mcps'': None}", "context": "NOT_SPECIFIED", "async_execution":
"False", "output_json": "None", "output_pydantic": "None", "output_file": "None",
"create_directory": "True", "output": "None", "tools": "[]", "security_config":
"{''fingerprint'': {''metadata'': {}}}", "id": "UUID(''530f34b2-29dd-4ba8-a4e0-f68ba958890b'')",
"human_input": "False", "markdown": "False", "converter_cls": "None", "processed_by_agents":
"set()", "guardrail": "None", "guardrails": "None", "max_retries": "None", "guardrail_max_retries":
"3", "retry_count": "0", "start_time": "None", "end_time": "None", "allow_crewai_trigger_context":
"None"}], "agents": [{"id": "UUID(''7fadf38e-4090-4643-9fcf-bd24b07e05e1'')",
"role": "''Researcher''", "goal": "''Make the best research and analysis on
content about AI and AI agents''", "backstory": "\"You''re an expert researcher,
specialized in technology, software engineering, AI and startups. You work as
a freelancer and is now working on doing research and analysis for a new customer.\"",
"cache": "True", "verbose": "False", "max_rpm": "None", "allow_delegation":
"False", "tools": "[]", "max_iter": "25", "agent_executor": "<crewai.agents.crew_agent_executor.CrewAgentExecutor
object at 0x12b973fe0>", "llm": "<crewai.llms.providers.openai.completion.OpenAICompletion
object at 0x12b910290>", "crew": "None", "i18n": "{''prompt_file'': None}",
"cache_handler": "{}", "tools_handler": "<crewai.agents.tools_handler.ToolsHandler
object at 0x12b9934d0>", "tools_results": "[]", "max_tokens": "None", "knowledge":
"None", "knowledge_sources": "None", "knowledge_storage": "None", "security_config":
"{''fingerprint'': {''metadata'': {}}}", "callbacks": "[]", "adapted_agent":
"False", "knowledge_config": "None", "apps": "None", "mcps": "None"}, {"id":
"UUID(''49785d1e-2cfa-4967-a503-da9f6fea46df'')", "role": "''Senior Writer''",
"goal": "''Write the best content about AI and AI agents.''", "backstory": "\"You''re
a senior writer, specialized in technology, software engineering, AI and startups.
You work as a freelancer and are now working on writing content for a new customer.\"",
"cache": "True", "verbose": "False", "max_rpm": "None", "allow_delegation":
"False", "tools": "[]", "max_iter": "25", "agent_executor": "<crewai.agents.crew_agent_executor.CrewAgentExecutor
object at 0x12b7bbbf0>", "llm": "<crewai.llms.providers.openai.completion.OpenAICompletion
object at 0x12b9903b0>", "crew": "None", "i18n": "{''prompt_file'': None}",
"cache_handler": "{}", "tools_handler": "<crewai.agents.tools_handler.ToolsHandler
object at 0x12b631bb0>", "tools_results": "[]", "max_tokens": "None", "knowledge":
"None", "knowledge_sources": "None", "knowledge_storage": "None", "security_config":
"{''fingerprint'': {''metadata'': {}}}", "callbacks": "[]", "adapted_agent":
"False", "knowledge_config": "None", "apps": "None", "mcps": "None"}], "process":
"sequential", "verbose": false, "memory": false, "short_term_memory": null,
"long_term_memory": null, "entity_memory": null, "external_memory": null, "embedder":
null, "usage_metrics": null, "manager_llm": null, "manager_agent": null, "function_calling_llm":
null, "config": null, "id": "5a10d62b-4eab-47e7-b91f-6878b682acb5", "share_crew":
false, "step_callback": null, "task_callback": null, "before_kickoff_callbacks":
[], "after_kickoff_callbacks": [], "max_rpm": null, "prompt_file": null, "output_log_file":
null, "planning": false, "planning_llm": null, "task_execution_output_json_files":
null, "execution_logs": [], "knowledge_sources": null, "chat_llm": null, "knowledge":
null, "security_config": {"fingerprint": {"metadata": "{}"}}, "token_usage":
null, "tracing": false}}}], "batch_metadata": {"events_count": 4, "batch_sequence":
1, "is_final_batch": false}}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate, zstd
Connection:
- keep-alive
Content-Length:
- '15841'
Content-Type:
- application/json
User-Agent:
- CrewAI-CLI/1.0.0
X-Crewai-Version:
- 1.0.0
method: POST
uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches/fabb9e5b-b761-4b21-8514-cd4d3c937d09/events
response:
body:
string: '{"events_created":4,"ephemeral_trace_batch_id":"fe167fd8-d9b8-4faa-bb28-ed13f10e8d2e"}'
headers:
Connection:
- keep-alive
Content-Length:
- '86'
Content-Type:
- application/json; charset=utf-8
Date:
- Tue, 21 Oct 2025 14:26:38 GMT
cache-control:
- no-store
content-security-policy:
- 'default-src ''self'' *.app.crewai.com app.crewai.com; script-src ''self''
''unsafe-inline'' *.app.crewai.com app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts
https://www.gstatic.com https://run.pstmn.io https://apis.google.com https://apis.google.com/js/api.js
https://accounts.google.com https://accounts.google.com/gsi/client https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.min.css.map
https://*.google.com https://docs.google.com https://slides.google.com https://js.hs-scripts.com
https://js.sentry-cdn.com https://browser.sentry-cdn.com https://www.googletagmanager.com
https://js-na1.hs-scripts.com https://js.hubspot.com http://js-na1.hs-scripts.com
https://bat.bing.com https://cdn.amplitude.com https://cdn.segment.com https://d1d3n03t5zntha.cloudfront.net/
https://descriptusercontent.com https://edge.fullstory.com https://googleads.g.doubleclick.net
https://js.hs-analytics.net https://js.hs-banner.com https://js.hsadspixel.net
https://js.hscollectedforms.net https://js.usemessages.com https://snap.licdn.com
https://static.cloudflareinsights.com https://static.reo.dev https://www.google-analytics.com
https://share.descript.com/; style-src ''self'' ''unsafe-inline'' *.app.crewai.com
app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts; img-src ''self'' data:
*.app.crewai.com app.crewai.com https://zeus.tools.crewai.com https://dashboard.tools.crewai.com
https://cdn.jsdelivr.net https://forms.hsforms.com https://track.hubspot.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://www.google.com
https://www.google.com.br; font-src ''self'' data: *.app.crewai.com app.crewai.com;
connect-src ''self'' *.app.crewai.com app.crewai.com https://zeus.tools.crewai.com
https://connect.useparagon.com/ https://zeus.useparagon.com/* https://*.useparagon.com/*
https://run.pstmn.io https://connect.tools.crewai.com/ https://*.sentry.io
https://www.google-analytics.com https://edge.fullstory.com https://rs.fullstory.com
https://api.hubspot.com https://forms.hscollectedforms.net https://api.hubapi.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://google.com/pagead/form-data/16713662509
https://google.com/ccm/form-data/16713662509 https://www.google.com/ccm/collect
https://worker-actionkit.tools.crewai.com https://api.reo.dev; frame-src ''self''
*.app.crewai.com app.crewai.com https://connect.useparagon.com/ https://zeus.tools.crewai.com
https://zeus.useparagon.com/* https://connect.tools.crewai.com/ https://docs.google.com
https://drive.google.com https://slides.google.com https://accounts.google.com
https://*.google.com https://app.hubspot.com/ https://td.doubleclick.net https://www.googletagmanager.com/
https://www.youtube.com https://share.descript.com'
etag:
- W/"31af819a6bb47947663469e423c6c943"
expires:
- '0'
permissions-policy:
- camera=(), microphone=(self), geolocation=()
pragma:
- no-cache
referrer-policy:
- strict-origin-when-cross-origin
strict-transport-security:
- max-age=63072000; includeSubDomains
vary:
- Accept
x-content-type-options:
- nosniff
x-frame-options:
- SAMEORIGIN
x-permitted-cross-domain-policies:
- none
x-request-id:
- 7e1b0d5e-c0c3-4745-aebd-6f86f9292bdd
x-runtime:
- '0.141007'
x-xss-protection:
- 1; mode=block
status:
code: 200
message: OK
- request:
body: '{"status": "completed", "duration_ms": 856, "final_event_count": 4}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate, zstd
Connection:
- keep-alive
Content-Length:
- '67'
Content-Type:
- application/json
User-Agent:
- CrewAI-CLI/1.0.0
X-Crewai-Version:
- 1.0.0
method: PATCH
uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches/fabb9e5b-b761-4b21-8514-cd4d3c937d09/finalize
response:
body:
string: '{"id":"fe167fd8-d9b8-4faa-bb28-ed13f10e8d2e","ephemeral_trace_id":"fabb9e5b-b761-4b21-8514-cd4d3c937d09","execution_type":"flow","crew_name":null,"flow_name":"MyFlow","status":"completed","duration_ms":856,"crewai_version":"1.0.0","total_events":4,"execution_context":{"crew_name":null,"flow_name":"MyFlow","privacy_level":"standard","crewai_version":"1.0.0","crew_fingerprint":null},"created_at":"2025-10-21T14:26:38.410Z","updated_at":"2025-10-21T14:26:39.268Z","access_code":"TRACE-8b1efdc7b5","user_identifier":null}'
headers:
Connection:
- keep-alive
Content-Length:
- '520'
Content-Type:
- application/json; charset=utf-8
Date:
- Tue, 21 Oct 2025 14:26:39 GMT
cache-control:
- no-store
content-security-policy:
- 'default-src ''self'' *.app.crewai.com app.crewai.com; script-src ''self''
''unsafe-inline'' *.app.crewai.com app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts
https://www.gstatic.com https://run.pstmn.io https://apis.google.com https://apis.google.com/js/api.js
https://accounts.google.com https://accounts.google.com/gsi/client https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.min.css.map
https://*.google.com https://docs.google.com https://slides.google.com https://js.hs-scripts.com
https://js.sentry-cdn.com https://browser.sentry-cdn.com https://www.googletagmanager.com
https://js-na1.hs-scripts.com https://js.hubspot.com http://js-na1.hs-scripts.com
https://bat.bing.com https://cdn.amplitude.com https://cdn.segment.com https://d1d3n03t5zntha.cloudfront.net/
https://descriptusercontent.com https://edge.fullstory.com https://googleads.g.doubleclick.net
https://js.hs-analytics.net https://js.hs-banner.com https://js.hsadspixel.net
https://js.hscollectedforms.net https://js.usemessages.com https://snap.licdn.com
https://static.cloudflareinsights.com https://static.reo.dev https://www.google-analytics.com
https://share.descript.com/; style-src ''self'' ''unsafe-inline'' *.app.crewai.com
app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts; img-src ''self'' data:
*.app.crewai.com app.crewai.com https://zeus.tools.crewai.com https://dashboard.tools.crewai.com
https://cdn.jsdelivr.net https://forms.hsforms.com https://track.hubspot.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://www.google.com
https://www.google.com.br; font-src ''self'' data: *.app.crewai.com app.crewai.com;
connect-src ''self'' *.app.crewai.com app.crewai.com https://zeus.tools.crewai.com
https://connect.useparagon.com/ https://zeus.useparagon.com/* https://*.useparagon.com/*
https://run.pstmn.io https://connect.tools.crewai.com/ https://*.sentry.io
https://www.google-analytics.com https://edge.fullstory.com https://rs.fullstory.com
https://api.hubspot.com https://forms.hscollectedforms.net https://api.hubapi.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://google.com/pagead/form-data/16713662509
https://google.com/ccm/form-data/16713662509 https://www.google.com/ccm/collect
https://worker-actionkit.tools.crewai.com https://api.reo.dev; frame-src ''self''
*.app.crewai.com app.crewai.com https://connect.useparagon.com/ https://zeus.tools.crewai.com
https://zeus.useparagon.com/* https://connect.tools.crewai.com/ https://docs.google.com
https://drive.google.com https://slides.google.com https://accounts.google.com
https://*.google.com https://app.hubspot.com/ https://td.doubleclick.net https://www.googletagmanager.com/
https://www.youtube.com https://share.descript.com'
etag:
- W/"dfa32cefa9e9282e5792820119ebc7f4"
expires:
- '0'
permissions-policy:
- camera=(), microphone=(self), geolocation=()
pragma:
- no-cache
referrer-policy:
- strict-origin-when-cross-origin
strict-transport-security:
- max-age=63072000; includeSubDomains
vary:
- Accept
x-content-type-options:
- nosniff
x-frame-options:
- SAMEORIGIN
x-permitted-cross-domain-policies:
- none
x-request-id:
- 58bc91dd-a0a8-4299-be3a-16f998de8786
x-runtime:
- '0.061567'
x-xss-protection:
- 1; mode=block
status:
code: 200
message: OK
version: 1
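
The cassette above records the full ephemeral-trace lifecycle used by the tracing tests: create a batch, append events, then finalize it. The following is a minimal sketch of that sequence with the requests library, using only the endpoints, headers, and payload fields visible in the recorded interactions; the values are illustrative and this is not an official client (the recordings themselves were produced by the CrewAI CLI, not by this snippet).

import uuid
from datetime import datetime, timezone

import requests  # assumed available; purely illustrative of the recorded protocol

BASE = "https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches"
trace_id = str(uuid.uuid4())
headers = {
    "Content-Type": "application/json",
    "User-Agent": "CrewAI-CLI/1.0.0",
    "X-Crewai-Version": "1.0.0",
}

# 1. Create the batch (POST /batches) -- mirrors the first recorded request.
create = requests.post(BASE, headers=headers, json={
    "trace_id": trace_id,
    "ephemeral_trace_id": trace_id,
    "execution_type": "flow",
    "user_identifier": None,
    "execution_context": {"crew_fingerprint": None, "crew_name": None,
                          "flow_name": "MyFlow", "crewai_version": "1.0.0",
                          "privacy_level": "standard"},
    "execution_metadata": {"expected_duration_estimate": 300, "agent_count": 0,
                           "task_count": 0, "flow_method_count": 0,
                           "execution_started_at": datetime.now(timezone.utc).isoformat()},
})
print(create.json()["access_code"])  # e.g. "TRACE-8b1efdc7b5" in the recording

# 2. Append events (POST /batches/{trace_id}/events) -- event_data shape follows the cassette.
requests.post(f"{BASE}/{trace_id}/events", headers=headers, json={
    "events": [{"event_id": str(uuid.uuid4()),
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "type": "flow_started",
                "event_data": {"flow_name": "MyFlow", "inputs": None}}],
    "batch_metadata": {"events_count": 1, "batch_sequence": 1, "is_final_batch": False},
})

# 3. Finalize (PATCH /batches/{trace_id}/finalize) -- the server flips status to "completed".
requests.patch(f"{BASE}/{trace_id}/finalize", headers=headers,
               json={"status": "completed", "duration_ms": 856, "final_event_count": 1})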

@@ -0,0 +1,101 @@
interactions:
- request:
body: '{"trace_id": "487e5cbc-9483-4cda-9ceb-3a4eade22f9b", "execution_type":
"crew", "user_identifier": null, "execution_context": {"crew_fingerprint": null,
"crew_name": "crew", "flow_name": null, "crewai_version": "1.0.0", "privacy_level":
"standard"}, "execution_metadata": {"expected_duration_estimate": 300, "agent_count":
0, "task_count": 0, "flow_method_count": 0, "execution_started_at": "2025-10-21T14:25:56.570899+00:00"},
"ephemeral_trace_id": "487e5cbc-9483-4cda-9ceb-3a4eade22f9b"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate, zstd
Connection:
- keep-alive
Content-Length:
- '488'
Content-Type:
- application/json
User-Agent:
- CrewAI-CLI/1.0.0
X-Crewai-Version:
- 1.0.0
method: POST
uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches
response:
body:
string: '{"id":"75d59a82-2468-4e95-917d-8f00c3a22995","ephemeral_trace_id":"487e5cbc-9483-4cda-9ceb-3a4eade22f9b","execution_type":"crew","crew_name":"crew","flow_name":null,"status":"running","duration_ms":null,"crewai_version":"1.0.0","total_events":0,"execution_context":{"crew_fingerprint":null,"crew_name":"crew","flow_name":null,"crewai_version":"1.0.0","privacy_level":"standard"},"created_at":"2025-10-21T14:25:57.071Z","updated_at":"2025-10-21T14:25:57.071Z","access_code":"TRACE-5c2eb0158e","user_identifier":null}'
headers:
Connection:
- keep-alive
Content-Length:
- '515'
Content-Type:
- application/json; charset=utf-8
Date:
- Tue, 21 Oct 2025 14:25:57 GMT
cache-control:
- no-store
content-security-policy:
- 'default-src ''self'' *.app.crewai.com app.crewai.com; script-src ''self''
''unsafe-inline'' *.app.crewai.com app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts
https://www.gstatic.com https://run.pstmn.io https://apis.google.com https://apis.google.com/js/api.js
https://accounts.google.com https://accounts.google.com/gsi/client https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.min.css.map
https://*.google.com https://docs.google.com https://slides.google.com https://js.hs-scripts.com
https://js.sentry-cdn.com https://browser.sentry-cdn.com https://www.googletagmanager.com
https://js-na1.hs-scripts.com https://js.hubspot.com http://js-na1.hs-scripts.com
https://bat.bing.com https://cdn.amplitude.com https://cdn.segment.com https://d1d3n03t5zntha.cloudfront.net/
https://descriptusercontent.com https://edge.fullstory.com https://googleads.g.doubleclick.net
https://js.hs-analytics.net https://js.hs-banner.com https://js.hsadspixel.net
https://js.hscollectedforms.net https://js.usemessages.com https://snap.licdn.com
https://static.cloudflareinsights.com https://static.reo.dev https://www.google-analytics.com
https://share.descript.com/; style-src ''self'' ''unsafe-inline'' *.app.crewai.com
app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts; img-src ''self'' data:
*.app.crewai.com app.crewai.com https://zeus.tools.crewai.com https://dashboard.tools.crewai.com
https://cdn.jsdelivr.net https://forms.hsforms.com https://track.hubspot.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://www.google.com
https://www.google.com.br; font-src ''self'' data: *.app.crewai.com app.crewai.com;
connect-src ''self'' *.app.crewai.com app.crewai.com https://zeus.tools.crewai.com
https://connect.useparagon.com/ https://zeus.useparagon.com/* https://*.useparagon.com/*
https://run.pstmn.io https://connect.tools.crewai.com/ https://*.sentry.io
https://www.google-analytics.com https://edge.fullstory.com https://rs.fullstory.com
https://api.hubspot.com https://forms.hscollectedforms.net https://api.hubapi.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://google.com/pagead/form-data/16713662509
https://google.com/ccm/form-data/16713662509 https://www.google.com/ccm/collect
https://worker-actionkit.tools.crewai.com https://api.reo.dev; frame-src ''self''
*.app.crewai.com app.crewai.com https://connect.useparagon.com/ https://zeus.tools.crewai.com
https://zeus.useparagon.com/* https://connect.tools.crewai.com/ https://docs.google.com
https://drive.google.com https://slides.google.com https://accounts.google.com
https://*.google.com https://app.hubspot.com/ https://td.doubleclick.net https://www.googletagmanager.com/
https://www.youtube.com https://share.descript.com'
etag:
- W/"fd7de54de56e26a00d67fe4ea0f9bb62"
expires:
- '0'
permissions-policy:
- camera=(), microphone=(self), geolocation=()
pragma:
- no-cache
referrer-policy:
- strict-origin-when-cross-origin
strict-transport-security:
- max-age=63072000; includeSubDomains
vary:
- Accept
x-content-type-options:
- nosniff
x-frame-options:
- SAMEORIGIN
x-permitted-cross-domain-policies:
- none
x-request-id:
- 92521ff4-b389-4b4b-b2f6-9d40ee75cd4b
x-runtime:
- '0.071078'
x-xss-protection:
- 1; mode=block
status:
code: 201
message: Created
version: 1

@@ -6,10 +6,11 @@ from collections import defaultdict
from concurrent.futures import Future
from hashlib import md5
import re
from unittest import mock
from unittest.mock import ANY, MagicMock, patch
from unittest.mock import ANY, MagicMock, call, patch
from crewai.agent import Agent
from crewai.agents import CacheHandler
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.crew import Crew
from crewai.crews.crew_output import CrewOutput
from crewai.events.event_bus import crewai_event_bus
@@ -29,6 +30,7 @@ from crewai.events.types.memory_events import (
MemorySaveFailedEvent,
MemorySaveStartedEvent,
)
from crewai.flow import Flow, start
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
from crewai.llm import LLM
@@ -37,19 +39,21 @@ from crewai.memory.external.external_memory import ExternalMemory
from crewai.memory.long_term.long_term_memory import LongTermMemory
from crewai.memory.short_term.short_term_memory import ShortTermMemory
from crewai.process import Process
from crewai.project import CrewBase, agent, before_kickoff, crew, task
from crewai.task import Task
from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.output_format import OutputFormat
from crewai.tasks.task_output import TaskOutput
from crewai.tools import BaseTool, tool
from crewai.tools.agent_tools.add_image_tool import AddImageTool
from crewai.types.usage_metrics import UsageMetrics
from crewai.utilities.rpm_controller import RPMController
from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler
from crewai_tools import CodeInterpreterTool
from pydantic import BaseModel, Field
import pydantic_core
import pytest
from crewai.agents import CacheHandler
from crewai.flow import Flow, start
@pytest.fixture
def ceo():
@@ -575,10 +579,6 @@ def test_crew_with_delegating_agents(ceo, writer):
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_with_delegating_agents_should_not_override_task_tools(ceo, writer):
from pydantic import BaseModel, Field
from crewai.tools import BaseTool
class TestToolInput(BaseModel):
"""Input schema for TestTool."""
@@ -635,10 +635,6 @@ def test_crew_with_delegating_agents_should_not_override_task_tools(ceo, writer)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_with_delegating_agents_should_not_override_agent_tools(ceo, writer):
from pydantic import BaseModel, Field
from crewai.tools import BaseTool
class TestToolInput(BaseModel):
"""Input schema for TestTool."""
@@ -697,10 +693,6 @@ def test_crew_with_delegating_agents_should_not_override_agent_tools(ceo, writer
@pytest.mark.vcr(filter_headers=["authorization"])
def test_task_tools_override_agent_tools(researcher):
from pydantic import BaseModel, Field
from crewai.tools import BaseTool
class TestToolInput(BaseModel):
"""Input schema for TestTool."""
@@ -753,11 +745,6 @@ def test_task_tools_override_agent_tools_with_allow_delegation(researcher, write
"""
Test that task tools override agent tools while preserving delegation tools when allow_delegation=True
"""
from pydantic import BaseModel, Field
from crewai.tools import BaseTool
class TestToolInput(BaseModel):
query: str = Field(..., description="Query to process")
@@ -876,10 +863,6 @@ def test_crew_verbose_output(researcher, writer, capsys):
@pytest.mark.vcr(filter_headers=["authorization"])
def test_cache_hitting_between_agents(researcher, writer, ceo):
from unittest.mock import call, patch
from crewai.tools import tool
@tool
def multiplier(first_number: int, second_number: int) -> float:
"""Useful for when you need to multiply two numbers together."""
@@ -934,8 +917,6 @@ def test_cache_hitting_between_agents(researcher, writer, ceo):
@pytest.mark.vcr(filter_headers=["authorization"])
def test_api_calls_throttling(capsys):
from crewai.tools import tool
@tool
def get_final_answer() -> float:
"""Get the final answer but don't give it yet, just re-use this
@@ -1216,8 +1197,6 @@ async def test_crew_async_kickoff():
@pytest.mark.asyncio
@pytest.mark.vcr(filter_headers=["authorization"])
async def test_async_task_execution_call_count(researcher, writer):
from unittest.mock import MagicMock, patch
list_ideas = Task(
description="Give me a list of 5 interesting ideas to explore for na article, what makes them unique and interesting.",
expected_output="Bullet point list of 5 important events.",
@@ -1575,9 +1554,6 @@ def test_dont_set_agents_step_callback_if_already_set():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_function_calling_llm():
from crewai import LLM
from crewai.tools import tool
llm = LLM(model="gpt-4o-mini")
@tool
@@ -1607,8 +1583,6 @@ def test_crew_function_calling_llm():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_task_with_no_arguments():
from crewai.tools import tool
@tool
def return_data() -> str:
"Useful to get the sales related data"
@@ -1635,11 +1609,6 @@ def test_task_with_no_arguments():
def test_code_execution_flag_adds_code_tool_upon_kickoff():
try:
from crewai_tools import CodeInterpreterTool
except (ImportError, Exception):
pytest.skip("crewai_tools not available or cannot be imported")
# Mock Docker validation for the entire test
with patch.object(Agent, "_validate_docker_installation"):
programmer = Agent(
@@ -2061,8 +2030,6 @@ def test_crew_does_not_interpolate_without_inputs():
def test_task_callback_on_crew():
from unittest.mock import MagicMock, patch
researcher_agent = Agent(
role="Researcher",
goal="Make the best research and analysis on content about AI and AI agents",
@@ -2097,8 +2064,6 @@ def test_task_callback_on_crew():
def test_task_callback_both_on_task_and_crew():
from unittest.mock import MagicMock, patch
mock_callback_on_task = MagicMock()
mock_callback_on_crew = MagicMock()
@@ -2134,8 +2099,6 @@ def test_task_callback_both_on_task_and_crew():
def test_task_same_callback_both_on_task_and_crew():
from unittest.mock import MagicMock, patch
mock_callback = MagicMock()
researcher_agent = Agent(
@@ -2170,8 +2133,6 @@ def test_task_same_callback_both_on_task_and_crew():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_tools_with_custom_caching():
from crewai.tools import tool
@tool
def multiplcation_tool(first_number: int, second_number: int) -> int:
"""Useful for when you need to multiply two numbers together."""
@@ -2477,7 +2438,7 @@ def test_using_contextual_memory():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_memory_events_are_emitted():
events = defaultdict(list)
event_received = threading.Event()
condition = threading.Condition()
@crewai_event_bus.on(MemorySaveStartedEvent)
def handle_memory_save_started(source, event):
@@ -2509,8 +2470,9 @@ def test_memory_events_are_emitted():
@crewai_event_bus.on(MemoryRetrievalCompletedEvent)
def handle_memory_retrieval_completed(source, event):
events["MemoryRetrievalCompletedEvent"].append(event)
event_received.set()
with condition:
events["MemoryRetrievalCompletedEvent"].append(event)
condition.notify()
math_researcher = Agent(
role="Researcher",
@@ -2533,7 +2495,12 @@ def test_memory_events_are_emitted():
crew.kickoff()
assert event_received.wait(timeout=5), "Timeout waiting for memory events"
with condition:
success = condition.wait_for(
lambda: len(events["MemoryRetrievalCompletedEvent"]) >= 1, timeout=5
)
assert success, "Timeout waiting for memory events"
assert len(events["MemorySaveStartedEvent"]) == 3
assert len(events["MemorySaveCompletedEvent"]) == 3
assert len(events["MemorySaveFailedEvent"]) == 0
@@ -2797,6 +2764,7 @@ def test_crew_output_file_validation_failures():
Crew(agents=[agent], tasks=[task]).kickoff()
@pytest.mark.vcr(filter_headers=["authorization"])
def test_manager_agent(researcher, writer):
task = Task(
description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
@@ -2855,9 +2823,8 @@ def test_manager_agent_in_agents_raises_exception(researcher, writer):
)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_manager_agent_with_tools_raises_exception(researcher, writer):
from crewai.tools import tool
@tool
def testing_tool(first_number: int, second_number: int) -> int:
"""Useful for when you need to multiply two numbers together."""
@@ -2887,13 +2854,8 @@ def test_manager_agent_with_tools_raises_exception(researcher, writer):
crew.kickoff()
@patch("crewai.crew.Crew.kickoff")
@patch("crewai.crew.CrewTrainingHandler")
@patch("crewai.crew.TaskEvaluator")
@patch("crewai.crew.Crew.copy")
def test_crew_train_success(
copy_mock, task_evaluator, crew_training_handler, kickoff_mock, researcher, writer
):
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_train_success(researcher, writer):
task = Task(
description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
expected_output="5 bullet points with a paragraph for each idea.",
@@ -2905,79 +2867,39 @@ def test_crew_train_success(
tasks=[task],
)
# Create a mock for the copied crew
copy_mock.return_value = crew
received_events = []
lock = threading.Lock()
all_events_received = threading.Event()
condition = threading.Condition()
@crewai_event_bus.on(CrewTrainStartedEvent)
def on_crew_train_started(source, event: CrewTrainStartedEvent):
with lock:
with condition:
received_events.append(event)
if len(received_events) == 2:
all_events_received.set()
condition.notify()
@crewai_event_bus.on(CrewTrainCompletedEvent)
def on_crew_train_completed(source, event: CrewTrainCompletedEvent):
with lock:
with condition:
received_events.append(event)
if len(received_events) == 2:
all_events_received.set()
condition.notify()
crew.train(
n_iterations=2, inputs={"topic": "AI"}, filename="trained_agents_data.pkl"
)
# Mock human input to avoid blocking during training
with patch("builtins.input", return_value="Great work!"):
crew.train(
n_iterations=2, inputs={"topic": "AI"}, filename="trained_agents_data.pkl"
)
assert all_events_received.wait(timeout=5), "Timeout waiting for all train events"
# Ensure kickoff is called on the copied crew
kickoff_mock.assert_has_calls(
[mock.call(inputs={"topic": "AI"}), mock.call(inputs={"topic": "AI"})]
)
task_evaluator.assert_has_calls(
[
mock.call(researcher),
mock.call().evaluate_training_data(
training_data=crew_training_handler().load(),
agent_id=str(researcher.id),
),
mock.call().evaluate_training_data().model_dump(),
mock.call(writer),
mock.call().evaluate_training_data(
training_data=crew_training_handler().load(),
agent_id=str(writer.id),
),
mock.call().evaluate_training_data().model_dump(),
]
)
crew_training_handler.assert_any_call("training_data.pkl")
crew_training_handler().load.assert_called()
crew_training_handler.assert_any_call("trained_agents_data.pkl")
crew_training_handler().load.assert_called()
crew_training_handler().save_trained_data.assert_has_calls(
[
mock.call(
agent_id="Researcher",
trained_data=task_evaluator().evaluate_training_data().model_dump(),
),
mock.call(
agent_id="Senior Writer",
trained_data=task_evaluator().evaluate_training_data().model_dump(),
),
]
)
with condition:
success = condition.wait_for(lambda: len(received_events) == 2, timeout=5)
assert success, "Timeout waiting for all train events"
assert len(received_events) == 2
assert isinstance(received_events[0], CrewTrainStartedEvent)
assert isinstance(received_events[1], CrewTrainCompletedEvent)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_train_error(researcher, writer):
task = Task(
description="Come up with a list of 5 interesting ideas to explore for an article",
@@ -3277,6 +3199,7 @@ def test_replay_with_context():
assert crew.tasks[1].context[0].output.raw == "context raw output"
@pytest.mark.vcr(filter_headers=["authorization"])
def test_replay_with_context_set_to_nullable():
agent = Agent(role="test_agent", backstory="Test Description", goal="Test Goal")
task1 = Task(
@@ -3716,10 +3639,8 @@ def test_conditional_should_execute(researcher, writer):
assert mock_execute_sync.call_count == 2
@mock.patch("crewai.crew.CrewEvaluator")
@mock.patch("crewai.crew.Crew.copy")
@mock.patch("crewai.crew.Crew.kickoff")
def test_crew_testing_function(kickoff_mock, copy_mock, crew_evaluator, researcher):
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_testing_function(researcher):
task = Task(
description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
expected_output="5 bullet points with a paragraph for each idea.",
@@ -3731,48 +3652,32 @@ def test_crew_testing_function(kickoff_mock, copy_mock, crew_evaluator, research
tasks=[task],
)
# Create a mock for the copied crew
copy_mock.return_value = crew
n_iterations = 2
llm_instance = LLM("gpt-4o-mini")
received_events = []
lock = threading.Lock()
all_events_received = threading.Event()
condition = threading.Condition()
@crewai_event_bus.on(CrewTestStartedEvent)
def on_crew_test_started(source, event: CrewTestStartedEvent):
with lock:
with condition:
received_events.append(event)
if len(received_events) == 2:
all_events_received.set()
condition.notify()
@crewai_event_bus.on(CrewTestCompletedEvent)
def on_crew_test_completed(source, event: CrewTestCompletedEvent):
with lock:
with condition:
received_events.append(event)
if len(received_events) == 2:
all_events_received.set()
condition.notify()
crew.test(n_iterations, llm_instance, inputs={"topic": "AI"})
assert all_events_received.wait(timeout=5), "Timeout waiting for all test events"
# Ensure kickoff is called on the copied crew
kickoff_mock.assert_has_calls(
[mock.call(inputs={"topic": "AI"}), mock.call(inputs={"topic": "AI"})]
)
crew_evaluator.assert_has_calls(
[
mock.call(crew, llm_instance),
mock.call().set_iteration(1),
mock.call().set_iteration(2),
mock.call().print_crew_evaluation_result(),
]
)
with condition:
success = condition.wait_for(lambda: len(received_events) == 2, timeout=5)
assert success, "Timeout waiting for all test events"
assert len(received_events) == 2
assert isinstance(received_events[0], CrewTestStartedEvent)
assert isinstance(received_events[1], CrewTestCompletedEvent)
@@ -3843,15 +3748,11 @@ def test_fetch_inputs():
)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_task_tools_preserve_code_execution_tools():
"""
Test that task tools don't override code execution tools when allow_code_execution=True
"""
from crewai_tools import CodeInterpreterTool
from pydantic import BaseModel, Field
from crewai.tools import BaseTool
class TestToolInput(BaseModel):
"""Input schema for TestTool."""
@@ -3865,23 +3766,25 @@ def test_task_tools_preserve_code_execution_tools():
def _run(self, query: str) -> str:
return f"Processed: {query}"
# Create a programmer agent with code execution enabled
programmer = Agent(
role="Programmer",
goal="Write code to solve problems.",
backstory="You're a programmer who loves to solve problems with code.",
allow_delegation=True,
allow_code_execution=True,
)
# Mock Docker validation for the entire test
with patch.object(Agent, "_validate_docker_installation"):
# Create a programmer agent with code execution enabled
programmer = Agent(
role="Programmer",
goal="Write code to solve problems.",
backstory="You're a programmer who loves to solve problems with code.",
allow_delegation=True,
allow_code_execution=True,
)
# Create a code reviewer agent
reviewer = Agent(
role="Code Reviewer",
goal="Review code for bugs and improvements",
backstory="You're an experienced code reviewer who ensures code quality and best practices.",
allow_delegation=True,
allow_code_execution=True,
)
# Create a code reviewer agent
reviewer = Agent(
role="Code Reviewer",
goal="Review code for bugs and improvements",
backstory="You're an experienced code reviewer who ensures code quality and best practices.",
allow_delegation=True,
allow_code_execution=True,
)
# Create a task with its own tools
task = Task(
@@ -3932,8 +3835,6 @@ def test_multimodal_flag_adds_multimodal_tools():
"""
Test that an agent with multimodal=True automatically has multimodal tools added to the task execution.
"""
from crewai.tools.agent_tools.add_image_tool import AddImageTool
# Create an agent that supports multimodal
multimodal_agent = Agent(
role="Multimodal Analyst",
@@ -4247,13 +4148,8 @@ def test_crew_guardrail_feedback_in_context():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_before_kickoff_callback():
from crewai.project import CrewBase
@CrewBase
class TestCrewClass:
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.project import CrewBase, agent, before_kickoff, crew, task
agents: list[BaseAgent]
tasks: list[Task]
@@ -4309,12 +4205,8 @@ def test_before_kickoff_callback():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_before_kickoff_without_inputs():
from crewai.project import CrewBase, agent, before_kickoff, task
@CrewBase
class TestCrewClass:
from crewai.project import crew
agents_config = None
tasks_config = None
@@ -4518,6 +4410,7 @@ def test_sets_parent_flow_when_outside_flow(researcher, writer):
assert crew.parent_flow is None
@pytest.mark.vcr(filter_headers=["authorization"])
def test_sets_parent_flow_when_inside_flow(researcher, writer):
class MyFlow(Flow):
@start()
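
The memory, train, and test hunks above replace threading.Event flags (plus a separate lock) with a single threading.Condition guarded by wait_for, so the shared list is mutated and the predicate is checked under one lock. A minimal, standalone sketch of that pattern with generic names, not tied to crewai:

import threading

received: list[str] = []
condition = threading.Condition()

def handler(event: str) -> None:
    # Producer side: mutate shared state and wake waiters while holding the lock.
    with condition:
        received.append(event)
        condition.notify()

worker = threading.Thread(target=handler, args=("flow_finished",))
worker.start()

# Consumer side: wait_for re-checks the predicate on every notify and on timeout.
with condition:
    ok = condition.wait_for(lambda: len(received) >= 1, timeout=5)

worker.join()
assert ok, "Timeout waiting for events"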

@@ -0,0 +1,145 @@
"""Test agent utility functions."""
import pytest
from unittest.mock import MagicMock, patch
from crewai.agent import Agent
from crewai.utilities.agent_utils import handle_context_length
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededError,
)
from crewai.utilities.i18n import I18N
from crewai.utilities.printer import Printer
def test_handle_context_length_raises_exception_when_respect_context_window_false():
"""Test that handle_context_length raises LLMContextLengthExceededError when respect_context_window is False."""
# Create mocks for dependencies
printer = Printer()
i18n = I18N()
# Create an agent just for its LLM
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
respect_context_window=False,
)
llm = agent.llm
# Create test messages
messages = [
{
"role": "user",
"content": "This is a test message that would exceed context length",
}
]
# Set up test parameters
respect_context_window = False
callbacks = []
with pytest.raises(LLMContextLengthExceededError) as excinfo:
handle_context_length(
respect_context_window=respect_context_window,
printer=printer,
messages=messages,
llm=llm,
callbacks=callbacks,
i18n=i18n,
)
assert "Context length exceeded" in str(excinfo.value)
assert "user opted not to summarize" in str(excinfo.value)
def test_handle_context_length_summarizes_when_respect_context_window_true():
"""Test that handle_context_length calls summarize_messages when respect_context_window is True."""
# Create mocks for dependencies
printer = Printer()
i18n = I18N()
# Create an agent just for its LLM
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
respect_context_window=True,
)
llm = agent.llm
# Create test messages
messages = [
{
"role": "user",
"content": "This is a test message that would exceed context length",
}
]
# Set up test parameters
respect_context_window = True
callbacks = []
with patch("crewai.utilities.agent_utils.summarize_messages") as mock_summarize:
handle_context_length(
respect_context_window=respect_context_window,
printer=printer,
messages=messages,
llm=llm,
callbacks=callbacks,
i18n=i18n,
)
mock_summarize.assert_called_once_with(
messages=messages, llm=llm, callbacks=callbacks, i18n=i18n
)
def test_handle_context_length_does_not_raise_system_exit():
"""Test that handle_context_length does NOT raise SystemExit (regression test for issue #3774)."""
# Create mocks for dependencies
printer = Printer()
i18n = I18N()
# Create an agent just for its LLM
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
respect_context_window=False,
)
llm = agent.llm
# Create test messages
messages = [
{
"role": "user",
"content": "This is a test message that would exceed context length",
}
]
# Set up test parameters
respect_context_window = False
callbacks = []
with pytest.raises(Exception) as excinfo:
handle_context_length(
respect_context_window=respect_context_window,
printer=printer,
messages=messages,
llm=llm,
callbacks=callbacks,
i18n=i18n,
)
assert not isinstance(excinfo.value, SystemExit), (
"handle_context_length should not raise SystemExit. "
"It should raise LLMContextLengthExceededError instead."
)
assert isinstance(excinfo.value, LLMContextLengthExceededError), (
f"Expected LLMContextLengthExceededError but got {type(excinfo.value).__name__}"
)
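
Because handle_context_length raises LLMContextLengthExceededError instead of terminating the interpreter, callers can trap it and degrade gracefully. A minimal sketch of that, reusing the imports and call signature exercised by the tests above; the truncation fallback at the end is illustrative application code, not crewai behavior:

from crewai.agent import Agent
from crewai.utilities.agent_utils import handle_context_length
from crewai.utilities.exceptions.context_window_exceeding_exception import (
    LLMContextLengthExceededError,
)
from crewai.utilities.i18n import I18N
from crewai.utilities.printer import Printer

agent = Agent(role="test role", goal="test goal", backstory="test backstory",
              respect_context_window=False)
messages = [{"role": "user", "content": "A very long prompt..."}]

try:
    handle_context_length(
        respect_context_window=False,
        printer=Printer(),
        messages=messages,
        llm=agent.llm,
        callbacks=[],
        i18n=I18N(),
    )
except LLMContextLengthExceededError:
    # Application-level fallback (illustrative): keep only the most recent message
    # instead of letting the whole process exit.
    messages = messages[-1:]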

@@ -22,7 +22,7 @@ import pytest
@pytest.fixture(scope="module")
def vcr_config(request) -> dict:
def vcr_config(request: pytest.FixtureRequest) -> dict[str, str]:
return {
"cassette_library_dir": os.path.join(os.path.dirname(__file__), "cassettes"),
}
@@ -65,7 +65,7 @@ class CustomConverter(Converter):
# Fixtures
@pytest.fixture
def mock_agent():
def mock_agent() -> Mock:
agent = Mock()
agent.function_calling_llm = None
agent.llm = Mock()
@@ -73,7 +73,7 @@ def mock_agent():
# Tests for convert_to_model
def test_convert_to_model_with_valid_json():
def test_convert_to_model_with_valid_json() -> None:
result = '{"name": "John", "age": 30}'
output = convert_to_model(result, SimpleModel, None, None)
assert isinstance(output, SimpleModel)
@@ -81,7 +81,7 @@ def test_convert_to_model_with_valid_json():
assert output.age == 30
def test_convert_to_model_with_invalid_json():
def test_convert_to_model_with_invalid_json() -> None:
result = '{"name": "John", "age": "thirty"}'
with patch("crewai.utilities.converter.handle_partial_json") as mock_handle:
mock_handle.return_value = "Fallback result"
@@ -89,13 +89,13 @@ def test_convert_to_model_with_invalid_json():
assert output == "Fallback result"
def test_convert_to_model_with_no_model():
def test_convert_to_model_with_no_model() -> None:
result = "Plain text"
output = convert_to_model(result, None, None, None)
assert output == "Plain text"
def test_convert_to_model_with_special_characters():
def test_convert_to_model_with_special_characters() -> None:
json_string_test = """
{
"responses": [
@@ -114,7 +114,7 @@ def test_convert_to_model_with_special_characters():
)
def test_convert_to_model_with_escaped_special_characters():
def test_convert_to_model_with_escaped_special_characters() -> None:
json_string_test = json.dumps(
{
"responses": [
@@ -133,7 +133,7 @@ def test_convert_to_model_with_escaped_special_characters():
)
def test_convert_to_model_with_multiple_special_characters():
def test_convert_to_model_with_multiple_special_characters() -> None:
json_string_test = """
{
"responses": [
@@ -153,7 +153,7 @@ def test_convert_to_model_with_multiple_special_characters():
# Tests for validate_model
def test_validate_model_pydantic_output():
def test_validate_model_pydantic_output() -> None:
result = '{"name": "Alice", "age": 25}'
output = validate_model(result, SimpleModel, False)
assert isinstance(output, SimpleModel)
@@ -161,7 +161,7 @@ def test_validate_model_pydantic_output():
assert output.age == 25
def test_validate_model_json_output():
def test_validate_model_json_output() -> None:
result = '{"name": "Bob", "age": 40}'
output = validate_model(result, SimpleModel, True)
assert isinstance(output, dict)
@@ -169,7 +169,7 @@ def test_validate_model_json_output():
# Tests for handle_partial_json
def test_handle_partial_json_with_valid_partial():
def test_handle_partial_json_with_valid_partial() -> None:
result = 'Some text {"name": "Charlie", "age": 35} more text'
output = handle_partial_json(result, SimpleModel, False, None)
assert isinstance(output, SimpleModel)
@@ -177,7 +177,7 @@ def test_handle_partial_json_with_valid_partial():
assert output.age == 35
def test_handle_partial_json_with_invalid_partial(mock_agent):
def test_handle_partial_json_with_invalid_partial(mock_agent: Mock) -> None:
result = "No valid JSON here"
with patch("crewai.utilities.converter.convert_with_instructions") as mock_convert:
mock_convert.return_value = "Converted result"
@@ -189,8 +189,8 @@ def test_handle_partial_json_with_invalid_partial(mock_agent):
@patch("crewai.utilities.converter.create_converter")
@patch("crewai.utilities.converter.get_conversion_instructions")
def test_convert_with_instructions_success(
mock_get_instructions, mock_create_converter, mock_agent
):
mock_get_instructions: Mock, mock_create_converter: Mock, mock_agent: Mock
) -> None:
mock_get_instructions.return_value = "Instructions"
mock_converter = Mock()
mock_converter.to_pydantic.return_value = SimpleModel(name="David", age=50)
@@ -207,8 +207,8 @@ def test_convert_with_instructions_success(
@patch("crewai.utilities.converter.create_converter")
@patch("crewai.utilities.converter.get_conversion_instructions")
def test_convert_with_instructions_failure(
mock_get_instructions, mock_create_converter, mock_agent
):
mock_get_instructions: Mock, mock_create_converter: Mock, mock_agent: Mock
) -> None:
mock_get_instructions.return_value = "Instructions"
mock_converter = Mock()
mock_converter.to_pydantic.return_value = ConverterError("Conversion failed")
@@ -222,7 +222,7 @@ def test_convert_with_instructions_failure(
# Tests for get_conversion_instructions
def test_get_conversion_instructions_gpt():
def test_get_conversion_instructions_gpt() -> None:
llm = LLM(model="gpt-4o-mini")
with patch.object(LLM, "supports_function_calling") as supports_function_calling:
supports_function_calling.return_value = True
@@ -237,7 +237,7 @@ def test_get_conversion_instructions_gpt():
assert instructions == expected_instructions
def test_get_conversion_instructions_non_gpt():
def test_get_conversion_instructions_non_gpt() -> None:
llm = LLM(model="ollama/llama3.1", base_url="http://localhost:11434")
with patch.object(LLM, "supports_function_calling", return_value=False):
instructions = get_conversion_instructions(SimpleModel, llm)
@@ -246,17 +246,17 @@ def test_get_conversion_instructions_non_gpt():
# Tests for is_gpt
def test_supports_function_calling_true():
def test_supports_function_calling_true() -> None:
llm = LLM(model="gpt-4o")
assert llm.supports_function_calling() is True
def test_supports_function_calling_false():
def test_supports_function_calling_false() -> None:
llm = LLM(model="non-existent-model", is_litellm=True)
assert llm.supports_function_calling() is False
def test_create_converter_with_mock_agent():
def test_create_converter_with_mock_agent() -> None:
mock_agent = MagicMock()
mock_agent.get_output_converter.return_value = MagicMock(spec=Converter)
@@ -272,7 +272,7 @@ def test_create_converter_with_mock_agent():
mock_agent.get_output_converter.assert_called_once()
def test_create_converter_with_custom_converter():
def test_create_converter_with_custom_converter() -> None:
converter = create_converter(
converter_cls=CustomConverter,
llm=LLM(model="gpt-4o-mini"),
@@ -284,7 +284,7 @@ def test_create_converter_with_custom_converter():
assert isinstance(converter, CustomConverter)
def test_create_converter_fails_without_agent_or_converter_cls():
def test_create_converter_fails_without_agent_or_converter_cls() -> None:
with pytest.raises(
ValueError, match="Either agent or converter_cls must be provided"
):
@@ -293,13 +293,13 @@ def test_create_converter_fails_without_agent_or_converter_cls():
)
def test_generate_model_description_simple_model():
def test_generate_model_description_simple_model() -> None:
description = generate_model_description(SimpleModel)
expected_description = '{\n "name": str,\n "age": int\n}'
assert description == expected_description
def test_generate_model_description_nested_model():
def test_generate_model_description_nested_model() -> None:
description = generate_model_description(NestedModel)
expected_description = (
'{\n "id": int,\n "data": {\n "name": str,\n "age": int\n}\n}'
@@ -307,7 +307,7 @@ def test_generate_model_description_nested_model():
assert description == expected_description
def test_generate_model_description_optional_field():
def test_generate_model_description_optional_field() -> None:
class ModelWithOptionalField(BaseModel):
name: str
age: int | None
@@ -317,7 +317,7 @@ def test_generate_model_description_optional_field():
assert description == expected_description
def test_generate_model_description_list_field():
def test_generate_model_description_list_field() -> None:
class ModelWithListField(BaseModel):
items: list[int]
@@ -326,7 +326,7 @@ def test_generate_model_description_list_field():
assert description == expected_description
def test_generate_model_description_dict_field():
def test_generate_model_description_dict_field() -> None:
class ModelWithDictField(BaseModel):
attributes: dict[str, int]
@@ -336,7 +336,7 @@ def test_generate_model_description_dict_field():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_convert_with_instructions():
def test_convert_with_instructions() -> None:
llm = LLM(model="gpt-4o-mini")
sample_text = "Name: Alice, Age: 30"
@@ -358,7 +358,7 @@ def test_convert_with_instructions():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_converter_with_llama3_2_model():
def test_converter_with_llama3_2_model() -> None:
llm = LLM(model="openrouter/meta-llama/llama-3.2-3b-instruct")
sample_text = "Name: Alice Llama, Age: 30"
instructions = get_conversion_instructions(SimpleModel, llm)
@@ -375,7 +375,7 @@ def test_converter_with_llama3_2_model():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_converter_with_llama3_1_model():
def test_converter_with_llama3_1_model() -> None:
llm = LLM(model="ollama/llama3.1", base_url="http://localhost:11434")
sample_text = "Name: Alice Llama, Age: 30"
instructions = get_conversion_instructions(SimpleModel, llm)
@@ -392,7 +392,7 @@ def test_converter_with_llama3_1_model():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_converter_with_nested_model():
def test_converter_with_nested_model() -> None:
llm = LLM(model="gpt-4o-mini")
sample_text = "Name: John Doe\nAge: 30\nAddress: 123 Main St, Anytown, 12345"
@@ -416,7 +416,7 @@ def test_converter_with_nested_model():
# Tests for error handling
def test_converter_error_handling():
def test_converter_error_handling() -> None:
llm = Mock(spec=LLM)
llm.supports_function_calling.return_value = False
llm.call.return_value = "Invalid JSON"
@@ -437,7 +437,7 @@ def test_converter_error_handling():
# Tests for retry logic
def test_converter_retry_logic():
def test_converter_retry_logic() -> None:
llm = Mock(spec=LLM)
llm.supports_function_calling.return_value = False
llm.call.side_effect = [
@@ -465,7 +465,7 @@ def test_converter_retry_logic():
# Tests for optional fields
def test_converter_with_optional_fields():
def test_converter_with_optional_fields() -> None:
class OptionalModel(BaseModel):
name: str
age: int | None
@@ -492,7 +492,7 @@ def test_converter_with_optional_fields():
# Tests for list fields
def test_converter_with_list_field():
def test_converter_with_list_field() -> None:
class ListModel(BaseModel):
items: list[int]
@@ -515,7 +515,7 @@ def test_converter_with_list_field():
assert output.items == [1, 2, 3]
def test_converter_with_enum():
def test_converter_with_enum() -> None:
class Color(Enum):
RED = "red"
GREEN = "green"
@@ -546,7 +546,7 @@ def test_converter_with_enum():
# Tests for ambiguous input
def test_converter_with_ambiguous_input():
def test_converter_with_ambiguous_input() -> None:
llm = Mock(spec=LLM)
llm.supports_function_calling.return_value = False
llm.call.return_value = '{"name": "Charlie", "age": "Not an age"}'
@@ -567,7 +567,7 @@ def test_converter_with_ambiguous_input():
# Tests for function calling support
def test_converter_with_function_calling():
def test_converter_with_function_calling() -> None:
llm = Mock(spec=LLM)
llm.supports_function_calling.return_value = True
@@ -580,20 +580,359 @@ def test_converter_with_function_calling():
model=SimpleModel,
instructions="Convert this text.",
)
converter._create_instructor = Mock(return_value=instructor)
with patch.object(converter, '_create_instructor', return_value=instructor):
output = converter.to_pydantic()
output = converter.to_pydantic()
assert isinstance(output, SimpleModel)
assert output.name == "Eve"
assert output.age == 35
assert isinstance(output, SimpleModel)
assert output.name == "Eve"
assert output.age == 35
instructor.to_pydantic.assert_called_once()
def test_generate_model_description_union_field():
def test_generate_model_description_union_field() -> None:
class UnionModel(BaseModel):
field: int | str | None
description = generate_model_description(UnionModel)
expected_description = '{\n "field": int | str | None\n}'
assert description == expected_description
def test_internal_instructor_with_openai_provider() -> None:
"""Test InternalInstructor with OpenAI provider using registry pattern."""
from crewai.utilities.internal_instructor import InternalInstructor
# Mock LLM with OpenAI provider
mock_llm = Mock()
mock_llm.is_litellm = False
mock_llm.model = "gpt-4o"
mock_llm.provider = "openai"
# Mock instructor client
mock_client = Mock()
mock_client.chat.completions.create.return_value = SimpleModel(name="Test", age=25)
# Patch the client-creation method so no real instructor client is built
with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
mock_create_client.return_value = mock_client
instructor = InternalInstructor(
content="Test content",
model=SimpleModel,
llm=mock_llm
)
result = instructor.to_pydantic()
assert isinstance(result, SimpleModel)
assert result.name == "Test"
assert result.age == 25
# Verify the method was called with the correct LLM
mock_create_client.assert_called_once()
def test_internal_instructor_with_anthropic_provider() -> None:
"""Test InternalInstructor with Anthropic provider using registry pattern."""
from crewai.utilities.internal_instructor import InternalInstructor
# Mock LLM with Anthropic provider
mock_llm = Mock()
mock_llm.is_litellm = False
mock_llm.model = "claude-3-5-sonnet-20241022"
mock_llm.provider = "anthropic"
# Mock instructor client
mock_client = Mock()
mock_client.chat.completions.create.return_value = SimpleModel(name="Bob", age=25)
# Patch the client-creation method so no real instructor client is built
with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
mock_create_client.return_value = mock_client
instructor = InternalInstructor(
content="Name: Bob, Age: 25",
model=SimpleModel,
llm=mock_llm
)
result = instructor.to_pydantic()
assert isinstance(result, SimpleModel)
assert result.name == "Bob"
assert result.age == 25
# Verify the method was called with the correct LLM
mock_create_client.assert_called_once()
def test_factory_pattern_registry_extensibility() -> None:
    """Test that the factory pattern registry works with different providers."""
    from crewai.utilities.internal_instructor import InternalInstructor
    # Same scenario exercised across five providers; each case is
    # (provider, model, expected name, expected age).
    cases = [
        ("openai", "gpt-4o-mini", "Alice", 30),
        ("anthropic", "claude-3-5-sonnet-20241022", "Bob", 25),
        ("bedrock", "claude-3-5-sonnet-20241022", "Charlie", 35),
        ("google", "gemini-1.5-flash", "Diana", 28),
        ("azure", "gpt-4o", "Eve", 32),
    ]
    for provider, model, name, age in cases:
        mock_llm = Mock()
        mock_llm.is_litellm = False
        mock_llm.model = model
        mock_llm.provider = provider
        mock_client = Mock()
        mock_client.chat.completions.create.return_value = SimpleModel(name=name, age=age)
        with patch.object(
            InternalInstructor, "_create_instructor_client", return_value=mock_client
        ):
            instructor = InternalInstructor(
                content=f"Name: {name}, Age: {age}",
                model=SimpleModel,
                llm=mock_llm,
            )
            result = instructor.to_pydantic()
        assert isinstance(result, SimpleModel)
        assert result.name == name
        assert result.age == age
def test_internal_instructor_with_bedrock_provider() -> None:
"""Test InternalInstructor with AWS Bedrock provider using registry pattern."""
from crewai.utilities.internal_instructor import InternalInstructor
# Mock LLM with Bedrock provider
mock_llm = Mock()
mock_llm.is_litellm = False
mock_llm.model = "claude-3-5-sonnet-20241022"
mock_llm.provider = "bedrock"
# Mock instructor client
mock_client = Mock()
mock_client.chat.completions.create.return_value = SimpleModel(name="Charlie", age=35)
# Patch the client-creation method so no real instructor client is built
with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
mock_create_client.return_value = mock_client
instructor = InternalInstructor(
content="Name: Charlie, Age: 35",
model=SimpleModel,
llm=mock_llm
)
result = instructor.to_pydantic()
assert isinstance(result, SimpleModel)
assert result.name == "Charlie"
assert result.age == 35
# Verify the method was called with the correct LLM
mock_create_client.assert_called_once()
def test_internal_instructor_with_gemini_provider() -> None:
"""Test InternalInstructor with Google Gemini provider using registry pattern."""
from crewai.utilities.internal_instructor import InternalInstructor
# Mock LLM with Gemini provider
mock_llm = Mock()
mock_llm.is_litellm = False
mock_llm.model = "gemini-1.5-flash"
mock_llm.provider = "google"
# Mock instructor client
mock_client = Mock()
mock_client.chat.completions.create.return_value = SimpleModel(name="Diana", age=28)
# Patch the client-creation method so no real instructor client is built
with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
mock_create_client.return_value = mock_client
instructor = InternalInstructor(
content="Name: Diana, Age: 28",
model=SimpleModel,
llm=mock_llm
)
result = instructor.to_pydantic()
assert isinstance(result, SimpleModel)
assert result.name == "Diana"
assert result.age == 28
# Verify the method was called with the correct LLM
mock_create_client.assert_called_once()
def test_internal_instructor_with_azure_provider() -> None:
"""Test InternalInstructor with Azure OpenAI provider using registry pattern."""
from crewai.utilities.internal_instructor import InternalInstructor
# Mock LLM with Azure provider
mock_llm = Mock()
mock_llm.is_litellm = False
mock_llm.model = "gpt-4o"
mock_llm.provider = "azure"
# Mock instructor client
mock_client = Mock()
mock_client.chat.completions.create.return_value = SimpleModel(name="Eve", age=32)
# Patch the client-creation method so no real instructor client is built
with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
mock_create_client.return_value = mock_client
instructor = InternalInstructor(
content="Name: Eve, Age: 32",
model=SimpleModel,
llm=mock_llm
)
result = instructor.to_pydantic()
assert isinstance(result, SimpleModel)
assert result.name == "Eve"
assert result.age == 32
# Verify the method was called with the correct LLM
mock_create_client.assert_called_once()
def test_internal_instructor_unsupported_provider() -> None:
"""Test InternalInstructor with unsupported provider raises appropriate error."""
from crewai.utilities.internal_instructor import InternalInstructor
# Mock LLM with unsupported provider
mock_llm = Mock()
mock_llm.is_litellm = False
mock_llm.model = "unsupported-model"
mock_llm.provider = "unsupported"
# Mock the _create_instructor_client method to raise an error for unsupported providers
with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
mock_create_client.side_effect = Exception("Unsupported provider: unsupported")
# This should raise an error when trying to create the instructor client
with pytest.raises(Exception) as exc_info:
instructor = InternalInstructor(
content="Test content",
model=SimpleModel,
llm=mock_llm
)
instructor.to_pydantic()
# Verify it's the expected error
assert "Unsupported provider" in str(exc_info.value)
def test_internal_instructor_real_unsupported_provider() -> None:
"""Test InternalInstructor with real unsupported provider using actual instructor library."""
from crewai.utilities.internal_instructor import InternalInstructor
# Mock LLM with unsupported provider that would actually fail with instructor
mock_llm = Mock()
mock_llm.is_litellm = False
mock_llm.model = "unsupported-model"
mock_llm.provider = "unsupported"
# This should raise a ConfigurationError from the real instructor library
with pytest.raises(Exception) as exc_info:
instructor = InternalInstructor(
content="Test content",
model=SimpleModel,
llm=mock_llm
)
instructor.to_pydantic()
# Verify it's a configuration error about unsupported provider
assert "Unsupported provider" in str(exc_info.value) or "unsupported" in str(exc_info.value).lower()

View File

@@ -1,3 +1,3 @@
"""CrewAI development tools."""
__version__ = "1.0.0"
__version__ = "1.1.0"

View File

@@ -124,7 +124,7 @@ exclude = [
"lib/crewai-tools/tests/",
"lib/crewai/src/crewai/experimental/a2a"
]
plugins = ["pydantic.mypy"]
plugins = ["pydantic.mypy", "crewai.mypy"]
[tool.bandit]
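Adding "crewai.mypy" next to "pydantic.mypy" makes mypy import that module as a plugin. The crewai.mypy module itself is not shown in this diff; as a hedged illustration, a mypy plugin module only needs a Plugin subclass and a module-level plugin() entry point that mypy calls with its version string:

from mypy.plugin import Plugin

class CrewAISketchPlugin(Plugin):
    """Illustrative placeholder; real hooks (e.g. get_class_decorator_hook) would be overridden here."""

def plugin(version: str) -> type[Plugin]:
    # mypy reads the plugins list from pyproject.toml, imports the module, and calls this function.
    return CrewAISketchPlugin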

uv.lock (generated, 7896 changed lines)

File diff suppressed because it is too large.