Mirror of https://github.com/crewAIInc/crewAI.git (synced 2025-12-24 00:08:29 +00:00)

Compare commits: devin/1761... → 1.2.0 (15 commits)
| SHA1 |
|---|
| a83c57a2f2 |
| 08e15ab267 |
| 9728388ea7 |
| 4371cf5690 |
| d28daa26cd |
| a850813f2b |
| 5944a39629 |
| c594859ed0 |
| 2ee27efca7 |
| f6e13eb890 |
| e7b3ce27ca |
| dba27cf8b5 |
| 6469f224f6 |
| f3a63be215 |
| 01d8c189f0 |
.github/codeql/codeql-config.yml (vendored): 23 changed lines
@@ -2,20 +2,27 @@ name: "CodeQL Config"

paths-ignore:
  # Ignore template files - these are boilerplate code that shouldn't be analyzed
  - "src/crewai/cli/templates/**"
  - "lib/crewai/src/crewai/cli/templates/**"
  # Ignore test cassettes - these are test fixtures/recordings
  - "tests/cassettes/**"
  - "lib/crewai/tests/cassettes/**"
  - "lib/crewai-tools/tests/cassettes/**"
  # Ignore cache and build artifacts
  - ".cache/**"
  # Ignore documentation build artifacts
  - "docs/.cache/**"

  # Ignore experimental code
  - "lib/crewai/src/crewai/experimental/a2a/**"

paths:
  # Include all Python source code
  - "src/**"
  # Include tests (but exclude cassettes)
  - "tests/**"
  # Include all Python source code from workspace packages
  - "lib/crewai/src/**"
  - "lib/crewai-tools/src/**"
  - "lib/devtools/src/**"
  # Include tests (but exclude cassettes via paths-ignore)
  - "lib/crewai/tests/**"
  - "lib/crewai-tools/tests/**"
  - "lib/devtools/tests/**"

# Configure specific queries or packs if needed
# queries:
#   - uses: security-and-quality
#   - uses: security-and-quality
@@ -7,7 +7,7 @@ mode: "wide"

## Overview

CrewAI integrates with multiple LLM providers through LiteLLM, giving you the flexibility to choose the right model for your specific use case. This guide will help you understand how to configure and use different LLM providers in your CrewAI projects.
CrewAI integrates with multiple LLM providers through provider-native SDKs, giving you the flexibility to choose the right model for your specific use case. This guide will help you understand how to configure and use different LLM providers in your CrewAI projects.

## What are LLMs?
@@ -113,44 +113,104 @@ In this section, you'll find detailed examples that help you select, configure,

<AccordionGroup>
<Accordion title="OpenAI">
Set the following environment variables in your `.env` file:
CrewAI provides native integration with OpenAI through the OpenAI Python SDK.

```toml Code
# Required
OPENAI_API_KEY=sk-...

# Optional
OPENAI_API_BASE=<custom-base-url>
OPENAI_ORGANIZATION=<your-org-id>
OPENAI_BASE_URL=<custom-base-url>
```

Example usage in your CrewAI project:
**Basic Usage:**
```python Code
from crewai import LLM

llm = LLM(
    model="openai/gpt-4", # call model by provider/model_name
    temperature=0.8,
    max_tokens=150,
    model="openai/gpt-4o",
    api_key="your-api-key", # Or set OPENAI_API_KEY
    temperature=0.7,
    max_tokens=4000
)
```

**Advanced Configuration:**
```python Code
from crewai import LLM

llm = LLM(
    model="openai/gpt-4o",
    api_key="your-api-key",
    base_url="https://api.openai.com/v1", # Optional custom endpoint
    organization="org-...", # Optional organization ID
    project="proj_...", # Optional project ID
    temperature=0.7,
    max_tokens=4000,
    max_completion_tokens=4000, # For newer models
    top_p=0.9,
    frequency_penalty=0.1,
    presence_penalty=0.1,
    stop=["END"],
    seed=42
    seed=42, # For reproducible outputs
    stream=True, # Enable streaming
    timeout=60.0, # Request timeout in seconds
    max_retries=3, # Maximum retry attempts
    logprobs=True, # Return log probabilities
    top_logprobs=5, # Number of most likely tokens
    reasoning_effort="medium" # For o1 models: low, medium, high
)
```
OpenAI is one of the leading providers of LLMs with a wide range of models and features.
**Structured Outputs:**
```python Code
from pydantic import BaseModel
from crewai import LLM

class ResponseFormat(BaseModel):
    name: str
    age: int
    summary: str

llm = LLM(
    model="openai/gpt-4o",
    response_format=ResponseFormat,  # pass the Pydantic model to request structured output
)
```
**Supported Environment Variables:**
- `OPENAI_API_KEY`: Your OpenAI API key (required)
- `OPENAI_BASE_URL`: Custom base URL for OpenAI API (optional)

**Features:**
- Native function calling support (except o1 models)
- Structured outputs with JSON schema
- Streaming support for real-time responses
- Token usage tracking
- Stop sequences support (except o1 models)
- Log probabilities for token-level insights
- Reasoning effort control for o1 models

**Supported Models:**

| Model                | Context Window | Best For                                    |
|----------------------|----------------|---------------------------------------------|
| GPT-4                | 8,192 tokens   | High-accuracy tasks, complex reasoning      |
| GPT-4 Turbo          | 128,000 tokens | Long-form content, document analysis        |
| GPT-4o & GPT-4o-mini | 128,000 tokens | Cost-effective large context processing     |
| o3-mini              | 200,000 tokens | Fast reasoning, complex reasoning           |
| o1-mini              | 128,000 tokens | Fast reasoning, complex reasoning           |
| o1-preview           | 128,000 tokens | Fast reasoning, complex reasoning           |
| o1                   | 200,000 tokens | Fast reasoning, complex reasoning           |
| gpt-4.1              | 1M tokens      | Latest model with enhanced capabilities     |
| gpt-4.1-mini         | 1M tokens      | Efficient version with large context        |
| gpt-4.1-nano         | 1M tokens      | Ultra-efficient variant                     |
| gpt-4o               | 128,000 tokens | Optimized for speed and intelligence        |
| gpt-4o-mini          | 200,000 tokens | Cost-effective with large context           |
| gpt-4-turbo          | 128,000 tokens | Long-form content, document analysis        |
| gpt-4                | 8,192 tokens   | High-accuracy tasks, complex reasoning      |
| o1                   | 200,000 tokens | Advanced reasoning, complex problem-solving |
| o1-preview           | 128,000 tokens | Preview of reasoning capabilities           |
| o1-mini              | 128,000 tokens | Efficient reasoning model                   |
| o3-mini              | 200,000 tokens | Lightweight reasoning model                 |
| o4-mini              | 200,000 tokens | Next-gen efficient reasoning                |

**Note:** To use OpenAI, install the required dependencies:
```bash
uv add "crewai[openai]"
```
</Accordion>
<Accordion title="Meta-Llama">
@@ -187,69 +247,186 @@ In this section, you'll find detailed examples that help you select, configure,
</Accordion>
<Accordion title="Anthropic">
CrewAI provides native integration with Anthropic through the Anthropic Python SDK.

```toml Code
# Required
ANTHROPIC_API_KEY=sk-ant-...

# Optional
ANTHROPIC_API_BASE=<custom-base-url>
```

Example usage in your CrewAI project:
**Basic Usage:**
```python Code
from crewai import LLM

llm = LLM(
    model="anthropic/claude-3-sonnet-20240229-v1:0",
    temperature=0.7
    model="anthropic/claude-3-5-sonnet-20241022",
    api_key="your-api-key", # Or set ANTHROPIC_API_KEY
    max_tokens=4096 # Required for Anthropic
)
```

**Advanced Configuration:**
```python Code
from crewai import LLM

llm = LLM(
    model="anthropic/claude-3-5-sonnet-20241022",
    api_key="your-api-key",
    base_url="https://api.anthropic.com", # Optional custom endpoint
    temperature=0.7,
    max_tokens=4096, # Required parameter
    top_p=0.9,
    stop_sequences=["END", "STOP"], # Anthropic uses stop_sequences
    stream=True, # Enable streaming
    timeout=60.0, # Request timeout in seconds
    max_retries=3 # Maximum retry attempts
)
```

**Supported Environment Variables:**
- `ANTHROPIC_API_KEY`: Your Anthropic API key (required)

**Features:**
- Native tool use support for Claude 3+ models
- Streaming support for real-time responses
- Automatic system message handling
- Stop sequences for controlled output
- Token usage tracking
- Multi-turn tool use conversations

**Important Notes:**
- `max_tokens` is a **required** parameter for all Anthropic models
- Claude uses `stop_sequences` instead of `stop`
- System messages are handled separately from conversation messages
- First message must be from the user (automatically handled)
- Messages must alternate between user and assistant

**Supported Models:**

| Model                      | Context Window | Best For                                 |
|----------------------------|----------------|------------------------------------------|
| claude-3-7-sonnet          | 200,000 tokens | Advanced reasoning and agentic tasks     |
| claude-3-5-sonnet-20241022 | 200,000 tokens | Latest Sonnet with best performance      |
| claude-3-5-haiku           | 200,000 tokens | Fast, compact model for quick responses  |
| claude-3-opus              | 200,000 tokens | Most capable for complex tasks           |
| claude-3-sonnet            | 200,000 tokens | Balanced intelligence and speed          |
| claude-3-haiku             | 200,000 tokens | Fastest for simple tasks                 |
| claude-2.1                 | 200,000 tokens | Extended context, reduced hallucinations |
| claude-2                   | 100,000 tokens | Versatile model for various tasks        |
| claude-instant             | 100,000 tokens | Fast, cost-effective for everyday tasks  |

**Note:** To use Anthropic, install the required dependencies:
```bash
uv add "crewai[anthropic]"
```
</Accordion>
<Accordion title="Google (Gemini API)">
Set your API key in your `.env` file. If you need a key, or need to find an
existing key, check [AI Studio](https://aistudio.google.com/apikey).
CrewAI provides native integration with Google Gemini through the Google Gen AI Python SDK.

Set your API key in your `.env` file. If you need a key, check [AI Studio](https://aistudio.google.com/apikey).

```toml .env
# https://ai.google.dev/gemini-api/docs/api-key
# Required (one of the following)
GOOGLE_API_KEY=<your-api-key>
GEMINI_API_KEY=<your-api-key>

# Optional - for Vertex AI
GOOGLE_CLOUD_PROJECT=<your-project-id>
GOOGLE_CLOUD_LOCATION=<location> # Defaults to us-central1
GOOGLE_GENAI_USE_VERTEXAI=true # Set to use Vertex AI
```

Example usage in your CrewAI project:
**Basic Usage:**
```python Code
from crewai import LLM

llm = LLM(
    model="gemini/gemini-2.0-flash",
    temperature=0.7,
    api_key="your-api-key", # Or set GOOGLE_API_KEY/GEMINI_API_KEY
    temperature=0.7
)
```

### Gemini models
**Advanced Configuration:**
```python Code
from crewai import LLM

llm = LLM(
    model="gemini/gemini-2.5-flash",
    api_key="your-api-key",
    temperature=0.7,
    top_p=0.9,
    top_k=40, # Top-k sampling parameter
    max_output_tokens=8192,
    stop_sequences=["END", "STOP"],
    stream=True, # Enable streaming
    safety_settings={
        "HARM_CATEGORY_HARASSMENT": "BLOCK_NONE",
        "HARM_CATEGORY_HATE_SPEECH": "BLOCK_NONE"
    }
)
```

**Vertex AI Configuration:**
```python Code
from crewai import LLM

llm = LLM(
    model="gemini/gemini-1.5-pro",
    project="your-gcp-project-id",
    location="us-central1" # GCP region
)
```

**Supported Environment Variables:**
- `GOOGLE_API_KEY` or `GEMINI_API_KEY`: Your Google API key (required for Gemini API)
- `GOOGLE_CLOUD_PROJECT`: Google Cloud project ID (for Vertex AI)
- `GOOGLE_CLOUD_LOCATION`: GCP location (defaults to `us-central1`)
- `GOOGLE_GENAI_USE_VERTEXAI`: Set to `true` to use Vertex AI

**Features:**
- Native function calling support for Gemini 1.5+ and 2.x models
- Streaming support for real-time responses
- Multimodal capabilities (text, images, video)
- Safety settings configuration
- Support for both Gemini API and Vertex AI
- Automatic system instruction handling
- Token usage tracking

**Gemini Models:**

Google offers a range of powerful models optimized for different use cases.

| Model                          | Context Window | Best For                                                           |
|--------------------------------|----------------|--------------------------------------------------------------------|
| gemini-2.5-flash-preview-04-17 | 1M tokens      | Adaptive thinking, cost efficiency                                 |
| gemini-2.5-pro-preview-05-06   | 1M tokens      | Enhanced thinking and reasoning, multimodal understanding, advanced coding, and more |
| gemini-2.0-flash               | 1M tokens      | Next generation features, speed, thinking, and realtime streaming  |
| gemini-2.5-flash               | 1M tokens      | Adaptive thinking, cost efficiency                                 |
| gemini-2.5-pro                 | 1M tokens      | Enhanced thinking and reasoning, multimodal understanding          |
| gemini-2.0-flash               | 1M tokens      | Next generation features, speed, thinking                          |
| gemini-2.0-flash-thinking      | 32,768 tokens  | Advanced reasoning with thinking process                           |
| gemini-2.0-flash-lite          | 1M tokens      | Cost efficiency and low latency                                    |
| gemini-1.5-pro                 | 2M tokens      | Best performing, logical reasoning, coding                         |
| gemini-1.5-flash               | 1M tokens      | Balanced multimodal model, good for most tasks                     |
| gemini-1.5-flash-8B            | 1M tokens      | Fastest, most cost-efficient, good for high-frequency tasks        |
| gemini-1.5-pro                 | 2M tokens      | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
| gemini-1.5-flash-8b            | 1M tokens      | Fastest, most cost-efficient                                       |
| gemini-1.0-pro                 | 32,768 tokens  | Earlier generation model                                           |

**Gemma Models:**

The Gemini API also supports [Gemma models](https://ai.google.dev/gemma/docs) hosted on Google infrastructure.

| Model       | Context Window | Best For                            |
|-------------|----------------|-------------------------------------|
| gemma-3-1b  | 32,000 tokens  | Ultra-lightweight tasks             |
| gemma-3-4b  | 128,000 tokens | Efficient general-purpose tasks     |
| gemma-3-12b | 128,000 tokens | Balanced performance and efficiency |
| gemma-3-27b | 128,000 tokens | High-performance tasks              |

**Note:** To use Google Gemini, install the required dependencies:
```bash
uv add "crewai[google-genai]"
```

The full list of models is available in the [Gemini model docs](https://ai.google.dev/gemini-api/docs/models).

### Gemma

The Gemini API also allows you to use your API key to access [Gemma models](https://ai.google.dev/gemma/docs) hosted on Google infrastructure.

| Model          | Context Window |
|----------------|----------------|
| gemma-3-1b-it  | 32k tokens     |
| gemma-3-4b-it  | 32k tokens     |
| gemma-3-12b-it | 32k tokens     |
| gemma-3-27b-it | 128k tokens    |

</Accordion>
<Accordion title="Google (Vertex AI)">
Get credentials from your Google Cloud Console and save them to a JSON file, then load them with the following code:
@@ -291,43 +468,146 @@ In this section, you'll find detailed examples that help you select, configure,
</Accordion>
<Accordion title="Azure">
CrewAI provides native integration with Azure AI Inference and Azure OpenAI through the Azure AI Inference Python SDK.

```toml Code
# Required
AZURE_API_KEY=<your-api-key>
AZURE_API_BASE=<your-resource-url>
AZURE_API_VERSION=<api-version>
AZURE_ENDPOINT=<your-endpoint-url>

# Optional
AZURE_AD_TOKEN=<your-azure-ad-token>
AZURE_API_TYPE=<your-azure-api-type>
AZURE_API_VERSION=<api-version> # Defaults to 2024-06-01
```

Example usage in your CrewAI project:
**Endpoint URL Formats:**

For Azure OpenAI deployments:
```
https://<resource-name>.openai.azure.com/openai/deployments/<deployment-name>
```

For Azure AI Inference endpoints:
```
https://<resource-name>.inference.azure.com
```

**Basic Usage:**
```python Code
llm = LLM(
    model="azure/gpt-4",
    api_version="2023-05-15"
    api_key="<your-api-key>", # Or set AZURE_API_KEY
    endpoint="<your-endpoint-url>",
    api_version="2024-06-01"
)
```

**Advanced Configuration:**
```python Code
llm = LLM(
    model="azure/gpt-4o",
    temperature=0.7,
    max_tokens=4000,
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    stop=["END"],
    stream=True,
    timeout=60.0,
    max_retries=3
)
```

**Supported Environment Variables:**
- `AZURE_API_KEY`: Your Azure API key (required)
- `AZURE_ENDPOINT`: Your Azure endpoint URL (required, also checks `AZURE_OPENAI_ENDPOINT` and `AZURE_API_BASE`)
- `AZURE_API_VERSION`: API version (optional, defaults to `2024-06-01`)

**Features:**
- Native function calling support for Azure OpenAI models (gpt-4, gpt-4o, gpt-3.5-turbo, etc.)
- Streaming support for real-time responses
- Automatic endpoint URL validation and correction
- Comprehensive error handling with retry logic
- Token usage tracking

**Note:** To use Azure AI Inference, install the required dependencies:
```bash
uv add "crewai[azure-ai-inference]"
```
</Accordion>
<Accordion title="AWS Bedrock">
CrewAI provides native integration with AWS Bedrock through the boto3 SDK using the Converse API.

```toml Code
# Required
AWS_ACCESS_KEY_ID=<your-access-key>
AWS_SECRET_ACCESS_KEY=<your-secret-key>
AWS_DEFAULT_REGION=<your-region>

# Optional
AWS_SESSION_TOKEN=<your-session-token> # For temporary credentials
AWS_DEFAULT_REGION=<your-region> # Defaults to us-east-1
```

Example usage in your CrewAI project:
**Basic Usage:**
```python Code
from crewai import LLM

llm = LLM(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
    model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
    region_name="us-east-1"
)
```

Before using Amazon Bedrock, make sure you have boto3 installed in your environment
**Advanced Configuration:**
```python Code
from crewai import LLM

[Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html) is a managed service that provides access to multiple foundation models from top AI companies through a unified API, enabling secure and responsible AI application development.
llm = LLM(
    model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
    aws_access_key_id="your-access-key", # Or set AWS_ACCESS_KEY_ID
    aws_secret_access_key="your-secret-key", # Or set AWS_SECRET_ACCESS_KEY
    aws_session_token="your-session-token", # For temporary credentials
    region_name="us-east-1",
    temperature=0.7,
    max_tokens=4096,
    top_p=0.9,
    top_k=250, # For Claude models
    stop_sequences=["END", "STOP"],
    stream=True, # Enable streaming
    guardrail_config={ # Optional content filtering
        "guardrailIdentifier": "your-guardrail-id",
        "guardrailVersion": "1"
    },
    additional_model_request_fields={ # Model-specific parameters
        "top_k": 250
    }
)
```

**Supported Environment Variables:**
- `AWS_ACCESS_KEY_ID`: AWS access key (required)
- `AWS_SECRET_ACCESS_KEY`: AWS secret key (required)
- `AWS_SESSION_TOKEN`: AWS session token for temporary credentials (optional)
- `AWS_DEFAULT_REGION`: AWS region (defaults to `us-east-1`)

**Features:**
- Native tool calling support via Converse API
- Streaming and non-streaming responses
- Comprehensive error handling with retry logic
- Guardrail configuration for content filtering
- Model-specific parameters via `additional_model_request_fields`
- Token usage tracking and stop reason logging
- Support for all Bedrock foundation models
- Automatic conversation format handling

**Important Notes:**
- Uses the modern Converse API for unified model access
- Automatic handling of model-specific conversation requirements
- System messages are handled separately from conversation
- First message must be from user (automatically handled)
- Some models (like Cohere) require conversation to end with user message

[Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html) is a managed service that provides access to multiple foundation models from top AI companies through a unified API.

| Model | Context Window | Best For |
|-------------------------|----------------------|-------------------------------------------------------------------|
@@ -357,7 +637,12 @@ In this section, you'll find detailed examples that help you select, configure,
| Jamba-Instruct | Up to 256k tokens | Model with extended context window optimized for cost-effective text generation, summarization, and Q&A. |
| Mistral 7B Instruct | Up to 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
| Mistral 8x7B Instruct | Up to 32k tokens | An MOE LLM that follows instructions, completes requests, and generates creative text. |
| DeepSeek R1 | 32,768 tokens | Advanced reasoning model |

**Note:** To use AWS Bedrock, install the required dependencies:
```bash
uv add "crewai[bedrock]"
```
</Accordion>
<Accordion title="Amazon SageMaker">
@@ -899,7 +1184,7 @@ Learn how to get the most out of your LLM configuration:
</Accordion>

<Accordion title="Drop Additional Parameters">
CrewAI internally uses Litellm for LLM calls, which allows you to drop additional parameters that are not needed for your specific use case. This can help simplify your code and reduce the complexity of your LLM configuration.
CrewAI internally uses native SDKs for LLM calls, which allows you to drop additional parameters that are not needed for your specific use case. This can help simplify your code and reduce the complexity of your LLM configuration.
For example, if you don't need to send the <code>stop</code> parameter, you can simply omit it from your LLM call, as sketched below:

```python
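# The original example is truncated in this diff, so the lines below are an
# illustrative sketch rather than the upstream snippet: to drop a parameter,
# simply leave it out of the LLM() call.
from crewai import LLM

llm = LLM(
    model="openai/gpt-4o",
    temperature=0.7,
    # no stop parameter passed
)
```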
@@ -11,7 +11,7 @@ mode: "wide"
<Card
  title="Bedrock Invoke Agent Tool"
  icon="cloud"
  href="/en/tools/tool-integrations/bedrockinvokeagenttool"
  href="/en/tools/integration/bedrockinvokeagenttool"
  color="#0891B2"
>
  Invoke Amazon Bedrock Agents from CrewAI to orchestrate actions across AWS services.
@@ -20,7 +20,7 @@ mode: "wide"
<Card
  title="CrewAI Automation Tool"
  icon="bolt"
  href="/en/tools/tool-integrations/crewaiautomationtool"
  href="/en/tools/integration/crewaiautomationtool"
  color="#7C3AED"
>
  Automate deployment and operations by integrating CrewAI with external platforms and workflows.
@@ -12,7 +12,7 @@ dependencies = [
    "pytube>=15.0.0",
    "requests>=2.32.5",
    "docker>=7.1.0",
    "crewai==1.0.0",
    "crewai==1.2.0",
    "lancedb>=0.5.4",
    "tiktoken>=0.8.0",
    "beautifulsoup4>=4.13.4",

@@ -287,4 +287,4 @@ __all__ = [
    "ZapierActionTools",
]

__version__ = "1.0.0"
__version__ = "1.2.0"
@@ -1,80 +1,42 @@
from collections.abc import Callable
from __future__ import annotations

import importlib
import json
import os
from collections.abc import Callable
from typing import Any


try:
    from qdrant_client import QdrantClient
    from qdrant_client.http.models import FieldCondition, Filter, MatchValue

    QDRANT_AVAILABLE = True
except ImportError:
    QDRANT_AVAILABLE = False
    QdrantClient = Any  # type: ignore[assignment,misc] # type placeholder
    Filter = Any  # type: ignore[assignment,misc]
    FieldCondition = Any  # type: ignore[assignment,misc]
    MatchValue = Any  # type: ignore[assignment,misc]

from crewai.tools import BaseTool, EnvVar
from pydantic import BaseModel, ConfigDict, Field
from pydantic import BaseModel, ConfigDict, Field, model_validator
from pydantic.types import ImportString


class QdrantToolSchema(BaseModel):
    """Input for QdrantTool."""
    query: str = Field(..., description="Query to search in Qdrant DB.")
    filter_by: str | None = None
    filter_value: str | None = None

    query: str = Field(
        ...,
        description="The query to search retrieve relevant information from the Qdrant database. Pass only the query, not the question.",
    )
    filter_by: str | None = Field(
        default=None,
        description="Filter by properties. Pass only the properties, not the question.",
    )
    filter_value: str | None = Field(
        default=None,
        description="Filter by value. Pass only the value, not the question.",
    )

class QdrantConfig(BaseModel):
    """All Qdrant connection and search settings."""

    qdrant_url: str
    qdrant_api_key: str | None = None
    collection_name: str
    limit: int = 3
    score_threshold: float = 0.35
    filter_conditions: list[tuple[str, Any]] = Field(default_factory=list)


class QdrantVectorSearchTool(BaseTool):
    """Tool to query and filter results from a Qdrant database.

    This tool enables vector similarity search on internal documents stored in Qdrant,
    with optional filtering capabilities.

    Attributes:
        client: Configured QdrantClient instance
        collection_name: Name of the Qdrant collection to search
        limit: Maximum number of results to return
        score_threshold: Minimum similarity score threshold
        qdrant_url: Qdrant server URL
        qdrant_api_key: Authentication key for Qdrant
    """
    """Vector search tool for Qdrant."""

    model_config = ConfigDict(arbitrary_types_allowed=True)
    client: QdrantClient = None  # type: ignore[assignment]

    # --- Metadata ---
    name: str = "QdrantVectorSearchTool"
    description: str = "A tool to search the Qdrant database for relevant information on internal documents."
    description: str = "Search Qdrant vector DB for relevant documents."
    args_schema: type[BaseModel] = QdrantToolSchema
    query: str | None = None
    filter_by: str | None = None
    filter_value: str | None = None
    collection_name: str | None = None
    limit: int | None = Field(default=3)
    score_threshold: float = Field(default=0.35)
    qdrant_url: str = Field(
        ...,
        description="The URL of the Qdrant server",
    )
    qdrant_api_key: str | None = Field(
        default=None,
        description="The API key for the Qdrant server",
    )
    custom_embedding_fn: Callable | None = Field(
        default=None,
        description="A custom embedding function to use for vectorization. If not provided, the default model will be used.",
    )
    package_dependencies: list[str] = Field(default_factory=lambda: ["qdrant-client"])
    env_vars: list[EnvVar] = Field(
        default_factory=lambda: [
@@ -83,107 +45,81 @@ class QdrantVectorSearchTool(BaseTool):
            )
        ]
    )
    qdrant_config: QdrantConfig
    qdrant_package: ImportString[Any] = Field(
        default="qdrant_client",
        description="Base package path for Qdrant. Will dynamically import client and models.",
    )
    custom_embedding_fn: ImportString[Callable[[str], list[float]]] | None = Field(
        default=None,
        description="Optional embedding function or import path.",
    )
    client: Any | None = None

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        if QDRANT_AVAILABLE:
            self.client = QdrantClient(
                url=self.qdrant_url,
                api_key=self.qdrant_api_key if self.qdrant_api_key else None,
    @model_validator(mode="after")
    def _setup_qdrant(self) -> QdrantVectorSearchTool:
        # Import the qdrant_package if it's a string
        if isinstance(self.qdrant_package, str):
            self.qdrant_package = importlib.import_module(self.qdrant_package)

        if not self.client:
            self.client = self.qdrant_package.QdrantClient(
                url=self.qdrant_config.qdrant_url,
                api_key=self.qdrant_config.qdrant_api_key or None,
            )
        else:
            import click

            if click.confirm(
                "The 'qdrant-client' package is required to use the QdrantVectorSearchTool. "
                "Would you like to install it?"
            ):
                import subprocess

                subprocess.run(["uv", "add", "qdrant-client"], check=True)  # noqa: S607
            else:
                raise ImportError(
                    "The 'qdrant-client' package is required to use the QdrantVectorSearchTool. "
                    "Please install it with: uv add qdrant-client"
                )
        return self

    def _run(
        self,
        query: str,
        filter_by: str | None = None,
        filter_value: str | None = None,
        filter_value: Any | None = None,
    ) -> str:
        """Execute vector similarity search on Qdrant.
        """Perform vector similarity search."""
        filter_ = self.qdrant_package.http.models.Filter
        field_condition = self.qdrant_package.http.models.FieldCondition
        match_value = self.qdrant_package.http.models.MatchValue
        conditions = self.qdrant_config.filter_conditions.copy()
        if filter_by and filter_value is not None:
            conditions.append((filter_by, filter_value))

        Args:
            query: Search query to vectorize and match
            filter_by: Optional metadata field to filter on
            filter_value: Optional value to filter by

        Returns:
            JSON string containing search results with metadata and scores

        Raises:
            ImportError: If qdrant-client is not installed
            ValueError: If Qdrant credentials are missing
        """
        if not self.qdrant_url:
            raise ValueError("QDRANT_URL is not set")

        # Create filter if filter parameters are provided
        search_filter = None
        if filter_by and filter_value:
            search_filter = Filter(
        search_filter = (
            filter_(
                must=[
                    FieldCondition(key=filter_by, match=MatchValue(value=filter_value))
                    field_condition(key=k, match=match_value(value=v))
                    for k, v in conditions
                ]
            )

        # Search in Qdrant using the built-in query method
        query_vector = (
            self._vectorize_query(query, embedding_model="text-embedding-3-large")
            if not self.custom_embedding_fn
            else self.custom_embedding_fn(query)
            if conditions
            else None
        )
        search_results = self.client.query_points(
            collection_name=self.collection_name,  # type: ignore[arg-type]
        query_vector = (
            self.custom_embedding_fn(query)
            if self.custom_embedding_fn
            else (
                lambda: __import__("openai")
                .Client(api_key=os.getenv("OPENAI_API_KEY"))
                .embeddings.create(input=[query], model="text-embedding-3-large")
                .data[0]
                .embedding
            )()
        )
        results = self.client.query_points(
            collection_name=self.qdrant_config.collection_name,
            query=query_vector,
            query_filter=search_filter,
            limit=self.limit,  # type: ignore[arg-type]
            score_threshold=self.score_threshold,
            limit=self.qdrant_config.limit,
            score_threshold=self.qdrant_config.score_threshold,
        )

        # Format results similar to storage implementation
        results = []
        # Extract the list of ScoredPoint objects from the tuple
        for point in search_results:
            result = {
                "metadata": point[1][0].payload.get("metadata", {}),
                "context": point[1][0].payload.get("text", ""),
                "distance": point[1][0].score,
            }
            results.append(result)

        return json.dumps(results, indent=2)

    def _vectorize_query(self, query: str, embedding_model: str) -> list[float]:
        """Default vectorization function with openai.

        Args:
            query (str): The query to vectorize
            embedding_model (str): The embedding model to use

        Returns:
            list[float]: The vectorized query
        """
        import openai

        client = openai.Client(api_key=os.getenv("OPENAI_API_KEY"))
        return (
            client.embeddings.create(
                input=[query],
                model=embedding_model,
            )
            .data[0]
            .embedding
        return json.dumps(
            [
                {
                    "distance": p.score,
                    "metadata": p.payload.get("metadata", {}) if p.payload else {},
                    "context": p.payload.get("text", "") if p.payload else {},
                }
                for p in results.points
            ],
            indent=2,
        )
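For orientation, a minimal usage sketch of the refactored tool follows. It assumes a running Qdrant instance, an existing collection, and an `OPENAI_API_KEY` for the default embedding path; the URL, collection name, and the exact export location of `QdrantConfig` are assumptions, not something shown in this diff.

```python
# QdrantVectorSearchTool is exported by crewai_tools; the QdrantConfig export
# path is assumed here (both classes are defined in the module changed above).
from crewai_tools import QdrantConfig, QdrantVectorSearchTool

config = QdrantConfig(
    qdrant_url="http://localhost:6333",  # placeholder Qdrant endpoint
    collection_name="internal_docs",     # placeholder collection
    limit=3,
    score_threshold=0.35,
)

tool = QdrantVectorSearchTool(qdrant_config=config)

# Returns a JSON string of {"distance", "metadata", "context"} entries,
# using OPENAI_API_KEY for embeddings unless custom_embedding_fn is set.
print(tool.run(query="vacation policy"))
```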
@@ -31,6 +31,7 @@ def run_command(cmd, cwd):
    return subprocess.run(cmd, cwd=cwd, capture_output=True, text=True)


@pytest.mark.skip(reason="Test takes too long in GitHub Actions (>30s timeout) due to dependency installation")
def test_no_optional_dependencies_in_init(temp_project):
    """
    Test that crewai-tools can be imported without optional dependencies.

@@ -49,7 +49,7 @@ Repository = "https://github.com/crewAIInc/crewAI"

[project.optional-dependencies]
tools = [
    "crewai-tools==1.0.0",
    "crewai-tools==1.2.0",
]
embeddings = [
    "tiktoken~=0.8.0"
@@ -66,11 +66,6 @@ openpyxl = [
mem0 = ["mem0ai>=0.1.94"]
docling = [
    "docling>=2.12.0",
]
aisuite = [
    "aisuite>=0.1.11",

]
qdrant = [
    "qdrant-client[fastembed]>=1.14.3",
@@ -137,13 +132,3 @@ build-backend = "hatchling.build"

[tool.hatch.version]
path = "src/crewai/__init__.py"

# Declare mutually exclusive extras due to conflicting httpx requirements
# a2a requires httpx>=0.28.1, while aisuite requires httpx>=0.27.0,<0.28.0
# [tool.uv]
# conflicts = [
#     [
#         { extra = "a2a" },
#         { extra = "aisuite" },
#     ],
# ]
@@ -40,7 +40,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:

_suppress_pydantic_deprecation_warnings()

__version__ = "1.0.0"
__version__ = "1.2.0"
_telemetry_submitted = False


@@ -322,7 +322,7 @@ MODELS = {
    ],
}

DEFAULT_LLM_MODEL = "gpt-4o-mini"
DEFAULT_LLM_MODEL = "gpt-4.1-mini"

JSON_URL = "https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json"
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
    "crewai[tools]>=0.203.1,<1.0.0"
    "crewai[tools]==1.2.0"
]

[project.scripts]

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
    "crewai[tools]>=0.203.1,<1.0.0",
    "crewai[tools]==1.2.0"
]

[project.scripts]
@@ -2,7 +2,7 @@
from __future__ import annotations

import os
from typing import TYPE_CHECKING
from typing import TYPE_CHECKING, Any

from pyvis.network import Network  # type: ignore[import-untyped]

@@ -29,7 +29,7 @@ _printer = Printer()
class FlowPlot:
    """Handles the creation and rendering of flow visualization diagrams."""

    def __init__(self, flow: Flow) -> None:
    def __init__(self, flow: Flow[Any]) -> None:
        """
        Initialize FlowPlot with a flow object.

@@ -136,7 +136,7 @@ class FlowPlot:
                f"Unexpected error during flow visualization: {e!s}"
            ) from e
        finally:
            self._cleanup_pyvis_lib()
            self._cleanup_pyvis_lib(filename)

    def _generate_final_html(self, network_html: str) -> str:
        """
@@ -186,26 +186,33 @@ class FlowPlot:
            raise IOError(f"Failed to generate visualization HTML: {e!s}") from e

    @staticmethod
    def _cleanup_pyvis_lib() -> None:
    def _cleanup_pyvis_lib(filename: str) -> None:
        """
        Clean up the generated lib folder from pyvis.

        This method safely removes the temporary lib directory created by pyvis
        during network visualization generation.
        during network visualization generation. The lib folder is created in the
        same directory as the output HTML file.

        Parameters
        ----------
        filename : str
            The output filename (without .html extension) used for the visualization.
        """
        try:
            lib_folder = safe_path_join("lib", root=os.getcwd())
            if os.path.exists(lib_folder) and os.path.isdir(lib_folder):
                import shutil
            import shutil

                shutil.rmtree(lib_folder)
        except ValueError as e:
            _printer.print(f"Error validating lib folder path: {e}", color="red")
            output_dir = os.path.dirname(os.path.abspath(filename)) or os.getcwd()
            lib_folder = os.path.join(output_dir, "lib")
            if os.path.exists(lib_folder) and os.path.isdir(lib_folder):
                vis_js = os.path.join(lib_folder, "vis-network.min.js")
                if os.path.exists(vis_js):
                    shutil.rmtree(lib_folder)
        except Exception as e:
            _printer.print(f"Error cleaning up lib folder: {e}", color="red")


def plot_flow(flow: Flow, filename: str = "flow_plot") -> None:
def plot_flow(flow: Flow[Any], filename: str = "flow_plot") -> None:
    """
    Convenience function to create and save a flow visualization.
@@ -1,5 +1,8 @@
"""HTML template processing and generation for flow visualization diagrams."""

import base64
import re
from typing import Any

from crewai.flow.path_utils import validate_path_exists

@@ -7,7 +10,7 @@ from crewai.flow.path_utils import validate_path_exists
class HTMLTemplateHandler:
    """Handles HTML template processing and generation for flow visualization diagrams."""

    def __init__(self, template_path, logo_path):
    def __init__(self, template_path: str, logo_path: str) -> None:
        """
        Initialize HTMLTemplateHandler with validated template and logo paths.

@@ -29,23 +32,23 @@ class HTMLTemplateHandler:
        except ValueError as e:
            raise ValueError(f"Invalid template or logo path: {e}") from e

    def read_template(self):
    def read_template(self) -> str:
        """Read and return the HTML template file contents."""
        with open(self.template_path, "r", encoding="utf-8") as f:
            return f.read()

    def encode_logo(self):
    def encode_logo(self) -> str:
        """Convert the logo SVG file to base64 encoded string."""
        with open(self.logo_path, "rb") as logo_file:
            logo_svg_data = logo_file.read()
        return base64.b64encode(logo_svg_data).decode("utf-8")

    def extract_body_content(self, html):
    def extract_body_content(self, html: str) -> str:
        """Extract and return content between body tags from HTML string."""
        match = re.search("<body.*?>(.*?)</body>", html, re.DOTALL)
        return match.group(1) if match else ""

    def generate_legend_items_html(self, legend_items):
    def generate_legend_items_html(self, legend_items: list[dict[str, Any]]) -> str:
        """Generate HTML markup for the legend items."""
        legend_items_html = ""
        for item in legend_items:
@@ -73,7 +76,9 @@ class HTMLTemplateHandler:
        """
        return legend_items_html

    def generate_final_html(self, network_body, legend_items_html, title="Flow Plot"):
    def generate_final_html(
        self, network_body: str, legend_items_html: str, title: str = "Flow Plot"
    ) -> str:
        """Combine all components into final HTML document with network visualization."""
        html_template = self.read_template()
        logo_svg_base64 = self.encode_logo()
@@ -1,4 +1,23 @@
def get_legend_items(colors):
"""Legend generation for flow visualization diagrams."""

from typing import Any

from crewai.flow.config import FlowColors


def get_legend_items(colors: FlowColors) -> list[dict[str, Any]]:
    """Generate legend items based on flow colors.

    Parameters
    ----------
    colors : FlowColors
        Dictionary containing color definitions for flow elements.

    Returns
    -------
    list[dict[str, Any]]
        List of legend item dictionaries with labels and styling.
    """
    return [
        {"label": "Start Method", "color": colors["start"]},
        {"label": "Method", "color": colors["method"]},
@@ -24,7 +43,19 @@ def get_legend_items(colors):
    ]


def generate_legend_items_html(legend_items):
def generate_legend_items_html(legend_items: list[dict[str, Any]]) -> str:
    """Generate HTML markup for legend items.

    Parameters
    ----------
    legend_items : list[dict[str, Any]]
        List of legend item dictionaries containing labels and styling.

    Returns
    -------
    str
        HTML string containing formatted legend items.
    """
    legend_items_html = ""
    for item in legend_items:
        if "border" in item:
@@ -36,28 +36,29 @@ from crewai.flow.utils import (
from crewai.utilities.printer import Printer


_printer = Printer()


def method_calls_crew(method: Any) -> bool:
    """
    Check if the method contains a call to `.crew()`.
    Check if the method contains a call to `.crew()`, `.kickoff()`, or `.kickoff_async()`.

    Parameters
    ----------
    method : Any
        The method to analyze for crew() calls.
        The method to analyze for crew or agent execution calls.

    Returns
    -------
    bool
        True if the method calls .crew(), False otherwise.
        True if the method calls .crew(), .kickoff(), or .kickoff_async(), False otherwise.

    Notes
    -----
    Uses AST analysis to detect method calls, specifically looking for
    attribute access of 'crew'.
    attribute access of 'crew', 'kickoff', or 'kickoff_async'.
    This includes both traditional Crew execution (.crew()) and Agent/LiteAgent
    execution (.kickoff() or .kickoff_async()).
    """
    try:
        source = inspect.getsource(method)
@@ -68,14 +69,14 @@ def method_calls_crew(method: Any) -> bool:
        return False

    class CrewCallVisitor(ast.NodeVisitor):
        """AST visitor to detect .crew() method calls."""
        """AST visitor to detect .crew(), .kickoff(), or .kickoff_async() method calls."""

        def __init__(self):
        def __init__(self) -> None:
            self.found = False

        def visit_Call(self, node):
        def visit_Call(self, node: ast.Call) -> None:
            if isinstance(node.func, ast.Attribute):
                if node.func.attr == "crew":
                if node.func.attr in ("crew", "kickoff", "kickoff_async"):
                    self.found = True
            self.generic_visit(node)
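As a self-contained illustration of the same AST-based check (independent of CrewAI internals; the function names below are illustrative, not taken from this diff):

```python
import ast
import inspect
import textwrap


def calls_crew_or_kickoff(func) -> bool:
    """Detect .crew()/.kickoff()/.kickoff_async() calls in a function's source."""
    tree = ast.parse(textwrap.dedent(inspect.getsource(func)))
    for node in ast.walk(tree):
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Attribute)
            and node.func.attr in ("crew", "kickoff", "kickoff_async")
        ):
            return True
    return False


def start(self):
    # Example flow method that triggers agent execution via kickoff()
    return self.assistant.kickoff("Summarize the report")


print(calls_crew_or_kickoff(start))  # True
```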
@@ -113,7 +114,7 @@ def add_nodes_to_network(
    - Regular methods
    """

    def human_friendly_label(method_name):
    def human_friendly_label(method_name: str) -> str:
        return method_name.replace("_", " ").title()

    node_style: (
@@ -1,99 +0,0 @@
"""AI Suite LLM integration for CrewAI.

This module provides integration with AI Suite for LLM capabilities.
"""

from typing import Any

import aisuite as ai  # type: ignore

from crewai.llms.base_llm import BaseLLM


class AISuiteLLM(BaseLLM):
    """AI Suite LLM implementation.

    This class provides integration with AI Suite models through the BaseLLM interface.
    """

    def __init__(
        self,
        model: str,
        temperature: float | None = None,
        stop: list[str] | None = None,
        **kwargs: Any,
    ) -> None:
        """Initialize the AI Suite LLM.

        Args:
            model: The model identifier for AI Suite.
            temperature: Optional temperature setting for response generation.
            stop: Optional list of stop sequences for generation.
            **kwargs: Additional keyword arguments passed to the AI Suite client.
        """
        super().__init__(model=model, temperature=temperature, stop=stop)
        self.client = ai.Client()
        self.kwargs = kwargs

    def call(  # type: ignore[override]
        self,
        messages: str | list[dict[str, str]],
        tools: list[dict] | None = None,
        callbacks: list[Any] | None = None,
        available_functions: dict[str, Any] | None = None,
        from_task: Any | None = None,
        from_agent: Any | None = None,
    ) -> str | Any:
        """Call the AI Suite LLM with the given messages.

        Args:
            messages: Input messages for the LLM.
            tools: Optional list of tool schemas for function calling.
            callbacks: Optional list of callback functions.
            available_functions: Optional dict mapping function names to callables.
            from_task: Optional task caller.
            from_agent: Optional agent caller.

        Returns:
            The text response from the LLM.
        """
        completion_params = self._prepare_completion_params(messages, tools)
        response = self.client.chat.completions.create(**completion_params)

        return response.choices[0].message.content

    def _prepare_completion_params(
        self,
        messages: str | list[dict[str, str]],
        tools: list[dict] | None = None,
    ) -> dict[str, Any]:
        """Prepare parameters for the AI Suite completion call.

        Args:
            messages: Input messages for the LLM.
            tools: Optional list of tool schemas.

        Returns:
            Dictionary of parameters for the completion API.
        """
        params: dict[str, Any] = {
            "model": self.model,
            "messages": messages,
            "temperature": self.temperature,
            "tools": tools,
            **self.kwargs,
        }

        if self.stop:
            params["stop"] = self.stop

        return params

    @staticmethod
    def supports_function_calling() -> bool:
        """Check if the LLM supports function calling.

        Returns:
            False, as AI Suite does not currently support function calling.
        """
        return False
lib/crewai/src/crewai/mypy.py (new file, 60 lines)
@@ -0,0 +1,60 @@
"""Mypy plugin for CrewAI decorator type checking.

This plugin informs mypy about attributes injected by the @CrewBase decorator.
"""

from collections.abc import Callable

from mypy.nodes import MDEF, SymbolTableNode, Var
from mypy.plugin import ClassDefContext, Plugin
from mypy.types import AnyType, TypeOfAny


class CrewAIPlugin(Plugin):
    """Mypy plugin that handles @CrewBase decorator attribute injection."""

    def get_class_decorator_hook(
        self, fullname: str
    ) -> Callable[[ClassDefContext], None] | None:
        """Return hook for class decorators.

        Args:
            fullname: Fully qualified name of the decorator.

        Returns:
            Hook function if this is a CrewBase decorator, None otherwise.
        """
        if fullname in ("crewai.project.CrewBase", "crewai.project.crew_base.CrewBase"):
            return self._crew_base_hook
        return None

    @staticmethod
    def _crew_base_hook(ctx: ClassDefContext) -> None:
        """Add injected attributes to @CrewBase decorated classes.

        Args:
            ctx: Context for the class being decorated.
        """
        any_type = AnyType(TypeOfAny.explicit)
        str_type = ctx.api.named_type("builtins.str")
        dict_type = ctx.api.named_type("builtins.dict", [str_type, any_type])
        agents_config_var = Var("agents_config", dict_type)
        agents_config_var.info = ctx.cls.info
        agents_config_var._fullname = f"{ctx.cls.info.fullname}.agents_config"
        ctx.cls.info.names["agents_config"] = SymbolTableNode(MDEF, agents_config_var)
        tasks_config_var = Var("tasks_config", dict_type)
        tasks_config_var.info = ctx.cls.info
        tasks_config_var._fullname = f"{ctx.cls.info.fullname}.tasks_config"
        ctx.cls.info.names["tasks_config"] = SymbolTableNode(MDEF, tasks_config_var)


def plugin(_: str) -> type[Plugin]:
    """Entry point for mypy plugin.

    Args:
        _: Mypy version string.

    Returns:
        Plugin class.
    """
    return CrewAIPlugin
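Assuming the package installs this module as `crewai.mypy` (which the file path above suggests, though packaging details are not shown in this diff), the plugin would be enabled through mypy's standard plugin setting, for example `plugins = ["crewai.mypy"]` under `[tool.mypy]` in `pyproject.toml`, or the equivalent `plugins =` entry in `mypy.ini`.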
@@ -4,7 +4,7 @@ from __future__ import annotations

from collections.abc import Callable
from functools import wraps
from typing import TYPE_CHECKING, Concatenate, ParamSpec, TypeVar
from typing import TYPE_CHECKING, Any, Concatenate, ParamSpec, TypeVar, overload

from crewai.project.utils import memoize

@@ -33,6 +33,7 @@ P2 = ParamSpec("P2")
R = TypeVar("R")
R2 = TypeVar("R2")
T = TypeVar("T")
SelfT = TypeVar("SelfT")


def before_kickoff(meth: Callable[P, R]) -> BeforeKickoffMethod[P, R]:
@@ -155,9 +156,17 @@ def cache_handler(meth: Callable[P, R]) -> CacheHandlerMethod[P, R]:
    return CacheHandlerMethod(memoize(meth))


@overload
def crew(
    meth: Callable[Concatenate[SelfT, P], Crew],
) -> Callable[Concatenate[SelfT, P], Crew]: ...
@overload
def crew(
    meth: Callable[Concatenate[CrewInstance, P], Crew],
) -> Callable[Concatenate[CrewInstance, P], Crew]:
) -> Callable[Concatenate[CrewInstance, P], Crew]: ...
def crew(
    meth: Callable[..., Crew],
) -> Callable[..., Crew]:
    """Marks a method as the main crew execution point.

    Args:
@@ -168,7 +177,7 @@ def crew(
    """

    @wraps(meth)
    def wrapper(self: CrewInstance, *args: P.args, **kwargs: P.kwargs) -> Crew:
    def wrapper(self: CrewInstance, *args: Any, **kwargs: Any) -> Crew:
        """Wrapper that sets up crew before calling the decorated method.

        Args:
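For orientation, a minimal `@CrewBase` project sketch showing the method shape these `@crew` overloads have to accept (the agent and task details below are placeholders, not taken from this diff):

```python
from crewai import Agent, Crew, Task
from crewai.project import CrewBase, agent, crew, task


@CrewBase
class ResearchCrew:
    """Placeholder crew used only to illustrate the decorator shapes."""

    @agent
    def researcher(self) -> Agent:
        return Agent(role="Researcher", goal="Find facts", backstory="...")

    @task
    def research_task(self) -> Task:
        return Task(
            description="Research a topic",
            expected_output="A short summary",
            agent=self.researcher(),
        )

    @crew
    def crew(self) -> Crew:
        # The @crew-decorated method returns a Crew built from the
        # agents/tasks collected by the @CrewBase machinery.
        return Crew(agents=self.agents, tasks=self.tasks)
```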
@@ -6,7 +6,15 @@ from collections.abc import Callable
import inspect
import logging
from pathlib import Path
from typing import TYPE_CHECKING, Any, Literal, TypeGuard, TypeVar, TypedDict, cast
from typing import (
    TYPE_CHECKING,
    Any,
    Literal,
    TypeGuard,
    TypeVar,
    TypedDict,
    cast,
)

from dotenv import load_dotenv
import yaml
@@ -320,14 +328,17 @@ def get_mcp_tools(self: CrewInstance, *tool_names: str) -> list[BaseTool]:
    if not self.mcp_server_params:
        return []

    from crewai_tools import MCPServerAdapter  # type: ignore[import-untyped]
    from crewai_tools import MCPServerAdapter

    if self._mcp_server_adapter is None:
        self._mcp_server_adapter = MCPServerAdapter(
            self.mcp_server_params, connect_timeout=self.mcp_connect_timeout
        )

    return self._mcp_server_adapter.tools.filter_by_names(tool_names or None)
    return cast(
        list[BaseTool],
        self._mcp_server_adapter.tools.filter_by_names(tool_names or None),
    )


def _load_config(
@@ -630,3 +641,17 @@ class CrewBase(metaclass=_CrewBaseType):
    Note:
        Reference: https://stackoverflow.com/questions/11091609/setting-a-class-metaclass-using-a-decorator
    """

    # e
    if TYPE_CHECKING:

        def __init__(self, *args: Any, **kwargs: Any) -> None:
            """Type stub for decorator usage.

            Args:
                decorated_cls: Class to transform with CrewBaseMeta metaclass.

            Returns:
                New class with CrewBaseMeta metaclass applied.
            """
            ...
@@ -20,7 +20,7 @@ from typing_extensions import Self


if TYPE_CHECKING:
    from crewai import Agent, Task
    from crewai import Agent, Crew, Task
    from crewai.crews.crew_output import CrewOutput
    from crewai.tools import BaseTool

@@ -124,11 +124,12 @@ class CrewClass(Protocol):
    get_mcp_tools: Callable[..., list[BaseTool]]
    _load_config: Callable[..., dict[str, Any]]
    load_configurations: Callable[..., None]
    load_yaml: staticmethod
    load_yaml: Callable[..., dict[str, Any]]
    map_all_agent_variables: Callable[..., None]
    _map_agent_variables: Callable[..., None]
    map_all_task_variables: Callable[..., None]
    _map_task_variables: Callable[..., None]
    crew: Callable[..., Crew]


class DecoratedMethod(Generic[P, R]):
@@ -29,6 +29,7 @@ from opentelemetry.sdk.trace.export import (
|
||||
SpanExportResult,
|
||||
)
|
||||
from opentelemetry.trace import Span
|
||||
from typing_extensions import Self
|
||||
|
||||
from crewai.telemetry.constants import (
|
||||
CREWAI_TELEMETRY_BASE_URL,
|
||||
@@ -86,7 +87,7 @@ class Telemetry:
|
||||
_instance = None
|
||||
_lock = threading.Lock()
|
||||
|
||||
def __new__(cls):
|
||||
def __new__(cls) -> Self:
|
||||
if cls._instance is None:
|
||||
with cls._lock:
|
||||
if cls._instance is None:
|
||||
@@ -154,19 +155,24 @@ class Telemetry:
             self.ready = False
             self.trace_set = False

-    def _safe_telemetry_operation(self, operation: Callable[[], Any]) -> None:
+    def _safe_telemetry_operation(
+        self, operation: Callable[[], Span | None]
+    ) -> Span | None:
         """Execute telemetry operation safely, checking both readiness and environment variables.

         Args:
-            operation: A callable that performs telemetry operations. May return any value,
-                but the return value is not used by this method.
+            operation: A callable that performs telemetry operations.
+
+        Returns:
+            The return value from the operation, or None if telemetry is disabled or fails.
         """
         if not self._should_execute_telemetry():
-            return
+            return None
         try:
-            operation()
+            return operation()
         except Exception as e:
             logger.debug(f"Telemetry operation failed: {e}")
+            return None

     def crew_creation(self, crew: Crew, inputs: dict[str, Any] | None) -> None:
         """Records the creation of a crew.
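With the signature change above, the wrapper now forwards whatever the wrapped callable returns, so span-producing helpers can return through it instead of invoking the operation twice. A minimal sketch of that pattern with generic names (not the `Telemetry` implementation):

```python
import logging
from collections.abc import Callable
from typing import TypeVar

logger = logging.getLogger(__name__)
T = TypeVar("T")


def safe_operation(operation: Callable[[], T], *, enabled: bool = True) -> T | None:
    # Mirror of the new behaviour: propagate the operation's return value,
    # but never let a telemetry failure reach the caller.
    if not enabled:
        return None
    try:
        return operation()
    except Exception as exc:
        logger.debug("telemetry operation failed: %s", exc)
        return None


# A span-producing helper can now simply write:
#     return safe_operation(_operation)
```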
@@ -176,7 +182,7 @@ class Telemetry:
             inputs: Optional input parameters for the crew.
         """

-        def _operation():
+        def _operation() -> None:
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Crew Created")
             self._add_attribute(
@@ -386,7 +392,7 @@ class Telemetry:
             The span tracking the task execution, or None if telemetry is disabled.
         """

-        def _operation():
+        def _operation() -> Span:
             tracer = trace.get_tracer("crewai.telemetry")

             created_span = tracer.start_span("Task Created")
@@ -445,11 +451,7 @@ class Telemetry:

             return span

-        if not self._should_execute_telemetry():
-            return None
-
-        self._safe_telemetry_operation(_operation)
-        return _operation()
+        return self._safe_telemetry_operation(_operation)

     def task_ended(self, span: Span, task: Task, crew: Crew) -> None:
         """Records the completion of a task execution in a crew.
@@ -463,7 +465,7 @@ class Telemetry:
             If share_crew is enabled, this will also record the task output.
         """

-        def _operation():
+        def _operation() -> None:
             # Ensure fingerprint data is present on completion span
             if hasattr(task, "fingerprint") and task.fingerprint:
                 self._add_attribute(span, "task_fingerprint", task.fingerprint.uuid_str)
@@ -488,7 +490,7 @@ class Telemetry:
             attempts: Number of attempts made with this tool.
         """

-        def _operation():
+        def _operation() -> None:
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Tool Repeated Usage")
             self._add_attribute(
@@ -516,7 +518,7 @@ class Telemetry:
             agent: The agent using the tool.
         """

-        def _operation():
+        def _operation() -> None:
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Tool Usage")
             self._add_attribute(
@@ -546,7 +548,7 @@ class Telemetry:
             tool_name: Name of the tool that caused the error.
         """

-        def _operation():
+        def _operation() -> None:
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Tool Usage Error")
             self._add_attribute(
@@ -578,7 +580,7 @@ class Telemetry:
             model_name: Name of the model used.
         """

-        def _operation():
+        def _operation() -> None:
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Crew Individual Test Result")

@@ -613,7 +615,7 @@ class Telemetry:
             model_name: Name of the model used in testing.
         """

-        def _operation():
+        def _operation() -> None:
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Crew Test Execution")

@@ -640,7 +642,7 @@ class Telemetry:
     def deploy_signup_error_span(self) -> None:
         """Records when an error occurs during the deployment signup process."""

-        def _operation():
+        def _operation() -> None:
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Deploy Signup Error")
             close_span(span)
@@ -654,7 +656,7 @@ class Telemetry:
             uuid: Unique identifier for the deployment.
         """

-        def _operation():
+        def _operation() -> None:
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Start Deployment")
             if uuid:
@@ -666,7 +668,7 @@ class Telemetry:
     def create_crew_deployment_span(self) -> None:
         """Records the creation of a new crew deployment."""

-        def _operation():
+        def _operation() -> None:
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Create Crew Deployment")
             close_span(span)
@@ -683,7 +685,7 @@ class Telemetry:
             log_type: Type of logs being retrieved. Defaults to "deployment".
         """

-        def _operation():
+        def _operation() -> None:
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Get Crew Logs")
             self._add_attribute(span, "log_type", log_type)
@@ -700,7 +702,7 @@ class Telemetry:
             uuid: Unique identifier for the crew being removed.
         """

-        def _operation():
+        def _operation() -> None:
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Remove Crew")
             if uuid:
@@ -725,7 +727,7 @@ class Telemetry:
         """
         self.crew_creation(crew, inputs)

-        def _operation():
+        def _operation() -> Span:
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Crew Execution")
             self._add_attribute(
@@ -793,8 +795,7 @@ class Telemetry:
             return span

         if crew.share_crew:
-            self._safe_telemetry_operation(_operation)
-            return _operation()
+            return self._safe_telemetry_operation(_operation)
         return None

     def end_crew(self, crew: Any, final_string_output: str) -> None:
@@ -805,7 +806,7 @@ class Telemetry:
             final_string_output: The final output from the crew.
         """

-        def _operation():
+        def _operation() -> None:
            self._add_attribute(
                 crew._execution_span,
                 "crewai_version",
@@ -842,7 +843,7 @@ class Telemetry:
             value: The attribute value.
         """

-        def _operation():
+        def _operation() -> None:
             return span.set_attribute(key, value)

         self._safe_telemetry_operation(_operation)
@@ -854,7 +855,7 @@ class Telemetry:
             flow_name: Name of the flow being created.
         """

-        def _operation():
+        def _operation() -> None:
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Flow Creation")
             self._add_attribute(span, "flow_name", flow_name)
@@ -870,7 +871,7 @@ class Telemetry:
             node_names: List of node names in the flow.
         """

-        def _operation():
+        def _operation() -> None:
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Flow Plotting")
             self._add_attribute(span, "flow_name", flow_name)
@@ -887,7 +888,7 @@ class Telemetry:
             node_names: List of nodes being executed in the flow.
         """

-        def _operation():
+        def _operation() -> None:
             tracer = trace.get_tracer("crewai.telemetry")
             span = tracer.start_span("Flow Execution")
             self._add_attribute(span, "flow_name", flow_name)

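Every `_operation` above follows the same OpenTelemetry span lifecycle: get a tracer, start a span, attach attributes, and close it. A minimal standalone sketch using the public `opentelemetry-api` surface (the tracer name and helper below are illustrative, not CrewAI code):

```python
from opentelemetry import trace


def record_flow_creation(flow_name: str) -> None:
    tracer = trace.get_tracer("example.telemetry")
    span = tracer.start_span("Flow Creation")   # starts a span without making it current
    span.set_attribute("flow_name", flow_name)  # same shape as the _add_attribute calls
    span.end()                                  # roughly what a close_span helper wraps
```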
@@ -4,6 +4,8 @@ from typing import TYPE_CHECKING, Any, Generic, TypeGuard, TypeVar

 from pydantic import BaseModel

+from crewai.utilities.logger_utils import suppress_warnings
+

 if TYPE_CHECKING:
     from crewai.agent import Agent
@@ -11,9 +13,6 @@ if TYPE_CHECKING:
     from crewai.llms.base_llm import BaseLLM
     from crewai.utilities.types import LLMMessage

-from crewai.utilities.logger_utils import suppress_warnings
-
-

 T = TypeVar("T", bound=BaseModel)

@@ -62,9 +61,59 @@ class InternalInstructor(Generic[T]):

         with suppress_warnings():
             import instructor  # type: ignore[import-untyped]
-            from litellm import completion
-
-            self._client = instructor.from_litellm(completion)
+            if (
+                self.llm is not None
+                and hasattr(self.llm, "is_litellm")
+                and self.llm.is_litellm
+            ):
+                from litellm import completion
+
+                self._client = instructor.from_litellm(completion)
+            else:
+                self._client = self._create_instructor_client()
+
+    def _create_instructor_client(self) -> Any:
+        """Create instructor client using the modern from_provider pattern.
+
+        Returns:
+            Instructor client configured for the LLM provider
+
+        Raises:
+            ValueError: If the provider is not supported
+        """
+        import instructor
+
+        if isinstance(self.llm, str):
+            model_string = self.llm
+        elif self.llm is not None and hasattr(self.llm, "model"):
+            model_string = self.llm.model
+        else:
+            raise ValueError("LLM must be a string or have a model attribute")
+
+        if isinstance(self.llm, str):
+            provider = self._extract_provider()
+        elif self.llm is not None and hasattr(self.llm, "provider"):
+            provider = self.llm.provider
+        else:
+            provider = "openai"  # Default fallback
+
+        return instructor.from_provider(f"{provider}/{model_string}")
+
+    def _extract_provider(self) -> str:
+        """Extract provider from LLM model name.
+
+        Returns:
+            Provider name (e.g., 'openai', 'anthropic', etc.)
+        """
+        if self.llm is not None and hasattr(self.llm, "provider") and self.llm.provider:
+            return self.llm.provider
+
+        if isinstance(self.llm, str):
+            return self.llm.partition("/")[0] or "openai"
+        if self.llm is not None and hasattr(self.llm, "model"):
+            return self.llm.model.partition("/")[0] or "openai"
+        return "openai"

     def to_json(self) -> str:
         """Convert the structured output to JSON format.
@@ -96,6 +145,6 @@ class InternalInstructor(Generic[T]):
         else:
             model_name = self.llm.model

-        return self._client.chat.completions.create(
+        return self._client.chat.completions.create(  # type: ignore[no-any-return]
             model=model_name, response_model=self.model, messages=messages
         )
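The fallback logic in `_extract_provider` above leans on `str.partition`: the text before the first `/` is taken as the provider, and the `or "openai"` fallback only fires when that prefix is empty. A small standalone illustration of that behaviour:

```python
def provider_of(model: str) -> str:
    # partition("/") splits on the first slash only; index 0 is the text
    # before it (or the whole string when there is no slash at all).
    return model.partition("/")[0] or "openai"


assert provider_of("openai/gpt-4o") == "openai"
assert provider_of("anthropic/claude-3-5-sonnet-20241022") == "anthropic"
assert provider_of("/gpt-4o") == "openai"  # empty prefix falls back to openai
```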
@@ -29,8 +29,8 @@ def create_llm(
         try:
             return LLM(model=llm_value)
         except Exception as e:
-            logger.debug(f"Failed to instantiate LLM with model='{llm_value}': {e}")
-            return None
+            logger.error(f"Error instantiating LLM from string: {e}")
+            raise e

     if llm_value is None:
         return _llm_via_environment_or_fallback()
@@ -62,8 +62,8 @@ def create_llm(
         )

     except Exception as e:
-        logger.debug(f"Error instantiating LLM from unknown object type: {e}")
-        return None
+        logger.error(f"Error instantiating LLM from unknown object type: {e}")
+        raise e


 UNACCEPTED_ATTRIBUTES: Final[list[str]] = [
@@ -176,10 +176,10 @@ def _llm_via_environment_or_fallback() -> LLM | None:
     try:
         return LLM(**llm_params)
     except Exception as e:
-        logger.debug(
+        logger.error(
             f"Error instantiating LLM from environment/fallback: {type(e).__name__}: {e}"
         )
-        return None
+        raise e


 def _normalize_key_name(key_name: str) -> str:
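After this change, `create_llm` and its environment fallback log at error level and re-raise rather than silently returning `None`. A hedged sketch of how calling code might wrap that failure (the `build_llm` helper is illustrative, not part of CrewAI):

```python
from crewai import LLM


def build_llm(model_name: str) -> LLM:
    # Misconfigured models now raise instead of yielding None, so callers
    # should propagate or wrap the error rather than checking for None.
    try:
        return LLM(model=model_name)
    except Exception as exc:
        raise RuntimeError(f"Could not configure LLM {model_name!r}") from exc
```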
@@ -6,6 +6,7 @@ from unittest import mock
 from unittest.mock import MagicMock, patch

 from crewai.agents.crew_agent_executor import AgentFinish, CrewAgentExecutor
+from crewai.cli.constants import DEFAULT_LLM_MODEL
 from crewai.events.event_bus import crewai_event_bus
 from crewai.events.types.tool_usage_events import ToolUsageFinishedEvent
 from crewai.knowledge.knowledge import Knowledge
@@ -135,7 +136,7 @@ def test_agent_with_missing_response_template():

 def test_agent_default_values():
     agent = Agent(role="test role", goal="test goal", backstory="test backstory")
-    assert agent.llm.model == "gpt-4o-mini"
+    assert agent.llm.model == DEFAULT_LLM_MODEL
     assert agent.allow_delegation is False


@@ -186,17 +187,24 @@ def test_agent_execution_with_tools():
         expected_output="The result of the multiplication.",
     )
     received_events = []
-    event_received = threading.Event()
+    condition = threading.Condition()
+    event_handled = False

     @crewai_event_bus.on(ToolUsageFinishedEvent)
     def handle_tool_end(source, event):
+        nonlocal event_handled
         received_events.append(event)
-        event_received.set()
+        with condition:
+            event_handled = True
+            condition.notify()

     output = agent.execute_task(task)
     assert output == "The result of the multiplication is 12."

-    assert event_received.wait(timeout=5), "Timeout waiting for tool usage event"
+    with condition:
+        if not event_handled:
+            condition.wait(timeout=5)
+    assert event_handled, "Timeout waiting for tool usage event"
     assert len(received_events) == 1
     assert isinstance(received_events[0], ToolUsageFinishedEvent)
     assert received_events[0].tool_name == "multiplier"
@@ -218,7 +226,7 @@ def test_logging_tool_usage():
         verbose=True,
     )

-    assert agent.llm.model == "gpt-4o-mini"
+    assert agent.llm.model == DEFAULT_LLM_MODEL
     assert agent.tools_handler.last_used_tool is None
     task = Task(
         description="What is 3 times 4?",
@@ -288,12 +296,16 @@ def test_cache_hitting():
         'multiplier-{"first_number": 12, "second_number": 3}': 36,
     }
     received_events = []
-    event_received = threading.Event()
+    condition = threading.Condition()
+    event_handled = False

     @crewai_event_bus.on(ToolUsageFinishedEvent)
     def handle_tool_end(source, event):
+        nonlocal event_handled
         received_events.append(event)
-        event_received.set()
+        with condition:
+            event_handled = True
+            condition.notify()

     with (
         patch.object(CacheHandler, "read") as read,
@@ -309,7 +321,10 @@ def test_cache_hitting():
         read.assert_called_with(
             tool="multiplier", input='{"first_number": 2, "second_number": 6}'
         )
-        assert event_received.wait(timeout=5), "Timeout waiting for tool usage event"
+        with condition:
+            if not event_handled:
+                condition.wait(timeout=5)
+        assert event_handled, "Timeout waiting for tool usage event"
         assert len(received_events) == 1
         assert isinstance(received_events[0], ToolUsageFinishedEvent)
         assert received_events[0].from_cache
@@ -888,7 +903,8 @@ def test_agent_step_callback():

 @pytest.mark.vcr(filter_headers=["authorization"])
 def test_agent_function_calling_llm():
-    llm = "gpt-4o"
+    from crewai.llm import LLM
+    llm = LLM(model="gpt-4o", is_litellm=True)

     @tool
     def learn_about_ai() -> str:
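The test updates above replace `threading.Event` with a `threading.Condition` guarding a boolean flag. A minimal standalone sketch of that wait/notify pattern outside the test suite:

```python
import threading

condition = threading.Condition()
event_handled = False


def handler() -> None:
    global event_handled
    with condition:          # producer: set the flag and wake the waiter
        event_handled = True
        condition.notify()


threading.Thread(target=handler).start()

with condition:              # consumer: wait only if the flag is not set yet
    if not event_handled:
        condition.wait(timeout=5)
assert event_handled, "Timeout waiting for handler"
```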
@@ -987,4 +987,103 @@ interactions:
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"trace_id": "51f9439f-9497-420c-a908-4e33f01ffdfc", "execution_type":
|
||||
"crew", "user_identifier": null, "execution_context": {"crew_fingerprint": null,
|
||||
"crew_name": "crew", "flow_name": null, "crewai_version": "1.0.0", "privacy_level":
|
||||
"standard"}, "execution_metadata": {"expected_duration_estimate": 300, "agent_count":
|
||||
0, "task_count": 0, "flow_method_count": 0, "execution_started_at": "2025-10-21T18:21:13.954835+00:00"},
|
||||
"ephemeral_trace_id": "51f9439f-9497-420c-a908-4e33f01ffdfc"}'
|
||||
headers:
|
||||
Accept:
|
||||
- '*/*'
|
||||
Accept-Encoding:
|
||||
- gzip, deflate, zstd
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Length:
|
||||
- '488'
|
||||
Content-Type:
|
||||
- application/json
|
||||
User-Agent:
|
||||
- CrewAI-CLI/1.0.0
|
||||
X-Crewai-Version:
|
||||
- 1.0.0
|
||||
method: POST
|
||||
uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches
|
||||
response:
|
||||
body:
|
||||
string: '{"id":"432de345-a45a-4a02-9259-2ed30a72a9c3","ephemeral_trace_id":"51f9439f-9497-420c-a908-4e33f01ffdfc","execution_type":"crew","crew_name":"crew","flow_name":null,"status":"running","duration_ms":null,"crewai_version":"1.0.0","total_events":0,"execution_context":{"crew_fingerprint":null,"crew_name":"crew","flow_name":null,"crewai_version":"1.0.0","privacy_level":"standard"},"created_at":"2025-10-21T18:21:14.911Z","updated_at":"2025-10-21T18:21:14.911Z","access_code":"TRACE-da9003bc8b","user_identifier":null}'
|
||||
headers:
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Length:
|
||||
- '515'
|
||||
Content-Type:
|
||||
- application/json; charset=utf-8
|
||||
Date:
|
||||
- Tue, 21 Oct 2025 18:21:14 GMT
|
||||
cache-control:
|
||||
- no-store
|
||||
content-security-policy:
|
||||
- 'default-src ''self'' *.app.crewai.com app.crewai.com; script-src ''self''
|
||||
''unsafe-inline'' *.app.crewai.com app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts
|
||||
https://www.gstatic.com https://run.pstmn.io https://apis.google.com https://apis.google.com/js/api.js
|
||||
https://accounts.google.com https://accounts.google.com/gsi/client https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.min.css.map
|
||||
https://*.google.com https://docs.google.com https://slides.google.com https://js.hs-scripts.com
|
||||
https://js.sentry-cdn.com https://browser.sentry-cdn.com https://www.googletagmanager.com
|
||||
https://js-na1.hs-scripts.com https://js.hubspot.com http://js-na1.hs-scripts.com
|
||||
https://bat.bing.com https://cdn.amplitude.com https://cdn.segment.com https://d1d3n03t5zntha.cloudfront.net/
|
||||
https://descriptusercontent.com https://edge.fullstory.com https://googleads.g.doubleclick.net
|
||||
https://js.hs-analytics.net https://js.hs-banner.com https://js.hsadspixel.net
|
||||
https://js.hscollectedforms.net https://js.usemessages.com https://snap.licdn.com
|
||||
https://static.cloudflareinsights.com https://static.reo.dev https://www.google-analytics.com
|
||||
https://share.descript.com/; style-src ''self'' ''unsafe-inline'' *.app.crewai.com
|
||||
app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts; img-src ''self'' data:
|
||||
*.app.crewai.com app.crewai.com https://zeus.tools.crewai.com https://dashboard.tools.crewai.com
|
||||
https://cdn.jsdelivr.net https://forms.hsforms.com https://track.hubspot.com
|
||||
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://www.google.com
|
||||
https://www.google.com.br; font-src ''self'' data: *.app.crewai.com app.crewai.com;
|
||||
connect-src ''self'' *.app.crewai.com app.crewai.com https://zeus.tools.crewai.com
|
||||
https://connect.useparagon.com/ https://zeus.useparagon.com/* https://*.useparagon.com/*
|
||||
https://run.pstmn.io https://connect.tools.crewai.com/ https://*.sentry.io
|
||||
https://www.google-analytics.com https://edge.fullstory.com https://rs.fullstory.com
|
||||
https://api.hubspot.com https://forms.hscollectedforms.net https://api.hubapi.com
|
||||
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://google.com/pagead/form-data/16713662509
|
||||
https://google.com/ccm/form-data/16713662509 https://www.google.com/ccm/collect
|
||||
https://worker-actionkit.tools.crewai.com https://api.reo.dev; frame-src ''self''
|
||||
*.app.crewai.com app.crewai.com https://connect.useparagon.com/ https://zeus.tools.crewai.com
|
||||
https://zeus.useparagon.com/* https://connect.tools.crewai.com/ https://docs.google.com
|
||||
https://drive.google.com https://slides.google.com https://accounts.google.com
|
||||
https://*.google.com https://app.hubspot.com/ https://td.doubleclick.net https://www.googletagmanager.com/
|
||||
https://www.youtube.com https://share.descript.com'
|
||||
etag:
|
||||
- W/"f377829f71702a4e2096c862a7d4c75e"
|
||||
expires:
|
||||
- '0'
|
||||
permissions-policy:
|
||||
- camera=(), microphone=(self), geolocation=()
|
||||
pragma:
|
||||
- no-cache
|
||||
referrer-policy:
|
||||
- strict-origin-when-cross-origin
|
||||
strict-transport-security:
|
||||
- max-age=63072000; includeSubDomains
|
||||
vary:
|
||||
- Accept
|
||||
x-content-type-options:
|
||||
- nosniff
|
||||
x-frame-options:
|
||||
- SAMEORIGIN
|
||||
x-permitted-cross-domain-policies:
|
||||
- none
|
||||
x-request-id:
|
||||
- b91de61f-e9cf-4748-8346-a7e7a3e43558
|
||||
x-runtime:
|
||||
- '0.674115'
|
||||
x-xss-protection:
|
||||
- 1; mode=block
|
||||
status:
|
||||
code: 201
|
||||
message: Created
|
||||
version: 1
|
||||
|
||||
1934  lib/crewai/tests/cassettes/test_crew_testing_function.yaml  (normal file; diff suppressed because it is too large)
1899  lib/crewai/tests/cassettes/test_crew_train_success.yaml  (normal file; diff suppressed because it is too large)
@@ -1032,4 +1032,621 @@ interactions:
|
||||
- req_e1e95e8f654254ef093113417ba6ab00
|
||||
http_version: HTTP/1.1
|
||||
status_code: 200
|
||||
- request:
|
||||
body: '{"trace_id": "c5146cc4-dcff-45cc-a71a-b82a83b7de73", "execution_type":
|
||||
"crew", "user_identifier": null, "execution_context": {"crew_fingerprint": null,
|
||||
"crew_name": "crew", "flow_name": null, "crewai_version": "1.0.0", "privacy_level":
|
||||
"standard"}, "execution_metadata": {"expected_duration_estimate": 300, "agent_count":
|
||||
0, "task_count": 0, "flow_method_count": 0, "execution_started_at": "2025-10-21T17:02:41.380299+00:00"},
|
||||
"ephemeral_trace_id": "c5146cc4-dcff-45cc-a71a-b82a83b7de73"}'
|
||||
headers:
|
||||
Accept:
|
||||
- '*/*'
|
||||
Accept-Encoding:
|
||||
- gzip, deflate, zstd
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Length:
|
||||
- '488'
|
||||
Content-Type:
|
||||
- application/json
|
||||
User-Agent:
|
||||
- CrewAI-CLI/1.0.0
|
||||
X-Crewai-Version:
|
||||
- 1.0.0
|
||||
method: POST
|
||||
uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches
|
||||
response:
|
||||
body:
|
||||
string: '{"id":"ad4ac66f-7511-444c-aec3-a8c711ab4f54","ephemeral_trace_id":"c5146cc4-dcff-45cc-a71a-b82a83b7de73","execution_type":"crew","crew_name":"crew","flow_name":null,"status":"running","duration_ms":null,"crewai_version":"1.0.0","total_events":0,"execution_context":{"crew_fingerprint":null,"crew_name":"crew","flow_name":null,"crewai_version":"1.0.0","privacy_level":"standard"},"created_at":"2025-10-21T17:02:41.683Z","updated_at":"2025-10-21T17:02:41.683Z","access_code":"TRACE-41ea39cb70","user_identifier":null}'
|
||||
headers:
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Length:
|
||||
- '515'
|
||||
Content-Type:
|
||||
- application/json; charset=utf-8
|
||||
Date:
|
||||
- Tue, 21 Oct 2025 17:02:41 GMT
|
||||
cache-control:
|
||||
- no-store
|
||||
content-security-policy:
|
||||
- 'default-src ''self'' *.app.crewai.com app.crewai.com; script-src ''self''
|
||||
''unsafe-inline'' *.app.crewai.com app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts
|
||||
https://www.gstatic.com https://run.pstmn.io https://apis.google.com https://apis.google.com/js/api.js
|
||||
https://accounts.google.com https://accounts.google.com/gsi/client https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.min.css.map
|
||||
https://*.google.com https://docs.google.com https://slides.google.com https://js.hs-scripts.com
|
||||
https://js.sentry-cdn.com https://browser.sentry-cdn.com https://www.googletagmanager.com
|
||||
https://js-na1.hs-scripts.com https://js.hubspot.com http://js-na1.hs-scripts.com
|
||||
https://bat.bing.com https://cdn.amplitude.com https://cdn.segment.com https://d1d3n03t5zntha.cloudfront.net/
|
||||
https://descriptusercontent.com https://edge.fullstory.com https://googleads.g.doubleclick.net
|
||||
https://js.hs-analytics.net https://js.hs-banner.com https://js.hsadspixel.net
|
||||
https://js.hscollectedforms.net https://js.usemessages.com https://snap.licdn.com
|
||||
https://static.cloudflareinsights.com https://static.reo.dev https://www.google-analytics.com
|
||||
https://share.descript.com/; style-src ''self'' ''unsafe-inline'' *.app.crewai.com
|
||||
app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts; img-src ''self'' data:
|
||||
*.app.crewai.com app.crewai.com https://zeus.tools.crewai.com https://dashboard.tools.crewai.com
|
||||
https://cdn.jsdelivr.net https://forms.hsforms.com https://track.hubspot.com
|
||||
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://www.google.com
|
||||
https://www.google.com.br; font-src ''self'' data: *.app.crewai.com app.crewai.com;
|
||||
connect-src ''self'' *.app.crewai.com app.crewai.com https://zeus.tools.crewai.com
|
||||
https://connect.useparagon.com/ https://zeus.useparagon.com/* https://*.useparagon.com/*
|
||||
https://run.pstmn.io https://connect.tools.crewai.com/ https://*.sentry.io
|
||||
https://www.google-analytics.com https://edge.fullstory.com https://rs.fullstory.com
|
||||
https://api.hubspot.com https://forms.hscollectedforms.net https://api.hubapi.com
|
||||
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://google.com/pagead/form-data/16713662509
|
||||
https://google.com/ccm/form-data/16713662509 https://www.google.com/ccm/collect
|
||||
https://worker-actionkit.tools.crewai.com https://api.reo.dev; frame-src ''self''
|
||||
*.app.crewai.com app.crewai.com https://connect.useparagon.com/ https://zeus.tools.crewai.com
|
||||
https://zeus.useparagon.com/* https://connect.tools.crewai.com/ https://docs.google.com
|
||||
https://drive.google.com https://slides.google.com https://accounts.google.com
|
||||
https://*.google.com https://app.hubspot.com/ https://td.doubleclick.net https://www.googletagmanager.com/
|
||||
https://www.youtube.com https://share.descript.com'
|
||||
etag:
|
||||
- W/"b46640957517118b3255a25e8f00184d"
|
||||
expires:
|
||||
- '0'
|
||||
permissions-policy:
|
||||
- camera=(), microphone=(self), geolocation=()
|
||||
pragma:
|
||||
- no-cache
|
||||
referrer-policy:
|
||||
- strict-origin-when-cross-origin
|
||||
strict-transport-security:
|
||||
- max-age=63072000; includeSubDomains
|
||||
vary:
|
||||
- Accept
|
||||
x-content-type-options:
|
||||
- nosniff
|
||||
x-frame-options:
|
||||
- SAMEORIGIN
|
||||
x-permitted-cross-domain-policies:
|
||||
- none
|
||||
x-request-id:
|
||||
- 0590a968-276d-4342-85bb-0e488cf4f6bc
|
||||
x-runtime:
|
||||
- '0.073020'
|
||||
x-xss-protection:
|
||||
- 1; mode=block
|
||||
status:
|
||||
code: 201
|
||||
message: Created
|
||||
- request:
|
||||
body: '{"events": [{"event_id": "ad62c6f4-6367-452c-bd91-5d3153e2e20a", "timestamp":
|
||||
"2025-10-21T17:02:41.379061+00:00", "type": "crew_kickoff_started", "event_data":
|
||||
{"timestamp": "2025-10-21T17:02:41.379061+00:00", "type": "crew_kickoff_started",
|
||||
"source_fingerprint": null, "source_type": null, "fingerprint_metadata": null,
|
||||
"task_id": null, "task_name": null, "agent_id": null, "agent_role": null, "crew_name":
|
||||
"crew", "crew": null, "inputs": null}}, {"event_id": "19c1acad-fa5b-4dc8-933b-bfc9036ce2eb",
|
||||
"timestamp": "2025-10-21T17:02:41.381894+00:00", "type": "task_started", "event_data":
|
||||
{"task_description": "Research a topic to teach a kid aged 6 about math.", "expected_output":
|
||||
"A topic, explanation, angle, and examples.", "task_name": "Research a topic
|
||||
to teach a kid aged 6 about math.", "context": "", "agent_role": "Researcher",
|
||||
"task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13"}}, {"event_id": "a9c2bbc4-778e-4a5d-bda5-148f015e5fbe",
|
||||
"timestamp": "2025-10-21T17:02:41.382167+00:00", "type": "memory_query_started",
|
||||
"event_data": {"timestamp": "2025-10-21T17:02:41.382167+00:00", "type": "memory_query_started",
|
||||
"source_fingerprint": null, "source_type": "long_term_memory", "fingerprint_metadata":
|
||||
null, "task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13", "task_name": "Research
|
||||
a topic to teach a kid aged 6 about math.", "agent_id": "5b1ba567-c4c3-4327-9c2e-4215c53bffb6",
|
||||
"agent_role": "Researcher", "from_task": null, "from_agent": null, "query":
|
||||
"Research a topic to teach a kid aged 6 about math.", "limit": 2, "score_threshold":
|
||||
null}}, {"event_id": "d946752e-87f1-496f-b26b-a4e1aaf58d49", "timestamp": "2025-10-21T17:02:41.382357+00:00",
|
||||
"type": "memory_query_completed", "event_data": {"timestamp": "2025-10-21T17:02:41.382357+00:00",
|
||||
"type": "memory_query_completed", "source_fingerprint": null, "source_type":
|
||||
"long_term_memory", "fingerprint_metadata": null, "task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13",
|
||||
"task_name": "Research a topic to teach a kid aged 6 about math.", "agent_id":
|
||||
"5b1ba567-c4c3-4327-9c2e-4215c53bffb6", "agent_role": "Researcher", "from_task":
|
||||
null, "from_agent": null, "query": "Research a topic to teach a kid aged 6 about
|
||||
math.", "results": null, "limit": 2, "score_threshold": null, "query_time_ms":
|
||||
0.1468658447265625}}, {"event_id": "fec95c3e-6020-4ca5-9c8a-76d8fe2e69fc", "timestamp":
|
||||
"2025-10-21T17:02:41.382390+00:00", "type": "memory_query_started", "event_data":
|
||||
{"timestamp": "2025-10-21T17:02:41.382390+00:00", "type": "memory_query_started",
|
||||
"source_fingerprint": null, "source_type": "short_term_memory", "fingerprint_metadata":
|
||||
null, "task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13", "task_name": "Research
|
||||
a topic to teach a kid aged 6 about math.", "agent_id": "5b1ba567-c4c3-4327-9c2e-4215c53bffb6",
|
||||
"agent_role": "Researcher", "from_task": null, "from_agent": null, "query":
|
||||
"Research a topic to teach a kid aged 6 about math.", "limit": 5, "score_threshold":
|
||||
0.6}}, {"event_id": "b4d9b241-3336-4e5b-902b-46ef4aff3a95", "timestamp": "2025-10-21T17:02:41.532761+00:00",
|
||||
"type": "memory_query_completed", "event_data": {"timestamp": "2025-10-21T17:02:41.532761+00:00",
|
||||
"type": "memory_query_completed", "source_fingerprint": null, "source_type":
|
||||
"short_term_memory", "fingerprint_metadata": null, "task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13",
|
||||
"task_name": "Research a topic to teach a kid aged 6 about math.", "agent_id":
|
||||
"5b1ba567-c4c3-4327-9c2e-4215c53bffb6", "agent_role": "Researcher", "from_task":
|
||||
null, "from_agent": null, "query": "Research a topic to teach a kid aged 6 about
|
||||
math.", "results": [], "limit": 5, "score_threshold": 0.6, "query_time_ms":
|
||||
150.346040725708}}, {"event_id": "ede0e589-9609-4b27-ac6d-f02ab5d118c0", "timestamp":
|
||||
"2025-10-21T17:02:41.532803+00:00", "type": "memory_query_started", "event_data":
|
||||
{"timestamp": "2025-10-21T17:02:41.532803+00:00", "type": "memory_query_started",
|
||||
"source_fingerprint": null, "source_type": "entity_memory", "fingerprint_metadata":
|
||||
null, "task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13", "task_name": "Research
|
||||
a topic to teach a kid aged 6 about math.", "agent_id": "5b1ba567-c4c3-4327-9c2e-4215c53bffb6",
|
||||
"agent_role": "Researcher", "from_task": null, "from_agent": null, "query":
|
||||
"Research a topic to teach a kid aged 6 about math.", "limit": 5, "score_threshold":
|
||||
0.6}}, {"event_id": "feca316d-4c1a-4502-bb73-e190b0ed3fee", "timestamp": "2025-10-21T17:02:41.539391+00:00",
|
||||
"type": "memory_query_completed", "event_data": {"timestamp": "2025-10-21T17:02:41.539391+00:00",
|
||||
"type": "memory_query_completed", "source_fingerprint": null, "source_type":
|
||||
"entity_memory", "fingerprint_metadata": null, "task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13",
|
||||
"task_name": "Research a topic to teach a kid aged 6 about math.", "agent_id":
|
||||
"5b1ba567-c4c3-4327-9c2e-4215c53bffb6", "agent_role": "Researcher", "from_task":
|
||||
null, "from_agent": null, "query": "Research a topic to teach a kid aged 6 about
|
||||
math.", "results": [], "limit": 5, "score_threshold": 0.6, "query_time_ms":
|
||||
6.557941436767578}}, {"event_id": "c1d5f664-11bd-4d53-a250-bf998f28feb1", "timestamp":
|
||||
"2025-10-21T17:02:41.539868+00:00", "type": "agent_execution_started", "event_data":
|
||||
{"agent_role": "Researcher", "agent_goal": "You research about math.", "agent_backstory":
|
||||
"You''re an expert in research and you love to learn new things."}}, {"event_id":
|
||||
"72160300-cf34-4697-92c5-e19f9bb7aced", "timestamp": "2025-10-21T17:02:41.540118+00:00",
|
||||
"type": "llm_call_started", "event_data": {"timestamp": "2025-10-21T17:02:41.540118+00:00",
|
||||
"type": "llm_call_started", "source_fingerprint": null, "source_type": null,
|
||||
"fingerprint_metadata": null, "task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13",
|
||||
"task_name": "Research a topic to teach a kid aged 6 about math.", "agent_id":
|
||||
"5b1ba567-c4c3-4327-9c2e-4215c53bffb6", "agent_role": "Researcher", "from_task":
|
||||
null, "from_agent": null, "model": "gpt-4o-mini", "messages": [{"role": "system",
|
||||
"content": "You are Researcher. You''re an expert in research and you love to
|
||||
learn new things.\nYour personal goal is: You research about math.\nTo give
|
||||
my best complete final answer to the task respond using the exact following
|
||||
format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
|
||||
answer must be the great and the most complete as possible, it must be outcome
|
||||
described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
|
||||
"content": "\nCurrent Task: Research a topic to teach a kid aged 6 about math.\n\nThis
|
||||
is the expected criteria for your final answer: A topic, explanation, angle,
|
||||
and examples.\nyou MUST return the actual complete content as the final answer,
|
||||
not a summary.\n\nYou MUST follow these instructions: \n - Incorporate specific
|
||||
examples and case studies in initial outputs for clearer illustration of concepts.\n
|
||||
- Engage more with current events or trends to enhance relevance, especially
|
||||
in fields like remote work and decision-making.\n - Invite perspectives from
|
||||
experts and stakeholders to add depth to discussions on ethical implications
|
||||
and collaboration in creativity.\n - Use more precise language when discussing
|
||||
topics, ensuring clarity and accessibility for readers.\n - Encourage exploration
|
||||
of user experiences and testimonials to provide more relatable content, especially
|
||||
in education and mental health contexts.\n\nBegin! This is VERY important to
|
||||
you, use the tools available and give your best Final Answer, your job depends
|
||||
on it!\n\nThought:"}], "tools": null, "callbacks": ["<crewai.utilities.token_counter_callback.TokenCalcHandler
|
||||
object at 0x12e3e6d80>"], "available_functions": null}}, {"event_id": "83d91da9-2d3f-4638-9fdc-262371273149",
|
||||
"timestamp": "2025-10-21T17:02:41.544497+00:00", "type": "llm_call_completed",
|
||||
"event_data": {"timestamp": "2025-10-21T17:02:41.544497+00:00", "type": "llm_call_completed",
|
||||
"source_fingerprint": null, "source_type": null, "fingerprint_metadata": null,
|
||||
"task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13", "task_name": "Research a
|
||||
topic to teach a kid aged 6 about math.", "agent_id": "5b1ba567-c4c3-4327-9c2e-4215c53bffb6",
|
||||
"agent_role": "Researcher", "from_task": null, "from_agent": null, "messages":
|
||||
[{"role": "system", "content": "You are Researcher. You''re an expert in research
|
||||
and you love to learn new things.\nYour personal goal is: You research about
|
||||
math.\nTo give my best complete final answer to the task respond using the exact
|
||||
following format:\n\nThought: I now can give a great answer\nFinal Answer: Your
|
||||
final answer must be the great and the most complete as possible, it must be
|
||||
outcome described.\n\nI MUST use these formats, my job depends on it!"}, {"role":
|
||||
"user", "content": "\nCurrent Task: Research a topic to teach a kid aged 6 about
|
||||
math.\n\nThis is the expected criteria for your final answer: A topic, explanation,
|
||||
angle, and examples.\nyou MUST return the actual complete content as the final
|
||||
answer, not a summary.\n\nYou MUST follow these instructions: \n - Incorporate
|
||||
specific examples and case studies in initial outputs for clearer illustration
|
||||
of concepts.\n - Engage more with current events or trends to enhance relevance,
|
||||
especially in fields like remote work and decision-making.\n - Invite perspectives
|
||||
from experts and stakeholders to add depth to discussions on ethical implications
|
||||
and collaboration in creativity.\n - Use more precise language when discussing
|
||||
topics, ensuring clarity and accessibility for readers.\n - Encourage exploration
|
||||
of user experiences and testimonials to provide more relatable content, especially
|
||||
in education and mental health contexts.\n\nBegin! This is VERY important to
|
||||
you, use the tools available and give your best Final Answer, your job depends
|
||||
on it!\n\nThought:"}], "response": "I now can give a great answer \nFinal Answer:
|
||||
\n\n**Topic: Introduction to Basic Addition**\n\n**Explanation:**\nBasic addition
|
||||
is about combining two or more groups of things together to find out how many
|
||||
there are in total. It''s one of the most fundamental concepts in math and is
|
||||
a building block for all other math skills. Teaching addition to a 6-year-old
|
||||
involves using simple numbers and relatable examples that help them visualize
|
||||
and understand the concept of adding together.\n\n**Angle:**\nTo make the concept
|
||||
of addition fun and engaging, we can use everyday objects that a child is familiar
|
||||
with, such as toys, fruits, or drawing items. Incorporating visuals and interactive
|
||||
elements will keep their attention and help reinforce the idea of combining
|
||||
numbers.\n\n**Examples:**\n\n1. **Using Objects:**\n - **Scenario:** Let\u2019s
|
||||
say you have 2 apples and your friend gives you 3 more apples.\n - **Visual**:
|
||||
Arrange the apples in front of the child.\n - **Question:** \"How many apples
|
||||
do you have now?\"\n - **Calculation:** 2 apples (your apples) + 3 apples
|
||||
(friend''s apples) = 5 apples. \n - **Conclusion:** \"You now have 5 apples!\"\n\n2.
|
||||
**Drawing Pictures:**\n - **Scenario:** Draw 4 stars on one side of the paper
|
||||
and 2 stars on the other side.\n - **Activity:** Ask the child to count the
|
||||
stars in the first group and then the second group.\n - **Question:** \"If
|
||||
we put them together, how many stars do we have?\"\n - **Calculation:** 4
|
||||
stars + 2 stars = 6 stars. \n - **Conclusion:** \"You drew 6 stars all together!\"\n\n3.
|
||||
**Story Problems:**\n - **Scenario:** \"You have 5 toy cars, and you buy 3
|
||||
more from the store. How many cars do you have?\"\n - **Interaction:** Create
|
||||
a fun story around the toy cars (perhaps the cars are going on an adventure).\n -
|
||||
**Calculation:** 5 toy cars + 3 toy cars = 8 toy cars. \n - **Conclusion:**
|
||||
\"You now have a total of 8 toy cars for your adventure!\"\n\n4. **Games:**\n -
|
||||
**Activity:** Play a simple game where you roll a pair of dice. Each die shows
|
||||
a number.\n - **Task:** Ask the child to add the numbers on the dice together.\n -
|
||||
**Example:** If one die shows 2 and the other shows 4, the child will say \u201c2
|
||||
+ 4 = 6!\u201d\n - **Conclusion:** \u201cWhoever gets the highest number wins
|
||||
a point!\u201d\n\nIn summary, when teaching a 6-year-old about basic addition,
|
||||
it is essential to use simple numbers, real-life examples, visual aids, and
|
||||
engaging activities. This ensures the child can grasp the concept while having
|
||||
a fun learning experience. Making math relatable to their world helps build
|
||||
a strong foundation for their future learning!", "call_type": "<LLMCallType.LLM_CALL:
|
||||
''llm_call''>", "model": "gpt-4o-mini"}}, {"event_id": "7d008192-dc37-4798-99ca-d41b8674d085",
|
||||
"timestamp": "2025-10-21T17:02:41.544571+00:00", "type": "memory_save_started",
|
||||
"event_data": {"timestamp": "2025-10-21T17:02:41.544571+00:00", "type": "memory_save_started",
|
||||
"source_fingerprint": null, "source_type": "short_term_memory", "fingerprint_metadata":
|
||||
null, "task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13", "task_name": "Research
|
||||
a topic to teach a kid aged 6 about math.", "agent_id": "5b1ba567-c4c3-4327-9c2e-4215c53bffb6",
|
||||
"agent_role": "Researcher", "from_task": null, "from_agent": null, "value":
|
||||
"I now can give a great answer \nFinal Answer: \n\n**Topic: Introduction to
|
||||
Basic Addition**\n\n**Explanation:**\nBasic addition is about combining two
|
||||
or more groups of things together to find out how many there are in total. It''s
|
||||
one of the most fundamental concepts in math and is a building block for all
|
||||
other math skills. Teaching addition to a 6-year-old involves using simple numbers
|
||||
and relatable examples that help them visualize and understand the concept of
|
||||
adding together.\n\n**Angle:**\nTo make the concept of addition fun and engaging,
|
||||
we can use everyday objects that a child is familiar with, such as toys, fruits,
|
||||
or drawing items. Incorporating visuals and interactive elements will keep their
|
||||
attention and help reinforce the idea of combining numbers.\n\n**Examples:**\n\n1.
|
||||
**Using Objects:**\n - **Scenario:** Let\u2019s say you have 2 apples and
|
||||
your friend gives you 3 more apples.\n - **Visual**: Arrange the apples in
|
||||
front of the child.\n - **Question:** \"How many apples do you have now?\"\n -
|
||||
**Calculation:** 2 apples (your apples) + 3 apples (friend''s apples) = 5 apples. \n -
|
||||
**Conclusion:** \"You now have 5 apples!\"\n\n2. **Drawing Pictures:**\n -
|
||||
**Scenario:** Draw 4 stars on one side of the paper and 2 stars on the other
|
||||
side.\n - **Activity:** Ask the child to count the stars in the first group
|
||||
and then the second group.\n - **Question:** \"If we put them together, how
|
||||
many stars do we have?\"\n - **Calculation:** 4 stars + 2 stars = 6 stars. \n -
|
||||
**Conclusion:** \"You drew 6 stars all together!\"\n\n3. **Story Problems:**\n -
|
||||
**Scenario:** \"You have 5 toy cars, and you buy 3 more from the store. How
|
||||
many cars do you have?\"\n - **Interaction:** Create a fun story around the
|
||||
toy cars (perhaps the cars are going on an adventure).\n - **Calculation:**
|
||||
5 toy cars + 3 toy cars = 8 toy cars. \n - **Conclusion:** \"You now have
|
||||
a total of 8 toy cars for your adventure!\"\n\n4. **Games:**\n - **Activity:**
|
||||
Play a simple game where you roll a pair of dice. Each die shows a number.\n -
|
||||
**Task:** Ask the child to add the numbers on the dice together.\n - **Example:**
|
||||
If one die shows 2 and the other shows 4, the child will say \u201c2 + 4 = 6!\u201d\n -
|
||||
**Conclusion:** \u201cWhoever gets the highest number wins a point!\u201d\n\nIn
|
||||
summary, when teaching a 6-year-old about basic addition, it is essential to
|
||||
use simple numbers, real-life examples, visual aids, and engaging activities.
|
||||
This ensures the child can grasp the concept while having a fun learning experience.
|
||||
Making math relatable to their world helps build a strong foundation for their
|
||||
future learning!", "metadata": {"observation": "Research a topic to teach a
|
||||
kid aged 6 about math."}}}, {"event_id": "6ec950dc-be30-43b0-a1e6-5ee4de464689",
|
||||
"timestamp": "2025-10-21T17:02:41.556337+00:00", "type": "memory_save_completed",
|
||||
"event_data": {"timestamp": "2025-10-21T17:02:41.556337+00:00", "type": "memory_save_completed",
|
||||
"source_fingerprint": null, "source_type": "short_term_memory", "fingerprint_metadata":
|
||||
null, "task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13", "task_name": "Research
|
||||
a topic to teach a kid aged 6 about math.", "agent_id": "5b1ba567-c4c3-4327-9c2e-4215c53bffb6",
|
||||
"agent_role": "Researcher", "from_task": null, "from_agent": null, "value":
|
||||
"I now can give a great answer \nFinal Answer: \n\n**Topic: Introduction to
|
||||
Basic Addition**\n\n**Explanation:**\nBasic addition is about combining two
|
||||
or more groups of things together to find out how many there are in total. It''s
|
||||
one of the most fundamental concepts in math and is a building block for all
|
||||
other math skills. Teaching addition to a 6-year-old involves using simple numbers
|
||||
and relatable examples that help them visualize and understand the concept of
|
||||
adding together.\n\n**Angle:**\nTo make the concept of addition fun and engaging,
|
||||
we can use everyday objects that a child is familiar with, such as toys, fruits,
|
||||
or drawing items. Incorporating visuals and interactive elements will keep their
|
||||
attention and help reinforce the idea of combining numbers.\n\n**Examples:**\n\n1.
|
||||
**Using Objects:**\n - **Scenario:** Let\u2019s say you have 2 apples and
|
||||
your friend gives you 3 more apples.\n - **Visual**: Arrange the apples in
|
||||
front of the child.\n - **Question:** \"How many apples do you have now?\"\n -
|
||||
**Calculation:** 2 apples (your apples) + 3 apples (friend''s apples) = 5 apples. \n -
|
||||
**Conclusion:** \"You now have 5 apples!\"\n\n2. **Drawing Pictures:**\n -
|
||||
**Scenario:** Draw 4 stars on one side of the paper and 2 stars on the other
|
||||
side.\n - **Activity:** Ask the child to count the stars in the first group
|
||||
and then the second group.\n - **Question:** \"If we put them together, how
|
||||
many stars do we have?\"\n - **Calculation:** 4 stars + 2 stars = 6 stars. \n -
|
||||
**Conclusion:** \"You drew 6 stars all together!\"\n\n3. **Story Problems:**\n -
|
||||
**Scenario:** \"You have 5 toy cars, and you buy 3 more from the store. How
|
||||
many cars do you have?\"\n - **Interaction:** Create a fun story around the
|
||||
toy cars (perhaps the cars are going on an adventure).\n - **Calculation:**
|
||||
5 toy cars + 3 toy cars = 8 toy cars. \n - **Conclusion:** \"You now have
|
||||
a total of 8 toy cars for your adventure!\"\n\n4. **Games:**\n - **Activity:**
|
||||
Play a simple game where you roll a pair of dice. Each die shows a number.\n -
|
||||
**Task:** Ask the child to add the numbers on the dice together.\n - **Example:**
|
||||
If one die shows 2 and the other shows 4, the child will say \u201c2 + 4 = 6!\u201d\n -
|
||||
**Conclusion:** \u201cWhoever gets the highest number wins a point!\u201d\n\nIn
|
||||
summary, when teaching a 6-year-old about basic addition, it is essential to
|
||||
use simple numbers, real-life examples, visual aids, and engaging activities.
|
||||
This ensures the child can grasp the concept while having a fun learning experience.
|
||||
Making math relatable to their world helps build a strong foundation for their
|
||||
future learning!", "metadata": {"observation": "Research a topic to teach a
|
||||
kid aged 6 about math."}, "save_time_ms": 11.606931686401367}}, {"event_id":
|
||||
"be3fce2b-9a2a-4222-b6b9-a03bae010470", "timestamp": "2025-10-21T17:02:41.688488+00:00",
|
||||
"type": "memory_save_started", "event_data": {"timestamp": "2025-10-21T17:02:41.688488+00:00",
|
||||
"type": "memory_save_started", "source_fingerprint": null, "source_type": "entity_memory",
|
||||
"fingerprint_metadata": null, "task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13",
|
||||
"task_name": "Research a topic to teach a kid aged 6 about math.", "agent_id":
|
||||
"5b1ba567-c4c3-4327-9c2e-4215c53bffb6", "agent_role": "Researcher", "from_task":
|
||||
null, "from_agent": null, "value": null, "metadata": {"entity_count": 3}}},
|
||||
{"event_id": "06b57cdf-ddd2-485f-a64e-660cd6fd8318", "timestamp": "2025-10-21T17:02:41.723732+00:00",
|
||||
"type": "memory_save_completed", "event_data": {"timestamp": "2025-10-21T17:02:41.723732+00:00",
|
||||
"type": "memory_save_completed", "source_fingerprint": null, "source_type":
|
||||
"entity_memory", "fingerprint_metadata": null, "task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13",
|
||||
"task_name": "Research a topic to teach a kid aged 6 about math.", "agent_id":
|
||||
"5b1ba567-c4c3-4327-9c2e-4215c53bffb6", "agent_role": "Researcher", "from_task":
|
||||
null, "from_agent": null, "value": "Saved 3 entities", "metadata": {"entity_count":
|
||||
3, "errors": []}, "save_time_ms": 35.18795967102051}}, {"event_id": "4598e3fd-0b62-4c2f-ab7d-f709646223b3",
|
||||
"timestamp": "2025-10-21T17:02:41.723816+00:00", "type": "agent_execution_completed",
|
||||
"event_data": {"agent_role": "Researcher", "agent_goal": "You research about
|
||||
math.", "agent_backstory": "You''re an expert in research and you love to learn
|
||||
new things."}}, {"event_id": "92695721-2c95-478e-9cce-cd058fb93df3", "timestamp":
|
||||
"2025-10-21T17:02:41.723915+00:00", "type": "task_completed", "event_data":
|
||||
{"task_description": "Research a topic to teach a kid aged 6 about math.", "task_name":
|
||||
"Research a topic to teach a kid aged 6 about math.", "task_id": "3283d0f7-7159-47a9-abf0-a1bfe4dafb13",
|
||||
"output_raw": "**Topic: Introduction to Basic Addition**\n\n**Explanation:**\nBasic
|
||||
addition is about combining two or more groups of things together to find out
|
||||
how many there are in total. It''s one of the most fundamental concepts in math
|
||||
and is a building block for all other math skills. Teaching addition to a 6-year-old
|
||||
involves using simple numbers and relatable examples that help them visualize
|
||||
and understand the concept of adding together.\n\n**Angle:**\nTo make the concept
|
||||
of addition fun and engaging, we can use everyday objects that a child is familiar
|
||||
with, such as toys, fruits, or drawing items. Incorporating visuals and interactive
|
||||
elements will keep their attention and help reinforce the idea of combining
|
||||
numbers.\n\n**Examples:**\n\n1. **Using Objects:**\n - **Scenario:** Let\u2019s
|
||||
say you have 2 apples and your friend gives you 3 more apples.\n - **Visual**:
|
||||
Arrange the apples in front of the child.\n - **Question:** \"How many apples
|
||||
do you have now?\"\n - **Calculation:** 2 apples (your apples) + 3 apples
|
||||
(friend''s apples) = 5 apples. \n - **Conclusion:** \"You now have 5 apples!\"\n\n2.
|
||||
**Drawing Pictures:**\n - **Scenario:** Draw 4 stars on one side of the paper
|
||||
and 2 stars on the other side.\n - **Activity:** Ask the child to count the
|
||||
stars in the first group and then the second group.\n - **Question:** \"If
|
||||
we put them together, how many stars do we have?\"\n - **Calculation:** 4
|
||||
stars + 2 stars = 6 stars. \n - **Conclusion:** \"You drew 6 stars all together!\"\n\n3.
|
||||
**Story Problems:**\n - **Scenario:** \"You have 5 toy cars, and you buy 3
|
||||
more from the store. How many cars do you have?\"\n - **Interaction:** Create
|
||||
a fun story around the toy cars (perhaps the cars are going on an adventure).\n -
|
||||
**Calculation:** 5 toy cars + 3 toy cars = 8 toy cars. \n - **Conclusion:**
|
||||
\"You now have a total of 8 toy cars for your adventure!\"\n\n4. **Games:**\n -
|
||||
**Activity:** Play a simple game where you roll a pair of dice. Each die shows
|
||||
a number.\n - **Task:** Ask the child to add the numbers on the dice together.\n -
|
||||
**Example:** If one die shows 2 and the other shows 4, the child will say \u201c2
|
||||
+ 4 = 6!\u201d\n - **Conclusion:** \u201cWhoever gets the highest number wins
|
||||
a point!\u201d\n\nIn summary, when teaching a 6-year-old about basic addition,
|
||||
it is essential to use simple numbers, real-life examples, visual aids, and
|
||||
engaging activities. This ensures the child can grasp the concept while having
|
||||
a fun learning experience. Making math relatable to their world helps build
|
||||
a strong foundation for their future learning!", "output_format": "OutputFormat.RAW",
|
||||
"agent_role": "Researcher"}}, {"event_id": "1a254c34-e055-46d2-99cb-7dfdfdcefc74",
|
||||
"timestamp": "2025-10-21T17:02:41.725000+00:00", "type": "crew_kickoff_completed",
|
||||
"event_data": {"timestamp": "2025-10-21T17:02:41.725000+00:00", "type": "crew_kickoff_completed",
|
||||
"source_fingerprint": null, "source_type": null, "fingerprint_metadata": null,
|
||||
"task_id": null, "task_name": null, "agent_id": null, "agent_role": null, "crew_name":
|
||||
"crew", "crew": null, "output": {"description": "Research a topic to teach a
|
||||
kid aged 6 about math.", "name": "Research a topic to teach a kid aged 6 about
|
||||
math.", "expected_output": "A topic, explanation, angle, and examples.", "summary":
|
||||
"Research a topic to teach a kid aged 6 about...", "raw": "**Topic: Introduction
|
||||
to Basic Addition**\n\n**Explanation:**\nBasic addition is about combining two
|
||||
or more groups of things together to find out how many there are in total. It''s
|
||||
one of the most fundamental concepts in math and is a building block for all
|
||||
other math skills. Teaching addition to a 6-year-old involves using simple numbers
|
||||
and relatable examples that help them visualize and understand the concept of
|
||||
adding together.\n\n**Angle:**\nTo make the concept of addition fun and engaging,
|
||||
we can use everyday objects that a child is familiar with, such as toys, fruits,
|
||||
or drawing items. Incorporating visuals and interactive elements will keep their
|
||||
attention and help reinforce the idea of combining numbers.\n\n**Examples:**\n\n1.
|
||||
**Using Objects:**\n - **Scenario:** Let\u2019s say you have 2 apples and
|
||||
your friend gives you 3 more apples.\n - **Visual**: Arrange the apples in
|
||||
front of the child.\n - **Question:** \"How many apples do you have now?\"\n -
|
||||
**Calculation:** 2 apples (your apples) + 3 apples (friend''s apples) = 5 apples. \n -
|
||||
**Conclusion:** \"You now have 5 apples!\"\n\n2. **Drawing Pictures:**\n -
|
||||
**Scenario:** Draw 4 stars on one side of the paper and 2 stars on the other
|
||||
side.\n - **Activity:** Ask the child to count the stars in the first group
|
||||
and then the second group.\n - **Question:** \"If we put them together, how
|
||||
many stars do we have?\"\n - **Calculation:** 4 stars + 2 stars = 6 stars. \n -
|
||||
**Conclusion:** \"You drew 6 stars all together!\"\n\n3. **Story Problems:**\n -
|
||||
**Scenario:** \"You have 5 toy cars, and you buy 3 more from the store. How
|
||||
many cars do you have?\"\n - **Interaction:** Create a fun story around the
|
||||
toy cars (perhaps the cars are going on an adventure).\n - **Calculation:**
|
||||
5 toy cars + 3 toy cars = 8 toy cars. \n - **Conclusion:** \"You now have
|
||||
a total of 8 toy cars for your adventure!\"\n\n4. **Games:**\n - **Activity:**
|
||||
Play a simple game where you roll a pair of dice. Each die shows a number.\n -
|
||||
**Task:** Ask the child to add the numbers on the dice together.\n - **Example:**
|
||||
If one die shows 2 and the other shows 4, the child will say \u201c2 + 4 = 6!\u201d\n -
|
||||
**Conclusion:** \u201cWhoever gets the highest number wins a point!\u201d\n\nIn
|
||||
summary, when teaching a 6-year-old about basic addition, it is essential to
|
||||
use simple numbers, real-life examples, visual aids, and engaging activities.
|
||||
This ensures the child can grasp the concept while having a fun learning experience.
|
||||
Making math relatable to their world helps build a strong foundation for their
|
||||
future learning!", "pydantic": null, "json_dict": null, "agent": "Researcher",
|
||||
"output_format": "raw"}, "total_tokens": 796}}], "batch_metadata": {"events_count":
|
||||
18, "batch_sequence": 1, "is_final_batch": false}}'
|
||||
headers:
|
||||
Accept:
|
||||
- '*/*'
|
||||
Accept-Encoding:
|
||||
- gzip, deflate, zstd
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Length:
|
||||
- '27251'
|
||||
Content-Type:
|
||||
- application/json
|
||||
User-Agent:
|
||||
- CrewAI-CLI/1.0.0
|
||||
X-Crewai-Version:
|
||||
- 1.0.0
|
||||
method: POST
|
||||
uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches/c5146cc4-dcff-45cc-a71a-b82a83b7de73/events
|
||||
response:
|
||||
body:
|
||||
string: '{"events_created":18,"ephemeral_trace_batch_id":"ad4ac66f-7511-444c-aec3-a8c711ab4f54"}'
|
||||
headers:
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Length:
|
||||
- '87'
|
||||
Content-Type:
|
||||
- application/json; charset=utf-8
|
||||
Date:
|
||||
- Tue, 21 Oct 2025 17:02:42 GMT
|
||||
    status:
      code: 200
      message: OK
- request:
    body: '{"status": "completed", "duration_ms": 863, "final_event_count": 18}'
    headers:
      Accept: ['*/*']
      Accept-Encoding: ['gzip, deflate, zstd']
      Connection: [keep-alive]
      Content-Length: ['68']
      Content-Type: [application/json]
      User-Agent: [CrewAI-CLI/1.0.0]
      X-Crewai-Version: ['1.0.0']
    method: PATCH
    uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches/c5146cc4-dcff-45cc-a71a-b82a83b7de73/finalize
  response:
    body:
      string: '{"id":"ad4ac66f-7511-444c-aec3-a8c711ab4f54","ephemeral_trace_id":"c5146cc4-dcff-45cc-a71a-b82a83b7de73","execution_type":"crew","crew_name":"crew","flow_name":null,"status":"completed","duration_ms":863,"crewai_version":"1.0.0","total_events":18,"execution_context":{"crew_name":"crew","flow_name":null,"privacy_level":"standard","crewai_version":"1.0.0","crew_fingerprint":null},"created_at":"2025-10-21T17:02:41.683Z","updated_at":"2025-10-21T17:02:42.862Z","access_code":"TRACE-41ea39cb70","user_identifier":null}'
    headers:
      Connection: [keep-alive]
      Content-Length: ['517']
      Content-Type: ['application/json; charset=utf-8']
      Date: ['Tue, 21 Oct 2025 17:02:42 GMT']
    status:
      code: 200
      message: OK
version: 1
101 lib/crewai/tests/cassettes/test_manager_agent.yaml Normal file
@@ -0,0 +1,101 @@
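# Recorded HTTP interactions (test fixture cassette) for the CrewAI Plus ephemeral tracing API.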
interactions:
- request:
    body: '{"trace_id": "a15aa156-bf49-4b5e-90aa-b1d749de00f7", "execution_type": "crew", "user_identifier": null,
      "execution_context": {"crew_fingerprint": null, "crew_name": "crew", "flow_name": null, "crewai_version": "1.0.0",
      "privacy_level": "standard"}, "execution_metadata": {"expected_duration_estimate": 300, "agent_count": 0, "task_count":
      0, "flow_method_count": 0, "execution_started_at": "2025-10-21T14:16:55.776884+00:00"}, "ephemeral_trace_id": "a15aa156-bf49-4b5e-90aa-b1d749de00f7"}'
    headers:
      Accept: ['*/*']
      Accept-Encoding: ['gzip, deflate, zstd']
      Connection: [keep-alive]
      Content-Length: ['488']
      Content-Type: [application/json]
      User-Agent: [CrewAI-CLI/1.0.0]
      X-Crewai-Version: ['1.0.0']
    method: POST
    uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches
  response:
    body:
      string: '{"id":"802f2398-9dac-459b-95c5-83c1121b1c4a","ephemeral_trace_id":"a15aa156-bf49-4b5e-90aa-b1d749de00f7","execution_type":"crew","crew_name":"crew","flow_name":null,"status":"running","duration_ms":null,"crewai_version":"1.0.0","total_events":0,"execution_context":{"crew_fingerprint":null,"crew_name":"crew","flow_name":null,"crewai_version":"1.0.0","privacy_level":"standard"},"created_at":"2025-10-21T14:16:56.502Z","updated_at":"2025-10-21T14:16:56.502Z","access_code":"TRACE-a7c706bf5c","user_identifier":null}'
    headers:
      Connection: [keep-alive]
      Content-Length: ['515']
      Content-Type: ['application/json; charset=utf-8']
      Date: ['Tue, 21 Oct 2025 14:16:56 GMT']
    status:
      code: 201
      message: Created
version: 1
@@ -0,0 +1,101 @@
interactions:
- request:
    body: '{"trace_id": "06a3540b-6fa4-4066-bcd0-7eb6f9370f19", "execution_type": "crew", "user_identifier": null,
      "execution_context": {"crew_fingerprint": null, "crew_name": "crew", "flow_name": null, "crewai_version": "1.0.0",
      "privacy_level": "standard"}, "execution_metadata": {"expected_duration_estimate": 300, "agent_count": 0, "task_count":
      0, "flow_method_count": 0, "execution_started_at": "2025-10-21T14:17:40.843919+00:00"}, "ephemeral_trace_id": "06a3540b-6fa4-4066-bcd0-7eb6f9370f19"}'
    headers:
      Accept: ['*/*']
      Accept-Encoding: ['gzip, deflate, zstd']
      Connection: [keep-alive]
      Content-Length: ['488']
      Content-Type: [application/json]
      User-Agent: [CrewAI-CLI/1.0.0]
      X-Crewai-Version: ['1.0.0']
    method: POST
    uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches
  response:
    body:
      string: '{"id":"f01eb268-68a3-4711-b457-165dbfbf28d8","ephemeral_trace_id":"06a3540b-6fa4-4066-bcd0-7eb6f9370f19","execution_type":"crew","crew_name":"crew","flow_name":null,"status":"running","duration_ms":null,"crewai_version":"1.0.0","total_events":0,"execution_context":{"crew_fingerprint":null,"crew_name":"crew","flow_name":null,"crewai_version":"1.0.0","privacy_level":"standard"},"created_at":"2025-10-21T14:17:41.359Z","updated_at":"2025-10-21T14:17:41.359Z","access_code":"TRACE-91b88a5fb5","user_identifier":null}'
    headers:
      Connection: [keep-alive]
      Content-Length: ['515']
      Content-Type: ['application/json; charset=utf-8']
      Date: ['Tue, 21 Oct 2025 14:17:41 GMT']
    status:
      code: 201
      message: Created
version: 1
@@ -0,0 +1,101 @@
interactions:
- request:
    body: '{"trace_id": "48baf84b-6f03-4e2e-b69a-36f2e6298da6", "execution_type": "crew", "user_identifier": null,
      "execution_context": {"crew_fingerprint": null, "crew_name": "crew", "flow_name": null, "crewai_version": "1.0.0",
      "privacy_level": "standard"}, "execution_metadata": {"expected_duration_estimate": 300, "agent_count": 0, "task_count":
      0, "flow_method_count": 0, "execution_started_at": "2025-10-21T14:19:57.504164+00:00"}, "ephemeral_trace_id": "48baf84b-6f03-4e2e-b69a-36f2e6298da6"}'
    headers:
      Accept: ['*/*']
      Accept-Encoding: ['gzip, deflate, zstd']
      Connection: [keep-alive]
      Content-Length: ['488']
      Content-Type: [application/json]
      User-Agent: [CrewAI-CLI/1.0.0]
      X-Crewai-Version: ['1.0.0']
    method: POST
    uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches
  response:
    body:
      string: '{"id":"789714f7-716c-4a0c-a6f6-3142aac81766","ephemeral_trace_id":"48baf84b-6f03-4e2e-b69a-36f2e6298da6","execution_type":"crew","crew_name":"crew","flow_name":null,"status":"running","duration_ms":null,"crewai_version":"1.0.0","total_events":0,"execution_context":{"crew_fingerprint":null,"crew_name":"crew","flow_name":null,"crewai_version":"1.0.0","privacy_level":"standard"},"created_at":"2025-10-21T14:19:57.950Z","updated_at":"2025-10-21T14:19:57.950Z","access_code":"TRACE-2e6a3ca401","user_identifier":null}'
    headers:
      Connection: [keep-alive]
      Content-Length: ['515']
      Content-Type: ['application/json; charset=utf-8']
      Date: ['Tue, 21 Oct 2025 14:19:57 GMT']
    status:
      code: 201
      message: Created
version: 1
@@ -0,0 +1,477 @@
interactions:
- request:
    body: '{"trace_id": "fabb9e5b-b761-4b21-8514-cd4d3c937d09", "execution_type": "flow", "user_identifier": null,
      "execution_context": {"crew_fingerprint": null, "crew_name": null, "flow_name": "MyFlow", "crewai_version": "1.0.0",
      "privacy_level": "standard"}, "execution_metadata": {"expected_duration_estimate": 300, "agent_count": 0, "task_count":
      0, "flow_method_count": 0, "execution_started_at": "2025-10-21T14:26:37.988214+00:00"}, "ephemeral_trace_id": "fabb9e5b-b761-4b21-8514-cd4d3c937d09"}'
    headers:
      Accept: ['*/*']
      Accept-Encoding: ['gzip, deflate, zstd']
      Connection: [keep-alive]
      Content-Length: ['490']
      Content-Type: [application/json]
      User-Agent: [CrewAI-CLI/1.0.0]
      X-Crewai-Version: ['1.0.0']
    method: POST
    uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches
  response:
    body:
      string: '{"id":"fe167fd8-d9b8-4faa-bb28-ed13f10e8d2e","ephemeral_trace_id":"fabb9e5b-b761-4b21-8514-cd4d3c937d09","execution_type":"flow","crew_name":null,"flow_name":"MyFlow","status":"running","duration_ms":null,"crewai_version":"1.0.0","total_events":0,"execution_context":{"crew_fingerprint":null,"crew_name":null,"flow_name":"MyFlow","crewai_version":"1.0.0","privacy_level":"standard"},"created_at":"2025-10-21T14:26:38.410Z","updated_at":"2025-10-21T14:26:38.410Z","access_code":"TRACE-8b1efdc7b5","user_identifier":null}'
    headers:
      Connection: [keep-alive]
      Content-Length: ['519']
      Content-Type: ['application/json; charset=utf-8']
      Date: ['Tue, 21 Oct 2025 14:26:38 GMT']
    status:
      code: 201
      message: Created
- request:
    body: '{"events": [{"event_id": "c6d1cb65-7e55-40a5-9319-766591bcd1ae", "timestamp": "2025-10-21T14:26:37.986131+00:00",
      "type": "flow_started", "event_data": {"timestamp": "2025-10-21T14:26:37.986131+00:00", "type": "flow_started",
      "source_fingerprint": null, "source_type": null, "fingerprint_metadata": null, "task_id": null, "task_name": null,
      "agent_id": null, "agent_role": null, "flow_name": "MyFlow", "inputs": null}}, {"event_id": "17621b79-f10a-497f-9eb1-c567f8ca43f1",
      "timestamp": "2025-10-21T14:26:37.986545+00:00", "type": "method_execution_started", "event_data": {"timestamp":
      "2025-10-21T14:26:37.986545+00:00", "type": "method_execution_started", "source_fingerprint": null, "source_type":
      null, "fingerprint_metadata": null, "task_id": null, "task_name": null, "agent_id": null, "agent_role": null, "flow_name":
      "MyFlow", "method_name": "start", "state": {"id": "c1c5fe64-dcf4-43b2-9081-c4a513770607"}, "params": {}}}, {"event_id":
      "60654efd-c07b-4171-a75d-9b943863e4bf", "timestamp": "2025-10-21T14:26:37.989677+00:00", "type": "method_execution_finished",
      "event_data": {"timestamp": "2025-10-21T14:26:37.989677+00:00", "type": "method_execution_finished", "source_fingerprint":
      null, "source_type": null, "fingerprint_metadata": null, "task_id": null, "task_name": null, "agent_id": null, "agent_role":
      null, "flow_name": "MyFlow", "method_name": "start", "result": {"parent_flow": "<tests.test_crew.test_sets_parent_flow_when_inside_flow.<locals>.MyFlow
      object at 0x12b835520>", "name": "crew", "cache": true, "tasks": [{"used_tools": "0", "tools_errors": "0", "delegations":
      "0", "description": "''Task 1''", "expected_output": "''output''", "agent": "{''id'': UUID(''7fadf38e-4090-4643-9fcf-bd24b07e05e1''),
      ''role'': ''Researcher'', ...}", ...}, {"used_tools": "0", "tools_errors": "0", "delegations": "0", "description":
      "''Task 2''", "expected_output": "''output''", "agent": "{''id'': UUID(''49785d1e-2cfa-4967-a503-da9f6fea46df''),
      ''role'': ''Senior Writer'', ...}", ...}], "agents": [{"id": "UUID(''7fadf38e-4090-4643-9fcf-bd24b07e05e1'')", "role":
      "''Researcher''", "goal": "''Make the best research and analysis on content about AI and AI agents''", ...}, {"id":
      "UUID(''49785d1e-2cfa-4967-a503-da9f6fea46df'')", "role": "''Senior Writer''", "goal": "''Write the best content
      about AI and AI agents.''", ...}], "process": "sequential", "verbose": false, "memory": false, "id": "5a10d62b-4eab-47e7-b91f-6878b682acb5",
      "security_config": {"fingerprint": {"metadata": "{}"}}, "token_usage": null, "tracing": false}, "state": {"id": "c1c5fe64-dcf4-43b2-9081-c4a513770607"}}},
      {"event_id": "191d1d3e-8d9d-40cf-a7e4-48c44b05c2d3", "timestamp": "2025-10-21T14:26:37.989913+00:00", "type": "flow_finished",
      "event_data": {"timestamp": "2025-10-21T14:26:37.989913+00:00", "type": "flow_finished", "source_fingerprint": null,
      "source_type": null, "fingerprint_metadata": null, "task_id": null, "task_name": null, "agent_id": null, "agent_role":
      null, "flow_name": "MyFlow", "result": {"parent_flow": "<tests.test_crew.test_sets_parent_flow_when_inside_flow.<locals>.MyFlow
      object at 0x12b835520>", "name": "crew", ..., "tracing": false}}}], "batch_metadata": {"events_count": 4, "batch_sequence":
      1, "is_final_batch": false}}'
    headers:
      Accept: ['*/*']
      Accept-Encoding: ['gzip, deflate, zstd']
      Connection: [keep-alive]
      Content-Length: ['15841']
      Content-Type: [application/json]
      User-Agent: [CrewAI-CLI/1.0.0]
      X-Crewai-Version: ['1.0.0']
    method: POST
    uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches/fabb9e5b-b761-4b21-8514-cd4d3c937d09/events
  response:
    body:
      string: '{"events_created":4,"ephemeral_trace_batch_id":"fe167fd8-d9b8-4faa-bb28-ed13f10e8d2e"}'
    headers:
      Connection: [keep-alive]
      Content-Length: ['86']
      Content-Type: ['application/json; charset=utf-8']
      Date: ['Tue, 21 Oct 2025 14:26:38 GMT']
    status:
      code: 200
      message: OK
- request:
    body: '{"status": "completed", "duration_ms": 856, "final_event_count": 4}'
    headers:
      Accept: ['*/*']
      Accept-Encoding: ['gzip, deflate, zstd']
      Connection: [keep-alive]
      Content-Length: ['67']
      Content-Type: [application/json]
      User-Agent: [CrewAI-CLI/1.0.0]
      X-Crewai-Version: ['1.0.0']
    method: PATCH
    uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches/fabb9e5b-b761-4b21-8514-cd4d3c937d09/finalize
  response:
    body:
      string: '{"id":"fe167fd8-d9b8-4faa-bb28-ed13f10e8d2e","ephemeral_trace_id":"fabb9e5b-b761-4b21-8514-cd4d3c937d09","execution_type":"flow","crew_name":null,"flow_name":"MyFlow","status":"completed","duration_ms":856,"crewai_version":"1.0.0","total_events":4,"execution_context":{"crew_name":null,"flow_name":"MyFlow","privacy_level":"standard","crewai_version":"1.0.0","crew_fingerprint":null},"created_at":"2025-10-21T14:26:38.410Z","updated_at":"2025-10-21T14:26:39.268Z","access_code":"TRACE-8b1efdc7b5","user_identifier":null}'
    headers:
      Connection: [keep-alive]
      Content-Length: ['520']
      Content-Type: ['application/json; charset=utf-8']
      Date: ['Tue, 21 Oct 2025 14:26:39 GMT']
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
@@ -0,0 +1,101 @@
interactions:
- request:
    body: '{"trace_id": "487e5cbc-9483-4cda-9ceb-3a4eade22f9b", "execution_type":
      "crew", "user_identifier": null, "execution_context": {"crew_fingerprint": null,
      "crew_name": "crew", "flow_name": null, "crewai_version": "1.0.0", "privacy_level":
      "standard"}, "execution_metadata": {"expected_duration_estimate": 300, "agent_count":
      0, "task_count": 0, "flow_method_count": 0, "execution_started_at": "2025-10-21T14:25:56.570899+00:00"},
      "ephemeral_trace_id": "487e5cbc-9483-4cda-9ceb-3a4eade22f9b"}'
    headers:
      Accept:
      - '*/*'
      Accept-Encoding:
      - gzip, deflate, zstd
      Connection:
      - keep-alive
      Content-Length:
      - '488'
      Content-Type:
      - application/json
      User-Agent:
      - CrewAI-CLI/1.0.0
      X-Crewai-Version:
      - 1.0.0
    method: POST
    uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches
  response:
    body:
      string: '{"id":"75d59a82-2468-4e95-917d-8f00c3a22995","ephemeral_trace_id":"487e5cbc-9483-4cda-9ceb-3a4eade22f9b","execution_type":"crew","crew_name":"crew","flow_name":null,"status":"running","duration_ms":null,"crewai_version":"1.0.0","total_events":0,"execution_context":{"crew_fingerprint":null,"crew_name":"crew","flow_name":null,"crewai_version":"1.0.0","privacy_level":"standard"},"created_at":"2025-10-21T14:25:57.071Z","updated_at":"2025-10-21T14:25:57.071Z","access_code":"TRACE-5c2eb0158e","user_identifier":null}'
    headers:
      Connection:
      - keep-alive
      Content-Length:
      - '515'
      Content-Type:
      - application/json; charset=utf-8
      Date:
      - Tue, 21 Oct 2025 14:25:57 GMT
      cache-control:
      - no-store
      content-security-policy:
      - 'default-src ''self'' *.app.crewai.com app.crewai.com; script-src ''self''
        ''unsafe-inline'' *.app.crewai.com app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts
        https://www.gstatic.com https://run.pstmn.io https://apis.google.com https://apis.google.com/js/api.js
        https://accounts.google.com https://accounts.google.com/gsi/client https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.min.css.map
        https://*.google.com https://docs.google.com https://slides.google.com https://js.hs-scripts.com
        https://js.sentry-cdn.com https://browser.sentry-cdn.com https://www.googletagmanager.com
        https://js-na1.hs-scripts.com https://js.hubspot.com http://js-na1.hs-scripts.com
        https://bat.bing.com https://cdn.amplitude.com https://cdn.segment.com https://d1d3n03t5zntha.cloudfront.net/
        https://descriptusercontent.com https://edge.fullstory.com https://googleads.g.doubleclick.net
        https://js.hs-analytics.net https://js.hs-banner.com https://js.hsadspixel.net
        https://js.hscollectedforms.net https://js.usemessages.com https://snap.licdn.com
        https://static.cloudflareinsights.com https://static.reo.dev https://www.google-analytics.com
        https://share.descript.com/; style-src ''self'' ''unsafe-inline'' *.app.crewai.com
        app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts; img-src ''self'' data:
        *.app.crewai.com app.crewai.com https://zeus.tools.crewai.com https://dashboard.tools.crewai.com
        https://cdn.jsdelivr.net https://forms.hsforms.com https://track.hubspot.com
        https://px.ads.linkedin.com https://px4.ads.linkedin.com https://www.google.com
        https://www.google.com.br; font-src ''self'' data: *.app.crewai.com app.crewai.com;
        connect-src ''self'' *.app.crewai.com app.crewai.com https://zeus.tools.crewai.com
        https://connect.useparagon.com/ https://zeus.useparagon.com/* https://*.useparagon.com/*
        https://run.pstmn.io https://connect.tools.crewai.com/ https://*.sentry.io
        https://www.google-analytics.com https://edge.fullstory.com https://rs.fullstory.com
        https://api.hubspot.com https://forms.hscollectedforms.net https://api.hubapi.com
        https://px.ads.linkedin.com https://px4.ads.linkedin.com https://google.com/pagead/form-data/16713662509
        https://google.com/ccm/form-data/16713662509 https://www.google.com/ccm/collect
        https://worker-actionkit.tools.crewai.com https://api.reo.dev; frame-src ''self''
        *.app.crewai.com app.crewai.com https://connect.useparagon.com/ https://zeus.tools.crewai.com
        https://zeus.useparagon.com/* https://connect.tools.crewai.com/ https://docs.google.com
        https://drive.google.com https://slides.google.com https://accounts.google.com
        https://*.google.com https://app.hubspot.com/ https://td.doubleclick.net https://www.googletagmanager.com/
        https://www.youtube.com https://share.descript.com'
      etag:
      - W/"fd7de54de56e26a00d67fe4ea0f9bb62"
      expires:
      - '0'
      permissions-policy:
      - camera=(), microphone=(self), geolocation=()
      pragma:
      - no-cache
      referrer-policy:
      - strict-origin-when-cross-origin
      strict-transport-security:
      - max-age=63072000; includeSubDomains
      vary:
      - Accept
      x-content-type-options:
      - nosniff
      x-frame-options:
      - SAMEORIGIN
      x-permitted-cross-domain-policies:
      - none
      x-request-id:
      - 92521ff4-b389-4b4b-b2f6-9d40ee75cd4b
      x-runtime:
      - '0.071078'
      x-xss-protection:
      - 1; mode=block
    status:
      code: 201
      message: Created
version: 1
@@ -6,10 +6,11 @@ from collections import defaultdict
from concurrent.futures import Future
from hashlib import md5
import re
from unittest import mock
from unittest.mock import ANY, MagicMock, patch
from unittest.mock import ANY, MagicMock, call, patch

from crewai.agent import Agent
from crewai.agents import CacheHandler
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.crew import Crew
from crewai.crews.crew_output import CrewOutput
from crewai.events.event_bus import crewai_event_bus
@@ -29,6 +30,7 @@ from crewai.events.types.memory_events import (
    MemorySaveFailedEvent,
    MemorySaveStartedEvent,
)
from crewai.flow import Flow, start
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
from crewai.llm import LLM
@@ -37,19 +39,21 @@ from crewai.memory.external.external_memory import ExternalMemory
from crewai.memory.long_term.long_term_memory import LongTermMemory
from crewai.memory.short_term.short_term_memory import ShortTermMemory
from crewai.process import Process
from crewai.project import CrewBase, agent, before_kickoff, crew, task
from crewai.task import Task
from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.output_format import OutputFormat
from crewai.tasks.task_output import TaskOutput
from crewai.tools import BaseTool, tool
from crewai.tools.agent_tools.add_image_tool import AddImageTool
from crewai.types.usage_metrics import UsageMetrics
from crewai.utilities.rpm_controller import RPMController
from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler
from crewai_tools import CodeInterpreterTool
from pydantic import BaseModel, Field
import pydantic_core
import pytest

from crewai.agents import CacheHandler
from crewai.flow import Flow, start


@pytest.fixture
def ceo():
@@ -575,10 +579,6 @@ def test_crew_with_delegating_agents(ceo, writer):

@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_with_delegating_agents_should_not_override_task_tools(ceo, writer):
    from pydantic import BaseModel, Field

    from crewai.tools import BaseTool

    class TestToolInput(BaseModel):
        """Input schema for TestTool."""

@@ -635,10 +635,6 @@ def test_crew_with_delegating_agents_should_not_override_task_tools(ceo, writer)

@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_with_delegating_agents_should_not_override_agent_tools(ceo, writer):
    from pydantic import BaseModel, Field

    from crewai.tools import BaseTool

    class TestToolInput(BaseModel):
        """Input schema for TestTool."""

@@ -697,10 +693,6 @@ def test_crew_with_delegating_agents_should_not_override_agent_tools(writer

@pytest.mark.vcr(filter_headers=["authorization"])
def test_task_tools_override_agent_tools(researcher):
    from pydantic import BaseModel, Field

    from crewai.tools import BaseTool

    class TestToolInput(BaseModel):
        """Input schema for TestTool."""

@@ -753,11 +745,6 @@ def test_task_tools_override_agent_tools_with_allow_delegation(researcher, write
    """
    Test that task tools override agent tools while preserving delegation tools when allow_delegation=True
    """

    from pydantic import BaseModel, Field

    from crewai.tools import BaseTool

    class TestToolInput(BaseModel):
        query: str = Field(..., description="Query to process")

@@ -876,10 +863,6 @@ def test_crew_verbose_output(researcher, writer, capsys):

@pytest.mark.vcr(filter_headers=["authorization"])
def test_cache_hitting_between_agents(researcher, writer, ceo):
    from unittest.mock import call, patch

    from crewai.tools import tool

    @tool
    def multiplier(first_number: int, second_number: int) -> float:
        """Useful for when you need to multiply two numbers together."""
@@ -934,8 +917,6 @@ def test_cache_hitting_between_agents(researcher, writer, ceo):

@pytest.mark.vcr(filter_headers=["authorization"])
def test_api_calls_throttling(capsys):
    from crewai.tools import tool

    @tool
    def get_final_answer() -> float:
        """Get the final answer but don't give it yet, just re-use this
@@ -1216,8 +1197,6 @@ async def test_crew_async_kickoff():
@pytest.mark.asyncio
@pytest.mark.vcr(filter_headers=["authorization"])
async def test_async_task_execution_call_count(researcher, writer):
    from unittest.mock import MagicMock, patch

    list_ideas = Task(
        description="Give me a list of 5 interesting ideas to explore for na article, what makes them unique and interesting.",
        expected_output="Bullet point list of 5 important events.",
@@ -1575,9 +1554,6 @@ def test_dont_set_agents_step_callback_if_already_set():

@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_function_calling_llm():
    from crewai import LLM
    from crewai.tools import tool

    llm = LLM(model="gpt-4o-mini")

    @tool
@@ -1607,8 +1583,6 @@ def test_crew_function_calling_llm():

@pytest.mark.vcr(filter_headers=["authorization"])
def test_task_with_no_arguments():
    from crewai.tools import tool

    @tool
    def return_data() -> str:
        "Useful to get the sales related data"
@@ -1635,11 +1609,6 @@ def test_task_with_no_arguments():


def test_code_execution_flag_adds_code_tool_upon_kickoff():
    try:
        from crewai_tools import CodeInterpreterTool
    except (ImportError, Exception):
        pytest.skip("crewai_tools not available or cannot be imported")

    # Mock Docker validation for the entire test
    with patch.object(Agent, "_validate_docker_installation"):
        programmer = Agent(
@@ -2061,8 +2030,6 @@ def test_crew_does_not_interpolate_without_inputs():


def test_task_callback_on_crew():
    from unittest.mock import MagicMock, patch

    researcher_agent = Agent(
        role="Researcher",
        goal="Make the best research and analysis on content about AI and AI agents",
@@ -2097,8 +2064,6 @@ def test_task_callback_on_crew():


def test_task_callback_both_on_task_and_crew():
    from unittest.mock import MagicMock, patch

    mock_callback_on_task = MagicMock()
    mock_callback_on_crew = MagicMock()

@@ -2134,8 +2099,6 @@ def test_task_callback_both_on_task_and_crew():


def test_task_same_callback_both_on_task_and_crew():
    from unittest.mock import MagicMock, patch

    mock_callback = MagicMock()

    researcher_agent = Agent(
@@ -2170,8 +2133,6 @@ def test_task_same_callback_both_on_task_and_crew():

@pytest.mark.vcr(filter_headers=["authorization"])
def test_tools_with_custom_caching():
    from crewai.tools import tool

    @tool
    def multiplcation_tool(first_number: int, second_number: int) -> int:
        """Useful for when you need to multiply two numbers together."""
@@ -2477,7 +2438,7 @@ def test_using_contextual_memory():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_memory_events_are_emitted():
    events = defaultdict(list)
    event_received = threading.Event()
    condition = threading.Condition()

    @crewai_event_bus.on(MemorySaveStartedEvent)
    def handle_memory_save_started(source, event):
@@ -2509,8 +2470,9 @@ def test_memory_events_are_emitted():

    @crewai_event_bus.on(MemoryRetrievalCompletedEvent)
    def handle_memory_retrieval_completed(source, event):
        events["MemoryRetrievalCompletedEvent"].append(event)
        event_received.set()
        with condition:
            events["MemoryRetrievalCompletedEvent"].append(event)
            condition.notify()

    math_researcher = Agent(
        role="Researcher",
@@ -2533,7 +2495,12 @@ def test_memory_events_are_emitted():

    crew.kickoff()

    assert event_received.wait(timeout=5), "Timeout waiting for memory events"
    with condition:
        success = condition.wait_for(
            lambda: len(events["MemoryRetrievalCompletedEvent"]) >= 1, timeout=5
        )

    assert success, "Timeout waiting for memory events"
    assert len(events["MemorySaveStartedEvent"]) == 3
    assert len(events["MemorySaveCompletedEvent"]) == 3
    assert len(events["MemorySaveFailedEvent"]) == 0
@@ -2797,6 +2764,7 @@ def test_crew_output_file_validation_failures():
        Crew(agents=[agent], tasks=[task]).kickoff()


@pytest.mark.vcr(filter_headers=["authorization"])
def test_manager_agent(researcher, writer):
    task = Task(
        description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
@@ -2855,9 +2823,8 @@ def test_manager_agent_in_agents_raises_exception(researcher, writer):
    )


@pytest.mark.vcr(filter_headers=["authorization"])
def test_manager_agent_with_tools_raises_exception(researcher, writer):
    from crewai.tools import tool

    @tool
    def testing_tool(first_number: int, second_number: int) -> int:
        """Useful for when you need to multiply two numbers together."""
@@ -2887,13 +2854,8 @@ def test_manager_agent_with_tools_raises_exception(researcher, writer):
        crew.kickoff()


@patch("crewai.crew.Crew.kickoff")
@patch("crewai.crew.CrewTrainingHandler")
@patch("crewai.crew.TaskEvaluator")
@patch("crewai.crew.Crew.copy")
def test_crew_train_success(
    copy_mock, task_evaluator, crew_training_handler, kickoff_mock, researcher, writer
):
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_train_success(researcher, writer):
    task = Task(
        description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
        expected_output="5 bullet points with a paragraph for each idea.",
@@ -2905,79 +2867,39 @@ def test_crew_train_success(
        tasks=[task],
    )

    # Create a mock for the copied crew
    copy_mock.return_value = crew

    received_events = []
    lock = threading.Lock()
    all_events_received = threading.Event()
    condition = threading.Condition()

    @crewai_event_bus.on(CrewTrainStartedEvent)
    def on_crew_train_started(source, event: CrewTrainStartedEvent):
        with lock:
        with condition:
            received_events.append(event)
            if len(received_events) == 2:
                all_events_received.set()
            condition.notify()

    @crewai_event_bus.on(CrewTrainCompletedEvent)
    def on_crew_train_completed(source, event: CrewTrainCompletedEvent):
        with lock:
        with condition:
            received_events.append(event)
            if len(received_events) == 2:
                all_events_received.set()
            condition.notify()

    crew.train(
        n_iterations=2, inputs={"topic": "AI"}, filename="trained_agents_data.pkl"
    )
    # Mock human input to avoid blocking during training
    with patch("builtins.input", return_value="Great work!"):
        crew.train(
            n_iterations=2, inputs={"topic": "AI"}, filename="trained_agents_data.pkl"
        )

    assert all_events_received.wait(timeout=5), "Timeout waiting for all train events"

    # Ensure kickoff is called on the copied crew
    kickoff_mock.assert_has_calls(
        [mock.call(inputs={"topic": "AI"}), mock.call(inputs={"topic": "AI"})]
    )

    task_evaluator.assert_has_calls(
        [
            mock.call(researcher),
            mock.call().evaluate_training_data(
                training_data=crew_training_handler().load(),
                agent_id=str(researcher.id),
            ),
            mock.call().evaluate_training_data().model_dump(),
            mock.call(writer),
            mock.call().evaluate_training_data(
                training_data=crew_training_handler().load(),
                agent_id=str(writer.id),
            ),
            mock.call().evaluate_training_data().model_dump(),
        ]
    )

    crew_training_handler.assert_any_call("training_data.pkl")
    crew_training_handler().load.assert_called()

    crew_training_handler.assert_any_call("trained_agents_data.pkl")
    crew_training_handler().load.assert_called()

    crew_training_handler().save_trained_data.assert_has_calls(
        [
            mock.call(
                agent_id="Researcher",
                trained_data=task_evaluator().evaluate_training_data().model_dump(),
            ),
            mock.call(
                agent_id="Senior Writer",
                trained_data=task_evaluator().evaluate_training_data().model_dump(),
            ),
        ]
    )
    with condition:
        success = condition.wait_for(lambda: len(received_events) == 2, timeout=5)

    assert success, "Timeout waiting for all train events"
    assert len(received_events) == 2
    assert isinstance(received_events[0], CrewTrainStartedEvent)
    assert isinstance(received_events[1], CrewTrainCompletedEvent)


@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_train_error(researcher, writer):
    task = Task(
        description="Come up with a list of 5 interesting ideas to explore for an article",
@@ -3277,6 +3199,7 @@ def test_replay_with_context():
    assert crew.tasks[1].context[0].output.raw == "context raw output"


@pytest.mark.vcr(filter_headers=["authorization"])
def test_replay_with_context_set_to_nullable():
    agent = Agent(role="test_agent", backstory="Test Description", goal="Test Goal")
    task1 = Task(
@@ -3716,10 +3639,8 @@ def test_conditional_should_execute(researcher, writer):
    assert mock_execute_sync.call_count == 2


@mock.patch("crewai.crew.CrewEvaluator")
@mock.patch("crewai.crew.Crew.copy")
@mock.patch("crewai.crew.Crew.kickoff")
def test_crew_testing_function(kickoff_mock, copy_mock, crew_evaluator, researcher):
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_testing_function(researcher):
    task = Task(
        description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
        expected_output="5 bullet points with a paragraph for each idea.",
@@ -3731,48 +3652,32 @@ def test_crew_testing_function(kickoff_mock, copy_mock, crew_evaluator, research
        tasks=[task],
    )

    # Create a mock for the copied crew
    copy_mock.return_value = crew

    n_iterations = 2
    llm_instance = LLM("gpt-4o-mini")

    received_events = []
    lock = threading.Lock()
    all_events_received = threading.Event()
    condition = threading.Condition()

    @crewai_event_bus.on(CrewTestStartedEvent)
    def on_crew_test_started(source, event: CrewTestStartedEvent):
        with lock:
        with condition:
            received_events.append(event)
            if len(received_events) == 2:
                all_events_received.set()
            condition.notify()

    @crewai_event_bus.on(CrewTestCompletedEvent)
    def on_crew_test_completed(source, event: CrewTestCompletedEvent):
        with lock:
        with condition:
            received_events.append(event)
            if len(received_events) == 2:
                all_events_received.set()
            condition.notify()

    crew.test(n_iterations, llm_instance, inputs={"topic": "AI"})

    assert all_events_received.wait(timeout=5), "Timeout waiting for all test events"

    # Ensure kickoff is called on the copied crew
    kickoff_mock.assert_has_calls(
        [mock.call(inputs={"topic": "AI"}), mock.call(inputs={"topic": "AI"})]
    )

    crew_evaluator.assert_has_calls(
        [
            mock.call(crew, llm_instance),
            mock.call().set_iteration(1),
            mock.call().set_iteration(2),
            mock.call().print_crew_evaluation_result(),
        ]
    )
    with condition:
        success = condition.wait_for(lambda: len(received_events) == 2, timeout=5)

    assert success, "Timeout waiting for all test events"
    assert len(received_events) == 2
    assert isinstance(received_events[0], CrewTestStartedEvent)
    assert isinstance(received_events[1], CrewTestCompletedEvent)
@@ -3843,15 +3748,11 @@ def test_fetch_inputs():
    )


@pytest.mark.vcr(filter_headers=["authorization"])
def test_task_tools_preserve_code_execution_tools():
    """
    Test that task tools don't override code execution tools when allow_code_execution=True
    """
    from crewai_tools import CodeInterpreterTool
    from pydantic import BaseModel, Field

    from crewai.tools import BaseTool

    class TestToolInput(BaseModel):
        """Input schema for TestTool."""

@@ -3865,23 +3766,25 @@ def test_task_tools_preserve_code_execution_tools():
        def _run(self, query: str) -> str:
            return f"Processed: {query}"

    # Create a programmer agent with code execution enabled
    programmer = Agent(
        role="Programmer",
        goal="Write code to solve problems.",
        backstory="You're a programmer who loves to solve problems with code.",
        allow_delegation=True,
        allow_code_execution=True,
    )
    # Mock Docker validation for the entire test
    with patch.object(Agent, "_validate_docker_installation"):
        # Create a programmer agent with code execution enabled
        programmer = Agent(
            role="Programmer",
            goal="Write code to solve problems.",
            backstory="You're a programmer who loves to solve problems with code.",
            allow_delegation=True,
            allow_code_execution=True,
        )

    # Create a code reviewer agent
    reviewer = Agent(
        role="Code Reviewer",
        goal="Review code for bugs and improvements",
        backstory="You're an experienced code reviewer who ensures code quality and best practices.",
        allow_delegation=True,
        allow_code_execution=True,
    )
        # Create a code reviewer agent
        reviewer = Agent(
            role="Code Reviewer",
            goal="Review code for bugs and improvements",
            backstory="You're an experienced code reviewer who ensures code quality and best practices.",
            allow_delegation=True,
            allow_code_execution=True,
        )

        # Create a task with its own tools
        task = Task(
@@ -3932,8 +3835,6 @@ def test_multimodal_flag_adds_multimodal_tools():
    """
    Test that an agent with multimodal=True automatically has multimodal tools added to the task execution.
    """
    from crewai.tools.agent_tools.add_image_tool import AddImageTool

    # Create an agent that supports multimodal
    multimodal_agent = Agent(
        role="Multimodal Analyst",
@@ -4247,13 +4148,8 @@ def test_crew_guardrail_feedback_in_context():

@pytest.mark.vcr(filter_headers=["authorization"])
def test_before_kickoff_callback():
    from crewai.project import CrewBase

    @CrewBase
    class TestCrewClass:
    from crewai.agents.agent_builder.base_agent import BaseAgent
    from crewai.project import CrewBase, agent, before_kickoff, crew, task

        agents: list[BaseAgent]
        tasks: list[Task]

@@ -4309,12 +4205,8 @@ def test_before_kickoff_callback():

@pytest.mark.vcr(filter_headers=["authorization"])
def test_before_kickoff_without_inputs():
    from crewai.project import CrewBase, agent, before_kickoff, task

    @CrewBase
    class TestCrewClass:
        from crewai.project import crew

        agents_config = None
        tasks_config = None

@@ -4518,6 +4410,7 @@ def test_sets_parent_flow_when_outside_flow(researcher, writer):
    assert crew.parent_flow is None


@pytest.mark.vcr(filter_headers=["authorization"])
def test_sets_parent_flow_when_inside_flow(researcher, writer):
    class MyFlow(Flow):
        @start()

@@ -850,6 +850,31 @@ def test_flow_plotting():
    assert isinstance(received_events[0].timestamp, datetime)


def test_method_calls_crew_detection():
    """Test that method_calls_crew() detects .crew(), .kickoff(), and .kickoff_async() calls."""
    from crewai.flow.visualization_utils import method_calls_crew
    from crewai import Agent

    # Test with a real Flow that uses agent.kickoff()
    class FlowWithAgentKickoff(Flow):
        @start()
        def run_agent(self):
            agent = Agent(role="test", goal="test", backstory="test")
            return agent.kickoff("query")

    flow = FlowWithAgentKickoff()
    assert method_calls_crew(flow.run_agent) is True

    # Test with a Flow that has no crew/agent calls
    class FlowWithoutCrewCalls(Flow):
        @start()
        def simple_method(self):
            return "Just a regular method"

    flow2 = FlowWithoutCrewCalls()
    assert method_calls_crew(flow2.simple_method) is False


def test_multiple_routers_from_same_trigger():
    """Test that multiple routers triggered by the same method all activate their listeners."""
    execution_order = []

@@ -22,7 +22,7 @@ import pytest


@pytest.fixture(scope="module")
def vcr_config(request) -> dict:
def vcr_config(request: pytest.FixtureRequest) -> dict[str, str]:
    return {
        "cassette_library_dir": os.path.join(os.path.dirname(__file__), "cassettes"),
    }
@@ -65,7 +65,7 @@ class CustomConverter(Converter):


# Fixtures
@pytest.fixture
def mock_agent():
def mock_agent() -> Mock:
    agent = Mock()
    agent.function_calling_llm = None
    agent.llm = Mock()
@@ -73,7 +73,7 @@ def mock_agent():


# Tests for convert_to_model
def test_convert_to_model_with_valid_json():
def test_convert_to_model_with_valid_json() -> None:
    result = '{"name": "John", "age": 30}'
    output = convert_to_model(result, SimpleModel, None, None)
    assert isinstance(output, SimpleModel)
@@ -81,7 +81,7 @@ def test_convert_to_model_with_valid_json():
    assert output.age == 30


def test_convert_to_model_with_invalid_json():
def test_convert_to_model_with_invalid_json() -> None:
    result = '{"name": "John", "age": "thirty"}'
    with patch("crewai.utilities.converter.handle_partial_json") as mock_handle:
        mock_handle.return_value = "Fallback result"
@@ -89,13 +89,13 @@ def test_convert_to_model_with_invalid_json():
        assert output == "Fallback result"


def test_convert_to_model_with_no_model():
def test_convert_to_model_with_no_model() -> None:
    result = "Plain text"
    output = convert_to_model(result, None, None, None)
    assert output == "Plain text"


def test_convert_to_model_with_special_characters():
def test_convert_to_model_with_special_characters() -> None:
    json_string_test = """
    {
        "responses": [
@@ -114,7 +114,7 @@ def test_convert_to_model_with_special_characters():
    )


def test_convert_to_model_with_escaped_special_characters():
def test_convert_to_model_with_escaped_special_characters() -> None:
    json_string_test = json.dumps(
        {
            "responses": [
@@ -133,7 +133,7 @@ def test_convert_to_model_with_escaped_special_characters():
    )


def test_convert_to_model_with_multiple_special_characters():
def test_convert_to_model_with_multiple_special_characters() -> None:
    json_string_test = """
    {
        "responses": [
@@ -153,7 +153,7 @@ def test_convert_to_model_with_multiple_special_characters():


# Tests for validate_model
def test_validate_model_pydantic_output():
def test_validate_model_pydantic_output() -> None:
    result = '{"name": "Alice", "age": 25}'
    output = validate_model(result, SimpleModel, False)
    assert isinstance(output, SimpleModel)
@@ -161,7 +161,7 @@ def test_validate_model_pydantic_output():
    assert output.age == 25


def test_validate_model_json_output():
def test_validate_model_json_output() -> None:
    result = '{"name": "Bob", "age": 40}'
    output = validate_model(result, SimpleModel, True)
    assert isinstance(output, dict)
@@ -169,7 +169,7 @@ def test_validate_model_json_output():


# Tests for handle_partial_json
def test_handle_partial_json_with_valid_partial():
def test_handle_partial_json_with_valid_partial() -> None:
    result = 'Some text {"name": "Charlie", "age": 35} more text'
    output = handle_partial_json(result, SimpleModel, False, None)
    assert isinstance(output, SimpleModel)
@@ -177,7 +177,7 @@ def test_handle_partial_json_with_valid_partial():
    assert output.age == 35


def test_handle_partial_json_with_invalid_partial(mock_agent):
def test_handle_partial_json_with_invalid_partial(mock_agent: Mock) -> None:
    result = "No valid JSON here"
    with patch("crewai.utilities.converter.convert_with_instructions") as mock_convert:
        mock_convert.return_value = "Converted result"
@@ -189,8 +189,8 @@ def test_handle_partial_json_with_invalid_partial(mock_agent):
@patch("crewai.utilities.converter.create_converter")
@patch("crewai.utilities.converter.get_conversion_instructions")
def test_convert_with_instructions_success(
    mock_get_instructions, mock_create_converter, mock_agent
):
    mock_get_instructions: Mock, mock_create_converter: Mock, mock_agent: Mock
) -> None:
    mock_get_instructions.return_value = "Instructions"
    mock_converter = Mock()
    mock_converter.to_pydantic.return_value = SimpleModel(name="David", age=50)
@@ -207,8 +207,8 @@ def test_convert_with_instructions_success(
@patch("crewai.utilities.converter.create_converter")
@patch("crewai.utilities.converter.get_conversion_instructions")
def test_convert_with_instructions_failure(
    mock_get_instructions, mock_create_converter, mock_agent
):
    mock_get_instructions: Mock, mock_create_converter: Mock, mock_agent: Mock
) -> None:
    mock_get_instructions.return_value = "Instructions"
    mock_converter = Mock()
    mock_converter.to_pydantic.return_value = ConverterError("Conversion failed")
@@ -222,7 +222,7 @@ def test_convert_with_instructions_failure(


# Tests for get_conversion_instructions
def test_get_conversion_instructions_gpt():
def test_get_conversion_instructions_gpt() -> None:
    llm = LLM(model="gpt-4o-mini")
    with patch.object(LLM, "supports_function_calling") as supports_function_calling:
        supports_function_calling.return_value = True
@@ -237,7 +237,7 @@ def test_get_conversion_instructions_gpt():
    assert instructions == expected_instructions


def test_get_conversion_instructions_non_gpt():
def test_get_conversion_instructions_non_gpt() -> None:
    llm = LLM(model="ollama/llama3.1", base_url="http://localhost:11434")
    with patch.object(LLM, "supports_function_calling", return_value=False):
        instructions = get_conversion_instructions(SimpleModel, llm)
@@ -246,17 +246,17 @@ def test_get_conversion_instructions_non_gpt():


# Tests for is_gpt
def test_supports_function_calling_true():
def test_supports_function_calling_true() -> None:
    llm = LLM(model="gpt-4o")
    assert llm.supports_function_calling() is True


def test_supports_function_calling_false():
def test_supports_function_calling_false() -> None:
    llm = LLM(model="non-existent-model", is_litellm=True)
    assert llm.supports_function_calling() is False


def test_create_converter_with_mock_agent():
def test_create_converter_with_mock_agent() -> None:
    mock_agent = MagicMock()
    mock_agent.get_output_converter.return_value = MagicMock(spec=Converter)

@@ -272,7 +272,7 @@ def test_create_converter_with_mock_agent():
    mock_agent.get_output_converter.assert_called_once()


def test_create_converter_with_custom_converter():
def test_create_converter_with_custom_converter() -> None:
    converter = create_converter(
        converter_cls=CustomConverter,
        llm=LLM(model="gpt-4o-mini"),
@@ -284,7 +284,7 @@ def test_create_converter_with_custom_converter():
    assert isinstance(converter, CustomConverter)


def test_create_converter_fails_without_agent_or_converter_cls():
def test_create_converter_fails_without_agent_or_converter_cls() -> None:
    with pytest.raises(
        ValueError, match="Either agent or converter_cls must be provided"
    ):
@@ -293,13 +293,13 @@ def test_create_converter_fails_without_agent_or_converter_cls():
    )


def test_generate_model_description_simple_model():
def test_generate_model_description_simple_model() -> None:
    description = generate_model_description(SimpleModel)
    expected_description = '{\n "name": str,\n "age": int\n}'
    assert description == expected_description


def test_generate_model_description_nested_model():
def test_generate_model_description_nested_model() -> None:
    description = generate_model_description(NestedModel)
    expected_description = (
        '{\n "id": int,\n "data": {\n "name": str,\n "age": int\n}\n}'
@@ -307,7 +307,7 @@ def test_generate_model_description_nested_model():
    assert description == expected_description


def test_generate_model_description_optional_field():
def test_generate_model_description_optional_field() -> None:
    class ModelWithOptionalField(BaseModel):
        name: str
        age: int | None
@@ -317,7 +317,7 @@ def test_generate_model_description_optional_field():
    assert description == expected_description


def test_generate_model_description_list_field():
def test_generate_model_description_list_field() -> None:
    class ModelWithListField(BaseModel):
        items: list[int]

@@ -326,7 +326,7 @@ def test_generate_model_description_list_field():
    assert description == expected_description


def test_generate_model_description_dict_field():
def test_generate_model_description_dict_field() -> None:
    class ModelWithDictField(BaseModel):
        attributes: dict[str, int]

@@ -336,7 +336,7 @@ def test_generate_model_description_dict_field():


@pytest.mark.vcr(filter_headers=["authorization"])
def test_convert_with_instructions():
def test_convert_with_instructions() -> None:
    llm = LLM(model="gpt-4o-mini")
    sample_text = "Name: Alice, Age: 30"

@@ -358,7 +358,7 @@ def test_convert_with_instructions():


@pytest.mark.vcr(filter_headers=["authorization"])
def test_converter_with_llama3_2_model():
def test_converter_with_llama3_2_model() -> None:
    llm = LLM(model="openrouter/meta-llama/llama-3.2-3b-instruct")
    sample_text = "Name: Alice Llama, Age: 30"
    instructions = get_conversion_instructions(SimpleModel, llm)
@@ -375,7 +375,7 @@ def test_converter_with_llama3_2_model():


@pytest.mark.vcr(filter_headers=["authorization"])
def test_converter_with_llama3_1_model():
def test_converter_with_llama3_1_model() -> None:
    llm = LLM(model="ollama/llama3.1", base_url="http://localhost:11434")
    sample_text = "Name: Alice Llama, Age: 30"
    instructions = get_conversion_instructions(SimpleModel, llm)
@@ -392,7 +392,7 @@ def test_converter_with_llama3_1_model():


@pytest.mark.vcr(filter_headers=["authorization"])
def test_converter_with_nested_model():
def test_converter_with_nested_model() -> None:
    llm = LLM(model="gpt-4o-mini")
    sample_text = "Name: John Doe\nAge: 30\nAddress: 123 Main St, Anytown, 12345"

@@ -416,7 +416,7 @@ def test_converter_with_nested_model():


# Tests for error handling
def test_converter_error_handling():
def test_converter_error_handling() -> None:
    llm = Mock(spec=LLM)
    llm.supports_function_calling.return_value = False
    llm.call.return_value = "Invalid JSON"
@@ -437,7 +437,7 @@ def test_converter_error_handling():


# Tests for retry logic
def test_converter_retry_logic():
def test_converter_retry_logic() -> None:
    llm = Mock(spec=LLM)
    llm.supports_function_calling.return_value = False
    llm.call.side_effect = [
@@ -465,7 +465,7 @@ def test_converter_retry_logic():


# Tests for optional fields
def test_converter_with_optional_fields():
def test_converter_with_optional_fields() -> None:
    class OptionalModel(BaseModel):
        name: str
        age: int | None
@@ -492,7 +492,7 @@ def test_converter_with_optional_fields():


# Tests for list fields
def test_converter_with_list_field():
def test_converter_with_list_field() -> None:
    class ListModel(BaseModel):
        items: list[int]

@@ -515,7 +515,7 @@ def test_converter_with_list_field():
    assert output.items == [1, 2, 3]


def test_converter_with_enum():
def test_converter_with_enum() -> None:
    class Color(Enum):
        RED = "red"
        GREEN = "green"
@@ -546,7 +546,7 @@ def test_converter_with_enum():


# Tests for ambiguous input
def test_converter_with_ambiguous_input():
def test_converter_with_ambiguous_input() -> None:
    llm = Mock(spec=LLM)
    llm.supports_function_calling.return_value = False
    llm.call.return_value = '{"name": "Charlie", "age": "Not an age"}'
@@ -567,7 +567,7 @@ def test_converter_with_ambiguous_input():


# Tests for function calling support
def test_converter_with_function_calling():
def test_converter_with_function_calling() -> None:
    llm = Mock(spec=LLM)
    llm.supports_function_calling.return_value = True

@@ -580,20 +580,359 @@ def test_converter_with_function_calling():
        model=SimpleModel,
        instructions="Convert this text.",
    )
    converter._create_instructor = Mock(return_value=instructor)

    with patch.object(converter, '_create_instructor', return_value=instructor):
        output = converter.to_pydantic()

    output = converter.to_pydantic()

    assert isinstance(output, SimpleModel)
    assert output.name == "Eve"
    assert output.age == 35
        assert isinstance(output, SimpleModel)
        assert output.name == "Eve"
        assert output.age == 35
        instructor.to_pydantic.assert_called_once()


def test_generate_model_description_union_field():
def test_generate_model_description_union_field() -> None:
    class UnionModel(BaseModel):
        field: int | str | None

    description = generate_model_description(UnionModel)
    expected_description = '{\n "field": int | str | None\n}'
    assert description == expected_description

def test_internal_instructor_with_openai_provider() -> None:
    """Test InternalInstructor with OpenAI provider using registry pattern."""
    from crewai.utilities.internal_instructor import InternalInstructor

    # Mock LLM with OpenAI provider
    mock_llm = Mock()
    mock_llm.is_litellm = False
    mock_llm.model = "gpt-4o"
    mock_llm.provider = "openai"

    # Mock instructor client
    mock_client = Mock()
    mock_client.chat.completions.create.return_value = SimpleModel(name="Test", age=25)

    # Patch the instructor import at the method level
    with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
        mock_create_client.return_value = mock_client

        instructor = InternalInstructor(
            content="Test content",
            model=SimpleModel,
            llm=mock_llm
        )

        result = instructor.to_pydantic()

        assert isinstance(result, SimpleModel)
        assert result.name == "Test"
        assert result.age == 25
        # Verify the method was called with the correct LLM
        mock_create_client.assert_called_once()


def test_internal_instructor_with_anthropic_provider() -> None:
    """Test InternalInstructor with Anthropic provider using registry pattern."""
    from crewai.utilities.internal_instructor import InternalInstructor

    # Mock LLM with Anthropic provider
    mock_llm = Mock()
    mock_llm.is_litellm = False
    mock_llm.model = "claude-3-5-sonnet-20241022"
    mock_llm.provider = "anthropic"

    # Mock instructor client
    mock_client = Mock()
    mock_client.chat.completions.create.return_value = SimpleModel(name="Bob", age=25)

    # Patch the instructor import at the method level
    with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
        mock_create_client.return_value = mock_client

        instructor = InternalInstructor(
            content="Name: Bob, Age: 25",
            model=SimpleModel,
            llm=mock_llm
        )

        result = instructor.to_pydantic()

        assert isinstance(result, SimpleModel)
        assert result.name == "Bob"
        assert result.age == 25
        # Verify the method was called with the correct LLM
        mock_create_client.assert_called_once()


def test_factory_pattern_registry_extensibility() -> None:
|
||||
"""Test that the factory pattern registry works with different providers."""
|
||||
from crewai.utilities.internal_instructor import InternalInstructor
|
||||
|
||||
# Test with OpenAI provider
|
||||
mock_llm_openai = Mock()
|
||||
mock_llm_openai.is_litellm = False
|
||||
mock_llm_openai.model = "gpt-4o-mini"
|
||||
mock_llm_openai.provider = "openai"
|
||||
|
||||
mock_client_openai = Mock()
|
||||
mock_client_openai.chat.completions.create.return_value = SimpleModel(name="Alice", age=30)
|
||||
|
||||
with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
|
||||
mock_create_client.return_value = mock_client_openai
|
||||
|
||||
instructor_openai = InternalInstructor(
|
||||
content="Name: Alice, Age: 30",
|
||||
model=SimpleModel,
|
||||
llm=mock_llm_openai
|
||||
)
|
||||
|
||||
result_openai = instructor_openai.to_pydantic()
|
||||
|
||||
assert isinstance(result_openai, SimpleModel)
|
||||
assert result_openai.name == "Alice"
|
||||
assert result_openai.age == 30
|
||||
|
||||
# Test with Anthropic provider
|
||||
mock_llm_anthropic = Mock()
|
||||
mock_llm_anthropic.is_litellm = False
|
||||
mock_llm_anthropic.model = "claude-3-5-sonnet-20241022"
|
||||
mock_llm_anthropic.provider = "anthropic"
|
||||
|
||||
mock_client_anthropic = Mock()
|
||||
mock_client_anthropic.chat.completions.create.return_value = SimpleModel(name="Bob", age=25)
|
||||
|
||||
with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
|
||||
mock_create_client.return_value = mock_client_anthropic
|
||||
|
||||
instructor_anthropic = InternalInstructor(
|
||||
content="Name: Bob, Age: 25",
|
||||
model=SimpleModel,
|
||||
llm=mock_llm_anthropic
|
||||
)
|
||||
|
||||
result_anthropic = instructor_anthropic.to_pydantic()
|
||||
|
||||
assert isinstance(result_anthropic, SimpleModel)
|
||||
assert result_anthropic.name == "Bob"
|
||||
assert result_anthropic.age == 25
|
||||
|
||||
# Test with Bedrock provider
|
||||
mock_llm_bedrock = Mock()
|
||||
mock_llm_bedrock.is_litellm = False
|
||||
mock_llm_bedrock.model = "claude-3-5-sonnet-20241022"
|
||||
mock_llm_bedrock.provider = "bedrock"
|
||||
|
||||
mock_client_bedrock = Mock()
|
||||
mock_client_bedrock.chat.completions.create.return_value = SimpleModel(name="Charlie", age=35)
|
||||
|
||||
with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
|
||||
mock_create_client.return_value = mock_client_bedrock
|
||||
|
||||
instructor_bedrock = InternalInstructor(
|
||||
content="Name: Charlie, Age: 35",
|
||||
model=SimpleModel,
|
||||
llm=mock_llm_bedrock
|
||||
)
|
||||
|
||||
result_bedrock = instructor_bedrock.to_pydantic()
|
||||
|
||||
assert isinstance(result_bedrock, SimpleModel)
|
||||
assert result_bedrock.name == "Charlie"
|
||||
assert result_bedrock.age == 35
|
||||
|
||||
# Test with Google provider
|
||||
mock_llm_google = Mock()
|
||||
mock_llm_google.is_litellm = False
|
||||
mock_llm_google.model = "gemini-1.5-flash"
|
||||
mock_llm_google.provider = "google"
|
||||
|
||||
mock_client_google = Mock()
|
||||
mock_client_google.chat.completions.create.return_value = SimpleModel(name="Diana", age=28)
|
||||
|
||||
with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
|
||||
mock_create_client.return_value = mock_client_google
|
||||
|
||||
instructor_google = InternalInstructor(
|
||||
content="Name: Diana, Age: 28",
|
||||
model=SimpleModel,
|
||||
llm=mock_llm_google
|
||||
)
|
||||
|
||||
result_google = instructor_google.to_pydantic()
|
||||
|
||||
assert isinstance(result_google, SimpleModel)
|
||||
assert result_google.name == "Diana"
|
||||
assert result_google.age == 28
|
||||
|
||||
# Test with Azure provider
|
||||
mock_llm_azure = Mock()
|
||||
mock_llm_azure.is_litellm = False
|
||||
mock_llm_azure.model = "gpt-4o"
|
||||
mock_llm_azure.provider = "azure"
|
||||
|
||||
mock_client_azure = Mock()
|
||||
mock_client_azure.chat.completions.create.return_value = SimpleModel(name="Eve", age=32)
|
||||
|
||||
with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
|
||||
mock_create_client.return_value = mock_client_azure
|
||||
|
||||
instructor_azure = InternalInstructor(
|
||||
content="Name: Eve, Age: 32",
|
||||
model=SimpleModel,
|
||||
llm=mock_llm_azure
|
||||
)
|
||||
|
||||
result_azure = instructor_azure.to_pydantic()
|
||||
|
||||
assert isinstance(result_azure, SimpleModel)
|
||||
assert result_azure.name == "Eve"
|
||||
assert result_azure.age == 32
|
||||
|
||||
|
||||
def test_internal_instructor_with_bedrock_provider() -> None:
|
||||
"""Test InternalInstructor with AWS Bedrock provider using registry pattern."""
|
||||
from crewai.utilities.internal_instructor import InternalInstructor
|
||||
|
||||
# Mock LLM with Bedrock provider
|
||||
mock_llm = Mock()
|
||||
mock_llm.is_litellm = False
|
||||
mock_llm.model = "claude-3-5-sonnet-20241022"
|
||||
mock_llm.provider = "bedrock"
|
||||
|
||||
# Mock instructor client
|
||||
mock_client = Mock()
|
||||
mock_client.chat.completions.create.return_value = SimpleModel(name="Charlie", age=35)
|
||||
|
||||
# Patch the instructor import at the method level
|
||||
with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
|
||||
mock_create_client.return_value = mock_client
|
||||
|
||||
instructor = InternalInstructor(
|
||||
content="Name: Charlie, Age: 35",
|
||||
model=SimpleModel,
|
||||
llm=mock_llm
|
||||
)
|
||||
|
||||
result = instructor.to_pydantic()
|
||||
|
||||
assert isinstance(result, SimpleModel)
|
||||
assert result.name == "Charlie"
|
||||
assert result.age == 35
|
||||
# Verify the method was called with the correct LLM
|
||||
mock_create_client.assert_called_once()
|
||||
|
||||
|
||||
def test_internal_instructor_with_gemini_provider() -> None:
|
||||
"""Test InternalInstructor with Google Gemini provider using registry pattern."""
|
||||
from crewai.utilities.internal_instructor import InternalInstructor
|
||||
|
||||
# Mock LLM with Gemini provider
|
||||
mock_llm = Mock()
|
||||
mock_llm.is_litellm = False
|
||||
mock_llm.model = "gemini-1.5-flash"
|
||||
mock_llm.provider = "google"
|
||||
|
||||
# Mock instructor client
|
||||
mock_client = Mock()
|
||||
mock_client.chat.completions.create.return_value = SimpleModel(name="Diana", age=28)
|
||||
|
||||
# Patch the instructor import at the method level
|
||||
with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
|
||||
        mock_create_client.return_value = mock_client

        instructor = InternalInstructor(
            content="Name: Diana, Age: 28",
            model=SimpleModel,
            llm=mock_llm
        )

        result = instructor.to_pydantic()

        assert isinstance(result, SimpleModel)
        assert result.name == "Diana"
        assert result.age == 28
        # Verify the method was called with the correct LLM
        mock_create_client.assert_called_once()


def test_internal_instructor_with_azure_provider() -> None:
    """Test InternalInstructor with Azure OpenAI provider using registry pattern."""
    from crewai.utilities.internal_instructor import InternalInstructor

    # Mock LLM with Azure provider
    mock_llm = Mock()
    mock_llm.is_litellm = False
    mock_llm.model = "gpt-4o"
    mock_llm.provider = "azure"

    # Mock instructor client
    mock_client = Mock()
    mock_client.chat.completions.create.return_value = SimpleModel(name="Eve", age=32)

    # Patch the instructor import at the method level
    with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
        mock_create_client.return_value = mock_client

        instructor = InternalInstructor(
            content="Name: Eve, Age: 32",
            model=SimpleModel,
            llm=mock_llm
        )

        result = instructor.to_pydantic()

        assert isinstance(result, SimpleModel)
        assert result.name == "Eve"
        assert result.age == 32
        # Verify the method was called with the correct LLM
        mock_create_client.assert_called_once()


def test_internal_instructor_unsupported_provider() -> None:
    """Test InternalInstructor with unsupported provider raises appropriate error."""
    from crewai.utilities.internal_instructor import InternalInstructor

    # Mock LLM with unsupported provider
    mock_llm = Mock()
    mock_llm.is_litellm = False
    mock_llm.model = "unsupported-model"
    mock_llm.provider = "unsupported"

    # Mock the _create_instructor_client method to raise an error for unsupported providers
    with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
        mock_create_client.side_effect = Exception("Unsupported provider: unsupported")

        # This should raise an error when trying to create the instructor client
        with pytest.raises(Exception) as exc_info:
            instructor = InternalInstructor(
                content="Test content",
                model=SimpleModel,
                llm=mock_llm
            )
            instructor.to_pydantic()

        # Verify it's the expected error
        assert "Unsupported provider" in str(exc_info.value)


def test_internal_instructor_real_unsupported_provider() -> None:
    """Test InternalInstructor with real unsupported provider using actual instructor library."""
    from crewai.utilities.internal_instructor import InternalInstructor

    # Mock LLM with unsupported provider that would actually fail with instructor
    mock_llm = Mock()
    mock_llm.is_litellm = False
    mock_llm.model = "unsupported-model"
    mock_llm.provider = "unsupported"

    # This should raise a ConfigurationError from the real instructor library
    with pytest.raises(Exception) as exc_info:
        instructor = InternalInstructor(
            content="Test content",
            model=SimpleModel,
            llm=mock_llm
        )
        instructor.to_pydantic()

    # Verify it's a configuration error about unsupported provider
    assert "Unsupported provider" in str(exc_info.value) or "unsupported" in str(exc_info.value).lower()
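These tests only stub `_create_instructor_client`, so the real registry is not shown in this diff. As a rough, hypothetical sketch of the provider dispatch they imply (not the actual `InternalInstructor` implementation), a factory keyed on `llm.provider` could look like this:

```python
# Hypothetical sketch of provider dispatch; the real InternalInstructor registry may differ.
import instructor
from openai import AzureOpenAI, OpenAI


def create_instructor_client(provider: str):
    """Build an instructor-patched client for a supported provider."""
    if provider == "openai":
        return instructor.from_openai(OpenAI())
    if provider == "azure":
        # AzureOpenAI reads endpoint/api-version settings from the environment
        return instructor.from_openai(AzureOpenAI())
    raise ValueError(f"Unsupported provider: {provider}")
```

Raising on unknown providers is the behavior the two "unsupported provider" tests above assert against.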
@@ -1,77 +1,79 @@
import os
from typing import Any
from unittest.mock import patch

from crewai.cli.constants import DEFAULT_LLM_MODEL
from crewai.llm import LLM
from crewai.llms.base_llm import BaseLLM
from crewai.utilities.llm_utils import create_llm
import pytest


try:
    from litellm.exceptions import BadRequestError
except ImportError:
    BadRequestError = Exception


def test_create_llm_with_llm_instance():
    existing_llm = LLM(model="gpt-4o")
    llm = create_llm(llm_value=existing_llm)
    assert llm is existing_llm


def test_create_llm_with_valid_model_string():
    llm = create_llm(llm_value="gpt-4o")
    assert isinstance(llm, BaseLLM)
    assert llm.model == "gpt-4o"


def test_create_llm_with_invalid_model_string():
    # For invalid model strings, create_llm succeeds but call() fails with API error
    llm = create_llm(llm_value="invalid-model")
    assert llm is not None
    assert isinstance(llm, BaseLLM)

    # The error should occur when making the actual API call
    # We expect some kind of API error (NotFoundError, etc.)
    with pytest.raises(Exception):  # noqa: B017
        llm.call(messages=[{"role": "user", "content": "Hello, world!"}])


def test_create_llm_with_unknown_object_missing_attributes():
    class UnknownObject:
        pass

    unknown_obj = UnknownObject()
    llm = create_llm(llm_value=unknown_obj)

    # Should succeed because str(unknown_obj) provides a model name
    assert llm is not None
    assert isinstance(llm, BaseLLM)


def test_create_llm_with_none_uses_default_model():
def test_create_llm_with_llm_instance() -> None:
    with patch.dict(os.environ, {"OPENAI_API_KEY": "fake-key"}, clear=True):
        with patch("crewai.utilities.llm_utils.DEFAULT_LLM_MODEL", "gpt-4o-mini"):
            existing_llm = LLM(model="gpt-4o")
            llm = create_llm(llm_value=existing_llm)
            assert llm is existing_llm


def test_create_llm_with_valid_model_string() -> None:
    with patch.dict(os.environ, {"OPENAI_API_KEY": "fake-key"}, clear=True):
        llm = create_llm(llm_value="gpt-4o")
        assert isinstance(llm, BaseLLM)
        assert llm.model == "gpt-4o"


def test_create_llm_with_invalid_model_string() -> None:
    with patch.dict(os.environ, {"OPENAI_API_KEY": "fake-key"}, clear=True):
        # For invalid model strings, create_llm succeeds but call() fails with API error
        llm = create_llm(llm_value="invalid-model")
        assert llm is not None
        assert isinstance(llm, BaseLLM)

        # The error should occur when making the actual API call
        # We expect some kind of API error (NotFoundError, etc.)
        with pytest.raises(Exception):  # noqa: B017
            llm.call(messages=[{"role": "user", "content": "Hello, world!"}])


def test_create_llm_with_unknown_object_missing_attributes() -> None:
    with patch.dict(os.environ, {"OPENAI_API_KEY": "fake-key"}, clear=True):
        class UnknownObject:
            pass

        unknown_obj = UnknownObject()
        llm = create_llm(llm_value=unknown_obj)

        # Should succeed because str(unknown_obj) provides a model name
        assert llm is not None
        assert isinstance(llm, BaseLLM)


def test_create_llm_with_none_uses_default_model() -> None:
    with patch.dict(os.environ, {"OPENAI_API_KEY": "fake-key"}, clear=True):
        with patch("crewai.utilities.llm_utils.DEFAULT_LLM_MODEL", DEFAULT_LLM_MODEL):
            llm = create_llm(llm_value=None)
            assert isinstance(llm, BaseLLM)
            assert llm.model == "gpt-4o-mini"
            assert llm.model == DEFAULT_LLM_MODEL


def test_create_llm_with_unknown_object():
    class UnknownObject:
        model_name = "gpt-4o"
        temperature = 0.7
        max_tokens = 1500
def test_create_llm_with_unknown_object() -> None:
    with patch.dict(os.environ, {"OPENAI_API_KEY": "fake-key"}, clear=True):
        class UnknownObject:
            model_name = "gpt-4o"
            temperature = 0.7
            max_tokens = 1500

    unknown_obj = UnknownObject()
    llm = create_llm(llm_value=unknown_obj)
    assert isinstance(llm, BaseLLM)
    assert llm.model == "gpt-4o"
    assert llm.temperature == 0.7
    assert llm.max_tokens == 1500
        unknown_obj = UnknownObject()
        llm = create_llm(llm_value=unknown_obj)
        assert isinstance(llm, BaseLLM)
        assert llm.model == "gpt-4o"
        assert llm.temperature == 0.7
        if hasattr(llm, 'max_tokens'):
            assert llm.max_tokens == 1500


def test_create_llm_from_env_with_unaccepted_attributes():
def test_create_llm_from_env_with_unaccepted_attributes() -> None:
    with patch.dict(
        os.environ,
        {
@@ -90,25 +92,47 @@ def test_create_llm_from_env_with_unaccepted_attributes():
        assert not hasattr(llm, "AWS_REGION_NAME")


def test_create_llm_with_partial_attributes():
    class PartialAttributes:
        model_name = "gpt-4o"
        # temperature is missing
def test_create_llm_with_partial_attributes() -> None:
    with patch.dict(os.environ, {"OPENAI_API_KEY": "fake-key"}, clear=True):
        class PartialAttributes:
            model_name = "gpt-4o"
            # temperature is missing

    obj = PartialAttributes()
    llm = create_llm(llm_value=obj)
    assert isinstance(llm, BaseLLM)
    assert llm.model == "gpt-4o"
    assert llm.temperature is None  # Should handle missing attributes gracefully
        obj = PartialAttributes()
        llm = create_llm(llm_value=obj)
        assert isinstance(llm, BaseLLM)
        assert llm.model == "gpt-4o"
        assert llm.temperature is None  # Should handle missing attributes gracefully


def test_create_llm_with_invalid_type():
    # For integers, create_llm succeeds because str(42) becomes "42"
    llm = create_llm(llm_value=42)
    assert llm is not None
    assert isinstance(llm, BaseLLM)
    assert llm.model == "42"
def test_create_llm_with_invalid_type() -> None:
    with patch.dict(os.environ, {"OPENAI_API_KEY": "fake-key"}, clear=True):
        # For integers, create_llm succeeds because str(42) becomes "42"
        llm = create_llm(llm_value=42)
        assert llm is not None
        assert isinstance(llm, BaseLLM)
        assert llm.model == "42"

    # The error should occur when making the actual API call
    with pytest.raises(Exception):  # noqa: B017
        llm.call(messages=[{"role": "user", "content": "Hello, world!"}])
        # The error should occur when making the actual API call
        with pytest.raises(Exception):  # noqa: B017
            llm.call(messages=[{"role": "user", "content": "Hello, world!"}])


def test_create_llm_openai_missing_api_key() -> None:
    """Test that create_llm raises error when OpenAI API key is missing"""
    with patch.dict(os.environ, {}, clear=True):
        with pytest.raises((ValueError, ImportError)) as exc_info:
            create_llm(llm_value="gpt-4o")

        error_message = str(exc_info.value).lower()
        assert "openai_api_key" in error_message or "api_key" in error_message


def test_create_llm_anthropic_missing_dependency() -> None:
    """Test that create_llm raises error when Anthropic dependency is missing"""
    with patch.dict(os.environ, {"ANTHROPIC_API_KEY": "fake-key"}, clear=True):
        with patch("crewai.llm.LLM.__new__", side_effect=ImportError('Anthropic native provider not available, to install: uv add "crewai[anthropic]"')):
            with pytest.raises(ImportError) as exc_info:
                create_llm(llm_value="anthropic/claude-3-sonnet")

            assert "Anthropic native provider not available, to install: uv add \"crewai[anthropic]\"" in str(exc_info.value)
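Taken together, the assertions above pin down `create_llm`'s accepted inputs: an existing `BaseLLM` passes through untouched, a model string (or any string-convertible object) is wrapped, and `None` falls back to `DEFAULT_LLM_MODEL`. A minimal usage sketch of that behavior follows; a placeholder key is enough to construct the objects, but real completions would still fail:

```python
# Usage sketch derived from the test assertions; construction succeeds with a
# placeholder key, but actual call() requests need a valid OPENAI_API_KEY.
import os

from crewai.llm import LLM
from crewai.utilities.llm_utils import create_llm

os.environ.setdefault("OPENAI_API_KEY", "fake-key")

passthrough = create_llm(llm_value=LLM(model="gpt-4o"))  # returned as-is
from_string = create_llm(llm_value="gpt-4o")             # wrapped into a BaseLLM
from_default = create_llm(llm_value=None)                # uses DEFAULT_LLM_MODEL

print(from_string.model, from_default.model)
```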
@@ -1,3 +1,3 @@
"""CrewAI development tools."""

__version__ = "1.0.0"
__version__ = "1.2.0"
@@ -124,7 +124,7 @@ exclude = [
    "lib/crewai-tools/tests/",
    "lib/crewai/src/crewai/experimental/a2a"
]
plugins = ["pydantic.mypy"]
plugins = ["pydantic.mypy", "crewai.mypy"]


[tool.bandit]
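The `plugins` line now registers a project-local mypy plugin, `crewai.mypy`, alongside `pydantic.mypy`. That module's contents are not part of this diff; as a generic illustration only, a mypy plugin module just needs to expose a `plugin()` entry point that returns a `Plugin` subclass:

```python
# Generic mypy plugin skeleton (illustrative; not the actual crewai.mypy module).
from mypy.plugin import Plugin


class CrewAIPlugin(Plugin):
    """Override hook methods (e.g. get_type_analyze_hook) to customize type checking."""


def plugin(version: str) -> type[Plugin]:
    # mypy calls this entry point with its own version string
    return CrewAIPlugin
```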
56
uv.lock
generated
@@ -204,18 +204,6 @@ wheels = [
    { url = "https://files.pythonhosted.org/packages/fb/76/641ae371508676492379f16e2fa48f4e2c11741bd63c48be4b12a6b09cba/aiosignal-1.4.0-py3-none-any.whl", hash = "sha256:053243f8b92b990551949e63930a839ff0cf0b0ebbe0597b0f3fb19e1a0fe82e", size = 7490, upload-time = "2025-07-03T22:54:42.156Z" },
]

[[package]]
name = "aisuite"
version = "0.1.11"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "httpx" },
]
sdist = { url = "https://files.pythonhosted.org/packages/17/07/129a68a6f74a80fc1d189064a2f576a84a1a05f14f211fde9352668d1c25/aisuite-0.1.11.tar.gz", hash = "sha256:27260075f8502b9cb40ef476cae29544e39316bbf4b4318464eb4c728e72146a", size = 27533, upload-time = "2025-03-26T12:04:44.068Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/d0/f7/a2799bf017d0303bb2f6c10f55f9c85619a0c8b9cf77fb8a9579961bfe88/aisuite-0.1.11-py3-none-any.whl", hash = "sha256:14293e9b7d81268dabe9b1cbb41cab64ca6c0272b52166213a7fa80196140d7c", size = 41222, upload-time = "2025-03-26T12:04:42.472Z" },
]

[[package]]
name = "annotated-types"
version = "0.7.0"
@@ -1098,9 +1086,6 @@ dependencies = [
]

[package.optional-dependencies]
aisuite = [
    { name = "aisuite" },
]
anthropic = [
    { name = "anthropic" },
]
@@ -1153,7 +1138,6 @@ watson = [

[package.metadata]
requires-dist = [
    { name = "aisuite", marker = "extra == 'aisuite'", specifier = ">=0.1.11" },
    { name = "anthropic", marker = "extra == 'anthropic'", specifier = ">=0.69.0" },
    { name = "appdirs", specifier = ">=1.4.4" },
    { name = "azure-ai-inference", marker = "extra == 'azure-ai-inference'", specifier = ">=1.0.0b9" },
@@ -1196,7 +1180,7 @@ requires-dist = [
    { name = "uv", specifier = ">=0.4.25" },
    { name = "voyageai", marker = "extra == 'voyageai'", specifier = ">=0.3.5" },
]
provides-extras = ["aisuite", "anthropic", "aws", "azure-ai-inference", "bedrock", "docling", "embeddings", "google-genai", "litellm", "mem0", "openpyxl", "pandas", "pdfplumber", "qdrant", "tools", "voyageai", "watson"]
provides-extras = ["anthropic", "aws", "azure-ai-inference", "bedrock", "docling", "embeddings", "google-genai", "litellm", "mem0", "openpyxl", "pandas", "pdfplumber", "qdrant", "tools", "voyageai", "watson"]

[[package]]
name = "crewai-devtools"
@@ -1810,7 +1794,7 @@ name = "exceptiongroup"
|
||||
version = "1.3.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "typing-extensions", marker = "python_full_version < '3.13'" },
|
||||
{ name = "typing-extensions", marker = "python_full_version < '3.11'" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/0b/9f/a65090624ecf468cdca03533906e7c69ed7588582240cfe7cc9e770b50eb/exceptiongroup-1.3.0.tar.gz", hash = "sha256:b241f5885f560bc56a59ee63ca4c6a8bfa46ae4ad651af316d4e81817bb9fd88", size = 29749, upload-time = "2025-05-10T17:42:51.123Z" }
|
||||
wheels = [
|
||||
@@ -4445,7 +4429,7 @@ name = "nvidia-cudnn-cu12"
|
||||
version = "9.10.2.21"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "nvidia-cublas-cu12" },
|
||||
{ name = "nvidia-cublas-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
|
||||
]
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/ba/51/e123d997aa098c61d029f76663dedbfb9bc8dcf8c60cbd6adbe42f76d049/nvidia_cudnn_cu12-9.10.2.21-py3-none-manylinux_2_27_x86_64.whl", hash = "sha256:949452be657fa16687d0930933f032835951ef0892b37d2d53824d1a84dc97a8", size = 706758467, upload-time = "2025-06-06T21:54:08.597Z" },
|
||||
@@ -4456,7 +4440,7 @@ name = "nvidia-cufft-cu12"
|
||||
version = "11.3.3.83"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "nvidia-nvjitlink-cu12" },
|
||||
{ name = "nvidia-nvjitlink-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
|
||||
]
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/1f/13/ee4e00f30e676b66ae65b4f08cb5bcbb8392c03f54f2d5413ea99a5d1c80/nvidia_cufft_cu12-11.3.3.83-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:4d2dd21ec0b88cf61b62e6b43564355e5222e4a3fb394cac0db101f2dd0d4f74", size = 193118695, upload-time = "2025-03-07T01:45:27.821Z" },
|
||||
@@ -4483,9 +4467,9 @@ name = "nvidia-cusolver-cu12"
|
||||
version = "11.7.3.90"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "nvidia-cublas-cu12" },
|
||||
{ name = "nvidia-cusparse-cu12" },
|
||||
{ name = "nvidia-nvjitlink-cu12" },
|
||||
{ name = "nvidia-cublas-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
|
||||
{ name = "nvidia-cusparse-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
|
||||
{ name = "nvidia-nvjitlink-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
|
||||
]
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/85/48/9a13d2975803e8cf2777d5ed57b87a0b6ca2cc795f9a4f59796a910bfb80/nvidia_cusolver_cu12-11.7.3.90-py3-none-manylinux_2_27_x86_64.whl", hash = "sha256:4376c11ad263152bd50ea295c05370360776f8c3427b30991df774f9fb26c450", size = 267506905, upload-time = "2025-03-07T01:47:16.273Z" },
|
||||
@@ -4496,7 +4480,7 @@ name = "nvidia-cusparse-cu12"
|
||||
version = "12.5.8.93"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "nvidia-nvjitlink-cu12" },
|
||||
{ name = "nvidia-nvjitlink-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
|
||||
]
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/c2/f5/e1854cb2f2bcd4280c44736c93550cc300ff4b8c95ebe370d0aa7d2b473d/nvidia_cusparse_cu12-12.5.8.93-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:1ec05d76bbbd8b61b06a80e1eaf8cf4959c3d4ce8e711b65ebd0443bb0ebb13b", size = 288216466, upload-time = "2025-03-07T01:48:13.779Z" },
|
||||
@@ -4556,9 +4540,9 @@ name = "ocrmac"
|
||||
version = "1.0.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "click" },
|
||||
{ name = "pillow" },
|
||||
{ name = "pyobjc-framework-vision" },
|
||||
{ name = "click", marker = "sys_platform == 'darwin'" },
|
||||
{ name = "pillow", marker = "sys_platform == 'darwin'" },
|
||||
{ name = "pyobjc-framework-vision", marker = "sys_platform == 'darwin'" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/dd/dc/de3e9635774b97d9766f6815bbb3f5ec9bce347115f10d9abbf2733a9316/ocrmac-1.0.0.tar.gz", hash = "sha256:5b299e9030c973d1f60f82db000d6c2e5ff271601878c7db0885e850597d1d2e", size = 1463997, upload-time = "2024-11-07T12:00:00.197Z" }
|
||||
wheels = [
|
||||
@@ -6183,7 +6167,7 @@ name = "pyobjc-framework-cocoa"
|
||||
version = "11.1"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "pyobjc-core" },
|
||||
{ name = "pyobjc-core", marker = "sys_platform == 'darwin'" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/4b/c5/7a866d24bc026f79239b74d05e2cf3088b03263da66d53d1b4cf5207f5ae/pyobjc_framework_cocoa-11.1.tar.gz", hash = "sha256:87df76b9b73e7ca699a828ff112564b59251bb9bbe72e610e670a4dc9940d038", size = 5565335, upload-time = "2025-06-14T20:56:59.683Z" }
|
||||
wheels = [
|
||||
@@ -6199,8 +6183,8 @@ name = "pyobjc-framework-coreml"
|
||||
version = "11.1"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "pyobjc-core" },
|
||||
{ name = "pyobjc-framework-cocoa" },
|
||||
{ name = "pyobjc-core", marker = "sys_platform == 'darwin'" },
|
||||
{ name = "pyobjc-framework-cocoa", marker = "sys_platform == 'darwin'" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/0d/5d/4309f220981d769b1a2f0dcb2c5c104490d31389a8ebea67e5595ce1cb74/pyobjc_framework_coreml-11.1.tar.gz", hash = "sha256:775923eefb9eac2e389c0821b10564372de8057cea89f1ea1cdaf04996c970a7", size = 82005, upload-time = "2025-06-14T20:57:12.004Z" }
|
||||
wheels = [
|
||||
@@ -6216,8 +6200,8 @@ name = "pyobjc-framework-quartz"
|
||||
version = "11.1"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "pyobjc-core" },
|
||||
{ name = "pyobjc-framework-cocoa" },
|
||||
{ name = "pyobjc-core", marker = "sys_platform == 'darwin'" },
|
||||
{ name = "pyobjc-framework-cocoa", marker = "sys_platform == 'darwin'" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/c7/ac/6308fec6c9ffeda9942fef72724f4094c6df4933560f512e63eac37ebd30/pyobjc_framework_quartz-11.1.tar.gz", hash = "sha256:a57f35ccfc22ad48c87c5932818e583777ff7276605fef6afad0ac0741169f75", size = 3953275, upload-time = "2025-06-14T20:58:17.924Z" }
|
||||
wheels = [
|
||||
@@ -6233,10 +6217,10 @@ name = "pyobjc-framework-vision"
|
||||
version = "11.1"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "pyobjc-core" },
|
||||
{ name = "pyobjc-framework-cocoa" },
|
||||
{ name = "pyobjc-framework-coreml" },
|
||||
{ name = "pyobjc-framework-quartz" },
|
||||
{ name = "pyobjc-core", marker = "sys_platform == 'darwin'" },
|
||||
{ name = "pyobjc-framework-cocoa", marker = "sys_platform == 'darwin'" },
|
||||
{ name = "pyobjc-framework-coreml", marker = "sys_platform == 'darwin'" },
|
||||
{ name = "pyobjc-framework-quartz", marker = "sys_platform == 'darwin'" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/40/a8/7128da4d0a0103cabe58910a7233e2f98d18c590b1d36d4b3efaaedba6b9/pyobjc_framework_vision-11.1.tar.gz", hash = "sha256:26590512ee7758da3056499062a344b8a351b178be66d4b719327884dde4216b", size = 133721, upload-time = "2025-06-14T20:58:46.095Z" }
|
||||
wheels = [
|
||||
|
||||