chore: restructure test env, cassettes, and conftest; fix flaky tests

Consolidates pytest config, standardizes env handling, reorganizes cassette layout, removes outdated VCR configs, improves sync with threading.Condition, updates event-waiting logic, ensures cleanup, regenerates Gemini cassettes, and reverts unintended test changes.
Greyson LaLonde authored on 2025-11-29 16:55:24 -05:00, committed by GitHub
parent bc4e6a3127, commit c925d2d519
200 changed files with 2070 additions and 1891 deletions

.env.test (new file, 161 lines)

@@ -0,0 +1,161 @@
# =============================================================================
# Test Environment Variables
# =============================================================================
# This file contains all environment variables needed to run tests locally
# in a way that mimics the GitHub Actions CI environment.
# =============================================================================
# -----------------------------------------------------------------------------
# LLM Provider API Keys
# -----------------------------------------------------------------------------
OPENAI_API_KEY=fake-api-key
ANTHROPIC_API_KEY=fake-anthropic-key
GEMINI_API_KEY=fake-gemini-key
AZURE_API_KEY=fake-azure-key
OPENROUTER_API_KEY=fake-openrouter-key
# -----------------------------------------------------------------------------
# AWS Credentials
# -----------------------------------------------------------------------------
AWS_ACCESS_KEY_ID=fake-aws-access-key
AWS_SECRET_ACCESS_KEY=fake-aws-secret-key
AWS_DEFAULT_REGION=us-east-1
AWS_REGION_NAME=us-east-1
# -----------------------------------------------------------------------------
# Azure OpenAI Configuration
# -----------------------------------------------------------------------------
AZURE_ENDPOINT=https://fake-azure-endpoint.openai.azure.com
AZURE_OPENAI_ENDPOINT=https://fake-azure-endpoint.openai.azure.com
AZURE_OPENAI_API_KEY=fake-azure-openai-key
AZURE_API_VERSION=2024-02-15-preview
OPENAI_API_VERSION=2024-02-15-preview
# -----------------------------------------------------------------------------
# Google Cloud Configuration
# -----------------------------------------------------------------------------
#GOOGLE_CLOUD_PROJECT=fake-gcp-project
#GOOGLE_CLOUD_LOCATION=us-central1
# -----------------------------------------------------------------------------
# OpenAI Configuration
# -----------------------------------------------------------------------------
OPENAI_BASE_URL=https://api.openai.com/v1
OPENAI_API_BASE=https://api.openai.com/v1
# -----------------------------------------------------------------------------
# Search & Scraping Tool API Keys
# -----------------------------------------------------------------------------
SERPER_API_KEY=fake-serper-key
EXA_API_KEY=fake-exa-key
BRAVE_API_KEY=fake-brave-key
FIRECRAWL_API_KEY=fake-firecrawl-key
TAVILY_API_KEY=fake-tavily-key
SERPAPI_API_KEY=fake-serpapi-key
SERPLY_API_KEY=fake-serply-key
LINKUP_API_KEY=fake-linkup-key
PARALLEL_API_KEY=fake-parallel-key
# -----------------------------------------------------------------------------
# Exa Configuration
# -----------------------------------------------------------------------------
EXA_BASE_URL=https://api.exa.ai
# -----------------------------------------------------------------------------
# Web Scraping & Automation
# -----------------------------------------------------------------------------
BRIGHT_DATA_API_KEY=fake-brightdata-key
BRIGHT_DATA_ZONE=fake-zone
BRIGHTDATA_API_URL=https://api.brightdata.com
BRIGHTDATA_DEFAULT_TIMEOUT=600
BRIGHTDATA_DEFAULT_POLLING_INTERVAL=1
OXYLABS_USERNAME=fake-oxylabs-user
OXYLABS_PASSWORD=fake-oxylabs-pass
SCRAPFLY_API_KEY=fake-scrapfly-key
SCRAPEGRAPH_API_KEY=fake-scrapegraph-key
BROWSERBASE_API_KEY=fake-browserbase-key
BROWSERBASE_PROJECT_ID=fake-browserbase-project
HYPERBROWSER_API_KEY=fake-hyperbrowser-key
MULTION_API_KEY=fake-multion-key
APIFY_API_TOKEN=fake-apify-token
# -----------------------------------------------------------------------------
# Database & Vector Store Credentials
# -----------------------------------------------------------------------------
SINGLESTOREDB_URL=mysql://fake:fake@localhost:3306/fake
SINGLESTOREDB_HOST=localhost
SINGLESTOREDB_PORT=3306
SINGLESTOREDB_USER=fake-user
SINGLESTOREDB_PASSWORD=fake-password
SINGLESTOREDB_DATABASE=fake-database
SINGLESTOREDB_CONNECT_TIMEOUT=30
SNOWFLAKE_USER=fake-snowflake-user
SNOWFLAKE_PASSWORD=fake-snowflake-password
SNOWFLAKE_ACCOUNT=fake-snowflake-account
SNOWFLAKE_WAREHOUSE=fake-snowflake-warehouse
SNOWFLAKE_DATABASE=fake-snowflake-database
SNOWFLAKE_SCHEMA=fake-snowflake-schema
WEAVIATE_URL=http://localhost:8080
WEAVIATE_API_KEY=fake-weaviate-key
EMBEDCHAIN_DB_URI=sqlite:///test.db
# Databricks Credentials
DATABRICKS_HOST=https://fake-databricks.cloud.databricks.com
DATABRICKS_TOKEN=fake-databricks-token
DATABRICKS_CONFIG_PROFILE=fake-profile
# MongoDB Credentials
MONGODB_URI=mongodb://fake:fake@localhost:27017/fake
# -----------------------------------------------------------------------------
# CrewAI Platform & Enterprise
# -----------------------------------------------------------------------------
# setting CREWAI_PLATFORM_INTEGRATION_TOKEN causes these tests to fail:
#=========================== short test summary info ============================
#FAILED tests/test_context.py::TestPlatformIntegrationToken::test_platform_context_manager_basic_usage - AssertionError: assert 'fake-platform-token' is None
# + where 'fake-platform-token' = get_platform_integration_token()
#FAILED tests/test_context.py::TestPlatformIntegrationToken::test_context_var_isolation_between_tests - AssertionError: assert 'fake-platform-token' is None
# + where 'fake-platform-token' = get_platform_integration_token()
#FAILED tests/test_context.py::TestPlatformIntegrationToken::test_multiple_sequential_context_managers - AssertionError: assert 'fake-platform-token' is None
# + where 'fake-platform-token' = get_platform_integration_token()
#CREWAI_PLATFORM_INTEGRATION_TOKEN=fake-platform-token
CREWAI_PERSONAL_ACCESS_TOKEN=fake-personal-token
CREWAI_PLUS_URL=https://fake.crewai.com
# -----------------------------------------------------------------------------
# Other Service API Keys
# -----------------------------------------------------------------------------
ZAPIER_API_KEY=fake-zapier-key
PATRONUS_API_KEY=fake-patronus-key
MINDS_API_KEY=fake-minds-key
HF_TOKEN=fake-hf-token
# -----------------------------------------------------------------------------
# Feature Flags/Testing Modes
# -----------------------------------------------------------------------------
CREWAI_DISABLE_TELEMETRY=true
OTEL_SDK_DISABLED=true
CREWAI_TESTING=true
CREWAI_TRACING_ENABLED=false
# -----------------------------------------------------------------------------
# Testing/CI Configuration
# -----------------------------------------------------------------------------
# VCR recording mode: "none" (default), "new_episodes", "all", "once"
PYTEST_VCR_RECORD_MODE=none
# Set to "true" by GitHub when running in GitHub Actions
# GITHUB_ACTIONS=false
# -----------------------------------------------------------------------------
# Python Configuration
# -----------------------------------------------------------------------------
PYTHONUNBUFFERED=1
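
The values above are placeholders; conftest.py (added below) loads this file before any test runs. As a minimal sanity-check sketch (the test name and assertions here are illustrative, not part of this commit), one could confirm the fake credentials are in place:

import os

def test_env_test_is_loaded():
    # conftest.py calls load_dotenv(".env.test", override=True), so the
    # placeholder keys should be visible to every test in the workspace.
    assert os.environ.get("OPENAI_API_KEY") == "fake-api-key"
    assert os.environ.get("CREWAI_TESTING") == "true"
    assert os.environ.get("PYTEST_VCR_RECORD_MODE", "none") in {"none", "new_episodes", "all", "once"}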


@@ -5,18 +5,6 @@ on: [pull_request]
permissions: permissions:
contents: read contents: read
env:
OPENAI_API_KEY: fake-api-key
PYTHONUNBUFFERED: 1
BRAVE_API_KEY: fake-brave-key
SNOWFLAKE_USER: fake-snowflake-user
SNOWFLAKE_PASSWORD: fake-snowflake-password
SNOWFLAKE_ACCOUNT: fake-snowflake-account
SNOWFLAKE_WAREHOUSE: fake-snowflake-warehouse
SNOWFLAKE_DATABASE: fake-snowflake-database
SNOWFLAKE_SCHEMA: fake-snowflake-schema
EMBEDCHAIN_DB_URI: sqlite:///test.db
jobs: jobs:
tests: tests:
name: tests (${{ matrix.python-version }}) name: tests (${{ matrix.python-version }})
@@ -84,26 +72,20 @@ jobs:
# fi # fi
cd lib/crewai && uv run pytest \ cd lib/crewai && uv run pytest \
--block-network \
--timeout=30 \
-vv \ -vv \
--splits 8 \ --splits 8 \
--group ${{ matrix.group }} \ --group ${{ matrix.group }} \
$DURATIONS_ARG \ $DURATIONS_ARG \
--durations=10 \ --durations=10 \
-n auto \
--maxfail=3 --maxfail=3
- name: Run tool tests (group ${{ matrix.group }} of 8) - name: Run tool tests (group ${{ matrix.group }} of 8)
run: | run: |
cd lib/crewai-tools && uv run pytest \ cd lib/crewai-tools && uv run pytest \
--block-network \
--timeout=30 \
-vv \ -vv \
--splits 8 \ --splits 8 \
--group ${{ matrix.group }} \ --group ${{ matrix.group }} \
--durations=10 \ --durations=10 \
-n auto \
--maxfail=3 --maxfail=3

conftest.py (new file, 166 lines)

@@ -0,0 +1,166 @@
"""Pytest configuration for crewAI workspace."""
from collections.abc import Generator
import os
from pathlib import Path
import tempfile
from typing import Any
from dotenv import load_dotenv
import pytest
from vcr.request import Request # type: ignore[import-untyped]
env_test_path = Path(__file__).parent / ".env.test"
load_dotenv(env_test_path, override=True)
load_dotenv(override=True)
@pytest.fixture(autouse=True, scope="function")
def cleanup_event_handlers() -> Generator[None, Any, None]:
"""Clean up event bus handlers after each test to prevent test pollution."""
yield
try:
from crewai.events.event_bus import crewai_event_bus
with crewai_event_bus._rwlock.w_locked():
crewai_event_bus._sync_handlers.clear()
crewai_event_bus._async_handlers.clear()
except Exception: # noqa: S110
pass
@pytest.fixture(autouse=True, scope="function")
def setup_test_environment() -> Generator[None, Any, None]:
"""Setup test environment for crewAI workspace."""
with tempfile.TemporaryDirectory() as temp_dir:
storage_dir = Path(temp_dir) / "crewai_test_storage"
storage_dir.mkdir(parents=True, exist_ok=True)
if not storage_dir.exists() or not storage_dir.is_dir():
raise RuntimeError(
f"Failed to create test storage directory: {storage_dir}"
)
try:
test_file = storage_dir / ".permissions_test"
test_file.touch()
test_file.unlink()
except (OSError, IOError) as e:
raise RuntimeError(
f"Test storage directory {storage_dir} is not writable: {e}"
) from e
os.environ["CREWAI_STORAGE_DIR"] = str(storage_dir)
os.environ["CREWAI_TESTING"] = "true"
try:
yield
finally:
os.environ.pop("CREWAI_TESTING", "true")
os.environ.pop("CREWAI_STORAGE_DIR", None)
os.environ.pop("CREWAI_DISABLE_TELEMETRY", "true")
os.environ.pop("OTEL_SDK_DISABLED", "true")
os.environ.pop("OPENAI_BASE_URL", "https://api.openai.com/v1")
os.environ.pop("OPENAI_API_BASE", "https://api.openai.com/v1")
HEADERS_TO_FILTER = {
"authorization": "AUTHORIZATION-XXX",
"content-security-policy": "CSP-FILTERED",
"cookie": "COOKIE-XXX",
"set-cookie": "SET-COOKIE-XXX",
"permissions-policy": "PERMISSIONS-POLICY-XXX",
"referrer-policy": "REFERRER-POLICY-XXX",
"strict-transport-security": "STS-XXX",
"x-content-type-options": "X-CONTENT-TYPE-XXX",
"x-frame-options": "X-FRAME-OPTIONS-XXX",
"x-permitted-cross-domain-policies": "X-PERMITTED-XXX",
"x-request-id": "X-REQUEST-ID-XXX",
"x-runtime": "X-RUNTIME-XXX",
"x-xss-protection": "X-XSS-PROTECTION-XXX",
"x-stainless-arch": "X-STAINLESS-ARCH-XXX",
"x-stainless-os": "X-STAINLESS-OS-XXX",
"x-stainless-read-timeout": "X-STAINLESS-READ-TIMEOUT-XXX",
"cf-ray": "CF-RAY-XXX",
"etag": "ETAG-XXX",
"Strict-Transport-Security": "STS-XXX",
"access-control-expose-headers": "ACCESS-CONTROL-XXX",
"openai-organization": "OPENAI-ORG-XXX",
"openai-project": "OPENAI-PROJECT-XXX",
"x-ratelimit-limit-requests": "X-RATELIMIT-LIMIT-REQUESTS-XXX",
"x-ratelimit-limit-tokens": "X-RATELIMIT-LIMIT-TOKENS-XXX",
"x-ratelimit-remaining-requests": "X-RATELIMIT-REMAINING-REQUESTS-XXX",
"x-ratelimit-remaining-tokens": "X-RATELIMIT-REMAINING-TOKENS-XXX",
"x-ratelimit-reset-requests": "X-RATELIMIT-RESET-REQUESTS-XXX",
"x-ratelimit-reset-tokens": "X-RATELIMIT-RESET-TOKENS-XXX",
"x-goog-api-key": "X-GOOG-API-KEY-XXX",
}
def _filter_request_headers(request: Request) -> Request: # type: ignore[no-any-unimported]
"""Filter sensitive headers from request before recording."""
for header_name, replacement in HEADERS_TO_FILTER.items():
for variant in [header_name, header_name.upper(), header_name.title()]:
if variant in request.headers:
request.headers[variant] = [replacement]
return request
def _filter_response_headers(response: dict[str, Any]) -> dict[str, Any]:
"""Filter sensitive headers from response before recording."""
for header_name, replacement in HEADERS_TO_FILTER.items():
for variant in [header_name, header_name.upper(), header_name.title()]:
if variant in response["headers"]:
response["headers"][variant] = [replacement]
return response
@pytest.fixture(scope="module")
def vcr_cassette_dir(request: Any) -> str:
"""Generate cassette directory path based on test module location.
Organizes cassettes to mirror test directory structure within each package:
lib/crewai/tests/llms/google/test_google.py -> lib/crewai/tests/cassettes/llms/google/
lib/crewai-tools/tests/tools/test_search.py -> lib/crewai-tools/tests/cassettes/tools/
"""
test_file = Path(request.fspath)
for parent in test_file.parents:
if parent.name in ("crewai", "crewai-tools") and parent.parent.name == "lib":
package_root = parent
break
else:
package_root = test_file.parent
tests_root = package_root / "tests"
test_dir = test_file.parent
if test_dir != tests_root:
relative_path = test_dir.relative_to(tests_root)
cassette_dir = tests_root / "cassettes" / relative_path
else:
cassette_dir = tests_root / "cassettes"
cassette_dir.mkdir(parents=True, exist_ok=True)
return str(cassette_dir)
@pytest.fixture(scope="module")
def vcr_config(vcr_cassette_dir: str) -> dict[str, Any]:
"""Configure VCR with organized cassette storage."""
config = {
"cassette_library_dir": vcr_cassette_dir,
"record_mode": os.getenv("PYTEST_VCR_RECORD_MODE", "once"),
"filter_headers": [(k, v) for k, v in HEADERS_TO_FILTER.items()],
"before_record_request": _filter_request_headers,
"before_record_response": _filter_response_headers,
"filter_query_parameters": ["key"],
}
if os.getenv("GITHUB_ACTIONS") == "true":
config["record_mode"] = "none"
return config
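
With these shared fixtures, individual tests only need a bare @pytest.mark.vcr() marker; header filtering and cassette placement come from the workspace config. A minimal sketch assuming the pytest-recording plugin conventions these fixtures target (the module path and test body below are illustrative):

# Hypothetical test module at lib/crewai/tests/llms/google/test_google.py;
# with the vcr_cassette_dir fixture above, its cassettes are written to
# lib/crewai/tests/cassettes/llms/google/.
import pytest

@pytest.mark.vcr()
def test_gemini_simple_call():
    # Any HTTP traffic made here is replayed from (or recorded to) the cassette,
    # with the headers listed in HEADERS_TO_FILTER scrubbed before saving.
    ...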


@@ -218,7 +218,7 @@ Update the root `README.md` only if the tool introduces a new category or notabl
## Discovery and specs ## Discovery and specs
Our internal tooling discovers classes whose names end with `Tool`. Keep your class exported from the module path under `crewai_tools/tools/...` to be picked up by scripts like `generate_tool_specs.py`. Our internal tooling discovers classes whose names end with `Tool`. Keep your class exported from the module path under `crewai_tools/tools/...` to be picked up by scripts like `crewai_tools.generate_tool_specs.py`.
--- ---


@@ -4,17 +4,20 @@ from collections.abc import Mapping
import inspect import inspect
import json import json
from pathlib import Path from pathlib import Path
from typing import Any, cast from typing import Any
from crewai.tools.base_tool import BaseTool, EnvVar from crewai.tools.base_tool import BaseTool, EnvVar
from crewai_tools import tools
from pydantic import BaseModel from pydantic import BaseModel
from pydantic.json_schema import GenerateJsonSchema from pydantic.json_schema import GenerateJsonSchema
from pydantic_core import PydanticOmit from pydantic_core import PydanticOmit
from crewai_tools import tools
class SchemaGenerator(GenerateJsonSchema): class SchemaGenerator(GenerateJsonSchema):
def handle_invalid_for_json_schema(self, schema, error_info): def handle_invalid_for_json_schema(
self, schema: Any, error_info: Any
) -> dict[str, Any]:
raise PydanticOmit raise PydanticOmit
@@ -73,7 +76,7 @@ class ToolSpecExtractor:
@staticmethod @staticmethod
def _extract_field_default( def _extract_field_default(
field: dict | None, fallback: str | list[Any] = "" field: dict[str, Any] | None, fallback: str | list[Any] = ""
) -> str | list[Any] | int: ) -> str | list[Any] | int:
if not field: if not field:
return fallback return fallback
@@ -83,7 +86,7 @@ class ToolSpecExtractor:
return default if isinstance(default, (list, str, int)) else fallback return default if isinstance(default, (list, str, int)) else fallback
@staticmethod @staticmethod
def _extract_params(args_schema_field: dict | None) -> dict[str, Any]: def _extract_params(args_schema_field: dict[str, Any] | None) -> dict[str, Any]:
if not args_schema_field: if not args_schema_field:
return {} return {}
@@ -94,15 +97,15 @@ class ToolSpecExtractor:
): ):
return {} return {}
# Cast to type[BaseModel] after runtime check
schema_class = cast(type[BaseModel], args_schema_class)
try: try:
return schema_class.model_json_schema(schema_generator=SchemaGenerator) return args_schema_class.model_json_schema(schema_generator=SchemaGenerator)
except Exception: except Exception:
return {} return {}
@staticmethod @staticmethod
def _extract_env_vars(env_vars_field: dict | None) -> list[dict[str, Any]]: def _extract_env_vars(
env_vars_field: dict[str, Any] | None,
) -> list[dict[str, Any]]:
if not env_vars_field: if not env_vars_field:
return [] return []
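
For context on the SchemaGenerator used above: subclassing pydantic's GenerateJsonSchema and raising PydanticOmit lets schema generation skip fields that have no JSON representation instead of failing. A minimal standalone sketch (the Example model is illustrative, not from the repo):

from typing import Any

from pydantic import BaseModel
from pydantic.json_schema import GenerateJsonSchema
from pydantic_core import PydanticOmit


class SchemaGenerator(GenerateJsonSchema):
    def handle_invalid_for_json_schema(self, schema: Any, error_info: Any) -> dict[str, Any]:
        # Drop fields whose types cannot be expressed in JSON schema instead of erroring.
        raise PydanticOmit


class Example(BaseModel):
    name: str


# Unrepresentable fields are silently omitted from the generated schema.
print(Example.model_json_schema(schema_generator=SchemaGenerator))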


@@ -1,21 +0,0 @@
import pytest


def pytest_configure(config):
    """Register custom markers."""
    config.addinivalue_line("markers", "integration: mark test as an integration test")
    config.addinivalue_line("markers", "asyncio: mark test as an async test")
    # Set the asyncio loop scope through ini configuration
    config.inicfg["asyncio_mode"] = "auto"


@pytest.fixture(scope="function")
def event_loop():
    """Create an instance of the default event loop for each test case."""
    import asyncio

    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    yield loop
    loop.close()


@@ -2,7 +2,7 @@ import json
from unittest import mock from unittest import mock
from crewai.tools.base_tool import BaseTool, EnvVar from crewai.tools.base_tool import BaseTool, EnvVar
from generate_tool_specs import ToolSpecExtractor from crewai_tools.generate_tool_specs import ToolSpecExtractor
from pydantic import BaseModel, Field from pydantic import BaseModel, Field
import pytest import pytest
@@ -61,8 +61,8 @@ def test_unwrap_schema(extractor):
@pytest.fixture @pytest.fixture
def mock_tool_extractor(extractor): def mock_tool_extractor(extractor):
with ( with (
mock.patch("generate_tool_specs.dir", return_value=["MockTool"]), mock.patch("crewai_tools.generate_tool_specs.dir", return_value=["MockTool"]),
mock.patch("generate_tool_specs.getattr", return_value=MockTool), mock.patch("crewai_tools.generate_tool_specs.getattr", return_value=MockTool),
): ):
extractor.extract_all_tools() extractor.extract_all_tools()
assert len(extractor.tools_spec) == 1 assert len(extractor.tools_spec) == 1
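
The extractor is now imported from the installed package rather than as a top-level script module. A small usage sketch, assuming crewai-tools is installed and that ToolSpecExtractor can be constructed without arguments (its constructor is not shown in this diff):

from crewai_tools.generate_tool_specs import ToolSpecExtractor

extractor = ToolSpecExtractor()  # constructor arguments, if any, assumed to default
extractor.extract_all_tools()
print(len(extractor.tools_spec), "tool specs extracted")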


@@ -4,7 +4,7 @@ from crewai_tools.tools.firecrawl_crawl_website_tool.firecrawl_crawl_website_too
FirecrawlCrawlWebsiteTool, FirecrawlCrawlWebsiteTool,
) )
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_firecrawl_crawl_tool_integration(): def test_firecrawl_crawl_tool_integration():
tool = FirecrawlCrawlWebsiteTool(config={ tool = FirecrawlCrawlWebsiteTool(config={
"limit": 2, "limit": 2,


@@ -4,7 +4,7 @@ from crewai_tools.tools.firecrawl_scrape_website_tool.firecrawl_scrape_website_t
FirecrawlScrapeWebsiteTool, FirecrawlScrapeWebsiteTool,
) )
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_firecrawl_scrape_tool_integration(): def test_firecrawl_scrape_tool_integration():
tool = FirecrawlScrapeWebsiteTool() tool = FirecrawlScrapeWebsiteTool()
result = tool.run(url="https://firecrawl.dev") result = tool.run(url="https://firecrawl.dev")


@@ -3,7 +3,7 @@ import pytest
from crewai_tools.tools.firecrawl_search_tool.firecrawl_search_tool import FirecrawlSearchTool from crewai_tools.tools.firecrawl_search_tool.firecrawl_search_tool import FirecrawlSearchTool
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_firecrawl_search_tool_integration(): def test_firecrawl_search_tool_integration():
tool = FirecrawlSearchTool() tool = FirecrawlSearchTool()
result = tool.run(query="firecrawl") result = tool.run(query="firecrawl")


@@ -23,15 +23,13 @@ from crewai_tools.tools.rag.rag_tool import Adapter
import pytest import pytest
pytestmark = [pytest.mark.vcr(filter_headers=["authorization"])]
@pytest.fixture @pytest.fixture
def mock_adapter(): def mock_adapter():
mock_adapter = MagicMock(spec=Adapter) mock_adapter = MagicMock(spec=Adapter)
return mock_adapter return mock_adapter
@pytest.mark.vcr()
def test_directory_search_tool(): def test_directory_search_tool():
with tempfile.TemporaryDirectory() as temp_dir: with tempfile.TemporaryDirectory() as temp_dir:
test_file = Path(temp_dir) / "test.txt" test_file = Path(temp_dir) / "test.txt"
@@ -65,6 +63,7 @@ def test_pdf_search_tool(mock_adapter):
) )
@pytest.mark.vcr()
def test_txt_search_tool(): def test_txt_search_tool():
with tempfile.NamedTemporaryFile(suffix=".txt", delete=False) as temp_file: with tempfile.NamedTemporaryFile(suffix=".txt", delete=False) as temp_file:
temp_file.write(b"This is a test file for txt search") temp_file.write(b"This is a test file for txt search")
@@ -102,6 +101,7 @@ def test_docx_search_tool(mock_adapter):
) )
@pytest.mark.vcr()
def test_json_search_tool(): def test_json_search_tool():
with tempfile.NamedTemporaryFile(suffix=".json", delete=False) as temp_file: with tempfile.NamedTemporaryFile(suffix=".json", delete=False) as temp_file:
temp_file.write(b'{"test": "This is a test JSON file"}') temp_file.write(b'{"test": "This is a test JSON file"}')
@@ -127,6 +127,7 @@ def test_xml_search_tool(mock_adapter):
) )
@pytest.mark.vcr()
def test_csv_search_tool(): def test_csv_search_tool():
with tempfile.NamedTemporaryFile(suffix=".csv", delete=False) as temp_file: with tempfile.NamedTemporaryFile(suffix=".csv", delete=False) as temp_file:
temp_file.write(b"name,description\ntest,This is a test CSV file") temp_file.write(b"name,description\ntest,This is a test CSV file")
@@ -141,6 +142,7 @@ def test_csv_search_tool():
os.unlink(temp_file_path) os.unlink(temp_file_path)
@pytest.mark.vcr()
def test_mdx_search_tool(): def test_mdx_search_tool():
with tempfile.NamedTemporaryFile(suffix=".mdx", delete=False) as temp_file: with tempfile.NamedTemporaryFile(suffix=".mdx", delete=False) as temp_file:
temp_file.write(b"# Test MDX\nThis is a test MDX file") temp_file.write(b"# Test MDX\nThis is a test MDX file")


@@ -76,7 +76,7 @@ class TraceBatchManager:
use_ephemeral: bool = False, use_ephemeral: bool = False,
) -> TraceBatch: ) -> TraceBatch:
"""Initialize a new trace batch (thread-safe)""" """Initialize a new trace batch (thread-safe)"""
with self._init_lock: with self._batch_ready_cv:
if self.current_batch is not None: if self.current_batch is not None:
logger.debug( logger.debug(
"Batch already initialized, skipping duplicate initialization" "Batch already initialized, skipping duplicate initialization"
@@ -99,7 +99,6 @@ class TraceBatchManager:
self.backend_initialized = True self.backend_initialized = True
self._batch_ready_cv.notify_all() self._batch_ready_cv.notify_all()
return self.current_batch return self.current_batch
def _initialize_backend_batch( def _initialize_backend_batch(
@@ -107,7 +106,7 @@ class TraceBatchManager:
user_context: dict[str, str], user_context: dict[str, str],
execution_metadata: dict[str, Any], execution_metadata: dict[str, Any],
use_ephemeral: bool = False, use_ephemeral: bool = False,
): ) -> None:
"""Send batch initialization to backend""" """Send batch initialization to backend"""
if not is_tracing_enabled_in_context(): if not is_tracing_enabled_in_context():
@@ -204,7 +203,7 @@ class TraceBatchManager:
return False return False
return True return True
def add_event(self, trace_event: TraceEvent): def add_event(self, trace_event: TraceEvent) -> None:
"""Add event to buffer""" """Add event to buffer"""
self.event_buffer.append(trace_event) self.event_buffer.append(trace_event)
@@ -300,7 +299,7 @@ class TraceBatchManager:
return finalized_batch return finalized_batch
def _finalize_backend_batch(self, events_count: int = 0): def _finalize_backend_batch(self, events_count: int = 0) -> None:
"""Send batch finalization to backend """Send batch finalization to backend
Args: Args:
@@ -366,7 +365,7 @@ class TraceBatchManager:
logger.error(f"❌ Error finalizing trace batch: {e}") logger.error(f"❌ Error finalizing trace batch: {e}")
self.plus_api.mark_trace_batch_as_failed(self.trace_batch_id, str(e)) self.plus_api.mark_trace_batch_as_failed(self.trace_batch_id, str(e))
def _cleanup_batch_data(self): def _cleanup_batch_data(self) -> None:
"""Clean up batch data after successful finalization to free memory""" """Clean up batch data after successful finalization to free memory"""
try: try:
if hasattr(self, "event_buffer") and self.event_buffer: if hasattr(self, "event_buffer") and self.event_buffer:
@@ -411,7 +410,7 @@ class TraceBatchManager:
lambda: self.current_batch is not None, timeout=timeout lambda: self.current_batch is not None, timeout=timeout
) )
def record_start_time(self, key: str): def record_start_time(self, key: str) -> None:
"""Record start time for duration calculation""" """Record start time for duration calculation"""
self.execution_start_times[key] = datetime.now(timezone.utc) self.execution_start_times[key] = datetime.now(timezone.utc)
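
The switch from a separate init lock to the existing _batch_ready_cv condition means batch initialization and the wait helper now share one primitive: the initializer publishes the batch and notifies under the condition, and waiters block on the same predicate. A stripped-down sketch of the pattern (class and field names shortened; not the actual TraceBatchManager):

import threading


class BatchManager:
    def __init__(self) -> None:
        self._batch_ready_cv = threading.Condition()
        self.current_batch: object | None = None

    def initialize_batch(self, batch: object) -> None:
        # Producer side: publish the batch and wake any waiters.
        with self._batch_ready_cv:
            if self.current_batch is None:
                self.current_batch = batch
            self._batch_ready_cv.notify_all()

    def wait_for_batch(self, timeout: float = 5.0) -> bool:
        # Consumer side: block until the predicate holds or the timeout expires.
        with self._batch_ready_cv:
            return self._batch_ready_cv.wait_for(
                lambda: self.current_batch is not None, timeout=timeout
            )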


@@ -256,6 +256,7 @@ GeminiModels: TypeAlias = Literal[
"gemini-2.5-flash-preview-tts", "gemini-2.5-flash-preview-tts",
"gemini-2.5-pro-preview-tts", "gemini-2.5-pro-preview-tts",
"gemini-2.5-computer-use-preview-10-2025", "gemini-2.5-computer-use-preview-10-2025",
"gemini-2.5-pro-exp-03-25",
"gemini-2.0-flash", "gemini-2.0-flash",
"gemini-2.0-flash-001", "gemini-2.0-flash-001",
"gemini-2.0-flash-exp", "gemini-2.0-flash-exp",
@@ -309,6 +310,7 @@ GEMINI_MODELS: list[GeminiModels] = [
"gemini-2.5-flash-preview-tts", "gemini-2.5-flash-preview-tts",
"gemini-2.5-pro-preview-tts", "gemini-2.5-pro-preview-tts",
"gemini-2.5-computer-use-preview-10-2025", "gemini-2.5-computer-use-preview-10-2025",
"gemini-2.5-pro-exp-03-25",
"gemini-2.0-flash", "gemini-2.0-flash",
"gemini-2.0-flash-001", "gemini-2.0-flash-001",
"gemini-2.0-flash-exp", "gemini-2.0-flash-exp",


@@ -147,7 +147,7 @@ def test_custom_llm():
assert agent.llm.model == "gpt-4" assert agent.llm.model == "gpt-4"
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_execution(): def test_agent_execution():
agent = Agent( agent = Agent(
role="test role", role="test role",
@@ -166,7 +166,7 @@ def test_agent_execution():
assert output == "1 + 1 is 2" assert output == "1 + 1 is 2"
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_execution_with_tools(): def test_agent_execution_with_tools():
@tool @tool
def multiplier(first_number: int, second_number: int) -> float: def multiplier(first_number: int, second_number: int) -> float:
@@ -211,7 +211,7 @@ def test_agent_execution_with_tools():
assert received_events[0].tool_args == {"first_number": 3, "second_number": 4} assert received_events[0].tool_args == {"first_number": 3, "second_number": 4}
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_logging_tool_usage(): def test_logging_tool_usage():
@tool @tool
def multiplier(first_number: int, second_number: int) -> float: def multiplier(first_number: int, second_number: int) -> float:
@@ -245,7 +245,7 @@ def test_logging_tool_usage():
assert agent.tools_handler.last_used_tool.arguments == tool_usage.arguments assert agent.tools_handler.last_used_tool.arguments == tool_usage.arguments
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_cache_hitting(): def test_cache_hitting():
@tool @tool
def multiplier(first_number: int, second_number: int) -> float: def multiplier(first_number: int, second_number: int) -> float:
@@ -325,7 +325,7 @@ def test_cache_hitting():
assert received_events[0].output == "12" assert received_events[0].output == "12"
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_disabling_cache_for_agent(): def test_disabling_cache_for_agent():
@tool @tool
def multiplier(first_number: int, second_number: int) -> float: def multiplier(first_number: int, second_number: int) -> float:
@@ -389,7 +389,7 @@ def test_disabling_cache_for_agent():
read.assert_not_called() read.assert_not_called()
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_execution_with_specific_tools(): def test_agent_execution_with_specific_tools():
@tool @tool
def multiplier(first_number: int, second_number: int) -> float: def multiplier(first_number: int, second_number: int) -> float:
@@ -412,7 +412,7 @@ def test_agent_execution_with_specific_tools():
assert output == "The result of the multiplication is 12." assert output == "The result of the multiplication is 12."
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_powered_by_new_o_model_family_that_allows_skipping_tool(): def test_agent_powered_by_new_o_model_family_that_allows_skipping_tool():
@tool @tool
def multiplier(first_number: int, second_number: int) -> float: def multiplier(first_number: int, second_number: int) -> float:
@@ -438,7 +438,7 @@ def test_agent_powered_by_new_o_model_family_that_allows_skipping_tool():
assert output == "12" assert output == "12"
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_powered_by_new_o_model_family_that_uses_tool(): def test_agent_powered_by_new_o_model_family_that_uses_tool():
@tool @tool
def comapny_customer_data() -> str: def comapny_customer_data() -> str:
@@ -464,7 +464,7 @@ def test_agent_powered_by_new_o_model_family_that_uses_tool():
assert output == "42" assert output == "42"
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_custom_max_iterations(): def test_agent_custom_max_iterations():
@tool @tool
def get_final_answer() -> float: def get_final_answer() -> float:
@@ -509,7 +509,7 @@ def test_agent_custom_max_iterations():
assert call_count == 2 assert call_count == 2
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
@pytest.mark.timeout(30) @pytest.mark.timeout(30)
def test_agent_max_iterations_stops_loop(): def test_agent_max_iterations_stops_loop():
"""Test that agent execution terminates when max_iter is reached.""" """Test that agent execution terminates when max_iter is reached."""
@@ -546,7 +546,7 @@ def test_agent_max_iterations_stops_loop():
) )
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_repeated_tool_usage(capsys): def test_agent_repeated_tool_usage(capsys):
"""Test that agents handle repeated tool usage appropriately. """Test that agents handle repeated tool usage appropriately.
@@ -595,7 +595,7 @@ def test_agent_repeated_tool_usage(capsys):
) )
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_repeated_tool_usage_check_even_with_disabled_cache(capsys): def test_agent_repeated_tool_usage_check_even_with_disabled_cache(capsys):
@tool @tool
def get_final_answer(anything: str) -> float: def get_final_answer(anything: str) -> float:
@@ -638,7 +638,7 @@ def test_agent_repeated_tool_usage_check_even_with_disabled_cache(capsys):
) )
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_moved_on_after_max_iterations(): def test_agent_moved_on_after_max_iterations():
@tool @tool
def get_final_answer() -> float: def get_final_answer() -> float:
@@ -665,7 +665,7 @@ def test_agent_moved_on_after_max_iterations():
assert output == "42" assert output == "42"
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_respect_the_max_rpm_set(capsys): def test_agent_respect_the_max_rpm_set(capsys):
@tool @tool
def get_final_answer() -> float: def get_final_answer() -> float:
@@ -699,7 +699,7 @@ def test_agent_respect_the_max_rpm_set(capsys):
moveon.assert_called() moveon.assert_called()
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_respect_the_max_rpm_set_over_crew_rpm(capsys): def test_agent_respect_the_max_rpm_set_over_crew_rpm(capsys):
from unittest.mock import patch from unittest.mock import patch
@@ -737,7 +737,7 @@ def test_agent_respect_the_max_rpm_set_over_crew_rpm(capsys):
moveon.assert_not_called() moveon.assert_not_called()
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_without_max_rpm_respects_crew_rpm(capsys): def test_agent_without_max_rpm_respects_crew_rpm(capsys):
from unittest.mock import patch from unittest.mock import patch
@@ -797,7 +797,7 @@ def test_agent_without_max_rpm_respects_crew_rpm(capsys):
moveon.assert_called_once() moveon.assert_called_once()
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_error_on_parsing_tool(capsys): def test_agent_error_on_parsing_tool(capsys):
from unittest.mock import patch from unittest.mock import patch
@@ -840,7 +840,7 @@ def test_agent_error_on_parsing_tool(capsys):
assert "Error on parsing tool." in captured.out assert "Error on parsing tool." in captured.out
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_remembers_output_format_after_using_tools_too_many_times(): def test_agent_remembers_output_format_after_using_tools_too_many_times():
from unittest.mock import patch from unittest.mock import patch
@@ -875,7 +875,7 @@ def test_agent_remembers_output_format_after_using_tools_too_many_times():
remember_format.assert_called() remember_format.assert_called()
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_use_specific_tasks_output_as_context(capsys): def test_agent_use_specific_tasks_output_as_context(capsys):
agent1 = Agent(role="test role", goal="test goal", backstory="test backstory") agent1 = Agent(role="test role", goal="test goal", backstory="test backstory")
agent2 = Agent(role="test role2", goal="test goal2", backstory="test backstory2") agent2 = Agent(role="test role2", goal="test goal2", backstory="test backstory2")
@@ -902,7 +902,7 @@ def test_agent_use_specific_tasks_output_as_context(capsys):
assert "hi" in result.raw.lower() or "hello" in result.raw.lower() assert "hi" in result.raw.lower() or "hello" in result.raw.lower()
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_step_callback(): def test_agent_step_callback():
class StepCallback: class StepCallback:
def callback(self, step): def callback(self, step):
@@ -936,7 +936,7 @@ def test_agent_step_callback():
callback.assert_called() callback.assert_called()
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_function_calling_llm(): def test_agent_function_calling_llm():
from crewai.llm import LLM from crewai.llm import LLM
llm = LLM(model="gpt-4o", is_litellm=True) llm = LLM(model="gpt-4o", is_litellm=True)
@@ -983,7 +983,7 @@ def test_agent_function_calling_llm():
mock_original_tool_calling.assert_called() mock_original_tool_calling.assert_called()
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_tool_result_as_answer_is_the_final_answer_for_the_agent(): def test_tool_result_as_answer_is_the_final_answer_for_the_agent():
from crewai.tools import BaseTool from crewai.tools import BaseTool
@@ -1013,7 +1013,7 @@ def test_tool_result_as_answer_is_the_final_answer_for_the_agent():
assert result.raw == "Howdy!" assert result.raw == "Howdy!"
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_tool_usage_information_is_appended_to_agent(): def test_tool_usage_information_is_appended_to_agent():
from crewai.tools import BaseTool from crewai.tools import BaseTool
@@ -1068,7 +1068,7 @@ def test_agent_definition_based_on_dict():
# test for human input # test for human input
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_human_input(): def test_agent_human_input():
# Agent configuration # Agent configuration
config = { config = {
@@ -1216,7 +1216,7 @@ Thought:<|eot_id|>
assert mock_format_prompt.return_value == expected_prompt assert mock_format_prompt.return_value == expected_prompt
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_task_allow_crewai_trigger_context(): def test_task_allow_crewai_trigger_context():
from crewai import Crew from crewai import Crew
@@ -1237,7 +1237,7 @@ def test_task_allow_crewai_trigger_context():
assert "Trigger Payload: Important context data" in prompt assert "Trigger Payload: Important context data" in prompt
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_task_without_allow_crewai_trigger_context(): def test_task_without_allow_crewai_trigger_context():
from crewai import Crew from crewai import Crew
@@ -1260,7 +1260,7 @@ def test_task_without_allow_crewai_trigger_context():
assert "Important context data" not in prompt assert "Important context data" not in prompt
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_task_allow_crewai_trigger_context_no_payload(): def test_task_allow_crewai_trigger_context_no_payload():
from crewai import Crew from crewai import Crew
@@ -1282,7 +1282,7 @@ def test_task_allow_crewai_trigger_context_no_payload():
assert "Trigger Payload:" not in prompt assert "Trigger Payload:" not in prompt
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_do_not_allow_crewai_trigger_context_for_first_task_hierarchical(): def test_do_not_allow_crewai_trigger_context_for_first_task_hierarchical():
from crewai import Crew from crewai import Crew
@@ -1311,7 +1311,7 @@ def test_do_not_allow_crewai_trigger_context_for_first_task_hierarchical():
assert "Trigger Payload: Initial context data" not in first_prompt assert "Trigger Payload: Initial context data" not in first_prompt
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_first_task_auto_inject_trigger(): def test_first_task_auto_inject_trigger():
from crewai import Crew from crewai import Crew
@@ -1344,7 +1344,7 @@ def test_first_task_auto_inject_trigger():
assert "Trigger Payload:" not in second_prompt assert "Trigger Payload:" not in second_prompt
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_ensure_first_task_allow_crewai_trigger_context_is_false_does_not_inject(): def test_ensure_first_task_allow_crewai_trigger_context_is_false_does_not_inject():
from crewai import Crew from crewai import Crew
@@ -1549,7 +1549,7 @@ def test_agent_with_additional_kwargs():
assert agent.llm.frequency_penalty == 0.1 assert agent.llm.frequency_penalty == 0.1
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_llm_call(): def test_llm_call():
llm = LLM(model="gpt-3.5-turbo") llm = LLM(model="gpt-3.5-turbo")
messages = [{"role": "user", "content": "Say 'Hello, World!'"}] messages = [{"role": "user", "content": "Say 'Hello, World!'"}]
@@ -1558,7 +1558,7 @@ def test_llm_call():
assert "Hello, World!" in response assert "Hello, World!" in response
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_llm_call_with_error(): def test_llm_call_with_error():
llm = LLM(model="non-existent-model") llm = LLM(model="non-existent-model")
messages = [{"role": "user", "content": "This should fail"}] messages = [{"role": "user", "content": "This should fail"}]
@@ -1567,7 +1567,7 @@ def test_llm_call_with_error():
llm.call(messages) llm.call(messages)
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_handle_context_length_exceeds_limit(): def test_handle_context_length_exceeds_limit():
# Import necessary modules # Import necessary modules
from crewai.utilities.agent_utils import handle_context_length from crewai.utilities.agent_utils import handle_context_length
@@ -1620,7 +1620,7 @@ def test_handle_context_length_exceeds_limit():
mock_summarize.assert_called_once() mock_summarize.assert_called_once()
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_handle_context_length_exceeds_limit_cli_no(): def test_handle_context_length_exceeds_limit_cli_no():
agent = Agent( agent = Agent(
role="test role", role="test role",
@@ -1695,7 +1695,7 @@ def test_agent_with_all_llm_attributes():
assert agent.llm.api_key == "sk-your-api-key-here" assert agent.llm.api_key == "sk-your-api-key-here"
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_llm_call_with_all_attributes(): def test_llm_call_with_all_attributes():
llm = LLM( llm = LLM(
model="gpt-3.5-turbo", model="gpt-3.5-turbo",
@@ -1712,7 +1712,7 @@ def test_llm_call_with_all_attributes():
assert "STOP" not in response assert "STOP" not in response
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_with_ollama_llama3(): def test_agent_with_ollama_llama3():
agent = Agent( agent = Agent(
role="test role", role="test role",
@@ -1733,7 +1733,7 @@ def test_agent_with_ollama_llama3():
assert "Llama3" in response or "AI" in response or "language model" in response assert "Llama3" in response or "AI" in response or "language model" in response
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_llm_call_with_ollama_llama3(): def test_llm_call_with_ollama_llama3():
llm = LLM( llm = LLM(
model="ollama/llama3.2:3b", model="ollama/llama3.2:3b",
@@ -1752,7 +1752,7 @@ def test_llm_call_with_ollama_llama3():
assert "Llama3" in response or "AI" in response or "language model" in response assert "Llama3" in response or "AI" in response or "language model" in response
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_execute_task_basic(): def test_agent_execute_task_basic():
agent = Agent( agent = Agent(
role="test role", role="test role",
@@ -1771,7 +1771,7 @@ def test_agent_execute_task_basic():
assert "4" in result assert "4" in result
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_execute_task_with_context(): def test_agent_execute_task_with_context():
agent = Agent( agent = Agent(
role="test role", role="test role",
@@ -1793,7 +1793,7 @@ def test_agent_execute_task_with_context():
assert "fox" in result.lower() and "dog" in result.lower() assert "fox" in result.lower() and "dog" in result.lower()
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_execute_task_with_tool(): def test_agent_execute_task_with_tool():
@tool @tool
def dummy_tool(query: str) -> str: def dummy_tool(query: str) -> str:
@@ -1818,7 +1818,7 @@ def test_agent_execute_task_with_tool():
assert "Dummy result for: test query" in result assert "Dummy result for: test query" in result
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_execute_task_with_custom_llm(): def test_agent_execute_task_with_custom_llm():
agent = Agent( agent = Agent(
role="test role", role="test role",
@@ -1839,7 +1839,7 @@ def test_agent_execute_task_with_custom_llm():
) )
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_execute_task_with_ollama(): def test_agent_execute_task_with_ollama():
agent = Agent( agent = Agent(
role="test role", role="test role",
@@ -1859,7 +1859,7 @@ def test_agent_execute_task_with_ollama():
assert "AI" in result or "artificial intelligence" in result.lower() assert "AI" in result or "artificial intelligence" in result.lower()
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_with_knowledge_sources(): def test_agent_with_knowledge_sources():
content = "Brandon's favorite color is red and he likes Mexican food." content = "Brandon's favorite color is red and he likes Mexican food."
string_source = StringKnowledgeSource(content=content) string_source = StringKnowledgeSource(content=content)
@@ -1891,7 +1891,7 @@ def test_agent_with_knowledge_sources():
assert "red" in result.raw.lower() assert "red" in result.raw.lower()
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_with_knowledge_sources_with_query_limit_and_score_threshold(): def test_agent_with_knowledge_sources_with_query_limit_and_score_threshold():
content = "Brandon's favorite color is red and he likes Mexican food." content = "Brandon's favorite color is red and he likes Mexican food."
string_source = StringKnowledgeSource(content=content) string_source = StringKnowledgeSource(content=content)
@@ -1939,7 +1939,7 @@ def test_agent_with_knowledge_sources_with_query_limit_and_score_threshold():
) )
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_with_knowledge_sources_with_query_limit_and_score_threshold_default(): def test_agent_with_knowledge_sources_with_query_limit_and_score_threshold_default():
content = "Brandon's favorite color is red and he likes Mexican food." content = "Brandon's favorite color is red and he likes Mexican food."
string_source = StringKnowledgeSource(content=content) string_source = StringKnowledgeSource(content=content)
@@ -1988,7 +1988,7 @@ def test_agent_with_knowledge_sources_with_query_limit_and_score_threshold_defau
) )
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_with_knowledge_sources_extensive_role(): def test_agent_with_knowledge_sources_extensive_role():
content = "Brandon's favorite color is red and he likes Mexican food." content = "Brandon's favorite color is red and he likes Mexican food."
string_source = StringKnowledgeSource(content=content) string_source = StringKnowledgeSource(content=content)
@@ -2024,7 +2024,7 @@ def test_agent_with_knowledge_sources_extensive_role():
assert "red" in result.raw.lower() assert "red" in result.raw.lower()
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_with_knowledge_sources_works_with_copy(): def test_agent_with_knowledge_sources_works_with_copy():
content = "Brandon's favorite color is red and he likes Mexican food." content = "Brandon's favorite color is red and he likes Mexican food."
string_source = StringKnowledgeSource(content=content) string_source = StringKnowledgeSource(content=content)
@@ -2063,7 +2063,7 @@ def test_agent_with_knowledge_sources_works_with_copy():
assert isinstance(agent_copy.llm, BaseLLM) assert isinstance(agent_copy.llm, BaseLLM)
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_with_knowledge_sources_generate_search_query(): def test_agent_with_knowledge_sources_generate_search_query():
content = "Brandon's favorite color is red and he likes Mexican food." content = "Brandon's favorite color is red and he likes Mexican food."
string_source = StringKnowledgeSource(content=content) string_source = StringKnowledgeSource(content=content)
@@ -2116,7 +2116,7 @@ def test_agent_with_knowledge_sources_generate_search_query():
assert "red" in result.raw.lower() assert "red" in result.raw.lower()
@pytest.mark.vcr(record_mode="none", filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_with_knowledge_with_no_crewai_knowledge(): def test_agent_with_knowledge_with_no_crewai_knowledge():
mock_knowledge = MagicMock(spec=Knowledge) mock_knowledge = MagicMock(spec=Knowledge)
@@ -2143,7 +2143,7 @@ def test_agent_with_knowledge_with_no_crewai_knowledge():
mock_knowledge.query.assert_called_once() mock_knowledge.query.assert_called_once()
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_with_only_crewai_knowledge(): def test_agent_with_only_crewai_knowledge():
mock_knowledge = MagicMock(spec=Knowledge) mock_knowledge = MagicMock(spec=Knowledge)
@@ -2168,7 +2168,7 @@ def test_agent_with_only_crewai_knowledge():
mock_knowledge.query.assert_called_once() mock_knowledge.query.assert_called_once()
@pytest.mark.vcr(record_mode="none", filter_headers=["authorization"]) @pytest.mark.vcr()
def test_agent_knowledege_with_crewai_knowledge(): def test_agent_knowledege_with_crewai_knowledge():
crew_knowledge = MagicMock(spec=Knowledge) crew_knowledge = MagicMock(spec=Knowledge)
agent_knowledge = MagicMock(spec=Knowledge) agent_knowledge = MagicMock(spec=Knowledge)
@@ -2197,7 +2197,7 @@ def test_agent_knowledege_with_crewai_knowledge():
crew_knowledge.query.assert_called_once() crew_knowledge.query.assert_called_once()
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_litellm_auth_error_handling(): def test_litellm_auth_error_handling():
"""Test that LiteLLM authentication errors are handled correctly and not retried.""" """Test that LiteLLM authentication errors are handled correctly and not retried."""
from litellm import AuthenticationError as LiteLLMAuthenticationError from litellm import AuthenticationError as LiteLLMAuthenticationError
@@ -2326,7 +2326,7 @@ def test_litellm_anthropic_error_handling():
mock_llm_call.assert_called_once() mock_llm_call.assert_called_once()
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_get_knowledge_search_query(): def test_get_knowledge_search_query():
"""Test that _get_knowledge_search_query calls the LLM with the correct prompts.""" """Test that _get_knowledge_search_query calls the LLM with the correct prompts."""
from crewai.utilities.i18n import I18N from crewai.utilities.i18n import I18N


@@ -70,7 +70,7 @@ class ResearchResult(BaseModel):
sources: list[str] = Field(description="List of sources used") sources: list[str] = Field(description="List of sources used")
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
@pytest.mark.parametrize("verbose", [True, False]) @pytest.mark.parametrize("verbose", [True, False])
def test_lite_agent_created_with_correct_parameters(monkeypatch, verbose): def test_lite_agent_created_with_correct_parameters(monkeypatch, verbose):
"""Test that LiteAgent is created with the correct parameters when Agent.kickoff() is called.""" """Test that LiteAgent is created with the correct parameters when Agent.kickoff() is called."""
@@ -130,7 +130,7 @@ def test_lite_agent_created_with_correct_parameters(monkeypatch, verbose):
assert created_lite_agent["response_format"] == TestResponse assert created_lite_agent["response_format"] == TestResponse
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_lite_agent_with_tools(): def test_lite_agent_with_tools():
"""Test that Agent can use tools.""" """Test that Agent can use tools."""
# Create a LiteAgent with tools # Create a LiteAgent with tools
@@ -174,7 +174,7 @@ def test_lite_agent_with_tools():
assert event.tool_name == "search_web" assert event.tool_name == "search_web"
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_lite_agent_structured_output(): def test_lite_agent_structured_output():
"""Test that Agent can return a simple structured output.""" """Test that Agent can return a simple structured output."""
@@ -217,7 +217,7 @@ def test_lite_agent_structured_output():
return result return result
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_lite_agent_returns_usage_metrics(): def test_lite_agent_returns_usage_metrics():
"""Test that LiteAgent returns usage metrics.""" """Test that LiteAgent returns usage metrics."""
llm = LLM(model="gpt-4o-mini") llm = LLM(model="gpt-4o-mini")
@@ -238,7 +238,7 @@ def test_lite_agent_returns_usage_metrics():
assert result.usage_metrics["total_tokens"] > 0 assert result.usage_metrics["total_tokens"] > 0
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_lite_agent_output_includes_messages(): def test_lite_agent_output_includes_messages():
"""Test that LiteAgentOutput includes messages from agent execution.""" """Test that LiteAgentOutput includes messages from agent execution."""
llm = LLM(model="gpt-4o-mini") llm = LLM(model="gpt-4o-mini")
@@ -259,7 +259,7 @@ def test_lite_agent_output_includes_messages():
assert len(result.messages) > 0 assert len(result.messages) > 0
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
@pytest.mark.asyncio @pytest.mark.asyncio
async def test_lite_agent_returns_usage_metrics_async(): async def test_lite_agent_returns_usage_metrics_async():
"""Test that LiteAgent returns usage metrics when run asynchronously.""" """Test that LiteAgent returns usage metrics when run asynchronously."""
@@ -354,9 +354,9 @@ def test_sets_parent_flow_when_inside_flow():
assert captured_agent.parent_flow is flow assert captured_agent.parent_flow is flow
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr()
def test_guardrail_is_called_using_string(): def test_guardrail_is_called_using_string():
guardrail_events = defaultdict(list) guardrail_events: dict[str, list] = defaultdict(list)
from crewai.events.event_types import ( from crewai.events.event_types import (
LLMGuardrailCompletedEvent, LLMGuardrailCompletedEvent,
LLMGuardrailStartedEvent, LLMGuardrailStartedEvent,
@@ -369,35 +369,33 @@ def test_guardrail_is_called_using_string():
guardrail="""Only include Brazilian players, both women and men""", guardrail="""Only include Brazilian players, both women and men""",
) )
all_events_received = threading.Event() condition = threading.Condition()
@crewai_event_bus.on(LLMGuardrailStartedEvent) @crewai_event_bus.on(LLMGuardrailStartedEvent)
def capture_guardrail_started(source, event): def capture_guardrail_started(source, event):
assert isinstance(source, LiteAgent) assert isinstance(source, LiteAgent)
assert source.original_agent == agent assert source.original_agent == agent
guardrail_events["started"].append(event) with condition:
if ( guardrail_events["started"].append(event)
len(guardrail_events["started"]) == 2 condition.notify()
and len(guardrail_events["completed"]) == 2
):
all_events_received.set()
@crewai_event_bus.on(LLMGuardrailCompletedEvent) @crewai_event_bus.on(LLMGuardrailCompletedEvent)
def capture_guardrail_completed(source, event): def capture_guardrail_completed(source, event):
assert isinstance(source, LiteAgent) assert isinstance(source, LiteAgent)
assert source.original_agent == agent assert source.original_agent == agent
guardrail_events["completed"].append(event) with condition:
if ( guardrail_events["completed"].append(event)
len(guardrail_events["started"]) == 2 condition.notify()
and len(guardrail_events["completed"]) == 2
):
all_events_received.set()
result = agent.kickoff(messages="Top 10 best players in the world?") result = agent.kickoff(messages="Top 10 best players in the world?")
assert all_events_received.wait(timeout=10), ( with condition:
"Timeout waiting for all guardrail events" success = condition.wait_for(
) lambda: len(guardrail_events["started"]) >= 2
and len(guardrail_events["completed"]) >= 2,
timeout=10,
)
assert success, "Timeout waiting for all guardrail events"
assert len(guardrail_events["started"]) == 2 assert len(guardrail_events["started"]) == 2
assert len(guardrail_events["completed"]) == 2 assert len(guardrail_events["completed"]) == 2
assert not guardrail_events["completed"][0].success assert not guardrail_events["completed"][0].success
@@ -408,33 +406,27 @@ def test_guardrail_is_called_using_string():
     )
-@pytest.mark.vcr(filter_headers=["authorization"])
+@pytest.mark.vcr()
 def test_guardrail_is_called_using_callable():
-    guardrail_events = defaultdict(list)
+    guardrail_events: dict[str, list] = defaultdict(list)
     from crewai.events.event_types import (
         LLMGuardrailCompletedEvent,
         LLMGuardrailStartedEvent,
     )
-    all_events_received = threading.Event()
+    condition = threading.Condition()
     @crewai_event_bus.on(LLMGuardrailStartedEvent)
     def capture_guardrail_started(source, event):
-        guardrail_events["started"].append(event)
-        if (
-            len(guardrail_events["started"]) == 1
-            and len(guardrail_events["completed"]) == 1
-        ):
-            all_events_received.set()
+        with condition:
+            guardrail_events["started"].append(event)
+            condition.notify()
     @crewai_event_bus.on(LLMGuardrailCompletedEvent)
     def capture_guardrail_completed(source, event):
-        guardrail_events["completed"].append(event)
-        if (
-            len(guardrail_events["started"]) == 1
-            and len(guardrail_events["completed"]) == 1
-        ):
-            all_events_received.set()
+        with condition:
+            guardrail_events["completed"].append(event)
+            condition.notify()
     agent = Agent(
         role="Sports Analyst",
@@ -445,42 +437,40 @@ def test_guardrail_is_called_using_callable():
     result = agent.kickoff(messages="Top 1 best players in the world?")
-    assert all_events_received.wait(timeout=10), (
-        "Timeout waiting for all guardrail events"
-    )
+    with condition:
+        success = condition.wait_for(
+            lambda: len(guardrail_events["started"]) >= 1
+            and len(guardrail_events["completed"]) >= 1,
+            timeout=10,
+        )
+    assert success, "Timeout waiting for all guardrail events"
     assert len(guardrail_events["started"]) == 1
     assert len(guardrail_events["completed"]) == 1
     assert guardrail_events["completed"][0].success
     assert "Pelé - Santos, 1958" in result.raw
-@pytest.mark.vcr(filter_headers=["authorization"])
+@pytest.mark.vcr()
 def test_guardrail_reached_attempt_limit():
-    guardrail_events = defaultdict(list)
+    guardrail_events: dict[str, list] = defaultdict(list)
     from crewai.events.event_types import (
         LLMGuardrailCompletedEvent,
         LLMGuardrailStartedEvent,
     )
-    all_events_received = threading.Event()
+    condition = threading.Condition()
     @crewai_event_bus.on(LLMGuardrailStartedEvent)
     def capture_guardrail_started(source, event):
-        guardrail_events["started"].append(event)
-        if (
-            len(guardrail_events["started"]) == 3
-            and len(guardrail_events["completed"]) == 3
-        ):
-            all_events_received.set()
+        with condition:
+            guardrail_events["started"].append(event)
+            condition.notify()
     @crewai_event_bus.on(LLMGuardrailCompletedEvent)
     def capture_guardrail_completed(source, event):
-        guardrail_events["completed"].append(event)
-        if (
-            len(guardrail_events["started"]) == 3
-            and len(guardrail_events["completed"]) == 3
-        ):
-            all_events_received.set()
+        with condition:
+            guardrail_events["completed"].append(event)
+            condition.notify()
     agent = Agent(
         role="Sports Analyst",
@@ -498,9 +488,13 @@ def test_guardrail_reached_attempt_limit():
     ):
         agent.kickoff(messages="Top 10 best players in the world?")
-    assert all_events_received.wait(timeout=10), (
-        "Timeout waiting for all guardrail events"
-    )
+    with condition:
+        success = condition.wait_for(
+            lambda: len(guardrail_events["started"]) >= 3
+            and len(guardrail_events["completed"]) >= 3,
+            timeout=10,
+        )
+    assert success, "Timeout waiting for all guardrail events"
     assert len(guardrail_events["started"]) == 3  # 2 retries + 1 initial call
     assert len(guardrail_events["completed"]) == 3  # 2 retries + 1 initial call
     assert not guardrail_events["completed"][0].success
@@ -508,7 +502,7 @@ def test_guardrail_reached_attempt_limit():
     assert not guardrail_events["completed"][2].success
-@pytest.mark.vcr(filter_headers=["authorization"])
+@pytest.mark.vcr()
 def test_agent_output_when_guardrail_returns_base_model():
     class Player(BaseModel):
         name: str
@@ -599,7 +593,7 @@ def test_lite_agent_with_custom_llm_and_guardrails():
     assert result2.raw == "Modified by guardrail"
-@pytest.mark.vcr(filter_headers=["authorization"])
+@pytest.mark.vcr()
 def test_lite_agent_with_invalid_llm():
     """Test that LiteAgent raises proper error when create_llm returns None."""
     with patch("crewai.lite_agent.create_llm", return_value=None):
@@ -615,7 +609,7 @@ def test_lite_agent_with_invalid_llm():
 @patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token"})
 @patch("crewai_tools.tools.crewai_platform_tools.crewai_platform_tool_builder.requests.get")
-@pytest.mark.vcr(filter_headers=["authorization"])
+@pytest.mark.vcr()
 def test_agent_kickoff_with_platform_tools(mock_get):
     """Test that Agent.kickoff() properly integrates platform tools with LiteAgent"""
     mock_response = Mock()
@@ -657,7 +651,7 @@ def test_agent_kickoff_with_platform_tools(mock_get):
 @patch.dict("os.environ", {"EXA_API_KEY": "test_exa_key"})
 @patch("crewai.agent.Agent._get_external_mcp_tools")
-@pytest.mark.vcr(filter_headers=["authorization"])
+@pytest.mark.vcr()
 def test_agent_kickoff_with_mcp_tools(mock_get_mcp_tools):
     """Test that Agent.kickoff() properly integrates MCP tools with LiteAgent"""
     # Setup mock MCP tools - create a proper BaseTool instance
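The bare @pytest.mark.vcr() marks above drop the per-test filter_headers argument; per the commit description the VCR options were consolidated, most likely into a shared vcr_config fixture in conftest.py, which pytest-recording uses to build the VCR instance for every marked test. The conftest itself is not shown in this diff, so the following is only a sketch of what such a fixture typically looks like:

# conftest.py (sketch, not the repo's actual file)
import pytest


@pytest.fixture(scope="module")
def vcr_config():
    # pytest-recording merges this dict into every @pytest.mark.vcr() test,
    # so the authorization header is scrubbed from all cassettes in one place.
    return {
        "filter_headers": ["authorization"],
        "record_mode": "none",  # assumption: replay-only in CI
    }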

View File

@@ -1,126 +0,0 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Test Agent. Test backstory\nYour
personal goal is: Test goal\nTo give my best complete final answer to the task
respond using the exact following format:\n\nThought: I now can give a great
answer\nFinal Answer: Your final answer must be the great and the most complete
as possible, it must be outcome described.\n\nI MUST use these formats, my job
depends on it!"}, {"role": "user", "content": "\nCurrent Task: Say hello to
the world\n\nThis is the expected criteria for your final answer: hello world\nyou
MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
This is VERY important to you, use the tools available and give your best Final
Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop":
["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '825'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.93.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.93.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//jFJdi9swEHz3r1j0HBc7ZydXvx1HC0evhT6UUtrDKNLa1lXWqpJ8aTjy
34vsXOz0A/pi8M7OaGZ3nxMApiSrgImOB9Fbnd4Why/v79HeePeD37/Zfdx8evf5+rY4fNjdPbJV
ZNDuEUV4Yb0S1FuNQZGZYOGQB4yq+bYsr7J1nhcj0JNEHWmtDWlBaa+MStfZukizbZpfn9gdKYGe
VfA1AQB4Hr/Rp5H4k1WQrV4qPXrPW2TVuQmAOdKxwrj3ygduAlvNoCAT0IzW78DQHgQ30KonBA5t
tA3c+D06gG/mrTJcw834X0GHWhPsyWm5FHTYDJ7HUGbQegFwYyjwOJQxysMJOZ7Na2qto53/jcoa
ZZTvaofck4lGfSDLRvSYADyMQxoucjPrqLehDvQdx+fycjvpsXk3C/TqBAYKXC/q29NoL/VqiYEr
7RdjZoKLDuVMnXfCB6loASSL1H+6+Zv2lFyZ9n/kZ0AItAFlbR1KJS4Tz20O4+n+q+085dEw8+ie
lMA6KHRxExIbPujpoJg/+IB93SjTorNOTVfV2LrcZLzZYFm+Zskx+QUAAP//AwB1vYZ+YwMAAA==
headers:
CF-RAY:
- 96fc9f29dea3cf1f-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 15 Aug 2025 23:55:15 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=oA9oTa3cE0ZaEUDRf0hCpnarSAQKzrVUhl6qDS4j09w-1755302115-1.0.1.1-gUUDl4ZqvBQkg7244DTwOmSiDUT2z_AiQu0P1xUaABjaufSpZuIlI5G0H7OSnW.ldypvpxjj45NGWesJ62M_2U7r20tHz_gMmDFw6D5ZiNc;
path=/; expires=Sat, 16-Aug-25 00:25:15 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=ICenEGMmOE5jaOjwD30bAOwrF8.XRbSIKTBl1EyWs0o-1755302115700-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '735'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '753'
x-ratelimit-limit-project-tokens:
- '150000000'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-project-tokens:
- '149999830'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999827'
x-ratelimit-reset-project-tokens:
- 0s
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_212fde9d945a462ba0d89ea856131dce
status:
code: 200
message: OK
version: 1

View File

@@ -0,0 +1,211 @@
interactions:
- request:
body: '{"trace_id": "4d0d2b51-d83a-4054-b41e-8c2d17baa88f", "execution_type":
"crew", "user_identifier": null, "execution_context": {"crew_fingerprint": null,
"crew_name": "crew", "flow_name": null, "crewai_version": "1.6.0", "privacy_level":
"standard"}, "execution_metadata": {"expected_duration_estimate": 300, "agent_count":
0, "task_count": 0, "flow_method_count": 0, "execution_started_at": "2025-11-29T02:50:39.376314+00:00"},
"ephemeral_trace_id": "4d0d2b51-d83a-4054-b41e-8c2d17baa88f"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '488'
Content-Type:
- application/json
User-Agent:
- CrewAI-CLI/1.6.0
X-Crewai-Version:
- 1.6.0
authorization:
- AUTHORIZATION-XXX
method: POST
uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches
response:
body:
string: '{"id":"71726285-2e63-4d2a-b4c4-4bbd0ff6a9f1","ephemeral_trace_id":"4d0d2b51-d83a-4054-b41e-8c2d17baa88f","execution_type":"crew","crew_name":"crew","flow_name":null,"status":"running","duration_ms":null,"crewai_version":"1.6.0","total_events":0,"execution_context":{"crew_fingerprint":null,"crew_name":"crew","flow_name":null,"crewai_version":"1.6.0","privacy_level":"standard"},"created_at":"2025-11-29T02:50:39.931Z","updated_at":"2025-11-29T02:50:39.931Z","access_code":"TRACE-bf7f3f49b3","user_identifier":null}'
headers:
Connection:
- keep-alive
Content-Length:
- '515'
Content-Type:
- application/json; charset=utf-8
Date:
- Sat, 29 Nov 2025 02:50:39 GMT
cache-control:
- no-store
content-security-policy:
- CSP-FILTERED
etag:
- ETAG-XXX
expires:
- '0'
permissions-policy:
- PERMISSIONS-POLICY-XXX
pragma:
- no-cache
referrer-policy:
- REFERRER-POLICY-XXX
strict-transport-security:
- STS-XXX
vary:
- Accept
x-content-type-options:
- X-CONTENT-TYPE-XXX
x-frame-options:
- X-FRAME-OPTIONS-XXX
x-permitted-cross-domain-policies:
- X-PERMITTED-XXX
x-request-id:
- X-REQUEST-ID-XXX
x-runtime:
- X-RUNTIME-XXX
x-xss-protection:
- X-XSS-PROTECTION-XXX
status:
code: 201
message: Created
- request:
body: '{"messages":[{"role":"system","content":"You are test role. test backstory\nYour
personal goal is: test goal\nTo give my best complete final answer to the task
respond using the exact following format:\n\nThought: I now can give a great
answer\nFinal Answer: Your final answer must be the great and the most complete
as possible, it must be outcome described.\n\nI MUST use these formats, my job
depends on it!"},{"role":"user","content":"\nCurrent Task: Analyze the data\n\nThis
is the expected criteria for your final answer: Analysis report\nyou MUST return
the actual complete content as the final answer, not a summary.\n\nBegin! This
is VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:"}],"model":"gpt-4.1-mini"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '785'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.109.1
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.109.1
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.10
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//jFbbbhtHDH33VxD71AKyECeynegtdW9uiiZI3AJpXRj0DHeXzSxnO5yV
ohT594Kzq5WcpEBfdBkOycPDy/CfE4CKfbWGyrWYXdeH06v2/EX7li7T5uenL364rH/6NUn+/Ze/
3nP99mW1MI14/xe5vNdautj1gTJHGcUuEWYyq2eXF6snT1cXT54VQRc9BVNr+ny6Wp6ddix8+vjR
4/PTR6vTs9Wk3kZ2pNUa/jgBAPinfBpQ8fS+WsOjxf6kI1VsqFrPlwCqFIOdVKjKmlFytTgIXZRM
UrDftHFo2ryGa5C4BYcCDW8IEBoLAFB0S+lWvmfBAM/LvzXcyq08Fww7ZYXX1MeU7ehsCdeSU/SD
MyJu5aYlyKjvINHfAydSQMhtTOYTcG8g1pBbKn4FPGZcwk2EPsUNe0Ni1CZqSdSQpeJuUVTsMrSo
cE8koDvN1GFmhyHsINGGaUt+AVvOLWC2mDkK5AieMnIAFA+JAm1QHNl5/gRwR5J1abE9XsK35u3l
hpLZvZU3bEoSQXtyXLMb4WxR99g9sBSTfYpdXzCzTgEAqg5dYaQhobRXV8p7SHmPyMCQZqhjmlkz
qgEF0OUBA6gjwcRxMVrpI0tW0MG1gAoydOYBA2wwDKQLcJipiYntd+aOQGn8ExPE3FKCDSbG+0AK
2zgEb867guYej5I2BlMYejIx9CpFR6osza2cjkdXgVBYmjV8JzokluaQPlaoExHUKXbA4qJYxZK4
AqfjYmnGbRmjlGKyrEzWX6YGhT+gJXcNV1NkH0zNrmtOg8uj1+LRaKS6JpdLpe8Jne39xjpgmA3+
WgC4FlPWYrBJ2LdqyWFvJVXvICcSP0p7K7QkCl9xDdj3gZ3R+HXhaLWEuW9uyLXCltnimdQl7guk
Nxkza2anFk5wQyhQjPOOUBbQkefyHT0twPrbY/LgacO4L/FBPKUiAkeSEwbIJJ7E7QpOz9pTUo5S
Ir+xCGZwa7geQ2M3ux76rTmJCXzcSvk9hR03lMYqsnrnjk7Hahqb2axfxZRoiuLg46ol987I3cu0
5d6aOW+tnz3XNSWSfKjFQuL5Er5n8SxNYe4F7eDVRPr6wOMIWrmREoXkQ2YsfJTYYTCQU4/OWK9F
uWmzcSCZUp8ozxyU+jlK9rbFbNo74K4Pu6L/KpZBgwGucFDSNfy4662nlBSijIkJO4u7RpdjMgh1
GKzkjxqjhHqxhKsoLgyWpxLtm6HrMO1KLSAL1BMTI/SulFuhspS5GauZQknb/aAspApl/r/PI+3k
92NmZuA1udh1JP7IUj2kMhbQjVwkYNmQZm7KpYL2cgk/c8cjXQXtc9mZN80Jy0AicXEwVsnPwymY
Cvlp/LnY02hdh7pmx5b/aVxz15v7PUnU59Z4OOrgW3m6hOd9T+Lt+Sw9+NlU/rpUZOnnBeRSV+Ng
Gd0YtH0DYoA45H6YHoFvyFlKzX0im1yfTf+Hk5/14eifn7zpDYhDDpaSEk9HuY0+htjsHswtOsz9
D/MMm+dX2C3hut4/A/uJihvkYJEtygjaTZwpgWbqFbYcAuxKYeAh7NIXJb+m+inawsB34o3y6eR4
qUhUD4q22cgQwpEAReJUErbO/DlJPs4LTIhNn+K9fqJa1Sys7V0i1Ci2rGiOfVWkH08A/iyL0vBg
96lGuu9yfEfF3dn5+WivOixoB+mTp88maY4Zw0FwvlotvmDwbqRKj3atyqFryR9UD4sZDp7jkeDk
KOzP4XzJ9hg6S/N/zB8EzlGfyd/1iTy7hyEfriWyBfa/rs00F8CV2tbj6C4zJUuFpxqHMG6V1bh4
3dUsjc1LHlfLur97dnlxQeerZ/ePq5OPJ/8CAAD//wMAwDPj9GkLAAA=
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Sat, 29 Nov 2025 02:50:45 GMT
Server:
- cloudflare
Set-Cookie:
- SET-COOKIE-XXX
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '5125'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '5227'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-project-tokens:
- '150000000'
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-project-tokens:
- '149999830'
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-project-tokens:
- 0s
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1
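The !!binary blocks in these cassettes are gzip-compressed JSON response bodies (note the Content-Encoding: gzip header). When a cassette needs manual inspection, the body can be decoded in a few lines; the file path below is a placeholder, and the rest relies only on PyYAML's handling of the !!binary tag:

import gzip
import json

import yaml

# Placeholder path; point this at any cassette from the diff above.
with open("cassette.yaml") as f:
    cassette = yaml.safe_load(f)

body = cassette["interactions"][-1]["response"]["body"]["string"]
# safe_load decodes !!binary scalars to bytes; plain-text bodies stay as str.
decoded = json.loads(gzip.decompress(body)) if isinstance(body, bytes) else json.loads(body)
print(json.dumps(decoded, indent=2)[:500])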

Some files were not shown because too many files have changed in this diff.