Compare commits


11 Commits

Author SHA1 Message Date
Rip&Tear
afa4d01c22 Merge branch 'main' into codex/fix-codeql-alert-28 2026-02-11 13:43:46 +08:00
theCyberTech
78ad77e619 fix: harden Azure endpoint validation for CodeQL alert 28 2026-02-11 13:28:03 +08:00
Lucas Gomide
a3bee66be8 Address OpenSSL CVE-2025-15467 vulnerability (#4426)
Some checks failed
Build uv cache / build-cache (3.10) (push) Waiting to run
Build uv cache / build-cache (3.11) (push) Waiting to run
Build uv cache / build-cache (3.12) (push) Waiting to run
Build uv cache / build-cache (3.13) (push) Waiting to run
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Notify Downstream / notify-downstream (push) Has been cancelled
* fix(security): bump regex from 2024.9.11 to 2026.1.15

Address security vulnerability flagged in regex==2024.9.11

* bump mcp from 1.23.1 to 1.26.0

Address security vulnerability flagged in mcp==1.16.0 (resolved to 1.23.3)
2026-02-10 09:39:35 -08:00
Greyson LaLonde
f6fa04528a fix: add async HITL support and chained-router tests
Add asynchronous human-in-the-loop handling and related fixes.

- Extend human_input provider with async support: AsyncExecutorContext, handle_feedback_async, async prompt helpers (_prompt_input_async, _async_readline), and async training/regular feedback loops in SyncHumanInputProvider.
- Add async handler methods in CrewAgentExecutor and AgentExecutor (_ahandle_human_feedback, _ainvoke_loop) to integrate async provider flows.
- Change PlusAPI.get_agent to an async httpx call and adapt caller in agent_utils to run it via asyncio.run.
- Simplify listener execution in flow.Flow to correctly pass HumanFeedbackResult to listeners and unify execution path for router outcomes.
- Remove deprecated types/hitl.py definitions.
- Add tests covering chained router feedback, rejected paths, and mixed router/non-router listeners to prevent regressions.
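The async-to-sync bridge described for `agent_utils` can be sketched as follows (the function names here are hypothetical stand-ins, not the actual CrewAI API):

```python
import asyncio


async def fetch_agent(handle: str) -> dict:
    # Hypothetical stand-in for an async httpx call like PlusAPI.get_agent.
    await asyncio.sleep(0)
    return {"handle": handle}


def fetch_agent_sync(handle: str) -> dict:
    # A synchronous caller drives the async API via asyncio.run,
    # as described for the adapted caller in agent_utils.
    return asyncio.run(fetch_agent(handle))


print(fetch_agent_sync("researcher"))
```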
2026-02-06 16:29:27 -05:00
Greyson LaLonde
7d498b29be fix: event ordering; flow state locks, routing
* fix: add current task id context and flow updates

introduce a context var for the current task id in `crewai.context` to track task scope. update `Flow._execute_single_listener` to return `(result, event_id)` and adjust callers to unpack it and append `FlowMethodName(str(result))` to `router_results`. set/reset the current task id at the start/end of task execution (async + sync) with minor import and call-site tweaks.

* fix: await event futures and flush event bus

call `crewai_event_bus.flush()` after crew kickoff. in `Flow`, await event handler futures instead of just collecting them: await pending `_event_futures` before finishing, await emitted futures immediately with try/except to log failures, then clear `_event_futures`. ensures handlers complete and errors surface.
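a minimal sketch of the await-and-log pattern described here (simplified; the real `Flow` keeps `_event_futures` as instance state):

```python
import asyncio


async def drain_event_futures(futures: list[asyncio.Task]) -> list[str]:
    # Await each pending handler future with try/except so failures are
    # logged rather than raised, then clear the list -- mirroring the
    # flush behavior described above.
    errors: list[str] = []
    for fut in futures:
        try:
            await fut
        except Exception as exc:
            errors.append(f"handler error: {exc}")
    futures.clear()
    return errors


async def demo() -> list[str]:
    async def ok() -> str:
        return "done"

    async def boom() -> None:
        raise RuntimeError("handler failed")

    tasks = [asyncio.create_task(ok()), asyncio.create_task(boom())]
    return await drain_event_futures(tasks)


print(asyncio.run(demo()))
```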

* fix: continue iteration on tool completion events

expand the loop bridge listener to also trigger on tool completion events (`tool_completed` and `native_tool_completed`) so agent iteration resumes after tools finish. add a `requests.post` mock and response fixture in the liteagent test to simulate platform tool execution. refresh and sanitize vcr cassettes (updated model responses, timestamps, and header placeholders) to reflect tool-call flows and new recordings.

* fix: thread-safe state proxies & native routing

add thread-safe state proxies and refactor native tool routing.

* introduce `LockedListProxy` and `LockedDictProxy` in `flow.py` and update `StateProxy` to return them for list/dict attrs so mutations are protected by the flow lock.
* update `AgentExecutor` to use `StateProxy` on flow init, guard the messages setter with the state lock, and return a `StateProxy` from the temp state accessor.
* convert `call_llm_native_tools` into a listener (no direct routing return) and add `route_native_tool_result` to route based on state (pending tool calls, final answer, or context error).
* minor cleanup in `continue_iteration` to drop orphan listeners on init.
* update test cassettes for new native tool call responses, timestamps, and ids.

improves concurrency safety for shared state and makes native tool routing explicit.
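the locking idea can be illustrated in miniature (this sketches the pattern only, not the actual `LockedListProxy` implementation):

```python
import threading
from typing import Any


class LockedListProxy:
    # Minimal sketch: every mutation and read of the wrapped list holds a
    # shared lock, so concurrent flow methods cannot interleave mid-mutation.
    def __init__(self, data: list[Any], lock: threading.Lock) -> None:
        self._data = data
        self._lock = lock

    def append(self, item: Any) -> None:
        with self._lock:
            self._data.append(item)

    def __len__(self) -> int:
        with self._lock:
            return len(self._data)


lock = threading.Lock()
shared: list[int] = []
proxy = LockedListProxy(shared, lock)

threads = [threading.Thread(target=proxy.append, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(proxy))  # 8
```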

* chore: regen cassettes

* chore: regen cassettes, remove duplicate listener call path
2026-02-06 14:02:43 -05:00
Greyson LaLonde
1308bdee63 feat: add started_event_id and set in eventbus
* feat: add started_event_id and set in eventbus

* chore: update additional test assumption

* fix: restore event bus handlers on context exit

fix rollback in crewai events bus so that exiting the context restores
the previous _sync_handlers, _async_handlers, _handler_dependencies, and _execution_plan_cache by assigning shallow copies of the saved dicts. previously these
were set to empty dicts on exit, which caused registered handlers and cached execution plans to be lost.
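the rollback fix can be sketched with one of the saved dicts (a simplified model: the real bus tracks several handler dicts, and the scoped context is assumed here to start empty for isolation):

```python
from contextlib import contextmanager


class MiniEventBus:
    def __init__(self) -> None:
        self._sync_handlers: dict[str, list] = {}

    @contextmanager
    def scoped_handlers(self):
        saved = dict(self._sync_handlers)  # shallow copy of current state
        try:
            self._sync_handlers = {}  # isolated scope for the context body
            yield self
        finally:
            # The broken version assigned {} here; the fix restores a
            # shallow copy of the saved dict so prior handlers survive.
            self._sync_handlers = dict(saved)


bus = MiniEventBus()
bus._sync_handlers["started"] = [print]
with bus.scoped_handlers():
    bus._sync_handlers["temp"] = [print]
print(sorted(bus._sync_handlers))  # ['started']
```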
2026-02-05 21:28:23 -05:00
Greyson LaLonde
6bb1b178a1 chore: extension points
Introduce ContextVar-backed hooks and small API/behavior changes to improve extensibility and testability.

Changes include:
- agents: mark configure_structured_output as abstract and change its parameter to task to reflect use of task metadata.
- tracing: convert _first_time_trace_hook to a ContextVar and call .get() to safely retrieve the hook.
- console formatter: add _disable_version_check ContextVar and skip version checks when set (avoids noisy checks in certain contexts).
- flow: use current_triggering_event_id variable when scheduling listener tasks to keep naming consistent.
- hallucination guardrail: make context optional, add _validate_output_hook to allow custom validation hooks, update examples and return contract to allow hooks to override behavior.
- agent utilities: add _create_plus_client_hook for injecting a Plus client (used in tests/alternate flows), ensure structured tools have current_usage_count initialized and propagate to original tool, and fall back to creating PlusAPI client when no hook is provided.
2026-02-05 12:49:54 -05:00
Greyson LaLonde
fe2a4b4e40 chore: bug fixes and more refactor
Refactor agent executor to delegate human interactions to a provider: add messages and ask_for_human_input properties, implement _invoke_loop and _format_feedback_message, and replace the internal iterative/training feedback logic with a call to get_provider().handle_feedback.

Make LLMGuardrail kickoff coroutine-aware by detecting coroutines and running them via asyncio.run so both sync and async agents are supported.
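The coroutine detection described here can be sketched as follows (a simplified model of the guardrail's kickoff handling, not the actual LLMGuardrail code):

```python
import asyncio
import inspect


def run_kickoff(result):
    # If the agent returned a coroutine (async agent), drive it to
    # completion with asyncio.run; otherwise return the sync result as-is.
    if inspect.iscoroutine(result):
        return asyncio.run(result)
    return result


def sync_agent():
    return "sync ok"


async def async_agent():
    return "async ok"


print(run_kickoff(sync_agent()), run_kickoff(async_agent()))
```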

Make telemetry more robust by safely handling missing task.output (use empty string) and returning early if span is None before setting attributes.

Improve serialization to detect circular references via an _ancestors set, propagate it through recursive calls, and pass exclude/max_depth/_current_depth consistently to prevent infinite recursion and produce stable serializable output.
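A sketch of the cycle-detection approach (parameter names taken from the summary above; the real serializer handles many more types):

```python
from typing import Any


def to_serializable(obj, _ancestors=None, max_depth=5, _current_depth=0):
    # Track ancestor object ids in a set that propagates through recursive
    # calls, so a self-referencing structure terminates instead of recursing
    # forever; exclude/max_depth/_current_depth are passed consistently.
    if _ancestors is None:
        _ancestors = set()
    if id(obj) in _ancestors:
        return "<circular>"
    if _current_depth >= max_depth:
        return "<max depth>"
    if isinstance(obj, dict):
        _ancestors = _ancestors | {id(obj)}
        return {k: to_serializable(v, _ancestors, max_depth, _current_depth + 1)
                for k, v in obj.items()}
    if isinstance(obj, list):
        _ancestors = _ancestors | {id(obj)}
        return [to_serializable(v, _ancestors, max_depth, _current_depth + 1)
                for v in obj]
    return obj


cyclic: dict[str, Any] = {"name": "crew"}
cyclic["self"] = cyclic
print(to_serializable(cyclic))  # {'name': 'crew', 'self': '<circular>'}
```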
2026-02-04 21:21:54 -05:00
Greyson LaLonde
711e7171e1 chore: improve hook typing and registration
Allow hook registration to accept both typed hook types and plain callables by importing and using After*/Before*CallHookCallable types; add explicit LLMCallHookContext and ToolCallHookContext typing in crew_base. Introduce a post-initialize crew hook list and invoke hooks after Crew instance initialization. Refactor filtered hook factory functions to include precise typing and clearer local names (before_llm_hook/after_llm_hook/before_tool_hook/after_tool_hook) and register those with the instance. Update CrewInstance protocol to include _registered_hook_functions and _hooks_being_registered fields.
2026-02-04 21:16:20 -05:00
Vini Brasil
76b5f72e81 Fix tool error causing double event scope pop (#4373)
When a tool raises an error, both ToolUsageErrorEvent and
ToolUsageFinishedEvent were being emitted. Since both events pop the
event scope stack, this caused the agent scope to be incorrectly popped
along with the tool scope.
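The fix follows a simple flag pattern, sketched here in miniature (event emission reduced to list appends):

```python
def run_tool(tool, emitted: list[str]) -> None:
    # Track whether an error event fired so the finished event -- which
    # also pops the event scope -- is skipped when an error was emitted.
    error_event_emitted = False
    try:
        tool()
    except Exception:
        emitted.append("ToolUsageErrorEvent")
        error_event_emitted = True
    if not error_event_emitted:
        emitted.append("ToolUsageFinishedEvent")


def failing_tool() -> None:
    raise RuntimeError("boom")


events: list[str] = []
run_tool(lambda: "ok", events)
run_tool(failing_tool, events)
print(events)  # exactly one event per call, never both for the failing tool
```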
2026-02-04 20:34:08 -03:00
Greyson LaLonde
d86d43d3e0 chore: refactor crew to provider
Enable dynamic extension exports and small behavior fixes across events and flow modules:

- events/__init__.py: Added _extension_exports and extended __getattr__ to lazily resolve registered extension values or import paths.
- events/event_bus.py: Implemented off() to unregister sync/async handlers, clean handler dependencies, and invalidate execution plan cache.
- events/listeners/tracing/utils.py: Added Callable import and _first_time_trace_hook to allow overriding first-time trace auto-collection behavior.
- events/types/tool_usage_events.py: Changed ToolUsageEvent.run_attempts default from None to 0 to avoid nullable handling.
- events/utils/console_formatter.py: Respect CREWAI_DISABLE_VERSION_CHECK env var to skip version checks in CI-like flows.
- flow/async_feedback/__init__.py: Added typing.Any import, _extension_exports and __getattr__ to support extensions via attribute lookup.

These changes add extension points and safer defaults, and provide a way to unregister event handlers.
2026-02-04 16:05:21 -05:00
57 changed files with 2169 additions and 1409 deletions


@@ -14,7 +14,7 @@ dependencies = [
     "instructor>=1.3.3",
     # Text Processing
     "pdfplumber~=0.11.4",
-    "regex~=2024.9.11",
+    "regex~=2026.1.15",
     # Telemetry and Monitoring
     "opentelemetry-api~=1.34.0",
     "opentelemetry-sdk~=1.34.0",
@@ -36,7 +36,7 @@ dependencies = [
     "json5~=0.10.0",
     "portalocker~=2.7.0",
     "pydantic-settings~=2.10.1",
-    "mcp~=1.23.1",
+    "mcp~=1.26.0",
     "uv~=0.9.13",
     "aiosqlite~=0.21.0",
 ]


@@ -24,7 +24,6 @@ from pydantic import (
)
from typing_extensions import Self
from crewai.agent.json_schema_converter import JSONSchemaConverter
from crewai.agent.utils import (
ahandle_knowledge_retrieval,
apply_training_data,
@@ -1179,7 +1178,6 @@ class Agent(BaseAgent):
tools = []
for tool_def in tools_list:
tool_name = tool_def.get("name", "")
original_tool_name = tool_def.get("original_name", tool_name)
if not tool_name:
continue
@@ -1201,7 +1199,6 @@ class Agent(BaseAgent):
tool_name=tool_name,
tool_schema=tool_schema,
server_name=server_name,
original_tool_name=original_tool_name,
)
tools.append(native_tool)
except Exception as e:
@@ -1216,63 +1213,26 @@ class Agent(BaseAgent):
raise RuntimeError(f"Failed to get native MCP tools: {e}") from e
def _get_amp_mcp_tools(self, amp_ref: str) -> list[BaseTool]:
"""Get tools from CrewAI AMP MCP via crewai-oauth service.
Fetches MCP server configuration with tokens injected from crewai-oauth,
then uses _get_native_mcp_tools to connect and discover tools.
"""
# Parse: "crewai-amp:mcp-slug" or "crewai-amp:mcp-slug#tool_name"
"""Get tools from CrewAI AMP MCP marketplace."""
# Parse: "crewai-amp:mcp-name" or "crewai-amp:mcp-name#tool_name"
amp_part = amp_ref.replace("crewai-amp:", "")
if "#" in amp_part:
mcp_slug, specific_tool = amp_part.split("#", 1)
mcp_name, specific_tool = amp_part.split("#", 1)
else:
mcp_slug, specific_tool = amp_part, None
mcp_name, specific_tool = amp_part, None
# Fetch MCP config from crewai-oauth (with tokens injected)
mcp_config_dict = self._fetch_amp_mcp_config(mcp_slug)
# Call AMP API to get MCP server URLs
mcp_servers = self._fetch_amp_mcp_servers(mcp_name)
if not mcp_config_dict:
self._logger.log(
"warning", f"Failed to fetch MCP config for '{mcp_slug}' from crewai-oauth"
)
return []
tools = []
for server_config in mcp_servers:
server_ref = server_config["url"]
if specific_tool:
server_ref += f"#{specific_tool}"
server_tools = self._get_external_mcp_tools(server_ref)
tools.extend(server_tools)
# Convert dict to MCPServerConfig (MCPServerHTTP or MCPServerSSE)
config_type = mcp_config_dict.get("type", "http")
if config_type == "sse":
mcp_config = MCPServerSSE(
url=mcp_config_dict["url"],
headers=mcp_config_dict.get("headers"),
cache_tools_list=mcp_config_dict.get("cache_tools_list", False),
)
else:
mcp_config = MCPServerHTTP(
url=mcp_config_dict["url"],
headers=mcp_config_dict.get("headers"),
streamable=mcp_config_dict.get("streamable", True),
cache_tools_list=mcp_config_dict.get("cache_tools_list", False),
)
# Apply tool filter if specific tool requested
if specific_tool:
from crewai.mcp.filters import create_static_tool_filter
mcp_config.tool_filter = create_static_tool_filter(
allowed_tool_names=[specific_tool]
)
# Use native MCP tools to connect and discover tools
try:
tools, client = self._get_native_mcp_tools(mcp_config)
if client:
self._mcp_clients.append(client)
return tools
except Exception as e:
self._logger.log(
"warning", f"Failed to get MCP tools from '{mcp_slug}': {e}"
)
return []
return tools
@staticmethod
def _extract_server_name(server_url: str) -> str:
@@ -1429,9 +1389,6 @@ class Agent(BaseAgent):
}
return schemas
# Shared JSON Schema converter instance
_schema_converter: JSONSchemaConverter = JSONSchemaConverter()
def _json_schema_to_pydantic(
self, tool_name: str, json_schema: dict[str, Any]
) -> type:
@@ -1444,62 +1401,77 @@ class Agent(BaseAgent):
Returns:
Pydantic BaseModel class
"""
return self._schema_converter.json_schema_to_pydantic(tool_name, json_schema)
from pydantic import Field, create_model
def _fetch_amp_mcp_config(self, mcp_slug: str) -> dict[str, Any] | None:
"""Fetch MCP server configuration from crewai-oauth service.
properties = json_schema.get("properties", {})
required_fields = json_schema.get("required", [])
Returns MCPServerConfig dict with tokens injected, ready for use with
_get_native_mcp_tools.
field_definitions: dict[str, Any] = {}
Environment variables:
CREWAI_OAUTH_URL: Base URL of crewai-oauth service
CREWAI_OAUTH_API_KEY: API key for authenticating with crewai-oauth
for field_name, field_schema in properties.items():
field_type = self._json_type_to_python(field_schema)
field_description = field_schema.get("description", "")
is_required = field_name in required_fields
if is_required:
field_definitions[field_name] = (
field_type,
Field(..., description=field_description),
)
else:
field_definitions[field_name] = (
field_type | None,
Field(default=None, description=field_description),
)
model_name = f"{tool_name.replace('-', '_').replace(' ', '_')}Schema"
return create_model(model_name, **field_definitions) # type: ignore[no-any-return]
def _json_type_to_python(self, field_schema: dict[str, Any]) -> type:
"""Convert JSON Schema type to Python type.
Args:
mcp_slug: The MCP server slug (e.g., "notion-mcp-abc123")
field_schema: JSON Schema field definition
Returns:
Dict with type, url, headers, streamable, cache_tools_list, or None if failed.
Python type
"""
import os
import requests
json_type = field_schema.get("type")
try:
endpoint = f"http://localhost:8787/mcps/{mcp_slug}/config"
response = requests.get(
endpoint,
headers={"Authorization": "Bearer 6b327f9ebe62726590f8de8f624cf018ad4765fecb7373f9db475a940ad546d0"},
timeout=30,
)
if "anyOf" in field_schema:
types: list[type] = []
for option in field_schema["anyOf"]:
if "const" in option:
types.append(str)
else:
types.append(self._json_type_to_python(option))
unique_types = list(set(types))
if len(unique_types) > 1:
result: Any = unique_types[0]
for t in unique_types[1:]:
result = result | t
return result # type: ignore[no-any-return]
return unique_types[0]
if response.status_code == 200:
return response.json()
elif response.status_code == 400:
error_data = response.json()
self._logger.log(
"warning",
f"MCP '{mcp_slug}' is not connected: {error_data.get('error_description', 'Unknown error')}",
)
return None
elif response.status_code == 404:
self._logger.log(
"warning", f"MCP server '{mcp_slug}' not found in crewai-oauth"
)
return None
else:
self._logger.log(
"warning",
f"Failed to fetch MCP config from crewai-oauth: HTTP {response.status_code}",
)
return None
type_mapping: dict[str | None, type] = {
"string": str,
"number": float,
"integer": int,
"boolean": bool,
"array": list,
"object": dict,
}
except requests.exceptions.RequestException as e:
self._logger.log(
"warning", f"Failed to connect to crewai-oauth: {e}"
)
return None
return type_mapping.get(json_type, Any)
@staticmethod
def _fetch_amp_mcp_servers(mcp_name: str) -> list[dict[str, Any]]:
"""Fetch MCP server configurations from CrewAI AMP API."""
# TODO: Implement AMP API call to "integrations/mcps" endpoint
# Should return list of server configs with URLs
return []
@staticmethod
def get_multimodal_tools() -> Sequence[BaseTool]:


@@ -1,399 +0,0 @@
from typing import Any, Literal, Type, Union, get_args
from pydantic import Field, create_model
from pydantic.fields import FieldInfo
import datetime
import uuid
class JSONSchemaConverter:
"""Converts JSON Schema definitions to Python/Pydantic types."""
def json_schema_to_pydantic(
self, tool_name: str, json_schema: dict[str, Any]
) -> Type[Any]:
"""Convert JSON Schema to Pydantic model for tool arguments.
Args:
tool_name: Name of the tool (used for model naming)
json_schema: JSON Schema dict with 'properties', 'required', etc.
Returns:
Pydantic BaseModel class
"""
properties = json_schema.get("properties", {})
required_fields = json_schema.get("required", [])
model_name = f"{tool_name.replace('-', '_').replace(' ', '_')}Schema"
return self._create_pydantic_model(model_name, properties, required_fields)
def _json_type_to_python(
self, field_schema: dict[str, Any], field_name: str = "Field"
) -> Type[Any]:
"""Convert JSON Schema type to Python type, handling nested structures.
Args:
field_schema: JSON Schema field definition
field_name: Name of the field (used for nested model naming)
Returns:
Python type (may be a dynamically created Pydantic model for objects/arrays)
"""
if not field_schema:
return Any
# Handle $ref if needed
if "$ref" in field_schema:
# You might want to implement reference resolution here
return Any
# Handle enum constraint - create Literal type
if "enum" in field_schema:
return self._handle_enum(field_schema)
# Handle different schema constructs in order of precedence
if "allOf" in field_schema:
return self._handle_allof(field_schema, field_name)
if "anyOf" in field_schema or "oneOf" in field_schema:
return self._handle_union_schemas(field_schema, field_name)
json_type = field_schema.get("type")
if isinstance(json_type, list):
return self._handle_type_union(json_type)
if json_type == "array":
return self._handle_array_type(field_schema, field_name)
if json_type == "object":
return self._handle_object_type(field_schema, field_name)
# Handle format for string types
if json_type == "string" and "format" in field_schema:
return self._get_formatted_type(field_schema["format"])
return self._get_simple_type(json_type)
def _get_formatted_type(self, format_type: str) -> Type[Any]:
"""Get Python type for JSON Schema format constraint.
Args:
format_type: JSON Schema format string (date, date-time, email, etc.)
Returns:
Appropriate Python type for the format
"""
format_mapping: dict[str, Type[Any]] = {
"date": datetime.date,
"date-time": datetime.datetime,
"time": datetime.time,
"email": str, # Could use EmailStr from pydantic
"uri": str,
"uuid": str, # Could use UUID
"hostname": str,
"ipv4": str,
"ipv6": str,
}
return format_mapping.get(format_type, str)
def _handle_enum(self, field_schema: dict[str, Any]) -> Type[Any]:
"""Handle enum constraint by creating a Literal type.
Args:
field_schema: Schema containing enum values
Returns:
Literal type with enum values
"""
enum_values = field_schema.get("enum", [])
if not enum_values:
return str
# Filter out None values for the Literal type
non_null_values = [v for v in enum_values if v is not None]
if not non_null_values:
return type(None)
# Create Literal type with enum values
# For strings, create Literal["value1", "value2", ...]
if all(isinstance(v, str) for v in non_null_values):
literal_type = Literal[tuple(non_null_values)] # type: ignore[valid-type]
# If null is in enum, make it optional
if None in enum_values:
return literal_type | None # type: ignore[return-value]
return literal_type # type: ignore[return-value]
# For mixed types or non-strings, fall back to the base type
json_type = field_schema.get("type", "string")
return self._get_simple_type(json_type)
def _handle_allof(
self, field_schema: dict[str, Any], field_name: str
) -> Type[Any]:
"""Handle allOf schema composition by merging all schemas.
Args:
field_schema: Schema containing allOf
field_name: Name for the generated model
Returns:
Merged Pydantic model or basic type
"""
merged_properties: dict[str, Any] = {}
merged_required: list[str] = []
found_type: str | None = None
for sub_schema in field_schema["allOf"]:
# Collect type information
if sub_schema.get("type"):
found_type = sub_schema.get("type")
# Merge properties
if sub_schema.get("properties"):
merged_properties.update(sub_schema["properties"])
# Merge required fields
if sub_schema.get("required"):
merged_required.extend(sub_schema["required"])
# Handle nested anyOf/oneOf - merge properties from all variants
for union_key in ("anyOf", "oneOf"):
if union_key in sub_schema:
for variant in sub_schema[union_key]:
if variant.get("properties"):
# Merge variant properties (will be optional)
for prop_name, prop_schema in variant["properties"].items():
if prop_name not in merged_properties:
merged_properties[prop_name] = prop_schema
# If we found properties, create a merged object model
if merged_properties:
return self._create_pydantic_model(
field_name, merged_properties, merged_required
)
# Fallback: return the found type or dict
if found_type == "object":
return dict
elif found_type == "array":
return list
return dict # Default for complex allOf
def _handle_union_schemas(
self, field_schema: dict[str, Any], field_name: str
) -> Type[Any]:
"""Handle anyOf/oneOf union schemas.
Args:
field_schema: Schema containing anyOf or oneOf
field_name: Name for nested types
Returns:
Union type combining all options
"""
key = "anyOf" if "anyOf" in field_schema else "oneOf"
types: list[Type[Any]] = []
for option in field_schema[key]:
if "const" in option:
# For const values, use string type
# Could use Literal[option["const"]] for more precision
types.append(str)
else:
types.append(self._json_type_to_python(option, field_name))
return self._build_union_type(types)
def _handle_type_union(self, json_types: list[str]) -> Type[Any]:
"""Handle union types from type arrays.
Args:
json_types: List of JSON Schema type strings
Returns:
Union of corresponding Python types
"""
type_mapping: dict[str, Type[Any]] = {
"string": str,
"number": float,
"integer": int,
"boolean": bool,
"null": type(None),
"array": list,
"object": dict,
}
types = [type_mapping.get(t, Any) for t in json_types]
return self._build_union_type(types)
def _handle_array_type(
self, field_schema: dict[str, Any], field_name: str
) -> Type[Any]:
"""Handle array type with typed items.
Args:
field_schema: Schema with type="array"
field_name: Name for item types
Returns:
list or list[ItemType]
"""
items_schema = field_schema.get("items")
if items_schema:
item_type = self._json_type_to_python(items_schema, f"{field_name}Item")
return list[item_type] # type: ignore[valid-type]
return list
def _handle_object_type(
self, field_schema: dict[str, Any], field_name: str
) -> Type[Any]:
"""Handle object type with properties.
Args:
field_schema: Schema with type="object"
field_name: Name for the generated model
Returns:
Pydantic model or dict
"""
properties = field_schema.get("properties")
if properties:
required_fields = field_schema.get("required", [])
return self._create_pydantic_model(field_name, properties, required_fields)
# Object without properties (e.g., additionalProperties only)
return dict
def _create_pydantic_model(
self,
field_name: str,
properties: dict[str, Any],
required_fields: list[str],
) -> Type[Any]:
"""Create a Pydantic model from properties.
Args:
field_name: Base name for the model
properties: Property schemas
required_fields: List of required property names
Returns:
Dynamically created Pydantic model
"""
model_name = f"Generated_{field_name}_{uuid.uuid4().hex[:8]}"
field_definitions: dict[str, Any] = {}
for prop_name, prop_schema in properties.items():
prop_type = self._json_type_to_python(prop_schema, prop_name.title())
prop_description = self._build_field_description(prop_schema)
is_required = prop_name in required_fields
if is_required:
field_definitions[prop_name] = (
prop_type,
Field(..., description=prop_description),
)
else:
field_definitions[prop_name] = (
prop_type | None,
Field(default=None, description=prop_description),
)
return create_model(model_name, **field_definitions) # type: ignore[return-value]
def _build_field_description(self, prop_schema: dict[str, Any]) -> str:
"""Build a comprehensive field description including constraints.
Args:
prop_schema: Property schema with description and constraints
Returns:
Enhanced description with format, enum, and other constraints
"""
parts: list[str] = []
# Start with the original description
description = prop_schema.get("description", "")
if description:
parts.append(description)
# Add format constraint
format_type = prop_schema.get("format")
if format_type:
parts.append(f"Format: {format_type}")
# Add enum constraint (if not already handled by Literal type)
enum_values = prop_schema.get("enum")
if enum_values:
enum_str = ", ".join(repr(v) for v in enum_values)
parts.append(f"Allowed values: [{enum_str}]")
# Add pattern constraint
pattern = prop_schema.get("pattern")
if pattern:
parts.append(f"Pattern: {pattern}")
# Add min/max constraints
minimum = prop_schema.get("minimum")
maximum = prop_schema.get("maximum")
if minimum is not None:
parts.append(f"Minimum: {minimum}")
if maximum is not None:
parts.append(f"Maximum: {maximum}")
min_length = prop_schema.get("minLength")
max_length = prop_schema.get("maxLength")
if min_length is not None:
parts.append(f"Min length: {min_length}")
if max_length is not None:
parts.append(f"Max length: {max_length}")
# Add examples if available
examples = prop_schema.get("examples")
if examples:
examples_str = ", ".join(repr(e) for e in examples[:3]) # Limit to 3
parts.append(f"Examples: {examples_str}")
return ". ".join(parts) if parts else ""
def _get_simple_type(self, json_type: str | None) -> Type[Any]:
"""Map simple JSON Schema types to Python types.
Args:
json_type: JSON Schema type string
Returns:
Corresponding Python type
"""
simple_type_mapping: dict[str | None, Type[Any]] = {
"string": str,
"number": float,
"integer": int,
"boolean": bool,
"null": type(None),
}
return simple_type_mapping.get(json_type, Any)
def _build_union_type(self, types: list[Type[Any]]) -> Type[Any]:
"""Build a union type from a list of types.
Args:
types: List of Python types to combine
Returns:
Union type or single type if only one unique type
"""
# Remove duplicates while preserving order
unique_types = list(dict.fromkeys(types))
if len(unique_types) == 1:
return unique_types[0]
# Build union using | operator
result = unique_types[0]
for t in unique_types[1:]:
result = result | t
return result # type: ignore[no-any-return]


@@ -37,9 +37,10 @@ class BaseAgentAdapter(BaseAgent, ABC):
tools: Optional list of BaseTool instances to be configured
"""
def configure_structured_output(self, structured_output: Any) -> None:
@abstractmethod
def configure_structured_output(self, task: Any) -> None:
"""Configure the structured output for the specific agent implementation.
Args:
structured_output: The structured output to be configured
task: The task object containing output format specifications.
"""


@@ -814,6 +814,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
agent_key=agent_key,
),
)
error_event_emitted = False
track_delegation_if_needed(func_name, args_dict, self.task)
@@ -896,6 +897,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
error=e,
),
)
error_event_emitted = True
elif max_usage_reached and original_tool:
# Return error message when max usage limit is reached
result = f"Tool '{func_name}' has reached its usage limit of {original_tool.max_usage_count} times and cannot be used anymore."
@@ -923,20 +925,20 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
color="red",
)
# Emit tool usage finished event
crewai_event_bus.emit(
self,
event=ToolUsageFinishedEvent(
output=result,
tool_name=func_name,
tool_args=args_dict,
from_agent=self.agent,
from_task=self.task,
agent_key=agent_key,
started_at=started_at,
finished_at=datetime.now(),
),
)
if not error_event_emitted:
crewai_event_bus.emit(
self,
event=ToolUsageFinishedEvent(
output=result,
tool_name=func_name,
tool_args=args_dict,
from_agent=self.agent,
from_task=self.task,
agent_key=agent_key,
started_at=started_at,
finished_at=datetime.now(),
),
)
# Append tool result message
tool_message: LLMMessage = {
@@ -1007,7 +1009,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
raise
if self.ask_for_human_input:
formatted_answer = self._handle_human_feedback(formatted_answer)
formatted_answer = await self._ahandle_human_feedback(formatted_answer)
self._create_short_term_memory(formatted_answer)
self._create_long_term_memory(formatted_answer)
@@ -1506,6 +1508,20 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
provider = get_provider()
return provider.handle_feedback(formatted_answer, self)
async def _ahandle_human_feedback(
self, formatted_answer: AgentFinish
) -> AgentFinish:
"""Process human feedback asynchronously via the configured provider.
Args:
formatted_answer: Initial agent result.
Returns:
Final answer after feedback.
"""
provider = get_provider()
return await provider.handle_feedback_async(formatted_answer, self)
def _is_training_mode(self) -> bool:
"""Check if training mode is active.


@@ -1,6 +1,8 @@
import os
from typing import Any
from urllib.parse import urljoin
import os
import httpx
import requests
from crewai.cli.config import Settings
@@ -33,7 +35,11 @@ class PlusAPI:
if settings.org_uuid:
self.headers["X-Crewai-Organization-Id"] = settings.org_uuid
self.base_url = os.getenv("CREWAI_PLUS_URL") or str(settings.enterprise_base_url) or DEFAULT_CREWAI_ENTERPRISE_URL
self.base_url = (
os.getenv("CREWAI_PLUS_URL")
or str(settings.enterprise_base_url)
or DEFAULT_CREWAI_ENTERPRISE_URL
)
def _make_request(
self, method: str, endpoint: str, **kwargs: Any
@@ -49,8 +55,10 @@ class PlusAPI:
def get_tool(self, handle: str) -> requests.Response:
return self._make_request("GET", f"{self.TOOLS_RESOURCE}/{handle}")
def get_agent(self, handle: str) -> requests.Response:
return self._make_request("GET", f"{self.AGENTS_RESOURCE}/{handle}")
async def get_agent(self, handle: str) -> httpx.Response:
url = urljoin(self.base_url, f"{self.AGENTS_RESOURCE}/{handle}")
async with httpx.AsyncClient() as client:
return await client.get(url, headers=self.headers)
def publish_tool(
self,

View File

@@ -43,3 +43,23 @@ def platform_context(integration_token: str) -> Generator[None, Any, None]:
yield
finally:
_platform_integration_token.reset(token)
_current_task_id: contextvars.ContextVar[str | None] = contextvars.ContextVar(
"current_task_id", default=None
)
def set_current_task_id(task_id: str | None) -> contextvars.Token[str | None]:
"""Set the current task ID in the context. Returns a token for reset."""
return _current_task_id.set(task_id)
def reset_current_task_id(token: contextvars.Token[str | None]) -> None:
"""Reset the current task ID to its previous value."""
_current_task_id.reset(token)
def get_current_task_id() -> str | None:
"""Get the current task ID from the context."""
return _current_task_id.get()

View File

@@ -2,7 +2,9 @@
from __future__ import annotations
import asyncio
from contextvars import ContextVar, Token
import sys
from typing import TYPE_CHECKING, Protocol, runtime_checkable
@@ -46,13 +48,21 @@ class ExecutorContext(Protocol):
...
class AsyncExecutorContext(ExecutorContext, Protocol):
"""Extended context for executors that support async invocation."""
async def _ainvoke_loop(self) -> AgentFinish:
"""Invoke the agent loop asynchronously and return the result."""
...
@runtime_checkable
class HumanInputProvider(Protocol):
"""Protocol for human input handling.
Implementations handle the full feedback flow:
- Sync: prompt user, loop until satisfied
- Async: raise exception for external handling
- Async: use non-blocking I/O and async invoke loop
"""
def setup_messages(self, context: ExecutorContext) -> bool:
@@ -86,7 +96,7 @@ class HumanInputProvider(Protocol):
formatted_answer: AgentFinish,
context: ExecutorContext,
) -> AgentFinish:
"""Handle the full human feedback flow.
"""Handle the full human feedback flow synchronously.
Args:
formatted_answer: The agent's current answer.
@@ -100,6 +110,25 @@ class HumanInputProvider(Protocol):
"""
...
async def handle_feedback_async(
self,
formatted_answer: AgentFinish,
context: AsyncExecutorContext,
) -> AgentFinish:
"""Handle the full human feedback flow asynchronously.
Uses non-blocking I/O for user prompts and async invoke loop
for agent re-execution.
Args:
formatted_answer: The agent's current answer.
context: Async executor context for callbacks.
Returns:
The final answer after feedback processing.
"""
...
@staticmethod
def _get_output_string(answer: AgentFinish) -> str:
"""Extract output string from answer.
@@ -116,7 +145,7 @@ class HumanInputProvider(Protocol):
class SyncHumanInputProvider(HumanInputProvider):
"""Default synchronous human input via terminal."""
"""Default human input provider with sync and async support."""
def setup_messages(self, context: ExecutorContext) -> bool:
"""Use standard message setup.
@@ -157,6 +186,33 @@ class SyncHumanInputProvider(HumanInputProvider):
return self._handle_regular_feedback(formatted_answer, feedback, context)
async def handle_feedback_async(
self,
formatted_answer: AgentFinish,
context: AsyncExecutorContext,
) -> AgentFinish:
"""Handle feedback asynchronously without blocking the event loop.
Args:
formatted_answer: The agent's current answer.
context: Async executor context for callbacks.
Returns:
The final answer after feedback processing.
"""
feedback = await self._prompt_input_async(context.crew)
if context._is_training_mode():
return await self._handle_training_feedback_async(
formatted_answer, feedback, context
)
return await self._handle_regular_feedback_async(
formatted_answer, feedback, context
)
# ── Sync helpers ──────────────────────────────────────────────────
@staticmethod
def _handle_training_feedback(
initial_answer: AgentFinish,
@@ -209,6 +265,62 @@ class SyncHumanInputProvider(HumanInputProvider):
return answer
# ── Async helpers ─────────────────────────────────────────────────
@staticmethod
async def _handle_training_feedback_async(
initial_answer: AgentFinish,
feedback: str,
context: AsyncExecutorContext,
) -> AgentFinish:
"""Process training feedback asynchronously (single iteration).
Args:
initial_answer: The agent's initial answer.
feedback: Human feedback string.
context: Async executor context for callbacks.
Returns:
Improved answer after processing feedback.
"""
context._handle_crew_training_output(initial_answer, feedback)
context.messages.append(context._format_feedback_message(feedback))
improved_answer = await context._ainvoke_loop()
context._handle_crew_training_output(improved_answer)
context.ask_for_human_input = False
return improved_answer
async def _handle_regular_feedback_async(
self,
current_answer: AgentFinish,
initial_feedback: str,
context: AsyncExecutorContext,
) -> AgentFinish:
"""Process regular feedback with async iteration loop.
Args:
current_answer: The agent's current answer.
initial_feedback: Initial human feedback string.
context: Async executor context for callbacks.
Returns:
Final answer after all feedback iterations.
"""
feedback = initial_feedback
answer = current_answer
while context.ask_for_human_input:
if feedback.strip() == "":
context.ask_for_human_input = False
else:
context.messages.append(context._format_feedback_message(feedback))
answer = await context._ainvoke_loop()
feedback = await self._prompt_input_async(context.crew)
return answer
# ── I/O ───────────────────────────────────────────────────────────
@staticmethod
def _prompt_input(crew: Crew | None) -> str:
"""Show rich panel and prompt for input.
@@ -262,6 +374,79 @@ class SyncHumanInputProvider(HumanInputProvider):
finally:
formatter.resume_live_updates()
@staticmethod
async def _prompt_input_async(crew: Crew | None) -> str:
"""Show rich panel and prompt for input without blocking the event loop.
Args:
crew: The crew instance for context.
Returns:
User input string from terminal.
"""
from rich.panel import Panel
from rich.text import Text
from crewai.events.event_listener import event_listener
formatter = event_listener.formatter
formatter.pause_live_updates()
try:
if crew and getattr(crew, "_train", False):
prompt_text = (
"TRAINING MODE: Provide feedback to improve the agent's performance.\n\n"
"This will be used to train better versions of the agent.\n"
"Please provide detailed feedback about the result quality and reasoning process."
)
title = "🎓 Training Feedback Required"
else:
prompt_text = (
"Provide feedback on the Final Result above.\n\n"
"• If you are happy with the result, simply hit Enter without typing anything.\n"
"• Otherwise, provide specific improvement requests.\n"
"• You can provide multiple rounds of feedback until satisfied."
)
title = "💬 Human Feedback Required"
content = Text()
content.append(prompt_text, style="yellow")
prompt_panel = Panel(
content,
title=title,
border_style="yellow",
padding=(1, 2),
)
formatter.console.print(prompt_panel)
response = await _async_readline()
if response.strip() != "":
formatter.console.print("\n[cyan]Processing your feedback...[/cyan]")
return response
finally:
formatter.resume_live_updates()
async def _async_readline() -> str:
"""Read a line from stdin using the event loop's native I/O.
Falls back to asyncio.to_thread on platforms where piping stdin
is unsupported.
Returns:
The line read from stdin, with trailing newline stripped.
"""
loop = asyncio.get_running_loop()
try:
reader = asyncio.StreamReader()
protocol = asyncio.StreamReaderProtocol(reader)
await loop.connect_read_pipe(lambda: protocol, sys.stdin)
raw = await reader.readline()
return raw.decode().rstrip("\n")
except (OSError, NotImplementedError, ValueError):
return await asyncio.to_thread(input)
_provider: ContextVar[HumanInputProvider | None] = ContextVar(
"human_input_provider",

View File

@@ -751,6 +751,8 @@ class Crew(FlowTrackable, BaseModel):
for after_callback in self.after_kickoff_callbacks:
result = after_callback(result)
result = self._post_kickoff(result)
self.usage_metrics = self.calculate_usage_metrics()
return result
@@ -764,6 +766,9 @@ class Crew(FlowTrackable, BaseModel):
clear_files(self.id)
detach(token)
def _post_kickoff(self, result: CrewOutput) -> CrewOutput:
return result
def kickoff_for_each(
self,
inputs: list[dict[str, Any]],
@@ -936,6 +941,8 @@ class Crew(FlowTrackable, BaseModel):
for after_callback in self.after_kickoff_callbacks:
result = after_callback(result)
result = self._post_kickoff(result)
self.usage_metrics = self.calculate_usage_metrics()
return result
@@ -1181,6 +1188,9 @@ class Crew(FlowTrackable, BaseModel):
self.manager_agent = manager
manager.crew = self
def _get_execution_start_index(self, tasks: list[Task]) -> int | None:
return None
def _execute_tasks(
self,
tasks: list[Task],
@@ -1197,6 +1207,9 @@ class Crew(FlowTrackable, BaseModel):
Returns:
CrewOutput: Final output of the crew
"""
custom_start = self._get_execution_start_index(tasks)
if custom_start is not None:
start_index = custom_start
task_outputs: list[TaskOutput] = []
futures: list[tuple[Task, Future[TaskOutput], int]] = []
@@ -1305,8 +1318,10 @@ class Crew(FlowTrackable, BaseModel):
if files:
supported_types: list[str] = []
if agent and agent.llm and agent.llm.supports_multimodal():
provider = getattr(agent.llm, "provider", None) or getattr(
agent.llm, "model", "openai"
provider = (
getattr(agent.llm, "provider", None)
or getattr(agent.llm, "model", None)
or "openai"
)
api = getattr(agent.llm, "api", None)
supported_types = get_supported_content_types(provider, api)
@@ -1502,6 +1517,7 @@ class Crew(FlowTrackable, BaseModel):
final_string_output = final_task_output.raw
self._finish_execution(final_string_output)
self.token_usage = self.calculate_usage_metrics()
crewai_event_bus.flush()
crewai_event_bus.emit(
self,
CrewKickoffCompletedEvent(
@@ -2011,7 +2027,13 @@ class Crew(FlowTrackable, BaseModel):
@staticmethod
def _show_tracing_disabled_message() -> None:
"""Show a message when tracing is disabled."""
from crewai.events.listeners.tracing.utils import has_user_declined_tracing
from crewai.events.listeners.tracing.utils import (
has_user_declined_tracing,
should_suppress_tracing_messages,
)
if should_suppress_tracing_messages():
return
console = Console()

View File

@@ -195,6 +195,7 @@ __all__ = [
"ToolUsageFinishedEvent",
"ToolUsageStartedEvent",
"ToolValidateInputErrorEvent",
"_extension_exports",
"crewai_event_bus",
]
@@ -210,14 +211,29 @@ _AGENT_EVENT_MAPPING = {
"LiteAgentExecutionStartedEvent": "crewai.events.types.agent_events",
}
_extension_exports: dict[str, Any] = {}
def __getattr__(name: str) -> Any:
"""Lazy import for agent events to avoid circular imports."""
"""Lazy import for agent events and registered extensions."""
if name in _AGENT_EVENT_MAPPING:
import importlib
module_path = _AGENT_EVENT_MAPPING[name]
module = importlib.import_module(module_path)
return getattr(module, name)
if name in _extension_exports:
import importlib
value = _extension_exports[name]
if isinstance(value, str):
module_path, _, attr_name = value.rpartition(".")
if module_path:
module = importlib.import_module(module_path)
return getattr(module, attr_name)
return importlib.import_module(value)
return value
msg = f"module {__name__!r} has no attribute {name!r}"
raise AttributeError(msg)
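The `_extension_exports` lookup above is a PEP 562 module `__getattr__`: dotted-string values resolve lazily to an attribute, bare strings to a module, and anything else passes through. The resolution logic can be sketched as a plain function (a stand-in for the module-level hook, using `math.sqrt` as an illustrative registered export):

```python
import importlib
from typing import Any

_extension_exports: dict[str, Any] = {"sqrt": "math.sqrt"}


def resolve_export(name: str) -> Any:
    """Resolve a registered export the way the module __getattr__ does.

    'pkg.mod.attr' -> import pkg.mod, return attr
    'pkg.mod'      -> return the imported module
    non-string     -> returned as-is
    """
    value = _extension_exports[name]
    if isinstance(value, str):
        module_path, _, attr_name = value.rpartition(".")
        if module_path:
            module = importlib.import_module(module_path)
            return getattr(module, attr_name)
        return importlib.import_module(value)
    return value
```

Because the import only happens on first attribute access, registered extensions never force their dependencies at `crewai.events` import time.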

View File

@@ -63,6 +63,7 @@ class BaseEvent(BaseModel):
parent_event_id: str | None = None
previous_event_id: str | None = None
triggered_by_event_id: str | None = None
started_event_id: str | None = None
emission_sequence: int | None = None
def to_json(self, exclude: set[str] | None = None) -> Serializable:

View File

@@ -227,6 +227,39 @@ class CrewAIEventsBus:
return decorator
def off(
self,
event_type: type[BaseEvent],
handler: Callable[..., Any],
) -> None:
"""Unregister an event handler for a specific event type.
Args:
event_type: The event class to stop listening for
handler: The handler function to unregister
"""
with self._rwlock.w_locked():
if event_type in self._sync_handlers:
existing_sync = self._sync_handlers[event_type]
if handler in existing_sync:
self._sync_handlers[event_type] = existing_sync - {handler}
if not self._sync_handlers[event_type]:
del self._sync_handlers[event_type]
if event_type in self._async_handlers:
existing_async = self._async_handlers[event_type]
if handler in existing_async:
self._async_handlers[event_type] = existing_async - {handler}
if not self._async_handlers[event_type]:
del self._async_handlers[event_type]
if event_type in self._handler_dependencies:
self._handler_dependencies[event_type].pop(handler, None)
if not self._handler_dependencies[event_type]:
del self._handler_dependencies[event_type]
self._execution_plan_cache.pop(event_type, None)
def _call_handlers(
self,
source: Any,
@@ -374,7 +407,8 @@ class CrewAIEventsBus:
if popped is None:
handle_empty_pop(event_type_name)
else:
_, popped_type = popped
popped_event_id, popped_type = popped
event.started_event_id = popped_event_id
expected_start = VALID_EVENT_PAIRS.get(event_type_name)
if expected_start and popped_type and popped_type != expected_start:
handle_mismatch(event_type_name, popped_type, expected_start)
@@ -536,24 +570,52 @@ class CrewAIEventsBus:
... # Do stuff...
... # Handlers are cleared after the context
"""
with self._rwlock.w_locked():
prev_sync = self._sync_handlers
prev_async = self._async_handlers
prev_deps = self._handler_dependencies
prev_cache = self._execution_plan_cache
self._sync_handlers = {}
self._async_handlers = {}
self._handler_dependencies = {}
self._execution_plan_cache = {}
with self._rwlock.r_locked():
saved_sync: dict[type[BaseEvent], frozenset[SyncHandler]] = dict(
self._sync_handlers
)
saved_async: dict[type[BaseEvent], frozenset[AsyncHandler]] = dict(
self._async_handlers
)
saved_deps: dict[type[BaseEvent], dict[Handler, list[Depends[Any]]]] = {
event_type: dict(handlers)
for event_type, handlers in self._handler_dependencies.items()
}
for event_type, sync_handlers in saved_sync.items():
for sync_handler in sync_handlers:
self.off(event_type, sync_handler)
for event_type, async_handlers in saved_async.items():
for async_handler in async_handlers:
self.off(event_type, async_handler)
try:
yield
finally:
with self._rwlock.w_locked():
self._sync_handlers = prev_sync
self._async_handlers = prev_async
self._handler_dependencies = prev_deps
self._execution_plan_cache = prev_cache
with self._rwlock.r_locked():
current_sync = dict(self._sync_handlers)
current_async = dict(self._async_handlers)
for event_type, cur_sync in current_sync.items():
orig_sync = saved_sync.get(event_type, frozenset())
for new_handler in cur_sync - orig_sync:
self.off(event_type, new_handler)
for event_type, cur_async in current_async.items():
orig_async = saved_async.get(event_type, frozenset())
for new_async_handler in cur_async - orig_async:
self.off(event_type, new_async_handler)
for event_type, sync_handlers in saved_sync.items():
for sync_handler in sync_handlers:
deps = saved_deps.get(event_type, {}).get(sync_handler)
self._register_handler(event_type, sync_handler, deps)
for event_type, async_handlers in saved_async.items():
for async_handler in async_handlers:
deps = saved_deps.get(event_type, {}).get(async_handler)
self._register_handler(event_type, async_handler, deps)
def shutdown(self, wait: bool = True) -> None:
"""Gracefully shutdown the event loop and wait for all tasks to finish.

View File

@@ -797,7 +797,13 @@ class TraceCollectionListener(BaseEventListener):
from rich.console import Console
from rich.panel import Panel
from crewai.events.listeners.tracing.utils import has_user_declined_tracing
from crewai.events.listeners.tracing.utils import (
has_user_declined_tracing,
should_suppress_tracing_messages,
)
if should_suppress_tracing_messages():
return
console = Console()

View File

@@ -1,3 +1,4 @@
from collections.abc import Callable
from contextvars import ContextVar, Token
from datetime import datetime
import getpass
@@ -26,6 +27,35 @@ logger = logging.getLogger(__name__)
_tracing_enabled: ContextVar[bool | None] = ContextVar("_tracing_enabled", default=None)
_first_time_trace_hook: ContextVar[Callable[[], bool] | None] = ContextVar(
"_first_time_trace_hook", default=None
)
_suppress_tracing_messages: ContextVar[bool] = ContextVar(
"_suppress_tracing_messages", default=False
)
def set_suppress_tracing_messages(suppress: bool) -> object:
"""Set whether to suppress tracing-related console messages.
Args:
suppress: True to suppress messages, False to show them.
Returns:
A token that can be used to restore the previous value.
"""
return _suppress_tracing_messages.set(suppress)
def should_suppress_tracing_messages() -> bool:
"""Check if tracing messages should be suppressed.
Returns:
True if messages should be suppressed, False otherwise.
"""
return _suppress_tracing_messages.get()
def should_enable_tracing(*, override: bool | None = None) -> bool:
"""Determine if tracing should be enabled.
@@ -407,10 +437,13 @@ def truncate_messages(
def should_auto_collect_first_time_traces() -> bool:
"""True if we should auto-collect traces for first-time user.
Returns:
True if first-time user AND telemetry not disabled AND tracing not explicitly enabled, False otherwise.
"""
hook = _first_time_trace_hook.get()
if hook is not None:
return hook()
if _is_test_environment():
return False
@@ -432,6 +465,9 @@ def prompt_user_for_trace_viewing(timeout_seconds: int = 20) -> bool:
if _is_test_environment():
return False
if should_suppress_tracing_messages():
return False
try:
import threading

View File

@@ -16,7 +16,7 @@ class ToolUsageEvent(BaseEvent):
tool_name: str
tool_args: dict[str, Any] | str
tool_class: str | None = None
run_attempts: int | None = None
run_attempts: int = 0
delegations: int | None = None
agent: Any | None = None
task_name: str | None = None
@@ -26,7 +26,7 @@ class ToolUsageEvent(BaseEvent):
model_config = ConfigDict(arbitrary_types_allowed=True)
def __init__(self, **data):
def __init__(self, **data: Any) -> None:
if data.get("from_task"):
task = data["from_task"]
data["task_id"] = str(task.id)
@@ -96,10 +96,10 @@ class ToolExecutionErrorEvent(BaseEvent):
type: str = "tool_execution_error"
tool_name: str
tool_args: dict[str, Any]
tool_class: Callable
tool_class: Callable[..., Any]
agent: Any | None = None
def __init__(self, **data):
def __init__(self, **data: Any) -> None:
super().__init__(**data)
# Set fingerprint data from the agent
if self.agent and hasattr(self.agent, "fingerprint") and self.agent.fingerprint:

View File

@@ -1,3 +1,4 @@
from contextvars import ContextVar
import os
import threading
from typing import Any, ClassVar, cast
@@ -10,6 +11,36 @@ from rich.text import Text
from crewai.cli.version import is_newer_version_available
_disable_version_check: ContextVar[bool] = ContextVar(
"_disable_version_check", default=False
)
_suppress_console_output: ContextVar[bool] = ContextVar(
"_suppress_console_output", default=False
)
def set_suppress_console_output(suppress: bool) -> object:
"""Set whether to suppress all console output.
Args:
suppress: True to suppress output, False to show it.
Returns:
A token that can be used to restore the previous value.
"""
return _suppress_console_output.set(suppress)
def should_suppress_console_output() -> bool:
"""Check if console output should be suppressed.
Returns:
True if output should be suppressed, False otherwise.
"""
return _suppress_console_output.get()
class ConsoleFormatter:
tool_usage_counts: ClassVar[dict[str, int]] = {}
@@ -46,9 +77,15 @@ class ConsoleFormatter:
if not self.verbose:
return
if _disable_version_check.get():
return
if os.getenv("CI", "").lower() in ("true", "1"):
return
if os.getenv("CREWAI_DISABLE_VERSION_CHECK", "").lower() in ("true", "1"):
return
try:
is_newer, current, latest = is_newer_version_available()
if is_newer and latest:
@@ -76,8 +113,12 @@ To update, run: uv sync --upgrade-package crewai"""
from crewai.events.listeners.tracing.utils import (
has_user_declined_tracing,
is_tracing_enabled_in_context,
should_suppress_tracing_messages,
)
if should_suppress_tracing_messages():
return
if not is_tracing_enabled_in_context():
if has_user_declined_tracing():
message = """Info: Tracing is disabled.
@@ -129,6 +170,8 @@ To enable tracing, do any one of these:
def print(self, *args: Any, **kwargs: Any) -> None:
"""Print to console. Simplified to only handle panel-based output."""
if should_suppress_console_output():
return
# Skip blank lines during streaming
if len(args) == 0 and self._is_streaming:
return
@@ -485,6 +528,9 @@ To enable tracing, do any one of these:
if not self.verbose:
return
if should_suppress_console_output():
return
self._is_streaming = True
self._last_stream_call_type = call_type

View File

@@ -18,6 +18,7 @@ from crewai.agents.parser import (
AgentFinish,
OutputParserError,
)
from crewai.core.providers.human_input import get_provider
from crewai.events.event_bus import crewai_event_bus
from crewai.events.listeners.tracing.utils import (
is_tracing_enabled_in_context,
@@ -31,7 +32,8 @@ from crewai.events.types.tool_usage_events import (
ToolUsageFinishedEvent,
ToolUsageStartedEvent,
)
from crewai.flow.flow import Flow, listen, or_, router, start
from crewai.flow.flow import Flow, StateProxy, listen, or_, router, start
from crewai.flow.types import FlowMethodName
from crewai.hooks.llm_hooks import (
get_after_llm_call_hooks,
get_before_llm_call_hooks,
@@ -41,7 +43,12 @@ from crewai.hooks.tool_hooks import (
get_after_tool_call_hooks,
get_before_tool_call_hooks,
)
from crewai.hooks.types import AfterLLMCallHookType, BeforeLLMCallHookType
from crewai.hooks.types import (
AfterLLMCallHookCallable,
AfterLLMCallHookType,
BeforeLLMCallHookCallable,
BeforeLLMCallHookType,
)
from crewai.utilities.agent_utils import (
convert_tools_to_openai_schema,
enforce_rpm_limit,
@@ -191,8 +198,12 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
self._instance_id = str(uuid4())[:8]
self.before_llm_call_hooks: list[BeforeLLMCallHookType] = []
self.after_llm_call_hooks: list[AfterLLMCallHookType] = []
self.before_llm_call_hooks: list[
BeforeLLMCallHookType | BeforeLLMCallHookCallable
] = []
self.after_llm_call_hooks: list[
AfterLLMCallHookType | AfterLLMCallHookCallable
] = []
self.before_llm_call_hooks.extend(get_before_llm_call_hooks())
self.after_llm_call_hooks.extend(get_after_llm_call_hooks())
@@ -207,6 +218,71 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
)
self._state = AgentReActState()
@property
def messages(self) -> list[LLMMessage]:
"""Delegate to state for ExecutorContext conformance."""
return self._state.messages
@messages.setter
def messages(self, value: list[LLMMessage]) -> None:
"""Delegate to state for ExecutorContext conformance."""
if self._flow_initialized and hasattr(self, "_state_lock"):
with self._state_lock:
self._state.messages = value
else:
self._state.messages = value
@property
def ask_for_human_input(self) -> bool:
"""Delegate to state for ExecutorContext conformance."""
return self._state.ask_for_human_input
@ask_for_human_input.setter
def ask_for_human_input(self, value: bool) -> None:
"""Delegate to state for ExecutorContext conformance."""
self._state.ask_for_human_input = value
def _invoke_loop(self) -> AgentFinish:
"""Invoke the agent loop and return the result.
Required by ExecutorContext protocol.
"""
self._state.iterations = 0
self._state.is_finished = False
self._state.current_answer = None
self.kickoff()
answer = self._state.current_answer
if not isinstance(answer, AgentFinish):
raise RuntimeError("Agent loop did not produce a final answer")
return answer
async def _ainvoke_loop(self) -> AgentFinish:
"""Invoke the agent loop asynchronously and return the result.
Required by AsyncExecutorContext protocol.
"""
self._state.iterations = 0
self._state.is_finished = False
self._state.current_answer = None
await self.akickoff()
answer = self._state.current_answer
if not isinstance(answer, AgentFinish):
raise RuntimeError("Agent loop did not produce a final answer")
return answer
def _format_feedback_message(self, feedback: str) -> LLMMessage:
"""Format feedback as a message for the LLM.
Required by ExecutorContext protocol.
"""
return format_message_for_llm(
self._i18n.slice("feedback_instructions").format(feedback=feedback)
)
def _ensure_flow_initialized(self) -> None:
"""Ensure Flow.__init__() has been called.
@@ -298,18 +374,10 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
Flow initialization is deferred to prevent event emission during agent setup.
Returns the temporary state until invoke() is called.
"""
if self._flow_initialized and hasattr(self, "_state_lock"):
return StateProxy(self._state, self._state_lock) # type: ignore[return-value]
return self._state
@property
def messages(self) -> list[LLMMessage]:
"""Compatibility property for mixin - returns state messages."""
return self._state.messages
@messages.setter
def messages(self, value: list[LLMMessage]) -> None:
"""Set state messages."""
self._state.messages = value
@property
def iterations(self) -> int:
"""Compatibility property for mixin - returns state iterations."""
@@ -416,15 +484,14 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
raise
@listen("continue_reasoning_native")
def call_llm_native_tools(
self,
) -> Literal["native_tool_calls", "native_finished", "context_error"]:
def call_llm_native_tools(self) -> None:
"""Execute LLM call with native function calling.
Always calls the LLM so it can read reflection prompts and decide
whether to provide a final answer or request more tools.
Returns routing decision based on whether tool calls or final answer.
Note: This is a listener, not a router. The route_native_tool_result
router fires after this to determine the next step based on state.
"""
try:
# Clear pending tools - LLM will decide what to do next after reading
@@ -454,8 +521,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
if isinstance(answer, list) and answer and self._is_tool_call_list(answer):
# Store tool calls for sequential processing
self.state.pending_tool_calls = list(answer)
return "native_tool_calls"
return # Router will check pending_tool_calls
if isinstance(answer, BaseModel):
self.state.current_answer = AgentFinish(
@@ -465,7 +531,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
)
self._invoke_step_callback(self.state.current_answer)
self._append_message_to_state(answer.model_dump_json())
return "native_finished"
return # Router will check current_answer
# Text response - this is the final answer
if isinstance(answer, str):
@@ -476,8 +542,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
)
self._invoke_step_callback(self.state.current_answer)
self._append_message_to_state(answer)
return "native_finished"
return # Router will check current_answer
# Unexpected response type, treat as final answer
self.state.current_answer = AgentFinish(
@@ -487,13 +552,12 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
)
self._invoke_step_callback(self.state.current_answer)
self._append_message_to_state(str(answer))
return "native_finished"
# Router will check current_answer
except Exception as e:
if is_context_length_exceeded(e):
self._last_context_error = e
return "context_error"
return # Router will check _last_context_error
if e.__class__.__module__.startswith("litellm"):
raise e
handle_unknown_error(self._printer, e, verbose=self.agent.verbose)
@@ -506,6 +570,22 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
return "execute_tool"
return "agent_finished"
@router(call_llm_native_tools)
def route_native_tool_result(
self,
) -> Literal["native_tool_calls", "native_finished", "context_error"]:
"""Route based on LLM response for native tool calling.
Checks state set by call_llm_native_tools to determine next step.
This router is needed because only router return values trigger
downstream listeners.
"""
if self._last_context_error is not None:
return "context_error"
if self.state.pending_tool_calls:
return "native_tool_calls"
return "native_finished"
@listen("execute_tool")
def execute_tool_action(self) -> Literal["tool_completed", "tool_result_is_final"]:
"""Execute the tool action and handle the result."""
@@ -689,6 +769,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
agent_key=agent_key,
),
)
error_event_emitted = False
track_delegation_if_needed(func_name, args_dict, self.task)
@@ -764,6 +845,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
error=e,
),
)
error_event_emitted = True
elif max_usage_reached and original_tool:
# Return error message when max usage limit is reached
result = f"Tool '{func_name}' has reached its usage limit of {original_tool.max_usage_count} times and cannot be used anymore."
@@ -792,20 +874,20 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
color="red",
)
# Emit tool usage finished event
crewai_event_bus.emit(
self,
event=ToolUsageFinishedEvent(
output=result,
tool_name=func_name,
tool_args=args_dict,
from_agent=self.agent,
from_task=self.task,
agent_key=agent_key,
started_at=started_at,
finished_at=datetime.now(),
),
)
if not error_event_emitted:
crewai_event_bus.emit(
self,
event=ToolUsageFinishedEvent(
output=result,
tool_name=func_name,
tool_args=args_dict,
from_agent=self.agent,
from_task=self.task,
agent_key=agent_key,
started_at=started_at,
finished_at=datetime.now(),
),
)
# Append tool result message
tool_message: LLMMessage = {
@@ -861,9 +943,11 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
self.state.iterations += 1
return "initialized"
@listen("initialized")
@listen(or_("initialized", "tool_completed", "native_tool_completed"))
def continue_iteration(self) -> Literal["check_iteration"]:
"""Bridge listener that connects iteration loop back to iteration check."""
if self._flow_initialized:
self._discard_or_listener(FlowMethodName("continue_iteration"))
return "check_iteration"
@router(or_(initialize_reasoning, continue_iteration))
@@ -1105,7 +1189,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
)
if self.state.ask_for_human_input:
formatted_answer = self._handle_human_feedback(formatted_answer)
formatted_answer = await self._ahandle_human_feedback(formatted_answer)
self._create_short_term_memory(formatted_answer)
self._create_long_term_memory(formatted_answer)
@@ -1319,17 +1403,22 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
Returns:
Final answer after feedback.
"""
output_str = (
str(formatted_answer.output)
if isinstance(formatted_answer.output, BaseModel)
else formatted_answer.output
)
human_feedback = self._ask_human_input(output_str)
if self._is_training_mode():
return self._handle_training_feedback(formatted_answer, human_feedback)
return self._handle_regular_feedback(formatted_answer, human_feedback)
provider = get_provider()
return provider.handle_feedback(formatted_answer, self)
async def _ahandle_human_feedback(
self, formatted_answer: AgentFinish
) -> AgentFinish:
"""Process human feedback asynchronously and refine answer.
Args:
formatted_answer: Initial agent result.
Returns:
Final answer after feedback.
"""
provider = get_provider()
return await provider.handle_feedback_async(formatted_answer, self)
def _is_training_mode(self) -> bool:
"""Check if training mode is active.
@@ -1339,101 +1428,6 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
"""
return bool(self.crew and self.crew._train)
def _handle_training_feedback(
self, initial_answer: AgentFinish, feedback: str
) -> AgentFinish:
"""Process training feedback and generate improved answer.
Args:
initial_answer: Initial agent output.
feedback: Training feedback.
Returns:
Improved answer.
"""
self._handle_crew_training_output(initial_answer, feedback)
self.state.messages.append(
format_message_for_llm(
self._i18n.slice("feedback_instructions").format(feedback=feedback)
)
)
# Re-run flow for improved answer
self.state.iterations = 0
self.state.is_finished = False
self.state.current_answer = None
self.kickoff()
# Get improved answer from state
improved_answer = self.state.current_answer
if not isinstance(improved_answer, AgentFinish):
raise RuntimeError(
"Training feedback iteration did not produce final answer"
)
self._handle_crew_training_output(improved_answer)
self.state.ask_for_human_input = False
return improved_answer
def _handle_regular_feedback(
self, current_answer: AgentFinish, initial_feedback: str
) -> AgentFinish:
"""Process regular feedback iteratively until user is satisfied.
Args:
current_answer: Current agent output.
initial_feedback: Initial user feedback.
Returns:
Final answer after iterations.
"""
feedback = initial_feedback
answer = current_answer
while self.state.ask_for_human_input:
if feedback.strip() == "":
self.state.ask_for_human_input = False
else:
answer = self._process_feedback_iteration(feedback)
output_str = (
str(answer.output)
if isinstance(answer.output, BaseModel)
else answer.output
)
feedback = self._ask_human_input(output_str)
return answer
def _process_feedback_iteration(self, feedback: str) -> AgentFinish:
"""Process a single feedback iteration and generate updated response.
Args:
feedback: User feedback.
Returns:
Updated agent response.
"""
self.state.messages.append(
format_message_for_llm(
self._i18n.slice("feedback_instructions").format(feedback=feedback)
)
)
# Re-run flow
self.state.iterations = 0
self.state.is_finished = False
self.state.current_answer = None
self.kickoff()
# Get answer from state
answer = self.state.current_answer
if not isinstance(answer, AgentFinish):
raise RuntimeError("Feedback iteration did not produce final answer")
return answer
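The regular-feedback path above loops until the reviewer submits empty input, re-running the flow after each round of feedback. That control flow can be sketched independently of the executor; every name below is illustrative, not crewai's real API:

```python
from collections.abc import Callable, Iterator


def feedback_loop(
    initial_answer: str,
    ask_human: Callable[[str], str],
    revise: Callable[[str, str], str],
) -> str:
    """Iterate until the reviewer accepts by submitting empty feedback."""
    answer = initial_answer
    feedback = ask_human(answer)
    while feedback.strip():
        # Non-empty feedback: revise the answer and ask again.
        answer = revise(answer, feedback)
        feedback = ask_human(answer)
    return answer


# Scripted reviewer: one round of feedback, then accept with "".
script: Iterator[str] = iter(["make it shorter", ""])
result = feedback_loop(
    "draft",
    ask_human=lambda answer: next(script),
    revise=lambda answer, fb: f"{answer}+{fb}",
)
assert result == "draft+make it shorter"
```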
@classmethod
def __get_pydantic_core_schema__(
cls, _source_type: Any, _handler: GetCoreSchemaHandler

View File

@@ -28,6 +28,8 @@ Example:
```
"""
from typing import Any
from crewai.flow.async_feedback.providers import ConsoleProvider
from crewai.flow.async_feedback.types import (
HumanFeedbackPending,
@@ -41,4 +43,15 @@ __all__ = [
"HumanFeedbackPending",
"HumanFeedbackProvider",
"PendingFeedbackContext",
"_extension_exports",
]
_extension_exports: dict[str, Any] = {}
def __getattr__(name: str) -> Any:
"""Support extensions via dynamic attribute lookup."""
if name in _extension_exports:
return _extension_exports[name]
msg = f"module {__name__!r} has no attribute {name!r}"
raise AttributeError(msg)
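The `__getattr__` added above uses PEP 562 module-level attribute lookup so optional extensions can inject names into the package at runtime. A minimal standalone sketch of the mechanism, using a throwaway module (the module and registered names here are hypothetical):

```python
import sys
import types
from typing import Any

# Build a throwaway module to demonstrate PEP 562 __getattr__-based
# extension exports without touching any real package.
mod = types.ModuleType("feedback_demo")
mod._extension_exports = {}


def _module_getattr(name: str) -> Any:
    """Fall back to the extension registry for unknown attributes."""
    exports = mod._extension_exports
    if name in exports:
        return exports[name]
    raise AttributeError(f"module {mod.__name__!r} has no attribute {name!r}")


# Module __getattr__ is looked up in the module's __dict__ (PEP 562).
mod.__getattr__ = _module_getattr
sys.modules["feedback_demo"] = mod

# An extension package registers its export after import time:
mod._extension_exports["ExtraProvider"] = dict  # stand-in for a real class

import feedback_demo

assert feedback_demo.ExtraProvider is dict
```

Regular attributes still resolve from the module's `__dict__` first; `__getattr__` only runs for names that are missing, so registered extensions never shadow real exports.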

View File

@@ -7,7 +7,14 @@ for building event-driven workflows with conditional execution and routing.
from __future__ import annotations
import asyncio
from collections.abc import Callable, Sequence
from collections.abc import (
Callable,
ItemsView,
Iterator,
KeysView,
Sequence,
ValuesView,
)
from concurrent.futures import Future
import copy
import inspect
@@ -45,6 +52,7 @@ from crewai.events.listeners.tracing.utils import (
has_user_declined_tracing,
set_tracing_enabled,
should_enable_tracing,
should_suppress_tracing_messages,
)
from crewai.events.types.flow_events import (
FlowCreatedEvent,
@@ -408,6 +416,132 @@ def and_(*conditions: str | FlowCondition | Callable[..., Any]) -> FlowCondition
return {"type": AND_CONDITION, "conditions": processed_conditions}
class LockedListProxy(Generic[T]):
"""Thread-safe proxy for list operations.
Wraps a list and uses a lock for all mutating operations.
"""
def __init__(self, lst: list[T], lock: threading.Lock) -> None:
self._list = lst
self._lock = lock
def append(self, item: T) -> None:
with self._lock:
self._list.append(item)
def extend(self, items: list[T]) -> None:
with self._lock:
self._list.extend(items)
def insert(self, index: int, item: T) -> None:
with self._lock:
self._list.insert(index, item)
def remove(self, item: T) -> None:
with self._lock:
self._list.remove(item)
def pop(self, index: int = -1) -> T:
with self._lock:
return self._list.pop(index)
def clear(self) -> None:
with self._lock:
self._list.clear()
def __setitem__(self, index: int, value: T) -> None:
with self._lock:
self._list[index] = value
def __delitem__(self, index: int) -> None:
with self._lock:
del self._list[index]
def __getitem__(self, index: int) -> T:
return self._list[index]
def __len__(self) -> int:
return len(self._list)
def __iter__(self) -> Iterator[T]:
return iter(self._list)
def __contains__(self, item: object) -> bool:
return item in self._list
def __repr__(self) -> str:
return repr(self._list)
def __bool__(self) -> bool:
return bool(self._list)
class LockedDictProxy(Generic[T]):
"""Thread-safe proxy for dict operations.
Wraps a dict and uses a lock for all mutating operations.
"""
def __init__(self, d: dict[str, T], lock: threading.Lock) -> None:
self._dict = d
self._lock = lock
def __setitem__(self, key: str, value: T) -> None:
with self._lock:
self._dict[key] = value
def __delitem__(self, key: str) -> None:
with self._lock:
del self._dict[key]
def pop(self, key: str, *default: T) -> T:
with self._lock:
return self._dict.pop(key, *default)
def update(self, other: dict[str, T]) -> None:
with self._lock:
self._dict.update(other)
def clear(self) -> None:
with self._lock:
self._dict.clear()
def setdefault(self, key: str, default: T) -> T:
with self._lock:
return self._dict.setdefault(key, default)
def __getitem__(self, key: str) -> T:
return self._dict[key]
def __len__(self) -> int:
return len(self._dict)
def __iter__(self) -> Iterator[str]:
return iter(self._dict)
def __contains__(self, key: object) -> bool:
return key in self._dict
def keys(self) -> KeysView[str]:
return self._dict.keys()
def values(self) -> ValuesView[T]:
return self._dict.values()
def items(self) -> ItemsView[str, T]:
return self._dict.items()
def get(self, key: str, default: T | None = None) -> T | None:
return self._dict.get(key, default)
def __repr__(self) -> str:
return repr(self._dict)
def __bool__(self) -> bool:
return bool(self._dict)
class StateProxy(Generic[T]):
"""Proxy that provides thread-safe access to flow state.
@@ -422,7 +556,13 @@ class StateProxy(Generic[T]):
object.__setattr__(self, "_proxy_lock", lock)
def __getattr__(self, name: str) -> Any:
return getattr(object.__getattribute__(self, "_proxy_state"), name)
value = getattr(object.__getattribute__(self, "_proxy_state"), name)
lock = object.__getattribute__(self, "_proxy_lock")
if isinstance(value, list):
return LockedListProxy(value, lock)
if isinstance(value, dict):
return LockedDictProxy(value, lock)
return value
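The proxies returned here route every mutating call through the state's shared lock while leaving plain reads lock-free, so concurrent listeners can safely append to a list held in flow state. A minimal self-contained sketch of the same pattern (class and names are illustrative):

```python
import threading
from collections.abc import Iterator
from typing import Generic, TypeVar

T = TypeVar("T")


class LockedList(Generic[T]):
    """Wrap a list so mutations are serialized by a shared lock."""

    def __init__(self, data: list[T], lock: threading.Lock) -> None:
        self._data = data
        self._lock = lock

    def append(self, item: T) -> None:
        with self._lock:
            self._data.append(item)

    def __len__(self) -> int:
        # Reads go straight through without taking the lock.
        return len(self._data)

    def __iter__(self) -> Iterator[T]:
        return iter(self._data)


shared: list[int] = []
proxy: LockedList[int] = LockedList(shared, threading.Lock())


def writer() -> None:
    for _ in range(1000):
        proxy.append(1)


threads = [threading.Thread(target=writer) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert len(proxy) == 4000
```

Because the proxy wraps the underlying list rather than copying it, all writers observe the same data, and the original object stored on the state remains the source of truth.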
def __setattr__(self, name: str, value: Any) -> None:
if name in ("_proxy_state", "_proxy_lock"):
@@ -1592,7 +1732,6 @@ class Flow(Generic[T], metaclass=FlowMeta):
reset_emission_counter()
reset_last_event_id()
# Emit FlowStartedEvent and log the start of the flow.
if not self.suppress_flow_events:
future = crewai_event_bus.emit(
self,
@@ -1603,7 +1742,10 @@ class Flow(Generic[T], metaclass=FlowMeta):
),
)
if future:
self._event_futures.append(future)
try:
await asyncio.wrap_future(future)
except Exception:
logger.warning("FlowStartedEvent handler failed", exc_info=True)
self._log_flow_event(
f"Flow started with ID: {self.flow_id}", color="bold magenta"
)
@@ -1695,6 +1837,12 @@ class Flow(Generic[T], metaclass=FlowMeta):
final_output = self._method_outputs[-1] if self._method_outputs else None
if self._event_futures:
await asyncio.gather(
*[asyncio.wrap_future(f) for f in self._event_futures]
)
self._event_futures.clear()
if not self.suppress_flow_events:
future = crewai_event_bus.emit(
self,
@@ -1706,13 +1854,12 @@ class Flow(Generic[T], metaclass=FlowMeta):
),
)
if future:
self._event_futures.append(future)
if self._event_futures:
await asyncio.gather(
*[asyncio.wrap_future(f) for f in self._event_futures]
)
self._event_futures.clear()
try:
await asyncio.wrap_future(future)
except Exception:
logger.warning(
"FlowFinishedEvent handler failed", exc_info=True
)
if not self.suppress_flow_events:
trace_listener = TraceCollectionListener()
@@ -1787,40 +1934,14 @@ class Flow(Generic[T], metaclass=FlowMeta):
await self._execute_listeners(start_method_name, result, finished_event_id)
# Then execute listeners for the router result (e.g., "approved")
router_result_trigger = FlowMethodName(str(result))
listeners_for_result = self._find_triggered_methods(
router_result_trigger, router_only=False
listener_result = (
self.last_human_feedback
if self.last_human_feedback is not None
else result
)
await self._execute_listeners(
router_result_trigger, listener_result, finished_event_id
)
if listeners_for_result:
# Pass the HumanFeedbackResult if available
listener_result = (
self.last_human_feedback
if self.last_human_feedback is not None
else result
)
racing_group = self._get_racing_group_for_listeners(
listeners_for_result
)
if racing_group:
racing_members, _ = racing_group
other_listeners = [
name
for name in listeners_for_result
if name not in racing_members
]
await self._execute_racing_listeners(
racing_members,
other_listeners,
listener_result,
finished_event_id,
)
else:
tasks = [
self._execute_single_listener(
listener_name, listener_result, finished_event_id
)
for listener_name in listeners_for_result
]
await asyncio.gather(*tasks)
else:
await self._execute_listeners(start_method_name, result, finished_event_id)
@@ -2026,15 +2147,14 @@ class Flow(Generic[T], metaclass=FlowMeta):
router_input = router_result_to_feedback.get(
str(current_trigger), current_result
)
current_triggering_event_id = await self._execute_single_listener(
(
router_result,
current_triggering_event_id,
) = await self._execute_single_listener(
router_name, router_input, current_triggering_event_id
)
# After executing router, the router's result is the path
router_result = (
self._method_outputs[-1] if self._method_outputs else None
)
if router_result: # Only add non-None results
router_results.append(router_result)
router_results.append(FlowMethodName(str(router_result)))
# If this was a human_feedback router, map the outcome to the feedback
if self.last_human_feedback is not None:
router_result_to_feedback[str(router_result)] = (
@@ -2074,12 +2194,14 @@ class Flow(Generic[T], metaclass=FlowMeta):
racing_members,
other_listeners,
listener_result,
triggering_event_id,
current_triggering_event_id,
)
else:
tasks = [
self._execute_single_listener(
listener_name, listener_result, triggering_event_id
listener_name,
listener_result,
current_triggering_event_id,
)
for listener_name in listeners_triggered
]
@@ -2262,7 +2384,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
listener_name: FlowMethodName,
result: Any,
triggering_event_id: str | None = None,
) -> str | None:
) -> tuple[Any, str | None]:
"""Executes a single listener method with proper event handling.
This internal method manages the execution of an individual listener,
@@ -2275,8 +2397,9 @@ class Flow(Generic[T], metaclass=FlowMeta):
used for causal chain tracking.
Returns:
The event_id of the MethodExecutionFinishedEvent emitted by this listener,
or None if events are suppressed.
A tuple of (listener_result, event_id) where listener_result is the return
value of the listener method and event_id is the MethodExecutionFinishedEvent
id, or (None, None) if skipped during resumption.
Note:
- Inspects method signature to determine if it accepts the trigger result
@@ -2302,7 +2425,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
):
# This conditional start was executed, continue its chain
await self._execute_start_method(start_method_name)
return None
return (None, None)
# For cyclic flows, clear from completed to allow re-execution
self._completed_methods.discard(listener_name)
# Also clear from fired OR listeners for cyclic flows
@@ -2340,46 +2463,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
listener_name, listener_result, finished_event_id
)
# If this listener is also a router (e.g., has @human_feedback with emit),
# we need to trigger listeners for the router result as well
if listener_name in self._routers and listener_result is not None:
router_result_trigger = FlowMethodName(str(listener_result))
listeners_for_result = self._find_triggered_methods(
router_result_trigger, router_only=False
)
if listeners_for_result:
# Pass the HumanFeedbackResult if available
feedback_result = (
self.last_human_feedback
if self.last_human_feedback is not None
else listener_result
)
racing_group = self._get_racing_group_for_listeners(
listeners_for_result
)
if racing_group:
racing_members, _ = racing_group
other_listeners = [
name
for name in listeners_for_result
if name not in racing_members
]
await self._execute_racing_listeners(
racing_members,
other_listeners,
feedback_result,
finished_event_id,
)
else:
tasks = [
self._execute_single_listener(
name, feedback_result, finished_event_id
)
for name in listeners_for_result
]
await asyncio.gather(*tasks)
return finished_event_id
return (listener_result, finished_event_id)
except Exception as e:
# Don't log HumanFeedbackPending as an error - it's expected control flow
@@ -2626,6 +2710,8 @@ class Flow(Generic[T], metaclass=FlowMeta):
@staticmethod
def _show_tracing_disabled_message() -> None:
"""Show a message when tracing is disabled."""
if should_suppress_tracing_messages():
return
console = Console()
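The event hunks above replace fire-and-forget future collection with `await asyncio.wrap_future(future)`, so a failing `FlowStartedEvent`/`FlowFinishedEvent` handler surfaces (and is logged) before the flow proceeds. A minimal sketch of that bridge between a thread-pool `Future` and the event loop (the event-bus stand-in is hypothetical):

```python
import asyncio
from concurrent.futures import Future, ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=1)
results: list[str] = []


def emit_event(payload: str) -> Future[None]:
    """Stand-in for an event bus that runs handlers on a worker thread."""
    return pool.submit(results.append, payload)


async def main() -> None:
    future = emit_event("flow_started")
    try:
        # wrap_future turns the concurrent.futures.Future into an awaitable,
        # so a handler exception propagates here instead of being dropped.
        await asyncio.wrap_future(future)
    except Exception:
        print("FlowStartedEvent handler failed")


asyncio.run(main())
assert results == ["flow_started"]
```

Awaiting each future inline also guarantees ordering: the started-event handlers finish before the first flow method runs, which the old deferred `gather` at flow completion did not.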

View File

@@ -3,7 +3,12 @@ from __future__ import annotations
from typing import TYPE_CHECKING, Any, cast
from crewai.events.event_listener import event_listener
from crewai.hooks.types import AfterLLMCallHookType, BeforeLLMCallHookType
from crewai.hooks.types import (
AfterLLMCallHookCallable,
AfterLLMCallHookType,
BeforeLLMCallHookCallable,
BeforeLLMCallHookType,
)
from crewai.utilities.printer import Printer
@@ -149,12 +154,12 @@ class LLMCallHookContext:
event_listener.formatter.resume_live_updates()
_before_llm_call_hooks: list[BeforeLLMCallHookType] = []
_after_llm_call_hooks: list[AfterLLMCallHookType] = []
_before_llm_call_hooks: list[BeforeLLMCallHookType | BeforeLLMCallHookCallable] = []
_after_llm_call_hooks: list[AfterLLMCallHookType | AfterLLMCallHookCallable] = []
def register_before_llm_call_hook(
hook: BeforeLLMCallHookType,
hook: BeforeLLMCallHookType | BeforeLLMCallHookCallable,
) -> None:
"""Register a global before_llm_call hook.
@@ -190,7 +195,7 @@ def register_before_llm_call_hook(
def register_after_llm_call_hook(
hook: AfterLLMCallHookType,
hook: AfterLLMCallHookType | AfterLLMCallHookCallable,
) -> None:
"""Register a global after_llm_call hook.
@@ -217,7 +222,9 @@ def register_after_llm_call_hook(
_after_llm_call_hooks.append(hook)
def get_before_llm_call_hooks() -> list[BeforeLLMCallHookType]:
def get_before_llm_call_hooks() -> list[
BeforeLLMCallHookType | BeforeLLMCallHookCallable
]:
"""Get all registered global before_llm_call hooks.
Returns:
@@ -226,7 +233,7 @@ def get_before_llm_call_hooks() -> list[BeforeLLMCallHookType]:
return _before_llm_call_hooks.copy()
def get_after_llm_call_hooks() -> list[AfterLLMCallHookType]:
def get_after_llm_call_hooks() -> list[AfterLLMCallHookType | AfterLLMCallHookCallable]:
"""Get all registered global after_llm_call hooks.
Returns:
@@ -236,7 +243,7 @@ def get_after_llm_call_hooks() -> list[AfterLLMCallHookType]:
def unregister_before_llm_call_hook(
hook: BeforeLLMCallHookType,
hook: BeforeLLMCallHookType | BeforeLLMCallHookCallable,
) -> bool:
"""Unregister a specific global before_llm_call hook.
@@ -262,7 +269,7 @@ def unregister_before_llm_call_hook(
def unregister_after_llm_call_hook(
hook: AfterLLMCallHookType,
hook: AfterLLMCallHookType | AfterLLMCallHookCallable,
) -> bool:
"""Unregister a specific global after_llm_call hook.

View File

@@ -3,7 +3,12 @@ from __future__ import annotations
from typing import TYPE_CHECKING, Any
from crewai.events.event_listener import event_listener
from crewai.hooks.types import AfterToolCallHookType, BeforeToolCallHookType
from crewai.hooks.types import (
AfterToolCallHookCallable,
AfterToolCallHookType,
BeforeToolCallHookCallable,
BeforeToolCallHookType,
)
from crewai.utilities.printer import Printer
@@ -112,12 +117,12 @@ class ToolCallHookContext:
# Global hook registries
_before_tool_call_hooks: list[BeforeToolCallHookType] = []
_after_tool_call_hooks: list[AfterToolCallHookType] = []
_before_tool_call_hooks: list[BeforeToolCallHookType | BeforeToolCallHookCallable] = []
_after_tool_call_hooks: list[AfterToolCallHookType | AfterToolCallHookCallable] = []
def register_before_tool_call_hook(
hook: BeforeToolCallHookType,
hook: BeforeToolCallHookType | BeforeToolCallHookCallable,
) -> None:
"""Register a global before_tool_call hook.
@@ -154,7 +159,7 @@ def register_before_tool_call_hook(
def register_after_tool_call_hook(
hook: AfterToolCallHookType,
hook: AfterToolCallHookType | AfterToolCallHookCallable,
) -> None:
"""Register a global after_tool_call hook.
@@ -184,7 +189,9 @@ def register_after_tool_call_hook(
_after_tool_call_hooks.append(hook)
def get_before_tool_call_hooks() -> list[BeforeToolCallHookType]:
def get_before_tool_call_hooks() -> list[
BeforeToolCallHookType | BeforeToolCallHookCallable
]:
"""Get all registered global before_tool_call hooks.
Returns:
@@ -193,7 +200,9 @@ def get_before_tool_call_hooks() -> list[BeforeToolCallHookType]:
return _before_tool_call_hooks.copy()
def get_after_tool_call_hooks() -> list[AfterToolCallHookType]:
def get_after_tool_call_hooks() -> list[
AfterToolCallHookType | AfterToolCallHookCallable
]:
"""Get all registered global after_tool_call hooks.
Returns:
@@ -203,7 +212,7 @@ def get_after_tool_call_hooks() -> list[AfterToolCallHookType]:
def unregister_before_tool_call_hook(
hook: BeforeToolCallHookType,
hook: BeforeToolCallHookType | BeforeToolCallHookCallable,
) -> bool:
"""Unregister a specific global before_tool_call hook.
@@ -229,7 +238,7 @@ def unregister_before_tool_call_hook(
def unregister_after_tool_call_hook(
hook: AfterToolCallHookType,
hook: AfterToolCallHookType | AfterToolCallHookCallable,
) -> bool:
"""Unregister a specific global after_tool_call hook.

View File

@@ -0,0 +1 @@
"""Knowledge source utilities."""

View File

@@ -0,0 +1,70 @@
"""Helper utilities for knowledge sources."""
from typing import Any, ClassVar
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.knowledge.source.csv_knowledge_source import CSVKnowledgeSource
from crewai.knowledge.source.excel_knowledge_source import ExcelKnowledgeSource
from crewai.knowledge.source.json_knowledge_source import JSONKnowledgeSource
from crewai.knowledge.source.pdf_knowledge_source import PDFKnowledgeSource
from crewai.knowledge.source.text_file_knowledge_source import TextFileKnowledgeSource
class SourceHelper:
"""Helper class for creating and managing knowledge sources."""
SUPPORTED_FILE_TYPES: ClassVar[list[str]] = [
".csv",
".pdf",
".json",
".txt",
".xlsx",
".xls",
]
_FILE_TYPE_MAP: ClassVar[dict[str, type[BaseKnowledgeSource]]] = {
".csv": CSVKnowledgeSource,
".pdf": PDFKnowledgeSource,
".json": JSONKnowledgeSource,
".txt": TextFileKnowledgeSource,
".xlsx": ExcelKnowledgeSource,
".xls": ExcelKnowledgeSource,
}
@classmethod
def is_supported_file(cls, file_path: str) -> bool:
"""Check if a file type is supported.
Args:
file_path: Path to the file.
Returns:
True if the file type is supported.
"""
return file_path.lower().endswith(tuple(cls.SUPPORTED_FILE_TYPES))
@classmethod
def get_source(
cls, file_path: str, metadata: dict[str, Any] | None = None
) -> BaseKnowledgeSource:
"""Create appropriate KnowledgeSource based on file extension.
Args:
file_path: Path to the file.
metadata: Optional metadata to attach to the source.
Returns:
The appropriate KnowledgeSource instance.
Raises:
ValueError: If the file type is not supported.
"""
if not cls.is_supported_file(file_path):
raise ValueError(f"Unsupported file type: {file_path}")
lower_path = file_path.lower()
for ext, source_cls in cls._FILE_TYPE_MAP.items():
if lower_path.endswith(ext):
return source_cls(file_path=[file_path], metadata=metadata)
raise ValueError(f"Unsupported file type: {file_path}")
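`SourceHelper.get_source` above dispatches on the file extension via an `endswith` scan over `_FILE_TYPE_MAP`. The same idea can be sketched with a direct suffix lookup, which avoids the linear scan; the loader functions below are illustrative stand-ins for the real `KnowledgeSource` classes:

```python
from collections.abc import Callable
from pathlib import Path

# Illustrative loaders standing in for CSVKnowledgeSource, PDFKnowledgeSource, etc.
def load_csv(path: str) -> str:
    return f"csv:{path}"


def load_pdf(path: str) -> str:
    return f"pdf:{path}"


LOADERS: dict[str, Callable[[str], str]] = {".csv": load_csv, ".pdf": load_pdf}


def get_source(file_path: str) -> str:
    """Dispatch on the lowercased file suffix, raising for unsupported types."""
    loader = LOADERS.get(Path(file_path).suffix.lower())
    if loader is None:
        raise ValueError(f"Unsupported file type: {file_path}")
    return loader(file_path)


assert get_source("data/report.CSV") == "csv:data/report.CSV"
```

Using `Path(...).suffix` as the dict key also sidesteps accidental suffix overlap that `endswith` matching can introduce when one extension is a prefix of another.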

View File

@@ -4,6 +4,7 @@ import json
import logging
import os
from typing import TYPE_CHECKING, Any, TypedDict
from urllib.parse import urlparse
from pydantic import BaseModel
from typing_extensions import Self
@@ -175,11 +176,51 @@ class AzureCompletion(BaseLLM):
prefix in model.lower() for prefix in ["gpt-", "o1-", "text-"]
)
self.is_azure_openai_endpoint = (
"openai.azure.com" in self.endpoint
and "/openai/deployments/" in self.endpoint
self.is_azure_openai_endpoint = self._is_azure_openai_deployment_endpoint(
self.endpoint
)
@staticmethod
def _parse_endpoint_url(endpoint: str):
parsed_endpoint = urlparse(endpoint)
if parsed_endpoint.hostname:
return parsed_endpoint
# Support endpoint values without a URL scheme.
return urlparse(f"https://{endpoint}")
@staticmethod
def _is_azure_openai_hostname(endpoint: str) -> bool:
parsed_endpoint = AzureCompletion._parse_endpoint_url(endpoint)
hostname = parsed_endpoint.hostname or ""
labels = [label for label in hostname.lower().split(".") if label]
return len(labels) >= 3 and labels[-3:] == ["openai", "azure", "com"]
@staticmethod
def _get_endpoint_path_segments(endpoint: str) -> list[str]:
parsed_endpoint = AzureCompletion._parse_endpoint_url(endpoint)
return [segment for segment in parsed_endpoint.path.split("/") if segment]
@staticmethod
def _is_azure_openai_deployment_endpoint(endpoint: str) -> bool:
if not AzureCompletion._is_azure_openai_hostname(endpoint):
return False
path_segments = AzureCompletion._get_endpoint_path_segments(endpoint)
return len(path_segments) >= 3 and path_segments[:2] == [
"openai",
"deployments",
]
@staticmethod
def _is_azure_openai_deployments_collection(endpoint: str) -> bool:
if not AzureCompletion._is_azure_openai_hostname(endpoint):
return False
path_segments = AzureCompletion._get_endpoint_path_segments(endpoint)
return path_segments == ["openai", "deployments"]
@staticmethod
def _validate_and_fix_endpoint(endpoint: str, model: str) -> str:
"""Validate and fix Azure endpoint URL format.
@@ -194,10 +235,12 @@ class AzureCompletion(BaseLLM):
Returns:
Validated and potentially corrected endpoint URL
"""
if "openai.azure.com" in endpoint and "/openai/deployments/" not in endpoint:
if AzureCompletion._is_azure_openai_hostname(
endpoint
) and not AzureCompletion._is_azure_openai_deployment_endpoint(endpoint):
endpoint = endpoint.rstrip("/")
if not endpoint.endswith("/openai/deployments"):
if not AzureCompletion._is_azure_openai_deployments_collection(endpoint):
deployment_name = model.replace("azure/", "")
endpoint = f"{endpoint}/openai/deployments/{deployment_name}"
logging.info(f"Constructed Azure OpenAI endpoint URL: {endpoint}")
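The CodeQL fix above replaces the substring test `"openai.azure.com" in endpoint` with a parsed-hostname label comparison: a substring match also accepts attacker-controlled hosts such as `openai.azure.com.evil.example`, or URLs that merely mention the domain in a query string. A standalone sketch of the hardened check:

```python
from urllib.parse import urlparse


def is_azure_openai_hostname(endpoint: str) -> bool:
    """Accept only hostnames whose final labels are exactly openai.azure.com."""
    parsed = urlparse(endpoint)
    if not parsed.hostname:
        # Tolerate endpoint values supplied without a URL scheme.
        parsed = urlparse(f"https://{endpoint}")
    hostname = (parsed.hostname or "").lower()
    labels = [label for label in hostname.split(".") if label]
    return len(labels) >= 3 and labels[-3:] == ["openai", "azure", "com"]


assert is_azure_openai_hostname("https://my-resource.openai.azure.com")
assert is_azure_openai_hostname("my-resource.openai.azure.com/openai/deployments/gpt4")
# A naive substring check would wrongly accept both of these:
assert not is_azure_openai_hostname("https://openai.azure.com.evil.example")
assert not is_azure_openai_hostname("https://evil.example/?u=openai.azure.com")
```

Comparing the split label list rather than the raw string is what closes the hole: `endswith("openai.azure.com")` alone would still pass `notopenai.azure.com`, while the label comparison requires an exact `openai` label at the right position.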

View File

@@ -420,7 +420,6 @@ class MCPClient:
return [
{
"name": sanitize_tool_name(tool.name),
"original_name": tool.name,
"description": getattr(tool, "description", ""),
"inputSchema": getattr(tool, "inputSchema", {}),
}

View File

@@ -27,6 +27,8 @@ if TYPE_CHECKING:
from crewai import Agent, Task
from crewai.agents.cache.cache_handler import CacheHandler
from crewai.crews.crew_output import CrewOutput
from crewai.hooks.llm_hooks import LLMCallHookContext
from crewai.hooks.tool_hooks import ToolCallHookContext
from crewai.project.wrappers import (
CrewInstance,
OutputJsonClass,
@@ -34,6 +36,8 @@ if TYPE_CHECKING:
)
from crewai.tasks.task_output import TaskOutput
_post_initialize_crew_hooks: list[Callable[[Any], None]] = []
class AgentConfig(TypedDict, total=False):
"""Type definition for agent configuration dictionary.
@@ -266,6 +270,9 @@ class CrewBaseMeta(type):
instance.map_all_agent_variables()
instance.map_all_task_variables()
for hook in _post_initialize_crew_hooks:
hook(instance)
original_methods = {
name: method
for name, method in cls.__dict__.items()
@@ -485,47 +492,61 @@ def _register_crew_hooks(instance: CrewInstance, cls: type) -> None:
if has_agent_filter:
agents_filter = hook_method._filter_agents
def make_filtered_before_llm(bound_fn, agents_list):
def filtered(context):
def make_filtered_before_llm(
bound_fn: Callable[[LLMCallHookContext], bool | None],
agents_list: list[str],
) -> Callable[[LLMCallHookContext], bool | None]:
def filtered(context: LLMCallHookContext) -> bool | None:
if context.agent and context.agent.role not in agents_list:
return None
return bound_fn(context)
return filtered
final_hook = make_filtered_before_llm(bound_hook, agents_filter)
before_llm_hook = make_filtered_before_llm(bound_hook, agents_filter)
else:
final_hook = bound_hook
before_llm_hook = bound_hook
register_before_llm_call_hook(final_hook)
instance._registered_hook_functions.append(("before_llm_call", final_hook))
register_before_llm_call_hook(before_llm_hook)
instance._registered_hook_functions.append(
("before_llm_call", before_llm_hook)
)
if hasattr(hook_method, "is_after_llm_call_hook"):
if has_agent_filter:
agents_filter = hook_method._filter_agents
def make_filtered_after_llm(bound_fn, agents_list):
def filtered(context):
def make_filtered_after_llm(
bound_fn: Callable[[LLMCallHookContext], str | None],
agents_list: list[str],
) -> Callable[[LLMCallHookContext], str | None]:
def filtered(context: LLMCallHookContext) -> str | None:
if context.agent and context.agent.role not in agents_list:
return None
return bound_fn(context)
return filtered
final_hook = make_filtered_after_llm(bound_hook, agents_filter)
after_llm_hook = make_filtered_after_llm(bound_hook, agents_filter)
else:
final_hook = bound_hook
after_llm_hook = bound_hook
register_after_llm_call_hook(final_hook)
instance._registered_hook_functions.append(("after_llm_call", final_hook))
register_after_llm_call_hook(after_llm_hook)
instance._registered_hook_functions.append(
("after_llm_call", after_llm_hook)
)
if hasattr(hook_method, "is_before_tool_call_hook"):
if has_tool_filter or has_agent_filter:
tools_filter = getattr(hook_method, "_filter_tools", None)
agents_filter = getattr(hook_method, "_filter_agents", None)
def make_filtered_before_tool(bound_fn, tools_list, agents_list):
def filtered(context):
def make_filtered_before_tool(
bound_fn: Callable[[ToolCallHookContext], bool | None],
tools_list: list[str] | None,
agents_list: list[str] | None,
) -> Callable[[ToolCallHookContext], bool | None]:
def filtered(context: ToolCallHookContext) -> bool | None:
if tools_list and context.tool_name not in tools_list:
return None
if (
@@ -538,22 +559,28 @@ def _register_crew_hooks(instance: CrewInstance, cls: type) -> None:
return filtered
final_hook = make_filtered_before_tool(
before_tool_hook = make_filtered_before_tool(
bound_hook, tools_filter, agents_filter
)
else:
final_hook = bound_hook
before_tool_hook = bound_hook
register_before_tool_call_hook(final_hook)
instance._registered_hook_functions.append(("before_tool_call", final_hook))
register_before_tool_call_hook(before_tool_hook)
instance._registered_hook_functions.append(
("before_tool_call", before_tool_hook)
)
if hasattr(hook_method, "is_after_tool_call_hook"):
if has_tool_filter or has_agent_filter:
tools_filter = getattr(hook_method, "_filter_tools", None)
agents_filter = getattr(hook_method, "_filter_agents", None)
def make_filtered_after_tool(bound_fn, tools_list, agents_list):
def filtered(context):
def make_filtered_after_tool(
bound_fn: Callable[[ToolCallHookContext], str | None],
tools_list: list[str] | None,
agents_list: list[str] | None,
) -> Callable[[ToolCallHookContext], str | None]:
def filtered(context: ToolCallHookContext) -> str | None:
if tools_list and context.tool_name not in tools_list:
return None
if (
@@ -566,14 +593,16 @@ def _register_crew_hooks(instance: CrewInstance, cls: type) -> None:
return filtered
final_hook = make_filtered_after_tool(
after_tool_hook = make_filtered_after_tool(
bound_hook, tools_filter, agents_filter
)
else:
final_hook = bound_hook
after_tool_hook = bound_hook
register_after_tool_call_hook(final_hook)
instance._registered_hook_functions.append(("after_tool_call", final_hook))
register_after_tool_call_hook(after_tool_hook)
instance._registered_hook_functions.append(
("after_tool_call", after_tool_hook)
)
instance._hooks_being_registered = False

View File

@@ -72,6 +72,8 @@ class CrewInstance(Protocol):
__crew_metadata__: CrewMetadata
_mcp_server_adapter: Any
_all_methods: dict[str, Callable[..., Any]]
_registered_hook_functions: list[tuple[str, Callable[..., Any]]]
_hooks_being_registered: bool
agents: list[Agent]
tasks: list[Task]
base_directory: Path

View File

@@ -31,6 +31,7 @@ from pydantic_core import PydanticCustomError
from typing_extensions import Self
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.context import reset_current_task_id, set_current_task_id
from crewai.core.providers.content_processor import process_content
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.task_events import (
@@ -561,6 +562,7 @@ class Task(BaseModel):
tools: list[Any] | None,
) -> TaskOutput:
"""Run the core execution logic of the task asynchronously."""
task_id_token = set_current_task_id(str(self.id))
self._store_input_files()
try:
agent = agent or self.agent
@@ -648,6 +650,7 @@ class Task(BaseModel):
raise e # Re-raise the exception after emitting the event
finally:
clear_task_files(self.id)
reset_current_task_id(task_id_token)
def _execute_core(
self,
@@ -656,6 +659,7 @@ class Task(BaseModel):
tools: list[Any] | None,
) -> TaskOutput:
"""Run the core execution logic of the task."""
task_id_token = set_current_task_id(str(self.id))
self._store_input_files()
try:
agent = agent or self.agent
@@ -744,6 +748,7 @@ class Task(BaseModel):
raise e # Re-raise the exception after emitting the event
finally:
clear_task_files(self.id)
reset_current_task_id(task_id_token)
def _post_agent_execution(self, agent: BaseAgent) -> None:
pass

View File

@@ -6,6 +6,7 @@ Classes:
HallucinationGuardrail: Placeholder guardrail that validates task outputs.
"""
from collections.abc import Callable
from typing import Any
from crewai.llm import LLM
@@ -13,32 +14,36 @@ from crewai.tasks.task_output import TaskOutput
from crewai.utilities.logger import Logger
_validate_output_hook: Callable[..., tuple[bool, Any]] | None = None
class HallucinationGuardrail:
"""Placeholder for the HallucinationGuardrail feature.
Attributes:
context: The reference context that outputs would be checked against.
context: Optional reference context that outputs would be checked against.
llm: The language model that would be used for evaluation.
threshold: Optional minimum faithfulness score that would be required to pass.
tool_response: Optional tool response information that would be used in evaluation.
Examples:
>>> # Basic usage with default verdict logic
>>> # Basic usage without context (uses task expected_output as context)
>>> guardrail = HallucinationGuardrail(llm=agent.llm)
>>> # With context for reference
>>> guardrail = HallucinationGuardrail(
... context="AI helps with various tasks including analysis and generation.",
... llm=agent.llm,
... context="AI helps with various tasks including analysis and generation.",
... )
>>> # With custom threshold for stricter validation
>>> strict_guardrail = HallucinationGuardrail(
... context="Quantum computing uses qubits in superposition.",
... llm=agent.llm,
... threshold=8.0, # Would require score >= 8 to pass in enterprise version
... threshold=8.0, # Require score >= 8 to pass
... )
>>> # With tool response for additional context
>>> guardrail_with_tools = HallucinationGuardrail(
... context="The current weather data",
... llm=agent.llm,
... tool_response="Weather API returned: Temperature 22°C, Humidity 65%",
... )
@@ -46,16 +51,17 @@ class HallucinationGuardrail:
def __init__(
self,
context: str,
llm: LLM,
context: str | None = None,
threshold: float | None = None,
tool_response: str = "",
):
"""Initialize the HallucinationGuardrail placeholder.
Args:
context: The reference context that outputs would be checked against.
llm: The language model that would be used for evaluation.
context: Optional reference context that outputs would be checked against.
If not provided, the task's expected_output will be used as context.
threshold: Optional minimum faithfulness score that would be required to pass.
tool_response: Optional tool response information that would be used in evaluation.
"""
@@ -78,16 +84,17 @@ class HallucinationGuardrail:
def __call__(self, task_output: TaskOutput) -> tuple[bool, Any]:
"""Validate a task output against hallucination criteria.
In the open-source version, this method reports the output as valid unless a validation hook is installed.
Args:
task_output: The output to be validated.
Returns:
A tuple containing:
- True if validation passed, False otherwise
- The raw task output if valid, or error feedback if invalid
"""
if callable(_validate_output_hook):
return _validate_output_hook(self, task_output)
self._logger.log(
"warning",
"Premium hallucination detection skipped (use for free at https://app.crewai.com)\n",

View File

@@ -1,6 +1,10 @@
import asyncio
from collections.abc import Coroutine
import inspect
from typing import Any
from pydantic import BaseModel, Field
from typing_extensions import TypeIs
from crewai.agent import Agent
from crewai.lite_agent_output import LiteAgentOutput
@@ -8,6 +12,13 @@ from crewai.llms.base_llm import BaseLLM
from crewai.tasks.task_output import TaskOutput
def _is_coroutine(
obj: LiteAgentOutput | Coroutine[Any, Any, LiteAgentOutput],
) -> TypeIs[Coroutine[Any, Any, LiteAgentOutput]]:
"""Check if obj is a coroutine for type narrowing."""
return inspect.iscoroutine(obj)
class LLMGuardrailResult(BaseModel):
valid: bool = Field(
description="Whether the task output complies with the guardrail"
@@ -62,7 +73,10 @@ class LLMGuardrail:
- If the Task result complies with the guardrail, saying that is valid
"""
kickoff_result = agent.kickoff(query, response_format=LLMGuardrailResult)
if _is_coroutine(kickoff_result):
return asyncio.run(kickoff_result)
return kickoff_result
def __call__(self, task_output: TaskOutput) -> tuple[bool, Any]:
"""Validates the output of a task based on specified criteria.

View File

@@ -903,7 +903,7 @@ class Telemetry:
{
"id": str(task.id),
"description": task.description,
"output": task.output.raw if task.output else "",
}
for task in crew.tasks
]
@@ -923,6 +923,9 @@ class Telemetry:
value: The attribute value.
"""
if span is None:
return
def _operation() -> None:
return span.set_attribute(key, value)

View File

@@ -27,16 +27,14 @@ class MCPNativeTool(BaseTool):
tool_name: str,
tool_schema: dict[str, Any],
server_name: str,
) -> None:
"""Initialize native MCP tool.
Args:
mcp_client: MCPClient instance with active session.
tool_name: Original name of the tool on the MCP server.
tool_schema: Schema information for the tool.
server_name: Name of the MCP server for prefixing.
"""
# Create tool name with server prefix to avoid conflicts
prefixed_name = f"{server_name}_{tool_name}"
@@ -59,7 +57,7 @@ class MCPNativeTool(BaseTool):
# Set instance attributes after super().__init__
self._mcp_client = mcp_client
self._original_tool_name = tool_name
self._server_name = server_name
# self._logger = logging.getLogger(__name__)
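The naming scheme this change simplifies can be sketched as below. The class is illustrative (not crewai's `MCPNativeTool`): the server-prefixed name is what the agent addresses, while the unprefixed original is kept for the actual call to the MCP server.

```python
class MCPToolNaming:
    """Illustrative sketch of the server-prefix naming scheme."""

    def __init__(self, tool_name: str, server_name: str) -> None:
        # Prefix with the server name so tools from different MCP servers
        # cannot collide in the agent's tool registry.
        self.name = f"{server_name}_{tool_name}"
        # The original name is what the MCP server itself understands.
        self._original_tool_name = tool_name
        self._server_name = server_name
```

For example, `MCPToolNaming("create_issue", "github")` exposes `github_create_issue` to the agent while calling `create_issue` on the server.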

View File

@@ -270,6 +270,7 @@ class ToolUsage:
result = None # type: ignore
should_retry = False
available_tool = None
error_event_emitted = False
try:
if self.tools_handler and self.tools_handler.cache:
@@ -408,6 +409,7 @@ class ToolUsage:
except Exception as e:
self.on_tool_error(tool=tool, tool_calling=calling, e=e)
error_event_emitted = True
self._run_attempts += 1
if self._run_attempts > self._max_parsing_attempts:
self._telemetry.tool_usage_error(llm=self.function_calling_llm)
@@ -435,7 +437,7 @@ class ToolUsage:
result = self._format_result(result=result)
finally:
if started_event_emitted and not error_event_emitted:
self.on_tool_use_finished(
tool=tool,
tool_calling=calling,
@@ -500,6 +502,7 @@ class ToolUsage:
result = None # type: ignore
should_retry = False
available_tool = None
error_event_emitted = False
try:
if self.tools_handler and self.tools_handler.cache:
@@ -638,6 +641,7 @@ class ToolUsage:
except Exception as e:
self.on_tool_error(tool=tool, tool_calling=calling, e=e)
error_event_emitted = True
self._run_attempts += 1
if self._run_attempts > self._max_parsing_attempts:
self._telemetry.tool_usage_error(llm=self.function_calling_llm)
@@ -665,7 +669,7 @@ class ToolUsage:
result = self._format_result(result=result)
finally:
if started_event_emitted and not error_event_emitted:
self.on_tool_use_finished(
tool=tool,
tool_calling=calling,
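The event bookkeeping this hunk fixes can be sketched as a small harness. Function and callback names here are invented for illustration; the point is the one the diff makes: `finished` fires only when `started` fired and no error event was already emitted, so a failed tool run no longer produces both an error and a finished event.

```python
from typing import Any, Callable


def run_tool(
    tool: Callable[[], Any],
    on_started: Callable[[], None],
    on_finished: Callable[[Any], None],
    on_error: Callable[[Exception], None],
) -> Any:
    """Sketch of the try/except/finally event flow from the diff."""
    started_event_emitted = False
    error_event_emitted = False
    result = None
    try:
        on_started()
        started_event_emitted = True
        result = tool()
    except Exception as exc:
        on_error(exc)
        error_event_emitted = True  # suppresses the duplicate terminal event
        result = f"error: {exc}"
    finally:
        # Before the fix this only checked started_event_emitted, so an
        # errored run emitted both 'error' and 'finished'.
        if started_event_emitted and not error_event_emitted:
            on_finished(result)
    return result
```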

View File

@@ -1,37 +0,0 @@
"""Human-in-the-loop (HITL) type definitions.
This module provides type definitions for human-in-the-loop interactions
in crew executions.
"""
from typing import TypedDict
class HITLResumeInfo(TypedDict, total=False):
"""HITL resume information passed from flow to crew.
Attributes:
task_id: Unique identifier for the task.
crew_execution_id: Unique identifier for the crew execution.
task_key: Key identifying the specific task.
task_output: Output from the task before human intervention.
human_feedback: Feedback provided by the human.
previous_messages: History of messages in the conversation.
"""
task_id: str
crew_execution_id: str
task_key: str
task_output: str
human_feedback: str
previous_messages: list[dict[str, str]]
class CrewInputsWithHITL(TypedDict, total=False):
"""Crew inputs that may contain HITL resume information.
Attributes:
_hitl_resume: Optional HITL resume information for continuing execution.
"""
_hitl_resume: HITLResumeInfo

View File

@@ -42,6 +42,8 @@ if TYPE_CHECKING:
from crewai.llm import LLM
from crewai.task import Task
_create_plus_client_hook: Callable[[], Any] | None = None
class SummaryContent(TypedDict):
"""Structure for summary content entries.
@@ -91,7 +93,11 @@ def parse_tools(tools: list[BaseTool]) -> list[CrewStructuredTool]:
for tool in tools:
if isinstance(tool, CrewAITool):
structured_tool = tool.to_structured_tool()
structured_tool.current_usage_count = 0
if structured_tool._original_tool:
structured_tool._original_tool.current_usage_count = 0
tools_list.append(structured_tool)
else:
raise ValueError("Tool is not a CrewStructuredTool or BaseTool")
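The usage-counter reset that `parse_tools` now performs can be sketched with stand-in classes (these are illustrative, not crewai's `CrewStructuredTool`): the counter is zeroed on both the structured wrapper and the original tool it wraps, so counts from a previous run do not leak into the next one.

```python
class OriginalTool:
    """Stand-in for the wrapped tool; starts with a stale count."""

    def __init__(self) -> None:
        self.current_usage_count = 3  # left over from a previous run


class StructuredTool:
    """Stand-in for the structured wrapper around a tool."""

    def __init__(self, original: "OriginalTool | None") -> None:
        self.current_usage_count = 3
        self._original_tool = original


def reset_usage(structured: StructuredTool) -> StructuredTool:
    # Mirror the diff: reset the wrapper, then the wrapped tool if present.
    structured.current_usage_count = 0
    if structured._original_tool:
        structured._original_tool.current_usage_count = 0
    return structured
```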
@@ -818,12 +824,15 @@ def load_agent_from_repository(from_repository: str) -> dict[str, Any]:
if from_repository:
import importlib
if callable(_create_plus_client_hook):
client = _create_plus_client_hook()
else:
from crewai.cli.authentication.token import get_auth_token
from crewai.cli.plus_api import PlusAPI
client = PlusAPI(api_key=get_auth_token())
_print_current_organization()
response = asyncio.run(client.get_agent(from_repository))
if response.status_code == 404:
raise AgentRepositoryError(
f"Agent {from_repository} does not exist; make sure the name is correct and that the agent is available in your organization."

View File

@@ -1,7 +1,7 @@
from __future__ import annotations
from collections import defaultdict
from typing import TYPE_CHECKING, Any
from pydantic import BaseModel, Field, InstanceOf
from rich.box import HEAVY_EDGE
@@ -36,7 +36,13 @@ class CrewEvaluator:
iteration: The current iteration of the evaluation.
"""
def __init__(
self,
crew: Crew,
eval_llm: InstanceOf[BaseLLM] | str | None = None,
openai_model_name: str | None = None,
llm: InstanceOf[BaseLLM] | str | None = None,
) -> None:
self.crew = crew
self.llm = eval_llm
self.tasks_scores: defaultdict[int, list[float]] = defaultdict(list)
@@ -86,7 +92,9 @@ class CrewEvaluator:
"""
self.iteration = iteration
def print_crew_evaluation_result(
self, token_usage: list[dict[str, Any]] | None = None
) -> None:
"""
Prints the evaluation result of the crew in a table.
A Crew with 2 tasks using the command crewai test -n 3
@@ -204,7 +212,7 @@ class CrewEvaluator:
CrewTestResultEvent(
quality=quality_score,
execution_duration=current_task.execution_duration,
model=getattr(self.llm, "model", str(self.llm)),
crew_name=self.crew.name,
crew=self.crew,
),
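The `getattr` fallback introduced here lets `eval_llm` be either an LLM object exposing `.model` or a plain model-name string. A minimal sketch (the `FakeLLM` class is invented for illustration):

```python
class FakeLLM:
    """Stand-in for an LLM object that exposes a .model attribute."""

    model = "gpt-4o"


def model_name(llm: object) -> str:
    # Objects expose .model; anything else (e.g. a bare string) falls back
    # to its own string representation.
    return getattr(llm, "model", str(llm))
```

So `model_name(FakeLLM())` yields `"gpt-4o"`, while `model_name("gpt-4o-mini")` yields the string itself instead of raising `AttributeError`.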

View File

@@ -4,6 +4,8 @@ from __future__ import annotations
from typing import TYPE_CHECKING, Final, Literal, NamedTuple
from crewai.events.utils.console_formatter import should_suppress_console_output
if TYPE_CHECKING:
from _typeshed import SupportsWrite
@@ -77,6 +79,8 @@ class Printer:
file: A file-like object (stream); defaults to the current sys.stdout.
flush: Whether to forcibly flush the stream.
"""
if should_suppress_console_output():
return
if isinstance(content, str):
content = [ColoredText(content, color)]
print(

View File

@@ -19,6 +19,7 @@ def to_serializable(
exclude: set[str] | None = None,
max_depth: int = 5,
_current_depth: int = 0,
_ancestors: set[int] | None = None,
) -> Serializable:
"""Converts a Python object into a JSON-compatible representation.
@@ -31,6 +32,7 @@ def to_serializable(
exclude: Set of keys to exclude from the result.
max_depth: Maximum recursion depth. Defaults to 5.
_current_depth: Current recursion depth (for internal use).
_ancestors: Set of ancestor object ids for cycle detection (for internal use).
Returns:
Serializable: A JSON-compatible structure.
@@ -41,16 +43,29 @@ def to_serializable(
if exclude is None:
exclude = set()
if _ancestors is None:
_ancestors = set()
if isinstance(obj, (str, int, float, bool, type(None))):
return obj
if isinstance(obj, uuid.UUID):
return str(obj)
if isinstance(obj, (date, datetime)):
return obj.isoformat()
object_id = id(obj)
if object_id in _ancestors:
return f"<circular_ref:{type(obj).__name__}>"
new_ancestors = _ancestors | {object_id}
if isinstance(obj, (list, tuple, set)):
return [
to_serializable(
item,
exclude=exclude,
max_depth=max_depth,
_current_depth=_current_depth + 1,
_ancestors=new_ancestors,
)
for item in obj
]
@@ -61,6 +76,7 @@ def to_serializable(
exclude=exclude,
max_depth=max_depth,
_current_depth=_current_depth + 1,
_ancestors=new_ancestors,
)
for key, value in obj.items()
if key not in exclude
@@ -71,12 +87,16 @@ def to_serializable(
obj=obj.model_dump(exclude=exclude),
max_depth=max_depth,
_current_depth=_current_depth + 1,
_ancestors=new_ancestors,
)
except Exception:
try:
return {
_to_serializable_key(k): to_serializable(
v,
max_depth=max_depth,
_current_depth=_current_depth + 1,
_ancestors=new_ancestors,
)
for k, v in obj.__dict__.items()
if k not in (exclude or set())
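The cycle-detection scheme threaded through `to_serializable` above can be condensed into a self-contained sketch (simplified, without the `max_depth`/`exclude` machinery): ancestor object ids are carried down the recursion, and any object that is its own transitive ancestor short-circuits to a placeholder string.

```python
def to_serializable_sketch(obj, _ancestors=None):
    """Condensed sketch of the diff's cycle detection."""
    if _ancestors is None:
        _ancestors = set()
    if isinstance(obj, (str, int, float, bool, type(None))):
        return obj
    if id(obj) in _ancestors:
        # Same placeholder format as the diff: <circular_ref:TypeName>.
        return f"<circular_ref:{type(obj).__name__}>"
    # A fresh set per branch flags only true ancestors, so two siblings may
    # still share an object without being misreported as a cycle.
    new_ancestors = _ancestors | {id(obj)}
    if isinstance(obj, (list, tuple, set)):
        return [to_serializable_sketch(i, new_ancestors) for i in obj]
    if isinstance(obj, dict):
        return {k: to_serializable_sketch(v, new_ancestors) for k, v in obj.items()}
    return repr(obj)
```

A list that contains itself now serializes to `["<circular_ref:list>"]` instead of recursing until the depth limit.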

View File

@@ -51,6 +51,10 @@ class ConcreteAgentAdapter(BaseAgentAdapter):
# Dummy implementation for MCP tools
return []
def configure_structured_output(self, task: Any) -> None:
# Dummy implementation for structured output
pass
async def aexecute_task(
self,
task: Any,

View File

@@ -606,9 +606,10 @@ def test_lite_agent_with_invalid_llm():
@patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token"})
@patch("crewai_tools.tools.crewai_platform_tools.crewai_platform_action_tool.requests.post")
@patch("crewai_tools.tools.crewai_platform_tools.crewai_platform_tool_builder.requests.get")
@pytest.mark.vcr()
def test_agent_kickoff_with_platform_tools(mock_get, mock_post):
"""Test that Agent.kickoff() properly integrates platform tools with LiteAgent"""
mock_response = Mock()
mock_response.raise_for_status.return_value = None
@@ -632,6 +633,15 @@ def test_agent_kickoff_with_platform_tools(mock_get):
}
mock_get.return_value = mock_response
# Mock the platform tool execution
mock_post_response = Mock()
mock_post_response.ok = True
mock_post_response.json.return_value = {
"success": True,
"issue_url": "https://github.com/test/repo/issues/1"
}
mock_post.return_value = mock_post_response
agent = Agent(
role="Test Agent",
goal="Test goal",

View File

@@ -1,98 +1,227 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Test Agent. Test backstory\nYour personal goal is: Test goal\n\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\nTool Name: create_issue\nTool Arguments: {''title'': {''description'': ''Issue title'', ''type'': ''str''}, ''body'': {''description'': ''Issue body'', ''type'': ''Union[str, NoneType]''}}\nTool Description: Create a GitHub issue\nDetailed Parameter Structure:\nObject with properties:\n - title: Issue title (required)\n - body: Issue body (optional)\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought: you should always think about what to do\nAction: the action to take, only one name of [create_issue], just the name, exactly as it''s written.\nAction Input: the input to the action, just a simple JSON object, enclosed in curly braces, using \" to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce all necessary information
is gathered, return the following format:\n\n```\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n```"}, {"role": "user", "content": "Create a GitHub issue"}], "model": "gpt-3.5-turbo", "stream": false}'
body: '{"messages":[{"role":"system","content":"You are Test Agent. Test backstory\nYour
personal goal is: Test goal"},{"role":"user","content":"\nCurrent Task: Create
a GitHub issue"}],"model":"gpt-3.5-turbo","tool_choice":"auto","tools":[{"type":"function","function":{"name":"create_issue","description":"Create
a GitHub issue","strict":true,"parameters":{"additionalProperties":false,"properties":{"title":{"description":"Issue
title","title":"Title","type":"string"},"body":{"default":null,"description":"Issue
body","title":"Body","type":"string"}},"required":["title","body"],"type":"object"}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- gzip, deflate
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '1233'
- '596'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.109.1
x-stainless-arch:
- arm64
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.109.1
- 1.83.0
x-stainless-read-timeout:
- '600'
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.5
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-CULxKTEIB85AVItcEQ09z4Xi0JCID\",\n \"object\": \"chat.completion\",\n \"created\": 1761350274,\n \"model\": \"gpt-3.5-turbo-0125\",\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"I will need more specific information to create a GitHub issue. Could you please provide more details such as the title and body of the issue you would like to create?\",\n \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 255,\n \"completion_tokens\": 33,\n \"total_tokens\": 288,\n \"prompt_tokens_details\": {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\": 0,\n \"rejected_prediction_tokens\": 0\n \
\ }\n },\n \"service_tier\": \"default\",\n \"system_fingerprint\": null\n}\n"
string: "{\n \"id\": \"chatcmpl-D6L3fqygkUIZ3bN4wvSpAhdaSk7MF\",\n \"object\":
\"chat.completion\",\n \"created\": 1770403287,\n \"model\": \"gpt-3.5-turbo-0125\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_RuWuYzjzgRL3byVGhLlPi0rq\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"create_issue\",\n
\ \"arguments\": \"{\\\"title\\\":\\\"Test issue\\\",\\\"body\\\":\\\"This
is a test issue created for testing purposes.\\\"}\"\n }\n }\n
\ ],\n \"refusal\": null,\n \"annotations\": []\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"tool_calls\"\n }\n
\ ],\n \"usage\": {\n \"prompt_tokens\": 93,\n \"completion_tokens\":
28,\n \"total_tokens\": 121,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": null\n}\n"
headers:
CF-RAY:
- 993d6b4be9862379-SJC
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Fri, 24 Oct 2025 23:57:54 GMT
- Fri, 06 Feb 2026 18:41:28 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=WY9bgemMDI_hUYISAPlQ2a.DBGeZfM6AjVEa3SKNg1c-1761350274-1.0.1.1-K3Qm2cl6IlDAgmocoKZ8IMUTmue6Q81hH9stECprUq_SM8LF8rR9d1sHktvRCN3.jEM.twEuFFYDNpBnN8NBRJFZcea1yvpm8Uo0G_UhyDs; path=/; expires=Sat, 25-Oct-25 00:27:54 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
- _cfuvid=JklLS4i3hBGELpS9cz1KMpTbj72hCwP41LyXDSxWIv8-1761350274521-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
- SET-COOKIE-XXX
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- X-Request-ID
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
- OPENAI-ORG-XXX
openai-processing-ms:
- '487'
- '1406'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '526'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- '10000'
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- '50000000'
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- '9999'
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- '49999727'
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- 6ms
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- 0s
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- req_1708dc0928c64882aaa5bc2c168c140f
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are Test Agent. Test backstory\nYour
personal goal is: Test goal"},{"role":"user","content":"\nCurrent Task: Create
a GitHub issue"},{"role":"assistant","content":null,"tool_calls":[{"id":"call_RuWuYzjzgRL3byVGhLlPi0rq","type":"function","function":{"name":"create_issue","arguments":"{\"title\":\"Test
issue\",\"body\":\"This is a test issue created for testing purposes.\"}"}}]},{"role":"tool","tool_call_id":"call_RuWuYzjzgRL3byVGhLlPi0rq","name":"create_issue","content":"{\n \"success\":
true,\n \"issue_url\": \"https://github.com/test/repo/issues/1\"\n}"}],"model":"gpt-3.5-turbo","tool_choice":"auto","tools":[{"type":"function","function":{"name":"create_issue","description":"Create
a GitHub issue","strict":true,"parameters":{"additionalProperties":false,"properties":{"title":{"description":"Issue
title","title":"Title","type":"string"},"body":{"default":null,"description":"Issue
body","title":"Body","type":"string"}},"required":["title","body"],"type":"object"}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '1028'
content-type:
- application/json
cookie:
- COOKIE-XXX
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.5
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D6L3hfuBxk36LIb3ekD1IVwFD5VVL\",\n \"object\":
\"chat.completion\",\n \"created\": 1770403289,\n \"model\": \"gpt-3.5-turbo-0125\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I have successfully created a GitHub
issue for testing purposes. You can view the issue at this URL: [Test issue](https://github.com/test/repo/issues/1)\",\n
\ \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
154,\n \"completion_tokens\": 36,\n \"total_tokens\": 190,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": null\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Fri, 06 Feb 2026 18:41:29 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '888'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK

View File

@@ -1,400 +1,428 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Sports Analyst. You are an expert at gathering and organizing information. You carefully collect details and present them in a structured way.\nYour personal goal is: Gather information about the best soccer players\n\nTo give my best complete final answer to the task respond using the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user", "content": "Top 10 best players in the world?"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
body: '{"messages":[{"role":"system","content":"You are Sports Analyst. You are
an expert at gathering and organizing information. You carefully collect details
and present them in a structured way.\nYour personal goal is: Gather information
about the best soccer players"},{"role":"user","content":"\nCurrent Task: Top
10 best players in the world?\n\nProvide your complete response:"}],"model":"gpt-4.1-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '694'
- '404'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.78.0
x-stainless-arch:
- arm64
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.78.0
x-stainless-raw-response:
- 'true'
- 1.83.0
x-stainless-read-timeout:
- '600.0'
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
- 3.13.5
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-BgufUtDqGzvqPZx2NmkqqxdW4G8rQ\",\n \"object\": \"chat.completion\",\n \"created\": 1749567308,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"Thought: I now can give a great answer \\nFinal Answer: The top 10 best soccer players in the world, as of October 2023, can be identified based on their recent performances, skills, impact on games, and overall contributions to their teams. Here is the structured list:\\n\\n1. **Lionel Messi (Inter Miami CF)**\\n - Position: Forward\\n - Key Attributes: Dribbling, vision, goal-scoring ability.\\n - Achievements: Multiple Ballon d'Or winner, Copa America champion, World Cup champion (2022).\\n\\n2. **Kylian Mbappé (Paris Saint-Germain)**\\n - Position: Forward\\n - Key Attributes: Speed, technique, finishing.\\n - Achievements: FIFA World Cup champion (2018), Ligue 1 titles, multiple\
\ domestic cups.\\n\\n3. **Erling Haaland (Manchester City)**\\n - Position: Forward\\n - Key Attributes: Power, speed, goal-scoring instinct.\\n - Achievements: Bundesliga top scorer, UEFA Champions League winner (2023), Premier League titles.\\n\\n4. **Kevin De Bruyne (Manchester City)**\\n - Position: Midfielder\\n - Key Attributes: Passing, vision, creativity.\\n - Achievements: Multiple Premier League titles, FA Cups, UEFA Champions League winner (2023).\\n\\n5. **Karim Benzema (Al-Ittihad)**\\n - Position: Forward\\n - Key Attributes: Goal-scoring, playmaking, tactical intelligence.\\n - Achievements: 2022 Ballon d'Or winner, multiple Champions Leagues with Real Madrid.\\n\\n6. **Neymar Jr. (Al Hilal)**\\n - Position: Forward\\n - Key Attributes: Flair, dribbling, creativity.\\n - Achievements: Multiple domestic league titles, Champions League runner-up.\\n\\n7. **Robert Lewandowski (FC Barcelona)**\\n - Position: Forward\\n - Key Attributes: Finishing,\
\ positioning, aerial ability.\\n - Achievements: FIFA Best Men's Player, multiple Bundesliga titles, La Liga champion (2023).\\n\\n8. **Mohamed Salah (Liverpool)**\\n - Position: Forward\\n - Key Attributes: Speed, finishing, dribbling.\\n - Achievements: Premier League champion, FA Cup, UEFA Champions League winner.\\n\\n9. **Vinícius Júnior (Real Madrid)**\\n - Position: Forward\\n - Key Attributes: Speed, dribbling, creativity.\\n - Achievements: UEFA Champions League winner (2022), La Liga champion (2023).\\n\\n10. **Luka Modrić (Real Madrid)**\\n - Position: Midfielder\\n - Key Attributes: Passing, vision, tactical intelligence.\\n - Achievements: Multiple Champions League titles, Ballon d'Or winner (2018).\\n\\nThis list is compiled based on their current form, past performances, and contributions to their respective teams in both domestic and international competitions. Player rankings can vary based on personal opinion and specific criteria used for\
\ evaluation, but these players have consistently been regarded as some of the best in the world as of October 2023.\",\n \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 122,\n \"completion_tokens\": 643,\n \"total_tokens\": 765,\n \"prompt_tokens_details\": {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\": 0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\": \"default\",\n \"system_fingerprint\": \"fp_34a54ae93c\"\n}\n"
string: "{\n \"id\": \"chatcmpl-D6L3hzoRVVEa07HZsM9wpi2RVRKQp\",\n \"object\":
\"chat.completion\",\n \"created\": 1770403289,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Here is a structured list of the top
10 best soccer players in the world as of 2024, based on recent performances,
awards, and overall impact on the game:\\n\\n1. **Kylian Mbapp\xE9** \\n
\ - Nationality: French \\n - Club: Paris Saint-Germain (PSG) \\n -
Position: Forward \\n - Key Highlights: Multiple Ligue 1 titles, World
Cup winner (2018), known for speed, dribbling, and scoring prowess.\\n\\n2.
**Erling Haaland** \\n - Nationality: Norwegian \\n - Club: Manchester
City \\n - Position: Striker \\n - Key Highlights: Premier League Golden
Boot winner, incredible goal-scoring record, physical presence, and finishing
skills.\\n\\n3. **Lionel Messi** \\n - Nationality: Argentine \\n -
Club: Inter Miami \\n - Position: Forward/Attacking Midfielder \\n -
Key Highlights: Seven Ballon d\u2019Or awards, World Cup winner (2022), exceptional
playmaking and dribbling ability.\\n\\n4. **Kevin De Bruyne** \\n - Nationality:
Belgian \\n - Club: Manchester City \\n - Position: Midfielder \\n
\ - Key Highlights: One of the best playmakers globally, assists leader,
consistent high-level performance in the Premier League.\\n\\n5. **Robert
Lewandowski** \\n - Nationality: Polish \\n - Club: FC Barcelona \\n
\ - Position: Striker \\n - Key Highlights: Exceptional goal-scoring record,
multiple Bundesliga top scorer awards, key figure in Bayern Munich\u2019s
dominance before transferring.\\n\\n6. **Karim Benzema** \\n - Nationality:
French \\n - Club: Al-Ittihad \\n - Position: Striker \\n - Key Highlights:
Ballon d\u2019Or winner (2022), excellent technical skills, leadership at
Real Madrid before recent transfer.\\n\\n7. **Mohamed Salah** \\n - Nationality:
Egyptian \\n - Club: Liverpool \\n - Position: Forward \\n - Key
Highlights: Premier League Golden Boot winner, known for speed, dribbling,
and goal-scoring consistency.\\n\\n8. **Vin\xEDcius J\xFAnior** \\n - Nationality:
Brazilian \\n - Club: Real Madrid \\n - Position: Winger \\n - Key
Highlights: Key player for Real Madrid, exceptional dribbling and pace, rising
star in world football.\\n\\n9. **Jude Bellingham** \\n - Nationality:
English \\n - Club: Real Madrid \\n - Position: Midfielder \\n -
Key Highlights: Young talent with maturity beyond years, influential midfielder
with great vision and work rate.\\n\\n10. **Thibaut Courtois** \\n - Nationality:
Belgian \\n - Club: Real Madrid \\n - Position: Goalkeeper \\n -
Key Highlights: One of the best goalkeepers globally, crucial performances
in La Liga and Champions League.\\n\\nThese rankings consider individual talent,
recent achievements, influence on matches, and overall contribution to club
and country.\",\n \"refusal\": null,\n \"annotations\": []\n
\ },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n
\ ],\n \"usage\": {\n \"prompt_tokens\": 68,\n \"completion_tokens\":
621,\n \"total_tokens\": 689,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_75546bd1a7\"\n}\n"
headers:
CF-RAY:
- 94d9b5400dcd624b-GRU
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 10 Jun 2025 14:55:42 GMT
- Fri, 06 Feb 2026 18:41:40 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=8Yv8F0ZCFAo2lf.qoqxao70yxyjVvIV90zQqVF6bVzQ-1749567342-1.0.1.1-fZgnv3RDfunvCO1koxwwFJrHnxSx_rwS_FHvQ6xxDPpKHwYr7dTqIQLZrNgSX5twGyK4F22rUmkuiS6KMVogcinChk8lmHtJBTUVTFjr2KU; path=/; expires=Tue, 10-Jun-25 15:25:42 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
- _cfuvid=wzh8YnmXvLq1G0RcIVijtzboQtCZyIe2uZiochkBLqE-1749567342267-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
- SET-COOKIE-XXX
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- X-Request-ID
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
- OPENAI-ORG-XXX
openai-processing-ms:
- '10634'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '33292'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are Sports Analyst. You are
an expert at gathering and organizing information. You carefully collect details
and present them in a structured way.\nYour personal goal is: Gather information
about the best soccer players"},{"role":"user","content":"\nCurrent Task: Top
10 best players in the world?\n\nProvide your complete response:"}],"model":"gpt-4.1-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '404'
content-type:
- application/json
cookie:
- COOKIE-XXX
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.5
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D6L3sn9nSnGGOMKrS88avliVF7XTv\",\n \"object\":
\"chat.completion\",\n \"created\": 1770403300,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Certainly! Here's a structured list
of the top 10 best soccer players in the world as of 2024, considering their
performance, skills, achievements, and impact in recent seasons:\\n\\n###
Top 10 Best Soccer Players in the World (2024)\\n\\n| Rank | Player Name |
Nationality | Club (2023/24 Season) | Position | Key Attributes
\ | Recent Achievements |\\n|-------|---------------------|-------------|----------------------------|------------------|---------------------------------|------------------------------------|\\n|
1 | Lionel Messi | Argentina | Paris Saint-Germain (PSG) |
Forward/Playmaker| Dribbling, Vision, Free kicks | 2023 World Cup Golden
Ball, Club Successes |\\n| 2 | Kylian Mbapp\xE9 | France |
Paris Saint-Germain (PSG) | Forward | Speed, Finishing, Dribbling
\ | Ligue 1 Top Scorer, World Cup Winner 2018|\\n| 3 | Erling Haaland
\ | Norway | Manchester City | Striker | Strength,
Finishing, Positioning| Premier League Golden Boot, Champions League Impact|\\n|
4 | Kevin De Bruyne | Belgium | Manchester City |
Midfielder | Passing, Vision, Creativity | Premier League Titles,
Key Playmaker|\\n| 5 | Robert Lewandowski | Poland | FC Barcelona
\ | Striker | Finishing, Positioning, Composure| La
Liga Top Scorer, Consistent Scorer|\\n| 6 | Neymar Jr. | Brazil
\ | Al-Hilal | Forward/Winger | Dribbling, Creativity,
Flair | Copa America Titles, Club Success |\\n| 7 | Mohamed Salah |
Egypt | Liverpool | Forward/Winger | Pace, Finishing,
Work Rate | Premier League Golden Boot, Champions League Winner|\\n|
8 | Vin\xEDcius Jr. | Brazil | Real Madrid |
Winger | Speed, Dribbling, Crossing | La Liga Titles, UEFA Champions
League Winner|\\n| 9 | Luka Modri\u0107 | Croatia | Real Madrid
\ | Midfielder | Passing, Control, Experience | Ballon
d\u2019Or 2018, Multiple Champions League Titles|\\n| 10 | Karim Benzema
\ | France | Al-Ittihad | Striker | Finishing,
Link-up Play, Movements| Ballon d\u2019Or 2022, UEFA Champions League Top
Scorer |\\n\\n### Notes:\\n- The rankings reflect a combination of individual
skill, recent performance, consistency, and influence on the game.\\n- Players\u2019
clubs are based on the 2023/24 season affiliations.\\n- Achievements highlight
recent titles, awards, or standout contributions.\\n\\nIf you would like me
to focus on specific leagues, historical players, or emerging talents, just
let me know!\",\n \"refusal\": null,\n \"annotations\": []\n
\ },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n
\ ],\n \"usage\": {\n \"prompt_tokens\": 68,\n \"completion_tokens\":
605,\n \"total_tokens\": 673,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_75546bd1a7\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Fri, 06 Feb 2026 18:41:49 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '9044'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '31490'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are Sports Analyst. You are
an expert at gathering and organizing information. You carefully collect details
and present them in a structured way.\nYour personal goal is: Gather information
about the best soccer players"},{"role":"user","content":"\nCurrent Task: Top
10 best players in the world?\n\nProvide your complete response:"}],"model":"gpt-4.1-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '404'
content-type:
- application/json
cookie:
- COOKIE-XXX
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.5
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D6L4102eMwTEPeHxfyN9Kh7rjBoX6\",\n \"object\":
\"chat.completion\",\n \"created\": 1770403309,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Certainly! Here is a list of the top
10 best soccer players in the world as of 2024, considering their recent performances,
skills, impact, and accolades:\\n\\n1. **Lionel Messi** \\n - Nationality:
Argentine \\n - Position: Forward \\n - Key Achievements: 7 Ballon d'Or
awards, led Argentina to 2021 Copa Am\xE9rica victory and 2022 FIFA World
Cup triumph, exceptional dribbling and playmaking skills.\\n\\n2. **Kylian
Mbapp\xE9** \\n - Nationality: French \\n - Position: Forward \\n -
Key Achievements: FIFA World Cup winner (2018), multiple Ligue 1 titles, known
for incredible speed, finishing, and consistency.\\n\\n3. **Erling Haaland**
\ \\n - Nationality: Norwegian \\n - Position: Striker \\n - Key Achievements:
Premier League Golden Boot winner (2022-23), prolific goal scorer, physical
presence, and finishing ability.\\n\\n4. **Karim Benzema** \\n - Nationality:
French \\n - Position: Forward \\n - Key Achievements: 2022 Ballon d'Or
winner, key player for Real Madrid\u2019s recent Champions League victories,
excellent technical skills and leadership.\\n\\n5. **Kevin De Bruyne** \\n
\ - Nationality: Belgian \\n - Position: Midfielder \\n - Key Achievements:
Premier League playmaker, known for vision, passing accuracy, and creativity.\\n\\n6.
**Robert Lewandowski** \\n - Nationality: Polish \\n - Position: Striker
\ \\n - Key Achievements: Multiple Bundesliga top scorer titles, consistent
goal scorer, known for positioning and finishing.\\n\\n7. **Neymar Jr.** \\n
\ - Nationality: Brazilian \\n - Position: Forward \\n - Key Achievements:
Exceptional dribbling, creativity, and flair; multiple domestic titles and
Copa Libertadores winner.\\n\\n8. **Mohamed Salah** \\n - Nationality:
Egyptian \\n - Position: Forward \\n - Key Achievements: Premier League
Golden Boot, consistent goal scoring with Liverpool, known for speed and finishing.\\n\\n9.
**Luka Modri\u0107** \\n - Nationality: Croatian \\n - Position: Midfielder
\ \\n - Key Achievements: 2018 Ballon d\u2019Or winner, pivotal midfield
maestro, excellent passing and control.\\n\\n10. **Thibaut Courtois** \\n
\ - Nationality: Belgian \\n - Position: Goalkeeper \\n - Key Achievements:
Exceptional shot-stopper, key player in Real Madrid's recent successes.\\n\\nThis
list includes a blend of forwards, midfielders, and a goalkeeper, showcasing
the best talents in various positions worldwide. The rankings may vary slightly
depending on current form and opinions, but these players consistently rank
among the best globally.\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 68,\n \"completion_tokens\":
575,\n \"total_tokens\": 643,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_75546bd1a7\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Fri, 06 Feb 2026 18:41:57 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '7948'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '12193'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- req_1098f5a5384f4a26aecf0c9e4e4d1fc0
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Sports Analyst. You are an expert at gathering and organizing information. You carefully collect details and present them in a structured way.\nYour personal goal is: Gather information about the best soccer players\n\nTo give my best complete final answer to the task respond using the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user", "content": "Top 10 best players in the world?"}, {"role": "assistant", "content": "Thought: I now can give a great answer \nFinal Answer: The top 10 best soccer players in the world, as of October 2023, can be identified based on their recent performances, skills, impact on games, and overall contributions to their teams. Here is the structured list:\n\n1. **Lionel Messi (Inter Miami CF)**\n -
Position: Forward\n - Key Attributes: Dribbling, vision, goal-scoring ability.\n - Achievements: Multiple Ballon d''Or winner, Copa America champion, World Cup champion (2022).\n\n2. **Kylian Mbapp\u00e9 (Paris Saint-Germain)**\n - Position: Forward\n - Key Attributes: Speed, technique, finishing.\n - Achievements: FIFA World Cup champion (2018), Ligue 1 titles, multiple domestic cups.\n\n3. **Erling Haaland (Manchester City)**\n - Position: Forward\n - Key Attributes: Power, speed, goal-scoring instinct.\n - Achievements: Bundesliga top scorer, UEFA Champions League winner (2023), Premier League titles.\n\n4. **Kevin De Bruyne (Manchester City)**\n - Position: Midfielder\n - Key Attributes: Passing, vision, creativity.\n - Achievements: Multiple Premier League titles, FA Cups, UEFA Champions League winner (2023).\n\n5. **Karim Benzema (Al-Ittihad)**\n - Position: Forward\n - Key Attributes: Goal-scoring, playmaking, tactical intelligence.\n - Achievements:
2022 Ballon d''Or winner, multiple Champions Leagues with Real Madrid.\n\n6. **Neymar Jr. (Al Hilal)**\n - Position: Forward\n - Key Attributes: Flair, dribbling, creativity.\n - Achievements: Multiple domestic league titles, Champions League runner-up.\n\n7. **Robert Lewandowski (FC Barcelona)**\n - Position: Forward\n - Key Attributes: Finishing, positioning, aerial ability.\n - Achievements: FIFA Best Men''s Player, multiple Bundesliga titles, La Liga champion (2023).\n\n8. **Mohamed Salah (Liverpool)**\n - Position: Forward\n - Key Attributes: Speed, finishing, dribbling.\n - Achievements: Premier League champion, FA Cup, UEFA Champions League winner.\n\n9. **Vin\u00edcius J\u00fanior (Real Madrid)**\n - Position: Forward\n - Key Attributes: Speed, dribbling, creativity.\n - Achievements: UEFA Champions League winner (2022), La Liga champion (2023).\n\n10. **Luka Modri\u0107 (Real Madrid)**\n - Position: Midfielder\n - Key Attributes: Passing, vision,
tactical intelligence.\n - Achievements: Multiple Champions League titles, Ballon d''Or winner (2018).\n\nThis list is compiled based on their current form, past performances, and contributions to their respective teams in both domestic and international competitions. Player rankings can vary based on personal opinion and specific criteria used for evaluation, but these players have consistently been regarded as some of the best in the world as of October 2023."}, {"role": "user", "content": "You are not allowed to include Brazilian players"}, {"role": "assistant", "content": "Thought: I now can give a great answer \nFinal Answer: Here is an updated list of the top 10 best soccer players in the world as of October 2023, excluding Brazilian players:\n\n1. **Lionel Messi (Inter Miami CF)**\n - Position: Forward\n - Key Attributes: Dribbling, vision, goal-scoring ability.\n - Achievements: Multiple Ballon d''Or winner, Copa America champion, World Cup champion (2022).\n\n2.
**Kylian Mbapp\u00e9 (Paris Saint-Germain)**\n - Position: Forward\n - Key Attributes: Speed, technique, finishing.\n - Achievements: FIFA World Cup champion (2018), Ligue 1 titles, multiple domestic cups.\n\n3. **Erling Haaland (Manchester City)**\n - Position: Forward\n - Key Attributes: Power, speed, goal-scoring instinct.\n - Achievements: Bundesliga top scorer, UEFA Champions League winner (2023), Premier League titles.\n\n4. **Kevin De Bruyne (Manchester City)**\n - Position: Midfielder\n - Key Attributes: Passing, vision, creativity.\n - Achievements: Multiple Premier League titles, FA Cups, UEFA Champions League winner (2023).\n\n5. **Karim Benzema (Al-Ittihad)**\n - Position: Forward\n - Key Attributes: Goal-scoring, playmaking, tactical intelligence.\n - Achievements: 2022 Ballon d''Or winner, multiple Champions Leagues with Real Madrid.\n\n6. **Robert Lewandowski (FC Barcelona)**\n - Position: Forward\n - Key Attributes: Finishing, positioning,
aerial ability.\n - Achievements: FIFA Best Men''s Player, multiple Bundesliga titles, La Liga champion (2023).\n\n7. **Mohamed Salah (Liverpool)**\n - Position: Forward\n - Key Attributes: Speed, finishing, dribbling.\n - Achievements: Premier League champion, FA Cup, UEFA Champions League winner.\n\n8. **Vin\u00edcius J\u00fanior (Real Madrid)**\n - Position: Forward\n - Key Attributes: Speed, dribbling, creativity.\n - Achievements: UEFA Champions League winner (2022), La Liga champion (2023).\n\n9. **Luka Modri\u0107 (Real Madrid)**\n - Position: Midfielder\n - Key Attributes: Passing, vision, tactical intelligence.\n - Achievements: Multiple Champions League titles, Ballon d''Or winner (2018).\n\n10. **Harry Kane (Bayern Munich)**\n - Position: Forward\n - Key Attributes: Goal-scoring, technique, playmaking.\n - Achievements: Golden Boot winner, Premier League titles, UEFA European Championship runner-up.\n\nThis list has been adjusted to exclude Brazilian
players and focuses on those who have made significant impacts in their clubs and on the international stage as of October 2023. Each player is recognized for their exceptional skills, performances, and achievements."}, {"role": "user", "content": "You are not allowed to include Brazilian players"}, {"role": "assistant", "content": "Thought: I now can give a great answer \nFinal Answer: Here is the revised list of the top 10 best soccer players in the world as of October 2023, explicitly excluding Brazilian players:\n\n1. **Lionel Messi (Inter Miami CF)**\n - Position: Forward\n - Key Attributes: Dribbling, vision, goal-scoring ability.\n - Achievements: Multiple Ballon d''Or winner, Copa America champion, World Cup champion (2022).\n\n2. **Kylian Mbapp\u00e9 (Paris Saint-Germain)**\n - Position: Forward\n - Key Attributes: Speed, technique, finishing.\n - Achievements: FIFA World Cup champion (2018), multiple Ligue 1 titles, and various domestic cups.\n\n3. **Erling Haaland
(Manchester City)**\n - Position: Forward\n - Key Attributes: Power, speed, goal-scoring instinct.\n - Achievements: Bundesliga top scorer, UEFA Champions League winner (2023), Premier League titles.\n\n4. **Kevin De Bruyne (Manchester City)**\n - Position: Midfielder\n - Key Attributes: Passing, vision, creativity.\n - Achievements: Multiple Premier League titles, FA Cups, UEFA Champions League winner (2023).\n\n5. **Karim Benzema (Al-Ittihad)**\n - Position: Forward\n - Key Attributes: Goal-scoring, playmaking, tactical intelligence.\n - Achievements: 2022 Ballon d''Or winner, multiple Champions Leagues with Real Madrid.\n\n6. **Robert Lewandowski (FC Barcelona)**\n - Position: Forward\n - Key Attributes: Finishing, positioning, aerial ability.\n - Achievements: FIFA Best Men''s Player, multiple Bundesliga titles, La Liga champion (2023).\n\n7. **Mohamed Salah (Liverpool)**\n - Position: Forward\n - Key Attributes: Speed, finishing, dribbling.\n - Achievements:
Premier League champion, FA Cup, UEFA Champions League winner.\n\n8. **Luka Modri\u0107 (Real Madrid)**\n - Position: Midfielder\n - Key Attributes: Passing, vision, tactical intelligence.\n - Achievements: Multiple Champions League titles, Ballon d''Or winner (2018).\n\n9. **Harry Kane (Bayern Munich)**\n - Position: Forward\n - Key Attributes: Goal-scoring, technique, playmaking.\n - Achievements: Golden Boot winner, Premier League titles, UEFA European Championship runner-up.\n\n10. **Rodri (Manchester City)**\n - Position: Midfielder\n - Key Attributes: Defensive skills, passing, positional awareness.\n - Achievements: Premier League titles, UEFA Champions League winner (2023).\n\nThis list is curated while adhering to the restriction of excluding Brazilian players. Each player included has demonstrated exceptional skills and remarkable performances, solidifying their status as some of the best in the world as of October 2023."}, {"role": "user", "content":
"You are not allowed to include Brazilian players"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '9093'
content-type:
- application/json
cookie:
- __cf_bm=8Yv8F0ZCFAo2lf.qoqxao70yxyjVvIV90zQqVF6bVzQ-1749567342-1.0.1.1-fZgnv3RDfunvCO1koxwwFJrHnxSx_rwS_FHvQ6xxDPpKHwYr7dTqIQLZrNgSX5twGyK4F22rUmkuiS6KMVogcinChk8lmHtJBTUVTFjr2KU; _cfuvid=wzh8YnmXvLq1G0RcIVijtzboQtCZyIe2uZiochkBLqE-1749567342267-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.78.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.78.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-BguhCefN1bN2OeYRo5ChhUqNBLUda\",\n \"object\": \"chat.completion\",\n \"created\": 1749567414,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"Thought: I now can give a great answer \\nFinal Answer: Here is a refined list of the top 10 best soccer players in the world as of October 2023, ensuring that no Brazilian players are included:\\n\\n1. **Lionel Messi (Inter Miami CF)**\\n - Position: Forward\\n - Key Attributes: Dribbling, vision, goal-scoring ability.\\n - Achievements: Multiple Ballon d'Or winner, Copa America champion, World Cup champion (2022).\\n\\n2. **Kylian Mbappé (Paris Saint-Germain)**\\n - Position: Forward\\n - Key Attributes: Speed, technique, finishing.\\n - Achievements: FIFA World Cup champion (2018), multiple Ligue 1 titles, various domestic cups.\\n\\n3. **Erling Haaland (Manchester City)**\\\
n - Position: Forward\\n - Key Attributes: Power, speed, goal-scoring instinct.\\n - Achievements: Bundesliga top scorer, UEFA Champions League winner (2023), Premier League titles.\\n\\n4. **Kevin De Bruyne (Manchester City)**\\n - Position: Midfielder\\n - Key Attributes: Passing, vision, creativity.\\n - Achievements: Multiple Premier League titles, FA Cups, UEFA Champions League winner (2023).\\n\\n5. **Karim Benzema (Al-Ittihad)**\\n - Position: Forward\\n - Key Attributes: Goal-scoring, playmaking, tactical intelligence.\\n - Achievements: 2022 Ballon d'Or winner, multiple Champions Leagues with Real Madrid.\\n\\n6. **Robert Lewandowski (FC Barcelona)**\\n - Position: Forward\\n - Key Attributes: Finishing, positioning, aerial ability.\\n - Achievements: FIFA Best Men's Player, multiple Bundesliga titles, La Liga champion (2023).\\n\\n7. **Mohamed Salah (Liverpool)**\\n - Position: Forward\\n - Key Attributes: Speed, finishing, dribbling.\\n -\
\ Achievements: Premier League champion, FA Cup, UEFA Champions League winner.\\n\\n8. **Luka Modrić (Real Madrid)**\\n - Position: Midfielder\\n - Key Attributes: Passing, vision, tactical intelligence.\\n - Achievements: Multiple Champions League titles, Ballon d'Or winner (2018).\\n\\n9. **Harry Kane (Bayern Munich)**\\n - Position: Forward\\n - Key Attributes: Goal-scoring, technique, playmaking.\\n - Achievements: Golden Boot winner, multiple Premier League titles, UEFA European Championship runner-up.\\n\\n10. **Son Heung-min (Tottenham Hotspur)**\\n - Position: Forward\\n - Key Attributes: Speed, finishing, playmaking.\\n - Achievements: Premier League Golden Boot winner, multiple domestic cup titles.\\n\\nThis list has been carefully revised to exclude all Brazilian players while highlighting some of the most talented individuals in soccer as of October 2023. Each player has showcased remarkable effectiveness and skill, contributing significantly to their\
\ teams on both domestic and international stages.\",\n \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 2028,\n \"completion_tokens\": 614,\n \"total_tokens\": 2642,\n \"prompt_tokens_details\": {\n \"cached_tokens\": 1280,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\": 0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\": \"default\",\n \"system_fingerprint\": \"fp_34a54ae93c\"\n}\n"
headers:
CF-RAY:
- 94d9b7d24d991d2c-GRU
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 10 Jun 2025 14:57:29 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '35291'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '35294'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149997855'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_4676152d4227ac1825d1240ddef231d6
- X-REQUEST-ID-XXX
status:
code: 200
message: OK


@@ -1,14 +1,8 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are Test Agent. A helpful
test assistant\nYour personal goal is: Answer questions\nTo give my best complete
final answer to the task respond using the exact following format:\n\nThought:
I now can give a great answer\nFinal Answer: Your final answer must be the great
and the most complete as possible, it must be outcome described.\n\nI MUST use
these formats, my job depends on it!"},{"role":"user","content":"\nCurrent Task:
What is 2+2? Reply with just the number.\n\nBegin! This is VERY important to
you, use the tools available and give your best Final Answer, your job depends
on it!\n\nThought:"}],"model":"gpt-4o-mini"}'
test assistant\nYour personal goal is: Answer questions"},{"role":"user","content":"\nCurrent
Task: What is 2+2? Reply with just the number.\n\nProvide your complete response:"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -21,7 +15,7 @@ interactions:
connection:
- keep-alive
content-length:
- '673'
- '272'
content-type:
- application/json
host:
@@ -43,23 +37,22 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.5
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-Cy7b0HjL79y39EkUcMLrRhPFe3XGj\",\n \"object\":
\"chat.completion\",\n \"created\": 1768444914,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
string: "{\n \"id\": \"chatcmpl-D6L4AzMHXLXDfyclWS6fJSwS0cvOl\",\n \"object\":
\"chat.completion\",\n \"created\": 1770403318,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: 4\",\n \"refusal\": null,\n \"annotations\": []\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 136,\n \"completion_tokens\": 13,\n
\ \"total_tokens\": 149,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
\"assistant\",\n \"content\": \"4\",\n \"refusal\": null,\n
\ \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 50,\n \"completion_tokens\":
1,\n \"total_tokens\": 51,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_8bbc38b4db\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_f4ae844694\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
@@ -68,7 +61,7 @@ interactions:
Content-Type:
- application/json
Date:
- Thu, 15 Jan 2026 02:41:55 GMT
- Fri, 06 Feb 2026 18:41:58 GMT
Server:
- cloudflare
Set-Cookie:
@@ -85,18 +78,14 @@ interactions:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
content-length:
- '857'
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '341'
- '264'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '358'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:


@@ -1,14 +1,8 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are Standalone Agent. A helpful
assistant\nYour personal goal is: Answer questions\nTo give my best complete
final answer to the task respond using the exact following format:\n\nThought:
I now can give a great answer\nFinal Answer: Your final answer must be the great
and the most complete as possible, it must be outcome described.\n\nI MUST use
these formats, my job depends on it!"},{"role":"user","content":"\nCurrent Task:
What is 5+5? Reply with just the number.\n\nBegin! This is VERY important to
you, use the tools available and give your best Final Answer, your job depends
on it!\n\nThought:"}],"model":"gpt-4o-mini"}'
assistant\nYour personal goal is: Answer questions"},{"role":"user","content":"\nCurrent
Task: What is 5+5? Reply with just the number.\n\nProvide your complete response:"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -21,7 +15,7 @@ interactions:
connection:
- keep-alive
content-length:
- '674'
- '273'
content-type:
- application/json
host:
@@ -43,23 +37,22 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.5
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-Cy7azhPwUHQ0p5tdhxSAmLPoE8UgC\",\n \"object\":
\"chat.completion\",\n \"created\": 1768444913,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
string: "{\n \"id\": \"chatcmpl-D6L3cLs2ndBaXV2wnqYCdi6X1ykvv\",\n \"object\":
\"chat.completion\",\n \"created\": 1770403284,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: 10\",\n \"refusal\": null,\n \"annotations\": []\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 136,\n \"completion_tokens\": 13,\n
\ \"total_tokens\": 149,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
\"assistant\",\n \"content\": \"10\",\n \"refusal\": null,\n
\ \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 50,\n \"completion_tokens\":
1,\n \"total_tokens\": 51,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_29330a9688\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_f4ae844694\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
@@ -68,7 +61,7 @@ interactions:
Content-Type:
- application/json
Date:
- Thu, 15 Jan 2026 02:41:54 GMT
- Fri, 06 Feb 2026 18:41:25 GMT
Server:
- cloudflare
Set-Cookie:
@@ -85,18 +78,14 @@ interactions:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
content-length:
- '858'
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '455'
- '270'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '583'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:


@@ -1,13 +1,8 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are First Agent. A friendly
greeter\nYour personal goal is: Greet users\nTo give my best complete final
answer to the task respond using the exact following format:\n\nThought: I now
can give a great answer\nFinal Answer: Your final answer must be the great and
the most complete as possible, it must be outcome described.\n\nI MUST use these
formats, my job depends on it!"},{"role":"user","content":"\nCurrent Task: Say
hello\n\nBegin! This is VERY important to you, use the tools available and give
your best Final Answer, your job depends on it!\n\nThought:"}],"model":"gpt-4o-mini"}'
greeter\nYour personal goal is: Greet users"},{"role":"user","content":"\nCurrent
Task: Say hello\n\nProvide your complete response:"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -20,7 +15,7 @@ interactions:
connection:
- keep-alive
content-length:
- '632'
- '231'
content-type:
- application/json
host:
@@ -42,24 +37,22 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.5
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-CyRKzgODZ9yn3F9OkaXsscLk2Ln3N\",\n \"object\":
\"chat.completion\",\n \"created\": 1768520801,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
string: "{\n \"id\": \"chatcmpl-D6L4A8Aad6P1YUxWjQpvyltn8GaKT\",\n \"object\":
\"chat.completion\",\n \"created\": 1770403318,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: Hello! Welcome! I'm so glad to see you here. If you need any assistance
or have any questions, feel free to ask. Have a wonderful day!\",\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
127,\n \"completion_tokens\": 43,\n \"total_tokens\": 170,\n \"prompt_tokens_details\":
\"assistant\",\n \"content\": \"Hello! \U0001F60A How are you today?\",\n
\ \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
41,\n \"completion_tokens\": 8,\n \"total_tokens\": 49,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_c4585b5b9c\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_f4ae844694\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
@@ -68,7 +61,7 @@ interactions:
Content-Type:
- application/json
Date:
- Thu, 15 Jan 2026 23:46:42 GMT
- Fri, 06 Feb 2026 18:41:58 GMT
Server:
- cloudflare
Set-Cookie:
@@ -85,18 +78,14 @@ interactions:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
content-length:
- '990'
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '880'
- '325'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '1160'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
@@ -118,13 +107,8 @@ interactions:
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are Second Agent. A polite
farewell agent\nYour personal goal is: Say goodbye\nTo give my best complete
final answer to the task respond using the exact following format:\n\nThought:
I now can give a great answer\nFinal Answer: Your final answer must be the great
and the most complete as possible, it must be outcome described.\n\nI MUST use
these formats, my job depends on it!"},{"role":"user","content":"\nCurrent Task:
Say goodbye\n\nBegin! This is VERY important to you, use the tools available
and give your best Final Answer, your job depends on it!\n\nThought:"}],"model":"gpt-4o-mini"}'
farewell agent\nYour personal goal is: Say goodbye"},{"role":"user","content":"\nCurrent
Task: Say goodbye\n\nProvide your complete response:"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -137,7 +121,7 @@ interactions:
connection:
- keep-alive
content-length:
- '640'
- '239'
content-type:
- application/json
host:
@@ -159,27 +143,24 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.5
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-CyRL1Ua2PkK5xXPp3KeF0AnGAk3JP\",\n \"object\":
\"chat.completion\",\n \"created\": 1768520803,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
string: "{\n \"id\": \"chatcmpl-D6L4BLMYC3ODccwbKfBIdtrEyd3no\",\n \"object\":
\"chat.completion\",\n \"created\": 1770403319,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: As we reach the end of our conversation, I want to express my gratitude
for the time we've shared. It's been a pleasure assisting you, and I hope
you found our interaction helpful and enjoyable. Remember, whenever you need
assistance, I'm just a message away. Wishing you all the best in your future
endeavors. Goodbye and take care!\",\n \"refusal\": null,\n \"annotations\":
\"assistant\",\n \"content\": \"Thank you for the time we've spent
together! I wish you all the best in your future endeavors. Take care, and
until we meet again, goodbye!\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 126,\n \"completion_tokens\":
79,\n \"total_tokens\": 205,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 40,\n \"completion_tokens\":
31,\n \"total_tokens\": 71,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_29330a9688\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_f4ae844694\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
@@ -188,7 +169,7 @@ interactions:
Content-Type:
- application/json
Date:
- Thu, 15 Jan 2026 23:46:44 GMT
- Fri, 06 Feb 2026 18:41:59 GMT
Server:
- cloudflare
Set-Cookie:
@@ -205,18 +186,14 @@ interactions:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
content-length:
- '1189'
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '1363'
- '726'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '1605'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:


@@ -2,9 +2,8 @@ interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are Calculator. You calculate
things.\nYour personal goal is: Perform calculations efficiently"},{"role":"user","content":"\nCurrent
Task: Use the failing_tool to do something.\n\nThis is VERY important to you,
your job depends on it!"}],"model":"gpt-4o-mini","tool_choice":"auto","tools":[{"type":"function","function":{"name":"failing_tool","description":"This
tool always fails","parameters":{"properties":{},"type":"object"}}}]}'
Task: Use the failing_tool to do something."}],"model":"gpt-4o-mini","tool_choice":"auto","tools":[{"type":"function","function":{"name":"failing_tool","description":"This
tool always fails","strict":true,"parameters":{"properties":{},"type":"object","additionalProperties":false,"required":[]}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -17,7 +16,7 @@ interactions:
connection:
- keep-alive
content-length:
- '477'
- '476'
content-type:
- application/json
host:
@@ -39,26 +38,26 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.5
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D0vm2JDsOmy0czXPAr4vnw3wvuqYZ\",\n \"object\":
\"chat.completion\",\n \"created\": 1769114454,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
string: "{\n \"id\": \"chatcmpl-D6L3dV6acwapgRyxmnzGfuOXemtjJ\",\n \"object\":
\"chat.completion\",\n \"created\": 1770403285,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_8xr8rPUDWzLfQ3LOWPHtBUjK\",\n \"type\":
\ \"id\": \"call_GCdaOdo32pr1sSk4RzO0tiB9\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"failing_tool\",\n
\ \"arguments\": \"{}\"\n }\n }\n ],\n
\ \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\":
null,\n \"finish_reason\": \"tool_calls\"\n }\n ],\n \"usage\":
{\n \"prompt_tokens\": 78,\n \"completion_tokens\": 11,\n \"total_tokens\":
89,\n \"prompt_tokens_details\": {\n \"cached_tokens\": 0,\n \"audio_tokens\":
{\n \"prompt_tokens\": 65,\n \"completion_tokens\": 11,\n \"total_tokens\":
76,\n \"prompt_tokens_details\": {\n \"cached_tokens\": 0,\n \"audio_tokens\":
0\n },\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\": 0,\n
\ \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_c4585b5b9c\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_6c0d1490cb\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
@@ -67,7 +66,7 @@ interactions:
Content-Type:
- application/json
Date:
- Thu, 22 Jan 2026 20:40:54 GMT
- Fri, 06 Feb 2026 18:41:25 GMT
Server:
- cloudflare
Set-Cookie:
@@ -87,13 +86,11 @@ interactions:
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '593'
- '436'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '621'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
@@ -116,12 +113,9 @@ interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are Calculator. You calculate
things.\nYour personal goal is: Perform calculations efficiently"},{"role":"user","content":"\nCurrent
Task: Use the failing_tool to do something.\n\nThis is VERY important to you,
your job depends on it!"},{"role":"assistant","content":null,"tool_calls":[{"id":"call_8xr8rPUDWzLfQ3LOWPHtBUjK","type":"function","function":{"name":"failing_tool","arguments":"{}"}}]},{"role":"tool","tool_call_id":"call_8xr8rPUDWzLfQ3LOWPHtBUjK","content":"Error
executing tool: This tool always fails"},{"role":"user","content":"Analyze the
tool result. If requirements are met, provide the Final Answer. Otherwise, call
the next tool. Deliver only the answer without meta-commentary."}],"model":"gpt-4o-mini","tool_choice":"auto","tools":[{"type":"function","function":{"name":"failing_tool","description":"This
tool always fails","parameters":{"properties":{},"type":"object"}}}]}'
Task: Use the failing_tool to do something."},{"role":"assistant","content":null,"tool_calls":[{"id":"call_GCdaOdo32pr1sSk4RzO0tiB9","type":"function","function":{"name":"failing_tool","arguments":"{}"}}]},{"role":"tool","tool_call_id":"call_GCdaOdo32pr1sSk4RzO0tiB9","name":"failing_tool","content":"Error
executing tool: This tool always fails"}],"model":"gpt-4o-mini","tool_choice":"auto","tools":[{"type":"function","function":{"name":"failing_tool","description":"This
tool always fails","strict":true,"parameters":{"properties":{},"type":"object","additionalProperties":false,"required":[]}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -134,7 +128,7 @@ interactions:
connection:
- keep-alive
content-length:
- '941'
- '778'
content-type:
- application/json
cookie:
@@ -158,22 +152,25 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.5
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D0vm3xcywoKBW75bhBXfkGJNim6Th\",\n \"object\":
\"chat.completion\",\n \"created\": 1769114455,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
string: "{\n \"id\": \"chatcmpl-D6L3dhjDZOoihHvXvRpbJD3ReGu0z\",\n \"object\":
\"chat.completion\",\n \"created\": 1770403285,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Error: This tool always fails.\",\n
\ \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
141,\n \"completion_tokens\": 8,\n \"total_tokens\": 149,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
\"assistant\",\n \"content\": \"The attempt to use the failing tool
resulted in an error, as expected since it is designed to always fail. If
there's anything else you would like to calculate or explore, please let me
know!\",\n \"refusal\": null,\n \"annotations\": []\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 93,\n \"completion_tokens\": 40,\n
\ \"total_tokens\": 133,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_c4585b5b9c\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_6c0d1490cb\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
@@ -182,7 +179,7 @@ interactions:
Content-Type:
- application/json
Date:
- Thu, 22 Jan 2026 20:40:55 GMT
- Fri, 06 Feb 2026 18:41:26 GMT
Server:
- cloudflare
Strict-Transport-Security:
@@ -200,13 +197,11 @@ interactions:
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '420'
- '776'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '436'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:


@@ -43,15 +43,15 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.5
x-stainless-timeout:
- NOT_GIVEN
method: POST
uri: https://api.anthropic.com/v1/messages
response:
body:
string: '{"model":"claude-3-5-haiku-20241022","id":"msg_0149zKBgM47utdBdrfJjM6YZ","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_011jnBYLgtzXqdmSi7JDyQHj","name":"structured_output","input":{"operation":"Addition","result":42,"explanation":"Adding
15 and 27 together results in 42"}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":573,"cache_creation_input_tokens":0,"cache_read_input_tokens":0,"cache_creation":{"ephemeral_5m_input_tokens":0,"ephemeral_1h_input_tokens":0},"output_tokens":79,"service_tier":"standard"}}'
string: '{"model":"claude-3-5-haiku-20241022","id":"msg_01A41GpDoJbZLUhR8dQzUcUX","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01UNPdzpayoWyqDYVE7fR5oA","name":"structured_output","input":{"operation":"Addition","result":42,"explanation":"Added
15 and 27 together"}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":573,"cache_creation_input_tokens":0,"cache_read_input_tokens":0,"cache_creation":{"ephemeral_5m_input_tokens":0,"ephemeral_1h_input_tokens":0},"output_tokens":75,"service_tier":"standard","inference_geo":"not_available"}}'
headers:
CF-RAY:
- CF-RAY-XXX
@@ -62,7 +62,7 @@ interactions:
Content-Type:
- application/json
Date:
- Fri, 30 Jan 2026 18:56:15 GMT
- Fri, 06 Feb 2026 18:41:25 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -88,7 +88,7 @@ interactions:
anthropic-ratelimit-requests-remaining:
- '3999'
anthropic-ratelimit-requests-reset:
- '2026-01-30T18:56:14Z'
- '2026-02-06T18:41:24Z'
anthropic-ratelimit-tokens-limit:
- ANTHROPIC-RATELIMIT-TOKENS-LIMIT-XXX
anthropic-ratelimit-tokens-remaining:
@@ -102,7 +102,7 @@ interactions:
strict-transport-security:
- STS-XXX
x-envoy-upstream-service-time:
- '1473'
- '1247'
status:
code: 200
message: OK


@@ -44,21 +44,20 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.5
x-stainless-timeout:
- NOT_GIVEN
method: POST
uri: https://api.anthropic.com/v1/messages
response:
body:
string: '{"model":"claude-3-5-haiku-20241022","id":"msg_013iHkpmto99iyH5kDvn8uER","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01Kpda2DzHBqWq9a2FS2Bdw6","name":"structured_output","input":{"topic":"Benefits
string: '{"model":"claude-3-5-haiku-20241022","id":"msg_016wrV83wm3FLYD4JoTy2Piw","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01V6Pzr7eGfuG4Q3mc25ZXwN","name":"structured_output","input":{"topic":"Benefits
of Remote Work","summary":"Remote work offers significant advantages for both
employees and employers, transforming traditional work paradigms by providing
flexibility, increased productivity, and cost savings.","key_points":["Increased
employee flexibility and work-life balance","Reduced commuting time and associated
stress","Cost savings for companies on office infrastructure","Access to a
global talent pool","Higher employee productivity and job satisfaction","Lower
carbon footprint due to reduced travel"]}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":589,"cache_creation_input_tokens":0,"cache_read_input_tokens":0,"cache_creation":{"ephemeral_5m_input_tokens":0,"ephemeral_1h_input_tokens":0},"output_tokens":153,"service_tier":"standard"}}'
employees and employers, transforming traditional workplace dynamics.","key_points":["Increased
flexibility in work schedule","Reduced commute time and transportation costs","Improved
work-life balance","Higher productivity for many employees","Cost savings
for companies on office infrastructure","Expanded talent pool for hiring","Enhanced
employee job satisfaction"]}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":589,"cache_creation_input_tokens":0,"cache_read_input_tokens":0,"cache_creation":{"ephemeral_5m_input_tokens":0,"ephemeral_1h_input_tokens":0},"output_tokens":142,"service_tier":"standard","inference_geo":"not_available"}}'
headers:
CF-RAY:
- CF-RAY-XXX
@@ -69,7 +68,7 @@ interactions:
Content-Type:
- application/json
Date:
- Fri, 30 Jan 2026 18:56:19 GMT
- Fri, 06 Feb 2026 18:41:28 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -95,7 +94,7 @@ interactions:
anthropic-ratelimit-requests-remaining:
- '3999'
anthropic-ratelimit-requests-reset:
- '2026-01-30T18:56:16Z'
- '2026-02-06T18:41:26Z'
anthropic-ratelimit-tokens-limit:
- ANTHROPIC-RATELIMIT-TOKENS-LIMIT-XXX
anthropic-ratelimit-tokens-remaining:
@@ -109,7 +108,7 @@ interactions:
strict-transport-security:
- STS-XXX
x-envoy-upstream-service-time:
- '3107'
- '2650'
status:
code: 200
message: OK


@@ -1,6 +1,8 @@
import os
import unittest
from unittest.mock import ANY, MagicMock, patch
from unittest.mock import ANY, AsyncMock, MagicMock, patch
import pytest
from crewai.cli.plus_api import PlusAPI
@@ -68,37 +70,6 @@ class TestPlusAPI(unittest.TestCase):
)
self.assertEqual(response, mock_response)
@patch("crewai.cli.plus_api.PlusAPI._make_request")
def test_get_agent(self, mock_make_request):
mock_response = MagicMock()
mock_make_request.return_value = mock_response
response = self.api.get_agent("test_agent_handle")
mock_make_request.assert_called_once_with(
"GET", "/crewai_plus/api/v1/agents/test_agent_handle"
)
self.assertEqual(response, mock_response)
@patch("crewai.cli.plus_api.Settings")
@patch("requests.Session.request")
def test_get_agent_with_org_uuid(self, mock_make_request, mock_settings_class):
mock_settings = MagicMock()
mock_settings.org_uuid = self.org_uuid
mock_settings.enterprise_base_url = os.getenv('CREWAI_PLUS_URL')
mock_settings_class.return_value = mock_settings
# re-initialize Client
self.api = PlusAPI(self.api_key)
mock_response = MagicMock()
mock_make_request.return_value = mock_response
response = self.api.get_agent("test_agent_handle")
self.assert_request_with_org_id(
mock_make_request, "GET", "/crewai_plus/api/v1/agents/test_agent_handle"
)
self.assertEqual(response, mock_response)
@patch("crewai.cli.plus_api.PlusAPI._make_request")
def test_get_tool(self, mock_make_request):
mock_response = MagicMock()
@@ -338,3 +309,49 @@ class TestPlusAPI(unittest.TestCase):
custom_api.base_url,
"https://custom-url-from-env.com",
)
@pytest.mark.asyncio
@patch("httpx.AsyncClient")
async def test_get_agent(mock_async_client_class):
api = PlusAPI("test_api_key")
mock_response = MagicMock()
mock_client_instance = AsyncMock()
mock_client_instance.get.return_value = mock_response
mock_async_client_class.return_value.__aenter__.return_value = mock_client_instance
response = await api.get_agent("test_agent_handle")
mock_client_instance.get.assert_called_once_with(
f"{api.base_url}/crewai_plus/api/v1/agents/test_agent_handle",
headers=api.headers,
)
assert response == mock_response
@pytest.mark.asyncio
@patch("httpx.AsyncClient")
@patch("crewai.cli.plus_api.Settings")
async def test_get_agent_with_org_uuid(mock_settings_class, mock_async_client_class):
org_uuid = "test-org-uuid"
mock_settings = MagicMock()
mock_settings.org_uuid = org_uuid
mock_settings.enterprise_base_url = os.getenv("CREWAI_PLUS_URL")
mock_settings_class.return_value = mock_settings
api = PlusAPI("test_api_key")
mock_response = MagicMock()
mock_client_instance = AsyncMock()
mock_client_instance.get.return_value = mock_response
mock_async_client_class.return_value.__aenter__.return_value = mock_client_instance
response = await api.get_agent("test_agent_handle")
mock_client_instance.get.assert_called_once_with(
f"{api.base_url}/crewai_plus/api/v1/agents/test_agent_handle",
headers=api.headers,
)
assert "X-Crewai-Organization-Id" in api.headers
assert api.headers["X-Crewai-Organization-Id"] == org_uuid
assert response == mock_response
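The new async tests above patch `httpx.AsyncClient` so that `async with` yields a mocked client. A minimal stdlib-only sketch of that mocking pattern, using a hypothetical `FakeAPI` stand-in (not the real `PlusAPI`) and an injected client factory in place of the patched `httpx.AsyncClient`:

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock


class FakeAPI:
    """Hypothetical stand-in for an httpx-style async API client."""

    base_url = "https://example.invalid"
    headers = {"Authorization": "Bearer test"}

    async def get_agent(self, handle, client_factory):
        # client_factory() plays the role of httpx.AsyncClient() here.
        async with client_factory() as client:
            return await client.get(
                f"{self.base_url}/crewai_plus/api/v1/agents/{handle}",
                headers=self.headers,
            )


mock_response = MagicMock()
mock_client = AsyncMock()
mock_client.get.return_value = mock_response

factory = MagicMock()
# __aenter__ / __aexit__ must return awaitables for `async with`;
# AsyncMock supplies them.
factory.return_value.__aenter__ = AsyncMock(return_value=mock_client)
factory.return_value.__aexit__ = AsyncMock(return_value=False)

response = asyncio.run(FakeAPI().get_agent("test_agent_handle", factory))
mock_client.get.assert_called_once()
assert response is mock_response
```

The key detail, mirrored in the tests, is wiring `__aenter__` to return the mocked client so the awaited `client.get(...)` resolves to the canned response.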


@@ -177,4 +177,40 @@ class TestTriggeredByScope:
raise ValueError("test error")
except ValueError:
pass
assert get_triggering_event_id() is None
assert get_triggering_event_id() is None
def test_agent_scope_preserved_after_tool_error_event() -> None:
from crewai.events import crewai_event_bus
from crewai.events.types.tool_usage_events import (
ToolUsageErrorEvent,
ToolUsageStartedEvent,
)
push_event_scope("crew-1", "crew_kickoff_started")
push_event_scope("task-1", "task_started")
push_event_scope("agent-1", "agent_execution_started")
crewai_event_bus.emit(
None,
ToolUsageStartedEvent(
tool_name="test_tool",
tool_args={},
agent_key="test_agent",
)
)
crewai_event_bus.emit(
None,
ToolUsageErrorEvent(
tool_name="test_tool",
tool_args={},
agent_key="test_agent",
error=ValueError("test error"),
)
)
crewai_event_bus.flush()
assert get_current_parent_id() == "agent-1"
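The test above asserts that emitting tool events (including an error) leaves the agent scope on top of the stack. A minimal sketch of a contextvar-backed scope stack with that property; the names mirror the test helpers but this is not CrewAI's real implementation:

```python
from contextvars import ContextVar

# Hypothetical scope stack of (scope_id, event_type) frames, newest last.
_scope_stack: ContextVar[tuple] = ContextVar("scope_stack", default=())


def push_event_scope(scope_id: str, event_type: str) -> None:
    # Copy-on-write push keeps the stack immutable per context.
    _scope_stack.set(_scope_stack.get() + ((scope_id, event_type),))


def get_current_parent_id():
    stack = _scope_stack.get()
    return stack[-1][0] if stack else None


push_event_scope("crew-1", "crew_kickoff_started")
push_event_scope("task-1", "task_started")
push_event_scope("agent-1", "agent_execution_started")
# Tool usage events are emitted without pushing or popping frames,
# so the agent scope survives a ToolUsageErrorEvent.
assert get_current_parent_id() == "agent-1"
```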


@@ -958,6 +958,34 @@ def test_azure_endpoint_detection_flags():
assert llm_other.is_azure_openai_endpoint == False
def test_azure_endpoint_detection_ignores_spoofed_urls():
"""
Test that endpoint detection does not trust spoofed host/path substrings
"""
with patch.dict(os.environ, {
"AZURE_API_KEY": "test-key",
"AZURE_ENDPOINT": (
"https://evil.example.com/?redirect="
"https://test.openai.azure.com/openai/deployments/gpt-4"
),
}):
llm_query_spoof = LLM(model="azure/gpt-4")
assert llm_query_spoof.is_azure_openai_endpoint == False
assert "model" in llm_query_spoof._prepare_completion_params(
messages=[{"role": "user", "content": "test"}]
)
with patch.dict(os.environ, {
"AZURE_API_KEY": "test-key",
"AZURE_ENDPOINT": "https://test.openai.azure.com.evil/openai/deployments/gpt-4",
}):
llm_host_spoof = LLM(model="azure/gpt-4")
assert llm_host_spoof.is_azure_openai_endpoint == False
assert "model" in llm_host_spoof._prepare_completion_params(
messages=[{"role": "user", "content": "test"}]
)
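The spoofed-URL cases above fail when detection does a substring search over the whole endpoint string. A sketch of the hardened approach under the assumption that the fix validates the parsed hostname (function name illustrative, not the library's actual API):

```python
from urllib.parse import urlparse


def is_azure_openai_endpoint(endpoint: str) -> bool:
    # Check the real hostname, not substrings anywhere in the URL,
    # so query strings and lookalike domains cannot spoof detection.
    host = urlparse(endpoint).hostname or ""
    return host == "openai.azure.com" or host.endswith(".openai.azure.com")


assert is_azure_openai_endpoint(
    "https://test.openai.azure.com/openai/deployments/gpt-4"
)
# Azure URL hidden in a query parameter: hostname is evil.example.com.
assert not is_azure_openai_endpoint(
    "https://evil.example.com/?redirect="
    "https://test.openai.azure.com/openai/deployments/gpt-4"
)
# Lookalike suffix domain: hostname ends in ".evil", not ".openai.azure.com".
assert not is_azure_openai_endpoint(
    "https://test.openai.azure.com.evil/openai/deployments/gpt-4"
)
```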
def test_azure_improved_error_messages():
"""
Test that improved error messages are provided for common HTTP errors


@@ -308,6 +308,7 @@ def test_external_memory_search_events(
"parent_event_id": None,
"previous_event_id": ANY,
"triggered_by_event_id": None,
"started_event_id": ANY,
"emission_sequence": ANY,
"query": "test value",
"limit": 3,
@@ -330,6 +331,7 @@ def test_external_memory_search_events(
"parent_event_id": ANY,
"previous_event_id": ANY,
"triggered_by_event_id": None,
"started_event_id": ANY,
"emission_sequence": ANY,
"query": "test value",
"results": [],
@@ -390,6 +392,7 @@ def test_external_memory_save_events(
"parent_event_id": None,
"previous_event_id": ANY,
"triggered_by_event_id": None,
"started_event_id": ANY,
"emission_sequence": ANY,
"value": "saving value",
"metadata": {"task": "test_task"},
@@ -411,6 +414,7 @@ def test_external_memory_save_events(
"parent_event_id": ANY,
"previous_event_id": ANY,
"triggered_by_event_id": None,
"started_event_id": ANY,
"emission_sequence": ANY,
"value": "saving value",
"metadata": {"task": "test_task"},


@@ -74,6 +74,7 @@ def test_long_term_memory_save_events(long_term_memory):
"parent_event_id": None,
"previous_event_id": ANY,
"triggered_by_event_id": None,
"started_event_id": ANY,
"emission_sequence": ANY,
"value": "test_task",
"metadata": {"task": "test_task", "quality": 0.5},
@@ -94,6 +95,7 @@ def test_long_term_memory_save_events(long_term_memory):
"parent_event_id": None,
"previous_event_id": ANY,
"triggered_by_event_id": None,
"started_event_id": ANY,
"emission_sequence": ANY,
"value": "test_task",
"metadata": {
@@ -153,6 +155,7 @@ def test_long_term_memory_search_events(long_term_memory):
"parent_event_id": None,
"previous_event_id": ANY,
"triggered_by_event_id": None,
"started_event_id": ANY,
"emission_sequence": ANY,
"query": "test query",
"limit": 5,
@@ -175,6 +178,7 @@ def test_long_term_memory_search_events(long_term_memory):
"parent_event_id": ANY,
"previous_event_id": ANY,
"triggered_by_event_id": None,
"started_event_id": ANY,
"emission_sequence": ANY,
"query": "test query",
"results": None,


@@ -85,6 +85,7 @@ def test_short_term_memory_search_events(short_term_memory):
"parent_event_id": None,
"previous_event_id": ANY,
"triggered_by_event_id": None,
"started_event_id": ANY,
"emission_sequence": ANY,
"query": "test value",
"limit": 3,
@@ -107,6 +108,7 @@ def test_short_term_memory_search_events(short_term_memory):
"parent_event_id": None,
"previous_event_id": ANY,
"triggered_by_event_id": None,
"started_event_id": ANY,
"emission_sequence": ANY,
"query": "test value",
"results": [],
@@ -164,6 +166,7 @@ def test_short_term_memory_save_events(short_term_memory):
"parent_event_id": None,
"previous_event_id": ANY,
"triggered_by_event_id": None,
"started_event_id": ANY,
"emission_sequence": ANY,
"value": "test value",
"metadata": {"task": "test_task"},
@@ -185,6 +188,7 @@ def test_short_term_memory_save_events(short_term_memory):
"parent_event_id": None,
"previous_event_id": ANY,
"triggered_by_event_id": None,
"started_event_id": ANY,
"emission_sequence": ANY,
"value": "test value",
"metadata": {"task": "test_task"},


@@ -157,6 +157,176 @@ class TestMultiStepFlows:
assert execution_order == ["generate", "review", "finalize"]
def test_chained_router_feedback_steps(self):
"""Test that a router outcome can trigger another router method.
Regression test: @listen("outcome") combined with @human_feedback(emit=...)
creates a method that is both a listener and a router. The flow must find
and execute it when the upstream router emits the matching outcome.
"""
execution_order: list[str] = []
class ChainedRouterFlow(Flow):
@start()
@human_feedback(
message="First review:",
emit=["approved", "rejected"],
llm="gpt-4o-mini",
)
def draft(self):
execution_order.append("draft")
return "draft content"
@listen("approved")
@human_feedback(
message="Final review:",
emit=["publish", "revise"],
llm="gpt-4o-mini",
)
def final_review(self, prev: HumanFeedbackResult):
execution_order.append("final_review")
return "final content"
@listen("rejected")
def on_rejected(self, prev: HumanFeedbackResult):
execution_order.append("on_rejected")
return "rejected"
@listen("publish")
def on_publish(self, prev: HumanFeedbackResult):
execution_order.append("on_publish")
return "published"
@listen("revise")
def on_revise(self, prev: HumanFeedbackResult):
execution_order.append("on_revise")
return "revised"
flow = ChainedRouterFlow()
with (
patch.object(
flow,
"_request_human_feedback",
side_effect=["looks good", "ship it"],
),
patch.object(
flow,
"_collapse_to_outcome",
side_effect=["approved", "publish"],
),
):
result = flow.kickoff()
assert execution_order == ["draft", "final_review", "on_publish"]
assert result == "published"
assert len(flow.human_feedback_history) == 2
assert flow.human_feedback_history[0].outcome == "approved"
assert flow.human_feedback_history[1].outcome == "publish"
def test_chained_router_rejected_path(self):
"""Test that a start-router outcome routes to a non-router listener."""
execution_order: list[str] = []
class ChainedRouterFlow(Flow):
@start()
@human_feedback(
message="Review:",
emit=["approved", "rejected"],
llm="gpt-4o-mini",
)
def draft(self):
execution_order.append("draft")
return "draft"
@listen("approved")
@human_feedback(
message="Final:",
emit=["publish", "revise"],
llm="gpt-4o-mini",
)
def final_review(self, prev: HumanFeedbackResult):
execution_order.append("final_review")
return "final"
@listen("rejected")
def on_rejected(self, prev: HumanFeedbackResult):
execution_order.append("on_rejected")
return "rejected"
flow = ChainedRouterFlow()
with (
patch.object(
flow, "_request_human_feedback", return_value="bad"
),
patch.object(
flow, "_collapse_to_outcome", return_value="rejected"
),
):
result = flow.kickoff()
assert execution_order == ["draft", "on_rejected"]
assert result == "rejected"
assert len(flow.human_feedback_history) == 1
assert flow.human_feedback_history[0].outcome == "rejected"
def test_router_and_non_router_listeners_for_same_outcome(self):
"""Test that both router and non-router listeners fire for the same outcome."""
execution_order: list[str] = []
class MixedListenerFlow(Flow):
@start()
@human_feedback(
message="Review:",
emit=["approved", "rejected"],
llm="gpt-4o-mini",
)
def draft(self):
execution_order.append("draft")
return "draft"
@listen("approved")
@human_feedback(
message="Final:",
emit=["publish", "revise"],
llm="gpt-4o-mini",
)
def router_listener(self, prev: HumanFeedbackResult):
execution_order.append("router_listener")
return "final"
@listen("approved")
def plain_listener(self, prev: HumanFeedbackResult):
execution_order.append("plain_listener")
return "logged"
@listen("publish")
def on_publish(self, prev: HumanFeedbackResult):
execution_order.append("on_publish")
return "published"
flow = MixedListenerFlow()
with (
patch.object(
flow,
"_request_human_feedback",
side_effect=["approve it", "publish it"],
),
patch.object(
flow,
"_collapse_to_outcome",
side_effect=["approved", "publish"],
),
):
flow.kickoff()
assert "draft" in execution_order
assert "router_listener" in execution_order
assert "plain_listener" in execution_order
assert "on_publish" in execution_order
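The chained-router tests above hinge on one method acting as both a listener (triggered by an outcome) and a router (emitting its own outcome). A minimal sketch of that dispatch shape, with illustrative names rather than CrewAI's real Flow internals:

```python
# outcome -> list of (listener, outcome_it_emits_or_None)
listeners = {}


def listen(outcome, emits=None):
    def deco(fn):
        listeners.setdefault(outcome, []).append((fn, emits))
        return fn
    return deco


order = []


@listen("approved", emits="publish")
def final_review():
    # Listener *and* router: runs on "approved", then emits "publish".
    order.append("final_review")


@listen("publish")
def on_publish():
    order.append("on_publish")


def dispatch(outcome):
    for fn, emits in listeners.get(outcome, []):
        fn()
        if emits:  # router listeners re-enter dispatch with their outcome
            dispatch(emits)


dispatch("approved")
assert order == ["final_review", "on_publish"]
```

The regression being tested is exactly the re-entry step: the flow must look up listeners for the outcome a router-listener emits, not just for outcomes emitted by start methods.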
class TestStateManagement:
"""Tests for state management with human feedback."""


@@ -10,7 +10,9 @@ from crewai import Agent, Task
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.tool_usage_events import (
ToolSelectionErrorEvent,
ToolUsageErrorEvent,
ToolUsageFinishedEvent,
ToolUsageStartedEvent,
ToolValidateInputErrorEvent,
)
from crewai.tools import BaseTool
@@ -744,3 +746,78 @@ def test_tool_usage_finished_event_with_cached_result():
assert isinstance(event.started_at, datetime.datetime)
assert isinstance(event.finished_at, datetime.datetime)
assert event.type == "tool_usage_finished"
def test_tool_error_does_not_emit_finished_event():
from crewai.tools.tool_calling import ToolCalling
class FailingTool(BaseTool):
name: str = "Failing Tool"
description: str = "A tool that always fails"
def _run(self, **kwargs) -> str:
raise ValueError("Intentional failure")
failing_tool = FailingTool().to_structured_tool()
mock_agent = MagicMock()
mock_agent.key = "test_agent_key"
mock_agent.role = "test_agent_role"
mock_agent._original_role = "test_agent_role"
mock_agent.verbose = False
mock_agent.fingerprint = None
mock_agent.i18n.tools.return_value = {"name": "Add Image"}
mock_agent.i18n.errors.return_value = "Error: {error}"
mock_agent.i18n.slice.return_value = "Available tools: {tool_names}"
mock_task = MagicMock()
mock_task.delegations = 0
mock_task.name = "Test Task"
mock_task.description = "A test task"
mock_task.id = "test-task-id"
mock_action = MagicMock()
mock_action.tool = "failing_tool"
mock_action.tool_input = "{}"
tool_usage = ToolUsage(
tools_handler=MagicMock(cache=None, last_used_tool=None),
tools=[failing_tool],
task=mock_task,
function_calling_llm=None,
agent=mock_agent,
action=mock_action,
)
started_events = []
error_events = []
finished_events = []
error_received = threading.Event()
@crewai_event_bus.on(ToolUsageStartedEvent)
def on_started(source, event):
if event.tool_name == "failing_tool":
started_events.append(event)
@crewai_event_bus.on(ToolUsageErrorEvent)
def on_error(source, event):
if event.tool_name == "failing_tool":
error_events.append(event)
error_received.set()
@crewai_event_bus.on(ToolUsageFinishedEvent)
def on_finished(source, event):
if event.tool_name == "failing_tool":
finished_events.append(event)
tool_calling = ToolCalling(tool_name="failing_tool", arguments={})
tool_usage.use(calling=tool_calling, tool_string="Action: failing_tool")
assert error_received.wait(timeout=5), "Timeout waiting for error event"
crewai_event_bus.flush()
assert len(started_events) >= 1, "Expected at least one ToolUsageStartedEvent"
assert len(error_events) >= 1, "Expected at least one ToolUsageErrorEvent"
assert len(finished_events) == 0, (
"ToolUsageFinishedEvent should NOT be emitted after ToolUsageErrorEvent"
)
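The invariant this test pins down — an error event replaces, rather than precedes, the finished event — can be sketched with a plain try/except emission path (simplified; not the real `ToolUsage.use` control flow):

```python
events = []


def emit(name):
    events.append(name)


def use_tool(run):
    emit("tool_usage_started")
    try:
        result = run()
    except Exception as exc:
        # Error path: emit the error event and return early,
        # so no finished event ever follows.
        emit("tool_usage_error")
        return f"error: {exc}"
    emit("tool_usage_finished")
    return result


def failing_tool():
    raise ValueError("Intentional failure")


use_tool(failing_tool)
assert events == ["tool_usage_started", "tool_usage_error"]
```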

uv.lock (generated)

@@ -1295,7 +1295,7 @@ requires-dist = [
{ name = "json5", specifier = "~=0.10.0" },
{ name = "jsonref", specifier = "~=1.1.0" },
{ name = "litellm", marker = "extra == 'litellm'", specifier = ">=1.74.9,<3" },
{ name = "mcp", specifier = "~=1.23.1" },
{ name = "mcp", specifier = "~=1.26.0" },
{ name = "mem0ai", marker = "extra == 'mem0'", specifier = "~=0.1.94" },
{ name = "openai", specifier = ">=1.83.0,<3" },
{ name = "openpyxl", specifier = "~=3.1.5" },
@@ -1311,7 +1311,7 @@ requires-dist = [
{ name = "pyjwt", specifier = ">=2.9.0,<3" },
{ name = "python-dotenv", specifier = "~=1.1.1" },
{ name = "qdrant-client", extras = ["fastembed"], marker = "extra == 'qdrant'", specifier = "~=1.14.3" },
{ name = "regex", specifier = "~=2024.9.11" },
{ name = "regex", specifier = "~=2026.1.15" },
{ name = "tiktoken", marker = "extra == 'embeddings'", specifier = "~=0.8.0" },
{ name = "tokenizers", specifier = "~=0.20.3" },
{ name = "tomli", specifier = "~=2.0.2" },
@@ -3777,7 +3777,7 @@ wheels = [
[[package]]
name = "mcp"
version = "1.23.3"
version = "1.26.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "anyio" },
@@ -3795,9 +3795,9 @@ dependencies = [
{ name = "typing-inspection" },
{ name = "uvicorn", marker = "sys_platform != 'emscripten'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a7/a4/d06a303f45997e266f2c228081abe299bbcba216cb806128e2e49095d25f/mcp-1.23.3.tar.gz", hash = "sha256:b3b0da2cc949950ce1259c7bfc1b081905a51916fcd7c8182125b85e70825201", size = 600697, upload-time = "2025-12-09T16:04:37.351Z" }
sdist = { url = "https://files.pythonhosted.org/packages/fc/6d/62e76bbb8144d6ed86e202b5edd8a4cb631e7c8130f3f4893c3f90262b10/mcp-1.26.0.tar.gz", hash = "sha256:db6e2ef491eecc1a0d93711a76f28dec2e05999f93afd48795da1c1137142c66", size = 608005, upload-time = "2026-01-24T19:40:32.468Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/32/c6/13c1a26b47b3f3a3b480783001ada4268917c9f42d78a079c336da2e75e5/mcp-1.23.3-py3-none-any.whl", hash = "sha256:32768af4b46a1b4f7df34e2bfdf5c6011e7b63d7f1b0e321d0fdef4cd6082031", size = 231570, upload-time = "2025-12-09T16:04:35.56Z" },
{ url = "https://files.pythonhosted.org/packages/fd/d9/eaa1f80170d2b7c5ba23f3b59f766f3a0bb41155fbc32a69adfa1adaaef9/mcp-1.26.0-py3-none-any.whl", hash = "sha256:904a21c33c25aa98ddbeb47273033c435e595bbacfdb177f4bd87f6dceebe1ca", size = 233615, upload-time = "2026-01-24T19:40:30.652Z" },
]
[[package]]
@@ -6792,71 +6792,91 @@ wheels = [
[[package]]
name = "regex"
version = "2024.9.11"
version = "2026.1.15"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f9/38/148df33b4dbca3bd069b963acab5e0fa1a9dbd6820f8c322d0dd6faeff96/regex-2024.9.11.tar.gz", hash = "sha256:6c188c307e8433bcb63dc1915022deb553b4203a70722fc542c363bf120a01fd", size = 399403, upload-time = "2024-09-11T19:00:09.814Z" }
sdist = { url = "https://files.pythonhosted.org/packages/0b/86/07d5056945f9ec4590b518171c4254a5925832eb727b56d3c38a7476f316/regex-2026.1.15.tar.gz", hash = "sha256:164759aa25575cbc0651bef59a0b18353e54300d79ace8084c818ad8ac72b7d5", size = 414811, upload-time = "2026-01-14T23:18:02.775Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/63/12/497bd6599ce8a239ade68678132296aec5ee25ebea45fc8ba91aa60fceec/regex-2024.9.11-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:1494fa8725c285a81d01dc8c06b55287a1ee5e0e382d8413adc0a9197aac6408", size = 482488, upload-time = "2024-09-11T18:56:55.331Z" },
{ url = "https://files.pythonhosted.org/packages/c1/24/595ddb9bec2a9b151cdaf9565b0c9f3da9f0cb1dca6c158bc5175332ddf8/regex-2024.9.11-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0e12c481ad92d129c78f13a2a3662317e46ee7ef96c94fd332e1c29131875b7d", size = 287443, upload-time = "2024-09-11T18:56:58.531Z" },
{ url = "https://files.pythonhosted.org/packages/69/a8/b2fb45d9715b1469383a0da7968f8cacc2f83e9fbbcd6b8713752dd980a6/regex-2024.9.11-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:16e13a7929791ac1216afde26f712802e3df7bf0360b32e4914dca3ab8baeea5", size = 284561, upload-time = "2024-09-11T18:57:00.655Z" },
{ url = "https://files.pythonhosted.org/packages/88/87/1ce4a5357216b19b7055e7d3b0efc75a6e426133bf1e7d094321df514257/regex-2024.9.11-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:46989629904bad940bbec2106528140a218b4a36bb3042d8406980be1941429c", size = 783177, upload-time = "2024-09-11T18:57:01.958Z" },
{ url = "https://files.pythonhosted.org/packages/3c/65/b9f002ab32f7b68e7d1dcabb67926f3f47325b8dbc22cc50b6a043e1d07c/regex-2024.9.11-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a906ed5e47a0ce5f04b2c981af1c9acf9e8696066900bf03b9d7879a6f679fc8", size = 823193, upload-time = "2024-09-11T18:57:04.06Z" },
{ url = "https://files.pythonhosted.org/packages/22/91/8339dd3abce101204d246e31bc26cdd7ec07c9f91598472459a3a902aa41/regex-2024.9.11-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e9a091b0550b3b0207784a7d6d0f1a00d1d1c8a11699c1a4d93db3fbefc3ad35", size = 809950, upload-time = "2024-09-11T18:57:05.805Z" },
{ url = "https://files.pythonhosted.org/packages/cb/19/556638aa11c2ec9968a1da998f07f27ec0abb9bf3c647d7c7985ca0b8eea/regex-2024.9.11-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5ddcd9a179c0a6fa8add279a4444015acddcd7f232a49071ae57fa6e278f1f71", size = 782661, upload-time = "2024-09-11T18:57:07.881Z" },
{ url = "https://files.pythonhosted.org/packages/d1/e9/7a5bc4c6ef8d9cd2bdd83a667888fc35320da96a4cc4da5fa084330f53db/regex-2024.9.11-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6b41e1adc61fa347662b09398e31ad446afadff932a24807d3ceb955ed865cc8", size = 772348, upload-time = "2024-09-11T18:57:09.494Z" },
{ url = "https://files.pythonhosted.org/packages/f1/0b/29f2105bfac3ed08e704914c38e93b07c784a6655f8a015297ee7173e95b/regex-2024.9.11-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:ced479f601cd2f8ca1fd7b23925a7e0ad512a56d6e9476f79b8f381d9d37090a", size = 697460, upload-time = "2024-09-11T18:57:11.595Z" },
{ url = "https://files.pythonhosted.org/packages/71/3a/52ff61054d15a4722605f5872ad03962b319a04c1ebaebe570b8b9b7dde1/regex-2024.9.11-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:635a1d96665f84b292e401c3d62775851aedc31d4f8784117b3c68c4fcd4118d", size = 769151, upload-time = "2024-09-11T18:57:14.358Z" },
{ url = "https://files.pythonhosted.org/packages/97/07/37e460ab5ca84be8e1e197c3b526c5c86993dcc9e13cbc805c35fc2463c1/regex-2024.9.11-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:c0256beda696edcf7d97ef16b2a33a8e5a875affd6fa6567b54f7c577b30a137", size = 777478, upload-time = "2024-09-11T18:57:16.397Z" },
{ url = "https://files.pythonhosted.org/packages/65/7b/953075723dd5ab00780043ac2f9de667306ff9e2a85332975e9f19279174/regex-2024.9.11-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:3ce4f1185db3fbde8ed8aa223fc9620f276c58de8b0d4f8cc86fd1360829edb6", size = 845373, upload-time = "2024-09-11T18:57:17.938Z" },
{ url = "https://files.pythonhosted.org/packages/40/b8/3e9484c6230b8b6e8f816ab7c9a080e631124991a4ae2c27a81631777db0/regex-2024.9.11-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:09d77559e80dcc9d24570da3745ab859a9cf91953062e4ab126ba9d5993688ca", size = 845369, upload-time = "2024-09-11T18:57:20.091Z" },
{ url = "https://files.pythonhosted.org/packages/b7/99/38434984d912edbd2e1969d116257e869578f67461bd7462b894c45ed874/regex-2024.9.11-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:7a22ccefd4db3f12b526eccb129390942fe874a3a9fdbdd24cf55773a1faab1a", size = 773935, upload-time = "2024-09-11T18:57:21.652Z" },
{ url = "https://files.pythonhosted.org/packages/ab/67/43174d2b46fa947b7b9dfe56b6c8a8a76d44223f35b1d64645a732fd1d6f/regex-2024.9.11-cp310-cp310-win32.whl", hash = "sha256:f745ec09bc1b0bd15cfc73df6fa4f726dcc26bb16c23a03f9e3367d357eeedd0", size = 261624, upload-time = "2024-09-11T18:57:23.777Z" },
{ url = "https://files.pythonhosted.org/packages/c4/2a/4f9c47d9395b6aff24874c761d8d620c0232f97c43ef3cf668c8b355e7a7/regex-2024.9.11-cp310-cp310-win_amd64.whl", hash = "sha256:01c2acb51f8a7d6494c8c5eafe3d8e06d76563d8a8a4643b37e9b2dd8a2ff623", size = 274020, upload-time = "2024-09-11T18:57:25.27Z" },
{ url = "https://files.pythonhosted.org/packages/86/a1/d526b7b6095a0019aa360948c143aacfeb029919c898701ce7763bbe4c15/regex-2024.9.11-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:2cce2449e5927a0bf084d346da6cd5eb016b2beca10d0013ab50e3c226ffc0df", size = 482483, upload-time = "2024-09-11T18:57:26.694Z" },
{ url = "https://files.pythonhosted.org/packages/32/d9/bfdd153179867c275719e381e1e8e84a97bd186740456a0dcb3e7125c205/regex-2024.9.11-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3b37fa423beefa44919e009745ccbf353d8c981516e807995b2bd11c2c77d268", size = 287442, upload-time = "2024-09-11T18:57:28.133Z" },
{ url = "https://files.pythonhosted.org/packages/33/c4/60f3370735135e3a8d673ddcdb2507a8560d0e759e1398d366e43d000253/regex-2024.9.11-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:64ce2799bd75039b480cc0360907c4fb2f50022f030bf9e7a8705b636e408fad", size = 284561, upload-time = "2024-09-11T18:57:30.83Z" },
{ url = "https://files.pythonhosted.org/packages/b1/51/91a5ebdff17f9ec4973cb0aa9d37635efec1c6868654bbc25d1543aca4ec/regex-2024.9.11-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a4cc92bb6db56ab0c1cbd17294e14f5e9224f0cc6521167ef388332604e92679", size = 791779, upload-time = "2024-09-11T18:57:32.461Z" },
{ url = "https://files.pythonhosted.org/packages/07/4a/022c5e6f0891a90cd7eb3d664d6c58ce2aba48bff107b00013f3d6167069/regex-2024.9.11-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d05ac6fa06959c4172eccd99a222e1fbf17b5670c4d596cb1e5cde99600674c4", size = 832605, upload-time = "2024-09-11T18:57:34.01Z" },
{ url = "https://files.pythonhosted.org/packages/ac/1c/3793990c8c83ca04e018151ddda83b83ecc41d89964f0f17749f027fc44d/regex-2024.9.11-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:040562757795eeea356394a7fb13076ad4f99d3c62ab0f8bdfb21f99a1f85664", size = 818556, upload-time = "2024-09-11T18:57:36.363Z" },
{ url = "https://files.pythonhosted.org/packages/e9/5c/8b385afbfacb853730682c57be56225f9fe275c5bf02ac1fc88edbff316d/regex-2024.9.11-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6113c008a7780792efc80f9dfe10ba0cd043cbf8dc9a76ef757850f51b4edc50", size = 792808, upload-time = "2024-09-11T18:57:38.493Z" },
{ url = "https://files.pythonhosted.org/packages/ea/d2/e6ee96b7dff201a83f650241c52db8e5bd080967cb93211f57aa448dc9d6/regex-2026.1.15-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:4e3dd93c8f9abe8aa4b6c652016da9a3afa190df5ad822907efe6b206c09896e", size = 488166, upload-time = "2026-01-14T23:13:46.408Z" },
{ url = "https://files.pythonhosted.org/packages/23/8a/819e9ce14c9f87af026d0690901b3931f3101160833e5d4c8061fa3a1b67/regex-2026.1.15-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:97499ff7862e868b1977107873dd1a06e151467129159a6ffd07b66706ba3a9f", size = 290632, upload-time = "2026-01-14T23:13:48.688Z" },
{ url = "https://files.pythonhosted.org/packages/d5/c3/23dfe15af25d1d45b07dfd4caa6003ad710dcdcb4c4b279909bdfe7a2de8/regex-2026.1.15-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:0bda75ebcac38d884240914c6c43d8ab5fb82e74cde6da94b43b17c411aa4c2b", size = 288500, upload-time = "2026-01-14T23:13:50.503Z" },
{ url = "https://files.pythonhosted.org/packages/c6/31/1adc33e2f717df30d2f4d973f8776d2ba6ecf939301efab29fca57505c95/regex-2026.1.15-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:7dcc02368585334f5bc81fc73a2a6a0bbade60e7d83da21cead622faf408f32c", size = 781670, upload-time = "2026-01-14T23:13:52.453Z" },
{ url = "https://files.pythonhosted.org/packages/23/ce/21a8a22d13bc4adcb927c27b840c948f15fc973e21ed2346c1bd0eae22dc/regex-2026.1.15-cp310-cp310-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:693b465171707bbe882a7a05de5e866f33c76aa449750bee94a8d90463533cc9", size = 850820, upload-time = "2026-01-14T23:13:54.894Z" },
{ url = "https://files.pythonhosted.org/packages/6c/4f/3eeacdf587a4705a44484cd0b30e9230a0e602811fb3e2cc32268c70d509/regex-2026.1.15-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:b0d190e6f013ea938623a58706d1469a62103fb2a241ce2873a9906e0386582c", size = 898777, upload-time = "2026-01-14T23:13:56.908Z" },
{ url = "https://files.pythonhosted.org/packages/79/a9/1898a077e2965c35fc22796488141a22676eed2d73701e37c73ad7c0b459/regex-2026.1.15-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:5ff818702440a5878a81886f127b80127f5d50563753a28211482867f8318106", size = 791750, upload-time = "2026-01-14T23:13:58.527Z" },
{ url = "https://files.pythonhosted.org/packages/4c/84/e31f9d149a178889b3817212827f5e0e8c827a049ff31b4b381e76b26e2d/regex-2026.1.15-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f052d1be37ef35a54e394de66136e30fa1191fab64f71fc06ac7bc98c9a84618", size = 782674, upload-time = "2026-01-14T23:13:59.874Z" },
{ url = "https://files.pythonhosted.org/packages/d2/ff/adf60063db24532add6a1676943754a5654dcac8237af024ede38244fd12/regex-2026.1.15-cp310-cp310-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:6bfc31a37fd1592f0c4fc4bfc674b5c42e52efe45b4b7a6a14f334cca4bcebe4", size = 767906, upload-time = "2026-01-14T23:14:01.298Z" },
{ url = "https://files.pythonhosted.org/packages/af/3e/e6a216cee1e2780fec11afe7fc47b6f3925d7264e8149c607ac389fd9b1a/regex-2026.1.15-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:3d6ce5ae80066b319ae3bc62fd55a557c9491baa5efd0d355f0de08c4ba54e79", size = 774798, upload-time = "2026-01-14T23:14:02.715Z" },
{ url = "https://files.pythonhosted.org/packages/0f/98/23a4a8378a9208514ed3efc7e7850c27fa01e00ed8557c958df0335edc4a/regex-2026.1.15-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:1704d204bd42b6bb80167df0e4554f35c255b579ba99616def38f69e14a5ccb9", size = 845861, upload-time = "2026-01-14T23:14:04.824Z" },
{ url = "https://files.pythonhosted.org/packages/f8/57/d7605a9d53bd07421a8785d349cd29677fe660e13674fa4c6cbd624ae354/regex-2026.1.15-cp310-cp310-musllinux_1_2_riscv64.whl", hash = "sha256:e3174a5ed4171570dc8318afada56373aa9289eb6dc0d96cceb48e7358b0e220", size = 755648, upload-time = "2026-01-14T23:14:06.371Z" },
{ url = "https://files.pythonhosted.org/packages/6f/76/6f2e24aa192da1e299cc1101674a60579d3912391867ce0b946ba83e2194/regex-2026.1.15-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:87adf5bd6d72e3e17c9cb59ac4096b1faaf84b7eb3037a5ffa61c4b4370f0f13", size = 836250, upload-time = "2026-01-14T23:14:08.343Z" },
{ url = "https://files.pythonhosted.org/packages/11/3a/1f2a1d29453299a7858eab7759045fc3d9d1b429b088dec2dc85b6fa16a2/regex-2026.1.15-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:e85dc94595f4d766bd7d872a9de5ede1ca8d3063f3bdf1e2c725f5eb411159e3", size = 779919, upload-time = "2026-01-14T23:14:09.954Z" },
{ url = "https://files.pythonhosted.org/packages/c0/67/eab9bc955c9dcc58e9b222c801e39cff7ca0b04261792a2149166ce7e792/regex-2026.1.15-cp310-cp310-win32.whl", hash = "sha256:21ca32c28c30d5d65fc9886ff576fc9b59bbca08933e844fa2363e530f4c8218", size = 265888, upload-time = "2026-01-14T23:14:11.35Z" },
{ url = "https://files.pythonhosted.org/packages/1d/62/31d16ae24e1f8803bddb0885508acecaec997fcdcde9c243787103119ae4/regex-2026.1.15-cp310-cp310-win_amd64.whl", hash = "sha256:3038a62fc7d6e5547b8915a3d927a0fbeef84cdbe0b1deb8c99bbd4a8961b52a", size = 277830, upload-time = "2026-01-14T23:14:12.908Z" },
{ url = "https://files.pythonhosted.org/packages/e5/36/5d9972bccd6417ecd5a8be319cebfd80b296875e7f116c37fb2a2deecebf/regex-2026.1.15-cp310-cp310-win_arm64.whl", hash = "sha256:505831646c945e3e63552cc1b1b9b514f0e93232972a2d5bedbcc32f15bc82e3", size = 270376, upload-time = "2026-01-14T23:14:14.782Z" },
{ url = "https://files.pythonhosted.org/packages/d0/c9/0c80c96eab96948363d270143138d671d5731c3a692b417629bf3492a9d6/regex-2026.1.15-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:1ae6020fb311f68d753b7efa9d4b9a5d47a5d6466ea0d5e3b5a471a960ea6e4a", size = 488168, upload-time = "2026-01-14T23:14:16.129Z" },
{ url = "https://files.pythonhosted.org/packages/17/f0/271c92f5389a552494c429e5cc38d76d1322eb142fb5db3c8ccc47751468/regex-2026.1.15-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:eddf73f41225942c1f994914742afa53dc0d01a6e20fe14b878a1b1edc74151f", size = 290636, upload-time = "2026-01-14T23:14:17.715Z" },
{ url = "https://files.pythonhosted.org/packages/a0/f9/5f1fd077d106ca5655a0f9ff8f25a1ab55b92128b5713a91ed7134ff688e/regex-2026.1.15-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:1e8cd52557603f5c66a548f69421310886b28b7066853089e1a71ee710e1cdc1", size = 288496, upload-time = "2026-01-14T23:14:19.326Z" },
{ url = "https://files.pythonhosted.org/packages/b5/e1/8f43b03a4968c748858ec77f746c286d81f896c2e437ccf050ebc5d3128c/regex-2026.1.15-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5170907244b14303edc5978f522f16c974f32d3aa92109fabc2af52411c9433b", size = 793503, upload-time = "2026-01-14T23:14:20.922Z" },
{ url = "https://files.pythonhosted.org/packages/8d/4e/a39a5e8edc5377a46a7c875c2f9a626ed3338cb3bb06931be461c3e1a34a/regex-2026.1.15-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:2748c1ec0663580b4510bd89941a31560b4b439a0b428b49472a3d9944d11cd8", size = 860535, upload-time = "2026-01-14T23:14:22.405Z" },
{ url = "https://files.pythonhosted.org/packages/dc/1c/9dce667a32a9477f7a2869c1c767dc00727284a9fa3ff5c09a5c6c03575e/regex-2026.1.15-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:2f2775843ca49360508d080eaa87f94fa248e2c946bbcd963bb3aae14f333413", size = 907225, upload-time = "2026-01-14T23:14:23.897Z" },
{ url = "https://files.pythonhosted.org/packages/a4/3c/87ca0a02736d16b6262921425e84b48984e77d8e4e572c9072ce96e66c30/regex-2026.1.15-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:d9ea2604370efc9a174c1b5dcc81784fb040044232150f7f33756049edfc9026", size = 800526, upload-time = "2026-01-14T23:14:26.039Z" },
{ url = "https://files.pythonhosted.org/packages/4b/ff/647d5715aeea7c87bdcbd2f578f47b415f55c24e361e639fe8c0cc88878f/regex-2026.1.15-cp311-cp311-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:0dcd31594264029b57bf16f37fd7248a70b3b764ed9e0839a8f271b2d22c0785", size = 773446, upload-time = "2026-01-14T23:14:28.109Z" },
{ url = "https://files.pythonhosted.org/packages/af/89/bf22cac25cb4ba0fe6bff52ebedbb65b77a179052a9d6037136ae93f42f4/regex-2026.1.15-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:c08c1f3e34338256732bd6938747daa3c0d5b251e04b6e43b5813e94d503076e", size = 783051, upload-time = "2026-01-14T23:14:29.929Z" },
{ url = "https://files.pythonhosted.org/packages/1e/f4/6ed03e71dca6348a5188363a34f5e26ffd5db1404780288ff0d79513bce4/regex-2026.1.15-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:e43a55f378df1e7a4fa3547c88d9a5a9b7113f653a66821bcea4718fe6c58763", size = 854485, upload-time = "2026-01-14T23:14:31.366Z" },
{ url = "https://files.pythonhosted.org/packages/d9/9a/8e8560bd78caded8eb137e3e47612430a05b9a772caf60876435192d670a/regex-2026.1.15-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:f82110ab962a541737bd0ce87978d4c658f06e7591ba899192e2712a517badbb", size = 762195, upload-time = "2026-01-14T23:14:32.802Z" },
{ url = "https://files.pythonhosted.org/packages/38/6b/61fc710f9aa8dfcd764fe27d37edfaa023b1a23305a0d84fccd5adb346ea/regex-2026.1.15-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:27618391db7bdaf87ac6c92b31e8f0dfb83a9de0075855152b720140bda177a2", size = 845986, upload-time = "2026-01-14T23:14:34.898Z" },
{ url = "https://files.pythonhosted.org/packages/fd/2e/fbee4cb93f9d686901a7ca8d94285b80405e8c34fe4107f63ffcbfb56379/regex-2026.1.15-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:bfb0d6be01fbae8d6655c8ca21b3b72458606c4aec9bbc932db758d47aba6db1", size = 788992, upload-time = "2026-01-14T23:14:37.116Z" },
{ url = "https://files.pythonhosted.org/packages/ed/14/3076348f3f586de64b1ab75a3fbabdaab7684af7f308ad43be7ef1849e55/regex-2026.1.15-cp311-cp311-win32.whl", hash = "sha256:b10e42a6de0e32559a92f2f8dc908478cc0fa02838d7dbe764c44dca3fa13569", size = 265893, upload-time = "2026-01-14T23:14:38.426Z" },
{ url = "https://files.pythonhosted.org/packages/0f/19/772cf8b5fc803f5c89ba85d8b1870a1ca580dc482aa030383a9289c82e44/regex-2026.1.15-cp311-cp311-win_amd64.whl", hash = "sha256:e9bf3f0bbdb56633c07d7116ae60a576f846efdd86a8848f8d62b749e1209ca7", size = 277840, upload-time = "2026-01-14T23:14:39.785Z" },
{ url = "https://files.pythonhosted.org/packages/78/84/d05f61142709474da3c0853222d91086d3e1372bcdab516c6fd8d80f3297/regex-2026.1.15-cp311-cp311-win_arm64.whl", hash = "sha256:41aef6f953283291c4e4e6850607bd71502be67779586a61472beacb315c97ec", size = 270374, upload-time = "2026-01-14T23:14:41.592Z" },
{ url = "https://files.pythonhosted.org/packages/92/81/10d8cf43c807d0326efe874c1b79f22bfb0fb226027b0b19ebc26d301408/regex-2026.1.15-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:4c8fcc5793dde01641a35905d6731ee1548f02b956815f8f1cab89e515a5bdf1", size = 489398, upload-time = "2026-01-14T23:14:43.741Z" },
{ url = "https://files.pythonhosted.org/packages/90/b0/7c2a74e74ef2a7c32de724658a69a862880e3e4155cba992ba04d1c70400/regex-2026.1.15-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:bfd876041a956e6a90ad7cdb3f6a630c07d491280bfeed4544053cd434901681", size = 291339, upload-time = "2026-01-14T23:14:45.183Z" },
{ url = "https://files.pythonhosted.org/packages/19/4d/16d0773d0c818417f4cc20aa0da90064b966d22cd62a8c46765b5bd2d643/regex-2026.1.15-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:9250d087bc92b7d4899ccd5539a1b2334e44eee85d848c4c1aef8e221d3f8c8f", size = 289003, upload-time = "2026-01-14T23:14:47.25Z" },
{ url = "https://files.pythonhosted.org/packages/c6/e4/1fc4599450c9f0863d9406e944592d968b8d6dfd0d552a7d569e43bceada/regex-2026.1.15-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c8a154cf6537ebbc110e24dabe53095e714245c272da9c1be05734bdad4a61aa", size = 798656, upload-time = "2026-01-14T23:14:48.77Z" },
{ url = "https://files.pythonhosted.org/packages/b2/e6/59650d73a73fa8a60b3a590545bfcf1172b4384a7df2e7fe7b9aab4e2da9/regex-2026.1.15-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:8050ba2e3ea1d8731a549e83c18d2f0999fbc99a5f6bd06b4c91449f55291804", size = 864252, upload-time = "2026-01-14T23:14:50.528Z" },
{ url = "https://files.pythonhosted.org/packages/6e/ab/1d0f4d50a1638849a97d731364c9a80fa304fec46325e48330c170ee8e80/regex-2026.1.15-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:0bf065240704cb8951cc04972cf107063917022511273e0969bdb34fc173456c", size = 912268, upload-time = "2026-01-14T23:14:52.952Z" },
{ url = "https://files.pythonhosted.org/packages/dd/df/0d722c030c82faa1d331d1921ee268a4e8fb55ca8b9042c9341c352f17fa/regex-2026.1.15-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c32bef3e7aeee75746748643667668ef941d28b003bfc89994ecf09a10f7a1b5", size = 803589, upload-time = "2026-01-14T23:14:55.182Z" },
{ url = "https://files.pythonhosted.org/packages/66/23/33289beba7ccb8b805c6610a8913d0131f834928afc555b241caabd422a9/regex-2026.1.15-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:d5eaa4a4c5b1906bd0d2508d68927f15b81821f85092e06f1a34a4254b0e1af3", size = 775700, upload-time = "2026-01-14T23:14:56.707Z" },
{ url = "https://files.pythonhosted.org/packages/e7/65/bf3a42fa6897a0d3afa81acb25c42f4b71c274f698ceabd75523259f6688/regex-2026.1.15-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:86c1077a3cc60d453d4084d5b9649065f3bf1184e22992bd322e1f081d3117fb", size = 787928, upload-time = "2026-01-14T23:14:58.312Z" },
{ url = "https://files.pythonhosted.org/packages/f4/f5/13bf65864fc314f68cdd6d8ca94adcab064d4d39dbd0b10fef29a9da48fc/regex-2026.1.15-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:2b091aefc05c78d286657cd4db95f2e6313375ff65dcf085e42e4c04d9c8d410", size = 858607, upload-time = "2026-01-14T23:15:00.657Z" },
{ url = "https://files.pythonhosted.org/packages/a3/31/040e589834d7a439ee43fb0e1e902bc81bd58a5ba81acffe586bb3321d35/regex-2026.1.15-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:57e7d17f59f9ebfa9667e6e5a1c0127b96b87cb9cede8335482451ed00788ba4", size = 763729, upload-time = "2026-01-14T23:15:02.248Z" },
{ url = "https://files.pythonhosted.org/packages/9b/84/6921e8129687a427edf25a34a5594b588b6d88f491320b9de5b6339a4fcb/regex-2026.1.15-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:c6c4dcdfff2c08509faa15d36ba7e5ef5fcfab25f1e8f85a0c8f45bc3a30725d", size = 850697, upload-time = "2026-01-14T23:15:03.878Z" },
{ url = "https://files.pythonhosted.org/packages/8a/87/3d06143d4b128f4229158f2de5de6c8f2485170c7221e61bf381313314b2/regex-2026.1.15-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:cf8ff04c642716a7f2048713ddc6278c5fd41faa3b9cab12607c7abecd012c22", size = 789849, upload-time = "2026-01-14T23:15:06.102Z" },
{ url = "https://files.pythonhosted.org/packages/77/69/c50a63842b6bd48850ebc7ab22d46e7a2a32d824ad6c605b218441814639/regex-2026.1.15-cp312-cp312-win32.whl", hash = "sha256:82345326b1d8d56afbe41d881fdf62f1926d7264b2fc1537f99ae5da9aad7913", size = 266279, upload-time = "2026-01-14T23:15:07.678Z" },
{ url = "https://files.pythonhosted.org/packages/f2/36/39d0b29d087e2b11fd8191e15e81cce1b635fcc845297c67f11d0d19274d/regex-2026.1.15-cp312-cp312-win_amd64.whl", hash = "sha256:4def140aa6156bc64ee9912383d4038f3fdd18fee03a6f222abd4de6357ce42a", size = 277166, upload-time = "2026-01-14T23:15:09.257Z" },
{ url = "https://files.pythonhosted.org/packages/28/32/5b8e476a12262748851fa8ab1b0be540360692325975b094e594dfebbb52/regex-2026.1.15-cp312-cp312-win_arm64.whl", hash = "sha256:c6c565d9a6e1a8d783c1948937ffc377dd5771e83bd56de8317c450a954d2056", size = 270415, upload-time = "2026-01-14T23:15:10.743Z" },
{ url = "https://files.pythonhosted.org/packages/f8/2e/6870bb16e982669b674cce3ee9ff2d1d46ab80528ee6bcc20fb2292efb60/regex-2026.1.15-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:e69d0deeb977ffe7ed3d2e4439360089f9c3f217ada608f0f88ebd67afb6385e", size = 489164, upload-time = "2026-01-14T23:15:13.962Z" },
{ url = "https://files.pythonhosted.org/packages/dc/67/9774542e203849b0286badf67199970a44ebdb0cc5fb739f06e47ada72f8/regex-2026.1.15-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:3601ffb5375de85a16f407854d11cca8fe3f5febbe3ac78fb2866bb220c74d10", size = 291218, upload-time = "2026-01-14T23:15:15.647Z" },
{ url = "https://files.pythonhosted.org/packages/b2/87/b0cda79f22b8dee05f774922a214da109f9a4c0eca5da2c9d72d77ea062c/regex-2026.1.15-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:4c5ef43b5c2d4114eb8ea424bb8c9cec01d5d17f242af88b2448f5ee81caadbc", size = 288895, upload-time = "2026-01-14T23:15:17.788Z" },
{ url = "https://files.pythonhosted.org/packages/3b/6a/0041f0a2170d32be01ab981d6346c83a8934277d82c780d60b127331f264/regex-2026.1.15-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:968c14d4f03e10b2fd960f1d5168c1f0ac969381d3c1fcc973bc45fb06346599", size = 798680, upload-time = "2026-01-14T23:15:19.342Z" },
{ url = "https://files.pythonhosted.org/packages/58/de/30e1cfcdbe3e891324aa7568b7c968771f82190df5524fabc1138cb2d45a/regex-2026.1.15-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:56a5595d0f892f214609c9f76b41b7428bed439d98dc961efafdd1354d42baae", size = 864210, upload-time = "2026-01-14T23:15:22.005Z" },
{ url = "https://files.pythonhosted.org/packages/64/44/4db2f5c5ca0ccd40ff052ae7b1e9731352fcdad946c2b812285a7505ca75/regex-2026.1.15-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:0bf650f26087363434c4e560011f8e4e738f6f3e029b85d4904c50135b86cfa5", size = 912358, upload-time = "2026-01-14T23:15:24.569Z" },
{ url = "https://files.pythonhosted.org/packages/79/b6/e6a5665d43a7c42467138c8a2549be432bad22cbd206f5ec87162de74bd7/regex-2026.1.15-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:18388a62989c72ac24de75f1449d0fb0b04dfccd0a1a7c1c43af5eb503d890f6", size = 803583, upload-time = "2026-01-14T23:15:26.526Z" },
{ url = "https://files.pythonhosted.org/packages/e7/53/7cd478222169d85d74d7437e74750005e993f52f335f7c04ff7adfda3310/regex-2026.1.15-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:6d220a2517f5893f55daac983bfa9fe998a7dbcaee4f5d27a88500f8b7873788", size = 775782, upload-time = "2026-01-14T23:15:29.352Z" },
{ url = "https://files.pythonhosted.org/packages/ca/b5/75f9a9ee4b03a7c009fe60500fe550b45df94f0955ca29af16333ef557c5/regex-2026.1.15-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:c9c08c2fbc6120e70abff5d7f28ffb4d969e14294fb2143b4b5c7d20e46d1714", size = 787978, upload-time = "2026-01-14T23:15:31.295Z" },
{ url = "https://files.pythonhosted.org/packages/72/b3/79821c826245bbe9ccbb54f6eadb7879c722fd3e0248c17bfc90bf54e123/regex-2026.1.15-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:7ef7d5d4bd49ec7364315167a4134a015f61e8266c6d446fc116a9ac4456e10d", size = 858550, upload-time = "2026-01-14T23:15:33.558Z" },
{ url = "https://files.pythonhosted.org/packages/4a/85/2ab5f77a1c465745bfbfcb3ad63178a58337ae8d5274315e2cc623a822fa/regex-2026.1.15-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:6e42844ad64194fa08d5ccb75fe6a459b9b08e6d7296bd704460168d58a388f3", size = 763747, upload-time = "2026-01-14T23:15:35.206Z" },
{ url = "https://files.pythonhosted.org/packages/6d/84/c27df502d4bfe2873a3e3a7cf1bdb2b9cc10284d1a44797cf38bed790470/regex-2026.1.15-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:cfecdaa4b19f9ca534746eb3b55a5195d5c95b88cac32a205e981ec0a22b7d31", size = 850615, upload-time = "2026-01-14T23:15:37.523Z" },
{ url = "https://files.pythonhosted.org/packages/7d/b7/658a9782fb253680aa8ecb5ccbb51f69e088ed48142c46d9f0c99b46c575/regex-2026.1.15-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:08df9722d9b87834a3d701f3fca570b2be115654dbfd30179f30ab2f39d606d3", size = 789951, upload-time = "2026-01-14T23:15:39.582Z" },
{ url = "https://files.pythonhosted.org/packages/fc/2a/5928af114441e059f15b2f63e188bd00c6529b3051c974ade7444b85fcda/regex-2026.1.15-cp313-cp313-win32.whl", hash = "sha256:d426616dae0967ca225ab12c22274eb816558f2f99ccb4a1d52ca92e8baf180f", size = 266275, upload-time = "2026-01-14T23:15:42.108Z" },
{ url = "https://files.pythonhosted.org/packages/4f/16/5bfbb89e435897bff28cf0352a992ca719d9e55ebf8b629203c96b6ce4f7/regex-2026.1.15-cp313-cp313-win_amd64.whl", hash = "sha256:febd38857b09867d3ed3f4f1af7d241c5c50362e25ef43034995b77a50df494e", size = 277145, upload-time = "2026-01-14T23:15:44.244Z" },
{ url = "https://files.pythonhosted.org/packages/56/c1/a09ff7392ef4233296e821aec5f78c51be5e91ffde0d163059e50fd75835/regex-2026.1.15-cp313-cp313-win_arm64.whl", hash = "sha256:8e32f7896f83774f91499d239e24cebfadbc07639c1494bb7213983842348337", size = 270411, upload-time = "2026-01-14T23:15:45.858Z" },
{ url = "https://files.pythonhosted.org/packages/3c/38/0cfd5a78e5c6db00e6782fdae70458f89850ce95baa5e8694ab91d89744f/regex-2026.1.15-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:ec94c04149b6a7b8120f9f44565722c7ae31b7a6d2275569d2eefa76b83da3be", size = 492068, upload-time = "2026-01-14T23:15:47.616Z" },
{ url = "https://files.pythonhosted.org/packages/50/72/6c86acff16cb7c959c4355826bbf06aad670682d07c8f3998d9ef4fee7cd/regex-2026.1.15-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:40c86d8046915bb9aeb15d3f3f15b6fd500b8ea4485b30e1bbc799dab3fe29f8", size = 292756, upload-time = "2026-01-14T23:15:49.307Z" },
{ url = "https://files.pythonhosted.org/packages/4e/58/df7fb69eadfe76526ddfce28abdc0af09ffe65f20c2c90932e89d705153f/regex-2026.1.15-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:726ea4e727aba21643205edad8f2187ec682d3305d790f73b7a51c7587b64bdd", size = 291114, upload-time = "2026-01-14T23:15:51.484Z" },
{ url = "https://files.pythonhosted.org/packages/ed/6c/a4011cd1cf96b90d2cdc7e156f91efbd26531e822a7fbb82a43c1016678e/regex-2026.1.15-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1cb740d044aff31898804e7bf1181cc72c03d11dfd19932b9911ffc19a79070a", size = 807524, upload-time = "2026-01-14T23:15:53.102Z" },
{ url = "https://files.pythonhosted.org/packages/1d/25/a53ffb73183f69c3e9f4355c4922b76d2840aee160af6af5fac229b6201d/regex-2026.1.15-cp313-cp313t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:05d75a668e9ea16f832390d22131fe1e8acc8389a694c8febc3e340b0f810b93", size = 873455, upload-time = "2026-01-14T23:15:54.956Z" },
{ url = "https://files.pythonhosted.org/packages/66/0b/8b47fc2e8f97d9b4a851736f3890a5f786443aa8901061c55f24c955f45b/regex-2026.1.15-cp313-cp313t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:d991483606f3dbec93287b9f35596f41aa2e92b7c2ebbb935b63f409e243c9af", size = 915007, upload-time = "2026-01-14T23:15:57.041Z" },
{ url = "https://files.pythonhosted.org/packages/c2/fa/97de0d681e6d26fabe71968dbee06dd52819e9a22fdce5dac7256c31ed84/regex-2026.1.15-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:194312a14819d3e44628a44ed6fea6898fdbecb0550089d84c403475138d0a09", size = 812794, upload-time = "2026-01-14T23:15:58.916Z" },
{ url = "https://files.pythonhosted.org/packages/22/38/e752f94e860d429654aa2b1c51880bff8dfe8f084268258adf9151cf1f53/regex-2026.1.15-cp313-cp313t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:fe2fda4110a3d0bc163c2e0664be44657431440722c5c5315c65155cab92f9e5", size = 781159, upload-time = "2026-01-14T23:16:00.817Z" },
{ url = "https://files.pythonhosted.org/packages/e9/a7/d739ffaef33c378fc888302a018d7f81080393d96c476b058b8c64fd2b0d/regex-2026.1.15-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:124dc36c85d34ef2d9164da41a53c1c8c122cfb1f6e1ec377a1f27ee81deb794", size = 795558, upload-time = "2026-01-14T23:16:03.267Z" },
{ url = "https://files.pythonhosted.org/packages/3e/c4/542876f9a0ac576100fc73e9c75b779f5c31e3527576cfc9cb3009dcc58a/regex-2026.1.15-cp313-cp313t-musllinux_1_2_ppc64le.whl", hash = "sha256:a1774cd1981cd212506a23a14dba7fdeaee259f5deba2df6229966d9911e767a", size = 868427, upload-time = "2026-01-14T23:16:05.646Z" },
{ url = "https://files.pythonhosted.org/packages/fc/0f/d5655bea5b22069e32ae85a947aa564912f23758e112cdb74212848a1a1b/regex-2026.1.15-cp313-cp313t-musllinux_1_2_riscv64.whl", hash = "sha256:b5f7d8d2867152cdb625e72a530d2ccb48a3d199159144cbdd63870882fb6f80", size = 769939, upload-time = "2026-01-14T23:16:07.542Z" },
{ url = "https://files.pythonhosted.org/packages/20/06/7e18a4fa9d326daeda46d471a44ef94201c46eaa26dbbb780b5d92cbfdda/regex-2026.1.15-cp313-cp313t-musllinux_1_2_s390x.whl", hash = "sha256:492534a0ab925d1db998defc3c302dae3616a2fc3fe2e08db1472348f096ddf2", size = 854753, upload-time = "2026-01-14T23:16:10.395Z" },
{ url = "https://files.pythonhosted.org/packages/3b/67/dc8946ef3965e166f558ef3b47f492bc364e96a265eb4a2bb3ca765c8e46/regex-2026.1.15-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:c661fc820cfb33e166bf2450d3dadbda47c8d8981898adb9b6fe24e5e582ba60", size = 799559, upload-time = "2026-01-14T23:16:12.347Z" },
{ url = "https://files.pythonhosted.org/packages/a5/61/1bba81ff6d50c86c65d9fd84ce9699dd106438ee4cdb105bf60374ee8412/regex-2026.1.15-cp313-cp313t-win32.whl", hash = "sha256:99ad739c3686085e614bf77a508e26954ff1b8f14da0e3765ff7abbf7799f952", size = 268879, upload-time = "2026-01-14T23:16:14.049Z" },
{ url = "https://files.pythonhosted.org/packages/e9/5e/cef7d4c5fb0ea3ac5c775fd37db5747f7378b29526cc83f572198924ff47/regex-2026.1.15-cp313-cp313t-win_amd64.whl", hash = "sha256:32655d17905e7ff8ba5c764c43cb124e34a9245e45b83c22e81041e1071aee10", size = 280317, upload-time = "2026-01-14T23:16:15.718Z" },
{ url = "https://files.pythonhosted.org/packages/b4/52/4317f7a5988544e34ab57b4bde0f04944c4786128c933fb09825924d3e82/regex-2026.1.15-cp313-cp313t-win_arm64.whl", hash = "sha256:b2a13dd6a95e95a489ca242319d18fc02e07ceb28fa9ad146385194d95b3c829", size = 271551, upload-time = "2026-01-14T23:16:17.533Z" },
]
[[package]]