Compare commits

...

11 Commits

Author SHA1 Message Date
Lucas Gomide
bdc6706b4b docs: add missing docs about LLMGuardrail events 2025-06-16 11:51:36 -03:00
Vidit Ostwal
a40447df29 updated docs (#2989)
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-06-16 10:49:27 -04:00
leopardracer
5d6b467042 Update quickstart.mdx (#2998)
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-06-16 10:35:52 -04:00
Greyson LaLonde
e0ff30c212 Fix tools parameter syntax 2025-06-16 10:25:34 -04:00
Lorenze Jay
a5b5c8ab37 Lorenze/console printer nice (#3004)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* fix: possible fix for Thinking stuck

* feat: add agent logging events for execution tracking

- Introduced AgentLogsStartedEvent and AgentLogsExecutionEvent to enhance logging capabilities during agent execution.
- Updated CrewAgentExecutor to emit these events at the start and during execution, respectively.
- Modified EventListener to handle the new logging events and format output accordingly in the console.
- Enhanced ConsoleFormatter to display agent logs in a structured format, improving visibility of agent actions and outputs.

* drop emoji

* refactor: improve code structure and logging in LiteAgent and ConsoleFormatter

- Refactored imports in lite_agent.py for better readability.
- Enhanced guardrail property initialization in LiteAgent.
- Updated logging functionality to emit AgentLogsExecutionEvent for better tracking.
- Modified ConsoleFormatter to include tool arguments and final output in status updates.
- Improved output formatting for long text in ConsoleFormatter.

* fix tests

---------

Co-authored-by: Eduardo Chiarotti <dudumelgaco@hotmail.com>
2025-06-14 12:21:46 -07:00
Vidit Ostwal
7f12e98de5 Added sanitize role feature in mem0 storage (#2988)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* Added sanitize role feature in mme0 storage

* Used chroma db functionality
2025-06-12 13:14:34 -04:00
Lorenze Jay
99133104dd Update version to 0.130.0 and dependencies in pyproject.toml and uv.lock (#3002)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
- Bump CrewAI version from 0.126.0 to 0.130.0 in pyproject.toml and uv.lock.
- Update optional dependency 'crewai-tools' version from 0.46.0 to 0.47.1.
- Adjust dependency specifications in CLI templates to reflect the new version.
2025-06-11 17:01:11 -07:00
devin-ai-integration[bot]
970a63c13c Fix issue 2993: Prevent Flow status logs from hiding human input (#2994)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
* Fix issue 2993: Prevent Flow status logs from hiding human input

- Add pause_live_updates() and resume_live_updates() methods to ConsoleFormatter
- Modify _ask_human_input() to pause Flow status updates during human input
- Add comprehensive tests for pause/resume functionality and integration
- Ensure Live session is properly managed during human input prompts
- Fix prevents Flow status logs from overwriting user input prompts

Fixes #2993

Co-Authored-By: João <joao@crewai.com>

* Fix lint: Remove unused pytest import

- Remove unused pytest import from test_console_formatter_pause_resume.py
- Fixes F401 lint error identified in CI

Co-Authored-By: João <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: João <joao@crewai.com>
2025-06-11 12:08:00 -04:00
devin-ai-integration[bot]
06c991d8c3 Fix telemetry singleton pattern to respect dynamic environment variables (#2946)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
* Fix telemetry singleton pattern to respect dynamic environment variables

- Modified Telemetry.__init__ to prevent re-initialization with _initialized flag
- Updated _safe_telemetry_operation to check _is_telemetry_disabled() dynamically
- Added comprehensive tests for environment variables set after singleton creation
- Fixed singleton contamination in existing tests by adding proper reset
- Resolves issue #2945 where CREWAI_DISABLE_TELEMETRY=true was ignored when set after import

Co-Authored-By: João <joao@crewai.com>

* Implement code review improvements

- Move _initialized flag to __new__ method for better encapsulation
- Add type hints to _safe_telemetry_operation method
- Consolidate telemetry execution checks into _should_execute_telemetry helper
- Add pytest fixtures to reduce test setup redundancy
- Enhanced documentation for singleton behavior

Co-Authored-By: João <joao@crewai.com>

* Fix mypy type-checker errors

- Add explicit bool type annotation to _initialized field
- Fix return value in task_started method to not return _safe_telemetry_operation result
- Simplify initialization logic to set _initialized once in __init__

Co-Authored-By: João <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: João <joao@crewai.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-06-10 17:38:40 -07:00
Lucas Gomide
739eb72fd0 LiteAgent w/ Guardrail (#2982)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
* feat: add guardrail support for Agents when using direct kickoff calls

* refactor: expose guardrail func in a proper utils file

* fix: resolve Self import on python 3.10
2025-06-10 13:32:32 -04:00
Lucas Gomide
b0d2e9fe31 docs: update Python version requirement from <=3.13 to <3.14 (#2987)
This correctly reflects support for all 3.13.x patch version
2025-06-10 12:44:28 -04:00
38 changed files with 4015 additions and 247 deletions

View File

@@ -295,6 +295,11 @@ multimodal_agent = Agent(
- `"safe"`: Uses Docker (recommended for production)
- `"unsafe"`: Direct execution (use only in trusted environments)
<Note>
This runs a default Docker image. If you want to configure the docker image, the checkout the Code Interpreter Tool in the tools section.
Add the code interpreter tool as a tool in the agent as a tool parameter.
</Note>
#### Advanced Features
- `multimodal`: Enable multimodal capabilities for processing text and visual content
- `reasoning`: Enable agent to reflect and create plans before executing tasks

View File

@@ -233,6 +233,11 @@ CrewAI provides a wide range of events that you can listen for:
- **KnowledgeQueryFailedEvent**: Emitted when a knowledge query fails
- **KnowledgeSearchQueryFailedEvent**: Emitted when a knowledge search query fails
### LLM Guardrail Events
- **LLMGuardrailStartedEvent**: Emitted when a guardrail validation starts. Contains details about the guardrail being applied and retry count.
- **LLMGuardrailCompletedEvent**: Emitted when a guardrail validation completes. Contains details about validation success/failure, results, and error messages if any.
### Flow Events
- **FlowCreatedEvent**: Emitted when a Flow is created

View File

@@ -96,7 +96,7 @@ email_agent = Agent(
role="Email Manager",
goal="Manage and organize email communications",
backstory="An AI assistant specialized in email management and communication.",
tools=[enterprise_tools]
tools=enterprise_tools
)
# Task to send an email

View File

@@ -22,7 +22,7 @@ Watch this video tutorial for a step-by-step demonstration of the installation p
<Note>
**Python Version Requirements**
CrewAI requires `Python >=3.10 and <=3.13`. Here's how to check your version:
CrewAI requires `Python >=3.10 and <3.14`. Here's how to check your version:
```bash
python3 --version
```

View File

@@ -212,7 +212,7 @@ Follow the steps below to get Crewing! 🚣‍♂️
1. Log in to your CrewAI Enterprise account (create a free account at [app.crewai.com](https://app.crewai.com))
2. Open Crew Studio
3. Type what is the automation you're tryign to build
3. Type what is the automation you're trying to build
4. Create your tasks visually and connect them in sequence
5. Configure your inputs and click "Download Code" or "Deploy"

View File

@@ -1,6 +1,6 @@
[project]
name = "crewai"
version = "0.126.0"
version = "0.130.0"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md"
requires-python = ">=3.10,<3.14"
@@ -47,7 +47,7 @@ Documentation = "https://docs.crewai.com"
Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = ["crewai-tools~=0.46.0"]
tools = ["crewai-tools~=0.47.1"]
embeddings = [
"tiktoken~=0.8.0"
]

View File

@@ -18,7 +18,7 @@ warnings.filterwarnings(
category=UserWarning,
module="pydantic.main",
)
__version__ = "0.126.0"
__version__ = "0.130.0"
__all__ = [
"Agent",
"Crew",

View File

@@ -1,6 +1,6 @@
import shutil
import subprocess
from typing import Any, Dict, List, Literal, Optional, Sequence, Type, Union
from typing import Any, Callable, Dict, List, Literal, Optional, Sequence, Tuple, Type, Union
from pydantic import Field, InstanceOf, PrivateAttr, model_validator
@@ -155,6 +155,13 @@ class Agent(BaseAgent):
default=None,
description="The Agent's role to be used from your repository.",
)
guardrail: Optional[Union[Callable[[Any], Tuple[bool, Any]], str]] = Field(
default=None,
description="Function or string description of a guardrail to validate agent output"
)
guardrail_max_retries: int = Field(
default=3, description="Maximum number of retries when guardrail fails"
)
@model_validator(mode="before")
def validate_from_repository(cls, v):
@@ -780,6 +787,8 @@ class Agent(BaseAgent):
response_format=response_format,
i18n=self.i18n,
original_agent=self,
guardrail=self.guardrail,
guardrail_max_retries=self.guardrail_max_retries,
)
return lite_agent.kickoff(messages)

View File

@@ -7,6 +7,7 @@ from crewai.utilities import I18N
from crewai.utilities.converter import ConverterError
from crewai.utilities.evaluators.task_evaluator import TaskEvaluator
from crewai.utilities.printer import Printer
from crewai.utilities.events.event_listener import event_listener
if TYPE_CHECKING:
from crewai.agents.agent_builder.base_agent import BaseAgent
@@ -125,33 +126,38 @@ class CrewAgentExecutorMixin:
def _ask_human_input(self, final_answer: str) -> str:
"""Prompt human input with mode-appropriate messaging."""
self._printer.print(
content=f"\033[1m\033[95m ## Final Result:\033[00m \033[92m{final_answer}\033[00m"
)
# Training mode prompt (single iteration)
if self.crew and getattr(self.crew, "_train", False):
prompt = (
"\n\n=====\n"
"## TRAINING MODE: Provide feedback to improve the agent's performance.\n"
"This will be used to train better versions of the agent.\n"
"Please provide detailed feedback about the result quality and reasoning process.\n"
"=====\n"
)
# Regular human-in-the-loop prompt (multiple iterations)
else:
prompt = (
"\n\n=====\n"
"## HUMAN FEEDBACK: Provide feedback on the Final Result and Agent's actions.\n"
"Please follow these guidelines:\n"
" - If you are happy with the result, simply hit Enter without typing anything.\n"
" - Otherwise, provide specific improvement requests.\n"
" - You can provide multiple rounds of feedback until satisfied.\n"
"=====\n"
event_listener.formatter.pause_live_updates()
try:
self._printer.print(
content=f"\033[1m\033[95m ## Final Result:\033[00m \033[92m{final_answer}\033[00m"
)
self._printer.print(content=prompt, color="bold_yellow")
response = input()
if response.strip() != "":
self._printer.print(content="\nProcessing your feedback...", color="cyan")
return response
# Training mode prompt (single iteration)
if self.crew and getattr(self.crew, "_train", False):
prompt = (
"\n\n=====\n"
"## TRAINING MODE: Provide feedback to improve the agent's performance.\n"
"This will be used to train better versions of the agent.\n"
"Please provide detailed feedback about the result quality and reasoning process.\n"
"=====\n"
)
# Regular human-in-the-loop prompt (multiple iterations)
else:
prompt = (
"\n\n=====\n"
"## HUMAN FEEDBACK: Provide feedback on the Final Result and Agent's actions.\n"
"Please follow these guidelines:\n"
" - If you are happy with the result, simply hit Enter without typing anything.\n"
" - Otherwise, provide specific improvement requests.\n"
" - You can provide multiple rounds of feedback until satisfied.\n"
"=====\n"
)
self._printer.print(content=prompt, color="bold_yellow")
response = input()
if response.strip() != "":
self._printer.print(content="\nProcessing your feedback...", color="cyan")
return response
finally:
event_listener.formatter.resume_live_updates()

View File

@@ -25,12 +25,16 @@ from crewai.utilities.agent_utils import (
has_reached_max_iterations,
is_context_length_exceeded,
process_llm_response,
show_agent_logs,
)
from crewai.utilities.constants import MAX_LLM_RETRY, TRAINING_DATA_FILE
from crewai.utilities.logger import Logger
from crewai.utilities.tool_utils import execute_tool_and_check_finality
from crewai.utilities.training_handler import CrewTrainingHandler
from crewai.utilities.events.agent_events import (
AgentLogsStartedEvent,
AgentLogsExecutionEvent,
)
from crewai.utilities.events.crewai_event_bus import crewai_event_bus
class CrewAgentExecutor(CrewAgentExecutorMixin):
@@ -263,26 +267,32 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
"""Show logs for the start of agent execution."""
if self.agent is None:
raise ValueError("Agent cannot be None")
show_agent_logs(
printer=self._printer,
agent_role=self.agent.role,
task_description=(
getattr(self.task, "description") if self.task else "Not Found"
crewai_event_bus.emit(
self.agent,
AgentLogsStartedEvent(
agent_role=self.agent.role,
task_description=(
getattr(self.task, "description") if self.task else "Not Found"
),
verbose=self.agent.verbose
or (hasattr(self, "crew") and getattr(self.crew, "verbose", False)),
),
verbose=self.agent.verbose
or (hasattr(self, "crew") and getattr(self.crew, "verbose", False)),
)
def _show_logs(self, formatted_answer: Union[AgentAction, AgentFinish]):
"""Show logs for the agent's execution."""
if self.agent is None:
raise ValueError("Agent cannot be None")
show_agent_logs(
printer=self._printer,
agent_role=self.agent.role,
formatted_answer=formatted_answer,
verbose=self.agent.verbose
or (hasattr(self, "crew") and getattr(self.crew, "verbose", False)),
crewai_event_bus.emit(
self.agent,
AgentLogsExecutionEvent(
agent_role=self.agent.role,
formatted_answer=formatted_answer,
verbose=self.agent.verbose
or (hasattr(self, "crew") and getattr(self.crew, "verbose", False)),
),
)
def _summarize_messages(self) -> None:

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]>=0.126.0,<1.0.0"
"crewai[tools]>=0.130.0,<1.0.0"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]>=0.126.0,<1.0.0",
"crewai[tools]>=0.130.0,<1.0.0",
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]>=0.126.0"
"crewai[tools]>=0.130.0"
]
[tool.crewai]

View File

@@ -1,9 +1,33 @@
import asyncio
import inspect
import uuid
from datetime import datetime
from typing import Any, Callable, Dict, List, Optional, Type, Union, cast
from typing import (
Any,
Callable,
Dict,
List,
Optional,
Tuple,
Type,
Union,
cast,
get_args,
get_origin,
)
from pydantic import BaseModel, Field, InstanceOf, PrivateAttr, model_validator
try:
from typing import Self
except ImportError:
from typing_extensions import Self
from pydantic import (
BaseModel,
Field,
InstanceOf,
PrivateAttr,
model_validator,
field_validator,
)
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
@@ -18,6 +42,7 @@ from crewai.llm import LLM
from crewai.tools.base_tool import BaseTool
from crewai.tools.structured_tool import CrewStructuredTool
from crewai.utilities import I18N
from crewai.utilities.guardrail import process_guardrail
from crewai.utilities.agent_utils import (
enforce_rpm_limit,
format_message_for_llm,
@@ -33,10 +58,10 @@ from crewai.utilities.agent_utils import (
parse_tools,
process_llm_response,
render_text_description_and_args,
show_agent_logs,
)
from crewai.utilities.converter import convert_to_model, generate_model_description
from crewai.utilities.converter import generate_model_description
from crewai.utilities.events.agent_events import (
AgentLogsExecutionEvent,
LiteAgentExecutionCompletedEvent,
LiteAgentExecutionErrorEvent,
LiteAgentExecutionStartedEvent,
@@ -146,6 +171,17 @@ class LiteAgent(FlowTrackable, BaseModel):
default=[], description="Callbacks to be used for the agent"
)
# Guardrail Properties
guardrail: Optional[Union[Callable[[LiteAgentOutput], Tuple[bool, Any]], str]] = (
Field(
default=None,
description="Function or string description of a guardrail to validate agent output",
)
)
guardrail_max_retries: int = Field(
default=3, description="Maximum number of retries when guardrail fails"
)
# State and Results
tools_results: List[Dict[str, Any]] = Field(
default=[], description="Results of the tools used by the agent."
@@ -163,6 +199,8 @@ class LiteAgent(FlowTrackable, BaseModel):
_messages: List[Dict[str, str]] = PrivateAttr(default_factory=list)
_iterations: int = PrivateAttr(default=0)
_printer: Printer = PrivateAttr(default_factory=Printer)
_guardrail: Optional[Callable] = PrivateAttr(default=None)
_guardrail_retry_count: int = PrivateAttr(default=0)
@model_validator(mode="after")
def setup_llm(self):
@@ -184,6 +222,61 @@ class LiteAgent(FlowTrackable, BaseModel):
return self
@model_validator(mode="after")
def ensure_guardrail_is_callable(self) -> Self:
if callable(self.guardrail):
self._guardrail = self.guardrail
elif isinstance(self.guardrail, str):
from crewai.tasks.llm_guardrail import LLMGuardrail
assert isinstance(self.llm, LLM)
self._guardrail = LLMGuardrail(description=self.guardrail, llm=self.llm)
return self
@field_validator("guardrail", mode="before")
@classmethod
def validate_guardrail_function(
cls, v: Optional[Union[Callable, str]]
) -> Optional[Union[Callable, str]]:
"""Validate that the guardrail function has the correct signature.
If v is a callable, validate that it has the correct signature.
If v is a string, return it as is.
Args:
v: The guardrail function to validate or a string describing the guardrail task
Returns:
The validated guardrail function or a string describing the guardrail task
"""
if v is None or isinstance(v, str):
return v
# Check function signature
sig = inspect.signature(v)
if len(sig.parameters) != 1:
raise ValueError(
f"Guardrail function must accept exactly 1 parameter (LiteAgentOutput), "
f"but it accepts {len(sig.parameters)}"
)
# Check return annotation if present
if sig.return_annotation is not sig.empty:
if sig.return_annotation == Tuple[bool, Any]:
return v
origin = get_origin(sig.return_annotation)
args = get_args(sig.return_annotation)
if origin is not tuple or len(args) != 2 or args[0] is not bool:
raise ValueError(
"If return type is annotated, it must be Tuple[bool, Any]"
)
return v
@property
def key(self) -> str:
"""Get the unique key for this agent instance."""
@@ -223,54 +316,7 @@ class LiteAgent(FlowTrackable, BaseModel):
# Format messages for the LLM
self._messages = self._format_messages(messages)
# Emit event for agent execution start
crewai_event_bus.emit(
self,
event=LiteAgentExecutionStartedEvent(
agent_info=agent_info,
tools=self._parsed_tools,
messages=messages,
),
)
# Execute the agent using invoke loop
agent_finish = self._invoke_loop()
formatted_result: Optional[BaseModel] = None
if self.response_format:
try:
# Cast to BaseModel to ensure type safety
result = self.response_format.model_validate_json(
agent_finish.output
)
if isinstance(result, BaseModel):
formatted_result = result
except Exception as e:
self._printer.print(
content=f"Failed to parse output into response format: {str(e)}",
color="yellow",
)
# Calculate token usage metrics
usage_metrics = self._token_process.get_summary()
# Create output
output = LiteAgentOutput(
raw=agent_finish.output,
pydantic=formatted_result,
agent_role=self.role,
usage_metrics=usage_metrics.model_dump() if usage_metrics else None,
)
# Emit completion event
crewai_event_bus.emit(
self,
event=LiteAgentExecutionCompletedEvent(
agent_info=agent_info,
output=agent_finish.output,
),
)
return output
return self._execute_core(agent_info=agent_info)
except Exception as e:
self._printer.print(
@@ -288,6 +334,95 @@ class LiteAgent(FlowTrackable, BaseModel):
)
raise e
def _execute_core(self, agent_info: Dict[str, Any]) -> LiteAgentOutput:
# Emit event for agent execution start
crewai_event_bus.emit(
self,
event=LiteAgentExecutionStartedEvent(
agent_info=agent_info,
tools=self._parsed_tools,
messages=self._messages,
),
)
# Execute the agent using invoke loop
agent_finish = self._invoke_loop()
formatted_result: Optional[BaseModel] = None
if self.response_format:
try:
# Cast to BaseModel to ensure type safety
result = self.response_format.model_validate_json(agent_finish.output)
if isinstance(result, BaseModel):
formatted_result = result
except Exception as e:
self._printer.print(
content=f"Failed to parse output into response format: {str(e)}",
color="yellow",
)
# Calculate token usage metrics
usage_metrics = self._token_process.get_summary()
# Create output
output = LiteAgentOutput(
raw=agent_finish.output,
pydantic=formatted_result,
agent_role=self.role,
usage_metrics=usage_metrics.model_dump() if usage_metrics else None,
)
# Process guardrail if set
if self._guardrail is not None:
guardrail_result = process_guardrail(
output=output,
guardrail=self._guardrail,
retry_count=self._guardrail_retry_count,
)
if not guardrail_result.success:
if self._guardrail_retry_count >= self.guardrail_max_retries:
raise Exception(
f"Agent's guardrail failed validation after {self.guardrail_max_retries} retries. "
f"Last error: {guardrail_result.error}"
)
self._guardrail_retry_count += 1
if self.verbose:
self._printer.print(
f"Guardrail failed. Retrying ({self._guardrail_retry_count}/{self.guardrail_max_retries})..."
f"\n{guardrail_result.error}"
)
self._messages.append(
{
"role": "user",
"content": guardrail_result.error
or "Guardrail validation failed",
}
)
return self._execute_core(agent_info=agent_info)
# Apply guardrail result if available
if guardrail_result.result is not None:
if isinstance(guardrail_result.result, str):
output.raw = guardrail_result.result
elif isinstance(guardrail_result.result, BaseModel):
output.pydantic = guardrail_result.result
usage_metrics = self._token_process.get_summary()
output.usage_metrics = usage_metrics.model_dump() if usage_metrics else None
# Emit completion event
crewai_event_bus.emit(
self,
event=LiteAgentExecutionCompletedEvent(
agent_info=agent_info,
output=agent_finish.output,
),
)
return output
async def kickoff_async(
self, messages: Union[str, List[Dict[str, str]]]
) -> LiteAgentOutput:
@@ -467,11 +602,13 @@ class LiteAgent(FlowTrackable, BaseModel):
def _show_logs(self, formatted_answer: Union[AgentAction, AgentFinish]):
"""Show logs for the agent's execution."""
show_agent_logs(
printer=self._printer,
agent_role=self.role,
formatted_answer=formatted_answer,
verbose=self.verbose,
crewai_event_bus.emit(
self,
AgentLogsExecutionEvent(
agent_role=self.role,
formatted_answer=formatted_answer,
verbose=self.verbose,
),
)
def _append_message(self, text: str, role: str = "assistant") -> None:

View File

@@ -4,6 +4,9 @@ from typing import Any, Dict, List
from mem0 import Memory, MemoryClient
from crewai.memory.storage.interface import Storage
from crewai.utilities.chromadb import sanitize_collection_name
MAX_AGENT_ID_LENGTH_MEM0 = 255
class Mem0Storage(Storage):
@@ -134,7 +137,7 @@ class Mem0Storage(Storage):
agents = self.crew.agents
agents = [self._sanitize_role(agent.role) for agent in agents]
agents = "_".join(agents)
return agents
return sanitize_collection_name(name=agents,max_collection_length=MAX_AGENT_ID_LENGTH_MEM0)
def _get_config(self) -> Dict[str, Any]:
return self.config or getattr(self, "memory_config", {}).get("config", {}) or {}

View File

@@ -35,12 +35,12 @@ from pydantic_core import PydanticCustomError
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.security import Fingerprint, SecurityConfig
from crewai.tasks.guardrail_result import GuardrailResult
from crewai.tasks.output_format import OutputFormat
from crewai.tasks.task_output import TaskOutput
from crewai.tools.base_tool import BaseTool
from crewai.utilities.config import process_config
from crewai.utilities.constants import NOT_SPECIFIED
from crewai.utilities.guardrail import process_guardrail, GuardrailResult
from crewai.utilities.converter import Converter, convert_to_model
from crewai.utilities.events import (
TaskCompletedEvent,
@@ -431,7 +431,11 @@ class Task(BaseModel):
)
if self._guardrail:
guardrail_result = self._process_guardrail(task_output)
guardrail_result = process_guardrail(
output=task_output,
guardrail=self._guardrail,
retry_count=self.retry_count
)
if not guardrail_result.success:
if self.retry_count >= self.max_retries:
raise Exception(
@@ -527,10 +531,10 @@ class Task(BaseModel):
def prompt(self) -> str:
"""Generates the task prompt with optional markdown formatting.
When the markdown attribute is True, instructions for formatting the
response in Markdown syntax will be added to the prompt.
Returns:
str: The formatted prompt string containing the task description,
expected output, and optional markdown formatting instructions.
@@ -541,7 +545,7 @@ class Task(BaseModel):
expected_output=self.expected_output
)
tasks_slices = [self.description, output]
if self.markdown:
markdown_instruction = """Your final answer MUST be formatted in Markdown syntax.
Follow these guidelines:

View File

@@ -8,7 +8,7 @@ import platform
import warnings
from contextlib import contextmanager
from importlib.metadata import version
from typing import TYPE_CHECKING, Any, Optional
from typing import TYPE_CHECKING, Any, Callable, Optional
import threading
from opentelemetry import trace
@@ -73,11 +73,16 @@ class Telemetry:
with cls._lock:
if cls._instance is None:
cls._instance = super(Telemetry, cls).__new__(cls)
cls._instance._initialized = False
return cls._instance
def __init__(self) -> None:
if hasattr(self, '_initialized') and self._initialized:
return
self.ready: bool = False
self.trace_set: bool = False
self._initialized: bool = True
if self._is_telemetry_disabled():
return
@@ -113,6 +118,10 @@ class Telemetry:
or os.getenv("CREWAI_DISABLE_TELEMETRY", "false").lower() == "true"
)
def _should_execute_telemetry(self) -> bool:
"""Check if telemetry operations should be executed."""
return self.ready and not self._is_telemetry_disabled()
def set_tracer(self):
if self.ready and not self.trace_set:
try:
@@ -123,8 +132,9 @@ class Telemetry:
self.ready = False
self.trace_set = False
def _safe_telemetry_operation(self, operation):
if not self.ready:
def _safe_telemetry_operation(self, operation: Callable[[], None]) -> None:
"""Execute telemetry operation safely, checking both readiness and environment variables."""
if not self._should_execute_telemetry():
return
try:
operation()
@@ -423,7 +433,8 @@ class Telemetry:
return span
return self._safe_telemetry_operation(operation)
self._safe_telemetry_operation(operation)
return None
def task_ended(self, span: Span, task: Task, crew: Crew):
"""Records the completion of a task execution in a crew.
@@ -773,7 +784,8 @@ class Telemetry:
return span
if crew.share_crew:
return self._safe_telemetry_operation(operation)
self._safe_telemetry_operation(operation)
return operation()
return None
def end_crew(self, crew, final_string_output):

View File

@@ -64,7 +64,7 @@ class BaseTool(BaseModel, ABC):
},
},
)
@field_validator("max_usage_count", mode="before")
@classmethod
def validate_max_usage_count(cls, v: int | None) -> int | None:
@@ -88,11 +88,11 @@ class BaseTool(BaseModel, ABC):
# If _run is async, we safely run it
if asyncio.iscoroutine(result):
result = asyncio.run(result)
self.current_usage_count += 1
return result
def reset_usage_count(self) -> None:
"""Reset the current usage count to zero."""
self.current_usage_count = 0
@@ -279,7 +279,7 @@ def to_langchain(
def tool(*args, result_as_answer: bool = False, max_usage_count: int | None = None) -> Callable:
"""
Decorator to create a tool from a function.
Args:
*args: Positional arguments, either the function to decorate or the tool name.
result_as_answer: Flag to indicate if the tool result should be used as the final agent answer.

View File

@@ -23,7 +23,7 @@ def is_ipv4_pattern(name: str) -> bool:
return bool(IPV4_PATTERN.match(name))
def sanitize_collection_name(name: Optional[str]) -> str:
def sanitize_collection_name(name: Optional[str], max_collection_length: int = MAX_COLLECTION_LENGTH) -> str:
"""
Sanitize a collection name to meet ChromaDB requirements:
1. 3-63 characters long
@@ -54,8 +54,8 @@ def sanitize_collection_name(name: Optional[str]) -> str:
if len(sanitized) < MIN_COLLECTION_LENGTH:
sanitized = sanitized + "x" * (MIN_COLLECTION_LENGTH - len(sanitized))
if len(sanitized) > MAX_COLLECTION_LENGTH:
sanitized = sanitized[:MAX_COLLECTION_LENGTH]
if len(sanitized) > max_collection_length:
sanitized = sanitized[:max_collection_length]
if not sanitized[-1].isalnum():
sanitized = sanitized[:-1] + "z"

View File

@@ -102,3 +102,24 @@ class LiteAgentExecutionErrorEvent(BaseEvent):
agent_info: Dict[str, Any]
error: str
type: str = "lite_agent_execution_error"
# New logging events
class AgentLogsStartedEvent(BaseEvent):
"""Event emitted when agent logs should be shown at start"""
agent_role: str
task_description: Optional[str] = None
verbose: bool = False
type: str = "agent_logs_started"
class AgentLogsExecutionEvent(BaseEvent):
"""Event emitted when agent logs should be shown during execution"""
agent_role: str
formatted_answer: Any
verbose: bool = False
type: str = "agent_logs_execution"
model_config = {"arbitrary_types_allowed": True}

View File

@@ -27,6 +27,8 @@ from crewai.utilities.events.utils.console_formatter import ConsoleFormatter
from .agent_events import (
AgentExecutionCompletedEvent,
AgentExecutionStartedEvent,
AgentLogsStartedEvent,
AgentLogsExecutionEvent,
LiteAgentExecutionCompletedEvent,
LiteAgentExecutionErrorEvent,
LiteAgentExecutionStartedEvent,
@@ -108,6 +110,7 @@ class EventListener(BaseEventListener):
event.crew_name or "Crew",
source.id,
"completed",
final_string_output,
)
@crewai_event_bus.on(CrewKickoffFailedEvent)
@@ -286,13 +289,14 @@ class EventListener(BaseEventListener):
if isinstance(source, LLM):
self.formatter.handle_llm_tool_usage_started(
event.tool_name,
event.tool_args,
)
else:
self.formatter.handle_tool_usage_started(
self.formatter.current_agent_branch,
event.tool_name,
self.formatter.current_crew_tree,
)
self.formatter.current_crew_tree,
)
@crewai_event_bus.on(ToolUsageFinishedEvent)
def on_tool_usage_finished(source, event: ToolUsageFinishedEvent):
@@ -320,16 +324,20 @@ class EventListener(BaseEventListener):
event.tool_name,
event.error,
self.formatter.current_crew_tree,
)
)
# ----------- LLM EVENTS -----------
@crewai_event_bus.on(LLMCallStartedEvent)
def on_llm_call_started(source, event: LLMCallStartedEvent):
self.formatter.handle_llm_call_started(
# Capture the returned tool branch and update the current_tool_branch reference
thinking_branch = self.formatter.handle_llm_call_started(
self.formatter.current_agent_branch,
self.formatter.current_crew_tree,
)
# Update the formatter's current_tool_branch to ensure proper cleanup
if thinking_branch is not None:
self.formatter.current_tool_branch = thinking_branch
@crewai_event_bus.on(LLMCallCompletedEvent)
def on_llm_call_completed(source, event: LLMCallCompletedEvent):
@@ -462,5 +470,23 @@ class EventListener(BaseEventListener):
self.formatter.current_crew_tree,
)
# ----------- AGENT LOGGING EVENTS -----------
@crewai_event_bus.on(AgentLogsStartedEvent)
def on_agent_logs_started(source, event: AgentLogsStartedEvent):
self.formatter.handle_agent_logs_started(
event.agent_role,
event.task_description,
event.verbose,
)
@crewai_event_bus.on(AgentLogsExecutionEvent)
def on_agent_logs_execution(source, event: AgentLogsExecutionEvent):
self.formatter.handle_agent_logs_execution(
event.agent_role,
event.formatted_answer,
event.verbose,
)
event_listener = EventListener()

View File

@@ -5,6 +5,7 @@ from rich.panel import Panel
from rich.text import Text
from rich.tree import Tree
from rich.live import Live
from rich.syntax import Syntax
class ConsoleFormatter:
@@ -17,6 +18,7 @@ class ConsoleFormatter:
current_lite_agent_branch: Optional[Tree] = None
tool_usage_counts: Dict[str, int] = {}
current_reasoning_branch: Optional[Tree] = None # Track reasoning status
_live_paused: bool = False
current_llm_tool_tree: Optional[Tree] = None
def __init__(self, verbose: bool = False):
@@ -39,7 +41,12 @@ class ConsoleFormatter:
)
def create_status_content(
self, title: str, name: str, status_style: str = "blue", **fields
self,
title: str,
name: str,
status_style: str = "blue",
tool_args: Dict[str, Any] | str = "",
**fields,
) -> Text:
"""Create standardized status content with consistent formatting."""
content = Text()
@@ -52,6 +59,8 @@ class ConsoleFormatter:
content.append(
f"{value}\n", style=fields.get(f"{label}_style", status_style)
)
content.append("Tool Args: ", style="white")
content.append(f"{tool_args}\n", style=status_style)
return content
@@ -119,6 +128,19 @@ class ConsoleFormatter:
# Finally, pass through to the regular Console.print implementation
self.console.print(*args, **kwargs)
def pause_live_updates(self) -> None:
"""Pause Live session updates to allow for human input without interference."""
if not self._live_paused:
if self._live:
self._live.stop()
self._live = None
self._live_paused = True
def resume_live_updates(self) -> None:
"""Resume Live session updates after human input is complete."""
if self._live_paused:
self._live_paused = False
def print_panel(
self, content: Text, title: str, style: str = "blue", is_flow: bool = False
) -> None:
@@ -138,6 +160,7 @@ class ConsoleFormatter:
crew_name: str,
source_id: str,
status: str = "completed",
final_string_output: str = "",
) -> None:
"""Handle crew tree updates with consistent formatting."""
if not self.verbose or tree is None:
@@ -169,6 +192,7 @@ class ConsoleFormatter:
style,
ID=source_id,
)
content.append(f"Final Output: {final_string_output}\n", style="white")
self.print_panel(content, title, style)
@@ -441,12 +465,19 @@ class ConsoleFormatter:
def handle_llm_tool_usage_started(
self,
tool_name: str,
tool_args: Dict[str, Any] | str,
):
tree = self.get_llm_tree(tool_name)
self.add_tree_node(tree, "🔄 Tool Usage Started", "green")
self.print(tree)
# Create status content for the tool usage
content = self.create_status_content(
"Tool Usage Started", tool_name, Status="In Progress", tool_args=tool_args
)
# Create and print the panel
self.print_panel(content, "Tool Usage", "green")
self.print()
return tree
# Still return the tree for compatibility with existing code
return self.get_llm_tree(tool_name)
def handle_llm_tool_usage_finished(
self,
@@ -477,6 +508,7 @@ class ConsoleFormatter:
agent_branch: Optional[Tree],
tool_name: str,
crew_tree: Optional[Tree],
tool_args: Dict[str, Any] | str = "",
) -> Optional[Tree]:
"""Handle tool usage started event."""
if not self.verbose:
@@ -484,9 +516,7 @@ class ConsoleFormatter:
# Parent for tool usage: LiteAgent > Agent > Task
branch_to_use = (
self.current_lite_agent_branch
or agent_branch
or self.current_task_branch
self.current_lite_agent_branch or agent_branch or self.current_task_branch
)
# Render full crew tree when available for consistent live updates
@@ -595,9 +625,7 @@ class ConsoleFormatter:
# Parent for tool usage: LiteAgent > Agent > Task
branch_to_use = (
self.current_lite_agent_branch
or agent_branch
or self.current_task_branch
self.current_lite_agent_branch or agent_branch or self.current_task_branch
)
# Render full crew tree when available for consistent live updates
@@ -611,14 +639,21 @@ class ConsoleFormatter:
return None
# Only add thinking status if we don't have a current tool branch
if self.current_tool_branch is None:
# or if the current tool branch is not a thinking node
should_add_thinking = self.current_tool_branch is None or "Thinking" not in str(
self.current_tool_branch.label
)
if should_add_thinking:
tool_branch = branch_to_use.add("")
self.update_tree_label(tool_branch, "🧠", "Thinking...", "blue")
self.current_tool_branch = tool_branch
self.print(tree_to_use)
self.print()
return tool_branch
return None
# Return the existing tool branch if it's already a thinking node
return self.current_tool_branch
def handle_llm_call_completed(
self,
@@ -627,7 +662,7 @@ class ConsoleFormatter:
crew_tree: Optional[Tree],
) -> None:
"""Handle LLM call completed event."""
if not self.verbose or tool_branch is None:
if not self.verbose:
return
# Decide which tree to render: prefer full crew tree, else parent branch
@@ -635,23 +670,50 @@ class ConsoleFormatter:
if tree_to_use is None:
return
# Remove the thinking status node when complete
if "Thinking" in str(tool_branch.label):
# Try to remove the thinking status node - first try the provided tool_branch
thinking_branch_to_remove = None
removed = False
# Method 1: Use the provided tool_branch if it's a thinking node
if tool_branch is not None and "Thinking" in str(tool_branch.label):
thinking_branch_to_remove = tool_branch
# Method 2: Fallback - search for any thinking node if tool_branch is None or not thinking
if thinking_branch_to_remove is None:
parents = [
self.current_lite_agent_branch,
self.current_agent_branch,
self.current_task_branch,
tree_to_use,
]
removed = False
for parent in parents:
if isinstance(parent, Tree) and tool_branch in parent.children:
parent.children.remove(tool_branch)
if isinstance(parent, Tree):
for child in parent.children:
if "Thinking" in str(child.label):
thinking_branch_to_remove = child
break
if thinking_branch_to_remove:
break
# Remove the thinking node if found
if thinking_branch_to_remove:
parents = [
self.current_lite_agent_branch,
self.current_agent_branch,
self.current_task_branch,
tree_to_use,
]
for parent in parents:
if (
isinstance(parent, Tree)
and thinking_branch_to_remove in parent.children
):
parent.children.remove(thinking_branch_to_remove)
removed = True
break
# Clear pointer if we just removed the current_tool_branch
if self.current_tool_branch is tool_branch:
if self.current_tool_branch is thinking_branch_to_remove:
self.current_tool_branch = None
if removed:
@@ -668,9 +730,36 @@ class ConsoleFormatter:
# Decide which tree to render: prefer full crew tree, else parent branch
tree_to_use = self.current_crew_tree or crew_tree or self.current_task_branch
# Update tool branch if it exists
if tool_branch:
tool_branch.label = Text("❌ LLM Failed", style="red bold")
# Find the thinking branch to update (similar to completion logic)
thinking_branch_to_update = None
# Method 1: Use the provided tool_branch if it's a thinking node
if tool_branch is not None and "Thinking" in str(tool_branch.label):
thinking_branch_to_update = tool_branch
# Method 2: Fallback - search for any thinking node if tool_branch is None or not thinking
if thinking_branch_to_update is None:
parents = [
self.current_lite_agent_branch,
self.current_agent_branch,
self.current_task_branch,
tree_to_use,
]
for parent in parents:
if isinstance(parent, Tree):
for child in parent.children:
if "Thinking" in str(child.label):
thinking_branch_to_update = child
break
if thinking_branch_to_update:
break
# Update the thinking branch to show failure
if thinking_branch_to_update:
thinking_branch_to_update.label = Text("❌ LLM Failed", style="red bold")
# Clear the current_tool_branch reference
if self.current_tool_branch is thinking_branch_to_update:
self.current_tool_branch = None
if tree_to_use:
self.print(tree_to_use)
self.print()
@@ -1113,9 +1202,7 @@ class ConsoleFormatter:
# Prefer LiteAgent > Agent > Task branch as the parent for reasoning
branch_to_use = (
self.current_lite_agent_branch
or agent_branch
or self.current_task_branch
self.current_lite_agent_branch or agent_branch or self.current_task_branch
)
# We always want to render the full crew tree when possible so the
@@ -1162,7 +1249,9 @@ class ConsoleFormatter:
)
style = "green" if ready else "yellow"
status_text = "Reasoning Completed" if ready else "Reasoning Completed (Not Ready)"
status_text = (
"Reasoning Completed" if ready else "Reasoning Completed (Not Ready)"
)
if reasoning_branch is not None:
self.update_tree_label(reasoning_branch, "", status_text, style)
@@ -1219,3 +1308,149 @@ class ConsoleFormatter:
# Clear stored branch after failure
self.current_reasoning_branch = None
# ----------- AGENT LOGGING EVENTS -----------
def handle_agent_logs_started(
self,
agent_role: str,
task_description: Optional[str] = None,
verbose: bool = False,
) -> None:
"""Handle agent logs started event."""
if not verbose:
return
agent_role = agent_role.split("\n")[0]
# Create panel content
content = Text()
content.append("Agent: ", style="white")
content.append(f"{agent_role}", style="bright_green bold")
if task_description:
content.append("\n\nTask: ", style="white")
content.append(f"{task_description}", style="bright_green")
# Create and display the panel
agent_panel = Panel(
content,
title="🤖 Agent Started",
border_style="magenta",
padding=(1, 2),
)
self.print(agent_panel)
self.print()
def handle_agent_logs_execution(
self,
agent_role: str,
formatted_answer: Any,
verbose: bool = False,
) -> None:
"""Handle agent logs execution event."""
if not verbose:
return
from crewai.agents.parser import AgentAction, AgentFinish
import json
import re
agent_role = agent_role.split("\n")[0]
if isinstance(formatted_answer, AgentAction):
thought = re.sub(r"\n+", "\n", formatted_answer.thought)
formatted_json = json.dumps(
formatted_answer.tool_input,
indent=2,
ensure_ascii=False,
)
# Create content for the action panel
content = Text()
content.append("Agent: ", style="white")
content.append(f"{agent_role}\n\n", style="bright_green bold")
if thought and thought != "":
content.append("Thought: ", style="white")
content.append(f"{thought}\n\n", style="bright_green")
content.append("Using Tool: ", style="white")
content.append(f"{formatted_answer.tool}\n\n", style="bright_green bold")
content.append("Tool Input:\n", style="white")
# Create a syntax-highlighted JSON code block
json_syntax = Syntax(
formatted_json,
"json",
theme="monokai",
line_numbers=False,
background_color="default",
)
content.append("\n")
# Create separate panels for better organization
main_content = Text()
main_content.append("Agent: ", style="white")
main_content.append(f"{agent_role}\n\n", style="bright_green bold")
if thought and thought != "":
main_content.append("Thought: ", style="white")
main_content.append(f"{thought}\n\n", style="bright_green")
main_content.append("Using Tool: ", style="white")
main_content.append(f"{formatted_answer.tool}", style="bright_green bold")
# Create the main action panel
action_panel = Panel(
main_content,
title="🔧 Agent Tool Execution",
border_style="magenta",
padding=(1, 2),
)
# Create the JSON input panel
input_panel = Panel(
json_syntax,
title="Tool Input",
border_style="blue",
padding=(1, 2),
)
# Create tool output content with better formatting
output_text = str(formatted_answer.result)
if len(output_text) > 2000:
output_text = output_text[:1997] + "..."
output_panel = Panel(
Text(output_text, style="bright_green"),
title="Tool Output",
border_style="green",
padding=(1, 2),
)
# Print all panels
self.print(action_panel)
self.print(input_panel)
self.print(output_panel)
self.print()
elif isinstance(formatted_answer, AgentFinish):
# Create content for the finish panel
content = Text()
content.append("Agent: ", style="white")
content.append(f"{agent_role}\n\n", style="bright_green bold")
content.append("Final Answer:\n", style="white")
content.append(f"{formatted_answer.output}", style="bright_green")
# Create and display the finish panel
finish_panel = Panel(
content,
title="✅ Agent Final Answer",
border_style="green",
padding=(1, 2),
)
self.print(finish_panel)
self.print()

View File

@@ -1,15 +1,7 @@
"""
Module for handling task guardrail validation results.
This module provides the GuardrailResult class which standardizes
the way task guardrails return their validation results.
"""
from typing import Any, Optional, Tuple, Union
from typing import Any, Callable, Optional, Tuple, Union
from pydantic import BaseModel, field_validator
class GuardrailResult(BaseModel):
"""Result from a task guardrail execution.
@@ -54,3 +46,48 @@ class GuardrailResult(BaseModel):
result=data if success else None,
error=data if not success else None
)
def process_guardrail(output: Any, guardrail: Callable, retry_count: int) -> GuardrailResult:
"""Process the guardrail for the agent output.
Args:
output: The output to validate with the guardrail
Returns:
GuardrailResult: The result of the guardrail validation
"""
from crewai.task import TaskOutput
from crewai.lite_agent import LiteAgentOutput
assert isinstance(output, TaskOutput) or isinstance(output, LiteAgentOutput), "Output must be a TaskOutput or LiteAgentOutput"
assert guardrail is not None
from crewai.utilities.events import (
LLMGuardrailCompletedEvent,
LLMGuardrailStartedEvent,
)
from crewai.utilities.events.crewai_event_bus import crewai_event_bus
crewai_event_bus.emit(
None,
LLMGuardrailStartedEvent(
guardrail=guardrail, retry_count=retry_count
),
)
result = guardrail(output)
guardrail_result = GuardrailResult.from_tuple(result)
crewai_event_bus.emit(
None,
LLMGuardrailCompletedEvent(
success=guardrail_result.success,
result=guardrail_result.result,
error=guardrail_result.error,
retry_count=retry_count,
),
)
return guardrail_result

View File

@@ -501,8 +501,7 @@ def test_agent_custom_max_iterations():
def test_agent_repeated_tool_usage(capsys):
@tool
def get_final_answer() -> float:
"""Get the final answer but don't give it yet, just re-use this
tool non-stop."""
"""Get the final answer but don't give it yet, just re-use this tool non-stop."""
return 42
agent = Agent(
@@ -527,12 +526,42 @@ def test_agent_repeated_tool_usage(capsys):
)
captured = capsys.readouterr()
assert (
"I tried reusing the same input, I must stop using this action input. I'll try something else instead."
in captured.out
output = (
captured.out.replace("\n", " ")
.replace(" ", " ")
.strip()
.replace("", "")
.replace("", "")
.replace("", "")
.replace("", "")
.replace("", "")
.replace("", "")
.replace("[", "")
.replace("]", "")
.replace("bold", "")
.replace("blue", "")
.replace("yellow", "")
.replace("green", "")
.replace("red", "")
.replace("dim", "")
.replace("🤖", "")
.replace("🔧", "")
.replace("", "")
.replace("\x1b[93m", "")
.replace("\x1b[00m", "")
.replace("\\", "")
.replace('"', "")
.replace("'", "")
)
# Look for the message in the normalized output, handling the apostrophe difference
expected_message = (
"I tried reusing the same input, I must stop using this action input."
)
assert (
expected_message in output
), f"Expected message not found in output. Output was: {output}"
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_repeated_tool_usage_check_even_with_disabled_cache(capsys):
@@ -564,11 +593,43 @@ def test_agent_repeated_tool_usage_check_even_with_disabled_cache(capsys):
)
captured = capsys.readouterr()
assert (
"I tried reusing the same input, I must stop using this action input. I'll try something else instead."
in captured.out
output = (
captured.out.replace("\n", " ")
.replace(" ", " ")
.strip()
.replace("", "")
.replace("", "")
.replace("", "")
.replace("", "")
.replace("", "")
.replace("", "")
.replace("[", "")
.replace("]", "")
.replace("bold", "")
.replace("blue", "")
.replace("yellow", "")
.replace("green", "")
.replace("red", "")
.replace("dim", "")
.replace("🤖", "")
.replace("🔧", "")
.replace("", "")
.replace("\x1b[93m", "")
.replace("\x1b[00m", "")
.replace("\\", "")
.replace('"', "")
.replace("'", "")
)
# Look for the message in the normalized output, handling the apostrophe difference
expected_message = (
"I tried reusing the same input, I must stop using this action input"
)
assert (
expected_message in output
), f"Expected message not found in output. Output was: {output}"
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_moved_on_after_max_iterations():
@@ -2092,7 +2153,12 @@ def test_agent_from_repository_with_invalid_tools(mock_get_agent, mock_get_auth_
"role": "test role",
"goal": "test goal",
"backstory": "test backstory",
"tools": [{"name": "DoesNotExist", "module": "crewai_tools",}],
"tools": [
{
"name": "DoesNotExist",
"module": "crewai_tools",
}
],
}
mock_get_agent.return_value = mock_get_response
with pytest.raises(
@@ -2131,7 +2197,9 @@ def test_agent_from_repository_agent_not_found(mock_get_agent, mock_get_auth_tok
@patch("crewai.cli.plus_api.PlusAPI.get_agent")
@patch("crewai.utilities.agent_utils.Settings")
@patch("crewai.utilities.agent_utils.console")
def test_agent_from_repository_displays_org_info(mock_console, mock_settings, mock_get_agent, mock_get_auth_token):
def test_agent_from_repository_displays_org_info(
mock_console, mock_settings, mock_get_agent, mock_get_auth_token
):
mock_settings_instance = MagicMock()
mock_settings_instance.org_uuid = "test-org-uuid"
mock_settings_instance.org_name = "Test Organization"
@@ -2143,7 +2211,7 @@ def test_agent_from_repository_displays_org_info(mock_console, mock_settings, mo
"role": "test role",
"goal": "test goal",
"backstory": "test backstory",
"tools": []
"tools": [],
}
mock_get_agent.return_value = mock_get_response
@@ -2151,7 +2219,7 @@ def test_agent_from_repository_displays_org_info(mock_console, mock_settings, mo
mock_console.print.assert_any_call(
"Fetching agent from organization: Test Organization (test-org-uuid)",
style="bold blue"
style="bold blue",
)
assert agent.role == "test role"
@@ -2162,7 +2230,9 @@ def test_agent_from_repository_displays_org_info(mock_console, mock_settings, mo
@patch("crewai.cli.plus_api.PlusAPI.get_agent")
@patch("crewai.utilities.agent_utils.Settings")
@patch("crewai.utilities.agent_utils.console")
def test_agent_from_repository_without_org_set(mock_console, mock_settings, mock_get_agent, mock_get_auth_token):
def test_agent_from_repository_without_org_set(
mock_console, mock_settings, mock_get_agent, mock_get_auth_token
):
mock_settings_instance = MagicMock()
mock_settings_instance.org_uuid = None
mock_settings_instance.org_name = None
@@ -2175,11 +2245,11 @@ def test_agent_from_repository_without_org_set(mock_console, mock_settings, mock
with pytest.raises(
AgentRepositoryError,
match="Agent test_agent could not be loaded: Unauthorized access"
match="Agent test_agent could not be loaded: Unauthorized access",
):
Agent(from_repository="test_agent")
mock_console.print.assert_any_call(
"No organization currently set. We recommend setting one before using: `crewai org switch <org_id>` command.",
style="yellow"
style="yellow",
)

View File

@@ -0,0 +1,137 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Sports Analyst. You
are an expert at gathering and organizing information. You carefully collect
details and present them in a structured way.\nYour personal goal is: Gather
information about the best soccer players\n\nTo give my best complete final
answer to the task respond using the exact following format:\n\nThought: I now
can give a great answer\nFinal Answer: Your final answer must be the great and
the most complete as possible, it must be outcome described.\n\nI MUST use these
formats, my job depends on it!"}, {"role": "user", "content": "Top 10 best players
in the world?"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '694'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.78.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.78.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//nFfNchtHDr7rKVBz0a6KVJGUZMm6SVrJcSw6Ktmb7NY6pQJ7wBlEPd1T
6B5S3JTP+yw55AVy9T7YFnr4Jy7pRLmwioP+wfcB+Br4eQ8g4zw7h8yUGE1V2+5l0dh/xOvXo5MT
ufDvJk//lNvJm+9HvR9+errPOrrDj34iExe7Do2vakuRvWvNRggj6an90+PXJ69OX50dJUPlc7K6
rahj99h3K3bcHfQGx93eabd/Nt9dejYUsnP41x4AwM/pV/10OT1l59DrLL5UFAIWlJ0vFwFk4q1+
yTAEDhFdzDoro/Eukkuufyx9U5TxHN6C81Mw6KDgCQFCof4DujAlAfjkbtihhYv0/xw+lgRjb62f
siuAAyCEKI2JjVAOfkIyYZqCH0MsCUwjQi5C9DX0exC8MSRQW5yRBGCXFk292BxGGPSA9IkFapKx
lwqdodABNCXThCpyUf+59ia0Friq0cT5PiiwIsCg139noh+RwKA3ODr/5D65/iEcHNyyd2RhSCHw
wQH85a2LJDBkrPivnxwAdOHg4M4H1ngeHJzDjZcpSr60XfnGRZmp6UIKcpEdLo0Xa27qilO4RGu9
g3z/OwHUg0IHqsZGri3B369vLuCqxKpm7wLcEhYNQeRoFbMlzJXj5TUQvaLpw5WvES6qL78IG0xs
DHqDAdy8vbmAHxKZV00NEzbRy+xQsQ8U+7uZZXQwHGFdf/lF0d+hcIAPyC5235BUyO7FLNyIxmgn
BRtOTdk5Eo38oNc/64BrKhLfBLhlxd5fon90fupg7AVKDhBqorwDufBoZNkVbQ4UHm03GC9KUy1+
SiEkuEcK91p0JXyDaNHlCneIzpQUNOJXHGcvhvpeTbPdUDFECnGe3hotITTlCqP6m7J+a+A7aaO6
jGCkMYwWtJp1w4bn+wGi0MhSV/nULYEweNfyOhioqhwlJo5T4GnCDv5GcCnNzNEfpmLI+ZjJ5iTb
2LgkW3BT7aTjHc0SogofSVIkNy5dq4Q7oYpJNktAg/w8EejJUK3+oYUJB/YuLapV7pS5EVuObc6f
KPQr4RAZnYd73ZN7BX9hu+8xBHlxAtx5iU2Bdifmk60Fj9Z2I1e0LGlNBNDEbUtBlWtHSszrxY9X
XNl1jnT7tSs0wzvwoUZ2LWtvI9qWhldKw3uaVSjwrRzO8X/DFu2L8V8K/pt3o3/3LFRjiyzJmfDI
1nagJCgxwNS7jWpvQ6hVkwPChONa5rdX7gdwOA97JKwgNMZQCB1gZ2yTSF2UgrI5V0hSgUwsnCoL
9/ogRLilKbrcT8NjegIuUQxZ7/BPpIPyvpOOe1I+KE+MPNOqeZp2Ehfqb1LJSxWPIbn9AHethKQE
mhd1byH0/Q7oOy48aqIeVhJO2M5Uby51l4Nh49iU+2HBEgUY0dgLQeUniSJdOked6DlLb2PziDD0
ufB//6PE3BNaGGIunP8ZfbgSj5F3P476ADwrlzbXNaTaUeg6tAp+zY/9sOW9FG6qumyzaFFh88sV
qfI71h4mLLqSdPPyTUoEvFYChr7EinL4gBZLZeCWJyS19y+vlOtiVsfflca5Li6v0Rqx9TyJKyUk
+buhjorz662DrljqxRtvc3Jw6X2cKxJsPTfx0O8pEd+z+/Kr4SbAt19+c+xlazp8lY+vSMf2YuEk
4CGibEg+FqlY1pVkqRU1T/y6WvxOqsxbogV+LaZuap3a5zMx8LGkQMsWtcQJablpM00u2hnkZDVc
lAM9RUEvOTuU2bOGddGOpupIjpfe5hC4cDxmgy4Cu7FtyBmCKcfyWSfsx/NGeaPQ21xmSQoY9teq
OzVDKI6SurDLecJ5gxbQGG8xp3C4PgYIjZuAOoq4xto1AzrnY9LZNID8OLd8Xo4c1he1+FHY2JqN
2XEoHyTRqONFiL7OkvXzHsCPabRpnk0rWS2+quND9I+UrusPBu152WqiWllPjxbWqBFfGc5Ojjtb
DnzIKSLbsDYdZQZNSflq62qUwiZnv2bYW4P9/+5sO7uFzq74I8evDEbbGcofaqGczXPIq2VCOnHu
WrakOTmcBR3BDD1EJtFQ5DTGxrZzYBZmIVL1MGZXkNTC7TA4rh9eDXBwhGd9Gmd7n/f+BwAA//8D
AMMI9CsaDwAA
headers:
CF-RAY:
- 94d9be627c40f260-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 10 Jun 2025 15:02:05 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=qYkxv9nLxeWAtPBvECxNw8fLnoBHLorJdRI8.xVEVEA-1749567725-1.0.1.1-75sp4gwHGJocK1MFkSgRcB4xJUiCwz31VRD4LAmQGEmfYB0BMQZ5sgWS8e_UMbjCaEhaPNO88q5XdbLOCWA85_rO0vYTb4hp6tmIiaerhsM;
path=/; expires=Tue, 10-Jun-25 15:32:05 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=HRKCwkyTqSXpCj9_i_T5lDtlr_INA290o0b3k.26oi8-1749567725794-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '42674'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '42684'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999859'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_d92e6f33fa5e0fbe43349afee8f55921
status:
code: 200
message: OK
version: 1

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,130 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Sports Analyst. You
are an expert at gathering and organizing information. You carefully collect
details and present them in a structured way.\nYour personal goal is: Gather
information about the best soccer players\n\nTo give my best complete final
answer to the task respond using the exact following format:\n\nThought: I now
can give a great answer\nFinal Answer: Your final answer must be the great and
the most complete as possible, it must be outcome described.\n\nI MUST use these
formats, my job depends on it!"}, {"role": "user", "content": "Top 1 best players
in the world?"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '693'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.78.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.78.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//TFPRbttGEHzPVwz04taQBFt1nFZvsgsnRlvYqI0aafOyPK7IjY97xO1R
CpMv6nf0x4qlZLcvBHh3Mzs7s/vtDTCTerbGLLRUQtfHxVUzPF6udsaXHx/Cn7//8fG8e9rf/XxF
X+vqaTZ3RKo+cygvqGVIXR+5SNLDdchMhZ31/N3FT28vL99enE0XXao5Oqzpy+IiLTpRWazOVheL
s3eL8x+P6DZJYJut8dcbAPg2fV2n1vxltsbENZ10bEYNz9avj4BZTtFPZmQmVkjLbP7fZUhaWCfp
j20amrascQtNewRSNLJjEBrXD1LbcwY+6Y0oRWym/zUeW0ZJPfpII2eIorSMfcqxnoMMaYu7UFLF
Gauz1Q9ziOFXScoRv7GZLPEkNccRmRvKNdcTSNmBzjRVZyuwFALnY52Jl2JEkY7nBya0ZGipBikk
xsFKljQYAmXmjNBSplA4y1euUY3gLyVTyrUo5RH2LDHaHDsxSToHaY2Q1E1jDePSO/+kv/CITWiF
d9yxFlv78QKnp9dxqOz0dH2UYj1rmfR39DllKaMLbuVVDRXcXOOKcuCYlObYt5wZLaPiQB1P2BCH
6sS8z4X3ichUizawkDLnJT5MzxQlk9qWc+YaJeGeshgeSLQs3nPuSBTf3T+8/97TWZ2tzpcHzbda
OCv5pFJ07R+8RI1NbliLKDnZTkJJeXwJ1uG4Tj1h0/3zd5ZAk1PHqxVubm82ePL0cT30cxiHIbtm
7z1yQ2H0gAkvddFyTsuji5s95fp/Nnqi+6Tohlikj4wrijEp6pO7DJoez9FK00Zp2vJSxgqVwbxM
mfy08jKdTUwVxTgiKYx3nCkihUAeuS094PtIo/M8lDHylO5BiRieNe0V25SnIqIhcy1VZNRZqio6
iiqJUsY5+sxBjNH72mlzGKc+pyhbCWgSxYWHKNos8cGd8ZVjz8PnpMm085HxZveGVjpPoiPlYccZ
pc2+qyjeNGreshobjKmLbBbHOTp6PrjRgaYx9s13oK9yOkS5FY71chrrW93GgTUcOr7iMWk92ShW
JNhxwU4M0vUUXhka8uU7uHEk8CuyXqbMj7t66H5Kpk+5WEdqrfRo6V8AAAD//4xVy24CIRTdz1cQ
1m1Tx+nCr+gXGHKFO85NGSDAaF347w2ggtUmXR84931O0hcdLJP5mlC14yNzPfmJQlrB0qkkWWQW
VMyhH62fIUVNApUXgWk8oGZH1JqRiTYzzqRe1+8hDRFYEhOY83kWVKEimbcx55mFoLTlM2+IvuoL
fuPs0gQxsOMEkVFkM4IJiWmHD9vWaiGLVsHprRVfj+MSIBmAWbRuADDGxpxQlv3tBTnfhF7bvfN2
F3595SMZCpPwCMGaJOohWsczeu4Y22ZDWe48gjtvZxdFtF+Yw636vvDx6mMV7TebCxptBF2BoV+9
PCEUCiOQDo0ncQlyQlW/VgODRZFtgK4p+zGdZ9yldDL7/9BXQEp0EZVw6aTlfcn1mcfk8389u7U5
J8wD+gNJFJHQp1EoHGHRxX15OIWIsxjJ7NE7T8WCRyfWA3wMgJu15N25+wEAAP//AwDdzCHTkAgA
AA==
headers:
CF-RAY:
- 94d9a27f5dc000f9-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 10 Jun 2025 14:42:51 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=7hq1JYlSmmLvjUR7npK1vcLJYOvCPn947S.EYBtvTcQ-1749566571-1.0.1.1-11XCSwdUqYCYC3zE9DZk20c_BHXTPqEi6YMhVtX9dekgrj0J3a4EHGdHvcnhBNkIxYzhM4zzQsetx2sxisMk62ywkO8Tzo3rlYdo__Kov7w;
path=/; expires=Tue, 10-Jun-25 15:12:51 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=bhxj6kzt6diFCyNbiiw60v4lKiUKaoHjQ3Yc4KWW4OI-1749566571331-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '30419'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '30424'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999859'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_b5983a9572e28ded39da7b12e678e2b7
status:
code: 200
message: OK
version: 1

View File

@@ -0,0 +1,643 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Sports Analyst. You
are an expert at gathering and organizing information. You carefully collect
details and present them in a structured way.\nYour personal goal is: Gather
information about the best soccer players\n\nTo give my best complete final
answer to the task respond using the exact following format:\n\nThought: I now
can give a great answer\nFinal Answer: Your final answer must be the great and
the most complete as possible, it must be outcome described.\n\nI MUST use these
formats, my job depends on it!"}, {"role": "user", "content": "Top 10 best players
in the world?"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '694'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.78.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.78.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAA6RXXW4cNxJ+1ykK8+JdoyVII/lH8yZprUi25BVsB0GwDoQadk13ZdgkU2TPaBIY
2Lc9wx5hz7F7kj1JUOye38iBgrwImiZZrPr41VdVv+wBDLgcjGBgakymCXb/vGrfyvX4+zCZ/fTd
3dWPD69v+N3V7bfjDw/N20GhJ/z4RzJpeerA+CZYSuxdt2yEMJFaPXp1cvri5YvTl6/yQuNLsnqs
Cmn/xO837Hh/eDg82T98tX/0uj9dezYUByP4xx4AwC/5r/rpSnoYjOCwWH5pKEasaDBabQIYiLf6
ZYAxckzo0qBYLxrvErns+qfat1WdRnANzs/BoIOKZwQIlfoP6OKcBOCzu2SHFs7y7xFckRCgEKSa
IPkAR4cwppggemNIIFhckERgl3fMvdgSMIKfwN9N8mMSGB4OjwsYY6QSfN7GAoFk4qVBZ6gAbgKa
VAC6EvyMBK0F9V143CrQEZLP5itsaPTZfXZHB/D8+Q17RxZuKUZ+/hz+cu0SCdwyNvzXzw4A9uHO
R1YLI7j0Mkcp++8XvnVJFiM4k4pcYof9whVXteWqTnHUGc6OsGspOxFrPzcYCWqOQA+GglpHC6Xw
eGzZVQXMOLJ3XTQKT4NTdhXgmC2nRQGWsNQPq6vV8IxN8rJY4jg8HA7h8vryDL7LiF60IduLXDme
sEGX7GIDI1epEXXK2Hb8LEJsjaEYKR4oXEOF693CMjq4HWMI//2PAnaHwhE+Iru0/w1Jg+yeDNyl
6Ns9gto75+cOJl6yO2PLMZGogzEQlTkKY9mxQQsTdhzrjFrvFnAEhCktemZlQ2Ofarj7+E0+rPBc
CjlTg8Me/UTYFDDdunmDYZmeN1y1BEfZBitT1qd9Kw4bcqlD61jReiP6mnCFaNGVitYtOlOTRgMX
nBZPRuq9fl48glRv+1mEyqPdj8ZnnIL4OcXYvXcSclWqocYZQYOlMq8B70gzTKFofEwwIRTKu3na
Z+ObVnygAmqyQa3ueA+RTCsEkXK+QeJkM07GtpmdavvbN5dncFFjE3IS3hBWLWWITjKhaMYO/kZw
Lu3C0ZMwuuVywmRLkl2YzslW3DaP4LS64f///Pd21gWVPWVPzgRvu6TrEhAanP4GrSxdfVLuylYB
7GKSVomAVpd2osnXl75hp6TSDaVvKCY2+doOcXSgFYJSDrgj1AtF60I4Jkbn4YP6XnrF68zCe4xR
nkymOy+prdA+ClMMnJQRs14PNLSP2JYMd+L75yuW1z9TKJOgl5IdygISWnIph7LFx164YEoUMp49
aMa7GUnMWQTY+J40CnGhfNWzNZaAYKkiV+odBoVIAI34GKFpbeJgSSWxanuteqlgvadFgwJv5aBH
6Yot2iejdC74Mz+GUWf3WQSsejXO5Ztn+f+ccVO2tku3KYWUA0bVJC1+vaaQKAZdBSxyPe3rQg6Z
Yw/lil67bwDzmq2mM7uE7DLImrwK4ZamQRCK5EyXdK8UmQ9aUxPc0Bxd6edxmkvfOYoh6x3+ASKp
9jwC0aeaVIKslpmlouwUQeMlP7+2HJQrkeorqsTirgxvSHzofco/FGskYbTryrhbz1ZR5czbLmmv
swKhcAPn5H6mBnumXKfENZZ/vpApEP1rfyC0cIulsIosCgjp4y1p0T13ASVH1Rb1Xos8Say5q9uJ
TN2VvMyv2LGmR3XJmj45vkKaHPVp7nvaKcKtL4X/9y8NesO7PyK4F+Ix8WONzxk0mFXPT6Dpzy8l
tmtrlrrbe7HGwwG7iW21scn1vdIig2kTwWI3W+ghkLDSPBufeJ/G2gJqlbaWq1UCHB1q+Le+xoZK
+IgWa43/hmckwfteH7766JvBv6kWIS2/bsaerWZtDNZHbZED9o79vipmkTDkkqwD1yRYebd82Duh
hkmWWvDV0lFsCYQCFROmNub2el3SyKroT7o4u9z4VFOkVWuetaykxmt101Flq4RmPhbrbDa9DnY9
ee8zK+NjIJMUEFWmrj3pdbuA6C2XPFn0nYM2+P0TxJ3SsFVyDzanFaFJG1EnJtdau7GAzvmUZTHP
ST/0K19Wk5H1VRA/jjtHB5323Ath9E6noJh8GOTVL3sAP+QJrN0aqgZBfBPSffJTytcdDYedvcF6
8Fuvvjw96leTT2jXC6+PjotHDN6XlJBt3BjiBgZNTeX66HriUwXwGwt7G2H/1p3HbHehs6ueYn69
YJQfVN4HoZLNdsjrbUI6GH9t2wrm7PAgkszY0H1iEn2KkibY2m5cHcRFTNTcT9hVJEG4m1kn4f74
BF+cIJ0em8Hel71fAQAA//8DAICIe4nBDwAA
headers:
CF-RAY:
- 94d9947e9abcf1fe-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 10 Jun 2025 14:33:27 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=UscrLNUFPQp17ivpT2nYqbIb8GsK9e0GpWC7sIKWwnU-1749566007-1.0.1.1-fCm5jdN02Agxc9.Ep4aAnKukyBp9S3iOLK9NY51NLG1zib3MnfjyCm5HhsWtUvr2lIjQpD_EWosVk4JuLbGxrKYZBa4WTendGsY9lCU9naU;
path=/; expires=Tue, 10-Jun-25 15:03:27 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=CGbpwQsAGlm5OcWs4NP0JuyqJh.mIqHyUZjIdKm8_.I-1749566007688-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '40370'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '40375'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999859'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_94fb13dc93d3bc9714811ff4ede4c08f
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Guardrail Agent. You
are a expert at validating the output of a task. By providing effective feedback
if the output is not valid.\nYour personal goal is: Validate the output of the
task\n\nTo give my best complete final answer to the task respond using the
exact following format:\n\nThought: I now can give a great answer\nFinal Answer:
Your final answer must be the great and the most complete as possible, it must
be outcome described.\n\nI MUST use these formats, my job depends on it!\nIMPORTANT:
Your final answer MUST contain all the information requested in the following
format: {\n \"valid\": bool,\n \"feedback\": str | None\n}\n\nIMPORTANT: Ensure
the final output does not include any code block markers like ```json or ```python."},
{"role": "user", "content": "\n Ensure the following task result complies
with the given guardrail.\n\n Task result:\n Here are the top
10 best soccer players in the world as of October 2023, based on their performance,
impact, and overall contributions to the game:\n\n1. **Lionel Messi** (Inter
Miami)\n - Position: Forward\n - Country: Argentina\n - Highlights: Messi
continues to showcase his exceptional dribbling, vision, and playmaking ability,
leading Argentina to victory in the 2022 FIFA World Cup and significantly contributing
to his club''s successes.\n\n2. **Kylian Mbapp\u00e9** (Paris Saint-Germain)\n -
Position: Forward\n - Country: France\n - Highlights: Known for his blistering
speed and clinical finishing, Mbapp\u00e9 is a key player for both PSG and the
French national team, known for his performances in Ligue 1 and international
tournaments.\n\n3. **Erling Haaland** (Manchester City)\n - Position: Forward\n -
Country: Norway\n - Highlights: Haaland''s goal-scoring prowess and strength
have made him one of the most feared strikers in Europe, helping Manchester
City secure several titles including the UEFA Champions League.\n\n4. **Kevin
De Bruyne** (Manchester City)\n - Position: Midfielder\n - Country: Belgium\n -
Highlights: De Bruyne\u2019s exceptional passing, control, and vision make him
one of the best playmakers in the world, instrumental in Manchester City\u2019s
dominance in domestic and European competitions.\n\n5. **Cristiano Ronaldo**
(Al Nassr)\n - Position: Forward\n - Country: Portugal\n - Highlights:
Despite moving to the Saudi Pro League, Ronaldo''s extraordinary talent and
goal-scoring ability keep him in the conversation among the best, having had
a legendary career across multiple leagues.\n\n6. **Neymar Jr.** (Al Hilal)\n -
Position: Forward\n - Country: Brazil\n - Highlights: Neymar''s agility,
creativity, and skill have kept him as a top performer in soccer, now showcasing
his talents in the Saudi Pro League while maintaining a strong national team
presence.\n\n7. **Robert Lewandowski** (Barcelona)\n - Position: Forward\n -
Country: Poland\n - Highlights: The prolific striker continues to score consistently
in La Liga, known for his finishing, positioning, and aerial ability, contributing
to Barcelona\u2019s successes.\n\n8. **Karim Benzema** (Al Ittihad)\n - Position:
Forward\n - Country: France\n - Highlights: The former Real Madrid star
remains a top talent, displaying leadership and technical skills, now continuing
his career in the Saudi Pro League.\n\n9. **Luka Modri\u0107** (Real Madrid)\n -
Position: Midfielder\n - Country: Croatia\n - Highlights: A master of midfield
control and passing, Modri\u0107 remains an influential figure at Real Madrid,
showcasing his experience and football intelligence.\n\n10. **Mohamed Salah**
(Liverpool)\n - Position: Forward\n - Country: Egypt\n - Highlights:
Salah''s explosive pace and goal-scoring ability keep him as a central figure
for Liverpool in the Premier League and European competitions, maintaining his
status as one of the elite forwards.\n\nThese players have demonstrated exceptional
skill, consistency, and impact in their respective teams and leagues, solidifying
their positions among the best in the world.\n\n Guardrail:\n Only
include Brazilian players, both women and men\n \n Your task:\n -
Confirm if the Task result complies with the guardrail.\n - If not, provide
clear feedback explaining what is wrong (e.g., by how much it violates the rule,
or what specific part fails).\n - Focus only on identifying issues \u2014
do not propose corrections.\n - If the Task result complies with the
guardrail, saying that is valid\n "}], "model": "gpt-4o-mini", "stop":
["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '4676'
content-type:
- application/json
cookie:
- __cf_bm=UscrLNUFPQp17ivpT2nYqbIb8GsK9e0GpWC7sIKWwnU-1749566007-1.0.1.1-fCm5jdN02Agxc9.Ep4aAnKukyBp9S3iOLK9NY51NLG1zib3MnfjyCm5HhsWtUvr2lIjQpD_EWosVk4JuLbGxrKYZBa4WTendGsY9lCU9naU;
_cfuvid=CGbpwQsAGlm5OcWs4NP0JuyqJh.mIqHyUZjIdKm8_.I-1749566007688-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.78.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.78.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAA4xUwWobMRC9+ysGnVqwjZ3YjuNbQim0hULBtIc6mLE0uzuxVtpqtHbckH8v2o29
TpNCLwuapzfzZvRmH3sAio1agNIFRl1WdnCb119+LIvZh+/+h9zM55Op3Wzv5dt4rKei+onhN/ek
45E11L6sLEX2roV1IIyUso6vJtfT2Ww0mjdA6Q3ZRMurOJj4QcmOBxeji8lgdDUYz5/ZhWdNohbw
swcA8Nh8k05n6EEtYNQ/RkoSwZzU4nQJQAVvU0ShCEtEF1W/A7V3kVwjfVn4Oi/iAj6B83vQ6CDn
HQFCnvQDOtlTAFi5j+zQwk1zXsDjygGs1A4tm5VaQIZWqN8GMyKzQb1N8ZVaFgQRZQuBpLYRjCcB
5yM0AzvAnmMBsSDIawwmIFtAAY7ATtvakEBl8UBBIAu+hB0G9rWA9rWLgUkAnQHv7AFKcmn8AnHv
4Tbgb7aM7kR/95UOJQb4HIYNpz2+h33BloAeUjV2+Rlz70vq+Cl7IHsYwvKFWqlIc8YarT1AoF81
SRSI/thAK+6Vnj5sfCyeayQ9Jbl+EqML4HZCWW0ztpbMcKVW7un8CQNltWCykautPQPQOR+xmUMy
z90z8nSyi/V5FfxG/qKqjB1LsQ6E4l2yhkRfqQZ96gHcNbasXzhNVcGXVVxHv6Wm3PV01uZT3TZ0
6PwIRh/RdvHxaHLRfyPh2lBEtnLmbKVRF2Q6brcGWBv2Z0DvrO3Xct7K3bbOLv+f9B2gNVWRzLoK
ZFi/bLm7Fij9Lf517TTmRrASCjvWtI5MIT2FoQxr2+6wkoNEKtcZu5xCFbhd5KxaX05wOkG6vtSq
99T7AwAA//8DABZK9w/WBAAA
headers:
CF-RAY:
- 94d9957c4b46f1fe-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 10 Jun 2025 14:33:46 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '18280'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '18287'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149998889'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_853da6b9ca8f386ff1427c837346290d
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Sports Analyst. You
are an expert at gathering and organizing information. You carefully collect
details and present them in a structured way.\nYour personal goal is: Gather
information about the best soccer players\n\nTo give my best complete final
answer to the task respond using the exact following format:\n\nThought: I now
can give a great answer\nFinal Answer: Your final answer must be the great and
the most complete as possible, it must be outcome described.\n\nI MUST use these
formats, my job depends on it!"}, {"role": "user", "content": "Top 10 best players
in the world?"}, {"role": "assistant", "content": "Thought: I now can give a
great answer \nFinal Answer: Here are the top 10 best soccer players in the
world as of October 2023, based on their performance, impact, and overall contributions
to the game:\n\n1. **Lionel Messi** (Inter Miami)\n - Position: Forward\n -
Country: Argentina\n - Highlights: Messi continues to showcase his exceptional
dribbling, vision, and playmaking ability, leading Argentina to victory in the
2022 FIFA World Cup and significantly contributing to his club''s successes.\n\n2.
**Kylian Mbapp\u00e9** (Paris Saint-Germain)\n - Position: Forward\n - Country:
France\n - Highlights: Known for his blistering speed and clinical finishing,
Mbapp\u00e9 is a key player for both PSG and the French national team, known
for his performances in Ligue 1 and international tournaments.\n\n3. **Erling
Haaland** (Manchester City)\n - Position: Forward\n - Country: Norway\n -
Highlights: Haaland''s goal-scoring prowess and strength have made him one of
the most feared strikers in Europe, helping Manchester City secure several titles
including the UEFA Champions League.\n\n4. **Kevin De Bruyne** (Manchester City)\n -
Position: Midfielder\n - Country: Belgium\n - Highlights: De Bruyne\u2019s
exceptional passing, control, and vision make him one of the best playmakers
in the world, instrumental in Manchester City\u2019s dominance in domestic and
European competitions.\n\n5. **Cristiano Ronaldo** (Al Nassr)\n - Position:
Forward\n - Country: Portugal\n - Highlights: Despite moving to the Saudi
Pro League, Ronaldo''s extraordinary talent and goal-scoring ability keep him
in the conversation among the best, having had a legendary career across multiple
leagues.\n\n6. **Neymar Jr.** (Al Hilal)\n - Position: Forward\n - Country:
Brazil\n - Highlights: Neymar''s agility, creativity, and skill have kept
him as a top performer in soccer, now showcasing his talents in the Saudi Pro
League while maintaining a strong national team presence.\n\n7. **Robert Lewandowski**
(Barcelona)\n - Position: Forward\n - Country: Poland\n - Highlights:
The prolific striker continues to score consistently in La Liga, known for his
finishing, positioning, and aerial ability, contributing to Barcelona\u2019s
successes.\n\n8. **Karim Benzema** (Al Ittihad)\n - Position: Forward\n -
Country: France\n - Highlights: The former Real Madrid star remains a top
talent, displaying leadership and technical skills, now continuing his career
in the Saudi Pro League.\n\n9. **Luka Modri\u0107** (Real Madrid)\n - Position:
Midfielder\n - Country: Croatia\n - Highlights: A master of midfield control
and passing, Modri\u0107 remains an influential figure at Real Madrid, showcasing
his experience and football intelligence.\n\n10. **Mohamed Salah** (Liverpool)\n -
Position: Forward\n - Country: Egypt\n - Highlights: Salah''s explosive
pace and goal-scoring ability keep him as a central figure for Liverpool in
the Premier League and European competitions, maintaining his status as one
of the elite forwards.\n\nThese players have demonstrated exceptional skill,
consistency, and impact in their respective teams and leagues, solidifying their
positions among the best in the world."}, {"role": "user", "content": "The task
result does not comply with the guardrail as it includes players from various
countries and only mentions two Brazilian players (Neymar Jr. and Neymar) while
excluding Brazilian women players entirely. The guardrail specifically requests
to include only Brazilian players, both women and men, which is not fulfilled."}],
"model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '4338'
content-type:
- application/json
cookie:
- __cf_bm=UscrLNUFPQp17ivpT2nYqbIb8GsK9e0GpWC7sIKWwnU-1749566007-1.0.1.1-fCm5jdN02Agxc9.Ep4aAnKukyBp9S3iOLK9NY51NLG1zib3MnfjyCm5HhsWtUvr2lIjQpD_EWosVk4JuLbGxrKYZBa4WTendGsY9lCU9naU;
_cfuvid=CGbpwQsAGlm5OcWs4NP0JuyqJh.mIqHyUZjIdKm8_.I-1749566007688-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.78.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.78.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//lFfNbhxHDr7rKYi5BDBagiTLsj032QvHceC1ExubIOtAYFdzuhlVs3pZ
1TMeBwb2Nfa857xArnqTfZIFq2e6R9J417oImmYVfz5+JIu/HwDMuJrNYeYaTK7t/OGzuv8+vPuu
O/2H+9k//vnF8flP4Zcf6uclPj35dlbYjVD+Ri5tbx250HaeEgcZxE4JE5nWk8dnTx+dnx+fnmdB
Gyrydq3u0uFZOGxZ+PD0+PTs8Pjx4cmTze0msKM4m8PfDwAAfs9/zU+p6ONsDsfF9ktLMWJNs/l4
CGCmwduXGcbIMaGkWTEJXZBEkl1/34S+btIcvgMJK3AoUPOSAKE2/wElrkgBPsgLFvRwkX/P4SUp
ASpBaghS6ODkGEqKCWJwjhQ6j2vSCCz5xCqorwpYBNdHlhroo/N95CX5NQSBZ4qf2DPK9l4BZUgN
rEJLAigVtCQFlBipsvOpIVboSBdBWxRH+Qy3Hbq0NVljS4ARwgLeuBRKUjg9Pn04/yAf5OQIHjz4
K61bVHilRw8eWIAAcAhvQ2TL4RxeBF2hVqPkue/LOVx4eMke/fj5JdeN57pJcQ5vhMycWW+DYXHF
3lMFi0FVhNqHEr1fF7Axbplg6SlCClDhp0+eYMWpgYYjVMpl6VnqIsPS4lX+30KtA/rD6IIamFiy
57QuoMFl/g2Ra+EFO5S0RSXIAKkpdr4vsxrzdMJe0CJHD4mwPTKcTg2nv7Fc/+G4j/Dq+k/hoPeA
60dCD6+xUq72IjbpbjACtaQ1VZY1hCtab9hg+O1qKkBCGmDN4cSOqCogkWuEHfoB9zgg5bR3jD4j
lunIbRfUKgJaTK6hWEBswsphJqbpS+hJJsRGtFgS6QiSpyX5mGF6aDD9yK5B9RyD3AOg9yElkgZb
eBlS7HrdC9P3ElYyxrskjZhyzrNfSGoRjjTY8STDWpILVguw5IQeWmqtGDZEvZH0rM6umMRht1Fp
5HQNSk1TZY0s7XBTfrfomJE5M2Reoya8ByZv1KNUAd4qV7QXjwtI2hN4qkkqiyQ3im/ipvsUkE3m
SFwQ64Akya+3iaYKGtKBJgV4woo0Ntztqa1Ow4piLIBQxT5I35KGPgIOJZ0T4FzwWFkVN2r9NPQp
WxgcRF2DQyXSjMkjw+RCKrW+DRd+SXEvOK+5WjD5ivR2UYUWvwBLx8tgOd4UDssdZEbDBk7FMZ8c
8KCPjroNG4YSGpoEupTLyiImoRiHumAZSsM05IPGz//XUc5z8B4F4R1KCvtD/wstSO4G/ozEmtr+
2LNOjqDkQi38adsgSKEydTZttukcsrZD7iBJueyTzbIcA6aE7qoAiglLz7HJzYE0kl/s7U9DcyW9
2VzvAvDYAPgWS2Xy8IpiH+8zfTSSfGH4/GQFibDwyINHYz1arPkZkMfHDdsGGApQjCTJmkiHmva2
hgKubjShVdArUEw0tkbvuSabxcOAhkzeHPMTi/k9Xv8R4ULx+s/fwv0I/+763wHeYu/DF2ivnJt3
TJg5PxHwNvsHJ4z6NbJQtWWLOTDSJSYNVvrT8yIPjnbj3o15kdOei2UzbQgTL7ft72kOvGGsA7xj
v8T7sP15Qz7SF9guQB87UjbAq2xayTOWnsCRzanDMvM3W/3PP/8Vd9pcPj7VxFAHTIbLkqDFiqDh
FhBcUCGNKQhlcLYefc3T4eTYYn/GCL+gkGJKU9L/B9PH6N+ib4kV4yS4NQHylJ5eVwWYsZ15t8NX
0ptdfRxtFrVFM72vxsfD2BPyx+lB5XPH+Npyf2+14jkmaEbv81EWp1SxJWzz3sj8GY0GyU/XOy9j
Y+KWzTs8HF7E267tNGy79MB/072tBRvfsYDU9NYsO6Vc+1JvbH0TQdk124e8pxrd+mh3fVBa9BFt
hZHe+x0BioSUIciLy68byedxVfGh7jSU8dbV2YKFY3OphDGIrSUxhW6WpZ8PAH7NK1F/Y8uZdRra
Ll2mcEXZ3JMnJ4O+2bSJTdLz883CNEs2ICfByaOz7b0bGi8rSsg+7qxVM4euoWq6O+1g2FccdgQH
O3Hf9Wef7iF2lvpr1E8CZyObqsvOyORuxjwdU7JV9UvHRpyzw7NIumRHl4lJLRcVLbD3wwI5i+uY
qL1csNSknfKwRS66y4dn+OgM6elDNzv4fPBfAAAA//8DAGsE37hTDwAA
headers:
CF-RAY:
- 94d995f0ca0cf1fe-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 10 Jun 2025 14:34:22 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '35941'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '35946'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149998983'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_c874fed664f962f5ee90decb8ebad875
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Guardrail Agent. You
are a expert at validating the output of a task. By providing effective feedback
if the output is not valid.\nYour personal goal is: Validate the output of the
task\n\nTo give my best complete final answer to the task respond using the
exact following format:\n\nThought: I now can give a great answer\nFinal Answer:
Your final answer must be the great and the most complete as possible, it must
be outcome described.\n\nI MUST use these formats, my job depends on it!\nIMPORTANT:
Your final answer MUST contain all the information requested in the following
format: {\n \"valid\": bool,\n \"feedback\": str | None\n}\n\nIMPORTANT: Ensure
the final output does not include any code block markers like ```json or ```python."},
{"role": "user", "content": "\n Ensure the following task result complies
with the given guardrail.\n\n Task result:\n Here are the top
10 best soccer players in the world, focusing exclusively on Brazilian players,
both women and men, based on their performance and impact in the game as of
October 2023:\n\n1. **Neymar Jr.** \n - Position: Forward \n - Club: Al
Hilal \n - Highlights: One of the most skilled forwards globally, Neymar
continues to dazzle with his dribbling, playmaking, and goal-scoring ability,
having a significant impact on both his club and the Brazilian national team.\n\n2.
**Vin\u00edcius J\u00fanior** \n - Position: Forward \n - Club: Real Madrid \n -
Highlights: Vin\u00edcius has emerged as a key player for Real Madrid, noted
for his speed, technical skills, and crucial goals in important matches, showcasing
his talent on both club and international levels.\n\n3. **Richarlison** \n -
Position: Forward \n - Club: Tottenham Hotspur \n - Highlights: Known
for his versatility and aerial ability, Richarlison has become a vital member
of the national team and has the capability to change the game with his pace
and scoring ability.\n\n4. **Marta** \n - Position: Forward \n - Club:
Orlando Pride \n - Highlights: A true legend of women''s soccer, Marta has
consistently showcased her skill, leadership, and goal-scoring prowess, earning
numerous awards and accolades throughout her legendary career.\n\n5. **Andressa
Alves** \n - Position: Midfielder \n - Club: Roma \n - Highlights:
A pivotal player in women''s soccer, Andressa has displayed her exceptional
skills and tactical awareness both in club play and for the Brazilian national
team.\n\n6. **Alana Santos** \n - Position: Defender \n - Club: Benfica \n -
Highlights: Alana is recognized for her defensive prowess and ability to contribute
to the attack, establishing herself as a key player for both her club and the
national team.\n\n7. **Gabriel Jesus** \n - Position: Forward \n - Club:
Arsenal \n - Highlights: With a flair for scoring and assisting, Gabriel
Jesus is an essential part of the national team, known for his work rate and
intelligence on the field.\n\n8. **Ta\u00eds Ara\u00fajo** \n - Position:
Midfielder \n - Club: S\u00e3o Paulo \n - Highlights: A rising star in
Brazilian women''s soccer, Ta\u00eds has gained recognition for her strong performances
in midfield, showcasing both skill and creativity.\n\n9. **Thiago Silva** \n -
Position: Defender \n - Club: Chelsea \n - Highlights: An experienced
and reliable center-back, Silva\u2019s leadership and defensive abilities have
made him a cornerstone for Chelsea and the Brazilian national team.\n\n10. **Bia
Zaneratto** \n - Position: Forward \n - Club: Palmeiras \n - Highlights:
A talented forward, Bia has become known for her goal-scoring capabilities and
playmaking skills, contributing significantly to both her club and the national
team.\n\nThis list highlights the incredible talent and contributions of Brazilian
players in soccer, showcasing their skills across both men''s and women''s games,
thus representing Brazil''s rich soccer legacy.\n\n Guardrail:\n Only
include Brazilian players, both women and men\n \n Your task:\n -
Confirm if the Task result complies with the guardrail.\n - If not, provide
clear feedback explaining what is wrong (e.g., by how much it violates the rule,
or what specific part fails).\n - Focus only on identifying issues \u2014
do not propose corrections.\n - If the Task result complies with the
guardrail, saying that is valid\n "}], "model": "gpt-4o-mini", "stop":
["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '4571'
content-type:
- application/json
cookie:
- __cf_bm=UscrLNUFPQp17ivpT2nYqbIb8GsK9e0GpWC7sIKWwnU-1749566007-1.0.1.1-fCm5jdN02Agxc9.Ep4aAnKukyBp9S3iOLK9NY51NLG1zib3MnfjyCm5HhsWtUvr2lIjQpD_EWosVk4JuLbGxrKYZBa4WTendGsY9lCU9naU;
_cfuvid=CGbpwQsAGlm5OcWs4NP0JuyqJh.mIqHyUZjIdKm8_.I-1749566007688-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.78.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.78.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAA4ySTW/bMAyG7/4VhM72kLj5WHxrDwEKtNilGDYshcFItK1WlgRJTjYE+e+D7DR2
tw7YxYD58KX4kjwlAEwKVgDjDQbeWpXd1d3Dl+5Yfb3dPD6K7bri26cw899f5vruG0ujwuxfiIc3
1SduWqsoSKMHzB1hoFh1vl5slqvVbJX3oDWCVJTVNmQLk7VSyyyf5Ytsts7mny/qxkhOnhXwIwEA
OPXf2KcW9JMVMEvfIi15jzWx4poEwJxRMcLQe+kD6sDSEXKjA+m+9afGdHUTCrgHbY7AUUMtDwQI
dewfUPsjOYCd3kqNCm77/wJOOw2wYwdUUuxYAcF1lA6xikjskb/GsO6U2unz9HFHVedRXeAEoNYm
YBxgb/v5Qs5Xo8rU1pm9/0PKKqmlb0pH6I2OpnwwlvX0nAA89wPt3s2IWWdaG8pgXql/bpMvh3ps
3ONI8/UFBhNQTVTLPP2gXikooFR+shLGkTckRum4P+yENBOQTFz/3c1HtQfnUtf/U34EnJMNJErr
SEj+3vGY5iie+b/SrlPuG2ae3EFyKoMkFzchqMJODcfH/C8fqC0rqWty1snhAitb3ixwuUDa3HCW
nJPfAAAA//8DAOHOvQuPAwAA
headers:
CF-RAY:
- 94d996d2b877f1fe-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 10 Jun 2025 14:34:38 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '15341'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '15343'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149998917'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_22419b91d42cf1e03278a29f3093ed3d
status:
code: 200
message: OK
version: 1

View File

@@ -0,0 +1,726 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Sports Analyst. You
are an expert at gathering and organizing information. You carefully collect
details and present them in a structured way.\nYour personal goal is: Gather
information about the best soccer players\n\nTo give my best complete final
answer to the task respond using the exact following format:\n\nThought: I now
can give a great answer\nFinal Answer: Your final answer must be the great and
the most complete as possible, it must be outcome described.\n\nI MUST use these
formats, my job depends on it!"}, {"role": "user", "content": "Top 10 best players
in the world?"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '694'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.78.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.78.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAA5yXzVIbORDH7zxF11xiqLHLNp/xDUhMssEJm49N1S4pqq1pz/SikYaWxo6Tynmf
ZQ/7AnvNPtiWZIMNGAK5UHhaLfVPrW799XUNIOEs6UGiCvSqrHTzIK9HH/yzi6Mv44uT3z93X5fn
Fxefs49bR3vya5IGDzv8k5S/9GopW1aaPFszMysh9BRm7exuPd3e2d1s70VDaTPSwS2vfHPLNks2
3Oy2u1vN9m6zszf3LiwrckkP/lgDAPga/4Y4TUafkx6008svJTmHOSW9q0EAiVgdviToHDuPxifp
wqis8WRi6O8LW+eF78FLMHYCCg3kPCZAyEP8gMZNSABOTZ8NatiPv3vwviDwtoJOG4bkPDirFAlU
GqckDtiALwgmVnSWAjqwI3ijvB2SQLfd3UzjSkMCzsh4HjFlMERHGdjoyQJCioyHimRkpUSjyKXg
zllrlwKXFSofBudYBgOaDOyYBLWGgCc8rEMuHHg7n9ATlq4FL0gI2MX4nJda+VooA83O907Nqem0
YGPjmK0hDQNyjqHx0ngSGDCWDIf99Y2NUwMATTixjsMiPehbmaBk8++vaAr7fhYDuR48Ex4ONZs8
hTE7tiaF3KJuOmWFTQ44ZM1+2pq776uCaUwlGe96MKi150oTHKDW1kD25I3AhI0hSeHQVgj7JQkr
BFVgWcXZP4Z9h8O6uvoGjW67211vBcJuIHw11YwGBkOsqu9/Q+MEhR28Qza+eURSIpvHg76riLIU
PKnC8EVNKYzYsCvY5Kvh+i/7+3dE29lbT+GY85qgA569DmkuLzcjsyU5zwpUXblItRmonkvYZniB
qMOJaAzQqIJcSN8h++njiU7sJOyzm4Fdyxob59kov5rsoDYZOc05xjoJTmGiD8/7+3A4x3RwTBgA
Z+mMOdpcT+FEqGSSS+sMPkJuxdTRmA08IziQemroQZQDzkZMOiO5CzR0iuUTGvsXj398LldGm0Lg
rCv3IOTIth3ZULiEAzJfqERo7OvmS++5wOzxqTtaSlYaO1OJ5/F/j8qzQg1sPGnNORlFqylD1ays
vKuTeJPMwYR9AW8JNQwwE84i3U6ge03TEgV+kVZAgxesUT8erK+RJYVs0VUenKurstHXk3UrPVIH
zGZdxeB3Q/BvQ/f2cEwTNJmduHOGRv8QDlAUaWvwJ0Aum0MK1dwl/kASRn1/W4yd4yBcPQMyTxyc
xJtnKS/LBTinPMbQUPB6U5yfvr2AOLAFlpTBO9RYQOOYxySVtfqnO+FoQXiVrtU8N6po0ctndXR/
GUWCp4HgNzbf/1FcO/jl+7+GrUBj6ST+NMcjztoPy727fn8mOu14A9fnCAObCf/310qG+9rbQ/rb
PU3g7vq5xXV5tG63iPkdFpHeF+yiwgjCIwhF1rcFj6pFguIJcieFCt1N8RNutDu0jZCrSPmg3KLM
CQJsaH2xKPjgHEjFYPBFHeMgHzfQteb1A4Im9EgX9dkYZbqIsiJx0dFWbELOwpRhWR6Fe1jYh7KF
OgwfWQEao67jYikMax8idXSlEAscU6AJ8pSM11MYEhkQylEyyoJmdLakIByDUosic1lVrhCVrWWN
KzSqHQadbWqtlwxojPUxrKiuP80t3670tLZ5JXbobrgms1o+E0JnTdDOztsqidZvawCfom6vr0nx
pBJbVv7M23OKy3W63dl8yeK5sLDubG3Ord561AvD7s52umLCs4w8snZL0j9RqArKFq6LdwLWGdsl
w9oS9u1wVs09Q2eTP2T6hUEpqjxlZ5VQxuo68mKYUHhO3TXsaptjwIkjGbOiM88kIRUZjbDWs0dO
4qbOU3k2YpOTVMKzl86oOtvcwu0tpKebKln7tvY/AAAA//8DAJ67Vp33DQAA
headers:
CF-RAY:
- 94d9b5400dcd624b-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 10 Jun 2025 14:55:42 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=8Yv8F0ZCFAo2lf.qoqxao70yxyjVvIV90zQqVF6bVzQ-1749567342-1.0.1.1-fZgnv3RDfunvCO1koxwwFJrHnxSx_rwS_FHvQ6xxDPpKHwYr7dTqIQLZrNgSX5twGyK4F22rUmkuiS6KMVogcinChk8lmHtJBTUVTFjr2KU;
path=/; expires=Tue, 10-Jun-25 15:25:42 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=wzh8YnmXvLq1G0RcIVijtzboQtCZyIe2uZiochkBLqE-1749567342267-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '33288'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '33292'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999859'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_6a587ea22edef774ecdada790a320cab
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Sports Analyst. You
are an expert at gathering and organizing information. You carefully collect
details and present them in a structured way.\nYour personal goal is: Gather
information about the best soccer players\n\nTo give my best complete final
answer to the task respond using the exact following format:\n\nThought: I now
can give a great answer\nFinal Answer: Your final answer must be the great and
the most complete as possible, it must be outcome described.\n\nI MUST use these
formats, my job depends on it!"}, {"role": "user", "content": "Top 10 best players
in the world?"}, {"role": "assistant", "content": "Thought: I now can give a
great answer \nFinal Answer: The top 10 best soccer players in the world, as
of October 2023, can be identified based on their recent performances, skills,
impact on games, and overall contributions to their teams. Here is the structured
list:\n\n1. **Lionel Messi (Inter Miami CF)**\n - Position: Forward\n -
Key Attributes: Dribbling, vision, goal-scoring ability.\n - Achievements:
Multiple Ballon d''Or winner, Copa America champion, World Cup champion (2022).\n\n2.
**Kylian Mbapp\u00e9 (Paris Saint-Germain)**\n - Position: Forward\n - Key
Attributes: Speed, technique, finishing.\n - Achievements: FIFA World Cup
champion (2018), Ligue 1 titles, multiple domestic cups.\n\n3. **Erling Haaland
(Manchester City)**\n - Position: Forward\n - Key Attributes: Power, speed,
goal-scoring instinct.\n - Achievements: Bundesliga top scorer, UEFA Champions
League winner (2023), Premier League titles.\n\n4. **Kevin De Bruyne (Manchester
City)**\n - Position: Midfielder\n - Key Attributes: Passing, vision, creativity.\n -
Achievements: Multiple Premier League titles, FA Cups, UEFA Champions League
winner (2023).\n\n5. **Karim Benzema (Al-Ittihad)**\n - Position: Forward\n -
Key Attributes: Goal-scoring, playmaking, tactical intelligence.\n - Achievements:
2022 Ballon d''Or winner, multiple Champions Leagues with Real Madrid.\n\n6.
**Neymar Jr. (Al Hilal)**\n - Position: Forward\n - Key Attributes: Flair,
dribbling, creativity.\n - Achievements: Multiple domestic league titles,
Champions League runner-up.\n\n7. **Robert Lewandowski (FC Barcelona)**\n -
Position: Forward\n - Key Attributes: Finishing, positioning, aerial ability.\n -
Achievements: FIFA Best Men''s Player, multiple Bundesliga titles, La Liga champion
(2023).\n\n8. **Mohamed Salah (Liverpool)**\n - Position: Forward\n - Key
Attributes: Speed, finishing, dribbling.\n - Achievements: Premier League
champion, FA Cup, UEFA Champions League winner.\n\n9. **Vin\u00edcius J\u00fanior
(Real Madrid)**\n - Position: Forward\n - Key Attributes: Speed, dribbling,
creativity.\n - Achievements: UEFA Champions League winner (2022), La Liga
champion (2023).\n\n10. **Luka Modri\u0107 (Real Madrid)**\n - Position:
Midfielder\n - Key Attributes: Passing, vision, tactical intelligence.\n -
Achievements: Multiple Champions League titles, Ballon d''Or winner (2018).\n\nThis
list is compiled based on their current form, past performances, and contributions
to their respective teams in both domestic and international competitions. Player
rankings can vary based on personal opinion and specific criteria used for evaluation,
but these players have consistently been regarded as some of the best in the
world as of October 2023."}, {"role": "user", "content": "You are not allowed
to include Brazilian players"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '3594'
content-type:
- application/json
cookie:
- __cf_bm=8Yv8F0ZCFAo2lf.qoqxao70yxyjVvIV90zQqVF6bVzQ-1749567342-1.0.1.1-fZgnv3RDfunvCO1koxwwFJrHnxSx_rwS_FHvQ6xxDPpKHwYr7dTqIQLZrNgSX5twGyK4F22rUmkuiS6KMVogcinChk8lmHtJBTUVTFjr2KU;
_cfuvid=wzh8YnmXvLq1G0RcIVijtzboQtCZyIe2uZiochkBLqE-1749567342267-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.78.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.78.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//nJfNctpIEMfvfoouXYJdwgXYjm1u4JjEsam4kuzuYZ1yNaNG6njUo50Z
QUgq5zzLHvYF9pp9sK0ZwOAEx3EuFKjVTf+mP/TXpy2AhLOkC4kq0Kuy0s1+Xucvb06e+X7ndDzp
DflscN6yRBfng75O0uBhRu9J+aXXrjJlpcmzkblZWUJPIWr7cP/44Onh3sFxNJQmIx3c8so3902z
ZOFmp9XZb7YOm+2jhXdhWJFLuvDnFgDAp/gZ8pSMPiRdaKXLKyU5hzkl3dubABJrdLiSoHPsPIpP
0pVRGfEkMfW3hanzwnfhDMRMQaFAzhMChDzkDyhuShbgSgYsqKEXf3fhBVkCdoACdZUFUNDsPJgx
+ILAmwraLRiR8+CMUmSh0jgj64Al3jE1VmeALni8Ut6MyEKn1dlLgT4oXWcsOfQtfmTNKEvn7pVc
SXsXdnYu2AhpGJJzDI0z8WRhyFgynAy2d3auBACacGkch4p0YWDsFG22uH5OM+h5b3lUe3JdeGZ5
NNIseQoTdmwkhdygbjplbEgER6zZz3YX7j1VME2oJPGuC8Nae640QR+1NgLZk1cWpixCNoUTUyH0
SrKsEFSBZRWj/xHpT+rq9ho0Oq1OZ3s3EHYC4fkskg9HWFVf/4bGJVp28AZZfPM52RJZHg/6piLK
UvCkCuG/akphzMKuYMk3ww3OBr17sm0fbadwwXlN0AbPXpNLoVweRmZKcp4VqLpykWovUJ3acMzw
AlGjZNAYoqiCXCjfCfvZ44kuzTScs5uD3akai/Msym8m69eSkdOcY+zW4BQC/XY66MHJAtPBBWEA
nJcz1mhvO4VLSyWTXVrn8BFyP5aOJizwjKBv65nQT1EOORsz6YzsfaBhlNc7NC4YnjzclxuzTSFw
1pX7KeTIdhDZ0HIJfZKPVCI0erp55j0XmD2+dM/XipXGES/xJn73qDwr1MDiSWvOSRRtpgxTs3Hy
bjvxWzIHU/YFvCbUMMTMchbpnga612EPebigKUpmpu6GoTE4gT5aRdoIPh5ysJyvFKqFS/yBZBn1
jzdLHL5+2KFDkicOLuMWXENb7+FFVS8wzCTe3SuLAh4GxKEpsKQM3qDGAhoXPCFbGaN/eZmMV4TZ
co9u5vmmEVfrcN6KP+7ESHAUCH5n+fqP4trBy6//ChsLjbVi/jJHtnoIPDRaD05MZ/vHlTiOz7D6
BmFoMsv/fXkQ4fH74RFDNLxvVm7b6vsJWzwCIk67FXheoLUzOMew8fqhUwWGtbAqljz31uTB5bD2
wFrtid2l712Y50ZnJNA3xt8ug3tWYKzjaW1NRSgr+IIrsHXwbNZVBHxbsJsLnAIdjIgEMHtfu6B7
vFlIFvpesEB4yI2Nqh05MEH5GEcwLQwUOCEoMSNwnAuPWaF44LJC5ZciiS0oXY/mUcxcN4ViWsFw
hqjBecxpg4rahVNUxSKLoNMsKZMLf6SQjl0Epw+KqmWkG9bapVCRHRtboqhwQOGPce10d9dlpKVx
7TBIWam1XjOgiPExxShg3y0sn28lqzZ5Zc3IfeOazFfJtSV0RoI8dd5USbR+3gJ4F6VxfUftJpU1
ZeWvvbmh+HeHR+15vGSlyFfWp+2FcE688ahXhvbe8dLvTsTrjDyydmvyOlGoCspWvistjnXGZs2w
tcb9fT6bYs/ZWfKfCb8yqFBJyq4rSxmru8yr2yyFV5b7brs955hw4shOWNG1Z7KhFhmNsdbzF4nE
zZyn8nrMkpOtLM/fJsbV9d4+HuwjHe+pZOvz1v8AAAD//wMAVoKTVVsNAAA=
headers:
CF-RAY:
- 94d9b6782db84d3b-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 10 Jun 2025 14:56:30 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '31484'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '31490'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999166'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_aa737cf40bb76af9f458bfd35f7a77a1
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Sports Analyst. You
are an expert at gathering and organizing information. You carefully collect
details and present them in a structured way.\nYour personal goal is: Gather
information about the best soccer players\n\nTo give my best complete final
answer to the task respond using the exact following format:\n\nThought: I now
can give a great answer\nFinal Answer: Your final answer must be the great and
the most complete as possible, it must be outcome described.\n\nI MUST use these
formats, my job depends on it!"}, {"role": "user", "content": "Top 10 best players
in the world?"}, {"role": "assistant", "content": "Thought: I now can give a
great answer \nFinal Answer: The top 10 best soccer players in the world, as
of October 2023, can be identified based on their recent performances, skills,
impact on games, and overall contributions to their teams. Here is the structured
list:\n\n1. **Lionel Messi (Inter Miami CF)**\n - Position: Forward\n -
Key Attributes: Dribbling, vision, goal-scoring ability.\n - Achievements:
Multiple Ballon d''Or winner, Copa America champion, World Cup champion (2022).\n\n2.
**Kylian Mbapp\u00e9 (Paris Saint-Germain)**\n - Position: Forward\n - Key
Attributes: Speed, technique, finishing.\n - Achievements: FIFA World Cup
champion (2018), Ligue 1 titles, multiple domestic cups.\n\n3. **Erling Haaland
(Manchester City)**\n - Position: Forward\n - Key Attributes: Power, speed,
goal-scoring instinct.\n - Achievements: Bundesliga top scorer, UEFA Champions
League winner (2023), Premier League titles.\n\n4. **Kevin De Bruyne (Manchester
City)**\n - Position: Midfielder\n - Key Attributes: Passing, vision, creativity.\n -
Achievements: Multiple Premier League titles, FA Cups, UEFA Champions League
winner (2023).\n\n5. **Karim Benzema (Al-Ittihad)**\n - Position: Forward\n -
Key Attributes: Goal-scoring, playmaking, tactical intelligence.\n - Achievements:
2022 Ballon d''Or winner, multiple Champions Leagues with Real Madrid.\n\n6.
**Neymar Jr. (Al Hilal)**\n - Position: Forward\n - Key Attributes: Flair,
dribbling, creativity.\n - Achievements: Multiple domestic league titles,
Champions League runner-up.\n\n7. **Robert Lewandowski (FC Barcelona)**\n -
Position: Forward\n - Key Attributes: Finishing, positioning, aerial ability.\n -
Achievements: FIFA Best Men''s Player, multiple Bundesliga titles, La Liga champion
(2023).\n\n8. **Mohamed Salah (Liverpool)**\n - Position: Forward\n - Key
Attributes: Speed, finishing, dribbling.\n - Achievements: Premier League
champion, FA Cup, UEFA Champions League winner.\n\n9. **Vin\u00edcius J\u00fanior
(Real Madrid)**\n - Position: Forward\n - Key Attributes: Speed, dribbling,
creativity.\n - Achievements: UEFA Champions League winner (2022), La Liga
champion (2023).\n\n10. **Luka Modri\u0107 (Real Madrid)**\n - Position:
Midfielder\n - Key Attributes: Passing, vision, tactical intelligence.\n -
Achievements: Multiple Champions League titles, Ballon d''Or winner (2018).\n\nThis
list is compiled based on their current form, past performances, and contributions
to their respective teams in both domestic and international competitions. Player
rankings can vary based on personal opinion and specific criteria used for evaluation,
but these players have consistently been regarded as some of the best in the
world as of October 2023."}, {"role": "user", "content": "You are not allowed
to include Brazilian players"}, {"role": "assistant", "content": "Thought: I
now can give a great answer \nFinal Answer: Here is an updated list of the
top 10 best soccer players in the world as of October 2023, excluding Brazilian
players:\n\n1. **Lionel Messi (Inter Miami CF)**\n - Position: Forward\n -
Key Attributes: Dribbling, vision, goal-scoring ability.\n - Achievements:
Multiple Ballon d''Or winner, Copa America champion, World Cup champion (2022).\n\n2.
**Kylian Mbapp\u00e9 (Paris Saint-Germain)**\n - Position: Forward\n - Key
Attributes: Speed, technique, finishing.\n - Achievements: FIFA World Cup
champion (2018), Ligue 1 titles, multiple domestic cups.\n\n3. **Erling Haaland
(Manchester City)**\n - Position: Forward\n - Key Attributes: Power, speed,
goal-scoring instinct.\n - Achievements: Bundesliga top scorer, UEFA Champions
League winner (2023), Premier League titles.\n\n4. **Kevin De Bruyne (Manchester
City)**\n - Position: Midfielder\n - Key Attributes: Passing, vision, creativity.\n -
Achievements: Multiple Premier League titles, FA Cups, UEFA Champions League
winner (2023).\n\n5. **Karim Benzema (Al-Ittihad)**\n - Position: Forward\n -
Key Attributes: Goal-scoring, playmaking, tactical intelligence.\n - Achievements:
2022 Ballon d''Or winner, multiple Champions Leagues with Real Madrid.\n\n6.
**Robert Lewandowski (FC Barcelona)**\n - Position: Forward\n - Key Attributes:
Finishing, positioning, aerial ability.\n - Achievements: FIFA Best Men''s
Player, multiple Bundesliga titles, La Liga champion (2023).\n\n7. **Mohamed
Salah (Liverpool)**\n - Position: Forward\n - Key Attributes: Speed, finishing,
dribbling.\n - Achievements: Premier League champion, FA Cup, UEFA Champions
League winner.\n\n8. **Vin\u00edcius J\u00fanior (Real Madrid)**\n - Position:
Forward\n - Key Attributes: Speed, dribbling, creativity.\n - Achievements:
UEFA Champions League winner (2022), La Liga champion (2023).\n\n9. **Luka Modri\u0107
(Real Madrid)**\n - Position: Midfielder\n - Key Attributes: Passing, vision,
tactical intelligence.\n - Achievements: Multiple Champions League titles,
Ballon d''Or winner (2018).\n\n10. **Harry Kane (Bayern Munich)**\n - Position:
Forward\n - Key Attributes: Goal-scoring, technique, playmaking.\n - Achievements:
Golden Boot winner, Premier League titles, UEFA European Championship runner-up.\n\nThis
list has been adjusted to exclude Brazilian players and focuses on those who
have made significant impacts in their clubs and on the international stage
as of October 2023. Each player is recognized for their exceptional skills,
performances, and achievements."}, {"role": "user", "content": "You are not
allowed to include Brazilian players"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '6337'
content-type:
- application/json
cookie:
- __cf_bm=8Yv8F0ZCFAo2lf.qoqxao70yxyjVvIV90zQqVF6bVzQ-1749567342-1.0.1.1-fZgnv3RDfunvCO1koxwwFJrHnxSx_rwS_FHvQ6xxDPpKHwYr7dTqIQLZrNgSX5twGyK4F22rUmkuiS6KMVogcinChk8lmHtJBTUVTFjr2KU;
_cfuvid=wzh8YnmXvLq1G0RcIVijtzboQtCZyIe2uZiochkBLqE-1749567342267-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.78.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.78.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//rJfNbhs3EMfvforBXiIHK0OSv3WTHCsxYiFunKJo6sAYkaPdqbkkS3Il
K0HOfZY+R/tgBSnJkhPZtdteDC+HQ82P87H//bIFkLHMupCJEoOorGr2i7rwvWp2tt+qfzi4tCf2
6Pjjx5/36FX74nOWRw8z+pVEWHrtCFNZRYGNnpuFIwwUT20f7h3vHxzuHu8lQ2UkqehW2NDcM82K
NTc7rc5es3XYbB8tvEvDgnzWhV+2AAC+pL8xTi3pNutCK1+uVOQ9FpR17zYBZM6ouJKh9+wD6pDl
K6MwOpBOoX8oTV2UoQtnoM0UBGooeEKAUMT4AbWfkgO40gPWqKCXnrvwhhwBewglgaMJe5Kg2Acw
47QWjIV2C0bkA3gjBDmwCmfkPLBOO6bGKQnoo8c7EcyIHHRand0c6NYqFhzUDOhWqFqyLqDv8DMr
Rr08p3ulr3R7B16+PGejScGQvGdonOlADoaMFcPJYPvlyysNAE24MJ5jdrowMG6KTi7W39IMeiE4
HtWBfBdeOR6NFOsihwl7NjqHwqBqemFcDARHrDjMdhbuPVEyTagiHXwXhrUKbBVBH5UyGuSLdw6m
rDW5HE6MRehV5FggiBIrm07/KV3ESW3v1qDRaXU62zuRsBMJ384S+XCE1v75BzQu0LGHS2Qdmq/J
Vcj6+aCXlkjmEEiUmn+rKYcxa/Yl62Iz3OBs0Hsg2vbRdg7VEv6ci5qgDYGDIp8DagkTdGxqD9JU
5AMLELX1iXA3Ep66eOXwBlHF3Y0halGSj6k84TB7Pt2FmcY793PIexlk7QNrETZT9mstySsuMBVx
dIoH/Xg66MHJAtnDOWFknKc25Wt3O4cLRxWTW1rn/AlyL6WRJqzhFUHf1TNNT6IcshwzKUnuIdDY
4uvVmgYPT/65RjdGm0PkrK1/EnJi209s6LiCPunPVCE0eqp5FgKXKJ+futdrycpTu1d4k/4PKAIL
VMA6kFJckBa0mTJ20MYuvKvSb8k8TDmU8J5QwRClY5noDiLd+zieApzTFLU0U3/D0BicQB+dIGU0
Ph9ysOy1HOzCJT0gOUb1+JRJjdiPo3VI+oWHizQR19DWa3iR1XOMbYn3Z8wigYcRcWhKrEjCJSos
oXHOE3LWGPWvB8t4RSiXM3UzzzeFuBqN81J8vBITwVF6D9Q3CEMjHf/1OzTW8vj/9NUzim/4UI3d
peP7ylyM0YRzHHHeoHMzeItxUPRjgjUMa82i/K8dtTbxV821GeS1UZI09I0Jdw30wNhIOTqtnbGE
egVesgVXR89mbRNcuzXvKOn4wQn4WKo2vbNpTNpH3eJvWCmfg11mb9lcsaem6EiT9zvLcx4tw3tg
TxiDH0r2cw3EHkTtovqDacmKAGVJ6dUTzEIx+eBYxLii/HlE5ezAKYpy8QSs40aSUKIHSZXRPsx/
h24F2QXn/A7SW9dRhe4GR4rAkhsbV6EWEcobxZLHsxRUSezABwy1j4LMm4qWOi4JuH9QbDvr2tLR
uPYY9a2ulVozoNYmYAwxqdpPC8vXOx2rTGGdGflvXLP5HLl2hN7oqFl9MDZL1q9bAJ+SXq7vSeDM
OlPZcB3MDaWfa++1DucHZiudvjIftPYX1mACqpWh02p38g1HXksKyMqvie5MoChJrnxXCh1ryWbN
sLUG/n08m86ew7MunnL8yiBiUZC8to7kvNw2bXMUP2Qe2nZ30SngzJObsKDrwORiMiSNsVbzz4vM
z3yg6nrMuiBnHc+/Mcb2+qCDnV08atM42/q69TcAAAD//wMAvqcBwHENAAA=
headers:
CF-RAY:
- 94d9b7561f204d3b-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 10 Jun 2025 14:56:46 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '12189'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '12193'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149998513'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_1098f5a5384f4a26aecf0c9e4e4d1fc0
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Sports Analyst. You
are an expert at gathering and organizing information. You carefully collect
details and present them in a structured way.\nYour personal goal is: Gather
information about the best soccer players\n\nTo give my best complete final
answer to the task respond using the exact following format:\n\nThought: I now
can give a great answer\nFinal Answer: Your final answer must be the great and
the most complete as possible, it must be outcome described.\n\nI MUST use these
formats, my job depends on it!"}, {"role": "user", "content": "Top 10 best players
in the world?"}, {"role": "assistant", "content": "Thought: I now can give a
great answer \nFinal Answer: The top 10 best soccer players in the world, as
of October 2023, can be identified based on their recent performances, skills,
impact on games, and overall contributions to their teams. Here is the structured
list:\n\n1. **Lionel Messi (Inter Miami CF)**\n - Position: Forward\n -
Key Attributes: Dribbling, vision, goal-scoring ability.\n - Achievements:
Multiple Ballon d''Or winner, Copa America champion, World Cup champion (2022).\n\n2.
**Kylian Mbapp\u00e9 (Paris Saint-Germain)**\n - Position: Forward\n - Key
Attributes: Speed, technique, finishing.\n - Achievements: FIFA World Cup
champion (2018), Ligue 1 titles, multiple domestic cups.\n\n3. **Erling Haaland
(Manchester City)**\n - Position: Forward\n - Key Attributes: Power, speed,
goal-scoring instinct.\n - Achievements: Bundesliga top scorer, UEFA Champions
League winner (2023), Premier League titles.\n\n4. **Kevin De Bruyne (Manchester
City)**\n - Position: Midfielder\n - Key Attributes: Passing, vision, creativity.\n -
Achievements: Multiple Premier League titles, FA Cups, UEFA Champions League
winner (2023).\n\n5. **Karim Benzema (Al-Ittihad)**\n - Position: Forward\n -
Key Attributes: Goal-scoring, playmaking, tactical intelligence.\n - Achievements:
2022 Ballon d''Or winner, multiple Champions Leagues with Real Madrid.\n\n6.
**Neymar Jr. (Al Hilal)**\n - Position: Forward\n - Key Attributes: Flair,
dribbling, creativity.\n - Achievements: Multiple domestic league titles,
Champions League runner-up.\n\n7. **Robert Lewandowski (FC Barcelona)**\n -
Position: Forward\n - Key Attributes: Finishing, positioning, aerial ability.\n -
Achievements: FIFA Best Men''s Player, multiple Bundesliga titles, La Liga champion
(2023).\n\n8. **Mohamed Salah (Liverpool)**\n - Position: Forward\n - Key
Attributes: Speed, finishing, dribbling.\n - Achievements: Premier League
champion, FA Cup, UEFA Champions League winner.\n\n9. **Vin\u00edcius J\u00fanior
(Real Madrid)**\n - Position: Forward\n - Key Attributes: Speed, dribbling,
creativity.\n - Achievements: UEFA Champions League winner (2022), La Liga
champion (2023).\n\n10. **Luka Modri\u0107 (Real Madrid)**\n - Position:
Midfielder\n - Key Attributes: Passing, vision, tactical intelligence.\n -
Achievements: Multiple Champions League titles, Ballon d''Or winner (2018).\n\nThis
list is compiled based on their current form, past performances, and contributions
to their respective teams in both domestic and international competitions. Player
rankings can vary based on personal opinion and specific criteria used for evaluation,
but these players have consistently been regarded as some of the best in the
world as of October 2023."}, {"role": "user", "content": "You are not allowed
to include Brazilian players"}, {"role": "assistant", "content": "Thought: I
now can give a great answer \nFinal Answer: Here is an updated list of the
top 10 best soccer players in the world as of October 2023, excluding Brazilian
players:\n\n1. **Lionel Messi (Inter Miami CF)**\n - Position: Forward\n -
Key Attributes: Dribbling, vision, goal-scoring ability.\n - Achievements:
Multiple Ballon d''Or winner, Copa America champion, World Cup champion (2022).\n\n2.
**Kylian Mbapp\u00e9 (Paris Saint-Germain)**\n - Position: Forward\n - Key
Attributes: Speed, technique, finishing.\n - Achievements: FIFA World Cup
champion (2018), Ligue 1 titles, multiple domestic cups.\n\n3. **Erling Haaland
(Manchester City)**\n - Position: Forward\n - Key Attributes: Power, speed,
goal-scoring instinct.\n - Achievements: Bundesliga top scorer, UEFA Champions
League winner (2023), Premier League titles.\n\n4. **Kevin De Bruyne (Manchester
City)**\n - Position: Midfielder\n - Key Attributes: Passing, vision, creativity.\n -
Achievements: Multiple Premier League titles, FA Cups, UEFA Champions League
winner (2023).\n\n5. **Karim Benzema (Al-Ittihad)**\n - Position: Forward\n -
Key Attributes: Goal-scoring, playmaking, tactical intelligence.\n - Achievements:
2022 Ballon d''Or winner, multiple Champions Leagues with Real Madrid.\n\n6.
**Robert Lewandowski (FC Barcelona)**\n - Position: Forward\n - Key Attributes:
Finishing, positioning, aerial ability.\n - Achievements: FIFA Best Men''s
Player, multiple Bundesliga titles, La Liga champion (2023).\n\n7. **Mohamed
Salah (Liverpool)**\n - Position: Forward\n - Key Attributes: Speed, finishing,
dribbling.\n - Achievements: Premier League champion, FA Cup, UEFA Champions
League winner.\n\n8. **Vin\u00edcius J\u00fanior (Real Madrid)**\n - Position:
Forward\n - Key Attributes: Speed, dribbling, creativity.\n - Achievements:
UEFA Champions League winner (2022), La Liga champion (2023).\n\n9. **Luka Modri\u0107
(Real Madrid)**\n - Position: Midfielder\n - Key Attributes: Passing, vision,
tactical intelligence.\n - Achievements: Multiple Champions League titles,
Ballon d''Or winner (2018).\n\n10. **Harry Kane (Bayern Munich)**\n - Position:
Forward\n - Key Attributes: Goal-scoring, technique, playmaking.\n - Achievements:
Golden Boot winner, Premier League titles, UEFA European Championship runner-up.\n\nThis
list has been adjusted to exclude Brazilian players and focuses on those who
have made significant impacts in their clubs and on the international stage
as of October 2023. Each player is recognized for their exceptional skills,
performances, and achievements."}, {"role": "user", "content": "You are not
allowed to include Brazilian players"}, {"role": "assistant", "content": "Thought:
I now can give a great answer \nFinal Answer: Here is the revised list of the
top 10 best soccer players in the world as of October 2023, explicitly excluding
Brazilian players:\n\n1. **Lionel Messi (Inter Miami CF)**\n - Position: Forward\n -
Key Attributes: Dribbling, vision, goal-scoring ability.\n - Achievements:
Multiple Ballon d''Or winner, Copa America champion, World Cup champion (2022).\n\n2.
**Kylian Mbapp\u00e9 (Paris Saint-Germain)**\n - Position: Forward\n - Key
Attributes: Speed, technique, finishing.\n - Achievements: FIFA World Cup
champion (2018), multiple Ligue 1 titles, and various domestic cups.\n\n3. **Erling
Haaland (Manchester City)**\n - Position: Forward\n - Key Attributes: Power,
speed, goal-scoring instinct.\n - Achievements: Bundesliga top scorer, UEFA
Champions League winner (2023), Premier League titles.\n\n4. **Kevin De Bruyne
(Manchester City)**\n - Position: Midfielder\n - Key Attributes: Passing,
vision, creativity.\n - Achievements: Multiple Premier League titles, FA Cups,
UEFA Champions League winner (2023).\n\n5. **Karim Benzema (Al-Ittihad)**\n -
Position: Forward\n - Key Attributes: Goal-scoring, playmaking, tactical intelligence.\n -
Achievements: 2022 Ballon d''Or winner, multiple Champions Leagues with Real
Madrid.\n\n6. **Robert Lewandowski (FC Barcelona)**\n - Position: Forward\n -
Key Attributes: Finishing, positioning, aerial ability.\n - Achievements:
FIFA Best Men''s Player, multiple Bundesliga titles, La Liga champion (2023).\n\n7.
**Mohamed Salah (Liverpool)**\n - Position: Forward\n - Key Attributes:
Speed, finishing, dribbling.\n - Achievements: Premier League champion, FA
Cup, UEFA Champions League winner.\n\n8. **Luka Modri\u0107 (Real Madrid)**\n -
Position: Midfielder\n - Key Attributes: Passing, vision, tactical intelligence.\n -
Achievements: Multiple Champions League titles, Ballon d''Or winner (2018).\n\n9.
**Harry Kane (Bayern Munich)**\n - Position: Forward\n - Key Attributes:
Goal-scoring, technique, playmaking.\n - Achievements: Golden Boot winner,
Premier League titles, UEFA European Championship runner-up.\n\n10. **Rodri
(Manchester City)**\n - Position: Midfielder\n - Key Attributes: Defensive
skills, passing, positional awareness.\n - Achievements: Premier League titles,
UEFA Champions League winner (2023).\n\nThis list is curated while adhering
to the restriction of excluding Brazilian players. Each player included has
demonstrated exceptional skills and remarkable performances, solidifying their
status as some of the best in the world as of October 2023."}, {"role": "user",
"content": "You are not allowed to include Brazilian players"}], "model": "gpt-4o-mini",
"stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '9093'
content-type:
- application/json
cookie:
- __cf_bm=8Yv8F0ZCFAo2lf.qoqxao70yxyjVvIV90zQqVF6bVzQ-1749567342-1.0.1.1-fZgnv3RDfunvCO1koxwwFJrHnxSx_rwS_FHvQ6xxDPpKHwYr7dTqIQLZrNgSX5twGyK4F22rUmkuiS6KMVogcinChk8lmHtJBTUVTFjr2KU;
_cfuvid=wzh8YnmXvLq1G0RcIVijtzboQtCZyIe2uZiochkBLqE-1749567342267-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.78.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.78.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//rJfNThtJEMfvPEVpLjFojGxjEuIbJhCi4ASFRKvVJkLl7vJMLT3Vk+4e
O06U8z7LPsfug626bbAhJgm7e0F4aqqmfl0f858vWwAZ62wAmSoxqKo27WHRlEc0edUdv+q9pl/f
2P2jsnz38dXw7J3GLI8edvw7qXDttatsVRsKbGVhVo4wUIzafdJ/uv/4Sb/bT4bKajLRrahDu2/b
FQu3e51ev9150u4eLL1Ly4p8NoDftgAAvqS/MU/R9CkbQCe/vlKR91hQNri5CSBz1sQrGXrPPqCE
LF8ZlZVAklJ/W9qmKMMAXoDYGSgUKHhKgFDE/AHFz8gBvJcTFjRwmH4P4JQcAXtAcDRhIQ2GfQA7
gVASBFtDtwNj8gG8VYoc1Abn5DywpDtm1hkN6KPHaxXsmBz0Or29HEh841gKCCUGEAtDh5/ZMMpN
DIzPFmUaTXrwXt5Ldxd2ds7YChkYkfcMrRcSyMGIsWI4Otne2XkvANCGc+s5FmkAJ9bN0Onl9Zc0
h8MQHI+bQH4AzxyPx4alyGHKnq3kUFg0ba9syg7HbDjMd5fuh6pkmlJFEvwARo0JXBuCIRpjBfSj
1w5mLEIuhyNbIxxW5FghqBKrOkX/JZ3IUVPfXINWr9Prbe9Gwl4kfDlPxzAaY13/9Se0ztGxhwtk
Ce3n5CpkeTjoRU2kcwikSuGPDeUwYWFfshSb4U5enBzek233YDuH6hr+jIuGoAuBgyGfwxQd28aD
thX5wApUU/tEtxfpjl08bjhFNCgaWiMUVZKPZTziMH842bmdxfP2C8Bb1WPxgUWFzYTDRjR5wwWm
To5OMdC745NDOFriejgjjHyLsqZa7W3ncO6oYnLX1gV7guynEtKUBZ4RDF0zF/opyhHrCZPR5O4D
jVO+3qlp9/D0x/25MdscImdT+59CTmz7iQ0dVzAk+UwVQuvQtF+EwCXqh5fu+Vqx8jT3FV6l/wOq
wAoNsAQyhgsSRZsp4/RsnMCbDr1L5mHGoYQ3hAZGqB3rRPc40r2JOyrAGc1QtJ35K4bWyREM0Sky
VvDhkCfXc5ZDvXRJP5Aco/n+hklDOIz7dUTyyMN5Wo1raOs9vKzqGcaRxNv7ZVnAJxFxZEusSMMF
GiyhdcZTcrW15l8vlcmKUF/v0808dxpxtRYXrfj9TkwEB+kd0FwhjKx2/Pcf0Fqr4/8zVw9ovtF9
PXZTjm87c7lCE87TiHOKzs3hJcZFMYwFFhg1wqr8rxO1tu1Xw7UZ5Lk1mgSG1oZvB+ie/ZGKddw4
WxPK6gRKrsE1MUS7qRNltxMxL6zAKTVSRC0Erbc2BJISKzi1wdeNu6a9F/enOvAu6I968Lvg6++w
9SX/tmS/kEIlehgTCSh0NGmMmYOjKXvSECzQpyRfAI3ZIHBmJRuCkovScFGG+MbytqJreVVZHyCg
IQmkgUXzlHWDJqmrpd76VlrtwjGqcvmMlJ4v7UxhzMhRhe4Kx4aAJhNSgack5D3EN7G/YmNyiKox
HW9KhwvhCSuUYOYRKJTEDgJh5cEKjG0oV4cUo8SRcYKxeGjAByzI766r0nhKHqMylsaYNQOK2JAc
kx7+sLR8vVHAxha1s2N/xzVbFP/SEXorUe36YOssWb9uAXxISru5JZ6z2tmqDpfBXlF6XK/TO1gE
zFYKf2V+vBD1AFmwAc2a3+N+L98Q8lJTQDZ+Ta5nClVJeuXb7R2s5D02mu3K1tlaY/82pU3hF/ws
xVqUe8OvDEpRHUhf1o40q9vYq9scxa+g+267OeuUcObJTVnRZWBysR6aJtiYxbdJ5uc+UHU5YSnI
1Y4XHyiT+nKvj/t9pKd7Ktv6uvUPAAAA//8DAMc1SFGuDQAA
headers:
CF-RAY:
- 94d9b7d24d991d2c-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 10 Jun 2025 14:57:29 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '35291'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '35294'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149997855'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_4676152d4227ac1825d1240ddef231d6
status:
code: 200
message: OK
version: 1

File diff suppressed because one or more lines are too long

View File

@@ -199,4 +199,833 @@ interactions:
- req_2ac1e3cef69e9b09b7ade0e1d010fc08
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"input": ["I now can give a great answer. Final Answer: **Topic**: Basic
Addition **Explanation**: Addition is a fundamental concept in math that means
combining two or more numbers to get a new total. It''s like putting together
pieces of a puzzle to see the whole picture. When we add, we take two or more
groups of things and count them all together. **Angle**: Use relatable and
engaging real-life scenarios to illustrate addition, making it fun and easier
for a 6-year-old to understand and apply. **Examples**: 1. **Counting Apples**: Let''s
say you have 2 apples and your friend gives you 3 more apples. How many apples
do you have in total? - You start with 2 apples. - Your friend gives you
3 more apples. - Now, you count all the apples together: 2 + 3 = 5. -
So, you have 5 apples in total. 2. **Toy Cars**: Imagine you have 4 toy
cars and you find 2 more toy cars in your room. How many toy cars do you have
now? - You start with 4 toy cars. - You find 2 more toy cars. - You
count them all together: 4 + 2 = 6. - So, you have 6 toy cars in total. 3.
**Drawing Pictures**: If you draw 3 pictures today and 2 pictures tomorrow,
how many pictures will you have drawn in total? - You draw 3 pictures today. -
You draw 2 pictures tomorrow. - You add them together: 3 + 2 = 5. - So,
you will have drawn 5 pictures in total. 4. **Using Fingers**: Let''s use
your fingers to practice addition. Show 3 fingers on one hand and 1 finger on
the other hand. How many fingers are you holding up? - 3 fingers on one hand. -
1 finger on the other hand. - Put them together and count: 3 + 1 = 4. -
So, you are holding up 4 fingers. By using objects that kids are familiar with,
such as apples, toy cars, drawings, and even their own fingers, we can make
the concept of addition relatable and enjoyable. Practicing with real items
helps children visualize the math and understand that addition is simply combining
groups to find out how many there are altogether."], "model": "text-embedding-3-small",
"encoding_format": "base64"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '2092'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.78.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.78.0
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/embeddings
response:
body:
string: !!binary |
H4sIAAAAAAAAA1SaSw+yTrfl5++nePKf0idyEarqnXEXASkEr51OBxAREJFLFVAn57t38OmcTk8c
IBGE2muv9dv1n//68+efNq3ybPzn33/+eZfD+M//WI89kjH5599//ue//vz58+c/f5//35l5k+aP
R/kpfqf/viw/j3z+599/+P8+8v9O+veff+ZECgOidjuPT/hggOYz8/B5WXJjKhdrgcXu+QpqzjpU
fHRkJbTCW4nP3zI23k+iNugeb0qq9aECplduqMgw4oEQ79P11KnpRe7JhmCtVCkjaRmdQPymFnlr
alLNxrOD0FHcF7aCU+zx4if3AdioPj4Y2K+W8UIT2IeKGkz3BnhfmzMXhMhxhw/EruKFp/kWHL5S
Rd399RvP3VZSAdhPHc2G/mvQ7ZJPECZiSrWnMXjLt25zkGDgYS1J7X4e5CVBrDtrNIqXOObbdEhg
ROwPxh/vyKb6ClTY3d0d9UThysbDPCwwwwnEKvAoYO0YdUhr0iGY+TruhV2VnaAT4xrbtfoFQywZ
KozwpAdXsNlVAhNFE87oqWHTwbAfzlrpoFZo1GB5DAmTpjsNAZeNO5yk8SedYWISNNrcg+KxbEC2
C4sJ9dXtQk+NdKgk+XtbIH+J+gCiAqXU6h86VKTwRC/vsUynp+tE6P60Q6py75e3zCGywe65SMH2
ethV/FLVIthevw69JLYe88BmOVTVvAvGgtcqnhNLHbXz/h7wdW2nou/vM9jm1QcHlhIZ00s+OrBi
QkSfwsiBiVotQSdU37A6uDmYLevmQzxFEd7nxdJTgNISnqOhoImdvPvFfNEcpqpQYtuzjh4f3r4+
CM5KS+Y+VBizw6H5+z6vocWDIVgYROWyhThwhI2x8PSigNcMG+yj4pHOvfsqlfen4OnFferp0m0P
BXQsNaHnk6FXYsjXLcKhltFry12qqZNuPqxvAoeNnXz25nl4qjDUlxbvz8HEFul5KEGjcg71d9Wr
J49BTqARKjMNqH6PheNO7dD1MX+pIfQ3JpzdRYeNNXLY/AZ6NRePYwl3pmlS3XOu3nD2DqLiwFbC
l1ta9PzEpwoUIiAEKLo/PSkLM1P+trNIxm1z7ykpThw6yJDDYWQ4vRhvXglyVasmUOOXeIzFYwFP
cuzTdT1V88OJE2TfnATne+gwiRZKB+F4mQOFM2WwlGflBKO+PmJf45d0fkh2A0vd1PDxMzqxSK/d
BD/TwaCPQBwq3qNFCEmS7rGWaueUdVlfw3PouzivjachVAfVhZcr1KkT2Q828dnGAbODn2QWvnYv
GMElQFonxngv3U+pVNC4gVKZNuSSkA+QhjPawn5oGFXtrK2Wx7GK0A4ML3y+cDQeQHh14TeeXRzG
08ug+CsvUDFfDtk+rFssHIWLA9vZu2M3+pzSuXkVNTpPoMO7d1p7U+/FOrzdRYL3U/7piXfGGbix
UKdHGs39OJzVGrG8PeDw0c5sycqtDvnmZuM7oVtv7rYbFV4jU6RxWpdVS54sQ6Nwe/zepyFwqWMj
EJsV3d32eirQl8TBjmg3et9/dxX/yg0dvnMf4IPLC2zOhk8Ji86K6HOvW5449e8SNaKN6eFEHjHv
yR4EBbt7+EbSXb+g8QGh38oH7Jbinc0mew9IAmNOw+3W6dnTNW20i44BPUWqXkmc5BVw9CRC7etc
G0s5OzlsP1KBD+ZTYmycvwPwkvNCd35ZpkxhkYMeYU6wZWZlz+rvK4L32kb0AHWtEk/biw7N3t0E
pfjOKnYwzQbdx2NCVW+IK0l9mDXyNE3BVqraHo0d+QLrk+1gk1ZqJV5bs4NvxXCpWn/8mE/54Qat
e73BGl/HFf/Y+1u4JVmHb9rxWy1m4N7AkKoMa2QoGTvHYQLbiMNUhxedSUyYJhQr40xKeTOy6SS1
A8jApcGH/OH30/c9LoCiTYgDaTj3bJOlNrL2z5p6THGB8MaiA/2Hf8fnu5yl/EnzM/DuRRVj7zyk
0+tDCLgkF4T30JgYm17hBblhsCEKs56p4D1KHQmRLOBdJBUxFWIeovFbBzgOQtcQV32F7qnwsHeO
ZMaie3GDR7cssJoC2Vj20T2HQZxENIvdV7V4Xj+Bjfwc8L5NEu9Xj6AZpgtObtw+FdlxypB+D2N6
MV0zFfYjSxDpyhrr78lJJYDiAn08E2A36TUgnGW89uekDHripdWS2KQErplMBDrMZ9MzriBkVqBg
9XrM1nqVXEhx1+PoReZ+kA8vB8Yb9xZ0PB8AYeZV++//u+jpibGDUvDQsfQkgM9rmM6DrCSw8gMe
62v/EpRCuUGznUxq4bttvFXKnyDXaCp9uPwZTC9fDNHenyKcp+JkLFtFvoB0+9xg0yQFmzoXncDv
98yNq1ZS7+U8iIfy89OHfnCEKdpqjx3D1uLTeBFOJIdO4EvUKV5qKmaozRRyUwOalGHvsaj6Tkje
DjLdyySqFk3+KvADZJFi9day0boJIbraiY6NV8biqYY1j7Ran+m+LPexCHijRXXpSUSpkNEv0LEz
WFhmEwg23TO+c9EFXIxXSq/385NNxRbkqDzVV4qjU8OoJ48Z5EQ3pansXoxxYl0ClyII8PMFbMCP
58VE589nxpohDN6kA38Lq3Mxkhm2midJp+mEWl3kqb8zHzHlyTuBrcQB7Avnd7qkt5CDeRs0WKU1
BbN8mXU4vMRLMNDmzCQuOg7IO2cvmoqCBFjvXXiAU8fFjzhfDMZS9YS6z/mIdeHaprPQDAOEdsrT
nXzIPEm/5ArEp0tLlI1XVYM+GwU6+nJKcfOuPJ4EDYGin1dk+529lP3WI0z4FB+PsxtLrjopqHod
XtjIfCeeA4vWkAfJEkhH9wNoKS8tSgieSN+oRj9VhxSi0+NaBnz/2gOpdQoePbxcowfbn9IF3bdb
SJ+TjePeK3u2hN8Q1oe9izH/7PoJf8Pwr77GvCp5yyY2fHD4ZC1+bBu5mrYHL4OyUS80Izetl3b9
MkBOdFLqmP6lWuovssH7reYBr2+O8XTIZR96yXUJGHftAcMoF8HyvfUYX6CfsnZMOnTvlCcO1n4r
NP0XwlVvSHWXOm/5hKqKxEDK6WF3rNIZL4WLFi69YjMOLoAah5sCizzJqRnGncdIkXGA25EUO2/K
V/S4Ezhg7ewvtn2+6PmZO0cw7wikJslEMPB168OhdwJq3eUs5pHon6BbnBKK97plCEWDfPh4wCt2
IDpVdPWXCgmGM72qQI8ZheoEf3qmX3ZzNdVXpiMmagINci9I+QmXNqqI+KY+U3fevFR3HZ6F0xHv
dmER0w+nDagsYg87ihoC6RzfEqhz4YZsIxux71N7D4j2mUJmwJkVT8C+QY9PGAZbPG/6cWzfHYwT
/Yj11rIrNn6KC+AyuiMzF+tgEh9KATTZeFBtrYepa4wMICGwMNb7ySPbZ+9DJc0saq9+l46f9gLn
p5tRcyuUVce7OAHg3MhEOjhHb75/Y15ePNvGFrRFwJbwFYG1XnFoXc/VdNL8HNRjopJvdvBikaWZ
q1yMKqWed4tSWl85Hr7zAFD11Tk9w+jCK93negy4NHylS4gVHsovcA06QX6x9X5OcPXbVN0wIaaw
rExYqaWGD9HWiMnbKiBK81qjUSoWYNa81gcgS1KyHQg2pHisXDR6AqHuxjP6v/04r+QE4+ZteMLP
v7X37kIPVu0C/llPBCLve6GuFgOwTP1YKuN5/yHzsHHTyQBljXKrFqjxLQog3g/yFrYRxARMhcyW
8WAMIN9eNMK0+mpQWiwdaG5nir30I7HyV3/5mYTBIpkFYGz/2MLWwhRj4pyrJXJoBkQ/q9Z+6sdk
Q44XiOvcoMFz27DpM8QZkl/ylernlw4mKQ99eNudbvi+zASw2JkvaHvtHRrDjqV/+2UPh4BGTHc9
UUGfi8K3SkZ1H7+8KbsziHbO5vzLAx6Vv+EC62TBhH68GQz8PjHhYOs7MgXakU15nargg2+74Bhd
ZW+Y730OO2LcqMciKxV5etnCl3tyCODLsJqqpc+hbrclzVT6jJflGE2A7WiI93o3evOQ1yHkzvIb
H6L7xlvuX0eHN5PXcTi4HGDF6ZBBA/IJtcpq2887rswR2VISREzvjGWCS/Q3T9+tT8j4y/Q4AVWX
3mQq3uee321iCJf37hkoG8+o2JuVLbztLjdsfQ48G5qQtrD8vo2g0JMjWMKujxS+NJVgU+2BMbrb
KIN5N8AAOa+gJ30s3+DRGDgy7hutWrLjoQHqdrcjkbqHjDmp1/1dn0YUFYwmFlKBVW632PXMF5u/
qpsBVRfeRFIzzls0WpRo0jeIWr88/wnlAhhJ8MW2nvJsWjgFwp9/uHWbTV8f0UTAej80H/m3txAg
Kmjth1QlTwjI8Q4DGG+cGz1N89lYIqHVEfwab2xMD7Pib1/awbc5t9QASetNt8KDAC/fw/r8JsbY
ZnER+iJA/e9RAqPvnyNYeNOIrWfrVTS9bxbw6o0WmzvN8VhlHhdUGeoVW9Flz+bNlpTgvfcx2W6o
6k3SByjwfb3fycwUyRs4sVQhXK55AF9D0C+S1+YwcO076ZO7UUlL+ArR6m8p5q7Am992ksGLGH6w
phwHg73kjFP0vbPFp3N1NsZvTAu41hc9VVCtiHEYCJQa8UAOw71jc3rXMzTPU4B3d742WAKPEXI+
nYetfScC+mZdC3/Pe99YANBm987AqenTYDM9GBtXvyufP++ZTM9zFy96e3bBwt2veP9KN/20rjdo
MT3+63/YbVZ8+DjwkPQRAj05P8YG6FMPCDLwUC0vb4KoUgsN30X2rqbw9goQSqc5UK7SJiX3pSLw
GR02dNVXNjf25MDFXhTCe99DRV6KnYF8t5nIzz8J0w4XUAyEnGpdNxijGxcdbF7flnB72LJZsYYQ
8g/7QJjKtfHq37ZwrY/gUT6NeOUROUB6JtNjnOtAUoL2AnHquthhxsGjuaoTONjqjhrGEVfDY28q
8Bjzd4wfcugxf9IWxI7aHueKxsDS4O4El70n0MA/7jwxrtMQ6oJdEvPA9WDZS14It7OdUfd6nHp2
L945ZMMIsV/OWsXQTulgm7kD1rO8qSaPFtHPP2KvaaKUCTGE8Drd02B7FQ4GrZTwgqoraMm0069x
1z3ONjQ2h3Ow7BUd/OUJBxK5QfdQC29ESXeDMjhgbHdoTNf3FciOIJzx+XM4rfpxPkFjg8+klmzm
fSfW3ZTVb9PHYX/yFkHb36BPB0rjrT4ytjX3IaqbJg+mVa+Wbj+piseYjTXEcxVBlRJBO+dfeEcs
NRaeRK3RpH50aj+MxljurxxCrIlRgBPOMNgjAS5cogehh8dNrkhgcja8sOVIFHjfGmy6f0L443s7
MlpA5KKxVDRr8KkF8qOxXPMWwvKyiagbnkZvfn2iLZIBxj9+ELNV30FW37/0serZfFVcRanBkwbF
6WYD/g4iG9HnYmMruRX9rOWfBpZCLGE/px2Y1voFF/dQrv23ipn77gjIFmdP17zR0wQeQ/ispYl6
t6E3WiGeXUTr7w0b7JJUkw5MBU6c3NCTftd78SGebURBc6YWlY7GcJ2ADt/FGOFIC+1etG7eAqMo
vGN75WuDG8McgnMtU9vrn2yauXMID157x8+Mf3gLKEUF2m6q4AB/tXRqwk8HDVHf0l+9jzDxCfwm
zUB3e1tJZ99/hL98ToRT73h/n//1tr2Ry3vU46Xd3LYoTbyE2ppgG9M3cgi4lv6RJqPUG9O5STlo
DGRDpOvTYlI7Ji1Y+VfAUV1OZxpEChpt+KCPOI+87zEcLlDdRf7K9zgwK/jdoeoqt9iF95ux8qcb
MB8vheIdSL1lkNUtMNvFDDZZEMRLAKITyBB9UY0p19Wfez6smzoP0LVowShoyhY8HtyV6iQxeyE4
hB36LjtK0C8/RunTB4pZOQSsPHbZp4KDNvJjoJalz/HyUoIcjhK7YXwYtHRAx6SAP/8DoUJ6qrfa
BTp30GO7rzJjGeYrD1G6zNSaG7Wf+ublwsUzbfzXnzySY4uOD13Eq5/2Fi56F7CmE6B7kVm9WGj3
Lbh+dmXAbxbbEL+RqEKn02ysOwc3FstnrgOK255eTptbNWTce4ErrybS3k7iZXrVDshzZQx6rdCA
EIvHEh2k2ljzyataBtlRUPZQNfxs3pVBAD7y0DooHnWmyWKsjcwC7mliBDeuvcTLq3klCOyXbvVj
Qs/gLuNgfTIdfBO4xBDse8RBS0gDvL+AfcyP36sDf/lit+ax6Yi2BKz+hcb5E/dz+JpttK2uEGuz
9jK67GjVMBbEgnqcbMXCYOQuNJ1XgB+NLrApvechPB4vBj7ortzTSrmd4O7SJNhldVURK2htKAiv
Bseffc9W/1FD2UIj9Vce8hY9GP7WL46G0TcmlIgBsKL5gTGah3QmX+8C1w4X3NgXsNnivQTOfFQH
U/EWqjnlwwa07/yNtdMgxYvkFdmPl6z++xyzrtmFci3bfCAkN7UfRH1IQC8oA3kvPk4Hy94P8MqT
D0FkiAxem0YRnI6LG0hKhL3VH7UQOicL78vyG7PRj3z55aQS9cv51U/fSCXIvadZwPvyFoxhaHdQ
S8iWrnmPTSlWaqBm+wfVY9+spOPdJWBdj0ScldlYyOF1g3HsU3xLJQNI5tImsJSHnD4W0rJ55rQc
xepwp/rhIcTT01UjWEq+g73b4HmSJ78zCDewxJmdB9XCxDL81QPhsIs8ejD9BgI/Vsm7CI/p+NK/
NvzgZIft9fp/9W/NQ6Rud1vAuOhI0JqPg6pnjiH+eORvfoLT+vjrhxFUHoKKHfVbe0ueDq0iHIBD
xIqL2DIarQ795/tEz6naGNOTLCZQpmzBfndxGRNPUQtXfcIBP7fs626TDFx24gfbq3599fO3hvh0
avEZ2hcw7TYp95e3OIP9rYaNtnfhwevu1H17N6/7JNoNrX4C7wyRGsPKt4DNuC/dn0UxnfbvvQ/n
4+uGcc+u6XKqJw7Zo9bjw7a5VzQarRw2b8gH7cpfV17RgCCrKV7nJ3/1Dd32Y4w13YmrVY8HyI7G
nsiNlbL50qgOuATdHrvXY1gt7y6aYMWkiDq5X3okrtMIao5fUsMN4nRBGzVBD66XAhgZVzAmt2GB
a97EwXyVqnkerjqCTdthzzL4dLkZUwE7zmFE/j5Hb1C1pYPotMHYrGs7/hqKbMtTYPfB8TRc4yVF
7gSuU5pS21U5Y0SOBEHvnxE206iOx9IvM3jl9Cf17GlgU9d4GQTWMw+mWQ48FiybBfZVcvnx3n7p
yEsE+Ss1sbtr74x91c2kqLvQxykNST9fjaYD+EKrgK28Z1r91s8/Yo2LdTbja6TCrFQJ3YlbBMi1
9TswnPAxEO1HX9HMnhKUU9Whz/ATpOKxem4VqBUVdUep9xZur/FwYpcg2Kz8hd+29wb+eK/Z7rZs
7XeKsj4PaqqD6/F+Hhdwo0mYgM3zZCys3vDwEFZ9IHhXmq68eAInraN0X+DSmPZ7YMPH1RFouOaR
sbKu9m++FWxW/eejyokgaHcYuzxPGPNPoAWf8sMR8fHlGZ1e4QndnYhQZ9Un1j+0CAbWfKDqOr/8
PEJ5UnZLYdLUeVg9Xx/kDloctdf5KAZ8e+xVeHRUlzSaYHstFEgLf/161fNqHpXQhl1a6UGv22G/
jO2dhz6SZWqs/HkGvOCikBuOGFtqEIt6+3D/9utg92Zsviq6AhTovfFhne+JF3pT4ai6AGMmmUDK
NrtQOQuXI1UDbQZ/5ynqvt/h4PhM00HVlBYGFjsE2218ixn+zhPkzuCNnZXnDHtSb9GT89qVp4k9
XecDcJ0P0B9fXCpzrH/9m+orb2Z+jnQ4deqZPu9Ht19y97mFewtO2B2xVf30X9nhPqPmZrG95XaR
FKin34Bq0fXurfmh/PFAen2a2CPfus3gpjSt1c/HgD83MYTTfavgYMncfuWbBIKN7hM4A95bx2uc
ojlBSd7BiRl05rhJKS8oWvP802BZeDLR1hNlrGllzeb7N+XhjDWN7prvNl4W0Wwg1Y1P8FjPn/Ps
4kLEYn71C29v+PHkw1eoqCskqJ86V7iAdZ5DFreYq3k5khr85kP7/XfXM3B7ERTDTsf2uE3S2Y3b
9ufPME64ylhOcVCCUx0W9LKVeDCt80QIl3NOg/7ZsDHFmo329V4h0nD7AH6j7R249KxZ8/AVzPEG
+cA+XrbY1oTGmMenBtE/v10B//WvP3/+12+HQdM+8ve6MWDM5/E//nurwH9I/zE0yfv9dxsCGZIi
/+ff/3cHwj/fvm2+4/8e2zr/DP/8+48sin83G/wztmPy/v+++Nd6sf/61/8BAAD//wMAMvIyqeIg
AAA=
headers:
CF-RAY:
- 94f4c6967c75fb3c-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 13 Jun 2025 21:45:35 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=I11FOm0L.OsIzhpXscunzNqcbLqgXE1LlRgqtNWazAs-1749851135-1.0.1.1-R2n01WWaBpL_jzTwLFUo8WNM2u_1OD78QDkM9lSQoM9Av669lg1O9RgxK.Eew7KQ1MP_oJ9WkD408NuNCPwFcLRzX8TqwMNow5Gb_N1LChY;
path=/; expires=Fri, 13-Jun-25 22:15:35 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=TSYyYzslZUaIRAy.2QeTpFy6uf5Di7f.JM5ndpSEYhs-1749851135188-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-allow-origin:
- '*'
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-model:
- text-embedding-3-small
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '76'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
via:
- envoy-router-86bb9759f6-ksq4x
x-envoy-upstream-service-time:
- '80'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '10000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '9999496'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 3ms
x-request-id:
- req_335974d1e85d40d312fb1989e3b2ade5
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "Assess the quality of the task
completed based on the description, expected output, and actual results.\n\nTask
Description:\nResearch a topic to teach a kid aged 6 about math.\n\nExpected
Output:\nA topic, explanation, angle, and examples.\n\nActual Output:\nI now
can give a great answer.\nFinal Answer: \n**Topic**: Basic Addition\n\n**Explanation**:\nAddition
is a fundamental concept in math that means combining two or more numbers to
get a new total. It''s like putting together pieces of a puzzle to see the whole
picture. When we add, we take two or more groups of things and count them all
together.\n\n**Angle**:\nUse relatable and engaging real-life scenarios to illustrate
addition, making it fun and easier for a 6-year-old to understand and apply.\n\n**Examples**:\n\n1.
**Counting Apples**:\n Let''s say you have 2 apples and your friend gives
you 3 more apples. How many apples do you have in total?\n - You start with
2 apples.\n - Your friend gives you 3 more apples.\n - Now, you count all
the apples together: 2 + 3 = 5.\n - So, you have 5 apples in total.\n\n2.
**Toy Cars**:\n Imagine you have 4 toy cars and you find 2 more toy cars in
your room. How many toy cars do you have now?\n - You start with 4 toy cars.\n -
You find 2 more toy cars.\n - You count them all together: 4 + 2 = 6.\n -
So, you have 6 toy cars in total.\n\n3. **Drawing Pictures**:\n If you draw
3 pictures today and 2 pictures tomorrow, how many pictures will you have drawn
in total?\n - You draw 3 pictures today.\n - You draw 2 pictures tomorrow.\n -
You add them together: 3 + 2 = 5.\n - So, you will have drawn 5 pictures in
total.\n\n4. **Using Fingers**:\n Let''s use your fingers to practice addition.
Show 3 fingers on one hand and 1 finger on the other hand. How many fingers
are you holding up?\n - 3 fingers on one hand.\n - 1 finger on the other
hand.\n - Put them together and count: 3 + 1 = 4.\n - So, you are holding
up 4 fingers.\n\nBy using objects that kids are familiar with, such as apples,
toy cars, drawings, and even their own fingers, we can make the concept of addition
relatable and enjoyable. Practicing with real items helps children visualize
the math and understand that addition is simply combining groups to find out
how many there are altogether.\n\nPlease provide:\n- Bullet points suggestions
to improve future similar tasks\n- A score from 0 to 10 evaluating on completion,
quality, and overall performance- Entities extracted from the task output, if
any, their type, description, and relationships"}], "model": "gpt-4o-mini",
"tool_choice": {"type": "function", "function": {"name": "TaskEvaluation"}},
"tools": [{"type": "function", "function": {"name": "TaskEvaluation", "description":
"Correctly extracted `TaskEvaluation` with all the required parameters with
correct types", "parameters": {"$defs": {"Entity": {"properties": {"name": {"description":
"The name of the entity.", "title": "Name", "type": "string"}, "type": {"description":
"The type of the entity.", "title": "Type", "type": "string"}, "description":
{"description": "Description of the entity.", "title": "Description", "type":
"string"}, "relationships": {"description": "Relationships of the entity.",
"items": {"type": "string"}, "title": "Relationships", "type": "array"}}, "required":
["name", "type", "description", "relationships"], "title": "Entity", "type":
"object"}}, "properties": {"suggestions": {"description": "Suggestions to improve
future similar tasks.", "items": {"type": "string"}, "title": "Suggestions",
"type": "array"}, "quality": {"description": "A score from 0 to 10 evaluating
on completion, quality, and overall performance, all taking into account the
task description, expected output, and the result of the task.", "title": "Quality",
"type": "number"}, "entities": {"description": "Entities extracted from the
task output.", "items": {"$ref": "#/$defs/Entity"}, "title": "Entities", "type":
"array"}}, "required": ["entities", "quality", "suggestions"], "type": "object"}}}]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '4092'
content-type:
- application/json
cookie:
- _cfuvid=63wmKMTuFamkLN8FBI4fP8JZWbjWiRxWm7wb3kz.z_A-1735447412038-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.78.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.78.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAA6xW32/bNhB+z19x4Mte7CJu0jb2W5q2WJt2KLZsw1AFxpk6S2wpHkdSdtQg//tA
0rKU1gaKbX4wBB7v1/fdHe/+BECoUixAyBqDbKyevlTP//ig/vy8+iu8b+qvZ7Puan6+wnc3+v1v
nZhEDV59Jhl6rSeSG6spKDZZLB1hoGh19uJ8fvFsNjt7ngQNl6SjWmXD9JynjTJq+vT06fn09MV0
drHTrllJ8mIBn04AAO7Tf4zTlHQnFnA66U8a8h4rEov9JQDhWMcTgd4rH9AEMRmEkk0gE0M3rdYj
QWDWS4laD47z7370PYCFWi9Pb66v52x/veDrd9cb/uDo7rO7vvhl5C+b7mwKaN0auQdpJN+fL75x
BiAMNkn3Bv2X1xvULR6wACDQVW1DJsToxX0hfFtV5ONdX4jFp0K8NVK3JUHDjkCZQA5lUBsCNCWQ
qbBSpoJ0poIiD2t2EGoCWStdPinEpBCXZQkb5VvUgKr0wA5Kh1tlKg+BoSZtQWnd+uAwUNZmI8kG
nw28NZKd5SStsKFkYt0akDVqTaaiZMiRMmt2kkATOqNMldU/Ot6okgDLUsXUUINNaUjUQHcYq9BD
qDFAxbCijk0JklsTYm6BgUyNRhK0piQXa6PMtm8nhfi7Ra1CV4jFfFIIMiHBEMG7LxINhVgU4iV6
JeFyF0CKKtKbZB8w1HDDVsl0XpKXTtl8b1GIy5hpiZEl1D0woAw0US9FrcyG9YY8SG5WyqSotxxB
SrSZtlmRSxBVFADB0BYCB9QZH0c6lYevle1p94Cw5uh4h1jy5r8orRPFhE53GWdyPrOU/DdsdAcB
26oO0WOqA0cG0EVziV2sCHgNzwtx+zAZw/Q6k7GAqx79SxvJOQDY7uphyExPK7SeykThndWoDKwS
EX0lwKobiLZ151NJ5DHlJ4DJecQ61MqDRE9HEXtFDZtcwX5cwjHRvbtQO26rGhyhnmq1pt5X7hNr
tZK40pS7iFDWMbABwn39HMPthju4Qvc/ADZqyINoBe5AovNHAXnTulCTG7oyw7I31uPTY4KQjKT0
vSSDTnGy/TNp6wcQ8iRRX+kH4HiVxwx8VDK07r/UUeqz3F7kAV1IAzC34FBd++S2KtRgd16PQvTa
SG4dxvGV3r44QzvY1io67NnH8dB4VGXDEMNcOcn1rt5M1ff9cXh+9/HaG2Uq+jclM34OtthFIFLU
AwxtcrDODgD9uHxYHwfmo+OGU4p9T6anhuIQjO3Yz/dEQDJyxcbERsqUJPj3r9BPHnhrYMVllxpr
RSGQezzNI0i3D+LR+/hwcuj7dvT6O1q3HvX3awEawyFnFfeC253kYb+CaK6s45X/RlWslVG+XjpC
n1524QPbHFYMITkX7aPtRVjHjQ3LwF8ouXsxn2d7YtiwBunZ6dlOmp6AQTCbnT6dHLC4LCmgSgvO
fqeSKGsqB91ht8K2VDwSnIzy/j6eQ7Zz7spUP2J+EMg4TahcWkelko9zHq45ivP22LU9zilg4clt
lKRlUOQiFyWtsdV5MRS+84GaZS5u61TaDsXaLs/O8dk50vxMipOHk38AAAD//wMAfAt9/isLAAA=
headers:
CF-RAY:
- 94f4c6a1cf5afb44-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 13 Jun 2025 21:45:40 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=RNxOcvndh1ViOpBviaCq8vg2oE59_B32cF84QEfAM8M-1749851140-1.0.1.1-161vq6SqDcfIu41VKaJdjmGjwyhGQ3AyY0VDnfk1SUfufmIewYKYnNufCV49o2gCDVOzInyRnwwp3.Sk2rj9DoDtAbcdOdEHxpr34JvDa8w;
path=/; expires=Fri, 13-Jun-25 22:15:40 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=uF44YidguuLD6X0Fw3uiyzdru2Ad2jXf2Nx1M4V87qI-1749851140865-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '4149'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '4154'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999368'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_437679f0db60c9eec6bbdf32d399f960
status:
code: 200
message: OK
- request:
body: '{"input": ["Example: Counting Apples(Math Example): An example used to
explain basic addition by counting physical objects, apples in this case."],
"model": "text-embedding-3-small", "encoding_format": "base64"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '211'
content-type:
- application/json
cookie:
- __cf_bm=I11FOm0L.OsIzhpXscunzNqcbLqgXE1LlRgqtNWazAs-1749851135-1.0.1.1-R2n01WWaBpL_jzTwLFUo8WNM2u_1OD78QDkM9lSQoM9Av669lg1O9RgxK.Eew7KQ1MP_oJ9WkD408NuNCPwFcLRzX8TqwMNow5Gb_N1LChY;
_cfuvid=TSYyYzslZUaIRAy.2QeTpFy6uf5Di7f.JM5ndpSEYhs-1749851135188-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.78.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.78.0
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/embeddings
response:
body:
string: !!binary |
H4sIAAAAAAAAA1R6SRO6Orvn/v0Up87WviUCkuTsmGcJggN2dXUBIoIiMiWQW/e7d+H/rdvdGxcY
BZ7k+U3Jf/7rr7/+brO6yMe///nr73c1jH//j/XaPR3Tv//563/+66+//vrrP3+f/9/IosmK+736
lL/hvy+rz72Y//7nL+6/r/zfQf/89bcQPDriHh8DoPtGkCHJhgLfnF1fExk5Jry0lo29W9H25Kvd
bLA1e4CN/JNqixDFFfLfuCDmIo9g8sGbQ9MtP2MvnId6EqKshKDmCpwqoaEJk8IV8CDiDmt5qNQ7
//k4wQQHE1axFmTcrs8k+ImrICh32502X0c5QEL0NvCpC64erdqBSt3WNtffP2uquwyC9/XpYv3O
n2t62D0roBngQ0zen3oizGILZSGViazHQcy+pZXCzdA2uODmBtC9Mm9QaukJ8VFwZdwQaBS0hnoK
eh5Y/fh5ti6cOj8h3pt3vaXblwFSjrczDufixpjIUR5JgxPiIL4rmTCHXguLj2rh424JvfmGNyI0
eGoTDe2fMbuC+ALvdX7CMdd3WmdsQ4gWhmQSXKKjtrsz6QS9CEf4gC5jtlT+4ANXiRgx2Etju6vD
l+gsbytilZMX70i5CxDZVAHOy0/QE0jjDSy4lBGtyaC3mA+rQMh6jdi86a7GjGM4IMntXaxuPs+a
vvVjDrj2ERGrPIQee9iGCPMvueFgNp8x9d+eBG/LZjPtJeGe8Z9GTZDl0BO+WWirfViscdDN3zPJ
1nrNwkU2UX79DkRGfJZ1T7adINiJHs46tWMLxtoEY4TO5GGcvxrh6m+B2Lcpp49U6mAQ97UOl8fu
Tc4AGrFg7e8b+InLgKS3/lAvXcPZ6NaczjjRDiWj72m0gXi8CwQf917MbId1IHZvFblCbh8PiX3K
0S5XDIJtV2RTJvUykkfhSYLtMWO8OjsUbZVKx6qXtIAJr8wHwW2k2FtSP6ZbEDTQ9uU7Tt83yaPF
6LRAOWZncngZWOPO0iGAUrG8sJosiiZkqR1AT/be0065PgF5nOsLwo9riNUYY4/u+lhCM6MvfFO+
Lti1+bFCj901ImZTqJ7wOpkduu8qZfr4ccWWzUUIkCKeHsS4LV02iQddAqWVbolxsXTAtpveljQn
3AR70200Oj2fDbQD3cUXbzvEgrMtLrD2dxSHrC/7mSRTAu6QvCde31XxEiV+COtLigNQcnHN5+pe
hFhuATnfli4W1PEewGvWAqyftzgjNyWjMFPfDKv45dTCRttzsPFMNLGji9gEb4cOvHMYkAMxHxn1
JeqihzI/sNNYUsY2jlyg6WJuiHarHtkua+sT5PlKw/La7zS71B1qLjeIYyU/ZnQ5IRFm4iHG+Sff
aMuZUQnNSpnh/PEWGHN8xgOHVhT/6rGrw5DCgs/LYGnLQ88rk9NAfkAvfK4eZsyt/wfU5h5gNbmb
vWA4iog6r4kxPiZUozyYKzhyjTghwyg88nq+dLT5HCpyH5beG2ZCKeSZPZDgc53qmW7ECPTkUhHL
jyvAIkhVBK/JkRQCCNla/xP4qIuH/UVp6uX7VRtk+PeGZL4e9sLo3yZ4iwKwzueLsQ2oNqAP2wO2
OMQ8BsWOwtv23az4ocR8Tg2KKtY8gu863+ymxAvSrtsTMWM5ZyzMIP3NN5Hj06afDzPcwF+9LsdD
zWbJOSXoaty5YHwBnQmDropoJB4Lht117lld9BIQtohObMUran/LBZ0qjmHtWhsx24BuA6kjHIjR
9JYmuPdbLjpqqgT741WIWZnUFdykr/0klrLnzdZ9TGHQz3dyM040Xh58YMMVL4PelpqY7YW9DK0+
TPHlQx+MOkMJ0XWvfIh3777xcJaMAHjfScOOvd2xOb50Ptgc/AO5PPb7+g9elZuLQ9xtOWl079oL
2Hs0wQU+yj2nl22AvDa5kZOuQ2+8RsYJvtF0xIWa1IAJwbQBKBssnJ0jCyxq/Tr94Tsn5HNvkcpq
QhV7PYhyK5WMEzfzBI/Z5RRI75ukDZ2tJb/5xNaO2Yw+maqi3SPNiSunJF7iUSxB4F91jD+Pl9Za
3jjB+LDsSXxVO22OM8qh6cVOxGIgj+fac204vqUKW36sguUu1zmKH1xDChp6vcBa0YTPqXzjy3Ru
M2rn2gmlBp2I9aVPwHM38QW35hfgA3Z6jV3HnQjfffsiqlvF3ty+2hOci72J73mtZS0d6Av21Gzw
NVdIzGxl4GDC6T4OvXuSCRpaeHgLjBI/QmWrve6nuw9flnQmAX032kK6yZRof42xWswp42Do2DBW
3CPx/fGZUXyfuN98kChs5JjppjMg5EId61uOsOXBmzZ81dtrAO5bje12O53CLNVf0+eeyPHuGH55
0Dtehg8ekz1BJn0HpIK+cHzxT/FsvNUCSaxTsf6onh7TxZcPb0ouktSikkdXPkE54hi5czavMc5u
XlB1E588bnSuh07lKDqY42uCa3/O9+YkIv7YNuQ8X5E2u/WbIru1HlhlTwTe8FQn0DITNDFzW2vL
6+5w4NjJHTmxPMjY1RZ5RCtTJdFnG3nT7UYWwFLJJSeueGudMbMKFce3hq/te+6pUG46uK6P6Rqx
K2BvKSzRAbwVogtWDJYo0UPElOswLePBqYdngm3I+WmIb2s/dNbnEoHjLfzgNLQzxt5SUkIWZ7fp
/Xv+ajm3sBxfJvnxV3c4pjYEemIQtz828aKULxv1cAymlk3XbPkG1gRW/g6oHgfZLkJogOJ+6gN+
dhONXmWjRWb9PgRf5dsBwYncfD/O+BlcvK2f0dY9cshx3m98yJ5+zYV8FoG1/6fPN/vE5Mh9ffjm
UU3OJKm9pcxDHe3zKsOO+ij7pfJfPjrrpz1O7eO5n8/P2YfK4ShheTnh/ofvyDu/voHXXI2M8yXR
hiGfC0Tvgqu26M+zBOA1PQbboC6zBcz2CbY7n5IgJNuYfg+RjOg2H0kC4QQY//3ykCXcPei//s6j
fPLhwUOzHKw+Il4bzIsLwYcLj9h10NOjm4M0QRsG+996B9Naf/T6XCZsF/e9RtuobKUVH4hK1X02
d/7YgsctCqd9EZ68+W2DCfqe7+AHzFqPxjDrpO7zPWPzOTcea7uhhU8AjlgXna7/Wr3K/RlvtdLg
0dtTlpBEo554X3+njZhMJXh9kww7P/xNr7YMzzKqiIY9jf34DnpteiM/fUwe260KNSEbiZfHYjZv
aSjDQ+Gfp14bAkZLMexQ9Ghd8ntf9l6cCf7hl+lsZ7Te3FQURKWPLyUI43kuJgrtTdRMm+4pgsWx
jw3c2qSd5jGd6kV4UIh6qjdEV14m44f8IiHv3HyxpaUGILN0vQBtsjNsg/e9n5AgD/ARC37AoufE
1n7i4KLmEnlMO6wx6a1DKEdVTdw6LT0CNFuH+VYsp+3Bxxl7eVt+byhiT8yTymoacToPN+ACJ+mu
iDWdNldOupgfNZA2nZiRZ9ZP8KfHTLu0GN+WjAJNE6SAR9W+HszitsDx0I/Es0Tcz4/hHKK74yzE
iCfkTS/VhLC+JJhks5ZljdTuS7iZpjhgefjsp11OKVz1e8AuPpf13EWXYfToXGwlly9jw1S8gBLf
VRJFt332NXeljpKFVtjiT7o235tcAsNuMol+Lk4ebxvHFIlIVwNh5YfdmEIeMDHiyaHeT95CFk6S
tvbYEuc4zGy4RPsBqlcqBpvaG8Hy6Y8yuubwTTy50gE3iNccnnf8lsib66X+8dWffjPNNsuWmTY5
NNzbPuBfR5DRZutSmHlpMIEzqz1aOAKEx07tsL7ya0sCVEo/fX6Jn4m2LDEN4N7QumDvlEdt7vx3
i2jeoUmk2uIxzQtPcJOJNT7GiMSv/Fbz6PlhAnE72v/xW2Bz9nbEOKQ2EFLGR1DULW/arXp4ufJ7
Hak3vcJyfCrq5Vh9S/ihohTwTaFqzYqfMHgGFQlSNsQjRt7m5+ew/MSRNm/TngJl3zorPqYanVO9
hKOwwUSvD01PFWJFP/+BvVd/Y0R86A3QjP2HGHYSxdM+iUWo6MuAbTYJGfnxrfEaMqwpaOgpDBUb
pe+DhTVhx3mMKyIewYqEwTymQf9VXskEV/zFim06seBaYwDujrf88L7nAhpJEGy9PODqSWFs1WvQ
z6CP09S32VKelBChvcn+6KUZPOQWInejB/NTrNiXJE0CEzrtsM/xl5ievuQlRTQNVn8c1MtB4yZk
X5SF2JdmYaMSvAoEHqM2id5NWf3+0YW0XBLswovuMf6ZyRJUDu70UV/UI8UhryR633wmEd1djYQZ
t8C0fRXYYbLgDfmt5sCkxQH++ddy32xVsOpFct6Ee9BqFlhgZweQeFJz8Pj9e76AcwA4YtTys2Zu
1Zro/jjtsRbw33hp+VaGiVze8G/98sYMyp8fwzaHFzaFaTRBjQbOxEShY9Ml/qSw6eoLtuHnFK/8
IML2w+n4uOSGNhZ6oqJrvnkHUy9+GXsvyvB7fhKoicamh1Z36JdfxMd+71HHuYbAbOAL48VLtblU
nRdsdwElh0S1td3bBgM8Sss4cQoestk6X15wvz152NoqpUZbQwtgieTHxNvlB6z43/4b374sqemL
MQr1QCxIUtxv2p9+m6OowJ6DSzbsasNE3vztcQgqk41Ls2tgbF93+KCGU9zDii8h8D7fCQ7bUFse
Wt9ChywDCWxxr013uc+BIC5+sOT24NEh+/qwrR0N+/6oxMLiIhl28RNgjbvgfuAVOwDFe7jj5PVs
4oWHxAfhVS2Ik0qOxu+TTIJ2wR3IRd+p8cKcaIN++Y7v7jg2X51NBTXqO8Qu7jev/xAWwZWvgu1W
GvpJnRWKErm6kYNXGP3wPaQyVOhuwrj5TvGy+fYRtO2vGbB90mh0R6sSHcpQ+4NXOwKoCQWR+kQb
OA3sTuATwbM1hH/8+vDr54x3GqK25ViP3IA2kLwJI+r7UserH05hEOYPHPf3qKdnyfDhOVYwMUsQ
ZjO0DPjz0wRfgAIEPJW8VIdbb2JX1fV2q16ECj5/SHAMpb6f8E2Cj06Oce71XT2LXsXDVX/iG+l8
bffVC+6Xz2DzjRr201s/vCVupRGtMR+4gOXYmNjrRLnfoVwK4FG4XbBG3YAx/I189Ih3/gT5wydb
LOSWYJvOZxwX+zDeRabfwNv43E/v9+4Yz3GQJeBQBGdiaweZcfQR27B95SM5b01cj4/hHsKwOt+J
y7PIYzLpW8BAdyCyt2H1/L4cXbTFxWV6G6oVc/dOyeEh5TbkCBxFW+b20UIhUZ2VD+8af5frAsT5
AxPf691+0M7HAH18LZ9258apOX/5nkC01WvsXmTb4/ZcnsMwPFKy+hXvS/lvDtd8iihh4QL2zs85
Wv0k8Ve/Px1uLYQncs9IdivsXljXO/jV78/6f2wFFbr4YGDZu4vZfGNyg3a3bYK1/VjWs7tPZfi2
OY7guSvYdDp8VMjHU4wP+/7D6JA9/R8+E0X8PABt/DZCVaTfsLqTjJom9qkAa95BvIWYgAezfUHS
83UmZwF/45E0Ggef9rXA6tOt+pk3ugKWLTCCre0mbLrdPgvYu/lEDt9c1ub0Xduwetkmvu/7D6BY
pg3iwxPEZ5C5Hucvz8svDyJ6Jfoe/fnPDLw6bJBE84SlfIZIFUuA84S2MSVmMwDRqTDR+mX0WH63
E/Rbz0WQuYx19C3DD04YSS9yq1H3xkn70cwd7If7SqNJIQ8orK53rC/GPZ7qKixQJuJ4AvP+6I0f
8x1BIszltP08Xt6MkbZBx935G4h5J2tcKYYtPCozj4PXQa/ZZGk6nE8XD7ur3lo282LC9sPrxOmL
PmPu/ZjDdX2u/FCD+WGFPpoi8gm41f8IKdtEMO+Zie1LE7EhOe8loAm3cSq7+zeeMqmW4SFhR+I4
5dGjL6c14asbDPz4PHRttkGbA4urhwBuJC1rV/0OJNaq+GH3J20Rq70rWfz7NoFOdQFb1CkA6IQv
E4mI4gmLOvmQmQvF5vGggZ0ADBsOj3JDMLm42S5NDhPAG/YNPmveyDaOXUAj3iRYD+yvRkPwlBA0
u4D4n7zQJku9BfDLGRUJstOnns+3LoXj2X1MrL8v/TJV1wiC2MymrcG/4hld6wb9+F6Rca1NRPQq
YKqWRdzoXPWLKbkneI+GI1bB/pGNT0Oh0HzWZSDi0AL88Bon6MMB46JXTxqXhH0Erlb9xHbWOvFO
yuUcrflb4F6eWsyOQ5rC6P66r3kEzJZM6lVY5N4mgGs+vkwotQE42zYJ492NCco7SIG131Di59nI
Ru0GcsjDDAXiV33287x/tr/+wGFhfBhduspFyGpGrDxFlfFYmUJJSzVl2lb3Q80MMCT7TxW8/p3v
n+ZNKn246IjNNR8aYr2r/uhPFvDfbNEncAGz1thYbhaN8cO2fYHjdYOI0QkWo9tQEuG9Lk7E7bKw
XuR7DYF6MytiH18wZu/XEMGV/4mzTZKeCEMlI7F+vogubsuY545PCofUkIivy0bGVDu00aQdg4AL
7lpNud3eBfyh9ohivUdAo55X0aoPsdkul35EOF8kVUd2ACwR19PdkHQYOXeNBNGnr4n4yRI4n04e
uZs51nbYiLg/ejCSTDNbHt6Sg4PeCxPw6rNHP6qngrxI06lsjm0912G4oJWfieNeTLCTFKORTkZh
YTU03h57GtscrvkMkY9+741W73I/PTTtjuDl0YjzOXE36QtWQmbETJjFTsJ10a/1dWKm5EIDl0tz
wupnG2mMcBcIm+55wabnXOpVf7yky/VqEOcytTHdLAIFBym+Eqf+vLWpGLcXsOZJ5PoYTvUffbru
N017ZfPO5o0ulOB7+jq4KLajtlyzTQB2+9uMj6p4ZBO3Qe0f/rxoEY0nfWIX+LXCO8aDX8ZfD0YB
etXoSvxMh/Xwyw9n2Z+xEx1Gjb2egw4Ew7+ReJZf/YyF3QlcvsmCHUi+YNFhuKDDoYgwTptPNlzl
Q7fXy+Q9NYs8Mpaltg/Tpppx4Ny4bHRamMDerxbsfJgBlrQzEogPx3oS1BfVSDYTDlqfvTZt0uLI
OlsAkuQwySKaKLiMdHRUf3gVbMMqzV6pGaiQOrsDCVChArrVzgU6L/cWh18ziamlXST0y7sOw+J5
3HEfF2CRs0PAPSpFo2e8r2AdgPuU7I/fbM2PKvgCuU3MLNnEy3F7paioruY0uhZX08MDRfBTGA6x
dEvVCHG1EKIuuZNc4/SYOzxQCNuncZnEIRnq12coJggUeSLHKXzF86ovpBMPD3jl35oF3sOVVj6f
5ptnrvU4pODEbw7EBIGhcffOyX/Ph5138/DGjLcXyBPluPqnJRt+eeh3XzZ//A8POblCa56DjfmK
vPmXfzjO5028BI3Z9AkvNjSt0iPqMATZ0tRq8Me/Z/N+1mZn03ZgaTuF6I9K8f7orTxTNGIbfsro
qj8k62zEk7j6NbrmlcBCJ4uc4b1k7KTSAu2sfU/8seyz5ekMMrCSvJuk6Kz2K54OkhDWya+/a2Yc
kwGeLoEbtPUDr/ogLwCsxnDa94UX88H+kQKjeMbEDb+CRu7XPYVb39wSfKNzT6xTKMINpQrOP+iW
8S/8SmCbI0DkIq2yr9tzL+DGl5JYnzplS+7ZA8hsWSDG9tOBxRaYiMqsvJHfftIfvt7mgvrH33Ka
xRZ4FEeZ+NEANBrQVALuQvc4QE4b0w83lnB/4Ayino5vMD0DqQU/fex07blnVb2UqHTpZd2v6gAb
+TGBH3koiG5Qq6bOQysRUr0vtq95Ww8XQiBc8yyiULzJ1v1AHTHXtsjhdFV7zj1OOlj9MTlSXGSz
lNsF7Hp5IPgJo4wVh1MlrfgYsO0gZZN+s0z44/tz/pE8CndBAJODoxA1Dbf90k7HAK3++E/ePA+l
lsM23wK88q23PK9ODuVjsA8Wy2+9uTvBAn4W+MHGIW0B3VycCvDMHQIKFK/ffcwxgi7HN1i1/Fab
7Te36pHxg+W1H5fVr8Be6p/E9L0xXkbpuEHs+yqxKX+VWth3WQMeLH7jIMhcMNMNDVFC4EwiruZ6
yieEk4xGNrD+HjM2B5/rANb8mhjwXoJZvYAIrnkvVqJmrd/Cn+C6v45dvaHaPPVRDtZ8bR0vg5GT
svTnn/+9X/jbjwdPOq96K8mE11cSYY4Hfb3/qZ7kew8hQcGLGPP1rs0LF8vo79+pgP/6119//a/f
CYOmvRfv9WDAWMzjf/z3UYH/EP5jaNL3+88xhGlIy+Lvf/59AuHvb9823/F/j+2r+Ax///MXL/05
a/D32I7p+/+9/q/1Vv/1r/8DAAD//wMAbF1vq+AgAAA=
headers:
CF-RAY:
- 94f4c6bed82cfac6-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 13 Jun 2025 21:45:41 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-allow-origin:
- '*'
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-model:
- text-embedding-3-small
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '457'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
via:
- envoy-router-6c6c4dc575-7ps9v
x-envoy-upstream-service-time:
- '459'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '10000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '9999967'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_eba1229193b2c4c623e7661cac3bec77
status:
code: 200
message: OK
- request:
body: '{"input": ["Example: Drawing Pictures(Math Example): An example that combines
art and math to explain addition with pictures."], "model": "text-embedding-3-small",
"encoding_format": "base64"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '192'
content-type:
- application/json
cookie:
- __cf_bm=I11FOm0L.OsIzhpXscunzNqcbLqgXE1LlRgqtNWazAs-1749851135-1.0.1.1-R2n01WWaBpL_jzTwLFUo8WNM2u_1OD78QDkM9lSQoM9Av669lg1O9RgxK.Eew7KQ1MP_oJ9WkD408NuNCPwFcLRzX8TqwMNow5Gb_N1LChY;
_cfuvid=TSYyYzslZUaIRAy.2QeTpFy6uf5Di7f.JM5ndpSEYhs-1749851135188-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.78.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.78.0
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/embeddings
response:
body:
string: !!binary |
H4sIAAAAAAAAA1Say86rOpeu++sqpmY3+9fHIWCzegkGkgCxORmSUmkrQE5AToBt4Ffd+xaZS7Wr
OmkAItgeHuN9n+F///Xr1+9XXp2L/vffv343967//X/ma+WpP/3++9d//PXr169f//7+/q8nz4/8
XJb35/X7+Pfm/Vmeh99//5L++8r/f+jvX79TObuzMZvWnrS6PSX4DEJESNScJ3GWlj6cOnZlHai2
SLo1rg5NN0uIpecmGOyh1KHsZArHBXrn/O2PieHeAkGOF88Gw5MtF7A+dSkJmnyFqh7qKTA3wQfL
Hj0g6fBjCoOhriQh89u8H+30AH+azMACKD/TVGx1DKc1lbnpJG07jeN4gmXtR9ztrTAX2BGWAUhM
2EtIARqk0BuBDKnC/Wsdta9UKw+wwozihW3FVaO+PQHgsXuQ0yANuTDKUoHXoEvwpFn+pILhMsLL
G8d8E3QciSVBrqFEmcrRqV0Dnqz9EWrXmJDt3efTdNr5ClzZNCahsFNPbgoBjejaeSQjEAP1IekB
fGA/495dLvLx2VsBeB8oI94HPWORB8I3BoQvxLmiGkklcCVY+/GK+A7YtOoU7i1gNVmKBxqvqpc1
mbUBrt2DRyu0ntfD1OEizSB+TrYGxkK2XHgeWMzdQ+W06tsfU8NcshX319OzHdFVC6BnBIJYDVWn
6WNcargD9Mj3u2mbD1GFDnAbY4t9TnJeTSXWfeA9fYeEn4YCiSbLrVE59EN2o+egoT1rK3DQacUv
Vytpp0a2dMNqaMqJ3n6816FbbsFuYjZ+XCXcSsZSXHWr89f8+GodpE7L8WRcX8GCewbatJJRlhI8
u37ETyHNK9UaMgzfBB/5ZZJ/vMtwHQ+G98QOKYX3AoK/xAHsfGyxAE6rXC2oL8E5vjhdIFRJSV8e
YC9YyX5Qs2hfNXKhcbQw+c5PJZ2spW/4b5qTrZ4EngCP5VVfSN2LxLekq0Rg0NTYucziXiYN7VgZ
vg+PNiPc204bNDTFcgFJHU1YJRBPUkNcqB/ZqBFnKnAl63tPgbQeVWJp9gGxzzi8wHOV9WQzeBZQ
X0d/ARdaHGHwbGAuvD1NjaELd2xRSBPgR8MaoR13JglTq/VEA0VkHIoO84vnvavBWi1f0I6ZyffC
207S2y4/0Lkwk5xba5rEkiIXbq4BIyWkZ0/EUFhGw8I12afUqF6hYXbwqrAEL6W8qV6rnVkbshb6
nN67flJ31PpAU/dXZLetRf7ZO9oBZqsOk8StzFjeKcsXRMnY8u2nsuMRN9YKXnhg8DzPn4gbg+0a
9OrvudNXtTcMV82B1jR2fMUTt1UP59HVW8ByJnXAnNTOsBYwCOiZlI/41UrxrTzAovVDkmwSH4jQ
MJlhmmxFNnFlxWqvjCmU49Anqzm/DJ8GvUBX4YKpK2Dlimz4B73fdAWJVc/0lBpqD/jTUIO4IQXt
5F/HlbE6ZDEx13VbTbfycgVjSyXiqeCZD/YFBca8X/mhaO14LJl2BdHUeTzC8RrJ9/LswwvCMc8X
deR94x90AS74rrQB6tGx1LUn6w5811hBO7J1ZsL0iPc8chKvVT3tEsF9lBXMeEoxkuzyuQTrc/Am
2ZZq7Zi//S18JLFJzFW+RjKxvQPkPiu5Rdp73GUP4Rr7MRh5AJTLpK73Fx98duxENo7EwPiTajpM
eaQSswC3eBBYcwDx8BZPSaN6/UZFAcBxt+XHBQX50B4GE/4Y3YvvE/RquXPWFvCa0xcb1K6NJ+Dr
d1gMAeS20xyQ2sMxMY75qHHrlPTe4BKUGoSMI8Fv8I5F0tMDvIXBD8/uVd2y1E4FLFgA2LD1PaRs
MgThzqZHctoBJ57wOK4Ms6QJA3drmKZmf3nox0NH+P5Wvdq+R14H3SOzuU2bQyyBh7gb14AlTGss
0fb38ozhCTNC6GgfWv5qsofRnkKbBxb85EOPUAfNiVISeI+f+GM9hsDYl2Lg/o2WaBgm1OnAxw1W
jA4D5ScddGOubxj80KIVuKEpPNxHjbvQCtvukZvQyKsRYG0piWm6aL4Cn3l34GtJTtHYWkNn+HE0
kNUIVkg2DgM29g7ecHrzUKyi5uLDy9uPebKVs5Zdjj6G24dvkSOrRTssdunVkBL/xskLud6ItPQK
U5nev/WglY5lWRiwYw9Ouunjdd/6bEU0ZT8D+KBRoRcHTrfY46ZntW0XKUvnO15SpMkYj7JhneAJ
BTrJC++ZsyXWJAhaqhOcJVM7qreLpftXf0PSM6iA6F1xgOtz9CZemQzToMjnE5jHT+b6EUtp747A
W/sO2e0TkXP7oEFwz8cf7o9oi8SrN6FhH8eO4IGeK0Wl/hnkwQiwmsR1NVTvFMOPFNtMP4BH+yr3
5R2iAK/Zw28rMEbKcAK1HK5JMOcPkcv0rN830Q9fnR6XaszGoTD0XabxVSR9cvbuhpdxVbqEKalX
TUJPxBXaHk25tW/USpwyUzJ+Nt2LoLbJUP8AXg3HRSbxTWEfJ/Ud6QlI1r5PnKjGccfengmMBD8x
mMcvxJ52sOuo4LEh+YBdVsPDOFZZzR3RaPkAh3MB4C0O2fLcnOJxUfp36LbYJtm5Jki6OiIApxUO
iLVvsnaS3lkH0qzb81VCZa93qZeCcxUZ/I9eWN7KEaxv0Yfg1Ht74+Z9foCjQmuyZcCOeXK8HIB8
pAqxjt69ksQaWYanY4cTinZgupXZFWZDVnGLWftYkYUmjE5iBXFBMuSTfty7ALOsYLIWV62spJoL
TSP6EPQp/FY22bI28KrbfuPbU8VJP8HEHhWmPmOrnWT50oFDHWnEb5KomuhK3wLn0pkk0Gp3GrIh
tSBNsxs35/w5Wnb6gPN8kb0EtlNX7WhtwAN+kH0EtvmUDdnBmPcHmb83n4aHfoDIyShxCou10+6W
BaB8CcgPzdTETKGZo78DduI4quN2mvUCvEA/5t4VbGLlHabQuKR+jJ8PUFfv60MsgV+PA1+vq2s+
13MIfoo4ItubRKaxoVZglDWOuBnSNJfVQnMh32QjQQa65+Pqdr7D4Rjv2HVML5PYeOYL2KvOJFlh
H8HYUD+A8mOuTzn85CLclac/evF0pWAarpp3hznIGo7TaucxHA0neJ+yD1mPdZt3/C4ORi1Yxm5L
y2vVg3E5gGYITb61E9722tpTQLwbZbbY528ka8dzAfQuJNxuAIpldCyXMCP+HksRzdoplC8JhDjT
+aZpjoDd66GG8pndyMpPf9p3zNHJyAZa8bVSXcFQZaiGoGQNXx1t6g3f/KU7ISHxqb1Nr/PaXYJb
wRLipB76R88ofuzztZevcqkvvRUcjuGOkFMtxRPaWw9jZQZvvm/Rqx2txxAZqzw3sPTj3cEw11Ng
rvEKs0Q+x9Pqri1hkI/TN39Vk/rSC1iwCHA7qiqvD0vvpKVjsCTOJLH2pdByhPnODzhm8S6fBjtj
8LrECf451Uku7gcR6FpMtTl/heCpynsHtFcqiP8CWyRJjefADIolsdtiP01249dLkmUldyQfT+xl
p9Awsjhk5ZzvpjLRMcQ5Lbj3orAdQTdIsDrSD3fKRkO9YaQFfC8o49uPZ/8zHngOA2JHoEKDeHod
RB5b8VizfDBtPsMCugvfJvRWoZw9+rP+jR8entCqlZf79Ay3Iz1i6dqoYODUY7DZhmt+0Lym7Xap
MI0fQiF3aTIgXhd68L2PoSVDMGzq5QvMeorvR7kE0urGJbhvsoINm8KdlKRPGbSWWUo2BtW9wdyX
SxCRQCKI110rnPdTh+PFv5B93Rhg+upt6xK0pDwnsSd2aw6B/ehMErtUAR8n9Bbwumcxg0H+nIZ5
PozpyK58v6FlPF1H3Yda79ckWtdt22k35BpoFbX8bMsLjx1DfwuXN7/mluOZuTrYFwboO7vxb30Z
fX1IjO0xYsRcVOtqaj96sgRRGDCFy0ePd4YFobfGDh4f+S1/7WQawU6LHayHNG/bV3Op4c+ne/ET
skEuMpuejSQLFF708daTLkd6h9Oxu/JwR+X2A23/Cn1VDAzs6qHlspx1+kllAdlX1uRN0vvSAdcL
OFlDqa3E7eIu4aaMerJxUJ2Px9B3IbajkeOnfY6HKvMef/QPuVpJpZ6E7sKN3SFOk6IHYth7FrQI
pbg9Uhj3hSmWxuccW5w4+a6Vd5RCiIr8h/ugeqHHzThLwDDCkB9nvc70qybBeAgUNmrTuhqL+3CF
SpOpDMz6XEy+OMMCdSFrhFe1E20uJ/jY0I6vb7Y8DdM4WLDdhDbxz/FmUiw7reEjxAdOAPp4Ezzo
d2gYccja2Bqq9kfR78A5+yY52B2pxE+wfADDZE886U2WC+ILC9ovbPLYbhQwLAEaDdWO92TrFqRV
O6GnMKqEzAsfbaZ+c15+DCR1axKbYF3JaI8iEC98j7g0Cb2JlFkKTnUXkFSmx1hIBmXGc9sdeAgL
rxUUCxdKRuwRfGl3iD222gfckuyN2eyPepMtH4Z+jAlZH9tV+yplV9G/epwGVg/EwTYVo3OowOJT
f9pXWC8t+EhCkzi4WU6c0f0KipydmYasACnz/OvKid2Z4J3bSpJRdvBc4ojYwquqP+sJawrI3qkj
MCVrX3z1Hj9BGSIOIv0KbTvoeCqaY6yO9t4CXtQ5fFvbp1aY2lMx7lvxQyKVKq34KRGGzTZec6R6
d28MBt8Ek5rJHG3z29QXUCjwcco67hIpmNTZn2jSRBXilq3TqsfdpQBP4R/xm9VBxfE4mnDmC4TA
WprEun8q0H9nOVkn4FoN8X05wjLEEdkr1hRPp52lGMqSqhj66AmYE6LFl59gOEpRpQhJ6/SZDzBj
1W5nfWFvAZrwmq/U9orGLtA6uNoeDb43mzIev/5sQP6FnDxLTM/A0h3YVX7BT2XrVCptLgfwVEJE
bJLsK1nlngDBnpb8NCAHyMv9+az3l9ghq13i5qKqlxHMWHbn8/y3/d1DDwNrbMvXkSy3vL9lH7iU
WU32T2nyhGIgC8x+FP9YdAFGvvYPsGHxmhPVklrB78sDdG4Y8VkP5tOO+h+YKQwTM5a8SmxdYcL4
4Hv8mx/5vfR1EIf0StJZf76ptBTgZI06HtTOixXroUVgT2jOV6HzU3G/9z9wcx57grjvtwox0tf3
eziyCj+W4hs9QHiOA9xdpDGeortu6eaVUpLCyoqlT0g/UDLZjViq38cz74lA+mF7vul9ks/+2/rq
F75yZWmansdLDd3dyDnSig6IJfVczfayFH95yfuspa6xYkeDKTefVeKiL33Yvrqch++ijefxv3Tt
hGv+9ePysRaKIaZwx50QVdPQfQbTOFlC566TBNNQB2IBP0/KOdnUiTfH4xLazswDRnBFwzVMfYhg
RjnJ6AIM4oY6Qzuwmtsl7KexK/0EGrr/5Nms34ZVsazB7NeY3sUOUJ6J9oHFTgBmbG2jHcejf4I8
CbcEb9udxx0jw9AMjj98n0gT6h8yekA6dHv2/EH19PU3BuHdloc0vlbdJjQ/xqzXsfGmBui2mfmB
Xz2+y5oTUNy7lhrKJfa/ftj7iL1/giONXYJqcPeG2f9DIwtDLF62HHeHkBawv8cbvomBFfMvX3PP
EScWk1UwTDYajY3NEAmHaQUkM3cfMDh0O35YQdLKBVwq8GWxI1vM45EPLSqMXdlZJDp46+/7z4C1
4YZj1u5idfMZoGHSbsUMrd1O4woOGF5OozHzr0c1/ND0/ief2Z3PkOje9KN/edPpMTnteI2Gxx/e
OPPU6ss/wSn2A+74cQX4/jGaxuGZVdi22gqJRYZ8qLbhHn/eMpg6hdJRF1o2kRVpkkn4xzIFn9vs
x2vnAobQXQpgXceObFeQt/3043VQ6fCdgZmXfv0VADbVyX4tjaAXycwvim7LzZ+k8zrwEFf42fon
nn+kECi7YkiNmQ8wsJLClt1rrYZhkZ2//CPneDmaMIoCmWRmwabxOqQKDKrsjPVnPaD+GqYYln32
wEC2Rm/28xJsS8rJ5oVqMJ6l4WpYG7bm9j3ZA1X0tgnm+GVgqp7xcNqlLnC7ziZHm+rT+NasyLii
cTHzZOKNtTLUALfRyPoEbSslfWhn4LnZic2AH4xGaN0NTeCG725yjsb+baXGeMEXBlDrALE0l1d9
9res+wEvJMq7mDNvlpNdS3NPsFSc4GZFDxzFsgJGa0I6dPzZj+ztY6WMR/8AtrvsQPZb26iG4TRY
hpPRjLscPNqpXl8eOl+HW251wARqvM8OwB2FmPXgCY1XX6u//orp9/zRdmOMHOPU0ZrJcddXog+W
B7g5YcSGk+R5ynlv+XDWewSpnomU5GZZ3/1N1mtwjcWzL2tAq0hlUl3v4/4zpAE0l92KZOVkVaop
DTqkb3ojWwM07QBipBh9MfO9UD62g3NYfv7sJ6zL52rodmkCzVZ8CHKpMk3xa7SgssxUbhcTiqVa
LAOYj37IN5OtTcOrSSNIRrbjDoLYG3fr8wsuUAY5SpoMqRvDf4BZj5JdaeceR6sBGpXKUrJt8iae
pjLrQN9kA3eDevBmvxgZkdu5hFBJyrtdZX7Al2cMRqN43XEpXDivJ7cM+YAYOQ8Hbda/3C7yKmcJ
1s5wtQze5IBR4437o2WC2U8xlVENCGw8F/BzDi0+54tJfLplCud+BscLS/KE1CIH5mbWMGDlG28q
qKWAdZYvmAHpArGTfB6BdcRrJnLo5uKbXwobh/g+60M28xNYy/GabJeT3XaZ5o4wsSIFd4KW+bx/
E7hrIk78mR/9+d7IpBe+d71XPiVrS0B8iEa+aylAfffRTIjWouWXvXyehoZ7NdgbbIOTsdjnfUHR
FfLQP5NAp3LeWzGKINvGG+LO+YdzrB9g0tPrzIO31bCWPR+0ExV4lKUOcbbVTzDIxUSsrbVHQ9mn
W2PmQXiJUYN4ebMk+NEzzr/rN+x7lMC3TRnBZbOIe10fXANH/pasXDmZ+1lUh7N+4Nsc2N6EbYvp
Mx/CL9M+ea0z+DU8pZGOQYY23miH5xr6q2D4U0+6iNITfEH/SNYzr+fM1E9fXs59w4rA7A8VeL76
MUcPq2vFbaBXMPBs+vIxb+ZNKbyDjpJjKom8m+u5cQpZQKx2MluxsMTD+HHwC6tz/uovk3eHkxe6
HDVd1/bujS5gkQeAn3pryAd1fS5gJ3UFFjMfm/TmMoKZr+P6xzMn9TP4ENKU3rhZU6XqT6pXwNnf
40Urn/NxUVp3+PVb3sxj30gSNXxZ3ZF4vB6mrz8FeOlvSdb7PO482f1AGWYKnnLJz6f06I9wcOId
sU7J3vuuF5j1NTaYXQJRFAJDR4161rh+Xz2IkX5guO5cXmbtKx/01eAbvjoOfHOVNfDNV3Dquisv
YhsCsb5R/8t/yOkshUgw7pnQckXL5/wOhN88IRyXscumh6widrS0K0STv+YbP+GIffXnHD/f/t4k
W454AXMxfnAfNmXbK3J6gFyPt8QO5EMu3SIRQeiwB9lXcllN+8e4Mub+InfUvPb67Wf5guc0exLk
T7dpzk8v2KQs45Zf9KjTT+IKlka25OYx8YB4UBd+9QpZ7yhtpeWtFHDuH5CNXNVgkF1tC2mJfY5k
OYuF2Zc+mP059/be05t50gF++1nbQ0cmttwOijHzJLLyHpd4mPnId3/iH4p209jL5wVcf/IFWYN4
Namfw5gY/JONTFXlI5j1/woyKRs5dpuFN94HyzeY6ZekPNaT917Ug2/c6uCHu13sTHJOz1tYoXBF
MtNi03BOZn/adSesftfLZKLW9xnbYH0BHPCGQ1p8+zk8uDeyJyrVXeq21pn8+GgAYmdJu0L/EQxk
jW25FYKJE6zScMXJNpHil4KQa1zz7MWtmZf/6UfP/hffqybNu9lfAenFboRckiSXs4tXfP0UcTyA
WgnZzz96jNuruPLGe609YJhGEpYuRe/x1fqC9W8/Ot7VXTz0irYy8kAAtjjM/bRjSQv40EOT1TM/
Hi4TusPbXfxw+9OiWN6oKDKyyMfkWCSiZXkxXA1TxyuynvWcbL6WNZzqTMay0aJKyjRXwPn9ZPXw
P+1oj0NkOEnUE6cocKtcfe0B4AI/yNbyGjR6kpbonRY6//CgdLVU4PaBLT7zB/TZH30TZum4ZNpb
ClrpsneXwBqjjqlzvLcLMV6hcc0g/tZPvrtdAvD7eyrgv/769es/vycMHq/y3MwHA/rz0P/rv48K
/Ev9V/c4Nc2fYwisO13Pv//+5wTC73f7erz7/9u/6vOz+/33L+Wfswa/+1d/av7n9b/mv/qvv/4f
AAAA//8DAIV5SL/gIAAA
headers:
CF-RAY:
- 94f4c6c3be7efac6-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 13 Jun 2025 21:45:42 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-allow-origin:
- '*'
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-model:
- text-embedding-3-small
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '128'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
via:
- envoy-router-675c5bcfc8-v6fpw
x-envoy-upstream-service-time:
- '131'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '10000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '9999972'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_7f9985e9b536960af27b96d0fd747fbb
status:
code: 200
message: OK
version: 1

View File

@@ -856,18 +856,22 @@ def test_crew_verbose_output(researcher, writer, capsys):
)
expected_strings = [
"\x1b[1m\x1b[95m# Agent:\x1b[00m \x1b[1m\x1b[92mResearcher",
"\x1b[00m\n\x1b[95m## Task:\x1b[00m \x1b[92mResearch AI advancements.",
"\x1b[1m\x1b[95m# Agent:\x1b[00m \x1b[1m\x1b[92mSenior Writer",
"\x1b[95m## Task:\x1b[00m \x1b[92mWrite about AI in healthcare.",
"\n\n\x1b[1m\x1b[95m# Agent:\x1b[00m \x1b[1m\x1b[92mResearcher",
"\x1b[00m\n\x1b[95m## Final Answer:",
"\n\n\x1b[1m\x1b[95m# Agent:\x1b[00m \x1b[1m\x1b[92mSenior Writer",
"\x1b[00m\n\x1b[95m## Final Answer:",
"🤖 Agent Started",
"Agent: Researcher",
"Task: Research AI advancements.",
"✅ Agent Final Answer",
"Agent: Researcher",
"🤖 Agent Started",
"Agent: Senior Writer",
"Task: Write about AI in healthcare.",
"✅ Agent Final Answer",
"Agent: Senior Writer",
]
for expected_string in expected_strings:
assert expected_string in filtered_output
assert (
expected_string in filtered_output
), f"Expected '{expected_string}' in output, but it was not found."
# Now test with verbose set to False
crew.verbose = False
@@ -1783,12 +1787,20 @@ def test_hierarchical_kickoff_usage_metrics_include_manager(researcher):
)
# ── 2. Stub out each agents _token_process.get_summary() ───────────────────
researcher_metrics = UsageMetrics(total_tokens=120, prompt_tokens=80, completion_tokens=40, successful_requests=2)
manager_metrics = UsageMetrics(total_tokens=30, prompt_tokens=20, completion_tokens=10, successful_requests=1)
researcher_metrics = UsageMetrics(
total_tokens=120, prompt_tokens=80, completion_tokens=40, successful_requests=2
)
manager_metrics = UsageMetrics(
total_tokens=30, prompt_tokens=20, completion_tokens=10, successful_requests=1
)
# Replace the internal _token_process objects with simple mocks
researcher._token_process = MagicMock(get_summary=MagicMock(return_value=researcher_metrics))
manager._token_process = MagicMock(get_summary=MagicMock(return_value=manager_metrics))
researcher._token_process = MagicMock(
get_summary=MagicMock(return_value=researcher_metrics)
)
manager._token_process = MagicMock(
get_summary=MagicMock(return_value=manager_metrics)
)
# ── 3. Create the crew (hierarchical!) and kick it off ──────────────────────
crew = Crew(
@@ -1799,14 +1811,32 @@ def test_hierarchical_kickoff_usage_metrics_include_manager(researcher):
)
# We dont care about LLM output here; patch execute_sync to avoid network
with patch.object(Task, "execute_sync", return_value=TaskOutput(description="dummy", raw="Hello", agent=researcher.role)):
with patch.object(
Task,
"execute_sync",
return_value=TaskOutput(
description="dummy", raw="Hello", agent=researcher.role
),
):
crew.kickoff()
# ── 4. Assert the aggregated numbers are the *sum* of both agents ───────────
assert crew.usage_metrics.total_tokens == researcher_metrics.total_tokens + manager_metrics.total_tokens
assert crew.usage_metrics.prompt_tokens == researcher_metrics.prompt_tokens + manager_metrics.prompt_tokens
assert crew.usage_metrics.completion_tokens == researcher_metrics.completion_tokens + manager_metrics.completion_tokens
assert crew.usage_metrics.successful_requests == researcher_metrics.successful_requests + manager_metrics.successful_requests
assert (
crew.usage_metrics.total_tokens
== researcher_metrics.total_tokens + manager_metrics.total_tokens
)
assert (
crew.usage_metrics.prompt_tokens
== researcher_metrics.prompt_tokens + manager_metrics.prompt_tokens
)
assert (
crew.usage_metrics.completion_tokens
== researcher_metrics.completion_tokens + manager_metrics.completion_tokens
)
assert (
crew.usage_metrics.successful_requests
== researcher_metrics.successful_requests + manager_metrics.successful_requests
)
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -4450,27 +4480,29 @@ def test_sets_parent_flow_when_inside_flow(researcher, writer):
assert result.parent_flow is flow
def test_reset_knowledge_with_no_crew_knowledge(researcher,writer):
def test_reset_knowledge_with_no_crew_knowledge(researcher, writer):
crew = Crew(
agents=[researcher, writer],
process=Process.sequential,
tasks=[
Task(description="Task 1", expected_output="output", agent=researcher),
Task(description="Task 2", expected_output="output", agent=writer),
]
],
)
with pytest.raises(RuntimeError) as excinfo:
crew.reset_memories(command_type='knowledge')
crew.reset_memories(command_type="knowledge")
# Optionally, you can also check the error message
assert "Crew Knowledge and Agent Knowledge memory system is not initialized" in str(excinfo.value) # Replace with the expected message
assert "Crew Knowledge and Agent Knowledge memory system is not initialized" in str(
excinfo.value
) # Replace with the expected message
def test_reset_knowledge_with_only_crew_knowledge(researcher,writer):
def test_reset_knowledge_with_only_crew_knowledge(researcher, writer):
mock_ks = MagicMock(spec=Knowledge)
with patch.object(Crew,'reset_knowledge') as mock_reset_agent_knowledge:
with patch.object(Crew, "reset_knowledge") as mock_reset_agent_knowledge:
crew = Crew(
agents=[researcher, writer],
process=Process.sequential,
@@ -4478,14 +4510,14 @@ def test_reset_knowledge_with_only_crew_knowledge(researcher,writer):
Task(description="Task 1", expected_output="output", agent=researcher),
Task(description="Task 2", expected_output="output", agent=writer),
],
knowledge=mock_ks
knowledge=mock_ks,
)
crew.reset_memories(command_type='knowledge')
crew.reset_memories(command_type="knowledge")
mock_reset_agent_knowledge.assert_called_once_with([mock_ks])
def test_reset_knowledge_with_crew_and_agent_knowledge(researcher,writer):
def test_reset_knowledge_with_crew_and_agent_knowledge(researcher, writer):
mock_ks_crew = MagicMock(spec=Knowledge)
mock_ks_research = MagicMock(spec=Knowledge)
mock_ks_writer = MagicMock(spec=Knowledge)
@@ -4493,7 +4525,7 @@ def test_reset_knowledge_with_crew_and_agent_knowledge(researcher,writer):
researcher.knowledge = mock_ks_research
writer.knowledge = mock_ks_writer
with patch.object(Crew,'reset_knowledge') as mock_reset_agent_knowledge:
with patch.object(Crew, "reset_knowledge") as mock_reset_agent_knowledge:
crew = Crew(
agents=[researcher, writer],
process=Process.sequential,
@@ -4501,21 +4533,23 @@ def test_reset_knowledge_with_crew_and_agent_knowledge(researcher,writer):
Task(description="Task 1", expected_output="output", agent=researcher),
Task(description="Task 2", expected_output="output", agent=writer),
],
knowledge=mock_ks_crew
knowledge=mock_ks_crew,
)
crew.reset_memories(command_type='knowledge')
mock_reset_agent_knowledge.assert_called_once_with([mock_ks_crew,mock_ks_research,mock_ks_writer])
crew.reset_memories(command_type="knowledge")
mock_reset_agent_knowledge.assert_called_once_with(
[mock_ks_crew, mock_ks_research, mock_ks_writer]
)
def test_reset_knowledge_with_only_agent_knowledge(researcher,writer):
def test_reset_knowledge_with_only_agent_knowledge(researcher, writer):
mock_ks_research = MagicMock(spec=Knowledge)
mock_ks_writer = MagicMock(spec=Knowledge)
researcher.knowledge = mock_ks_research
writer.knowledge = mock_ks_writer
with patch.object(Crew,'reset_knowledge') as mock_reset_agent_knowledge:
with patch.object(Crew, "reset_knowledge") as mock_reset_agent_knowledge:
crew = Crew(
agents=[researcher, writer],
process=Process.sequential,
@@ -4525,11 +4559,13 @@ def test_reset_knowledge_with_only_agent_knowledge(researcher,writer):
],
)
crew.reset_memories(command_type='knowledge')
mock_reset_agent_knowledge.assert_called_once_with([mock_ks_research,mock_ks_writer])
crew.reset_memories(command_type="knowledge")
mock_reset_agent_knowledge.assert_called_once_with(
[mock_ks_research, mock_ks_writer]
)
def test_reset_agent_knowledge_with_no_agent_knowledge(researcher,writer):
def test_reset_agent_knowledge_with_no_agent_knowledge(researcher, writer):
crew = Crew(
agents=[researcher, writer],
process=Process.sequential,
@@ -4540,13 +4576,15 @@ def test_reset_agent_knowledge_with_no_agent_knowledge(researcher,writer):
)
with pytest.raises(RuntimeError) as excinfo:
crew.reset_memories(command_type='agent_knowledge')
crew.reset_memories(command_type="agent_knowledge")
# Optionally, you can also check the error message
assert "Agent Knowledge memory system is not initialized" in str(excinfo.value) # Replace with the expected message
assert "Agent Knowledge memory system is not initialized" in str(
excinfo.value
) # Replace with the expected message
def test_reset_agent_knowledge_with_only_crew_knowledge(researcher,writer):
def test_reset_agent_knowledge_with_only_crew_knowledge(researcher, writer):
mock_ks = MagicMock(spec=Knowledge)
crew = Crew(
@@ -4556,17 +4594,19 @@ def test_reset_agent_knowledge_with_only_crew_knowledge(researcher,writer):
Task(description="Task 1", expected_output="output", agent=researcher),
Task(description="Task 2", expected_output="output", agent=writer),
],
knowledge=mock_ks
knowledge=mock_ks,
)
with pytest.raises(RuntimeError) as excinfo:
crew.reset_memories(command_type='agent_knowledge')
crew.reset_memories(command_type="agent_knowledge")
# Optionally, you can also check the error message
assert "Agent Knowledge memory system is not initialized" in str(excinfo.value) # Replace with the expected message
assert "Agent Knowledge memory system is not initialized" in str(
excinfo.value
) # Replace with the expected message
def test_reset_agent_knowledge_with_crew_and_agent_knowledge(researcher,writer):
def test_reset_agent_knowledge_with_crew_and_agent_knowledge(researcher, writer):
mock_ks_crew = MagicMock(spec=Knowledge)
mock_ks_research = MagicMock(spec=Knowledge)
mock_ks_writer = MagicMock(spec=Knowledge)
@@ -4574,7 +4614,7 @@ def test_reset_agent_knowledge_with_crew_and_agent_knowledge(researcher,writer):
researcher.knowledge = mock_ks_research
writer.knowledge = mock_ks_writer
with patch.object(Crew,'reset_knowledge') as mock_reset_agent_knowledge:
with patch.object(Crew, "reset_knowledge") as mock_reset_agent_knowledge:
crew = Crew(
agents=[researcher, writer],
process=Process.sequential,
@@ -4582,21 +4622,23 @@ def test_reset_agent_knowledge_with_crew_and_agent_knowledge(researcher,writer):
Task(description="Task 1", expected_output="output", agent=researcher),
Task(description="Task 2", expected_output="output", agent=writer),
],
knowledge=mock_ks_crew
knowledge=mock_ks_crew,
)
crew.reset_memories(command_type='agent_knowledge')
mock_reset_agent_knowledge.assert_called_once_with([mock_ks_research,mock_ks_writer])
crew.reset_memories(command_type="agent_knowledge")
mock_reset_agent_knowledge.assert_called_once_with(
[mock_ks_research, mock_ks_writer]
)
def test_reset_agent_knowledge_with_only_agent_knowledge(researcher,writer):
def test_reset_agent_knowledge_with_only_agent_knowledge(researcher, writer):
mock_ks_research = MagicMock(spec=Knowledge)
mock_ks_writer = MagicMock(spec=Knowledge)
researcher.knowledge = mock_ks_research
writer.knowledge = mock_ks_writer
with patch.object(Crew,'reset_knowledge') as mock_reset_agent_knowledge:
with patch.object(Crew, "reset_knowledge") as mock_reset_agent_knowledge:
crew = Crew(
agents=[researcher, writer],
process=Process.sequential,
@@ -4606,5 +4648,7 @@ def test_reset_agent_knowledge_with_only_agent_knowledge(researcher,writer):
],
)
crew.reset_memories(command_type='agent_knowledge')
mock_reset_agent_knowledge.assert_called_once_with([mock_ks_research,mock_ks_writer])
crew.reset_memories(command_type="agent_knowledge")
mock_reset_agent_knowledge.assert_called_once_with(
[mock_ks_research, mock_ks_writer]
)

View File

@@ -9,6 +9,14 @@ from crewai.telemetry import Telemetry
from opentelemetry import trace
@pytest.fixture(autouse=True)
def cleanup_telemetry():
"""Automatically clean up Telemetry singleton between tests."""
Telemetry._instance = None
yield
Telemetry._instance = None
@pytest.mark.parametrize(
"env_var,value,expected_ready",
[

View File

@@ -1,11 +1,19 @@
import os
from unittest.mock import patch
from unittest.mock import patch, MagicMock
import pytest
from crewai.telemetry import Telemetry
@pytest.fixture(autouse=True)
def cleanup_telemetry():
"""Automatically clean up Telemetry singleton between tests."""
Telemetry._instance = None
yield
Telemetry._instance = None
@pytest.mark.parametrize("env_var,value,expected_ready", [
("OTEL_SDK_DISABLED", "true", False),
("OTEL_SDK_DISABLED", "TRUE", False),
@@ -28,3 +36,59 @@ def test_telemetry_enabled_by_default():
with patch("crewai.telemetry.telemetry.TracerProvider"):
telemetry = Telemetry()
assert telemetry.ready is True
def test_telemetry_disable_after_singleton_creation():
"""Test that telemetry operations are disabled when env var is set after singleton creation."""
with patch.dict(os.environ, {}, clear=True):
with patch("crewai.telemetry.telemetry.TracerProvider"):
telemetry = Telemetry()
assert telemetry.ready is True
mock_operation = MagicMock()
telemetry._safe_telemetry_operation(mock_operation)
mock_operation.assert_called_once()
mock_operation.reset_mock()
os.environ['CREWAI_DISABLE_TELEMETRY'] = 'true'
telemetry._safe_telemetry_operation(mock_operation)
mock_operation.assert_not_called()
def test_telemetry_disable_with_multiple_instances():
"""Test that multiple telemetry instances respect dynamically changed env vars."""
with patch.dict(os.environ, {}, clear=True):
with patch("crewai.telemetry.telemetry.TracerProvider"):
telemetry1 = Telemetry()
assert telemetry1.ready is True
os.environ['CREWAI_DISABLE_TELEMETRY'] = 'true'
telemetry2 = Telemetry()
assert telemetry2 is telemetry1
assert telemetry2.ready is True
mock_operation = MagicMock()
telemetry2._safe_telemetry_operation(mock_operation)
mock_operation.assert_not_called()
def test_telemetry_otel_sdk_disabled_after_creation():
"""Test that OTEL_SDK_DISABLED also works when set after singleton creation."""
with patch.dict(os.environ, {}, clear=True):
with patch("crewai.telemetry.telemetry.TracerProvider"):
telemetry = Telemetry()
assert telemetry.ready is True
mock_operation = MagicMock()
telemetry._safe_telemetry_operation(mock_operation)
mock_operation.assert_called_once()
mock_operation.reset_mock()
os.environ['OTEL_SDK_DISABLED'] = 'true'
telemetry._safe_telemetry_operation(mock_operation)
mock_operation.assert_not_called()

View File

@@ -0,0 +1,167 @@
import pytest
from unittest.mock import patch, MagicMock
from crewai.utilities.events.event_listener import event_listener
class TestFlowHumanInputIntegration:
"""Test integration between Flow execution and human input functionality."""
def test_console_formatter_pause_resume_methods(self):
"""Test that ConsoleFormatter pause/resume methods work correctly."""
formatter = event_listener.formatter
original_paused_state = formatter._live_paused
try:
formatter._live_paused = False
formatter.pause_live_updates()
assert formatter._live_paused
formatter.resume_live_updates()
assert not formatter._live_paused
finally:
formatter._live_paused = original_paused_state
@patch('builtins.input', return_value='')
def test_human_input_pauses_flow_updates(self, mock_input):
"""Test that human input pauses Flow status updates."""
from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
executor = CrewAgentExecutorMixin()
executor.crew = MagicMock()
executor.crew._train = False
executor._printer = MagicMock()
formatter = event_listener.formatter
original_paused_state = formatter._live_paused
try:
formatter._live_paused = False
with patch.object(formatter, 'pause_live_updates') as mock_pause, \
patch.object(formatter, 'resume_live_updates') as mock_resume:
result = executor._ask_human_input("Test result")
mock_pause.assert_called_once()
mock_resume.assert_called_once()
mock_input.assert_called_once()
assert result == ''
finally:
formatter._live_paused = original_paused_state
@patch('builtins.input', side_effect=['feedback', ''])
def test_multiple_human_input_rounds(self, mock_input):
"""Test multiple rounds of human input with Flow status management."""
from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
executor = CrewAgentExecutorMixin()
executor.crew = MagicMock()
executor.crew._train = False
executor._printer = MagicMock()
formatter = event_listener.formatter
original_paused_state = formatter._live_paused
try:
pause_calls = []
resume_calls = []
def track_pause():
pause_calls.append(True)
def track_resume():
resume_calls.append(True)
with patch.object(formatter, 'pause_live_updates', side_effect=track_pause), \
patch.object(formatter, 'resume_live_updates', side_effect=track_resume):
result1 = executor._ask_human_input("Test result 1")
assert result1 == 'feedback'
result2 = executor._ask_human_input("Test result 2")
assert result2 == ''
assert len(pause_calls) == 2
assert len(resume_calls) == 2
finally:
formatter._live_paused = original_paused_state
def test_pause_resume_with_no_live_session(self):
"""Test pause/resume methods handle case when no Live session exists."""
formatter = event_listener.formatter
original_live = formatter._live
original_paused_state = formatter._live_paused
try:
formatter._live = None
formatter._live_paused = False
formatter.pause_live_updates()
formatter.resume_live_updates()
assert not formatter._live_paused
finally:
formatter._live = original_live
formatter._live_paused = original_paused_state
def test_pause_resume_exception_handling(self):
"""Test that resume is called even if exception occurs during human input."""
from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
executor = CrewAgentExecutorMixin()
executor.crew = MagicMock()
executor.crew._train = False
executor._printer = MagicMock()
formatter = event_listener.formatter
original_paused_state = formatter._live_paused
try:
with patch.object(formatter, 'pause_live_updates') as mock_pause, \
patch.object(formatter, 'resume_live_updates') as mock_resume, \
patch('builtins.input', side_effect=KeyboardInterrupt("Test exception")):
with pytest.raises(KeyboardInterrupt):
executor._ask_human_input("Test result")
mock_pause.assert_called_once()
mock_resume.assert_called_once()
finally:
formatter._live_paused = original_paused_state
def test_training_mode_human_input(self):
"""Test human input in training mode."""
from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
executor = CrewAgentExecutorMixin()
executor.crew = MagicMock()
executor.crew._train = True
executor._printer = MagicMock()
formatter = event_listener.formatter
original_paused_state = formatter._live_paused
try:
with patch.object(formatter, 'pause_live_updates') as mock_pause, \
patch.object(formatter, 'resume_live_updates') as mock_resume, \
patch('builtins.input', return_value='training feedback'):
result = executor._ask_human_input("Test result")
mock_pause.assert_called_once()
mock_resume.assert_called_once()
assert result == 'training feedback'
executor._printer.print.assert_called()
call_args = [call[1]['content'] for call in executor._printer.print.call_args_list]
training_prompt_found = any('TRAINING MODE' in content for content in call_args)
assert training_prompt_found
finally:
formatter._live_paused = original_paused_state

View File

@@ -1,4 +1,4 @@
import asyncio
from collections import defaultdict
from typing import cast
from unittest.mock import Mock
@@ -313,5 +313,108 @@ def test_sets_parent_flow_when_inside_flow():
nonlocal captured_agent
captured_agent = source
result = flow.kickoff()
flow.kickoff()
assert captured_agent.parent_flow is flow
@pytest.mark.vcr(filter_headers=["authorization"])
def test_guardrail_is_called_using_string():
guardrail_events = defaultdict(list)
from crewai.utilities.events import LLMGuardrailCompletedEvent, LLMGuardrailStartedEvent
with crewai_event_bus.scoped_handlers():
@crewai_event_bus.on(LLMGuardrailStartedEvent)
def capture_guardrail_started(source, event):
guardrail_events["started"].append(event)
@crewai_event_bus.on(LLMGuardrailCompletedEvent)
def capture_guardrail_completed(source, event):
guardrail_events["completed"].append(event)
agent = Agent(
role="Sports Analyst",
goal="Gather information about the best soccer players",
backstory="""You are an expert at gathering and organizing information. You carefully collect details and present them in a structured way.""",
guardrail="""Only include Brazilian players, both women and men""",
)
result = agent.kickoff(messages="Top 10 best players in the world?")
assert len(guardrail_events['started']) == 2
assert len(guardrail_events['completed']) == 2
assert not guardrail_events['completed'][0].success
assert guardrail_events['completed'][1].success
assert "Here are the top 10 best soccer players in the world, focusing exclusively on Brazilian players" in result.raw
@pytest.mark.vcr(filter_headers=["authorization"])
def test_guardrail_is_called_using_callable():
guardrail_events = defaultdict(list)
from crewai.utilities.events import LLMGuardrailCompletedEvent, LLMGuardrailStartedEvent
with crewai_event_bus.scoped_handlers():
@crewai_event_bus.on(LLMGuardrailStartedEvent)
def capture_guardrail_started(source, event):
guardrail_events["started"].append(event)
@crewai_event_bus.on(LLMGuardrailCompletedEvent)
def capture_guardrail_completed(source, event):
guardrail_events["completed"].append(event)
agent = Agent(
role="Sports Analyst",
goal="Gather information about the best soccer players",
backstory="""You are an expert at gathering and organizing information. You carefully collect details and present them in a structured way.""",
guardrail=lambda output: (True, "Pelé - Santos, 1958"),
)
result = agent.kickoff(messages="Top 1 best players in the world?")
assert len(guardrail_events['started']) == 1
assert len(guardrail_events['completed']) == 1
assert guardrail_events['completed'][0].success
assert "Pelé - Santos, 1958" in result.raw
@pytest.mark.vcr(filter_headers=["authorization"])
def test_guardrail_reached_attempt_limit():
guardrail_events = defaultdict(list)
from crewai.utilities.events import LLMGuardrailCompletedEvent, LLMGuardrailStartedEvent
with crewai_event_bus.scoped_handlers():
@crewai_event_bus.on(LLMGuardrailStartedEvent)
def capture_guardrail_started(source, event):
guardrail_events["started"].append(event)
@crewai_event_bus.on(LLMGuardrailCompletedEvent)
def capture_guardrail_completed(source, event):
guardrail_events["completed"].append(event)
agent = Agent(
role="Sports Analyst",
goal="Gather information about the best soccer players",
backstory="""You are an expert at gathering and organizing information. You carefully collect details and present them in a structured way.""",
guardrail=lambda output: (False, "You are not allowed to include Brazilian players"),
guardrail_max_retries=2,
)
with pytest.raises(Exception, match="Agent's guardrail failed validation after 2 retries"):
agent.kickoff(messages="Top 10 best players in the world?")
assert len(guardrail_events['started']) == 3 # 2 retries + 1 initial call
assert len(guardrail_events['completed']) == 3 # 2 retries + 1 initial call
assert not guardrail_events['completed'][0].success
assert not guardrail_events['completed'][1].success
assert not guardrail_events['completed'][2].success
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_output_when_guardrail_returns_base_model():
class Player(BaseModel):
name: str
country: str
agent = Agent(
role="Sports Analyst",
goal="Gather information about the best soccer players",
backstory="""You are an expert at gathering and organizing information. You carefully collect details and present them in a structured way.""",
guardrail=lambda output: (True, Player(name="Lionel Messi", country="Argentina")),
)
result = agent.kickoff(messages="Top 10 best players in the world?")
assert result.pydantic == Player(name="Lionel Messi", country="Argentina")

View File

@@ -0,0 +1,116 @@
from unittest.mock import MagicMock, patch
from rich.tree import Tree
from rich.live import Live
from crewai.utilities.events.utils.console_formatter import ConsoleFormatter
class TestConsoleFormatterPauseResume:
"""Test ConsoleFormatter pause/resume functionality."""
def test_pause_live_updates_with_active_session(self):
"""Test pausing when Live session is active."""
formatter = ConsoleFormatter()
mock_live = MagicMock(spec=Live)
formatter._live = mock_live
formatter._live_paused = False
formatter.pause_live_updates()
mock_live.stop.assert_called_once()
assert formatter._live_paused
def test_pause_live_updates_when_already_paused(self):
"""Test pausing when already paused does nothing."""
formatter = ConsoleFormatter()
mock_live = MagicMock(spec=Live)
formatter._live = mock_live
formatter._live_paused = True
formatter.pause_live_updates()
mock_live.stop.assert_not_called()
assert formatter._live_paused
def test_pause_live_updates_with_no_session(self):
"""Test pausing when no Live session exists."""
formatter = ConsoleFormatter()
formatter._live = None
formatter._live_paused = False
formatter.pause_live_updates()
assert formatter._live_paused
def test_resume_live_updates_when_paused(self):
"""Test resuming when paused."""
formatter = ConsoleFormatter()
formatter._live_paused = True
formatter.resume_live_updates()
assert not formatter._live_paused
def test_resume_live_updates_when_not_paused(self):
"""Test resuming when not paused does nothing."""
formatter = ConsoleFormatter()
formatter._live_paused = False
formatter.resume_live_updates()
assert not formatter._live_paused
def test_print_after_resume_restarts_live_session(self):
"""Test that printing a Tree after resume creates new Live session."""
formatter = ConsoleFormatter()
formatter._live_paused = True
formatter._live = None
formatter.resume_live_updates()
assert not formatter._live_paused
tree = Tree("Test")
with patch('crewai.utilities.events.utils.console_formatter.Live') as mock_live_class:
mock_live_instance = MagicMock()
mock_live_class.return_value = mock_live_instance
formatter.print(tree)
mock_live_class.assert_called_once()
mock_live_instance.start.assert_called_once()
assert formatter._live == mock_live_instance
def test_multiple_pause_resume_cycles(self):
"""Test multiple pause/resume cycles work correctly."""
formatter = ConsoleFormatter()
mock_live = MagicMock(spec=Live)
formatter._live = mock_live
formatter._live_paused = False
formatter.pause_live_updates()
assert formatter._live_paused
mock_live.stop.assert_called_once()
assert formatter._live is None # Live session should be cleared
formatter.resume_live_updates()
assert not formatter._live_paused
formatter.pause_live_updates()
assert formatter._live_paused
formatter.resume_live_updates()
assert not formatter._live_paused
def test_pause_resume_state_initialization(self):
"""Test that _live_paused is properly initialized."""
formatter = ConsoleFormatter()
assert hasattr(formatter, '_live_paused')
assert not formatter._live_paused

11
uv.lock generated
View File

@@ -725,7 +725,7 @@ wheels = [
[[package]]
name = "crewai"
version = "0.126.0"
version = "0.130.0"
source = { editable = "." }
dependencies = [
{ name = "appdirs" },
@@ -814,7 +814,7 @@ requires-dist = [
{ name = "blinker", specifier = ">=1.9.0" },
{ name = "chromadb", specifier = ">=0.5.23" },
{ name = "click", specifier = ">=8.1.7" },
{ name = "crewai-tools", marker = "extra == 'tools'", specifier = "~=0.46.0" },
{ name = "crewai-tools", marker = "extra == 'tools'", specifier = "~=0.47.1" },
{ name = "docling", marker = "extra == 'docling'", specifier = ">=2.12.0" },
{ name = "instructor", specifier = ">=1.3.3" },
{ name = "json-repair", specifier = ">=0.25.2" },
@@ -866,7 +866,7 @@ dev = [
[[package]]
name = "crewai-tools"
version = "0.46.0"
version = "0.47.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "chromadb" },
@@ -880,10 +880,11 @@ dependencies = [
{ name = "pyright" },
{ name = "pytube" },
{ name = "requests" },
{ name = "tiktoken" },
]
sdist = { url = "https://files.pythonhosted.org/packages/0d/9e/69109f5d5b398b2edeccec1055e93cdceac3becd04407bcce97de6557180/crewai_tools-0.46.0.tar.gz", hash = "sha256:c8f89247199d528c77db4b450a1ca781b5d32405982467baf516ede4b2045bd1", size = 913715 }
sdist = { url = "https://files.pythonhosted.org/packages/0e/cd/fc5a96be8c108febcc2c767714e3ec9b70cca9be8e6b29bba7c1874fb6d2/crewai_tools-0.47.1.tar.gz", hash = "sha256:4de5ebf320aeae317ffabe2e4704b98b5d791f663196871fb5ad2e7bbea14a82", size = 921418 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ab/62/0b68637ce820fbb0385495bd6d75ceb27de53f060df5417f293419826481/crewai_tools-0.46.0-py3-none-any.whl", hash = "sha256:f8e60723869ca36ede7b43dcc1491ebefc93410a972d97b7b0ce59c4bd7a826b", size = 606190 },
{ url = "https://files.pythonhosted.org/packages/e3/2c/05d9fa584d9d814c0c8c4c3793df572222417695fe3d716f14bc274376d6/crewai_tools-0.47.1-py3-none-any.whl", hash = "sha256:4dc9bb0a08e3afa33c6b9efb163e47181801a7906be7241977426e6d3dec0a05", size = 606305 },
]
[[package]]