Compare commits


3 Commits

Author SHA1 Message Date
lorenzejay
d77e2cb1f8 Merge branch 'lorenze/feat/plan-execute-pattern' of github.com:crewAIInc/crewAI into lorenze/feat/plan-execute-pattern 2026-02-10 16:10:20 -08:00
Lorenze Jay
a6dcb275e1 Lorenze/feat planning pt 2 todo list gen (#4449)
* feat: introduce PlanningConfig for enhanced agent planning capabilities

This update adds a new PlanningConfig class to manage agent planning configurations, allowing for customizable planning behavior before task execution. The existing reasoning parameter is deprecated in favor of this new configuration, ensuring backward compatibility while enhancing the planning process. Additionally, the Agent class has been updated to utilize this new configuration, and relevant utility functions have been adjusted accordingly. Tests have been added to validate the new planning functionality and ensure proper integration with existing agent workflows.

* dropping redundancy

* fix test

* revert handle_reasoning here

* refactor: update reasoning handling in Agent class

This commit modifies the Agent class to conditionally call the handle_reasoning function based on the executor class being used. The legacy CrewAgentExecutor will continue to utilize handle_reasoning, while the new AgentExecutor will manage planning internally. Additionally, the PlanningConfig class has been referenced in the documentation to clarify its role in enabling or disabling planning. Tests have been updated to reflect these changes and ensure proper functionality.

* improve planning prompts

* matching

* refactor: remove default enabled flag from PlanningConfig in Agent class

* more cassettes

* fix test

* feat: enhance agent planning with structured todo management

This commit introduces a new planning system within the AgentExecutor class, allowing for the creation of structured todo items from planning steps. The TodoList and TodoItem models have been added to facilitate tracking of plan execution. The reasoning plan now includes a list of steps, improving the clarity and organization of agent tasks. Additionally, tests have been added to validate the new planning functionality and ensure proper integration with existing workflows.

* refactor: update planning prompt and remove deprecated methods in reasoning handler

* improve planning prompt

* improve handler

* linted

* linted
2026-02-10 16:08:26 -08:00
Lorenze Jay
79a01fca31 feat: introduce PlanningConfig for enhanced agent planning capabilities (#4344)
* feat: introduce PlanningConfig for enhanced agent planning capabilities

This update adds a new PlanningConfig class to manage agent planning configurations, allowing for customizable planning behavior before task execution. The existing reasoning parameter is deprecated in favor of this new configuration, ensuring backward compatibility while enhancing the planning process. Additionally, the Agent class has been updated to utilize this new configuration, and relevant utility functions have been adjusted accordingly. Tests have been added to validate the new planning functionality and ensure proper integration with existing agent workflows.

* dropping redundancy

* fix test

* revert handle_reasoning here

* refactor: update reasoning handling in Agent class

This commit modifies the Agent class to conditionally call the handle_reasoning function based on the executor class being used. The legacy CrewAgentExecutor will continue to utilize handle_reasoning, while the new AgentExecutor will manage planning internally. Additionally, the PlanningConfig class has been referenced in the documentation to clarify its role in enabling or disabling planning. Tests have been updated to reflect these changes and ensure proper functionality.

* improve planning prompts

* matching

* refactor: remove default enabled flag from PlanningConfig in Agent class

* more cassettes

* fix test

* refactor: update planning prompt and remove deprecated methods in reasoning handler

* improve planning prompt
2026-02-10 13:26:49 -08:00
38 changed files with 8590 additions and 452 deletions

View File

@@ -4,6 +4,7 @@ import urllib.request
import warnings
from crewai.agent.core import Agent
from crewai.agent.planning_config import PlanningConfig
from crewai.crew import Crew
from crewai.crews.crew_output import CrewOutput
from crewai.flow.flow import Flow
@@ -80,6 +81,7 @@ __all__ = [
"Flow",
"Knowledge",
"LLMGuardrail",
"PlanningConfig",
"Process",
"Task",
"TaskOutput",

View File

@@ -24,6 +24,7 @@ from pydantic import (
)
from typing_extensions import Self
from crewai.agent.planning_config import PlanningConfig
from crewai.agent.utils import (
ahandle_knowledge_retrieval,
apply_training_data,
@@ -212,13 +213,23 @@ class Agent(BaseAgent):
default="safe",
description="Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct execution).",
)
reasoning: bool = Field(
planning_config: PlanningConfig | None = Field(
default=None,
description="Configuration for agent planning before task execution.",
)
planning: bool = Field(
default=False,
description="Whether the agent should reflect and create a plan before executing a task.",
)
reasoning: bool = Field(
default=False,
description="[DEPRECATED: Use planning_config instead] Whether the agent should reflect and create a plan before executing a task.",
deprecated=True,
)
max_reasoning_attempts: int | None = Field(
default=None,
description="Maximum number of reasoning attempts before executing the task. If None, will try until ready.",
description="[DEPRECATED: Use planning_config.max_attempts instead] Maximum number of reasoning attempts before executing the task. If None, will try until ready.",
deprecated=True,
)
embedder: EmbedderConfig | None = Field(
default=None,
@@ -285,8 +296,26 @@ class Agent(BaseAgent):
if self.allow_code_execution:
self._validate_docker_installation()
# Handle backward compatibility: convert reasoning=True to planning_config
if self.reasoning and self.planning_config is None:
import warnings
warnings.warn(
"The 'reasoning' parameter is deprecated. Use 'planning_config=PlanningConfig()' instead.",
DeprecationWarning,
stacklevel=2,
)
self.planning_config = PlanningConfig(
max_attempts=self.max_reasoning_attempts,
)
return self
@property
def planning_enabled(self) -> bool:
"""Check if planning is enabled for this agent."""
return self.planning_config is not None or self.planning
def _setup_agent_executor(self) -> None:
if not self.cache_handler:
self.cache_handler = CacheHandler()
@@ -362,7 +391,11 @@ class Agent(BaseAgent):
ValueError: If the max execution time is not a positive integer.
RuntimeError: If the agent execution fails for other reasons.
"""
handle_reasoning(self, task)
# Only call handle_reasoning for legacy CrewAgentExecutor
# For AgentExecutor, planning is handled in AgentExecutor.generate_plan()
if self.executor_class is not AgentExecutor:
handle_reasoning(self, task)
self._inject_date_to_task(task)
if self.tools_handler:
@@ -599,7 +632,10 @@ class Agent(BaseAgent):
ValueError: If the max execution time is not a positive integer.
RuntimeError: If the agent execution fails for other reasons.
"""
handle_reasoning(self, task)
if self.executor_class is not AgentExecutor:
handle_reasoning(
self, task
) # we need this till CrewAgentExecutor migrates to AgentExecutor
self._inject_date_to_task(task)
if self.tools_handler:
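Taken together, these hunks give callers two ways to turn planning on: the new planning_config field, or the deprecated reasoning flag, which the validator converts into an equivalent PlanningConfig. A minimal sketch of both paths (the role/goal/backstory values are placeholders; the imports match the __init__.py hunk earlier in this diff):

```python
from crewai import Agent, PlanningConfig

# New style: the presence of a PlanningConfig enables planning.
planner = Agent(
    role="Researcher",
    goal="Research topics",
    backstory="Expert researcher",
    planning_config=PlanningConfig(max_attempts=2),
)
assert planner.planning_enabled is True

# Deprecated style: reasoning=True emits a DeprecationWarning and is
# converted to planning_config with max_attempts=max_reasoning_attempts.
legacy = Agent(
    role="Researcher",
    goal="Research topics",
    backstory="Expert researcher",
    reasoning=True,
    max_reasoning_attempts=3,
)
assert legacy.planning_config is not None
assert legacy.planning_config.max_attempts == 3
```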

View File

@@ -0,0 +1,83 @@
from __future__ import annotations
from typing import Any
from pydantic import BaseModel, Field
class PlanningConfig(BaseModel):
"""Configuration for agent planning/reasoning before task execution.
This allows users to customize the planning behavior including prompts,
iteration limits, and the LLM used for planning.
Note: To disable planning, omit planning_config and leave planning=False
(the default) on the Agent. The presence of a PlanningConfig enables planning.
Attributes:
max_attempts: Maximum number of planning refinement attempts.
If None, will continue until the agent indicates readiness.
max_steps: Maximum number of steps in the generated plan.
system_prompt: Custom system prompt for planning. Uses default if None.
plan_prompt: Custom prompt for creating the initial plan.
refine_prompt: Custom prompt for refining the plan.
llm: LLM to use for planning. Uses agent's LLM if None.
Example:
```python
from crewai import Agent
from crewai.agent.planning_config import PlanningConfig
# Simple usage
agent = Agent(
role="Researcher",
goal="Research topics",
backstory="Expert researcher",
planning_config=PlanningConfig(),
)
# Customized planning
agent = Agent(
role="Researcher",
goal="Research topics",
backstory="Expert researcher",
planning_config=PlanningConfig(
max_attempts=3,
max_steps=10,
plan_prompt="Create a focused plan for: {description}",
llm="gpt-4o-mini", # Use cheaper model for planning
),
)
```
"""
max_attempts: int | None = Field(
default=None,
description=(
"Maximum number of planning refinement attempts. "
"If None, will continue until the agent indicates readiness."
),
)
max_steps: int = Field(
default=20,
description="Maximum number of steps in the generated plan.",
ge=1,
)
system_prompt: str | None = Field(
default=None,
description="Custom system prompt for planning. Uses default if None.",
)
plan_prompt: str | None = Field(
default=None,
description="Custom prompt for creating the initial plan.",
)
refine_prompt: str | None = Field(
default=None,
description="Custom prompt for refining the plan.",
)
llm: str | Any | None = Field(
default=None,
description="LLM to use for planning. Uses agent's LLM if None.",
)
model_config = {"arbitrary_types_allowed": True}

View File

@@ -28,13 +28,20 @@ if TYPE_CHECKING:
def handle_reasoning(agent: Agent, task: Task) -> None:
"""Handle the reasoning process for an agent before task execution.
"""Handle the reasoning/planning process for an agent before task execution.
This function checks if planning is enabled for the agent and, if so,
creates a plan that gets appended to the task description.
Note: This function is used by CrewAgentExecutor (legacy path).
For AgentExecutor, planning is handled in AgentExecutor.generate_plan().
Args:
agent: The agent performing the task.
task: The task to execute.
"""
if not agent.reasoning:
# Check if planning is enabled using the planning_enabled property
if not getattr(agent, "planning_enabled", False):
return
try:
@@ -43,13 +50,13 @@ def handle_reasoning(agent: Agent, task: Task) -> None:
AgentReasoningOutput,
)
reasoning_handler = AgentReasoning(task=task, agent=agent)
reasoning_output: AgentReasoningOutput = (
reasoning_handler.handle_agent_reasoning()
planning_handler = AgentReasoning(agent=agent, task=task)
planning_output: AgentReasoningOutput = (
planning_handler.handle_agent_reasoning()
)
task.description += f"\n\nReasoning Plan:\n{reasoning_output.plan.plan}"
task.description += f"\n\nPlanning:\n{planning_output.plan.plan}"
except Exception as e:
agent._logger.log("error", f"Error during reasoning process: {e!s}")
agent._logger.log("error", f"Error during planning: {e!s}")
def build_task_prompt_with_schema(task: Task, task_prompt: str, i18n: I18N) -> str:

View File

@@ -9,7 +9,7 @@ class ReasoningEvent(BaseEvent):
type: str
attempt: int = 1
agent_role: str
task_id: str
task_id: str | None = None
task_name: str | None = None
from_task: Any | None = None
agent_id: str | None = None

View File

@@ -68,6 +68,7 @@ from crewai.utilities.agent_utils import (
)
from crewai.utilities.constants import TRAINING_DATA_FILE
from crewai.utilities.i18n import I18N, get_i18n
from crewai.utilities.planning_types import PlanStep, TodoItem, TodoList
from crewai.utilities.printer import Printer
from crewai.utilities.string_utils import sanitize_tool_name
from crewai.utilities.tool_utils import execute_tool_and_check_finality
@@ -101,6 +102,13 @@ class AgentReActState(BaseModel):
ask_for_human_input: bool = Field(default=False)
use_native_tools: bool = Field(default=False)
pending_tool_calls: list[Any] = Field(default_factory=list)
plan: str | None = Field(default=None, description="Generated execution plan")
plan_ready: bool = Field(
default=False, description="Whether agent is ready to execute"
)
todos: TodoList = Field(
default_factory=TodoList, description="Todo list for tracking plan execution"
)
class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
@@ -389,6 +397,67 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
self._state.iterations = value
@start()
def generate_plan(self) -> None:
"""Generate execution plan if planning is enabled.
This is the entry point for the agent execution flow. If planning is
enabled on the agent, it generates a plan before execution begins.
The plan is stored in state and todos are created from the steps.
"""
if not getattr(self.agent, "planning_enabled", False):
return
try:
from crewai.utilities.reasoning_handler import AgentReasoning
if self.task:
planning_handler = AgentReasoning(agent=self.agent, task=self.task)
else:
# For kickoff() path - use input text directly, no Task needed
input_text = getattr(self, "_kickoff_input", "")
planning_handler = AgentReasoning(
agent=self.agent,
description=input_text or "Complete the requested task",
expected_output="Complete the task successfully",
)
output = planning_handler.handle_agent_reasoning()
self.state.plan = output.plan.plan
self.state.plan_ready = output.plan.ready
if self.state.plan_ready and output.plan.steps:
self._create_todos_from_plan(output.plan.steps)
# Backward compatibility: append plan to task description
# This can be removed in Phase 2 when plan execution is implemented
if self.task and self.state.plan:
self.task.description += f"\n\nPlanning:\n{self.state.plan}"
except Exception as e:
if hasattr(self.agent, "_logger"):
self.agent._logger.log("error", f"Error during planning: {e!s}")
def _create_todos_from_plan(self, steps: list[PlanStep]) -> None:
"""Convert plan steps into trackable todo items.
Args:
steps: List of PlanStep objects from the reasoning handler.
"""
todos: list[TodoItem] = []
for step in steps:
todo = TodoItem(
step_number=step.step_number,
description=step.description,
tool_to_use=step.tool_to_use,
depends_on=step.depends_on,
status="pending",
)
todos.append(todo)
self.state.todos = TodoList(items=todos)
@listen(generate_plan)
def initialize_reasoning(self) -> Literal["initialized"]:
"""Initialize the reasoning flow and emit agent start logs."""
self._show_start_logs()
@@ -1075,6 +1144,10 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
self.state.is_finished = False
self.state.use_native_tools = False
self.state.pending_tool_calls = []
self.state.plan = None
self.state.plan_ready = False
self._kickoff_input = inputs.get("input", "")
if "system" in self.prompt:
prompt = cast("SystemPromptResult", self.prompt)
@@ -1159,6 +1232,10 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
self.state.is_finished = False
self.state.use_native_tools = False
self.state.pending_tool_calls = []
self.state.plan = None
self.state.plan_ready = False
self._kickoff_input = inputs.get("input", "")
if "system" in self.prompt:
prompt = cast("SystemPromptResult", self.prompt)

View File

@@ -4,7 +4,6 @@ import json
import logging
import os
from typing import TYPE_CHECKING, Any, TypedDict
from urllib.parse import urlparse
from pydantic import BaseModel
from typing_extensions import Self
@@ -176,51 +175,11 @@ class AzureCompletion(BaseLLM):
prefix in model.lower() for prefix in ["gpt-", "o1-", "text-"]
)
self.is_azure_openai_endpoint = self._is_azure_openai_deployment_endpoint(
self.endpoint
self.is_azure_openai_endpoint = (
"openai.azure.com" in self.endpoint
and "/openai/deployments/" in self.endpoint
)
@staticmethod
def _parse_endpoint_url(endpoint: str):
parsed_endpoint = urlparse(endpoint)
if parsed_endpoint.hostname:
return parsed_endpoint
# Support endpoint values without a URL scheme.
return urlparse(f"https://{endpoint}")
@staticmethod
def _is_azure_openai_hostname(endpoint: str) -> bool:
parsed_endpoint = AzureCompletion._parse_endpoint_url(endpoint)
hostname = parsed_endpoint.hostname or ""
labels = [label for label in hostname.lower().split(".") if label]
return len(labels) >= 3 and labels[-3:] == ["openai", "azure", "com"]
@staticmethod
def _get_endpoint_path_segments(endpoint: str) -> list[str]:
parsed_endpoint = AzureCompletion._parse_endpoint_url(endpoint)
return [segment for segment in parsed_endpoint.path.split("/") if segment]
@staticmethod
def _is_azure_openai_deployment_endpoint(endpoint: str) -> bool:
if not AzureCompletion._is_azure_openai_hostname(endpoint):
return False
path_segments = AzureCompletion._get_endpoint_path_segments(endpoint)
return len(path_segments) >= 3 and path_segments[:2] == [
"openai",
"deployments",
]
@staticmethod
def _is_azure_openai_deployments_collection(endpoint: str) -> bool:
if not AzureCompletion._is_azure_openai_hostname(endpoint):
return False
path_segments = AzureCompletion._get_endpoint_path_segments(endpoint)
return path_segments == ["openai", "deployments"]
@staticmethod
def _validate_and_fix_endpoint(endpoint: str, model: str) -> str:
"""Validate and fix Azure endpoint URL format.
@@ -235,12 +194,10 @@ class AzureCompletion(BaseLLM):
Returns:
Validated and potentially corrected endpoint URL
"""
if AzureCompletion._is_azure_openai_hostname(
endpoint
) and not AzureCompletion._is_azure_openai_deployment_endpoint(endpoint):
if "openai.azure.com" in endpoint and "/openai/deployments/" not in endpoint:
endpoint = endpoint.rstrip("/")
if not AzureCompletion._is_azure_openai_deployments_collection(endpoint):
if not endpoint.endswith("/openai/deployments"):
deployment_name = model.replace("azure/", "")
endpoint = f"{endpoint}/openai/deployments/{deployment_name}"
logging.info(f"Constructed Azure OpenAI endpoint URL: {endpoint}")

View File

@@ -58,9 +58,14 @@
}
},
"reasoning": {
"initial_plan": "You are {role}, a professional with the following background: {backstory}\n\nYour primary goal is: {goal}\n\nAs {role}, you are creating a strategic plan for a task that requires your expertise and unique perspective.",
"refine_plan": "You are {role}, a professional with the following background: {backstory}\n\nYour primary goal is: {goal}\n\nAs {role}, you are refining a strategic plan for a task that requires your expertise and unique perspective.",
"create_plan_prompt": "You are {role} with this background: {backstory}\n\nYour primary goal is: {goal}\n\nYou have been assigned the following task:\n{description}\n\nExpected output:\n{expected_output}\n\nAvailable tools: {tools}\n\nBefore executing this task, create a detailed plan that leverages your expertise as {role} and outlines:\n1. Your understanding of the task from your professional perspective\n2. The key steps you'll take to complete it, drawing on your background and skills\n3. How you'll approach any challenges that might arise, considering your expertise\n4. How you'll strategically use the available tools based on your experience, exactly what tools to use and how to use them\n5. The expected outcome and how it aligns with your goal\n\nAfter creating your plan, assess whether you feel ready to execute the task or if you could do better.\nConclude with one of these statements:\n- \"READY: I am ready to execute the task.\"\n- \"NOT READY: I need to refine my plan because [specific reason].\"",
"refine_plan_prompt": "You are {role} with this background: {backstory}\n\nYour primary goal is: {goal}\n\nYou created the following plan for this task:\n{current_plan}\n\nHowever, you indicated that you're not ready to execute the task yet.\n\nPlease refine your plan further, drawing on your expertise as {role} to address any gaps or uncertainties. As you refine your plan, be specific about which available tools you will use, how you will use them, and why they are the best choices for each step. Clearly outline your tool usage strategy as part of your improved plan.\n\nAfter refining your plan, assess whether you feel ready to execute the task.\nConclude with one of these statements:\n- \"READY: I am ready to execute the task.\"\n- \"NOT READY: I need to refine my plan further because [specific reason].\""
"initial_plan": "You are {role}. Create a focused execution plan using only the essential steps needed.",
"refine_plan": "You are {role}. Refine your plan to address the specific gap while keeping it minimal.",
"create_plan_prompt": "You are {role}.\n\nTask: {description}\n\nExpected output: {expected_output}\n\nAvailable tools: {tools}\n\nCreate a focused plan with ONLY the essential steps needed. Most tasks require just 2-5 steps. Do NOT pad with unnecessary steps like \"review\", \"verify\", \"document\", or \"finalize\" unless explicitly required.\n\nFor each step, specify the action and which tool to use (if any).\n\nConclude with:\n- \"READY: I am ready to execute the task.\"\n- \"NOT READY: I need to refine my plan because [specific reason].\"",
"refine_plan_prompt": "Your plan:\n{current_plan}\n\nYou indicated you're not ready. Address the specific gap while keeping the plan minimal.\n\nConclude with READY or NOT READY."
},
"planning": {
"system_prompt": "You are a strategic planning assistant. Create minimal, effective execution plans. Prefer fewer steps over more.",
"create_plan_prompt": "Create a focused execution plan for the following task:\n\n## Task\n{description}\n\n## Expected Output\n{expected_output}\n\n## Available Tools\n{tools}\n\n## Planning Principles\nFocus on WHAT needs to be accomplished, not HOW. Group related actions into logical units. Fewer steps = better. Most tasks need 3-6 steps. Hard limit: {max_steps} steps.\n\n## Step Types (only these are valid):\n1. **Tool Step**: Uses a tool to gather information or take action\n2. **Output Step**: Synthesizes prior results into the final deliverable (usually the last step)\n\n## Rules:\n- Each step must either USE A TOOL or PRODUCE THE FINAL OUTPUT\n- Combine related tool calls: \"Research A, B, and C\" = ONE step, not three\n- Combine all synthesis into ONE final output step\n- NO standalone \"thinking\" steps (review, verify, confirm, refine, analyze) - these happen naturally between steps\n\nFor each step: State the action, specify the tool (if any), and note dependencies.\n\nAfter your plan, state READY or NOT READY.",
"refine_plan_prompt": "Your previous plan:\n{current_plan}\n\nYou indicated you weren't ready. Refine your plan to address the specific gap.\n\nKeep the plan minimal - only add steps that directly address the issue.\n\nConclude with READY or NOT READY as before."
}
}
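The new "planning" entries are plain str.format templates, so they can be previewed outside an agent. A hypothetical fill-in, assuming get_i18n() takes no arguments (the import path is the one used in the AgentExecutor diff; the task values are invented):

```python
from crewai.utilities.i18n import get_i18n

i18n = get_i18n()
prompt = i18n.retrieve("planning", "create_plan_prompt").format(
    description="Summarize the latest release notes",  # invented task
    expected_output="A five-bullet summary",
    tools="WebSearchTool, FileReadTool",
    max_steps=20,  # PlanningConfig default
)
print(prompt)  # renders the "## Task / ## Expected Output / ..." sections above
```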

View File

@@ -0,0 +1,103 @@
"""Types for agent planning and todo tracking."""
from __future__ import annotations
from typing import Literal
from uuid import uuid4
from pydantic import BaseModel, Field
# Todo status type
TodoStatus = Literal["pending", "running", "completed"]
class PlanStep(BaseModel):
"""A single step in the reasoning plan."""
step_number: int = Field(description="Step number (1-based)")
description: str = Field(description="What to do in this step")
tool_to_use: str | None = Field(
default=None, description="Tool to use for this step, if any"
)
depends_on: list[int] = Field(
default_factory=list, description="Step numbers this step depends on"
)
class TodoItem(BaseModel):
"""A single todo item representing a step in the execution plan."""
id: str = Field(default_factory=lambda: str(uuid4()))
step_number: int = Field(description="Order of this step in the plan (1-based)")
description: str = Field(description="What needs to be done")
tool_to_use: str | None = Field(
default=None, description="Tool to use for this step, if any"
)
status: TodoStatus = Field(default="pending", description="Current status")
depends_on: list[int] = Field(
default_factory=list, description="Step numbers this depends on"
)
result: str | None = Field(
default=None, description="Result after completion, if any"
)
class TodoList(BaseModel):
"""Collection of todos for tracking plan execution."""
items: list[TodoItem] = Field(default_factory=list)
@property
def current_todo(self) -> TodoItem | None:
"""Get the currently running todo item."""
for item in self.items:
if item.status == "running":
return item
return None
@property
def next_pending(self) -> TodoItem | None:
"""Get the next pending todo item."""
for item in self.items:
if item.status == "pending":
return item
return None
@property
def is_complete(self) -> bool:
"""Check if all todos are completed."""
return len(self.items) > 0 and all(
item.status == "completed" for item in self.items
)
@property
def pending_count(self) -> int:
"""Count of pending todos."""
return sum(1 for item in self.items if item.status == "pending")
@property
def completed_count(self) -> int:
"""Count of completed todos."""
return sum(1 for item in self.items if item.status == "completed")
def get_by_step_number(self, step_number: int) -> TodoItem | None:
"""Get a todo by its step number."""
for item in self.items:
if item.step_number == step_number:
return item
return None
def mark_running(self, step_number: int) -> None:
"""Mark a todo as running by step number."""
item = self.get_by_step_number(step_number)
if item:
item.status = "running"
def mark_completed(self, step_number: int, result: str | None = None) -> None:
"""Mark a todo as completed by step number."""
item = self.get_by_step_number(step_number)
if item:
item.status = "completed"
if result:
item.result = result
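A small usage sketch of these types, based only on the definitions above; the step contents are invented:

```python
from crewai.utilities.planning_types import PlanStep, TodoItem, TodoList

steps = [
    PlanStep(step_number=1, description="Search for recent sources",
             tool_to_use="WebSearchTool"),
    PlanStep(step_number=2, description="Write the summary", depends_on=[1]),
]

# Mirror what AgentExecutor._create_todos_from_plan does with plan steps.
todos = TodoList(items=[
    TodoItem(step_number=s.step_number, description=s.description,
             tool_to_use=s.tool_to_use, depends_on=s.depends_on)
    for s in steps
])

todos.mark_running(1)
assert todos.current_todo.step_number == 1
todos.mark_completed(1, result="Found 3 sources")
assert todos.next_pending.step_number == 2
todos.mark_completed(2)
assert todos.is_complete and todos.completed_count == 2
```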

View File

@@ -1,10 +1,13 @@
"""Handles planning/reasoning for agents before task execution."""
from __future__ import annotations
import json
import logging
from typing import Any, Final, Literal, cast
from typing import TYPE_CHECKING, Any, Final, Literal, cast
from pydantic import BaseModel, Field
from crewai.agent import Agent
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.reasoning_events import (
AgentReasoningCompletedEvent,
@@ -12,14 +15,30 @@ from crewai.events.types.reasoning_events import (
AgentReasoningStartedEvent,
)
from crewai.llm import LLM
from crewai.task import Task
from crewai.utilities.llm_utils import create_llm
from crewai.utilities.planning_types import PlanStep
from crewai.utilities.string_utils import sanitize_tool_name
if TYPE_CHECKING:
from crewai.agent import Agent
from crewai.agent.planning_config import PlanningConfig
from crewai.task import Task
class ReasoningPlan(BaseModel):
"""Model representing a reasoning plan for a task."""
plan: str = Field(description="The detailed reasoning plan for the task.")
steps: list[PlanStep] = Field(
default_factory=list, description="Structured steps to execute"
)
ready: bool = Field(description="Whether the agent is ready to execute the task.")
@@ -29,24 +48,63 @@ class AgentReasoningOutput(BaseModel):
plan: ReasoningPlan = Field(description="The reasoning plan for the task.")
# Aliases for backward compatibility
PlanningPlan = ReasoningPlan
AgentPlanningOutput = AgentReasoningOutput
FUNCTION_SCHEMA: Final[dict[str, Any]] = {
"type": "function",
"function": {
"name": "create_reasoning_plan",
"description": "Create or refine a reasoning plan for a task",
"description": "Create or refine a reasoning plan for a task with structured steps",
"parameters": {
"type": "object",
"properties": {
"plan": {
"type": "string",
"description": "The detailed reasoning plan for the task.",
"description": "A brief summary of the overall plan.",
},
"steps": {
"type": "array",
"description": "List of discrete steps to execute the plan",
"items": {
"type": "object",
"properties": {
"step_number": {
"type": "integer",
"description": "Step number (1-based)",
},
"description": {
"type": "string",
"description": "What to do in this step",
},
"tool_to_use": {
"type": ["string", "null"],
"description": "Tool to use for this step, or null if no tool needed",
},
"depends_on": {
"type": "array",
"items": {"type": "integer"},
"description": "Step numbers this step depends on (empty array if none)",
},
},
"required": [
"step_number",
"description",
"tool_to_use",
"depends_on",
],
"additionalProperties": False,
},
},
"ready": {
"type": "boolean",
"description": "Whether the agent is ready to execute the task.",
},
},
"required": ["plan", "ready"],
"required": ["plan", "steps", "ready"],
"additionalProperties": False,
},
},
}
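For orientation, tool-call arguments that satisfy the expanded schema would look roughly like this (contents invented):

```python
example_args = {
    "plan": "Research the topic, then write the summary.",
    "steps": [
        {"step_number": 1, "description": "Search for recent sources",
         "tool_to_use": "WebSearchTool", "depends_on": []},
        {"step_number": 2, "description": "Synthesize findings into the final summary",
         "tool_to_use": None, "depends_on": [1]},
    ],
    "ready": True,
}
```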
@@ -54,41 +112,101 @@ FUNCTION_SCHEMA: Final[dict[str, Any]] = {
class AgentReasoning:
"""
Handles the agent reasoning process, enabling an agent to reflect and create a plan
before executing a task.
Handles the agent planning/reasoning process, enabling an agent to reflect
and create a plan before executing a task.
Attributes:
task: The task for which the agent is reasoning.
agent: The agent performing the reasoning.
llm: The language model used for reasoning.
task: The task for which the agent is planning (optional).
agent: The agent performing the planning.
config: The planning configuration.
llm: The language model used for planning.
logger: Logger for logging events and errors.
description: Task description or input text for planning.
expected_output: Expected output description.
"""
def __init__(self, task: Task, agent: Agent) -> None:
"""Initialize the AgentReasoning with a task and an agent.
def __init__(
self,
agent: Agent,
task: Task | None = None,
*,
description: str | None = None,
expected_output: str | None = None,
) -> None:
"""Initialize the AgentReasoning with an agent and optional task.
Args:
task: The task for which the agent is reasoning.
agent: The agent performing the reasoning.
agent: The agent performing the planning.
task: The task for which the agent is planning (optional).
description: Task description or input text (used if task is None).
expected_output: Expected output (used if task is None).
"""
self.task = task
self.agent = agent
self.llm = cast(LLM, agent.llm)
self.task = task
# Use task attributes if available, otherwise use provided values
self._description = description or (
task.description if task else "Complete the requested task"
)
self._expected_output = expected_output or (
task.expected_output if task else "Complete the task successfully"
)
self.config = self._get_planning_config()
self.llm = self._resolve_llm()
self.logger = logging.getLogger(__name__)
def handle_agent_reasoning(self) -> AgentReasoningOutput:
"""Public method for the reasoning process that creates and refines a plan for the task until the agent is ready to execute it.
@property
def description(self) -> str:
"""Get the task/input description."""
return self._description
@property
def expected_output(self) -> str:
"""Get the expected output."""
return self._expected_output
def _get_planning_config(self) -> PlanningConfig:
"""Get the planning configuration from the agent.
Returns:
AgentReasoningOutput: The output of the agent reasoning process.
The planning configuration, using defaults if not set.
"""
# Emit a reasoning started event (attempt 1)
from crewai.agent.planning_config import PlanningConfig
if self.agent.planning_config is not None:
return self.agent.planning_config
# Fallback for backward compatibility
return PlanningConfig(
max_attempts=getattr(self.agent, "max_reasoning_attempts", None),
)
def _resolve_llm(self) -> LLM:
"""Resolve which LLM to use for planning.
Returns:
The LLM to use - either from config or the agent's LLM.
"""
if self.config.llm is not None:
if isinstance(self.config.llm, LLM):
return self.config.llm
return create_llm(self.config.llm)
return cast(LLM, self.agent.llm)
def handle_agent_reasoning(self) -> AgentReasoningOutput:
"""Public method for the planning process that creates and refines a plan
for the task until the agent is ready to execute it.
Returns:
AgentReasoningOutput: The output of the agent planning process.
"""
task_id = str(self.task.id) if self.task else "kickoff"
# Emit a planning started event (attempt 1)
try:
crewai_event_bus.emit(
self.agent,
AgentReasoningStartedEvent(
agent_role=self.agent.role,
task_id=str(self.task.id),
task_id=task_id,
attempt=1,
from_task=self.task,
),
@@ -98,13 +216,13 @@ class AgentReasoning:
pass
try:
output = self.__handle_agent_reasoning()
output = self._execute_planning()
crewai_event_bus.emit(
self.agent,
AgentReasoningCompletedEvent(
agent_role=self.agent.role,
task_id=str(self.task.id),
task_id=task_id,
plan=output.plan.plan,
ready=output.plan.ready,
attempt=1,
@@ -115,71 +233,77 @@ class AgentReasoning:
return output
except Exception as e:
# Emit reasoning failed event
# Emit planning failed event
try:
crewai_event_bus.emit(
self.agent,
AgentReasoningFailedEvent(
agent_role=self.agent.role,
task_id=str(self.task.id),
task_id=task_id,
error=str(e),
attempt=1,
from_task=self.task,
from_agent=self.agent,
),
)
except Exception as e:
logging.error(f"Error emitting reasoning failed event: {e}")
except Exception as event_error:
logging.error(f"Error emitting planning failed event: {event_error}")
raise
def __handle_agent_reasoning(self) -> AgentReasoningOutput:
"""Private method that handles the agent reasoning process.
def _execute_planning(self) -> AgentReasoningOutput:
"""Execute the planning process.
Returns:
The output of the agent reasoning process.
The output of the agent planning process.
"""
plan, ready = self.__create_initial_plan()
plan, steps, ready = self._create_initial_plan()
plan, steps, ready = self._refine_plan_if_needed(plan, steps, ready)
plan, ready = self.__refine_plan_if_needed(plan, ready)
reasoning_plan = ReasoningPlan(plan=plan, ready=ready)
reasoning_plan = ReasoningPlan(plan=plan, steps=steps, ready=ready)
return AgentReasoningOutput(plan=reasoning_plan)
def __create_initial_plan(self) -> tuple[str, bool]:
"""Creates the initial reasoning plan for the task.
def _create_initial_plan(self) -> tuple[str, list[PlanStep], bool]:
"""Creates the initial plan for the task.
Returns:
The initial plan and whether the agent is ready to execute the task.
A tuple of the plan summary, list of steps, and whether the agent is ready.
"""
reasoning_prompt = self.__create_reasoning_prompt()
planning_prompt = self._create_planning_prompt()
if self.llm.supports_function_calling():
plan, ready = self.__call_with_function(reasoning_prompt, "initial_plan")
return plan, ready
response = _call_llm_with_reasoning_prompt(
llm=self.llm,
prompt=reasoning_prompt,
task=self.task,
reasoning_agent=self.agent,
backstory=self.__get_agent_backstory(),
plan_type="initial_plan",
plan, steps, ready = self._call_with_function(
planning_prompt, "create_plan"
)
return plan, steps, ready
response = self._call_llm_with_prompt(
prompt=planning_prompt,
plan_type="create_plan",
)
return self.__parse_reasoning_response(str(response))
plan, ready = self._parse_planning_response(str(response))
return plan, [], ready # No structured steps from text parsing
def __refine_plan_if_needed(self, plan: str, ready: bool) -> tuple[str, bool]:
"""Refines the reasoning plan if the agent is not ready to execute the task.
def _refine_plan_if_needed(
self, plan: str, steps: list[PlanStep], ready: bool
) -> tuple[str, list[PlanStep], bool]:
"""Refines the plan if the agent is not ready to execute the task.
Args:
plan: The current reasoning plan.
plan: The current plan.
steps: The current list of steps.
ready: Whether the agent is ready to execute the task.
Returns:
The refined plan and whether the agent is ready to execute the task.
The refined plan, steps, and whether the agent is ready to execute.
"""
attempt = 1
max_attempts = self.agent.max_reasoning_attempts
max_attempts = self.config.max_attempts
task_id = str(self.task.id) if self.task else "kickoff"
current_attempt = attempt + 1
while not ready and (max_attempts is None or attempt < max_attempts):
# Emit event for each refinement attempt
@@ -188,62 +312,82 @@ class AgentReasoning:
self.agent,
AgentReasoningStartedEvent(
agent_role=self.agent.role,
task_id=str(self.task.id),
attempt=attempt + 1,
task_id=task_id,
attempt=current_attempt,
from_task=self.task,
),
)
except Exception: # noqa: S110
pass
refine_prompt = self.__create_refine_prompt(plan)
refine_prompt = self._create_refine_prompt(plan)
if self.llm.supports_function_calling():
plan, ready = self.__call_with_function(refine_prompt, "refine_plan")
plan, steps, ready = self._call_with_function(
refine_prompt, "refine_plan"
)
else:
response = _call_llm_with_reasoning_prompt(
llm=self.llm,
response = self._call_llm_with_prompt(
prompt=refine_prompt,
task=self.task,
reasoning_agent=self.agent,
backstory=self.__get_agent_backstory(),
plan_type="refine_plan",
)
plan, ready = self.__parse_reasoning_response(str(response))
plan, ready = self._parse_planning_response(str(response))
steps = [] # No structured steps from text parsing
# Emit completed event for this refinement attempt
try:
crewai_event_bus.emit(
self.agent,
AgentReasoningCompletedEvent(
agent_role=self.agent.role,
task_id=task_id,
plan=plan,
ready=ready,
attempt=current_attempt,
from_task=self.task,
from_agent=self.agent,
),
)
except Exception: # noqa: S110
pass
attempt += 1
if max_attempts is not None and attempt >= max_attempts:
self.logger.warning(
f"Agent reasoning reached maximum attempts ({max_attempts}) without being ready. Proceeding with current plan."
f"Agent planning reached maximum attempts ({max_attempts}) "
"without being ready. Proceeding with current plan."
)
break
return plan, ready
return plan, steps, ready
def __call_with_function(self, prompt: str, prompt_type: str) -> tuple[str, bool]:
"""Calls the LLM with function calling to get a reasoning plan.
def _call_with_function(
self, prompt: str, plan_type: Literal["create_plan", "refine_plan"]
) -> tuple[str, list[PlanStep], bool]:
"""Calls the LLM with function calling to get a plan.
Args:
prompt: The prompt to send to the LLM.
prompt_type: The type of prompt (initial_plan or refine_plan).
plan_type: The type of plan being created.
Returns:
A tuple containing the plan and whether the agent is ready.
A tuple containing the plan summary, list of steps, and whether the agent is ready.
"""
self.logger.debug(f"Using function calling for {prompt_type} reasoning")
self.logger.debug(f"Using function calling for {plan_type} planning")
try:
system_prompt = self.agent.i18n.retrieve("reasoning", prompt_type).format(
role=self.agent.role,
goal=self.agent.goal,
backstory=self.__get_agent_backstory(),
)
system_prompt = self._get_system_prompt()
# Prepare a simple callable that just returns the tool arguments as JSON
def _create_reasoning_plan(plan: str, ready: bool = True) -> str:
"""Return the reasoning plan result in JSON string form."""
return json.dumps({"plan": plan, "ready": ready})
def _create_reasoning_plan(
plan: str,
steps: list[dict[str, Any]] | None = None,
ready: bool = True,
) -> str:
"""Return the planning result in JSON string form."""
return json.dumps({"plan": plan, "steps": steps or [], "ready": ready})
response = self.llm.call(
[
@@ -255,19 +399,33 @@ class AgentReasoning:
from_task=self.task,
from_agent=self.agent,
)
self.logger.debug(f"Function calling response: {response[:100]}...")
try:
result = json.loads(response)
if "plan" in result and "ready" in result:
return result["plan"], result["ready"]
# Parse steps from the response
steps: list[PlanStep] = []
raw_steps = result.get("steps", [])
try:
for step_data in raw_steps:
step = PlanStep(
step_number=step_data.get("step_number", 0),
description=step_data.get("description", ""),
tool_to_use=step_data.get("tool_to_use"),
depends_on=step_data.get("depends_on", []),
)
steps.append(step)
except Exception as step_error:
self.logger.warning(
f"Failed to parse step: {step_data}, error: {step_error}"
)
return result["plan"], steps, result["ready"]
except (json.JSONDecodeError, KeyError):
pass
response_str = str(response)
return (
response_str,
[],
"READY: I am ready to execute the task." in response_str,
)
@@ -277,13 +435,7 @@ class AgentReasoning:
)
try:
system_prompt = self.agent.i18n.retrieve(
"reasoning", prompt_type
).format(
role=self.agent.role,
goal=self.agent.goal,
backstory=self.__get_agent_backstory(),
)
system_prompt = self._get_system_prompt()
fallback_response = self.llm.call(
[
@@ -297,78 +449,165 @@ class AgentReasoning:
fallback_str = str(fallback_response)
return (
fallback_str,
[],
"READY: I am ready to execute the task." in fallback_str,
)
except Exception as inner_e:
self.logger.error(f"Error during fallback text parsing: {inner_e!s}")
return (
"Failed to generate a plan due to an error.",
[],
True,
) # Default to ready to avoid getting stuck
def __get_agent_backstory(self) -> str:
"""
Safely gets the agent's backstory, providing a default if not available.
def _call_llm_with_prompt(
self,
prompt: str,
plan_type: Literal["create_plan", "refine_plan"],
) -> str:
"""Calls the LLM with the planning prompt.
Args:
prompt: The prompt to send to the LLM.
plan_type: The type of plan being created.
Returns:
str: The agent's backstory or a default value.
The LLM response.
"""
system_prompt = self._get_system_prompt()
response = self.llm.call(
[
{"role": "system", "content": system_prompt},
{"role": "user", "content": prompt},
],
from_task=self.task,
from_agent=self.agent,
)
return str(response)
def _get_system_prompt(self) -> str:
"""Get the system prompt for planning.
Returns:
The system prompt, either custom or from i18n.
"""
if self.config.system_prompt is not None:
return self.config.system_prompt
# Try new "planning" section first, fall back to "reasoning" for compatibility
try:
return self.agent.i18n.retrieve("planning", "system_prompt")
except (KeyError, AttributeError):
# Fallback to reasoning section for backward compatibility
return self.agent.i18n.retrieve("reasoning", "initial_plan").format(
role=self.agent.role,
goal=self.agent.goal,
backstory=self._get_agent_backstory(),
)
def _get_agent_backstory(self) -> str:
"""Safely gets the agent's backstory, providing a default if not available.
Returns:
The agent's backstory or a default value.
"""
return getattr(self.agent, "backstory", "No backstory provided")
def __create_reasoning_prompt(self) -> str:
"""
Creates a prompt for the agent to reason about the task.
def _create_planning_prompt(self) -> str:
"""Creates a prompt for the agent to plan the task.
Returns:
str: The reasoning prompt.
The planning prompt.
"""
available_tools = self.__format_available_tools()
available_tools = self._format_available_tools()
return self.agent.i18n.retrieve("reasoning", "create_plan_prompt").format(
role=self.agent.role,
goal=self.agent.goal,
backstory=self.__get_agent_backstory(),
description=self.task.description,
expected_output=self.task.expected_output,
tools=available_tools,
)
# Use custom prompt if provided
if self.config.plan_prompt is not None:
return self.config.plan_prompt.format(
role=self.agent.role,
goal=self.agent.goal,
backstory=self._get_agent_backstory(),
description=self.description,
expected_output=self.expected_output,
tools=available_tools,
max_steps=self.config.max_steps,
)
def __format_available_tools(self) -> str:
"""
Formats the available tools for inclusion in the prompt.
# Try new "planning" section first
try:
return self.agent.i18n.retrieve("planning", "create_plan_prompt").format(
description=self.description,
expected_output=self.expected_output,
tools=available_tools,
max_steps=self.config.max_steps,
)
except (KeyError, AttributeError):
# Fallback to reasoning section for backward compatibility
return self.agent.i18n.retrieve("reasoning", "create_plan_prompt").format(
role=self.agent.role,
goal=self.agent.goal,
backstory=self._get_agent_backstory(),
description=self.description,
expected_output=self.expected_output,
tools=available_tools,
)
def _format_available_tools(self) -> str:
"""Formats the available tools for inclusion in the prompt.
Returns:
str: Comma-separated list of tool names.
Comma-separated list of tool names.
"""
try:
return ", ".join(
[sanitize_tool_name(tool.name) for tool in (self.task.tools or [])]
)
# Try task tools first, then agent tools
tools = []
if self.task:
tools = self.task.tools or []
if not tools:
tools = getattr(self.agent, "tools", []) or []
if not tools:
return "No tools available"
return ", ".join([sanitize_tool_name(tool.name) for tool in tools])
except (AttributeError, TypeError):
return "No tools available"
def __create_refine_prompt(self, current_plan: str) -> str:
"""
Creates a prompt for the agent to refine its reasoning plan.
def _create_refine_prompt(self, current_plan: str) -> str:
"""Creates a prompt for the agent to refine its plan.
Args:
current_plan: The current reasoning plan.
current_plan: The current plan.
Returns:
str: The refine prompt.
The refine prompt.
"""
return self.agent.i18n.retrieve("reasoning", "refine_plan_prompt").format(
role=self.agent.role,
goal=self.agent.goal,
backstory=self.__get_agent_backstory(),
current_plan=current_plan,
)
# Use custom prompt if provided
if self.config.refine_prompt is not None:
return self.config.refine_prompt.format(
role=self.agent.role,
goal=self.agent.goal,
backstory=self._get_agent_backstory(),
current_plan=current_plan,
max_steps=self.config.max_steps,
)
# Try new "planning" section first
try:
return self.agent.i18n.retrieve("planning", "refine_plan_prompt").format(
current_plan=current_plan,
)
except (KeyError, AttributeError):
# Fallback to reasoning section for backward compatibility
return self.agent.i18n.retrieve("reasoning", "refine_plan_prompt").format(
role=self.agent.role,
goal=self.agent.goal,
backstory=self._get_agent_backstory(),
current_plan=current_plan,
)
@staticmethod
def __parse_reasoning_response(response: str) -> tuple[str, bool]:
"""
Parses the reasoning response to extract the plan and whether
the agent is ready to execute the task.
def _parse_planning_response(response: str) -> tuple[str, bool]:
"""Parses the planning response to extract the plan and readiness.
Args:
response: The LLM response.
@@ -380,25 +619,13 @@ class AgentReasoning:
return "No plan was generated.", False
plan = response
ready = False
if "READY: I am ready to execute the task." in response:
ready = True
ready = "READY: I am ready to execute the task." in response
return plan, ready
def _handle_agent_reasoning(self) -> AgentReasoningOutput:
"""
Deprecated method for backward compatibility.
Use handle_agent_reasoning() instead.
Returns:
AgentReasoningOutput: The output of the agent reasoning process.
"""
self.logger.warning(
"The _handle_agent_reasoning method is deprecated. Use handle_agent_reasoning instead."
)
return self.handle_agent_reasoning()
# Alias for backward compatibility
AgentPlanning = AgentReasoning
def _call_llm_with_reasoning_prompt(
@@ -409,7 +636,9 @@ def _call_llm_with_reasoning_prompt(
backstory: str,
plan_type: Literal["initial_plan", "refine_plan"],
) -> str:
"""Calls the LLM with the reasoning prompt.
"""Deprecated: Calls the LLM with the reasoning prompt.
This function is kept for backward compatibility.
Args:
llm: The language model to use.
@@ -417,7 +646,7 @@ def _call_llm_with_reasoning_prompt(
task: The task for which the agent is reasoning.
reasoning_agent: The agent performing the reasoning.
backstory: The agent's backstory.
plan_type: The type of plan being created ("initial_plan" or "refine_plan").
plan_type: The type of plan being created.
Returns:
The LLM response.

View File

@@ -1456,7 +1456,7 @@ def test_agent_execute_task_with_tool():
)
result = agent.execute_task(task)
assert "you should always think about what to do" in result
assert "test query" in result
@pytest.mark.vcr()
@@ -1475,9 +1475,9 @@ def test_agent_execute_task_with_custom_llm():
)
result = agent.execute_task(task)
assert "In circuits they thrive" in result
assert "Artificial minds awake" in result
assert "Future's coded drive" in result
assert "Artificial minds" in result
assert "Code and circuits" in result
assert "Future undefined" in result
@pytest.mark.vcr()

View File

@@ -25,6 +25,18 @@ class TestAgentReActState:
assert state.current_answer is None
assert state.is_finished is False
assert state.ask_for_human_input is False
# Planning state fields
assert state.plan is None
assert state.plan_ready is False
def test_state_with_plan(self):
"""Test AgentReActState initialization with planning fields."""
state = AgentReActState(
plan="Step 1: Do X\nStep 2: Do Y",
plan_ready=True,
)
assert state.plan == "Step 1: Do X\nStep 2: Do Y"
assert state.plan_ready is True
def test_state_with_values(self):
"""Test AgentReActState initialization with values."""
@@ -477,3 +489,249 @@ class TestFlowInvoke:
assert result == {"output": "Done"}
assert len(executor.state.messages) >= 2
class TestAgentExecutorPlanning:
"""Test planning functionality in AgentExecutor with real agent kickoff."""
@pytest.mark.vcr()
def test_agent_kickoff_with_planning_stores_plan_in_state(self):
"""Test that Agent.kickoff() with planning enabled stores plan in executor state."""
from crewai import Agent, PlanningConfig
from crewai.llm import LLM
llm = LLM("gpt-4o-mini")
agent = Agent(
role="Math Assistant",
goal="Help solve simple math problems",
backstory="A helpful assistant that solves math problems step by step",
llm=llm,
planning_config=PlanningConfig(max_attempts=1),
verbose=False,
)
# Execute kickoff with a simple task
result = agent.kickoff("What is 2 + 2?")
# Verify result
assert result is not None
assert "4" in str(result)
@pytest.mark.vcr()
def test_agent_kickoff_without_planning_skips_plan_generation(self):
"""Test that Agent.kickoff() without planning skips planning phase."""
from crewai import Agent
from crewai.llm import LLM
llm = LLM("gpt-4o-mini")
agent = Agent(
role="Math Assistant",
goal="Help solve simple math problems",
backstory="A helpful assistant",
llm=llm,
# No planning_config = no planning
verbose=False,
)
# Execute kickoff
result = agent.kickoff("What is 3 + 3?")
# Verify we get a result
assert result is not None
assert "6" in str(result)
@pytest.mark.vcr()
def test_planning_disabled_skips_planning(self):
"""Test that planning=False skips planning."""
from crewai import Agent
from crewai.llm import LLM
llm = LLM("gpt-4o-mini")
agent = Agent(
role="Math Assistant",
goal="Help solve simple math problems",
backstory="A helpful assistant",
llm=llm,
planning=False, # Explicitly disable planning
verbose=False,
)
result = agent.kickoff("What is 5 + 5?")
# Should still complete successfully
assert result is not None
assert "10" in str(result)
def test_backward_compat_reasoning_true_enables_planning(self):
"""Test that reasoning=True (deprecated) still enables planning."""
import warnings
from crewai import Agent
from crewai.llm import LLM
llm = LLM("gpt-4o-mini")
with warnings.catch_warnings(record=True):
warnings.simplefilter("always")
agent = Agent(
role="Test Agent",
goal="Complete tasks",
backstory="A helpful agent",
llm=llm,
reasoning=True, # Deprecated but should still work
verbose=False,
)
# Should have planning_config created from reasoning=True
assert agent.planning_config is not None
assert agent.planning_enabled is True
@pytest.mark.vcr()
def test_executor_state_contains_plan_after_planning(self):
"""Test that executor state contains plan after planning phase."""
from crewai import Agent, PlanningConfig
from crewai.llm import LLM
from crewai.experimental.agent_executor import AgentExecutor
llm = LLM("gpt-4o-mini")
agent = Agent(
role="Math Assistant",
goal="Help solve simple math problems",
backstory="A helpful assistant that solves math problems step by step",
llm=llm,
planning_config=PlanningConfig(max_attempts=1),
verbose=False,
)
# Track executor for inspection
executor_ref = [None]
original_invoke = AgentExecutor.invoke
def capture_executor(self, inputs):
executor_ref[0] = self
return original_invoke(self, inputs)
with patch.object(AgentExecutor, "invoke", capture_executor):
result = agent.kickoff("What is 7 + 7?")
# Verify result
assert result is not None
# If we captured an executor, check its state
if executor_ref[0] is not None:
# After planning, state should have plan info
assert hasattr(executor_ref[0].state, "plan")
assert hasattr(executor_ref[0].state, "plan_ready")
@pytest.mark.vcr()
def test_planning_creates_minimal_steps_for_multi_step_task(self):
"""Test that planning creates only necessary steps for a multi-step task.
This task requires exactly 3 dependent steps:
1. Identify the first 3 prime numbers (2, 3, 5)
2. Sum them (2 + 3 + 5 = 10)
3. Multiply by 2 (10 * 2 = 20)
The plan should reflect these dependencies without unnecessary padding.
"""
from crewai import Agent, PlanningConfig
from crewai.llm import LLM
from crewai.experimental.agent_executor import AgentExecutor
llm = LLM("gpt-4o-mini")
agent = Agent(
role="Math Tutor",
goal="Solve multi-step math problems accurately",
backstory="An expert math tutor who breaks down problems step by step",
llm=llm,
planning_config=PlanningConfig(max_attempts=1, max_steps=10),
verbose=False,
)
# Track the plan that gets generated
captured_plan = [None]
original_invoke = AgentExecutor.invoke
def capture_plan(self, inputs):
result = original_invoke(self, inputs)
captured_plan[0] = self.state.plan
return result
with patch.object(AgentExecutor, "invoke", capture_plan):
result = agent.kickoff(
"Calculate the sum of the first 3 prime numbers, then multiply that result by 2. "
"Show your work for each step."
)
# Verify result contains the correct answer (20)
assert result is not None
assert "20" in str(result)
# Verify a plan was generated
assert captured_plan[0] is not None
# The plan should be concise - this task needs ~3 steps, not 10+
plan_text = captured_plan[0]
# Count steps by looking for numbered items or bullet points
import re
step_pattern = r"^\s*\d+[\.\):]|\n\s*-\s+"
steps = re.findall(step_pattern, plan_text, re.MULTILINE)
# Plan should have roughly 3-5 steps, not fill up to max_steps
assert len(steps) <= 6, f"Plan has too many steps ({len(steps)}): {plan_text}"
@pytest.mark.vcr()
def test_planning_handles_sequential_dependency_task(self):
"""Test planning for a task where step N depends on step N-1.
Task: Convert 100 Celsius to Fahrenheit, then round to nearest 10.
Step 1: Apply formula (C * 9/5 + 32) = 212
Step 2: Round 212 to nearest 10 = 210
This tests that the planner recognizes sequential dependencies.
"""
from crewai import Agent, PlanningConfig
from crewai.llm import LLM
from crewai.experimental.agent_executor import AgentExecutor
llm = LLM("gpt-4o-mini")
agent = Agent(
role="Unit Converter",
goal="Accurately convert between units and apply transformations",
backstory="A precise unit conversion specialist",
llm=llm,
planning_config=PlanningConfig(max_attempts=1, max_steps=10),
verbose=False,
)
captured_plan = [None]
original_invoke = AgentExecutor.invoke
def capture_plan(self, inputs):
result = original_invoke(self, inputs)
captured_plan[0] = self.state.plan
return result
with patch.object(AgentExecutor, "invoke", capture_plan):
result = agent.kickoff(
"Convert 100 degrees Celsius to Fahrenheit, then round the result to the nearest 10."
)
assert result is not None
# 100C = 212F, rounded to nearest 10 = 210
assert "210" in str(result) or "212" in str(result)
# Plan should exist and be minimal (2-3 steps for this task)
assert captured_plan[0] is not None
plan_text = captured_plan[0]
import re
step_pattern = r"^\s*\d+[\.\):]|\n\s*-\s+"
steps = re.findall(step_pattern, plan_text, re.MULTILINE)
assert len(steps) <= 5, f"Plan should be minimal ({len(steps)} steps): {plan_text}"

View File

@@ -1,240 +1,345 @@
"""Tests for reasoning in agents."""
"""Tests for planning/reasoning in agents."""
import json
import warnings
import pytest
from crewai import Agent, Task
from crewai import Agent, PlanningConfig, Task
from crewai.llm import LLM
@pytest.fixture
def mock_llm_responses():
"""Fixture for mock LLM responses."""
return {
"ready": "I'll solve this simple math problem.\n\nREADY: I am ready to execute the task.\n\n",
"not_ready": "I need to think about derivatives.\n\nNOT READY: I need to refine my plan because I'm not sure about the derivative rules.",
"ready_after_refine": "I'll use the power rule for derivatives where d/dx(x^n) = n*x^(n-1).\n\nREADY: I am ready to execute the task.",
"execution": "4",
}
# =============================================================================
# Tests for PlanningConfig configuration (no LLM calls needed)
# =============================================================================
def test_agent_with_reasoning(mock_llm_responses):
"""Test agent with reasoning."""
llm = LLM("gpt-3.5-turbo")
def test_planning_config_default_values():
"""Test PlanningConfig default values."""
config = PlanningConfig()
assert config.max_attempts is None
assert config.max_steps == 20
assert config.system_prompt is None
assert config.plan_prompt is None
assert config.refine_prompt is None
assert config.llm is None
def test_planning_config_custom_values():
"""Test PlanningConfig with custom values."""
config = PlanningConfig(
max_attempts=5,
max_steps=15,
system_prompt="Custom system",
plan_prompt="Custom plan: {description}",
refine_prompt="Custom refine: {current_plan}",
llm="gpt-4",
)
assert config.max_attempts == 5
assert config.max_steps == 15
assert config.system_prompt == "Custom system"
assert config.plan_prompt == "Custom plan: {description}"
assert config.refine_prompt == "Custom refine: {current_plan}"
assert config.llm == "gpt-4"
def test_agent_with_planning_config_custom_prompts():
"""Test agent with PlanningConfig using custom prompts."""
llm = LLM("gpt-4o-mini")
custom_system_prompt = "You are a specialized planner."
custom_plan_prompt = "Plan this task: {description}"
agent = Agent(
role="Test Agent",
goal="To test custom prompts",
backstory="I am a test agent.",
llm=llm,
planning_config=PlanningConfig(
system_prompt=custom_system_prompt,
plan_prompt=custom_plan_prompt,
max_steps=10,
),
verbose=False,
)
# Just test that the agent is created properly
assert agent.planning_config is not None
assert agent.planning_config.system_prompt == custom_system_prompt
assert agent.planning_config.plan_prompt == custom_plan_prompt
assert agent.planning_config.max_steps == 10
def test_agent_with_planning_config_disabled():
"""Test agent with PlanningConfig disabled."""
llm = LLM("gpt-4o-mini")
agent = Agent(
role="Test Agent",
goal="To test disabled planning",
backstory="I am a test agent.",
llm=llm,
planning=False,
verbose=False,
)
# Planning should be disabled
assert agent.planning_enabled is False
def test_planning_enabled_property():
"""Test the planning_enabled property on Agent."""
llm = LLM("gpt-4o-mini")
# With planning_config enabled
agent_with_planning = Agent(
role="Test Agent",
goal="Test",
backstory="Test",
llm=llm,
planning=True,
)
assert agent_with_planning.planning_enabled is True
# With planning_config disabled
agent_disabled = Agent(
role="Test Agent",
goal="Test",
backstory="Test",
llm=llm,
planning=False,
)
assert agent_disabled.planning_enabled is False
# Without planning_config
agent_no_planning = Agent(
role="Test Agent",
goal="Test",
backstory="Test",
llm=llm,
)
assert agent_no_planning.planning_enabled is False
# =============================================================================
# Tests for backward compatibility with reasoning=True (no LLM calls)
# =============================================================================
def test_agent_with_reasoning_backward_compat():
"""Test agent with reasoning=True (backward compatibility)."""
llm = LLM("gpt-4o-mini")
# This should emit a deprecation warning
with warnings.catch_warnings(record=True):
warnings.simplefilter("always")
agent = Agent(
role="Test Agent",
goal="To test the reasoning feature",
backstory="I am a test agent created to verify the reasoning feature works correctly.",
llm=llm,
reasoning=True,
verbose=False,
)
# Should have created a PlanningConfig internally
assert agent.planning_config is not None
assert agent.planning_enabled is True
def test_agent_with_reasoning_and_max_attempts_backward_compat():
"""Test agent with reasoning=True and max_reasoning_attempts (backward compatibility)."""
llm = LLM("gpt-4o-mini")
agent = Agent(
role="Test Agent",
goal="To test the reasoning feature",
backstory="I am a test agent created to verify the reasoning feature works correctly.",
backstory="I am a test agent.",
llm=llm,
reasoning=True,
verbose=True,
max_reasoning_attempts=5,
verbose=False,
)
task = Task(
description="Simple math task: What's 2+2?",
expected_output="The answer should be a number.",
agent=agent,
)
agent.llm.call = lambda messages, *args, **kwargs: (
mock_llm_responses["ready"]
if any("create a detailed plan" in msg.get("content", "") for msg in messages)
else mock_llm_responses["execution"]
)
result = agent.execute_task(task)
assert result == mock_llm_responses["execution"]
assert "Reasoning Plan:" in task.description
# Should have created a PlanningConfig with max_attempts
assert agent.planning_config is not None
assert agent.planning_config.max_attempts == 5
def test_agent_with_reasoning_not_ready_initially(mock_llm_responses):
"""Test agent with reasoning that requires refinement."""
llm = LLM("gpt-3.5-turbo")
# =============================================================================
# Tests for Agent.kickoff() with planning (uses AgentExecutor)
# =============================================================================
@pytest.mark.vcr()
def test_agent_kickoff_with_planning():
"""Test Agent.kickoff() with planning enabled generates a plan."""
llm = LLM("gpt-4o-mini")
agent = Agent(
role="Test Agent",
goal="To test the reasoning feature",
backstory="I am a test agent created to verify the reasoning feature works correctly.",
role="Math Assistant",
goal="Help solve math problems step by step",
backstory="A helpful math tutor",
llm=llm,
reasoning=True,
max_reasoning_attempts=2,
verbose=True,
planning_config=PlanningConfig(max_attempts=1),
verbose=False,
)
task = Task(
description="Complex math task: What's the derivative of x²?",
expected_output="The answer should be a mathematical expression.",
agent=agent,
)
result = agent.kickoff("What is 15 + 27?")
call_count = [0]
def mock_llm_call(messages, *args, **kwargs):
if any(
"create a detailed plan" in msg.get("content", "") for msg in messages
) or any("refine your plan" in msg.get("content", "") for msg in messages):
call_count[0] += 1
if call_count[0] == 1:
return mock_llm_responses["not_ready"]
return mock_llm_responses["ready_after_refine"]
return "2x"
agent.llm.call = mock_llm_call
result = agent.execute_task(task)
assert result == "2x"
assert call_count[0] == 2 # Should have made 2 reasoning calls
assert "Reasoning Plan:" in task.description
assert result is not None
assert "42" in str(result)
def test_agent_with_reasoning_max_attempts_reached():
"""Test agent with reasoning that reaches max attempts without being ready."""
llm = LLM("gpt-3.5-turbo")
@pytest.mark.vcr()
def test_agent_kickoff_without_planning():
"""Test Agent.kickoff() without planning skips plan generation."""
llm = LLM("gpt-4o-mini")
agent = Agent(
role="Test Agent",
goal="To test the reasoning feature",
backstory="I am a test agent created to verify the reasoning feature works correctly.",
role="Math Assistant",
goal="Help solve math problems",
backstory="A helpful assistant",
llm=llm,
reasoning=True,
max_reasoning_attempts=2,
verbose=True,
# No planning_config = no planning
verbose=False,
)
task = Task(
description="Complex math task: Solve the Riemann hypothesis.",
expected_output="A proof or disproof of the hypothesis.",
agent=agent,
)
result = agent.kickoff("What is 8 * 7?")
call_count = [0]
def mock_llm_call(messages, *args, **kwargs):
if any(
"create a detailed plan" in msg.get("content", "") for msg in messages
) or any("refine your plan" in msg.get("content", "") for msg in messages):
call_count[0] += 1
return f"Attempt {call_count[0]}: I need more time to think.\n\nNOT READY: I need to refine my plan further."
return "This is an unsolved problem in mathematics."
agent.llm.call = mock_llm_call
result = agent.execute_task(task)
assert result == "This is an unsolved problem in mathematics."
assert (
call_count[0] == 2
) # Should have made exactly 2 reasoning calls (max_attempts)
assert "Reasoning Plan:" in task.description
assert result is not None
assert "56" in str(result)
def test_agent_reasoning_error_handling():
"""Test error handling during the reasoning process."""
llm = LLM("gpt-3.5-turbo")
@pytest.mark.vcr()
def test_agent_kickoff_with_planning_disabled():
"""Test Agent.kickoff() with planning explicitly disabled via planning=False."""
llm = LLM("gpt-4o-mini")
agent = Agent(
role="Test Agent",
goal="To test the reasoning feature",
backstory="I am a test agent created to verify the reasoning feature works correctly.",
role="Math Assistant",
goal="Help solve math problems",
backstory="A helpful assistant",
llm=llm,
reasoning=True,
planning=False, # Explicitly disable planning
verbose=False,
)
task = Task(
description="Task that will cause an error",
expected_output="Output that will never be generated",
agent=agent,
)
result = agent.kickoff("What is 100 / 4?")
call_count = [0]
def mock_llm_call_error(*args, **kwargs):
call_count[0] += 1
if call_count[0] <= 2: # First calls are for reasoning
raise Exception("LLM error during reasoning")
return "Fallback execution result" # Return a value for task execution
agent.llm.call = mock_llm_call_error
result = agent.execute_task(task)
assert result == "Fallback execution result"
assert call_count[0] > 2 # Ensure we called the mock multiple times
assert result is not None
assert "25" in str(result)
@pytest.mark.skip(reason="Test requires updates for native tool calling changes")
def test_agent_with_function_calling():
"""Test agent with reasoning using function calling."""
llm = LLM("gpt-3.5-turbo")
@pytest.mark.vcr()
def test_agent_kickoff_multi_step_task_with_planning():
"""Test Agent.kickoff() with a multi-step task that benefits from planning."""
llm = LLM("gpt-4o-mini")
agent = Agent(
role="Test Agent",
goal="To test the reasoning feature",
backstory="I am a test agent created to verify the reasoning feature works correctly.",
role="Math Tutor",
goal="Solve multi-step math problems",
backstory="An expert tutor who explains step by step",
llm=llm,
reasoning=True,
verbose=True,
planning_config=PlanningConfig(max_attempts=1, max_steps=5),
verbose=False,
)
task = Task(
description="Simple math task: What's 2+2?",
expected_output="The answer should be a number.",
agent=agent,
# Task requires: find primes, sum them, then double
result = agent.kickoff(
"Find the first 3 prime numbers, add them together, then multiply by 2."
)
agent.llm.supports_function_calling = lambda: True
def mock_function_call(messages, *args, **kwargs):
if "tools" in kwargs:
return json.dumps(
{"plan": "I'll solve this simple math problem: 2+2=4.", "ready": True}
)
return "4"
agent.llm.call = mock_function_call
result = agent.execute_task(task)
assert result == "4"
assert "Reasoning Plan:" in task.description
assert "I'll solve this simple math problem: 2+2=4." in task.description
assert result is not None
# First 3 primes: 2, 3, 5 -> sum = 10 -> doubled = 20
assert "20" in str(result)
@pytest.mark.skip(reason="Test requires updates for native tool calling changes")
def test_agent_with_function_calling_fallback():
"""Test agent with reasoning using function calling that falls back to text parsing."""
llm = LLM("gpt-3.5-turbo")
# =============================================================================
# Tests for Agent.execute_task() with planning (uses CrewAgentExecutor)
# These test the legacy path via handle_reasoning()
# =============================================================================
@pytest.mark.vcr()
def test_agent_execute_task_with_planning():
"""Test Agent.execute_task() with planning via CrewAgentExecutor."""
llm = LLM("gpt-4o-mini")
agent = Agent(
role="Test Agent",
goal="To test the reasoning feature",
backstory="I am a test agent created to verify the reasoning feature works correctly.",
role="Math Assistant",
goal="Help solve math problems",
backstory="A helpful math tutor",
llm=llm,
reasoning=True,
verbose=True,
planning_config=PlanningConfig(max_attempts=1),
verbose=False,
)
task = Task(
description="Simple math task: What's 2+2?",
expected_output="The answer should be a number.",
description="What is 9 + 11?",
expected_output="A number",
agent=agent,
)
agent.llm.supports_function_calling = lambda: True
result = agent.execute_task(task)
def mock_function_call(messages, *args, **kwargs):
if "tools" in kwargs:
return "Invalid JSON that will trigger fallback. READY: I am ready to execute the task."
return "4"
assert result is not None
assert "20" in str(result)
# Planning should be appended to task description
assert "Planning:" in task.description
agent.llm.call = mock_function_call
@pytest.mark.vcr()
def test_agent_execute_task_without_planning():
"""Test Agent.execute_task() without planning."""
llm = LLM("gpt-4o-mini")
agent = Agent(
role="Math Assistant",
goal="Help solve math problems",
backstory="A helpful assistant",
llm=llm,
verbose=False,
)
task = Task(
description="What is 12 * 3?",
expected_output="A number",
agent=agent,
)
result = agent.execute_task(task)
assert result == "4"
assert "Reasoning Plan:" in task.description
assert "Invalid JSON that will trigger fallback" in task.description
assert result is not None
assert "36" in str(result)
# No planning should be added
assert "Planning:" not in task.description
@pytest.mark.vcr()
def test_agent_execute_task_with_planning_refine():
"""Test Agent.execute_task() with planning that requires refinement."""
llm = LLM("gpt-4o-mini")
agent = Agent(
role="Math Tutor",
goal="Solve complex math problems step by step",
backstory="An expert tutor",
llm=llm,
planning_config=PlanningConfig(max_attempts=2),
verbose=False,
)
task = Task(
description="Calculate the area of a circle with radius 5 (use pi = 3.14)",
expected_output="The area as a number",
agent=agent,
)
result = agent.execute_task(task)
assert result is not None
# Area = pi * r^2 = 3.14 * 25 = 78.5
assert "78" in str(result) or "79" in str(result)
assert "Planning:" in task.description

View File

@@ -0,0 +1,234 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are a strategic planning assistant.
Create minimal, effective execution plans. Prefer fewer steps over more."},{"role":"user","content":"Create
a focused execution plan for the following task:\n\n## Task\nWhat is 2 + 2?\n\n##
Expected Output\nComplete the task successfully\n\n## Available Tools\nNo tools
available\n\n## Instructions\nCreate ONLY the essential steps needed to complete
this task. Use the MINIMUM number of steps required - do NOT pad your plan with
unnecessary steps. Most tasks need only 2-5 steps.\n\nFor each step:\n- State
the specific action to take\n- Specify which tool to use (if any)\n\nDo NOT
include:\n- Setup or preparation steps that are obvious\n- Verification steps
unless critical\n- Documentation or cleanup steps unless explicitly required\n-
Generic steps like \"review results\" or \"finalize output\"\n\nAfter your plan,
state:\n- \"READY: I am ready to execute the task.\" if the plan is complete\n-
\"NOT READY: I need to refine my plan because [reason].\" if you need more thinking"}],"model":"gpt-4o-mini","tool_choice":"auto","tools":[{"type":"function","function":{"name":"create_reasoning_plan","description":"Create
or refine a reasoning plan for a task","strict":true,"parameters":{"type":"object","properties":{"plan":{"type":"string","description":"The
detailed reasoning plan for the task."},"ready":{"type":"boolean","description":"Whether
the agent is ready to execute the task."}},"required":["plan","ready"],"additionalProperties":false}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '1541'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D4yTTAh68P65LybtqkwNI3p2HXcRv\",\n \"object\":
\"chat.completion\",\n \"created\": 1770078147,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"## Execution Plan\\n\\n1. **Action:**
Perform the addition operation. \\n **Tool:** None (manually calculate).\\n\\n2.
**Action:** State the result. \\n **Tool:** None (manually output).\\n\\nREADY:
I am ready to execute the task.\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 281,\n \"completion_tokens\":
56,\n \"total_tokens\": 337,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_1590f93f9d\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 00:22:28 GMT
Server:
- cloudflare
Set-Cookie:
- SET-COOKIE-XXX
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '1165'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are Math Assistant. A helpful
assistant that solves math problems step by step\nYour personal goal is: Help
solve simple math problems"},{"role":"user","content":"\nCurrent Task: What
is 2 + 2?\n\nProvide your complete response:"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '299'
content-type:
- application/json
cookie:
- COOKIE-XXX
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D4yTVB9mdtq1YZrUVf1aSb6dVVQ8G\",\n \"object\":
\"chat.completion\",\n \"created\": 1770078149,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"To solve the problem of 2 + 2, we simply
perform the addition:\\n\\n1. Start with the first number: 2\\n2. Add the
second number: + 2\\n3. Combine the two: 2 + 2 = 4\\n\\nTherefore, the answer
is 4.\",\n \"refusal\": null,\n \"annotations\": []\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 54,\n \"completion_tokens\": 62,\n
\ \"total_tokens\": 116,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_1590f93f9d\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 00:22:30 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '1300'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1

View File

@@ -0,0 +1,108 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are Math Assistant. A helpful
assistant\nYour personal goal is: Help solve simple math problems"},{"role":"user","content":"\nCurrent
Task: What is 3 + 3?\n\nProvide your complete response:"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '260'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D4yTTFxQ75llVmJv0ee902FIjXE8p\",\n \"object\":
\"chat.completion\",\n \"created\": 1770078147,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"3 + 3 equals 6.\",\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
47,\n \"completion_tokens\": 8,\n \"total_tokens\": 55,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_1590f93f9d\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 00:22:27 GMT
Server:
- cloudflare
Set-Cookie:
- SET-COOKIE-XXX
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '401'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1

View File

@@ -0,0 +1,230 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are a strategic planning assistant.
Create minimal, effective execution plans. Prefer fewer steps over more."},{"role":"user","content":"Create
a focused execution plan for the following task:\n\n## Task\nWhat is 7 + 7?\n\n##
Expected Output\nComplete the task successfully\n\n## Available Tools\nNo tools
available\n\n## Instructions\nCreate ONLY the essential steps needed to complete
this task. Use the MINIMUM number of steps required - do NOT pad your plan with
unnecessary steps. Most tasks need only 2-5 steps.\n\nFor each step:\n- State
the specific action to take\n- Specify which tool to use (if any)\n\nDo NOT
include:\n- Setup or preparation steps that are obvious\n- Verification steps
unless critical\n- Documentation or cleanup steps unless explicitly required\n-
Generic steps like \"review results\" or \"finalize output\"\n\nAfter your plan,
state:\n- \"READY: I am ready to execute the task.\" if the plan is complete\n-
\"NOT READY: I need to refine my plan because [reason].\" if you need more thinking"}],"model":"gpt-4o-mini","tool_choice":"auto","tools":[{"type":"function","function":{"name":"create_reasoning_plan","description":"Create
or refine a reasoning plan for a task","strict":true,"parameters":{"type":"object","properties":{"plan":{"type":"string","description":"The
detailed reasoning plan for the task."},"ready":{"type":"boolean","description":"Whether
the agent is ready to execute the task."}},"required":["plan","ready"],"additionalProperties":false}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '1541'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D4yTdqlxwWowSdLncBERFrCgxTvVj\",\n \"object\":
\"chat.completion\",\n \"created\": 1770078157,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"## Execution Plan\\n\\n1. Calculate
the sum of 7 and 7.\\n \\nREADY: I am ready to execute the task.\",\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
281,\n \"completion_tokens\": 28,\n \"total_tokens\": 309,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_1590f93f9d\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 00:22:38 GMT
Server:
- cloudflare
Set-Cookie:
- SET-COOKIE-XXX
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '709'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are Math Assistant. A helpful
assistant that solves math problems step by step\nYour personal goal is: Help
solve simple math problems"},{"role":"user","content":"\nCurrent Task: What
is 7 + 7?\n\nProvide your complete response:"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '299'
content-type:
- application/json
cookie:
- COOKIE-XXX
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D4yTeB6Miecallw9SjSfLAXPjX2XD\",\n \"object\":
\"chat.completion\",\n \"created\": 1770078158,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"To find the sum of 7 and 7, you simply
add the two numbers together:\\n\\n7 + 7 = 14\\n\\nSo, the answer is 14.\",\n
\ \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
54,\n \"completion_tokens\": 35,\n \"total_tokens\": 89,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_1590f93f9d\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 00:22:38 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '733'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1

View File

@@ -0,0 +1,108 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are Math Assistant. A helpful
assistant\nYour personal goal is: Help solve simple math problems"},{"role":"user","content":"\nCurrent
Task: What is 5 + 5?\n\nProvide your complete response:"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '260'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D4yTf8T2iADffpPCJBZhntLlaoaSy\",\n \"object\":
\"chat.completion\",\n \"created\": 1770078159,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"5 + 5 equals 10.\",\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
47,\n \"completion_tokens\": 8,\n \"total_tokens\": 55,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_1590f93f9d\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 00:22:40 GMT
Server:
- cloudflare
Set-Cookie:
- SET-COOKIE-XXX
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '515'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1

View File

@@ -0,0 +1,247 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are a strategic planning assistant.
Create minimal, effective execution plans. Prefer fewer steps over more."},{"role":"user","content":"Create
a focused execution plan for the following task:\n\n## Task\nCalculate the sum
of the first 3 prime numbers, then multiply that result by 2. Show your work
for each step.\n\n## Expected Output\nComplete the task successfully\n\n## Available
Tools\nNo tools available\n\n## Instructions\nCreate ONLY the essential steps
needed to complete this task. Use the MINIMUM number of steps required - do
NOT pad your plan with unnecessary steps. Most tasks need only 2-5 steps.\n\nFor
each step:\n- State the specific action to take\n- Specify which tool to use
(if any)\n\nDo NOT include:\n- Setup or preparation steps that are obvious\n-
Verification steps unless critical\n- Documentation or cleanup steps unless
explicitly required\n- Generic steps like \"review results\" or \"finalize output\"\n\nAfter
your plan, state:\n- \"READY: I am ready to execute the task.\" if the plan
is complete\n- \"NOT READY: I need to refine my plan because [reason].\" if
you need more thinking"}],"model":"gpt-4o-mini","tool_choice":"auto","tools":[{"type":"function","function":{"name":"create_reasoning_plan","description":"Create
or refine a reasoning plan for a task","strict":true,"parameters":{"type":"object","properties":{"plan":{"type":"string","description":"The
detailed reasoning plan for the task."},"ready":{"type":"boolean","description":"Whether
the agent is ready to execute the task."}},"required":["plan","ready"],"additionalProperties":false}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '1636'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D4yTWa7FxCHkHwHF25AYXXeJDBOuY\",\n \"object\":
\"chat.completion\",\n \"created\": 1770078150,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"## Execution Plan\\n\\n1. Identify
the first 3 prime numbers: 2, 3, and 5.\\n2. Calculate the sum: \\\\(2 + 3
+ 5 = 10\\\\).\\n3. Multiply the sum by 2: \\\\(10 \\\\times 2 = 20\\\\).\\n\\nREADY:
I am ready to execute the task.\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 299,\n \"completion_tokens\":
74,\n \"total_tokens\": 373,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_1590f93f9d\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 00:22:32 GMT
Server:
- cloudflare
Set-Cookie:
- SET-COOKIE-XXX
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '1716'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are Math Tutor. An expert
math tutor who breaks down problems step by step\nYour personal goal is: Solve
multi-step math problems accurately"},{"role":"user","content":"\nCurrent Task:
Calculate the sum of the first 3 prime numbers, then multiply that result by
2. Show your work for each step.\n\nProvide your complete response:"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '400'
content-type:
- application/json
cookie:
- COOKIE-XXX
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D4yTYJgCZf2oY7wiPMZmN4QEQhHb5\",\n \"object\":
\"chat.completion\",\n \"created\": 1770078152,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"To solve the problem, let's break it
down into two main steps: \\n\\n1. Calculate the sum of the first 3 prime
numbers.\\n2. Multiply the result of that sum by 2.\\n\\n### Step 1: Identify
the first 3 prime numbers\\nPrime numbers are natural numbers greater than
1 that have no positive divisors other than 1 and themselves. \\n\\nThe first
three prime numbers are:\\n- 2\\n- 3\\n- 5\\n\\n### Step 2: Calculate the
sum of the first 3 prime numbers\\nNow, we add these prime numbers together:\\n\\n\\\\[\\n2
+ 3 + 5\\n\\\\]\\n\\nCalculating this step-by-step:\\n- First, add 2 and 3:\\n
\ \\\\[\\n 2 + 3 = 5\\n \\\\]\\n \\n- Next, add this result to 5:\\n \\\\[\\n
\ 5 + 5 = 10\\n \\\\]\\n\\nSo, the sum of the first 3 prime numbers is \\\\(10\\\\).\\n\\n###
Step 3: Multiply the sum by 2\\nNext, we take the sum we calculated and multiply
it by 2:\\n\\n\\\\[\\n10 \\\\times 2\\n\\\\]\\n\\nCalculating this:\\n\\\\[\\n10
\\\\times 2 = 20\\n\\\\]\\n\\n### Final Answer\\nThus, the final result obtained
after performing all the steps is:\\n\\n\\\\[\\n\\\\boxed{20}\\n\\\\]\",\n
\ \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
74,\n \"completion_tokens\": 288,\n \"total_tokens\": 362,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_1590f93f9d\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 00:22:37 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '4751'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1

View File

@@ -0,0 +1,108 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are Math Assistant. A helpful
assistant\nYour personal goal is: Help solve simple math problems"},{"role":"user","content":"\nCurrent
Task: What is 5 + 5?\n\nProvide your complete response:"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '260'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D4yXGD5IrieoUDSK5hDmJyA2gJtDc\",\n \"object\":
\"chat.completion\",\n \"created\": 1770078382,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"5 + 5 equals 10.\",\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
47,\n \"completion_tokens\": 8,\n \"total_tokens\": 55,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_1590f93f9d\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 00:26:23 GMT
Server:
- cloudflare
Set-Cookie:
- SET-COOKIE-XXX
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '363'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1

View File

@@ -0,0 +1,242 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are a strategic planning assistant.
Create minimal, effective execution plans. Prefer fewer steps over more."},{"role":"user","content":"Create
a focused execution plan for the following task:\n\n## Task\nConvert 100 degrees
Celsius to Fahrenheit, then round the result to the nearest 10.\n\n## Expected
Output\nComplete the task successfully\n\n## Available Tools\nNo tools available\n\n##
Instructions\nCreate ONLY the essential steps needed to complete this task.
Use the MINIMUM number of steps required - do NOT pad your plan with unnecessary
steps. Most tasks need only 2-5 steps.\n\nFor each step:\n- State the specific
action to take\n- Specify which tool to use (if any)\n\nDo NOT include:\n- Setup
or preparation steps that are obvious\n- Verification steps unless critical\n-
Documentation or cleanup steps unless explicitly required\n- Generic steps like
\"review results\" or \"finalize output\"\n\nAfter your plan, state:\n- \"READY:
I am ready to execute the task.\" if the plan is complete\n- \"NOT READY: I
need to refine my plan because [reason].\" if you need more thinking"}],"model":"gpt-4o-mini","tool_choice":"auto","tools":[{"type":"function","function":{"name":"create_reasoning_plan","description":"Create
or refine a reasoning plan for a task","strict":true,"parameters":{"type":"object","properties":{"plan":{"type":"string","description":"The
detailed reasoning plan for the task."},"ready":{"type":"boolean","description":"Whether
the agent is ready to execute the task."}},"required":["plan","ready"],"additionalProperties":false}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '1610'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D4yTN8fHOefyzzhvdUOHjxdFDR2HW\",\n \"object\":
\"chat.completion\",\n \"created\": 1770078141,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"## Execution Plan\\n\\n1. Convert 100
degrees Celsius to Fahrenheit using the formula: \\\\( F = C \\\\times \\\\frac{9}{5}
+ 32 \\\\).\\n2. Round the Fahrenheit result to the nearest 10.\\n\\nREADY:
I am ready to execute the task.\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 291,\n \"completion_tokens\":
58,\n \"total_tokens\": 349,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_1590f93f9d\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 00:22:22 GMT
Server:
- cloudflare
Set-Cookie:
- SET-COOKIE-XXX
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '1089'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are Unit Converter. A precise
unit conversion specialist\nYour personal goal is: Accurately convert between
units and apply transformations"},{"role":"user","content":"\nCurrent Task:
Convert 100 degrees Celsius to Fahrenheit, then round the result to the nearest
10.\n\nProvide your complete response:"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '373'
content-type:
- application/json
cookie:
- COOKIE-XXX
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D4yTPQewXDyPdYHI4dHPH7YGHcRge\",\n \"object\":
\"chat.completion\",\n \"created\": 1770078143,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"To convert degrees Celsius to Fahrenheit,
you can use the formula:\\n\\n\\\\[ F = \\\\left( C \\\\times \\\\frac{9}{5}
\\\\right) + 32 \\\\]\\n\\nPlugging in 100 degrees Celsius:\\n\\n\\\\[ F =
\\\\left( 100 \\\\times \\\\frac{9}{5} \\\\right) + 32 \\\\]\\n\\nCalculating
that step-by-step:\\n\\n1. Multiply 100 by 9: \\n \\\\[ 100 \\\\times 9
= 900 \\\\]\\n\\n2. Divide by 5:\\n \\\\[ 900 \\\\div 5 = 180 \\\\]\\n\\n3.
Add 32:\\n \\\\[ 180 + 32 = 212 \\\\]\\n\\nSo, 100 degrees Celsius is equal
to 212 degrees Fahrenheit.\\n\\nNow, rounding 212 to the nearest 10:\\n\\nThe
nearest multiple of 10 to 212 is 210.\\n\\nTherefore, the final result is
**210 degrees Fahrenheit**.\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 63,\n \"completion_tokens\":
191,\n \"total_tokens\": 254,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_1590f93f9d\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 00:22:26 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '3736'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1

View File

@@ -1,6 +1,10 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are test role. test backstory\nYour personal goal is: test goal\nTo give my best complete final answer to the task respond using the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!"},{"role":"user","content":"\nCurrent Task: Calculate 2 + 2\n\nThis is the expected criteria for your final answer: The result of the calculation\nyou MUST return the actual complete content as the final answer, not a summary.\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],"model":"gpt-4o-mini"}'
body: '{"messages":[{"role":"system","content":"You are test role. test backstory\nYour
personal goal is: test goal"},{"role":"user","content":"\nCurrent Task: Calculate
2 + 2\n\nThis is the expected criteria for your final answer: The result of
the calculation\nyou MUST return the actual complete content as the final answer,
not a summary.\n\nProvide your complete response:"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -13,7 +17,7 @@ interactions:
connection:
- keep-alive
content-length:
- '797'
- '396'
content-type:
- application/json
host:
@@ -35,13 +39,23 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.10
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-CjDsYJQa2tIYBbNloukSWecpsTvdK\",\n \"object\": \"chat.completion\",\n \"created\": 1764894146,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"I now can give a great answer \\nFinal Answer: The result of the calculation 2 + 2 is 4.\",\n \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 161,\n \"completion_tokens\": 25,\n \"total_tokens\": 186,\n \"prompt_tokens_details\": {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\": 0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\": \"default\",\n \"system_fingerprint\": \"fp_11f3029f6b\"\
\n}\n"
string: "{\n \"id\": \"chatcmpl-D5DTjYe6n92Rjo4Ox6NiZpAAdBLF0\",\n \"object\":
\"chat.completion\",\n \"created\": 1770135823,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"The result of the calculation 2 + 2
is 4.\",\n \"refusal\": null,\n \"annotations\": []\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 75,\n \"completion_tokens\": 14,\n
\ \"total_tokens\": 89,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_1590f93f9d\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
@@ -50,7 +64,7 @@ interactions:
Content-Type:
- application/json
Date:
- Fri, 05 Dec 2025 00:22:27 GMT
- Tue, 03 Feb 2026 16:23:43 GMT
Server:
- cloudflare
Set-Cookie:
@@ -70,13 +84,11 @@ interactions:
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '516'
- '636'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '529'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:

View File

@@ -1,6 +1,12 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are test role. test backstory\nYour personal goal is: test goal\nTo give my best complete final answer to the task respond using the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!"},{"role":"user","content":"\nCurrent Task: Summarize the given context in one sentence\n\nThis is the expected criteria for your final answer: A one-sentence summary\nyou MUST return the actual complete content as the final answer, not a summary.\n\nThis is the context you''re working with:\nThe quick brown fox jumps over the lazy dog. This sentence contains every letter of the alphabet.\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],"model":"gpt-3.5-turbo"}'
body: '{"messages":[{"role":"system","content":"You are test role. test backstory\nYour
personal goal is: test goal"},{"role":"user","content":"\nCurrent Task: Summarize
the given context in one sentence\n\nThis is the expected criteria for your
final answer: A one-sentence summary\nyou MUST return the actual complete content
as the final answer, not a summary.\n\nThis is the context you''re working with:\nThe
quick brown fox jumps over the lazy dog. This sentence contains every letter
of the alphabet.\n\nProvide your complete response:"}],"model":"gpt-3.5-turbo"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -13,7 +19,7 @@ interactions:
connection:
- keep-alive
content-length:
- '963'
- '562'
content-type:
- application/json
host:
@@ -35,13 +41,23 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.10
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-CjDtsaX0LJ0dzZz02KwKeRGYgazv1\",\n \"object\": \"chat.completion\",\n \"created\": 1764894228,\n \"model\": \"gpt-3.5-turbo-0125\",\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"I now can give a great answer\\n\\nFinal Answer: The quick brown fox jumps over the lazy dog. This sentence contains every letter of the alphabet.\",\n \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 191,\n \"completion_tokens\": 30,\n \"total_tokens\": 221,\n \"prompt_tokens_details\": {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\": 0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\"\
: \"default\",\n \"system_fingerprint\": null\n}\n"
string: "{\n \"id\": \"chatcmpl-D5DTn6yIQ7HpIn5j5Bsbag1efzXPa\",\n \"object\":
\"chat.completion\",\n \"created\": 1770135827,\n \"model\": \"gpt-3.5-turbo-0125\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"The quick brown fox jumps over the
lazy dog. This sentence contains every letter of the alphabet.\",\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
105,\n \"completion_tokens\": 19,\n \"total_tokens\": 124,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": null\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
@@ -50,7 +66,7 @@ interactions:
Content-Type:
- application/json
Date:
- Fri, 05 Dec 2025 00:23:49 GMT
- Tue, 03 Feb 2026 16:23:48 GMT
Server:
- cloudflare
Set-Cookie:
@@ -70,13 +86,11 @@ interactions:
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '506'
- '606'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '559'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:


@@ -1,6 +1,10 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are test role. test backstory\nYour personal goal is: test goal\nTo give my best complete final answer to the task respond using the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!"},{"role":"user","content":"\nCurrent Task: Write a haiku about AI\n\nThis is the expected criteria for your final answer: A haiku (3 lines, 5-7-5 syllable pattern) about AI\nyou MUST return the actual complete content as the final answer, not a summary.\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],"model":"gpt-3.5-turbo","max_tokens":50,"temperature":0.7}'
body: '{"messages":[{"role":"system","content":"You are test role. test backstory\nYour
personal goal is: test goal"},{"role":"user","content":"\nCurrent Task: Write
a haiku about AI\n\nThis is the expected criteria for your final answer: A haiku
(3 lines, 5-7-5 syllable pattern) about AI\nyou MUST return the actual complete
content as the final answer, not a summary.\n\nProvide your complete response:"}],"model":"gpt-3.5-turbo","max_tokens":50,"temperature":0.7}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -13,7 +17,7 @@ interactions:
connection:
- keep-alive
content-length:
- '861'
- '460'
content-type:
- application/json
host:
@@ -35,13 +39,23 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.10
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-CjDqr2BmEXQ08QzZKslTZJZ5vV9lo\",\n \"object\": \"chat.completion\",\n \"created\": 1764894041,\n \"model\": \"gpt-3.5-turbo-0125\",\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"I now can give a great answer\\n\\nFinal Answer: \\nIn circuits they thrive, \\nArtificial minds awake, \\nFuture's coded drive.\",\n \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 174,\n \"completion_tokens\": 29,\n \"total_tokens\": 203,\n \"prompt_tokens_details\": {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\": 0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\": \"default\"\
,\n \"system_fingerprint\": null\n}\n"
string: "{\n \"id\": \"chatcmpl-D5DTgAqxaC8RmEvikXK0UDaxmVmf9\",\n \"object\":
\"chat.completion\",\n \"created\": 1770135820,\n \"model\": \"gpt-3.5-turbo-0125\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Artificial minds,\\nCode and circuits
intertwine,\\nFuture undefined.\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 88,\n \"completion_tokens\":
13,\n \"total_tokens\": 101,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": null\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
@@ -50,7 +64,7 @@ interactions:
Content-Type:
- application/json
Date:
- Fri, 05 Dec 2025 00:20:41 GMT
- Tue, 03 Feb 2026 16:23:40 GMT
Server:
- cloudflare
Set-Cookie:
@@ -70,13 +84,11 @@ interactions:
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '434'
- '277'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '456'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:


@@ -0,0 +1,231 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are a strategic planning assistant.
Create minimal, effective execution plans. Prefer fewer steps over more."},{"role":"user","content":"Create
a focused execution plan for the following task:\n\n## Task\nWhat is 9 + 11?\n\n##
Expected Output\nA number\n\n## Available Tools\nNo tools available\n\n## Instructions\nCreate
ONLY the essential steps needed to complete this task. Use the MINIMUM number
of steps required - do NOT pad your plan with unnecessary steps. Most tasks
need only 2-5 steps.\n\nFor each step:\n- State the specific action to take\n-
Specify which tool to use (if any)\n\nDo NOT include:\n- Setup or preparation
steps that are obvious\n- Verification steps unless critical\n- Documentation
or cleanup steps unless explicitly required\n- Generic steps like \"review results\"
or \"finalize output\"\n\nAfter your plan, state:\n- \"READY: I am ready to
execute the task.\" if the plan is complete\n- \"NOT READY: I need to refine
my plan because [reason].\" if you need more thinking"}],"model":"gpt-4o-mini","tool_choice":"auto","tools":[{"type":"function","function":{"name":"create_reasoning_plan","description":"Create
or refine a reasoning plan for a task","strict":true,"parameters":{"type":"object","properties":{"plan":{"type":"string","description":"The
detailed reasoning plan for the task."},"ready":{"type":"boolean","description":"Whether
the agent is ready to execute the task."}},"required":["plan","ready"],"additionalProperties":false}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '1520'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D4yVACNTzZcghQRwt5kFYQ4HAvbgI\",\n \"object\":
\"chat.completion\",\n \"created\": 1770078252,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"## Execution Plan\\n1. Calculate the
sum of 9 and 11.\\n \\nREADY: I am ready to execute the task.\",\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
279,\n \"completion_tokens\": 28,\n \"total_tokens\": 307,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_1590f93f9d\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 00:24:13 GMT
Server:
- cloudflare
Set-Cookie:
- SET-COOKIE-XXX
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '951'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are Math Assistant. A helpful
math tutor\nYour personal goal is: Help solve math problems"},{"role":"user","content":"\nCurrent
Task: What is 9 + 11?\n\nPlanning:\n## Execution Plan\n1. Calculate the sum
of 9 and 11.\n \nREADY: I am ready to execute the task.\n\nThis is the expected
criteria for your final answer: A number\nyou MUST return the actual complete
content as the final answer, not a summary.\n\nProvide your complete response:"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '513'
content-type:
- application/json
cookie:
- COOKIE-XXX
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D4yVBdTCKSdfcJYlIOX9BbzrObgFI\",\n \"object\":
\"chat.completion\",\n \"created\": 1770078253,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"9 + 11 = 20\",\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
105,\n \"completion_tokens\": 7,\n \"total_tokens\": 112,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_1590f93f9d\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 00:24:13 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '477'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1
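
The new cassette above records a two-phase planning flow: a first completion call gives the model a `create_reasoning_plan` function (with `tool_choice` set to `auto`) and a planning-oriented system prompt, and a second call prepends the returned plan to the task prompt under a `Planning:` heading. The sketch below reproduces that call sequence with the stock `openai` Python client, using abridged versions of the recorded prompts; it is an illustration of what the cassette captures, not the crewAI executor code itself.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PLANNING_TOOL = {
    "type": "function",
    "function": {
        "name": "create_reasoning_plan",
        "description": "Create or refine a reasoning plan for a task",
        "strict": True,
        "parameters": {
            "type": "object",
            "properties": {
                "plan": {"type": "string", "description": "The detailed reasoning plan for the task."},
                "ready": {"type": "boolean", "description": "Whether the agent is ready to execute the task."},
            },
            "required": ["plan", "ready"],
            "additionalProperties": False,
        },
    },
}

# Phase 1: ask a "strategic planning assistant" for a minimal execution plan.
planning = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a strategic planning assistant. "
                                      "Create minimal, effective execution plans."},
        {"role": "user", "content": "Create a focused execution plan for the following task:\n\n"
                                    "## Task\nWhat is 9 + 11?\n\n## Expected Output\nA number"},
    ],
    tools=[PLANNING_TOOL],
    tool_choice="auto",
)
# In this recording the model answered with plain text rather than a tool call,
# so the plan is read straight from message.content.
plan_text = planning.choices[0].message.content

# Phase 2: inject the plan into the task prompt under a "Planning:" heading.
execution = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are Math Assistant. A helpful math tutor\n"
                                      "Your personal goal is: Help solve math problems"},
        {"role": "user", "content": f"\nCurrent Task: What is 9 + 11?\n\nPlanning:\n{plan_text}\n\n"
                                    "Provide your complete response:"},
    ],
)
print(execution.choices[0].message.content)  # "9 + 11 = 20" in the recording
```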


@@ -0,0 +1,243 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are a strategic planning assistant.
Create minimal, effective execution plans. Prefer fewer steps over more."},{"role":"user","content":"Create
a focused execution plan for the following task:\n\n## Task\nCalculate the area
of a circle with radius 5 (use pi = 3.14)\n\n## Expected Output\nThe area as
a number\n\n## Available Tools\nNo tools available\n\n## Instructions\nCreate
ONLY the essential steps needed to complete this task. Use the MINIMUM number
of steps required - do NOT pad your plan with unnecessary steps. Most tasks
need only 2-5 steps.\n\nFor each step:\n- State the specific action to take\n-
Specify which tool to use (if any)\n\nDo NOT include:\n- Setup or preparation
steps that are obvious\n- Verification steps unless critical\n- Documentation
or cleanup steps unless explicitly required\n- Generic steps like \"review results\"
or \"finalize output\"\n\nAfter your plan, state:\n- \"READY: I am ready to
execute the task.\" if the plan is complete\n- \"NOT READY: I need to refine
my plan because [reason].\" if you need more thinking"}],"model":"gpt-4o-mini","tool_choice":"auto","tools":[{"type":"function","function":{"name":"create_reasoning_plan","description":"Create
or refine a reasoning plan for a task","strict":true,"parameters":{"type":"object","properties":{"plan":{"type":"string","description":"The
detailed reasoning plan for the task."},"ready":{"type":"boolean","description":"Whether
the agent is ready to execute the task."}},"required":["plan","ready"],"additionalProperties":false}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '1577'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D4yVCdA1csIzfoHSQvxkfrA4gDn4z\",\n \"object\":
\"chat.completion\",\n \"created\": 1770078254,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"## Execution Plan\\n1. Multiply the
radius (5) by itself (5) to get the square of the radius.\\n2. Multiply the
squared radius by pi (3.14) to calculate the area.\\n\\nREADY: I am ready
to execute the task.\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 293,\n \"completion_tokens\":
54,\n \"total_tokens\": 347,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_1590f93f9d\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 00:24:15 GMT
Server:
- cloudflare
Set-Cookie:
- SET-COOKIE-XXX
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '845'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are Math Tutor. An expert
tutor\nYour personal goal is: Solve complex math problems step by step"},{"role":"user","content":"\nCurrent
Task: Calculate the area of a circle with radius 5 (use pi = 3.14)\n\nPlanning:\n##
Execution Plan\n1. Multiply the radius (5) by itself (5) to get the square of
the radius.\n2. Multiply the squared radius by pi (3.14) to calculate the area.\n\nREADY:
I am ready to execute the task.\n\nThis is the expected criteria for your final
answer: The area as a number\nyou MUST return the actual complete content as
the final answer, not a summary.\n\nProvide your complete response:"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '682'
content-type:
- application/json
cookie:
- COOKIE-XXX
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D4yVDh2U2xx3qeYHcDQvbetOmVCxb\",\n \"object\":
\"chat.completion\",\n \"created\": 1770078255,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"To calculate the area of a circle with
a radius of 5, we will follow the steps outlined in the execution plan.\\n\\n1.
**Square the radius**:\\n \\\\[\\n 5 \\\\times 5 = 25\\n \\\\]\\n\\n2.
**Multiply the squared radius by pi (using \\\\(\\\\pi \\\\approx 3.14\\\\))**:\\n
\ \\\\[\\n \\\\text{Area} = \\\\pi \\\\times (\\\\text{radius})^2 = 3.14
\\\\times 25\\n \\\\]\\n\\n Now, let's perform the multiplication:\\n
\ \\\\[\\n 3.14 \\\\times 25 = 78.5\\n \\\\]\\n\\nThus, the area of the
circle is \\\\( \\\\boxed{78.5} \\\\).\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 147,\n \"completion_tokens\":
155,\n \"total_tokens\": 302,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_1590f93f9d\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 00:24:18 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '2228'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1


@@ -1,7 +1,11 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are test role. test backstory\nYour personal goal is: test goal\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\nTool Name: dummy_tool\nTool Arguments: {''query'': {''description'': None, ''type'': ''str''}}\nTool Description: Useful for when you need to get a dummy result for a query.\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought: you should always think about what to do\nAction: the action to take, only one name of [dummy_tool], just the name, exactly as it''s written.\nAction Input: the input to the action, just a simple JSON object, enclosed in curly braces, using \" to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce all necessary information is gathered, return the following format:\n\n```\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n```"},{"role":"user","content":"\nCurrent
Task: Use the dummy tool to get a result for ''test query''\n\nThis is the expected criteria for your final answer: The result from the dummy tool\nyou MUST return the actual complete content as the final answer, not a summary.\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],"model":"gpt-3.5-turbo"}'
body: '{"messages":[{"role":"system","content":"You are test role. test backstory\nYour
personal goal is: test goal"},{"role":"user","content":"\nCurrent Task: Use
the dummy tool to get a result for ''test query''\n\nThis is the expected criteria
for your final answer: The result from the dummy tool\nyou MUST return the actual
complete content as the final answer, not a summary."}],"model":"gpt-3.5-turbo","tool_choice":"auto","tools":[{"type":"function","function":{"name":"dummy_tool","description":"Useful
for when you need to get a dummy result for a query.","strict":true,"parameters":{"properties":{"query":{"title":"Query","type":"string"}},"required":["query"],"type":"object","additionalProperties":false}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -14,7 +18,7 @@ interactions:
connection:
- keep-alive
content-length:
- '1381'
- '712'
content-type:
- application/json
host:
@@ -36,12 +40,26 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.10
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-CjDrE1Z8bFQjjxI2vDPPKgtOTm28p\",\n \"object\": \"chat.completion\",\n \"created\": 1764894064,\n \"model\": \"gpt-3.5-turbo-0125\",\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"you should always think about what to do\",\n \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 289,\n \"completion_tokens\": 8,\n \"total_tokens\": 297,\n \"prompt_tokens_details\": {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\": 0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\": \"default\",\n \"system_fingerprint\": null\n}\n"
string: "{\n \"id\": \"chatcmpl-D5DTlUmKYee1DaS5AqnaUCZ6B14xV\",\n \"object\":
\"chat.completion\",\n \"created\": 1770135825,\n \"model\": \"gpt-3.5-turbo-0125\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_tBCgelchfQjXXJrrM15MxqGJ\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"dummy_tool\",\n
\ \"arguments\": \"{\\\"query\\\":\\\"test query\\\"}\"\n }\n
\ }\n ],\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"tool_calls\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 122,\n \"completion_tokens\":
16,\n \"total_tokens\": 138,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": null\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
@@ -50,7 +68,7 @@ interactions:
Content-Type:
- application/json
Date:
- Fri, 05 Dec 2025 00:21:05 GMT
- Tue, 03 Feb 2026 16:23:46 GMT
Server:
- cloudflare
Set-Cookie:
@@ -70,13 +88,124 @@ interactions:
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '379'
- '694'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are test role. test backstory\nYour
personal goal is: test goal"},{"role":"user","content":"\nCurrent Task: Use
the dummy tool to get a result for ''test query''\n\nThis is the expected criteria
for your final answer: The result from the dummy tool\nyou MUST return the actual
complete content as the final answer, not a summary."},{"role":"assistant","content":null,"tool_calls":[{"id":"call_tBCgelchfQjXXJrrM15MxqGJ","type":"function","function":{"name":"dummy_tool","arguments":"{\"query\":\"test
query\"}"}}]},{"role":"tool","tool_call_id":"call_tBCgelchfQjXXJrrM15MxqGJ","name":"dummy_tool","content":"Dummy
result for: test query"},{"role":"user","content":"Analyze the tool result.
If requirements are met, provide the Final Answer. Otherwise, call the next
tool. Deliver only the answer without meta-commentary."}],"model":"gpt-3.5-turbo","tool_choice":"auto","tools":[{"type":"function","function":{"name":"dummy_tool","description":"Useful
for when you need to get a dummy result for a query.","strict":true,"parameters":{"properties":{"query":{"title":"Query","type":"string"}},"required":["query"],"type":"object","additionalProperties":false}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '1202'
content-type:
- application/json
cookie:
- COOKIE-XXX
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D5DTmxZJI2Ee7fHNc9dYtQkD7sIY2\",\n \"object\":
\"chat.completion\",\n \"created\": 1770135826,\n \"model\": \"gpt-3.5-turbo-0125\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Dummy result for: test query\",\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
188,\n \"completion_tokens\": 7,\n \"total_tokens\": 195,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": null\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 16:23:47 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '416'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '399'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
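
This cassette's update swaps the old text-protocol prompt (with its `Action:`/`Observation:` scaffolding) for native OpenAI tool calling: the first recorded response carries a `tool_calls` entry for `dummy_tool`, the tool output goes back as a `role: "tool"` message keyed by `tool_call_id`, and a follow-up user instruction asks the model to either give the final answer or call another tool. The sketch below reconstructs that message sequence with the stock `openai` client; the inline `run_dummy_tool` helper is a hypothetical stand-in for the test tool, and the whole snippet illustrates the recorded traffic rather than the library's executor.

```python
import json
from openai import OpenAI

client = OpenAI()

DUMMY_TOOL = {
    "type": "function",
    "function": {
        "name": "dummy_tool",
        "description": "Useful for when you need to get a dummy result for a query.",
        "strict": True,
        "parameters": {
            "type": "object",
            "properties": {"query": {"title": "Query", "type": "string"}},
            "required": ["query"],
            "additionalProperties": False,
        },
    },
}

def run_dummy_tool(query: str) -> str:
    # Hypothetical stand-in for the test tool recorded in the cassette.
    return f"Dummy result for: {query}"

messages = [
    {"role": "system", "content": "You are test role. test backstory\nYour personal goal is: test goal"},
    {"role": "user", "content": "\nCurrent Task: Use the dummy tool to get a result for 'test query'"},
]

# First call: in the recording the model responds with a tool_calls entry instead of text.
first = client.chat.completions.create(
    model="gpt-3.5-turbo", messages=messages, tools=[DUMMY_TOOL], tool_choice="auto"
)
call = first.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)  # {"query": "test query"}

# Feed the tool result back under the "tool" role, keyed by tool_call_id, then
# add the short follow-up instruction seen in the second recorded request.
messages += [
    first.choices[0].message,
    {"role": "tool", "tool_call_id": call.id, "name": "dummy_tool",
     "content": run_dummy_tool(args["query"])},
    {"role": "user", "content": "Analyze the tool result. If requirements are met, "
                                "provide the Final Answer. Otherwise, call the next tool."},
]

# Second call: the model now returns the final answer as plain content.
second = client.chat.completions.create(
    model="gpt-3.5-turbo", messages=messages, tools=[DUMMY_TOOL], tool_choice="auto"
)
print(second.choices[0].message.content)  # "Dummy result for: test query" in the recording
```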


@@ -0,0 +1,110 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are Math Assistant. A helpful
assistant\nYour personal goal is: Help solve math problems"},{"role":"user","content":"\nCurrent
Task: What is 12 * 3?\n\nThis is the expected criteria for your final answer:
A number\nyou MUST return the actual complete content as the final answer, not
a summary.\n\nProvide your complete response:"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '400'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D4yVCw0CGLFmcVvniplwCCt8avtRb\",\n \"object\":
\"chat.completion\",\n \"created\": 1770078254,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"12 * 3 = 36\",\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
75,\n \"completion_tokens\": 7,\n \"total_tokens\": 82,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_1590f93f9d\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 00:24:14 GMT
Server:
- cloudflare
Set-Cookie:
- SET-COOKIE-XXX
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '331'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1
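
Each of these fixtures is a VCR-style cassette: a YAML document whose `interactions` list pairs a recorded request (body, headers, method, URI) with the response that came back, followed by a trailing `version: 1`. Tests replay the cassette instead of calling the live API. The snippet below is a minimal sketch of that replay pattern using `vcrpy`; the cassette filename, the replay configuration, and the test body are illustrative assumptions, not the project's actual test harness.

```python
import vcr
from openai import OpenAI

# Replay-only mode: the test fails if it issues a request that is not already in
# the cassette. Assumes a vcrpy version with httpx support, since the openai
# client sends its requests over httpx.
replay = vcr.VCR(record_mode="none", filter_headers=["authorization"])

@replay.use_cassette("cassettes/agent_simple_math.yaml")  # hypothetical path
def test_agent_answers_from_recorded_response():
    client = OpenAI(api_key="test-key")  # never validated during replay
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are Math Assistant. A helpful assistant\n"
                                          "Your personal goal is: Help solve math problems"},
            {"role": "user", "content": "\nCurrent Task: What is 12 * 3?\n\n"
                                        "Provide your complete response:"},
        ],
    )
    assert "36" in completion.choices[0].message.content
```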


@@ -0,0 +1,243 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are a strategic planning assistant.
Create minimal, effective execution plans. Prefer fewer steps over more."},{"role":"user","content":"Create
a focused execution plan for the following task:\n\n## Task\nFind the first
3 prime numbers, add them together, then multiply by 2.\n\n## Expected Output\nComplete
the task successfully\n\n## Available Tools\nNo tools available\n\n## Instructions\nCreate
ONLY the essential steps needed to complete this task. Use the MINIMUM number
of steps required - do NOT pad your plan with unnecessary steps. Most tasks
need only 2-5 steps.\n\nFor each step:\n- State the specific action to take\n-
Specify which tool to use (if any)\n\nDo NOT include:\n- Setup or preparation
steps that are obvious\n- Verification steps unless critical\n- Documentation
or cleanup steps unless explicitly required\n- Generic steps like \"review results\"
or \"finalize output\"\n\nAfter your plan, state:\n- \"READY: I am ready to
execute the task.\" if the plan is complete\n- \"NOT READY: I need to refine
my plan because [reason].\" if you need more thinking"}],"model":"gpt-4o-mini","tool_choice":"auto","tools":[{"type":"function","function":{"name":"create_reasoning_plan","description":"Create
or refine a reasoning plan for a task","strict":true,"parameters":{"type":"object","properties":{"plan":{"type":"string","description":"The
detailed reasoning plan for the task."},"ready":{"type":"boolean","description":"Whether
the agent is ready to execute the task."}},"required":["plan","ready"],"additionalProperties":false}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '1597'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D4yU0MD5GfSUjRW0R4cBmFJ6Hcjbi\",\n \"object\":
\"chat.completion\",\n \"created\": 1770078180,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"### Execution Plan\\n1. Identify the
first 3 prime numbers: 2, 3, and 5.\\n2. Add the prime numbers together: 2
+ 3 + 5 = 10.\\n3. Multiply the sum by 2: 10 * 2 = 20.\\n\\nREADY: I am ready
to execute the task.\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 291,\n \"completion_tokens\":
73,\n \"total_tokens\": 364,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_1590f93f9d\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 00:23:02 GMT
Server:
- cloudflare
Set-Cookie:
- SET-COOKIE-XXX
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '1253'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are Math Tutor. An expert
tutor who explains step by step\nYour personal goal is: Solve multi-step math
problems"},{"role":"user","content":"\nCurrent Task: Find the first 3 prime
numbers, add them together, then multiply by 2.\n\nProvide your complete response:"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '333'
content-type:
- application/json
cookie:
- COOKIE-XXX
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D4yU2qY6Xqpkz2D5yVAwagQzuPpen\",\n \"object\":
\"chat.completion\",\n \"created\": 1770078182,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"To solve the problem, let\u2019s go
through each step methodically.\\n\\n1. **Identify the first three prime numbers**:\\n
\ - **Prime numbers** are numbers greater than 1 that have no positive divisors
other than 1 and themselves.\\n - The first three prime numbers are:\\n
\ - 2\\n - 3\\n - 5\\n\\n2. **Add these prime numbers together**:\\n
\ - We add them together:\\n \\\\[\\n 2 + 3 + 5\\n \\\\]\\n -
Performing the addition step-by-step:\\n - First, add 2 and 3:\\n \\\\[\\n
\ 2 + 3 = 5\\n \\\\]\\n - Then add 5 to this result:\\n \\\\[\\n
\ 5 + 5 = 10\\n \\\\]\\n - So, the sum of the first three prime
numbers is **10**.\\n\\n3. **Multiply the sum by 2**:\\n - Now we multiply
the result by 2:\\n \\\\[\\n 10 \\\\times 2 = 20\\n \\\\]\\n \\nTherefore,
the final answer is **20**.\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 62,\n \"completion_tokens\":
236,\n \"total_tokens\": 298,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_1590f93f9d\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 00:23:06 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '3846'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1


@@ -0,0 +1,238 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are a strategic planning assistant.
Create minimal, effective execution plans. Prefer fewer steps over more."},{"role":"user","content":"Create
a focused execution plan for the following task:\n\n## Task\nWhat is 15 + 27?\n\n##
Expected Output\nComplete the task successfully\n\n## Available Tools\nNo tools
available\n\n## Instructions\nCreate ONLY the essential steps needed to complete
this task. Use the MINIMUM number of steps required - do NOT pad your plan with
unnecessary steps. Most tasks need only 2-5 steps.\n\nFor each step:\n- State
the specific action to take\n- Specify which tool to use (if any)\n\nDo NOT
include:\n- Setup or preparation steps that are obvious\n- Verification steps
unless critical\n- Documentation or cleanup steps unless explicitly required\n-
Generic steps like \"review results\" or \"finalize output\"\n\nAfter your plan,
state:\n- \"READY: I am ready to execute the task.\" if the plan is complete\n-
\"NOT READY: I need to refine my plan because [reason].\" if you need more thinking"}],"model":"gpt-4o-mini","tool_choice":"auto","tools":[{"type":"function","function":{"name":"create_reasoning_plan","description":"Create
or refine a reasoning plan for a task","strict":true,"parameters":{"type":"object","properties":{"plan":{"type":"string","description":"The
detailed reasoning plan for the task."},"ready":{"type":"boolean","description":"Whether
the agent is ready to execute the task."}},"required":["plan","ready"],"additionalProperties":false}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '1543'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D4yTrm3GkzDX47DIcce9uA3iF8kFE\",\n \"object\":
\"chat.completion\",\n \"created\": 1770078171,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"## Execution Plan\\n\\n1. Calculate
the sum of 15 and 27.\\n\\nREADY: I am ready to execute the task.\",\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
281,\n \"completion_tokens\": 27,\n \"total_tokens\": 308,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_1590f93f9d\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 00:22:51 GMT
Server:
- cloudflare
Set-Cookie:
- SET-COOKIE-XXX
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '691'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are Math Assistant. A helpful
math tutor\nYour personal goal is: Help solve math problems step by step"},{"role":"user","content":"\nCurrent
Task: What is 15 + 27?\n\nProvide your complete response:"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '269'
content-type:
- application/json
cookie:
- COOKIE-XXX
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D4yTrUOvExA9fTFDwYxvG4xEgRP6L\",\n \"object\":
\"chat.completion\",\n \"created\": 1770078171,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"To solve the problem \\\\( 15 + 27
\\\\), we can follow these steps:\\n\\n1. **Align the numbers**: Write them
one on top of the other, aligned by their rightmost digits:\\n\\n ```\\n
\ 15\\n + 27\\n ```\\n\\n2. **Add the units place**: Start from the
rightmost digits (units place):\\n - \\\\( 5 + 7 = 12 \\\\)\\n - Write
down 2 and carry over 1.\\n\\n3. **Add the tens place**: Now, move to the
next column (tens place):\\n - \\\\( 1 + 2 + 1 \\\\) (the 1 is from the
carry) \\\\( = 4 \\\\)\\n\\n4. **Combine the results**: Now, combine the results
from the tens and units places:\\n - The result in the tens place is 4 and
in the units place is 2, giving us \\\\( 42 \\\\).\\n\\nTherefore, \\\\( 15
+ 27 = 42 \\\\).\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 50,\n \"completion_tokens\":
209,\n \"total_tokens\": 259,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_1590f93f9d\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 00:22:55 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '3263'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1


@@ -0,0 +1,110 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are Math Assistant. A helpful
assistant\nYour personal goal is: Help solve math problems"},{"role":"user","content":"\nCurrent
Task: What is 100 / 4?\n\nProvide your complete response:"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '255'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D4yU6mFapBLuCx4fJtYBup52dwwrs\",\n \"object\":
\"chat.completion\",\n \"created\": 1770078186,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"To solve the problem 100 divided by
4, you can perform the division as follows:\\n\\n100 \xF7 4 = 25\\n\\nSo,
the answer is 25.\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 46,\n \"completion_tokens\":
36,\n \"total_tokens\": 82,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_1590f93f9d\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 00:23:07 GMT
Server:
- cloudflare
Set-Cookie:
- SET-COOKIE-XXX
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '1098'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1


@@ -0,0 +1,108 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are Math Assistant. A helpful
assistant\nYour personal goal is: Help solve math problems"},{"role":"user","content":"\nCurrent
Task: What is 8 * 7?\n\nProvide your complete response:"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '253'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D4yTqLFhGtfq2CyS2aPPhiZL4GjtQ\",\n \"object\":
\"chat.completion\",\n \"created\": 1770078170,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"8 * 7 equals 56.\",\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
46,\n \"completion_tokens\": 8,\n \"total_tokens\": 54,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_1590f93f9d\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 00:22:50 GMT
Server:
- cloudflare
Set-Cookie:
- SET-COOKIE-XXX
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '443'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1


@@ -0,0 +1,548 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are a strategic planning
assistant. Create minimal, effective execution plans. Prefer fewer steps over
more."}, {"role": "user", "content": "Create a focused execution plan for the
following task:\n\n## Task\nResearch the current state of the AI agent market:\n1.
Search for recent information about AI agents and their market trends\n2. Read
detailed content from a relevant industry source\n3. Generate a brief report
summarizing the key findings\n\nUse the available tools for each step.\n\n##
Expected Output\nComplete the task successfully\n\n## Available Tools\nweb_search,
read_website, generate_report\n\n## Instructions\nCreate ONLY the essential
steps needed to complete this task. Use the MINIMUM number of steps required
- do NOT pad your plan with unnecessary steps. Most tasks need only 2-5 steps.\n\nFor
each step:\n- State the specific action to take\n- Specify which tool to use
(if any)\n- Note dependencies on previous steps if this step requires their
output\n- If a step involves multiple items (e.g., research 3 competitors),
note this explicitly\n\nDo NOT include:\n- Setup or preparation steps that are
obvious\n- Verification steps unless critical\n- Documentation or cleanup steps
unless explicitly required\n- Generic steps like \"review results\" or \"finalize
output\"\n\nAfter your plan, state:\n- \"READY: I am ready to execute the task.\"
if the plan is complete\n- \"NOT READY: I need to refine my plan because [reason].\"
if you need more thinking"}], "stream": false, "stop": ["\nObservation:"], "tool_choice":
"auto", "tools": [{"function": {"name": "create_reasoning_plan", "description":
"Create or refine a reasoning plan for a task with structured steps", "parameters":
{"type": "object", "properties": {"plan": {"type": "string", "description":
"A brief summary of the overall plan."}, "steps": {"type": "array", "description":
"List of discrete steps to execute the plan", "items": {"type": "object", "properties":
{"step_number": {"type": "integer", "description": "Step number (1-based)"},
"description": {"type": "string", "description": "What to do in this step"},
"tool_to_use": {"type": ["string", "null"], "description": "Tool to use for
this step, or null if no tool needed"}, "depends_on": {"type": "array", "items":
{"type": "integer"}, "description": "Step numbers this step depends on (empty
array if none)"}}, "required": ["step_number", "description", "tool_to_use",
"depends_on"], "additionalProperties": false}}, "ready": {"type": "boolean",
"description": "Whether the agent is ready to execute the task."}}, "required":
["plan", "steps", "ready"], "additionalProperties": false}}, "type": "function"}]}'
headers:
Accept:
- application/json
Connection:
- keep-alive
Content-Length:
- '2711'
Content-Type:
- application/json
User-Agent:
- X-USER-AGENT-XXX
accept-encoding:
- ACCEPT-ENCODING-XXX
api-key:
- X-API-KEY-XXX
authorization:
- AUTHORIZATION-XXX
x-ms-client-request-id:
- X-MS-CLIENT-REQUEST-ID-XXX
method: POST
uri: https://fake-azure-endpoint.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-12-01-preview
response:
body:
string: '{"choices":[{"content_filter_results":{},"finish_reason":"tool_calls","index":0,"logprobs":null,"message":{"annotations":[],"content":null,"refusal":null,"role":"assistant","tool_calls":[{"function":{"arguments":"{\"plan\":\"Research
the current state of the AI agent market and summarize the key findings.\",\"steps\":[{\"step_number\":1,\"description\":\"Search
for recent information about AI agents and their market trends using web_search.\",\"tool_to_use\":\"web_search\",\"depends_on\":[]},{\"step_number\":2,\"description\":\"Read
detailed content from a relevant industry source using read_website, gathering
insights on trends and competitive analysis.\",\"tool_to_use\":\"read_website\",\"depends_on\":[1]},{\"step_number\":3,\"description\":\"Using
the knowledge from steps 1 and 2, generate a brief report summarizing the
AI agent market findings.\",\"tool_to_use\":\"generate_report\",\"depends_on\":[1,2]}],\"ready\":true}","name":"create_reasoning_plan"},"id":"call_TPmou69xLfxPqApRnPwI6zYV","type":"function"}]}}],"created":1770145131,"id":"chatcmpl-D5Ftr1QP6lTPXIemws2EtuKaWeSxt","model":"gpt-4o-2024-11-20","object":"chat.completion","prompt_filter_results":[{"prompt_index":0,"content_filter_results":{"hate":{"filtered":false,"severity":"safe"},"jailbreak":{"detected":false,"filtered":false},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}}}],"system_fingerprint":"fp_b54fe76834","usage":{"completion_tokens":157,"completion_tokens_details":{"accepted_prediction_tokens":0,"audio_tokens":0,"reasoning_tokens":0,"rejected_prediction_tokens":0},"prompt_tokens":480,"prompt_tokens_details":{"audio_tokens":0,"cached_tokens":0},"total_tokens":637}}
'
headers:
Content-Length:
- '1762'
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 18:58:54 GMT
Strict-Transport-Security:
- STS-XXX
apim-request-id:
- APIM-REQUEST-ID-XXX
azureml-model-session:
- AZUREML-MODEL-SESSION-XXX
x-accel-buffering:
- 'no'
x-content-type-options:
- X-CONTENT-TYPE-XXX
x-ms-client-request-id:
- X-MS-CLIENT-REQUEST-ID-XXX
x-ms-deployment-name:
- gpt-4o
x-ms-rai-invoked:
- 'true'
x-ms-region:
- X-MS-REGION-XXX
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Research Analyst.
An experienced analyst skilled at gathering information and synthesizing findings
into actionable insights.\nYour personal goal is: Conduct thorough research
and produce insightful reports"}, {"role": "user", "content": "\nCurrent Task:
Research the current state of the AI agent market:\n1. Search for recent information
about AI agents and their market trends\n2. Read detailed content from a relevant
industry source\n3. Generate a brief report summarizing the key findings\n\nUse
the available tools for each step."}], "stream": false, "stop": ["\nObservation:"],
"tool_choice": "auto", "tools": [{"function": {"name": "web_search", "description":
"Search the web for information on a given topic.\n\nArgs:\n query: The search
query to look up.\n\nReturns:\n Search results as a string.", "parameters":
{"properties": {"query": {"title": "Query", "type": "string"}}, "required":
["query"], "type": "object", "additionalProperties": false}}, "type": "function"},
{"function": {"name": "read_website", "description": "Read and extract content
from a website URL.\n\nArgs:\n url: The URL of the website to read.\n\nReturns:\n The
extracted content from the website.", "parameters": {"properties": {"url": {"title":
"Url", "type": "string"}}, "required": ["url"], "type": "object", "additionalProperties":
false}}, "type": "function"}, {"function": {"name": "generate_report", "description":
"Generate a structured report based on research findings.\n\nArgs:\n title:
The title of the report.\n findings: The research findings to include.\n\nReturns:\n A
formatted report string.", "parameters": {"properties": {"title": {"title":
"Title", "type": "string"}, "findings": {"title": "Findings", "type": "string"}},
"required": ["title", "findings"], "type": "object", "additionalProperties":
false}}, "type": "function"}]}'
headers:
Accept:
- application/json
Connection:
- keep-alive
Content-Length:
- '1912'
Content-Type:
- application/json
User-Agent:
- X-USER-AGENT-XXX
accept-encoding:
- ACCEPT-ENCODING-XXX
api-key:
- X-API-KEY-XXX
authorization:
- AUTHORIZATION-XXX
x-ms-client-request-id:
- X-MS-CLIENT-REQUEST-ID-XXX
method: POST
uri: https://fake-azure-endpoint.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-12-01-preview
response:
body:
string: '{"choices":[{"content_filter_results":{},"finish_reason":"tool_calls","index":0,"logprobs":null,"message":{"annotations":[],"content":null,"refusal":null,"role":"assistant","tool_calls":[{"function":{"arguments":"{\"query\":\"current
state of AI agent market 2023\"}","name":"web_search"},"id":"call_6RDgkQSr8S7luEHqqOaI734w","type":"function"}]}}],"created":1770145136,"id":"chatcmpl-D5FtwVvV3KE10L2JIOd7n8Ph1Iu3Q","model":"gpt-4o-2024-11-20","object":"chat.completion","prompt_filter_results":[{"prompt_index":0,"content_filter_results":{"hate":{"filtered":false,"severity":"safe"},"jailbreak":{"detected":false,"filtered":false},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}}}],"system_fingerprint":"fp_b54fe76834","usage":{"completion_tokens":23,"completion_tokens_details":{"accepted_prediction_tokens":0,"audio_tokens":0,"reasoning_tokens":0,"rejected_prediction_tokens":0},"prompt_tokens":267,"prompt_tokens_details":{"audio_tokens":0,"cached_tokens":0},"total_tokens":290}}
'
headers:
Content-Length:
- '1079'
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 18:58:55 GMT
Strict-Transport-Security:
- STS-XXX
apim-request-id:
- APIM-REQUEST-ID-XXX
azureml-model-session:
- AZUREML-MODEL-SESSION-XXX
x-accel-buffering:
- 'no'
x-content-type-options:
- X-CONTENT-TYPE-XXX
x-ms-client-request-id:
- X-MS-CLIENT-REQUEST-ID-XXX
x-ms-deployment-name:
- gpt-4o
x-ms-rai-invoked:
- 'true'
x-ms-region:
- X-MS-REGION-XXX
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Research Analyst.
An experienced analyst skilled at gathering information and synthesizing findings
into actionable insights.\nYour personal goal is: Conduct thorough research
and produce insightful reports"}, {"role": "user", "content": "\nCurrent Task:
Research the current state of the AI agent market:\n1. Search for recent information
about AI agents and their market trends\n2. Read detailed content from a relevant
industry source\n3. Generate a brief report summarizing the key findings\n\nUse
the available tools for each step."}, {"role": "assistant", "content": "", "tool_calls":
[{"id": "call_6RDgkQSr8S7luEHqqOaI734w", "type": "function", "function": {"name":
"web_search", "arguments": "{\"query\":\"current state of AI agent market 2023\"}"}}]},
{"role": "tool", "tool_call_id": "call_6RDgkQSr8S7luEHqqOaI734w", "content":
"Search results for ''current state of AI agent market 2023'': Found 3 relevant
articles about the topic including market analysis, competitor data, and industry
trends."}], "stream": false, "stop": ["\nObservation:"], "tool_choice": "auto",
"tools": [{"function": {"name": "web_search", "description": "Search the web
for information on a given topic.\n\nArgs:\n query: The search query to look
up.\n\nReturns:\n Search results as a string.", "parameters": {"properties":
{"query": {"title": "Query", "type": "string"}}, "required": ["query"], "type":
"object", "additionalProperties": false}}, "type": "function"}, {"function":
{"name": "read_website", "description": "Read and extract content from a website
URL.\n\nArgs:\n url: The URL of the website to read.\n\nReturns:\n The
extracted content from the website.", "parameters": {"properties": {"url": {"title":
"Url", "type": "string"}}, "required": ["url"], "type": "object", "additionalProperties":
false}}, "type": "function"}, {"function": {"name": "generate_report", "description":
"Generate a structured report based on research findings.\n\nArgs:\n title:
The title of the report.\n findings: The research findings to include.\n\nReturns:\n A
formatted report string.", "parameters": {"properties": {"title": {"title":
"Title", "type": "string"}, "findings": {"title": "Findings", "type": "string"}},
"required": ["title", "findings"], "type": "object", "additionalProperties":
false}}, "type": "function"}]}'
headers:
Accept:
- application/json
Connection:
- keep-alive
Content-Length:
- '2381'
Content-Type:
- application/json
User-Agent:
- X-USER-AGENT-XXX
accept-encoding:
- ACCEPT-ENCODING-XXX
api-key:
- X-API-KEY-XXX
authorization:
- AUTHORIZATION-XXX
x-ms-client-request-id:
- X-MS-CLIENT-REQUEST-ID-XXX
method: POST
uri: https://fake-azure-endpoint.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-12-01-preview
response:
body:
string: '{"choices":[{"content_filter_results":{},"finish_reason":"tool_calls","index":0,"logprobs":null,"message":{"annotations":[],"content":null,"refusal":null,"role":"assistant","tool_calls":[{"function":{"arguments":"{\"url\":
\"https://example.com/article1\"}","name":"read_website"},"id":"call_ie6tNHSbW9TWIqoXD9CN3MNZ","type":"function"},{"function":{"arguments":"{\"url\":
\"https://example.com/article2\"}","name":"read_website"},"id":"call_qxn4V4mMMpOnYSAwVuwarFkB","type":"function"},{"function":{"arguments":"{\"url\":
\"https://example.com/article3\"}","name":"read_website"},"id":"call_7ElzUIHHJvuciFWj6eIF5RhF","type":"function"}]}}],"created":1770145137,"id":"chatcmpl-D5Ftxnr2VyEYZd6zSpTJavdxSoE18","model":"gpt-4o-2024-11-20","object":"chat.completion","prompt_filter_results":[{"prompt_index":0,"content_filter_results":{"hate":{"filtered":false,"severity":"safe"},"jailbreak":{"detected":false,"filtered":false},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}}}],"system_fingerprint":"fp_b54fe76834","usage":{"completion_tokens":77,"completion_tokens_details":{"accepted_prediction_tokens":0,"audio_tokens":0,"reasoning_tokens":0,"rejected_prediction_tokens":0},"prompt_tokens":330,"prompt_tokens_details":{"audio_tokens":0,"cached_tokens":0},"total_tokens":407}}
'
headers:
Content-Length:
- '1371'
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 18:58:57 GMT
Strict-Transport-Security:
- STS-XXX
apim-request-id:
- APIM-REQUEST-ID-XXX
azureml-model-session:
- AZUREML-MODEL-SESSION-XXX
x-accel-buffering:
- 'no'
x-content-type-options:
- X-CONTENT-TYPE-XXX
x-ms-client-request-id:
- X-MS-CLIENT-REQUEST-ID-XXX
x-ms-deployment-name:
- gpt-4o
x-ms-rai-invoked:
- 'true'
x-ms-region:
- X-MS-REGION-XXX
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Research Analyst.
An experienced analyst skilled at gathering information and synthesizing findings
into actionable insights.\nYour personal goal is: Conduct thorough research
and produce insightful reports"}, {"role": "user", "content": "\nCurrent Task:
Research the current state of the AI agent market:\n1. Search for recent information
about AI agents and their market trends\n2. Read detailed content from a relevant
industry source\n3. Generate a brief report summarizing the key findings\n\nUse
the available tools for each step."}, {"role": "assistant", "content": "", "tool_calls":
[{"id": "call_6RDgkQSr8S7luEHqqOaI734w", "type": "function", "function": {"name":
"web_search", "arguments": "{\"query\":\"current state of AI agent market 2023\"}"}}]},
{"role": "tool", "tool_call_id": "call_6RDgkQSr8S7luEHqqOaI734w", "content":
"Search results for ''current state of AI agent market 2023'': Found 3 relevant
articles about the topic including market analysis, competitor data, and industry
trends."}, {"role": "assistant", "content": "", "tool_calls": [{"id": "call_ie6tNHSbW9TWIqoXD9CN3MNZ",
"type": "function", "function": {"name": "read_website", "arguments": "{\"url\":
\"https://example.com/article1\"}"}}, {"id": "call_qxn4V4mMMpOnYSAwVuwarFkB",
"type": "function", "function": {"name": "read_website", "arguments": "{\"url\":
\"https://example.com/article2\"}"}}, {"id": "call_7ElzUIHHJvuciFWj6eIF5RhF",
"type": "function", "function": {"name": "read_website", "arguments": "{\"url\":
\"https://example.com/article3\"}"}}]}, {"role": "tool", "tool_call_id": "call_ie6tNHSbW9TWIqoXD9CN3MNZ",
"content": "Content from https://example.com/article1: This article discusses
key insights about the topic including market size ($50B), growth rate (15%
YoY), and major players in the industry."}, {"role": "tool", "tool_call_id":
"call_qxn4V4mMMpOnYSAwVuwarFkB", "content": "Content from https://example.com/article2:
This article discusses key insights about the topic including market size ($50B),
growth rate (15% YoY), and major players in the industry."}, {"role": "tool",
"tool_call_id": "call_7ElzUIHHJvuciFWj6eIF5RhF", "content": "Content from https://example.com/article3:
This article discusses key insights about the topic including market size ($50B),
growth rate (15% YoY), and major players in the industry."}], "stream": false,
"stop": ["\nObservation:"], "tool_choice": "auto", "tools": [{"function": {"name":
"web_search", "description": "Search the web for information on a given topic.\n\nArgs:\n query:
The search query to look up.\n\nReturns:\n Search results as a string.",
"parameters": {"properties": {"query": {"title": "Query", "type": "string"}},
"required": ["query"], "type": "object", "additionalProperties": false}}, "type":
"function"}, {"function": {"name": "read_website", "description": "Read and
extract content from a website URL.\n\nArgs:\n url: The URL of the website
to read.\n\nReturns:\n The extracted content from the website.", "parameters":
{"properties": {"url": {"title": "Url", "type": "string"}}, "required": ["url"],
"type": "object", "additionalProperties": false}}, "type": "function"}, {"function":
{"name": "generate_report", "description": "Generate a structured report based
on research findings.\n\nArgs:\n title: The title of the report.\n findings:
The research findings to include.\n\nReturns:\n A formatted report string.",
"parameters": {"properties": {"title": {"title": "Title", "type": "string"},
"findings": {"title": "Findings", "type": "string"}}, "required": ["title",
"findings"], "type": "object", "additionalProperties": false}}, "type": "function"}]}'
headers:
Accept:
- application/json
Connection:
- keep-alive
Content-Length:
- '3704'
Content-Type:
- application/json
User-Agent:
- X-USER-AGENT-XXX
accept-encoding:
- ACCEPT-ENCODING-XXX
api-key:
- X-API-KEY-XXX
authorization:
- AUTHORIZATION-XXX
x-ms-client-request-id:
- X-MS-CLIENT-REQUEST-ID-XXX
method: POST
uri: https://fake-azure-endpoint.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-12-01-preview
response:
body:
string: '{"choices":[{"content_filter_results":{},"finish_reason":"tool_calls","index":0,"logprobs":null,"message":{"annotations":[],"content":null,"refusal":null,"role":"assistant","tool_calls":[{"function":{"arguments":"{\"title\":\"Current
State of the AI Agent Market\",\"findings\":\"The AI agent market in 2023
is valued at $50 billion, with a growth rate of 15% YoY. Major players in
the market have been identified as contributing to the expansion of industry
capabilities. Trends point to increasing adoption across industries such as
healthcare and finance, where automation and intelligence are becoming central
to operations. Emerging competitive forces are influencing pricing and technological
advancements in AI agents. Existing and new entrants focus on innovation to
differentiate their offerings and capture market share.\"}","name":"generate_report"},"id":"call_7eE0bJbpvO6YYQNs87iTifQ3","type":"function"}]}}],"created":1770145138,"id":"chatcmpl-D5Ftys4PE1B1dkTuv6EPOQyqk0xuv","model":"gpt-4o-2024-11-20","object":"chat.completion","prompt_filter_results":[{"prompt_index":0,"content_filter_results":{"hate":{"filtered":false,"severity":"safe"},"jailbreak":{"detected":false,"filtered":false},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}}}],"system_fingerprint":"fp_b54fe76834","usage":{"completion_tokens":120,"completion_tokens_details":{"accepted_prediction_tokens":0,"audio_tokens":0,"reasoning_tokens":0,"rejected_prediction_tokens":0},"prompt_tokens":537,"prompt_tokens_details":{"audio_tokens":0,"cached_tokens":0},"total_tokens":657}}
'
headers:
Content-Length:
- '1652'
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 18:58:59 GMT
Strict-Transport-Security:
- STS-XXX
apim-request-id:
- APIM-REQUEST-ID-XXX
azureml-model-session:
- AZUREML-MODEL-SESSION-XXX
x-accel-buffering:
- 'no'
x-content-type-options:
- X-CONTENT-TYPE-XXX
x-ms-client-request-id:
- X-MS-CLIENT-REQUEST-ID-XXX
x-ms-deployment-name:
- gpt-4o
x-ms-rai-invoked:
- 'true'
x-ms-region:
- X-MS-REGION-XXX
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Research Analyst.
An experienced analyst skilled at gathering information and synthesizing findings
into actionable insights.\nYour personal goal is: Conduct thorough research
and produce insightful reports"}, {"role": "user", "content": "\nCurrent Task:
Research the current state of the AI agent market:\n1. Search for recent information
about AI agents and their market trends\n2. Read detailed content from a relevant
industry source\n3. Generate a brief report summarizing the key findings\n\nUse
the available tools for each step."}, {"role": "assistant", "content": "", "tool_calls":
[{"id": "call_6RDgkQSr8S7luEHqqOaI734w", "type": "function", "function": {"name":
"web_search", "arguments": "{\"query\":\"current state of AI agent market 2023\"}"}}]},
{"role": "tool", "tool_call_id": "call_6RDgkQSr8S7luEHqqOaI734w", "content":
"Search results for ''current state of AI agent market 2023'': Found 3 relevant
articles about the topic including market analysis, competitor data, and industry
trends."}, {"role": "assistant", "content": "", "tool_calls": [{"id": "call_ie6tNHSbW9TWIqoXD9CN3MNZ",
"type": "function", "function": {"name": "read_website", "arguments": "{\"url\":
\"https://example.com/article1\"}"}}, {"id": "call_qxn4V4mMMpOnYSAwVuwarFkB",
"type": "function", "function": {"name": "read_website", "arguments": "{\"url\":
\"https://example.com/article2\"}"}}, {"id": "call_7ElzUIHHJvuciFWj6eIF5RhF",
"type": "function", "function": {"name": "read_website", "arguments": "{\"url\":
\"https://example.com/article3\"}"}}]}, {"role": "tool", "tool_call_id": "call_ie6tNHSbW9TWIqoXD9CN3MNZ",
"content": "Content from https://example.com/article1: This article discusses
key insights about the topic including market size ($50B), growth rate (15%
YoY), and major players in the industry."}, {"role": "tool", "tool_call_id":
"call_qxn4V4mMMpOnYSAwVuwarFkB", "content": "Content from https://example.com/article2:
This article discusses key insights about the topic including market size ($50B),
growth rate (15% YoY), and major players in the industry."}, {"role": "tool",
"tool_call_id": "call_7ElzUIHHJvuciFWj6eIF5RhF", "content": "Content from https://example.com/article3:
This article discusses key insights about the topic including market size ($50B),
growth rate (15% YoY), and major players in the industry."}, {"role": "assistant",
"content": "", "tool_calls": [{"id": "call_7eE0bJbpvO6YYQNs87iTifQ3", "type":
"function", "function": {"name": "generate_report", "arguments": "{\"title\":\"Current
State of the AI Agent Market\",\"findings\":\"The AI agent market in 2023 is
valued at $50 billion, with a growth rate of 15% YoY. Major players in the market
have been identified as contributing to the expansion of industry capabilities.
Trends point to increasing adoption across industries such as healthcare and
finance, where automation and intelligence are becoming central to operations.
Emerging competitive forces are influencing pricing and technological advancements
in AI agents. Existing and new entrants focus on innovation to differentiate
their offerings and capture market share.\"}"}}]}, {"role": "tool", "tool_call_id":
"call_7eE0bJbpvO6YYQNs87iTifQ3", "content": "# Current State of the AI Agent
Market\n\n## Executive Summary\nThe AI agent market in 2023 is valued at $50
billion, with a growth rate of 15% YoY. Major players in the market have been
identified as contributing to the expansion of industry capabilities. Trends
point to increasing adoption across industries such as healthcare and finance,
where automation and intelligence are becoming central to operations. Emerging
competitive forces are influencing pricing and technological advancements in
AI agents. Existing and new entrants focus on innovation to differentiate their
offerings and capture market share.\n\n## Conclusion\nBased on the analysis,
the market shows strong growth potential."}], "stream": false, "stop": ["\nObservation:"],
"tool_choice": "auto", "tools": [{"function": {"name": "web_search", "description":
"Search the web for information on a given topic.\n\nArgs:\n query: The search
query to look up.\n\nReturns:\n Search results as a string.", "parameters":
{"properties": {"query": {"title": "Query", "type": "string"}}, "required":
["query"], "type": "object", "additionalProperties": false}}, "type": "function"},
{"function": {"name": "read_website", "description": "Read and extract content
from a website URL.\n\nArgs:\n url: The URL of the website to read.\n\nReturns:\n The
extracted content from the website.", "parameters": {"properties": {"url": {"title":
"Url", "type": "string"}}, "required": ["url"], "type": "object", "additionalProperties":
false}}, "type": "function"}, {"function": {"name": "generate_report", "description":
"Generate a structured report based on research findings.\n\nArgs:\n title:
The title of the report.\n findings: The research findings to include.\n\nReturns:\n A
formatted report string.", "parameters": {"properties": {"title": {"title":
"Title", "type": "string"}, "findings": {"title": "Findings", "type": "string"}},
"required": ["title", "findings"], "type": "object", "additionalProperties":
false}}, "type": "function"}]}'
headers:
Accept:
- application/json
Connection:
- keep-alive
Content-Length:
- '5276'
Content-Type:
- application/json
User-Agent:
- X-USER-AGENT-XXX
accept-encoding:
- ACCEPT-ENCODING-XXX
api-key:
- X-API-KEY-XXX
authorization:
- AUTHORIZATION-XXX
x-ms-client-request-id:
- X-MS-CLIENT-REQUEST-ID-XXX
method: POST
uri: https://fake-azure-endpoint.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-12-01-preview
response:
body:
string: '{"choices":[{"content_filter_results":{"hate":{"filtered":false,"severity":"safe"},"protected_material_code":{"detected":false,"filtered":false},"protected_material_text":{"detected":false,"filtered":false},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}},"finish_reason":"stop","index":0,"logprobs":null,"message":{"annotations":[],"content":"The
report detailing the current state of the AI agent market has been generated
successfully. It highlights the market''s $50 billion valuation in 2023, with
a consistent annual growth rate of 15%. Key findings identify the role of
major players, impactful trends like sector adoption (healthcare and finance),
and the competitive dynamic driving innovation and technological advancements.","refusal":null,"role":"assistant"}}],"created":1770145141,"id":"chatcmpl-D5Fu18JFkDPGsKf10eiS2e2uI04MU","model":"gpt-4o-2024-11-20","object":"chat.completion","prompt_filter_results":[{"prompt_index":0,"content_filter_results":{"hate":{"filtered":false,"severity":"safe"},"jailbreak":{"detected":false,"filtered":false},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}}}],"system_fingerprint":"fp_b54fe76834","usage":{"completion_tokens":72,"completion_tokens_details":{"accepted_prediction_tokens":0,"audio_tokens":0,"reasoning_tokens":0,"rejected_prediction_tokens":0},"prompt_tokens":787,"prompt_tokens_details":{"audio_tokens":0,"cached_tokens":0},"total_tokens":859}}
'
headers:
Content-Length:
- '1597'
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 18:59:01 GMT
Strict-Transport-Security:
- STS-XXX
apim-request-id:
- APIM-REQUEST-ID-XXX
azureml-model-session:
- AZUREML-MODEL-SESSION-XXX
x-accel-buffering:
- 'no'
x-content-type-options:
- X-CONTENT-TYPE-XXX
x-ms-client-request-id:
- X-MS-CLIENT-REQUEST-ID-XXX
x-ms-deployment-name:
- gpt-4o
x-ms-rai-invoked:
- 'true'
x-ms-region:
- X-MS-REGION-XXX
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1


@@ -0,0 +1,613 @@
interactions:
- request:
body: '{"contents": [{"parts": [{"text": "Create a focused execution plan for
the following task:\n\n## Task\nResearch the current state of the AI agent market:\n1.
Search for recent information about AI agents and their market trends\n2. Read
detailed content from a relevant industry source\n3. Generate a brief report
summarizing the key findings\n\nUse the available tools for each step.\n\n##
Expected Output\nComplete the task successfully\n\n## Available Tools\nweb_search,
read_website, generate_report\n\n## Instructions\nCreate ONLY the essential
steps needed to complete this task. Use the MINIMUM number of steps required
- do NOT pad your plan with unnecessary steps. Most tasks need only 2-5 steps.\n\nFor
each step:\n- State the specific action to take\n- Specify which tool to use
(if any)\n- Note dependencies on previous steps if this step requires their
output\n- If a step involves multiple items (e.g., research 3 competitors),
note this explicitly\n\nDo NOT include:\n- Setup or preparation steps that are
obvious\n- Verification steps unless critical\n- Documentation or cleanup steps
unless explicitly required\n- Generic steps like \"review results\" or \"finalize
output\"\n\nAfter your plan, state:\n- \"READY: I am ready to execute the task.\"
if the plan is complete\n- \"NOT READY: I need to refine my plan because [reason].\"
if you need more thinking"}], "role": "user"}], "systemInstruction": {"parts":
[{"text": "You are a strategic planning assistant. Create minimal, effective
execution plans. Prefer fewer steps over more."}], "role": "user"}, "tools":
[{"functionDeclarations": [{"description": "Create or refine a reasoning plan
for a task with structured steps", "name": "create_reasoning_plan", "parameters_json_schema":
{"type": "object", "properties": {"plan": {"type": "string", "description":
"A brief summary of the overall plan."}, "steps": {"type": "array", "description":
"List of discrete steps to execute the plan", "items": {"type": "object", "properties":
{"step_number": {"type": "integer", "description": "Step number (1-based)"},
"description": {"type": "string", "description": "What to do in this step"},
"tool_to_use": {"type": ["string", "null"], "description": "Tool to use for
this step, or null if no tool needed"}, "depends_on": {"type": "array", "items":
{"type": "integer"}, "description": "Step numbers this step depends on (empty
array if none)"}}, "required": ["step_number", "description", "tool_to_use",
"depends_on"], "additionalProperties": false}}, "ready": {"type": "boolean",
"description": "Whether the agent is ready to execute the task."}}, "required":
["plan", "steps", "ready"], "additionalProperties": false}}]}], "generationConfig":
{"stopSequences": ["\nObservation:"]}}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- '*/*'
accept-encoding:
- ACCEPT-ENCODING-XXX
connection:
- keep-alive
content-length:
- '2747'
content-type:
- application/json
host:
- generativelanguage.googleapis.com
x-goog-api-client:
- google-genai-sdk/1.49.0 gl-python/3.13.3
x-goog-api-key:
- X-GOOG-API-KEY-XXX
method: POST
uri: https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent
response:
body:
string: "{\n \"candidates\": [\n {\n \"content\": {\n \"parts\":
[\n {\n \"functionCall\": {\n \"name\": \"create_reasoning_plan\",\n
\ \"args\": {\n \"steps\": [\n {\n
\ \"step_number\": 1,\n \"tool_to_use\":
\"web_search\",\n \"depends_on\": [],\n \"description\":
\"Search for recent information on AI agent market trends.\"\n },\n
\ {\n \"description\": \"Read detailed
content from a relevant industry source found in the search results.\",\n
\ \"depends_on\": [\n 1\n ],\n
\ \"step_number\": 2,\n \"tool_to_use\":
\"read_website\"\n },\n {\n \"depends_on\":
[\n 2\n ],\n \"step_number\":
3,\n \"tool_to_use\": \"generate_report\",\n \"description\":
\"Generate a brief report summarizing the key findings from the researched
content.\"\n }\n ],\n \"plan\":
\"Research the current state of the AI agent market and generate a summary
report.\",\n \"ready\": true\n }\n },\n
\ \"thoughtSignature\": \"CsIIAXLI2nztq0xBScBM9fsnIBhVvD6CWKYYyj3khtIYTNl4oRbOYnH8LQBzET8fRrxiWqnxib7Nn/UVK29pvsHX7TacFxm0ieHb5gmb2lgqo4GpG0P2+HI4xo4vnibPl3OM+bl4M39/JOdBNBmYGUJETnzV++m8INVLIlOuRPGTXRDUOKLNfkyktsJRdNuO68OMu5WXDqrlbuppG1dsK+Y8cyvzDRoMTsYU1arFOZuyLisuhDQ7nTbdXQ8AO0Oc43MdYrSFmfeDBbsnrxDKBMPYtFVadggz4RwKIFvg3Fb9ORho2GkBUum8PNWHZCBXuAoU9FaVdDFVReduuTfZuBwy/Y6MS+8mnLqcQ1wADlg7DnzaOXmALxgfApTEL6YUvwVjVYI1dpCR5ACkFgNq6ecYYumrlZHuHE59p+XOHMhrzu9c99A0mtRvpkz1yu75dMACliSf/2fu4gCPRRLsgez7llp92g7+GzVkbwkRVlMcYQ+0l28sBsfBQd01lQR0vNNbK3JHf/chtCtrRIcgqqnUbTTqm5n8iPHFBWMdEa0L4I1WfPbiUSC9OGwwaoP+Ro6h/gPH9o96gjZQpCft1myrKtgl7dpOaSharMuDfulk2Wd5wJoZ9XR/fxVYQos8sNEIZq8O1cjOW9L74ryKMtw6hF8A4kxGLhOSgXwszNqHoLCLoX6Cz5vb2hZVkCWxq8yn5k3IrTcw/16Jdkx6oNWjHZJmJMz8vl85sUH1ZwZocWHWcUUMo33ZlsBnqPZxwnAj5CW+vQS/xCC7Kq1FciqosYyHgN43bm8JqnHL4qgBAEtDPVrFfaWBpzWWVfXQkX/EVApU/Jr3t9D3gz10CFPsV7d0lx0P7jur8u8Q9n8r2HEi320kNf1EX4YnDioX+nWmHPN203OCOHpDEcEQ89gECMk5M9Xu6EZ94rXmrZJtP5kc0k37fvMexlxIZPuUmV7RpoCTrMVqMyP93eIq9FY/9WsqHVlydumTfEMPI1WY5ObNeHJFhyu2Y6dGB3ONQL1bQ0oboNZujX/AhnauV0A8OS2wsA3yLVk0GFZpTISU+WQ2/Gm7t8CUIgKT3BV4JkYuyNBTONQPLitpfO+TEMcuNZlodinLvkBtBA8B0W55kOAK7y21I9znnNKONo87jg9kfoZMvYlb2DrO9YovQDDhdCZjxXr5VZqbhMH4tb8t5kP/1auCv1GfzV6RVSgyNVYnruqJKEtqgNbLid1FB9EH3Qu0A92HLqdGsuC3qRm2qfyMEmz7iYJ5n/WA5BertGZ/O0SGLEBftgEjQOJhj7flGTEio3yyCHyvw/yGP6S8F3+mPvVLO6/eWMCGm7ig/mG8pjnysRTQTjDv966XhU9SYpinoxTd5mEuWdIHKMZxtBK7qeKY369njKJOP2K220Rk9/Ii+KyzNvbzyobK6oMNcwXgHjO+ssbH+blVUbLai3UblQ==\"\n
\ }\n ],\n \"role\": \"model\"\n },\n \"finishReason\":
\"STOP\",\n \"index\": 0,\n \"finishMessage\": \"Model generated
function call(s).\"\n }\n ],\n \"usageMetadata\": {\n \"promptTokenCount\":
529,\n \"candidatesTokenCount\": 169,\n \"totalTokenCount\": 955,\n
\ \"promptTokensDetails\": [\n {\n \"modality\": \"TEXT\",\n
\ \"tokenCount\": 529\n }\n ],\n \"thoughtsTokenCount\":
257\n },\n \"modelVersion\": \"gemini-2.5-flash\",\n \"responseId\": \"h0WCacqWA6WM_PUPl-niyQ0\"\n}\n"
headers:
Alt-Svc:
- h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
Content-Type:
- application/json; charset=UTF-8
Date:
- Tue, 03 Feb 2026 18:59:19 GMT
Server:
- scaffolding on HTTPServer2
Server-Timing:
- gfet4t7; dur=2345
Transfer-Encoding:
- chunked
Vary:
- Origin
- X-Origin
- Referer
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
X-Frame-Options:
- X-FRAME-OPTIONS-XXX
X-XSS-Protection:
- '0'
status:
code: 200
message: OK
- request:
body: '{"contents": [{"parts": [{"text": "\nCurrent Task: Research the current
state of the AI agent market:\n1. Search for recent information about AI agents
and their market trends\n2. Read detailed content from a relevant industry source\n3.
Generate a brief report summarizing the key findings\n\nUse the available tools
for each step."}], "role": "user"}], "systemInstruction": {"parts": [{"text":
"You are Research Analyst. An experienced analyst skilled at gathering information
and synthesizing findings into actionable insights.\nYour personal goal is:
Conduct thorough research and produce insightful reports"}], "role": "user"},
"tools": [{"functionDeclarations": [{"description": "Search the web for information
on a given topic.\n\nArgs:\n query: The search query to look up.\n\nReturns:\n Search
results as a string.", "name": "web_search", "parameters_json_schema": {"properties":
{"query": {"title": "Query", "type": "string"}}, "required": ["query"], "type":
"object", "additionalProperties": false}}, {"description": "Read and extract
content from a website URL.\n\nArgs:\n url: The URL of the website to read.\n\nReturns:\n The
extracted content from the website.", "name": "read_website", "parameters_json_schema":
{"properties": {"url": {"title": "Url", "type": "string"}}, "required": ["url"],
"type": "object", "additionalProperties": false}}, {"description": "Generate
a structured report based on research findings.\n\nArgs:\n title: The title
of the report.\n findings: The research findings to include.\n\nReturns:\n A
formatted report string.", "name": "generate_report", "parameters_json_schema":
{"properties": {"title": {"title": "Title", "type": "string"}, "findings": {"title":
"Findings", "type": "string"}}, "required": ["title", "findings"], "type": "object",
"additionalProperties": false}}]}], "generationConfig": {"stopSequences": ["\nObservation:"]}}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- '*/*'
accept-encoding:
- ACCEPT-ENCODING-XXX
connection:
- keep-alive
content-length:
- '1904'
content-type:
- application/json
host:
- generativelanguage.googleapis.com
x-goog-api-client:
- google-genai-sdk/1.49.0 gl-python/3.13.3
x-goog-api-key:
- X-GOOG-API-KEY-XXX
method: POST
uri: https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent
response:
body:
string: "{\n \"candidates\": [\n {\n \"content\": {\n \"parts\":
[\n {\n \"functionCall\": {\n \"name\": \"web_search\",\n
\ \"args\": {\n \"query\": \"AI agent market trends
2023-2024\"\n }\n },\n \"thoughtSignature\":
\"CoYEAXLI2nzmyDix3/QA+tMOiUwpDVoA5+RJoRW7kw3okJaVYCa5Usx7eBn4xowP7oXNynS4NfawCYqboufBXjHinq13UTcYg0Y74qIrza4KuctliGmf8G7S4QoS0Y3gqCHQKsxTdShQOg8wirnr8Rdu1eyrrhWE0XKk0HPA0Ssj7zUVoJBqHPqwyvkFyXkMtpcmtq9qXmZYfMFuSKRQnYLVLllL/BpOIL3w7MuofpviO85bvYk9gX0vsDjYWS6EdVEfC9k2BWGjhHaILXT9A1iwNPdDAg33SOC+BlPrGox0ghCr5qEKnBMZhUszqaUCykczFCq+xMIA3xDGNbTjicWb53sL/PXBYLsNty1giW3nKFe8+8eRpUsHUx7oQ82m4AUxKqk99mZjaLp8bHk+rERjFZErcw/pe/3190K0WGHH5ecB4amJCzZtVrQJ1oAZhb7/P1VZ57xmt1z/c1pQgjuvnV+cWE9blh5o6mNNFbFuzJDIO2k8qrFeeDwlCF8OOrxo8F+z1evg4yjZ1+9TLCVFTmZ0S0PI54FS5afb0RdPol2/ISNw7H/dtnO4z6LhT2NmlYqYZr8qfVoUD21rmI08NFs+f/6JW5+7eSQbax76SW+6A2IqqPPyF66MCpqtEzC+hpzVsCBcIQyRQWsdm+RNAs50gmqF6W3CcTPryWkeS7w9ORqxdiU=\"\n
\ }\n ],\n \"role\": \"model\"\n },\n \"finishReason\":
\"STOP\",\n \"index\": 0,\n \"finishMessage\": \"Model generated
function call(s).\"\n }\n ],\n \"usageMetadata\": {\n \"promptTokenCount\":
319,\n \"candidatesTokenCount\": 28,\n \"totalTokenCount\": 461,\n \"promptTokensDetails\":
[\n {\n \"modality\": \"TEXT\",\n \"tokenCount\": 319\n
\ }\n ],\n \"thoughtsTokenCount\": 114\n },\n \"modelVersion\":
\"gemini-2.5-flash\",\n \"responseId\": \"iEWCaYyJENDn_uMP4q3N8QE\"\n}\n"
headers:
Alt-Svc:
- h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
Content-Type:
- application/json; charset=UTF-8
Date:
- Tue, 03 Feb 2026 18:59:20 GMT
Server:
- scaffolding on HTTPServer2
Server-Timing:
- gfet4t7; dur=1193
Transfer-Encoding:
- chunked
Vary:
- Origin
- X-Origin
- Referer
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
X-Frame-Options:
- X-FRAME-OPTIONS-XXX
X-XSS-Protection:
- '0'
status:
code: 200
message: OK
- request:
body: '{"contents": [{"parts": [{"text": "\nCurrent Task: Research the current
state of the AI agent market:\n1. Search for recent information about AI agents
and their market trends\n2. Read detailed content from a relevant industry source\n3.
Generate a brief report summarizing the key findings\n\nUse the available tools
for each step."}], "role": "user"}, {"parts": [{"functionCall": {"args": {"query":
"AI agent market trends 2023-2024"}, "name": "web_search"}, "thoughtSignature":
"CoYEAXLI2nzmyDix3_QA-tMOiUwpDVoA5-RJoRW7kw3okJaVYCa5Usx7eBn4xowP7oXNynS4NfawCYqboufBXjHinq13UTcYg0Y74qIrza4KuctliGmf8G7S4QoS0Y3gqCHQKsxTdShQOg8wirnr8Rdu1eyrrhWE0XKk0HPA0Ssj7zUVoJBqHPqwyvkFyXkMtpcmtq9qXmZYfMFuSKRQnYLVLllL_BpOIL3w7MuofpviO85bvYk9gX0vsDjYWS6EdVEfC9k2BWGjhHaILXT9A1iwNPdDAg33SOC-BlPrGox0ghCr5qEKnBMZhUszqaUCykczFCq-xMIA3xDGNbTjicWb53sL_PXBYLsNty1giW3nKFe8-8eRpUsHUx7oQ82m4AUxKqk99mZjaLp8bHk-rERjFZErcw_pe_3190K0WGHH5ecB4amJCzZtVrQJ1oAZhb7_P1VZ57xmt1z_c1pQgjuvnV-cWE9blh5o6mNNFbFuzJDIO2k8qrFeeDwlCF8OOrxo8F-z1evg4yjZ1-9TLCVFTmZ0S0PI54FS5afb0RdPol2_ISNw7H_dtnO4z6LhT2NmlYqYZr8qfVoUD21rmI08NFs-f_6JW5-7eSQbax76SW-6A2IqqPPyF66MCpqtEzC-hpzVsCBcIQyRQWsdm-RNAs50gmqF6W3CcTPryWkeS7w9ORqxdiU="}],
"role": "model"}, {"parts": [{"functionResponse": {"name": "web_search", "response":
{"result": "Search results for ''AI agent market trends 2023-2024'': Found 3
relevant articles about the topic including market analysis, competitor data,
and industry trends."}}}], "role": "user"}], "systemInstruction": {"parts":
[{"text": "You are Research Analyst. An experienced analyst skilled at gathering
information and synthesizing findings into actionable insights.\nYour personal
goal is: Conduct thorough research and produce insightful reports"}], "role":
"user"}, "tools": [{"functionDeclarations": [{"description": "Search the web
for information on a given topic.\n\nArgs:\n query: The search query to look
up.\n\nReturns:\n Search results as a string.", "name": "web_search", "parameters_json_schema":
{"properties": {"query": {"title": "Query", "type": "string"}}, "required":
["query"], "type": "object", "additionalProperties": false}}, {"description":
"Read and extract content from a website URL.\n\nArgs:\n url: The URL of
the website to read.\n\nReturns:\n The extracted content from the website.",
"name": "read_website", "parameters_json_schema": {"properties": {"url": {"title":
"Url", "type": "string"}}, "required": ["url"], "type": "object", "additionalProperties":
false}}, {"description": "Generate a structured report based on research findings.\n\nArgs:\n title:
The title of the report.\n findings: The research findings to include.\n\nReturns:\n A
formatted report string.", "name": "generate_report", "parameters_json_schema":
{"properties": {"title": {"title": "Title", "type": "string"}, "findings": {"title":
"Findings", "type": "string"}}, "required": ["title", "findings"], "type": "object",
"additionalProperties": false}}]}], "generationConfig": {"stopSequences": ["\nObservation:"]}}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- '*/*'
accept-encoding:
- ACCEPT-ENCODING-XXX
connection:
- keep-alive
content-length:
- '3015'
content-type:
- application/json
host:
- generativelanguage.googleapis.com
x-goog-api-client:
- google-genai-sdk/1.49.0 gl-python/3.13.3
x-goog-api-key:
- X-GOOG-API-KEY-XXX
method: POST
uri: https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent
response:
body:
string: "{\n \"candidates\": [\n {\n \"content\": {\n \"parts\":
[\n {\n \"functionCall\": {\n \"name\": \"web_search\",\n
\ \"args\": {\n \"query\": \"latest reports AI
agent market analysis\"\n }\n },\n \"thoughtSignature\":
\"CpwGAXLI2nybmVGffIN1k+T5M2HmQNlvybZRPoo/ysgNARa+9nrPdRoBZ9RC+Dee9KSk1o7O+IU9l0sWCTirQYcroEhXon+JIQUVTed/L0s//sBOR+hJnZWoaG5ucsfJQvovAQba2Wb7uViEkdySvHfRApF0atewbC+TCKZrxDAQ6Naby8nwUTauJPKlgsBsZVnlViRfIbF7pom1zvXD+d5htjMiuJr1nOuSH0EGQWC4TUiuJD23hgockzhmIpbU/bStn8PFIQNsySEzl6H5sZdlD4auwCMCD2Q+Ur05w1uLv7n8GoSZn5dkdXLR5R7dZ+kkX+xP4w841Ih2gc6rBKT5tSedN01AuJsK65NSfOXZBwakxs58WZXDQXnIQe4d2QThAX3nPdUmhvVI6sHX+ZdtQZIrhE7hRf9j/T/wvvrUao5VDv+mxXd9bcPEV2BzSXkvkAB1SbJ+5wN7Qb2j31lkiUu5uRnZOiVxL5iCS+8Z/jEl4NjpGithbcPoNpFIDOeiE/f8kf7tQJDX+YNquPbYZRJHvIfLalVQndVGNlZVN2jXT3Wwo8So3vmzIDFVjV+pj7tSRNK8hTITm6bfHS+XUZqJdm7eHCzhonyJ7/tl7LbsstPXoZU3ZN50tNpXYOK+NzgzU7iwd9SaHVXgzQRdujWgHuBmSiSd9qNHvdaNwgARVTnMj3VYpehgIuaYMzQmgM99TdC/zmzcqHa5VZSnHKHqMIVc9gjRvVwz4DUm6VLLnKnVYFClM6gofmUI6s9fThiR5EdfimaDTlRlzh6Df33jAbRA9rUTDH6uE+DjiopCvXuHjmQqK9Smyxt2vTao9H7AYIRN7yWmdVoaG0tUSL2XQ31wIW3cEyhz7ihQwFYKJnOkQ3/CiU6KV+4ldk9UY/vKWSgItVTTE6G8Di0iviiCAkmL59Uj7vnIp80+U9rDIK7WhxpAWlrDA6cGQT2LGAXQ2liXtLa31nXyfvCezhtSS8jSVm4SHaiU+INvYtpq2Q7nXPTFbjjZooyC5FePGqAH1T+sRYbR02jaa572/NuwFgBCObTfqO8G5A==\"\n
\ }\n ],\n \"role\": \"model\"\n },\n \"finishReason\":
\"STOP\",\n \"index\": 0,\n \"finishMessage\": \"Model generated
function call(s).\"\n }\n ],\n \"usageMetadata\": {\n \"promptTokenCount\":
400,\n \"candidatesTokenCount\": 20,\n \"totalTokenCount\": 589,\n \"promptTokensDetails\":
[\n {\n \"modality\": \"TEXT\",\n \"tokenCount\": 400\n
\ }\n ],\n \"thoughtsTokenCount\": 169\n },\n \"modelVersion\":
\"gemini-2.5-flash\",\n \"responseId\": \"ikWCacqBG8ve_uMPj4KFyAY\"\n}\n"
headers:
Alt-Svc:
- h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
Content-Type:
- application/json; charset=UTF-8
Date:
- Tue, 03 Feb 2026 18:59:22 GMT
Server:
- scaffolding on HTTPServer2
Server-Timing:
- gfet4t7; dur=2125
Transfer-Encoding:
- chunked
Vary:
- Origin
- X-Origin
- Referer
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
X-Frame-Options:
- X-FRAME-OPTIONS-XXX
X-XSS-Protection:
- '0'
status:
code: 200
message: OK
- request:
body: '{"contents": [{"parts": [{"text": "\nCurrent Task: Research the current
state of the AI agent market:\n1. Search for recent information about AI agents
and their market trends\n2. Read detailed content from a relevant industry source\n3.
Generate a brief report summarizing the key findings\n\nUse the available tools
for each step."}], "role": "user"}, {"parts": [{"functionCall": {"args": {"query":
"AI agent market trends 2023-2024"}, "name": "web_search"}, "thoughtSignature":
"CoYEAXLI2nzmyDix3_QA-tMOiUwpDVoA5-RJoRW7kw3okJaVYCa5Usx7eBn4xowP7oXNynS4NfawCYqboufBXjHinq13UTcYg0Y74qIrza4KuctliGmf8G7S4QoS0Y3gqCHQKsxTdShQOg8wirnr8Rdu1eyrrhWE0XKk0HPA0Ssj7zUVoJBqHPqwyvkFyXkMtpcmtq9qXmZYfMFuSKRQnYLVLllL_BpOIL3w7MuofpviO85bvYk9gX0vsDjYWS6EdVEfC9k2BWGjhHaILXT9A1iwNPdDAg33SOC-BlPrGox0ghCr5qEKnBMZhUszqaUCykczFCq-xMIA3xDGNbTjicWb53sL_PXBYLsNty1giW3nKFe8-8eRpUsHUx7oQ82m4AUxKqk99mZjaLp8bHk-rERjFZErcw_pe_3190K0WGHH5ecB4amJCzZtVrQJ1oAZhb7_P1VZ57xmt1z_c1pQgjuvnV-cWE9blh5o6mNNFbFuzJDIO2k8qrFeeDwlCF8OOrxo8F-z1evg4yjZ1-9TLCVFTmZ0S0PI54FS5afb0RdPol2_ISNw7H_dtnO4z6LhT2NmlYqYZr8qfVoUD21rmI08NFs-f_6JW5-7eSQbax76SW-6A2IqqPPyF66MCpqtEzC-hpzVsCBcIQyRQWsdm-RNAs50gmqF6W3CcTPryWkeS7w9ORqxdiU="}],
"role": "model"}, {"parts": [{"functionResponse": {"name": "web_search", "response":
{"result": "Search results for ''AI agent market trends 2023-2024'': Found 3
relevant articles about the topic including market analysis, competitor data,
and industry trends."}}}], "role": "user"}, {"parts": [{"functionCall": {"args":
{"query": "latest reports AI agent market analysis"}, "name": "web_search"},
"thoughtSignature": "CpwGAXLI2nybmVGffIN1k-T5M2HmQNlvybZRPoo_ysgNARa-9nrPdRoBZ9RC-Dee9KSk1o7O-IU9l0sWCTirQYcroEhXon-JIQUVTed_L0s__sBOR-hJnZWoaG5ucsfJQvovAQba2Wb7uViEkdySvHfRApF0atewbC-TCKZrxDAQ6Naby8nwUTauJPKlgsBsZVnlViRfIbF7pom1zvXD-d5htjMiuJr1nOuSH0EGQWC4TUiuJD23hgockzhmIpbU_bStn8PFIQNsySEzl6H5sZdlD4auwCMCD2Q-Ur05w1uLv7n8GoSZn5dkdXLR5R7dZ-kkX-xP4w841Ih2gc6rBKT5tSedN01AuJsK65NSfOXZBwakxs58WZXDQXnIQe4d2QThAX3nPdUmhvVI6sHX-ZdtQZIrhE7hRf9j_T_wvvrUao5VDv-mxXd9bcPEV2BzSXkvkAB1SbJ-5wN7Qb2j31lkiUu5uRnZOiVxL5iCS-8Z_jEl4NjpGithbcPoNpFIDOeiE_f8kf7tQJDX-YNquPbYZRJHvIfLalVQndVGNlZVN2jXT3Wwo8So3vmzIDFVjV-pj7tSRNK8hTITm6bfHS-XUZqJdm7eHCzhonyJ7_tl7LbsstPXoZU3ZN50tNpXYOK-NzgzU7iwd9SaHVXgzQRdujWgHuBmSiSd9qNHvdaNwgARVTnMj3VYpehgIuaYMzQmgM99TdC_zmzcqHa5VZSnHKHqMIVc9gjRvVwz4DUm6VLLnKnVYFClM6gofmUI6s9fThiR5EdfimaDTlRlzh6Df33jAbRA9rUTDH6uE-DjiopCvXuHjmQqK9Smyxt2vTao9H7AYIRN7yWmdVoaG0tUSL2XQ31wIW3cEyhz7ihQwFYKJnOkQ3_CiU6KV-4ldk9UY_vKWSgItVTTE6G8Di0iviiCAkmL59Uj7vnIp80-U9rDIK7WhxpAWlrDA6cGQT2LGAXQ2liXtLa31nXyfvCezhtSS8jSVm4SHaiU-INvYtpq2Q7nXPTFbjjZooyC5FePGqAH1T-sRYbR02jaa572_NuwFgBCObTfqO8G5A=="}],
"role": "model"}, {"parts": [{"functionResponse": {"name": "web_search", "response":
{"result": "Search results for ''latest reports AI agent market analysis'':
Found 3 relevant articles about the topic including market analysis, competitor
data, and industry trends."}}}], "role": "user"}], "systemInstruction": {"parts":
[{"text": "You are Research Analyst. An experienced analyst skilled at gathering
information and synthesizing findings into actionable insights.\nYour personal
goal is: Conduct thorough research and produce insightful reports"}], "role":
"user"}, "tools": [{"functionDeclarations": [{"description": "Search the web
for information on a given topic.\n\nArgs:\n query: The search query to look
up.\n\nReturns:\n Search results as a string.", "name": "web_search", "parameters_json_schema":
{"properties": {"query": {"title": "Query", "type": "string"}}, "required":
["query"], "type": "object", "additionalProperties": false}}, {"description":
"Read and extract content from a website URL.\n\nArgs:\n url: The URL of
the website to read.\n\nReturns:\n The extracted content from the website.",
"name": "read_website", "parameters_json_schema": {"properties": {"url": {"title":
"Url", "type": "string"}}, "required": ["url"], "type": "object", "additionalProperties":
false}}, {"description": "Generate a structured report based on research findings.\n\nArgs:\n title:
The title of the report.\n findings: The research findings to include.\n\nReturns:\n A
formatted report string.", "name": "generate_report", "parameters_json_schema":
{"properties": {"title": {"title": "Title", "type": "string"}, "findings": {"title":
"Findings", "type": "string"}}, "required": ["title", "findings"], "type": "object",
"additionalProperties": false}}]}], "generationConfig": {"stopSequences": ["\nObservation:"]}}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- '*/*'
accept-encoding:
- ACCEPT-ENCODING-XXX
connection:
- keep-alive
content-length:
- '4512'
content-type:
- application/json
host:
- generativelanguage.googleapis.com
x-goog-api-client:
- google-genai-sdk/1.49.0 gl-python/3.13.3
x-goog-api-key:
- X-GOOG-API-KEY-XXX
method: POST
uri: https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent
response:
body:
string: "{\n \"candidates\": [\n {\n \"content\": {\n \"parts\":
[\n {\n \"functionCall\": {\n \"name\": \"read_website\",\n
\ \"args\": {\n \"url\": \"https://www.example.com/ai-agent-market-report\"\n
\ }\n },\n \"thoughtSignature\": \"CooPAXLI2nzYy6ki8YJo4RDDzryK6qtkbIEWXCh0ZjpoRX2fgNghUrxXZUwrsrbrwEpccdvWCDpb5ZkwYMNuzMi4yRUsjfcebfd2VCwQgvWnWmitj3taAcLCUDIJX5pGt0L2O8V6ehWmrANQHGw6Qc/QVx5dMlSFeFKHtfc1M95CJz2BxZd3lnuKLCEu7LCCiqIDdd1o1y/EcGsl7OHai6WyQJg49Cvcww//Z+kfSVoAPGNedTYPIf4ImttMyofV8+yczI0IGjhFzE0Qk1Pvo84O9NyOufpPELeKY8l1yfZgSZEL0sUA6weqf1P/xzNKt6h96Rh1KpAx5iTGFtqOWJrJ8OP+ZdGJ+rA+xZQuTRtKbW/e60rC5kPgJnkhWCp1p7HTLgTGwpzzxztqL0ggURaWw8GJw8S41BcM5mFEA+a7ivMWPMLdMk6h5gr/Y8JTnLSRZFoZYHkY9bTcAFPc9gapyFlKkQDciYet/MHe2zfE7ycx0e8c0W7ISoYPHpXW/WxNekMiNdfx1eg/mEX/Y0Vjc70p+HtbnEGaBoZWJSo+cJtZdA4sNJaIuEnnVTtAdAtnoxJUOyr2jrKDAsSjHmVoeLARZO2/DuJKsEVNHzsNPMw4SrBq1DkG52Aof7KcskOfW2OyoZEUnIf23IRabKflUG/7dHGqYfLtkWRApWZYBy2eQILwsmXJ4xx27S7/02Jl0D8rX3vOQhjS7lTjrXvy8wU3biUFvbnqJuLj3ACLn7Hn1axe4dB7zIpYC1di63DDs1fQp79d2VaGRC+tVMKiQu/+yC6nCM4j8JSnpBNgvyhq3ilAr/iDHD6GkbxdBBChFbZl0KY+WkrcZpPp2g2m/beiQuzF+cM6RuaqF2W0TEjVT+OZ1ivObreXFVFWa9T5qFsTjV3E/SFlmrXdx6Kf8d0i9QeEwbzBIvvX3VNUdW1DJ95WylaUXFtljk/cqylri2j2WTPMTSV9fcM/a8UWoayUAuLzq8zgxd2Wvrm8uvGcaewAQT+yK+u5FsLFoKxaMkoakb2/1tjWfhn+MrbIoYOBIebUapUhADUJgGDTG7byEEQlbwNaa8rOZ5ZpdBrDXjtj1tsz6cqaEiCbtZWqei0myuMa0J6Z3FHoceSUgOACD/bHWSkzdW7LdKBKLxB25fKfD1hCLscE1skZ7nAEZWmrUzKz9yCiqAPfatQhHqnr3EchEB/dxbNXxCCT6IFxemhb9AsaQWjcTrlQJiuz8t4VMmu1slgEcrhweZ/PpxI/E74qvV9ljcFLcziW0BZaqepJMeBe2cLfVVki5R5kudQVuAcCEdkrwBDWAwZlO9aWTmiZ9+ggfl3F/63JzZXSM/NDc/2rJ1k65e+O+vyTzkrUwnSPnh4RPK6jeIul4SSLPgyZBwp0qHyTl4As8jpMe9Rbt0NRli6Z5eAr/IYdbicKo/pzqiupdxvu4u+jKt+F4KojU9avgWFAB01isHHO5Z1vzVJ6XDilWeah5DvHc1lkcPCXtbBGvykfFqJIBhthnFi0f/S4HV+IlOKwvKbfmL5GplD7H+DjUA5UdHjw8HoJa6uQxhbERy5dZlYqxN918aL147Afq4LuPoWUJULEsYULwUuA6HspemIKUltHacOAxZgP4OLVkB3zwssj8E6rMtU1puupHiL7J66fFaR3co71gzzlvl8R2Xi3xEQdpigxxCUAsZWMrSrITRBnKPKGF1CrFEhU6FP6bQAQ6UuhAdqLoJihwVnMTyY9fGUriwCkrQu5gK7ZQlnXyluV0J/5xWh4sOaPmwmeiUXMBPN0iGb7z24lyHaI3QXz3kGlJBnIFhseJTJo3ed66Z/LAf8I/hFC6s/sIioEmRgd4tm3Q1U05ETKrptnzo8Ac4AOTdJtbQv0uDQSkoefUbebu7x6L9Dns04VSvDPKLwlFQd81sl9DYmi9SPDxKT7S6gfG6WjJ9z35eNSR01QW1QIgAAhMU8UX4o7QQaXnUZxfZeYRMXlzu/xb4KSREvc/FeLJ+IvnPFgzryAgLi/Sipl//Eul0sjrREPYJE1GxBOIoURGa+Bsmc3yy8aWArxv/HGpbLzwjmH8TaMvB3P/4tkvT+5IjJlpe0UrR1ssqasUtwfH9yWv5+4i+EdJrJ/SJ4Hl1Vlj9zi9lEFr33Zs96kn8ZOkBHp0m7Sxr5xP+krSxQkROIpu0d02kIqQ3nob+kGbnAf0zkmH6pS3H4mG2Zksu8KcGvghx7XTl3qBOJ+ZY4vlev2cSMBfMmnbUHKvMlz4YsmTGR1JPntDguA8UJJ1UZCex4E/W3KcHwd4qvzqgMZNjdarIvGjlmnKmoL1RV7EqUsBkpk+jauY0SqoWsIIf4b6O9Y3fQRRJURLeIfGCTmlxl+df+4yjqUhxUXTaQsX2KlEJ3tZK+ZI9FdKMzqPip2oMFwf2XfLIyIuRA0NU5fDRIxqPUzYv6RQ+zhXXB1QscsAtyf1t+LsaVw//1Vfj6V3Ups4pNyiwMAZg1z9DaIHht++VfgCdqT4RLcaGI8eEDrjkOc9G/iLkLBCfOB7EZkcXT20Lc1Silxa++dZsaYp5Tni8dz6f5YWg==\"\n
\ }\n ],\n \"role\": \"model\"\n },\n \"finishReason\":
\"STOP\",\n \"index\": 0,\n \"finishMessage\": \"Model generated
function call(s).\"\n }\n ],\n \"usageMetadata\": {\n \"promptTokenCount\":
465,\n \"candidatesTokenCount\": 29,\n \"totalTokenCount\": 916,\n \"promptTokensDetails\":
[\n {\n \"modality\": \"TEXT\",\n \"tokenCount\": 465\n
\ }\n ],\n \"thoughtsTokenCount\": 422\n },\n \"modelVersion\":
\"gemini-2.5-flash\",\n \"responseId\": \"jUWCadDOIdi8_uMPxuOu4Ak\"\n}\n"
headers:
Alt-Svc:
- h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
Content-Type:
- application/json; charset=UTF-8
Date:
- Tue, 03 Feb 2026 18:59:25 GMT
Server:
- scaffolding on HTTPServer2
Server-Timing:
- gfet4t7; dur=3079
Transfer-Encoding:
- chunked
Vary:
- Origin
- X-Origin
- Referer
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
X-Frame-Options:
- X-FRAME-OPTIONS-XXX
X-XSS-Protection:
- '0'
status:
code: 200
message: OK
- request:
body: '{"contents": [{"parts": [{"text": "\nCurrent Task: Research the current
state of the AI agent market:\n1. Search for recent information about AI agents
and their market trends\n2. Read detailed content from a relevant industry source\n3.
Generate a brief report summarizing the key findings\n\nUse the available tools
for each step."}], "role": "user"}, {"parts": [{"functionCall": {"args": {"query":
"AI agent market trends 2023-2024"}, "name": "web_search"}, "thoughtSignature":
"CoYEAXLI2nzmyDix3_QA-tMOiUwpDVoA5-RJoRW7kw3okJaVYCa5Usx7eBn4xowP7oXNynS4NfawCYqboufBXjHinq13UTcYg0Y74qIrza4KuctliGmf8G7S4QoS0Y3gqCHQKsxTdShQOg8wirnr8Rdu1eyrrhWE0XKk0HPA0Ssj7zUVoJBqHPqwyvkFyXkMtpcmtq9qXmZYfMFuSKRQnYLVLllL_BpOIL3w7MuofpviO85bvYk9gX0vsDjYWS6EdVEfC9k2BWGjhHaILXT9A1iwNPdDAg33SOC-BlPrGox0ghCr5qEKnBMZhUszqaUCykczFCq-xMIA3xDGNbTjicWb53sL_PXBYLsNty1giW3nKFe8-8eRpUsHUx7oQ82m4AUxKqk99mZjaLp8bHk-rERjFZErcw_pe_3190K0WGHH5ecB4amJCzZtVrQJ1oAZhb7_P1VZ57xmt1z_c1pQgjuvnV-cWE9blh5o6mNNFbFuzJDIO2k8qrFeeDwlCF8OOrxo8F-z1evg4yjZ1-9TLCVFTmZ0S0PI54FS5afb0RdPol2_ISNw7H_dtnO4z6LhT2NmlYqYZr8qfVoUD21rmI08NFs-f_6JW5-7eSQbax76SW-6A2IqqPPyF66MCpqtEzC-hpzVsCBcIQyRQWsdm-RNAs50gmqF6W3CcTPryWkeS7w9ORqxdiU="}],
"role": "model"}, {"parts": [{"functionResponse": {"name": "web_search", "response":
{"result": "Search results for ''AI agent market trends 2023-2024'': Found 3
relevant articles about the topic including market analysis, competitor data,
and industry trends."}}}], "role": "user"}, {"parts": [{"functionCall": {"args":
{"query": "latest reports AI agent market analysis"}, "name": "web_search"},
"thoughtSignature": "CpwGAXLI2nybmVGffIN1k-T5M2HmQNlvybZRPoo_ysgNARa-9nrPdRoBZ9RC-Dee9KSk1o7O-IU9l0sWCTirQYcroEhXon-JIQUVTed_L0s__sBOR-hJnZWoaG5ucsfJQvovAQba2Wb7uViEkdySvHfRApF0atewbC-TCKZrxDAQ6Naby8nwUTauJPKlgsBsZVnlViRfIbF7pom1zvXD-d5htjMiuJr1nOuSH0EGQWC4TUiuJD23hgockzhmIpbU_bStn8PFIQNsySEzl6H5sZdlD4auwCMCD2Q-Ur05w1uLv7n8GoSZn5dkdXLR5R7dZ-kkX-xP4w841Ih2gc6rBKT5tSedN01AuJsK65NSfOXZBwakxs58WZXDQXnIQe4d2QThAX3nPdUmhvVI6sHX-ZdtQZIrhE7hRf9j_T_wvvrUao5VDv-mxXd9bcPEV2BzSXkvkAB1SbJ-5wN7Qb2j31lkiUu5uRnZOiVxL5iCS-8Z_jEl4NjpGithbcPoNpFIDOeiE_f8kf7tQJDX-YNquPbYZRJHvIfLalVQndVGNlZVN2jXT3Wwo8So3vmzIDFVjV-pj7tSRNK8hTITm6bfHS-XUZqJdm7eHCzhonyJ7_tl7LbsstPXoZU3ZN50tNpXYOK-NzgzU7iwd9SaHVXgzQRdujWgHuBmSiSd9qNHvdaNwgARVTnMj3VYpehgIuaYMzQmgM99TdC_zmzcqHa5VZSnHKHqMIVc9gjRvVwz4DUm6VLLnKnVYFClM6gofmUI6s9fThiR5EdfimaDTlRlzh6Df33jAbRA9rUTDH6uE-DjiopCvXuHjmQqK9Smyxt2vTao9H7AYIRN7yWmdVoaG0tUSL2XQ31wIW3cEyhz7ihQwFYKJnOkQ3_CiU6KV-4ldk9UY_vKWSgItVTTE6G8Di0iviiCAkmL59Uj7vnIp80-U9rDIK7WhxpAWlrDA6cGQT2LGAXQ2liXtLa31nXyfvCezhtSS8jSVm4SHaiU-INvYtpq2Q7nXPTFbjjZooyC5FePGqAH1T-sRYbR02jaa572_NuwFgBCObTfqO8G5A=="}],
"role": "model"}, {"parts": [{"functionResponse": {"name": "web_search", "response":
{"result": "Search results for ''latest reports AI agent market analysis'':
Found 3 relevant articles about the topic including market analysis, competitor
data, and industry trends."}}}], "role": "user"}, {"parts": [{"functionCall":
{"args": {"url": "https://www.example.com/ai-agent-market-report"}, "name":
"read_website"}, "thoughtSignature": "CooPAXLI2nzYy6ki8YJo4RDDzryK6qtkbIEWXCh0ZjpoRX2fgNghUrxXZUwrsrbrwEpccdvWCDpb5ZkwYMNuzMi4yRUsjfcebfd2VCwQgvWnWmitj3taAcLCUDIJX5pGt0L2O8V6ehWmrANQHGw6Qc_QVx5dMlSFeFKHtfc1M95CJz2BxZd3lnuKLCEu7LCCiqIDdd1o1y_EcGsl7OHai6WyQJg49Cvcww__Z-kfSVoAPGNedTYPIf4ImttMyofV8-yczI0IGjhFzE0Qk1Pvo84O9NyOufpPELeKY8l1yfZgSZEL0sUA6weqf1P_xzNKt6h96Rh1KpAx5iTGFtqOWJrJ8OP-ZdGJ-rA-xZQuTRtKbW_e60rC5kPgJnkhWCp1p7HTLgTGwpzzxztqL0ggURaWw8GJw8S41BcM5mFEA-a7ivMWPMLdMk6h5gr_Y8JTnLSRZFoZYHkY9bTcAFPc9gapyFlKkQDciYet_MHe2zfE7ycx0e8c0W7ISoYPHpXW_WxNekMiNdfx1eg_mEX_Y0Vjc70p-HtbnEGaBoZWJSo-cJtZdA4sNJaIuEnnVTtAdAtnoxJUOyr2jrKDAsSjHmVoeLARZO2_DuJKsEVNHzsNPMw4SrBq1DkG52Aof7KcskOfW2OyoZEUnIf23IRabKflUG_7dHGqYfLtkWRApWZYBy2eQILwsmXJ4xx27S7_02Jl0D8rX3vOQhjS7lTjrXvy8wU3biUFvbnqJuLj3ACLn7Hn1axe4dB7zIpYC1di63DDs1fQp79d2VaGRC-tVMKiQu_-yC6nCM4j8JSnpBNgvyhq3ilAr_iDHD6GkbxdBBChFbZl0KY-WkrcZpPp2g2m_beiQuzF-cM6RuaqF2W0TEjVT-OZ1ivObreXFVFWa9T5qFsTjV3E_SFlmrXdx6Kf8d0i9QeEwbzBIvvX3VNUdW1DJ95WylaUXFtljk_cqylri2j2WTPMTSV9fcM_a8UWoayUAuLzq8zgxd2Wvrm8uvGcaewAQT-yK-u5FsLFoKxaMkoakb2_1tjWfhn-MrbIoYOBIebUapUhADUJgGDTG7byEEQlbwNaa8rOZ5ZpdBrDXjtj1tsz6cqaEiCbtZWqei0myuMa0J6Z3FHoceSUgOACD_bHWSkzdW7LdKBKLxB25fKfD1hCLscE1skZ7nAEZWmrUzKz9yCiqAPfatQhHqnr3EchEB_dxbNXxCCT6IFxemhb9AsaQWjcTrlQJiuz8t4VMmu1slgEcrhweZ_PpxI_E74qvV9ljcFLcziW0BZaqepJMeBe2cLfVVki5R5kudQVuAcCEdkrwBDWAwZlO9aWTmiZ9-ggfl3F_63JzZXSM_NDc_2rJ1k65e-O-vyTzkrUwnSPnh4RPK6jeIul4SSLPgyZBwp0qHyTl4As8jpMe9Rbt0NRli6Z5eAr_IYdbicKo_pzqiupdxvu4u-jKt-F4KojU9avgWFAB01isHHO5Z1vzVJ6XDilWeah5DvHc1lkcPCXtbBGvykfFqJIBhthnFi0f_S4HV-IlOKwvKbfmL5GplD7H-DjUA5UdHjw8HoJa6uQxhbERy5dZlYqxN918aL147Afq4LuPoWUJULEsYULwUuA6HspemIKUltHacOAxZgP4OLVkB3zwssj8E6rMtU1puupHiL7J66fFaR3co71gzzlvl8R2Xi3xEQdpigxxCUAsZWMrSrITRBnKPKGF1CrFEhU6FP6bQAQ6UuhAdqLoJihwVnMTyY9fGUriwCkrQu5gK7ZQlnXyluV0J_5xWh4sOaPmwmeiUXMBPN0iGb7z24lyHaI3QXz3kGlJBnIFhseJTJo3ed66Z_LAf8I_hFC6s_sIioEmRgd4tm3Q1U05ETKrptnzo8Ac4AOTdJtbQv0uDQSkoefUbebu7x6L9Dns04VSvDPKLwlFQd81sl9DYmi9SPDxKT7S6gfG6WjJ9z35eNSR01QW1QIgAAhMU8UX4o7QQaXnUZxfZeYRMXlzu_xb4KSREvc_FeLJ-IvnPFgzryAgLi_Sipl__Eul0sjrREPYJE1GxBOIoURGa-Bsmc3yy8aWArxv_HGpbLzwjmH8TaMvB3P_4tkvT-5IjJlpe0UrR1ssqasUtwfH9yWv5-4i-EdJrJ_SJ4Hl1Vlj9zi9lEFr33Zs96kn8ZOkBHp0m7Sxr5xP-krSxQkROIpu0d02kIqQ3nob-kGbnAf0zkmH6pS3H4mG2Zksu8KcGvghx7XTl3qBOJ-ZY4vlev2cSMBfMmnbUHKvMlz4YsmTGR1JPntDguA8UJJ1UZCex4E_W3KcHwd4qvzqgMZNjdarIvGjlmnKmoL1RV7EqUsBkpk-jauY0SqoWsIIf4b6O9Y3fQRRJURLeIfGCTmlxl-df-4yjqUhxUXTaQsX2KlEJ3tZK-ZI9FdKMzqPip2oMFwf2XfLIyIuRA0NU5fDRIxqPUzYv6RQ-zhXXB1QscsAtyf1t-LsaVw__1Vfj6V3Ups4pNyiwMAZg1z9DaIHht--VfgCdqT4RLcaGI8eEDrjkOc9G_iLkLBCfOB7EZkcXT20Lc1Silxa--dZsaYp5Tni8dz6f5YWg=="}],
"role": "model"}, {"parts": [{"functionResponse": {"name": "read_website", "response":
{"result": "Content from https://www.example.com/ai-agent-market-report: This
article discusses key insights about the topic including market size ($50B),
growth rate (15% YoY), and major players in the industry."}}}], "role": "user"}],
"systemInstruction": {"parts": [{"text": "You are Research Analyst. An experienced
analyst skilled at gathering information and synthesizing findings into actionable
insights.\nYour personal goal is: Conduct thorough research and produce insightful
reports"}], "role": "user"}, "tools": [{"functionDeclarations": [{"description":
"Search the web for information on a given topic.\n\nArgs:\n query: The search
query to look up.\n\nReturns:\n Search results as a string.", "name": "web_search",
"parameters_json_schema": {"properties": {"query": {"title": "Query", "type":
"string"}}, "required": ["query"], "type": "object", "additionalProperties":
false}}, {"description": "Read and extract content from a website URL.\n\nArgs:\n url:
The URL of the website to read.\n\nReturns:\n The extracted content from
the website.", "name": "read_website", "parameters_json_schema": {"properties":
{"url": {"title": "Url", "type": "string"}}, "required": ["url"], "type": "object",
"additionalProperties": false}}, {"description": "Generate a structured report
based on research findings.\n\nArgs:\n title: The title of the report.\n findings:
The research findings to include.\n\nReturns:\n A formatted report string.",
"name": "generate_report", "parameters_json_schema": {"properties": {"title":
{"title": "Title", "type": "string"}, "findings": {"title": "Findings", "type":
"string"}}, "required": ["title", "findings"], "type": "object", "additionalProperties":
false}}]}], "generationConfig": {"stopSequences": ["\nObservation:"]}}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- '*/*'
accept-encoding:
- ACCEPT-ENCODING-XXX
connection:
- keep-alive
content-length:
- '7562'
content-type:
- application/json
host:
- generativelanguage.googleapis.com
x-goog-api-client:
- google-genai-sdk/1.49.0 gl-python/3.13.3
x-goog-api-key:
- X-GOOG-API-KEY-XXX
method: POST
uri: https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent
response:
body:
string: "{\n \"candidates\": [\n {\n \"content\": {\n \"parts\":
[\n {\n \"functionCall\": {\n \"name\": \"generate_report\",\n
\ \"args\": {\n \"findings\": \"The AI agent market
is currently valued at $50 billion and is experiencing a strong growth rate
of 15% year-over-year. Key players in the industry are contributing to this
expansion, driven by increasing adoption across various sectors.\",\n \"title\":
\"Current State of the AI Agent Market\"\n }\n },\n
\ \"thoughtSignature\": \"CuACAXLI2nzhub94iTgphfrBnaQV13Wv0+kRJj+l9jEDfxsyNg9eGnhxRfj4cued3Mgvv1CR9vkpfDV/V9x57TVWLNwPsVvsTgZKI0pwmV355sUCRKqAyfRBwDWhn0UAl9+sYNFCKUJCp8G5QBGfrar1lYLltX/z83d4O13Wn5Ugvxco9o4CdxnnPJOWkTPETRgXA/1HSdEiwGNqt0A3lKYkGRXQx+XyK54lDwmOBg+Yx4ZVgUxANXLK1z91NF/6cpmSVgoE2sL0VrIHISYs4XdUCj1fL3R7DpnZdrrZhmqnTjEgkeR1C3BR5GJeSUmg+kmqjPSPnP0NYiZo9H9SGL/ewGz3wg+GKcILAa4nD7/tfdqIMbAff8PJemwJE4ONT5zAmJ69/NFj0i1X01v04E8f2NdHewPSsKO5mptk5qzWxoW3G3uUhXccxk62EydTsESf6WRwolsmphaGgVE9AwuhzQ==\"\n
\ }\n ],\n \"role\": \"model\"\n },\n \"finishReason\":
\"STOP\",\n \"index\": 0,\n \"finishMessage\": \"Model generated
function call(s).\"\n }\n ],\n \"usageMetadata\": {\n \"promptTokenCount\":
557,\n \"candidatesTokenCount\": 73,\n \"totalTokenCount\": 700,\n \"cachedContentTokenCount\":
330,\n \"promptTokensDetails\": [\n {\n \"modality\": \"TEXT\",\n
\ \"tokenCount\": 557\n }\n ],\n \"cacheTokensDetails\":
[\n {\n \"modality\": \"TEXT\",\n \"tokenCount\": 330\n
\ }\n ],\n \"thoughtsTokenCount\": 70\n },\n \"modelVersion\":
\"gemini-2.5-flash\",\n \"responseId\": \"jkWCacGZMLrv_uMPibCHmQU\"\n}\n"
headers:
Alt-Svc:
- h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
Content-Type:
- application/json; charset=UTF-8
Date:
- Tue, 03 Feb 2026 18:59:26 GMT
Server:
- scaffolding on HTTPServer2
Server-Timing:
- gfet4t7; dur=1224
Transfer-Encoding:
- chunked
Vary:
- Origin
- X-Origin
- Referer
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
X-Frame-Options:
- X-FRAME-OPTIONS-XXX
X-XSS-Protection:
- '0'
status:
code: 200
message: OK
- request:
body: '{"contents": [{"parts": [{"text": "\nCurrent Task: Research the current
state of the AI agent market:\n1. Search for recent information about AI agents
and their market trends\n2. Read detailed content from a relevant industry source\n3.
Generate a brief report summarizing the key findings\n\nUse the available tools
for each step."}], "role": "user"}, {"parts": [{"functionCall": {"args": {"query":
"AI agent market trends 2023-2024"}, "name": "web_search"}, "thoughtSignature":
"CoYEAXLI2nzmyDix3_QA-tMOiUwpDVoA5-RJoRW7kw3okJaVYCa5Usx7eBn4xowP7oXNynS4NfawCYqboufBXjHinq13UTcYg0Y74qIrza4KuctliGmf8G7S4QoS0Y3gqCHQKsxTdShQOg8wirnr8Rdu1eyrrhWE0XKk0HPA0Ssj7zUVoJBqHPqwyvkFyXkMtpcmtq9qXmZYfMFuSKRQnYLVLllL_BpOIL3w7MuofpviO85bvYk9gX0vsDjYWS6EdVEfC9k2BWGjhHaILXT9A1iwNPdDAg33SOC-BlPrGox0ghCr5qEKnBMZhUszqaUCykczFCq-xMIA3xDGNbTjicWb53sL_PXBYLsNty1giW3nKFe8-8eRpUsHUx7oQ82m4AUxKqk99mZjaLp8bHk-rERjFZErcw_pe_3190K0WGHH5ecB4amJCzZtVrQJ1oAZhb7_P1VZ57xmt1z_c1pQgjuvnV-cWE9blh5o6mNNFbFuzJDIO2k8qrFeeDwlCF8OOrxo8F-z1evg4yjZ1-9TLCVFTmZ0S0PI54FS5afb0RdPol2_ISNw7H_dtnO4z6LhT2NmlYqYZr8qfVoUD21rmI08NFs-f_6JW5-7eSQbax76SW-6A2IqqPPyF66MCpqtEzC-hpzVsCBcIQyRQWsdm-RNAs50gmqF6W3CcTPryWkeS7w9ORqxdiU="}],
"role": "model"}, {"parts": [{"functionResponse": {"name": "web_search", "response":
{"result": "Search results for ''AI agent market trends 2023-2024'': Found 3
relevant articles about the topic including market analysis, competitor data,
and industry trends."}}}], "role": "user"}, {"parts": [{"functionCall": {"args":
{"query": "latest reports AI agent market analysis"}, "name": "web_search"},
"thoughtSignature": "CpwGAXLI2nybmVGffIN1k-T5M2HmQNlvybZRPoo_ysgNARa-9nrPdRoBZ9RC-Dee9KSk1o7O-IU9l0sWCTirQYcroEhXon-JIQUVTed_L0s__sBOR-hJnZWoaG5ucsfJQvovAQba2Wb7uViEkdySvHfRApF0atewbC-TCKZrxDAQ6Naby8nwUTauJPKlgsBsZVnlViRfIbF7pom1zvXD-d5htjMiuJr1nOuSH0EGQWC4TUiuJD23hgockzhmIpbU_bStn8PFIQNsySEzl6H5sZdlD4auwCMCD2Q-Ur05w1uLv7n8GoSZn5dkdXLR5R7dZ-kkX-xP4w841Ih2gc6rBKT5tSedN01AuJsK65NSfOXZBwakxs58WZXDQXnIQe4d2QThAX3nPdUmhvVI6sHX-ZdtQZIrhE7hRf9j_T_wvvrUao5VDv-mxXd9bcPEV2BzSXkvkAB1SbJ-5wN7Qb2j31lkiUu5uRnZOiVxL5iCS-8Z_jEl4NjpGithbcPoNpFIDOeiE_f8kf7tQJDX-YNquPbYZRJHvIfLalVQndVGNlZVN2jXT3Wwo8So3vmzIDFVjV-pj7tSRNK8hTITm6bfHS-XUZqJdm7eHCzhonyJ7_tl7LbsstPXoZU3ZN50tNpXYOK-NzgzU7iwd9SaHVXgzQRdujWgHuBmSiSd9qNHvdaNwgARVTnMj3VYpehgIuaYMzQmgM99TdC_zmzcqHa5VZSnHKHqMIVc9gjRvVwz4DUm6VLLnKnVYFClM6gofmUI6s9fThiR5EdfimaDTlRlzh6Df33jAbRA9rUTDH6uE-DjiopCvXuHjmQqK9Smyxt2vTao9H7AYIRN7yWmdVoaG0tUSL2XQ31wIW3cEyhz7ihQwFYKJnOkQ3_CiU6KV-4ldk9UY_vKWSgItVTTE6G8Di0iviiCAkmL59Uj7vnIp80-U9rDIK7WhxpAWlrDA6cGQT2LGAXQ2liXtLa31nXyfvCezhtSS8jSVm4SHaiU-INvYtpq2Q7nXPTFbjjZooyC5FePGqAH1T-sRYbR02jaa572_NuwFgBCObTfqO8G5A=="}],
"role": "model"}, {"parts": [{"functionResponse": {"name": "web_search", "response":
{"result": "Search results for ''latest reports AI agent market analysis'':
Found 3 relevant articles about the topic including market analysis, competitor
data, and industry trends."}}}], "role": "user"}, {"parts": [{"functionCall":
{"args": {"url": "https://www.example.com/ai-agent-market-report"}, "name":
"read_website"}, "thoughtSignature": "CooPAXLI2nzYy6ki8YJo4RDDzryK6qtkbIEWXCh0ZjpoRX2fgNghUrxXZUwrsrbrwEpccdvWCDpb5ZkwYMNuzMi4yRUsjfcebfd2VCwQgvWnWmitj3taAcLCUDIJX5pGt0L2O8V6ehWmrANQHGw6Qc_QVx5dMlSFeFKHtfc1M95CJz2BxZd3lnuKLCEu7LCCiqIDdd1o1y_EcGsl7OHai6WyQJg49Cvcww__Z-kfSVoAPGNedTYPIf4ImttMyofV8-yczI0IGjhFzE0Qk1Pvo84O9NyOufpPELeKY8l1yfZgSZEL0sUA6weqf1P_xzNKt6h96Rh1KpAx5iTGFtqOWJrJ8OP-ZdGJ-rA-xZQuTRtKbW_e60rC5kPgJnkhWCp1p7HTLgTGwpzzxztqL0ggURaWw8GJw8S41BcM5mFEA-a7ivMWPMLdMk6h5gr_Y8JTnLSRZFoZYHkY9bTcAFPc9gapyFlKkQDciYet_MHe2zfE7ycx0e8c0W7ISoYPHpXW_WxNekMiNdfx1eg_mEX_Y0Vjc70p-HtbnEGaBoZWJSo-cJtZdA4sNJaIuEnnVTtAdAtnoxJUOyr2jrKDAsSjHmVoeLARZO2_DuJKsEVNHzsNPMw4SrBq1DkG52Aof7KcskOfW2OyoZEUnIf23IRabKflUG_7dHGqYfLtkWRApWZYBy2eQILwsmXJ4xx27S7_02Jl0D8rX3vOQhjS7lTjrXvy8wU3biUFvbnqJuLj3ACLn7Hn1axe4dB7zIpYC1di63DDs1fQp79d2VaGRC-tVMKiQu_-yC6nCM4j8JSnpBNgvyhq3ilAr_iDHD6GkbxdBBChFbZl0KY-WkrcZpPp2g2m_beiQuzF-cM6RuaqF2W0TEjVT-OZ1ivObreXFVFWa9T5qFsTjV3E_SFlmrXdx6Kf8d0i9QeEwbzBIvvX3VNUdW1DJ95WylaUXFtljk_cqylri2j2WTPMTSV9fcM_a8UWoayUAuLzq8zgxd2Wvrm8uvGcaewAQT-yK-u5FsLFoKxaMkoakb2_1tjWfhn-MrbIoYOBIebUapUhADUJgGDTG7byEEQlbwNaa8rOZ5ZpdBrDXjtj1tsz6cqaEiCbtZWqei0myuMa0J6Z3FHoceSUgOACD_bHWSkzdW7LdKBKLxB25fKfD1hCLscE1skZ7nAEZWmrUzKz9yCiqAPfatQhHqnr3EchEB_dxbNXxCCT6IFxemhb9AsaQWjcTrlQJiuz8t4VMmu1slgEcrhweZ_PpxI_E74qvV9ljcFLcziW0BZaqepJMeBe2cLfVVki5R5kudQVuAcCEdkrwBDWAwZlO9aWTmiZ9-ggfl3F_63JzZXSM_NDc_2rJ1k65e-O-vyTzkrUwnSPnh4RPK6jeIul4SSLPgyZBwp0qHyTl4As8jpMe9Rbt0NRli6Z5eAr_IYdbicKo_pzqiupdxvu4u-jKt-F4KojU9avgWFAB01isHHO5Z1vzVJ6XDilWeah5DvHc1lkcPCXtbBGvykfFqJIBhthnFi0f_S4HV-IlOKwvKbfmL5GplD7H-DjUA5UdHjw8HoJa6uQxhbERy5dZlYqxN918aL147Afq4LuPoWUJULEsYULwUuA6HspemIKUltHacOAxZgP4OLVkB3zwssj8E6rMtU1puupHiL7J66fFaR3co71gzzlvl8R2Xi3xEQdpigxxCUAsZWMrSrITRBnKPKGF1CrFEhU6FP6bQAQ6UuhAdqLoJihwVnMTyY9fGUriwCkrQu5gK7ZQlnXyluV0J_5xWh4sOaPmwmeiUXMBPN0iGb7z24lyHaI3QXz3kGlJBnIFhseJTJo3ed66Z_LAf8I_hFC6s_sIioEmRgd4tm3Q1U05ETKrptnzo8Ac4AOTdJtbQv0uDQSkoefUbebu7x6L9Dns04VSvDPKLwlFQd81sl9DYmi9SPDxKT7S6gfG6WjJ9z35eNSR01QW1QIgAAhMU8UX4o7QQaXnUZxfZeYRMXlzu_xb4KSREvc_FeLJ-IvnPFgzryAgLi_Sipl__Eul0sjrREPYJE1GxBOIoURGa-Bsmc3yy8aWArxv_HGpbLzwjmH8TaMvB3P_4tkvT-5IjJlpe0UrR1ssqasUtwfH9yWv5-4i-EdJrJ_SJ4Hl1Vlj9zi9lEFr33Zs96kn8ZOkBHp0m7Sxr5xP-krSxQkROIpu0d02kIqQ3nob-kGbnAf0zkmH6pS3H4mG2Zksu8KcGvghx7XTl3qBOJ-ZY4vlev2cSMBfMmnbUHKvMlz4YsmTGR1JPntDguA8UJJ1UZCex4E_W3KcHwd4qvzqgMZNjdarIvGjlmnKmoL1RV7EqUsBkpk-jauY0SqoWsIIf4b6O9Y3fQRRJURLeIfGCTmlxl-df-4yjqUhxUXTaQsX2KlEJ3tZK-ZI9FdKMzqPip2oMFwf2XfLIyIuRA0NU5fDRIxqPUzYv6RQ-zhXXB1QscsAtyf1t-LsaVw__1Vfj6V3Ups4pNyiwMAZg1z9DaIHht--VfgCdqT4RLcaGI8eEDrjkOc9G_iLkLBCfOB7EZkcXT20Lc1Silxa--dZsaYp5Tni8dz6f5YWg=="}],
"role": "model"}, {"parts": [{"functionResponse": {"name": "read_website", "response":
{"result": "Content from https://www.example.com/ai-agent-market-report: This
article discusses key insights about the topic including market size ($50B),
growth rate (15% YoY), and major players in the industry."}}}], "role": "user"},
{"parts": [{"functionCall": {"args": {"findings": "The AI agent market is currently
valued at $50 billion and is experiencing a strong growth rate of 15% year-over-year.
Key players in the industry are contributing to this expansion, driven by increasing
adoption across various sectors.", "title": "Current State of the AI Agent Market"},
"name": "generate_report"}, "thoughtSignature": "CuACAXLI2nzhub94iTgphfrBnaQV13Wv0-kRJj-l9jEDfxsyNg9eGnhxRfj4cued3Mgvv1CR9vkpfDV_V9x57TVWLNwPsVvsTgZKI0pwmV355sUCRKqAyfRBwDWhn0UAl9-sYNFCKUJCp8G5QBGfrar1lYLltX_z83d4O13Wn5Ugvxco9o4CdxnnPJOWkTPETRgXA_1HSdEiwGNqt0A3lKYkGRXQx-XyK54lDwmOBg-Yx4ZVgUxANXLK1z91NF_6cpmSVgoE2sL0VrIHISYs4XdUCj1fL3R7DpnZdrrZhmqnTjEgkeR1C3BR5GJeSUmg-kmqjPSPnP0NYiZo9H9SGL_ewGz3wg-GKcILAa4nD7_tfdqIMbAff8PJemwJE4ONT5zAmJ69_NFj0i1X01v04E8f2NdHewPSsKO5mptk5qzWxoW3G3uUhXccxk62EydTsESf6WRwolsmphaGgVE9AwuhzQ=="}],
"role": "model"}, {"parts": [{"functionResponse": {"name": "generate_report",
"response": {"result": "# Current State of the AI Agent Market\n\n## Executive
Summary\nThe AI agent market is currently valued at $50 billion and is experiencing
a strong growth rate of 15% year-over-year. Key players in the industry are
contributing to this expansion, driven by increasing adoption across various
sectors.\n\n## Conclusion\nBased on the analysis, the market shows strong growth
potential."}}}], "role": "user"}], "systemInstruction": {"parts": [{"text":
"You are Research Analyst. An experienced analyst skilled at gathering information
and synthesizing findings into actionable insights.\nYour personal goal is:
Conduct thorough research and produce insightful reports"}], "role": "user"},
"tools": [{"functionDeclarations": [{"description": "Search the web for information
on a given topic.\n\nArgs:\n query: The search query to look up.\n\nReturns:\n Search
results as a string.", "name": "web_search", "parameters_json_schema": {"properties":
{"query": {"title": "Query", "type": "string"}}, "required": ["query"], "type":
"object", "additionalProperties": false}}, {"description": "Read and extract
content from a website URL.\n\nArgs:\n url: The URL of the website to read.\n\nReturns:\n The
extracted content from the website.", "name": "read_website", "parameters_json_schema":
{"properties": {"url": {"title": "Url", "type": "string"}}, "required": ["url"],
"type": "object", "additionalProperties": false}}, {"description": "Generate
a structured report based on research findings.\n\nArgs:\n title: The title
of the report.\n findings: The research findings to include.\n\nReturns:\n A
formatted report string.", "name": "generate_report", "parameters_json_schema":
{"properties": {"title": {"title": "Title", "type": "string"}, "findings": {"title":
"Findings", "type": "string"}}, "required": ["title", "findings"], "type": "object",
"additionalProperties": false}}]}], "generationConfig": {"stopSequences": ["\nObservation:"]}}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- '*/*'
accept-encoding:
- ACCEPT-ENCODING-XXX
connection:
- keep-alive
content-length:
- '8941'
content-type:
- application/json
host:
- generativelanguage.googleapis.com
x-goog-api-client:
- google-genai-sdk/1.49.0 gl-python/3.13.3
x-goog-api-key:
- X-GOOG-API-KEY-XXX
method: POST
uri: https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent
response:
body:
string: "{\n \"candidates\": [\n {\n \"content\": {\n \"parts\":
[\n {\n \"text\": \"The research on the current state
of the AI agent market has been completed. A report has been generated with
the following key findings:\\n\\n**Current State of the AI Agent Market**\\n\\n**Executive
Summary**\\nThe AI agent market is currently valued at $50 billion and is
experiencing a strong growth rate of 15% year-over-year. Key players in the
industry are contributing to this expansion, driven by increasing adoption
across various sectors.\\n\\n**Conclusion**\\nBased on the analysis, the market
shows strong growth potential.\",\n \"thoughtSignature\": \"CpIDAXLI2nwVVOjjKtAsnvvRhuJU79oCZksDIi1i7PcIr+FkXVHX8sS8kM0optXLnRQWDRKKxUDKA9C1myhIfnDfc3ef44xc4UaczwM80/TbYanden25qpZRA2kztBz9HiWEPyGjeX8M/8BGAj7mh3q6hwPtTFtmhFTzlw190YQoZLELqOyQzTSECt8roXPdWN1XhU/NbHg4x+H3IFSQ2HZKxbY/JC6tx5FYYh444tIT4798iVHI5HOUVb1pfdLfV45ju/DOD+pTONuqVcTX+jgusjoaH32pdu4Q19atg5BR6zanqwv93vkYPXx0hF4rI8FHtV9jrqwtjLqzXvh7LANtNpCvO3HG++lIoeRTy5RzfYQRkLkrfuLWW+xkGDYQh+CQ7jbeurx344pHBjzZTVDaSNTA0QMTYwDH7YUkxIsyw5Hv1F8tpVvjgoKqvJnar1d/EvrbiOwygpiEZOrmPEn/DKp4qPk2+hhFS4JpcnNGva9cFM22ObwHydIQdoXHOX3wci0nhshAZ0e8hd5u820gfrya\"\n
\ }\n ],\n \"role\": \"model\"\n },\n \"finishReason\":
\"STOP\",\n \"index\": 0\n }\n ],\n \"usageMetadata\": {\n \"promptTokenCount\":
727,\n \"candidatesTokenCount\": 108,\n \"totalTokenCount\": 910,\n
\ \"cachedContentTokenCount\": 375,\n \"promptTokensDetails\": [\n {\n
\ \"modality\": \"TEXT\",\n \"tokenCount\": 727\n }\n ],\n
\ \"cacheTokensDetails\": [\n {\n \"modality\": \"TEXT\",\n
\ \"tokenCount\": 375\n }\n ],\n \"thoughtsTokenCount\":
75\n },\n \"modelVersion\": \"gemini-2.5-flash\",\n \"responseId\": \"kUWCabvgOafg_uMP06Ga0QI\"\n}\n"
headers:
Alt-Svc:
- h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
Content-Type:
- application/json; charset=UTF-8
Date:
- Tue, 03 Feb 2026 18:59:30 GMT
Server:
- scaffolding on HTTPServer2
Server-Timing:
- gfet4t7; dur=3125
Transfer-Encoding:
- chunked
Vary:
- Origin
- X-Origin
- Referer
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
X-Frame-Options:
- X-FRAME-OPTIONS-XXX
X-XSS-Protection:
- '0'
status:
code: 200
message: OK
version: 1


@@ -0,0 +1,708 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are a strategic planning assistant.
Create minimal, effective execution plans. Prefer fewer steps over more."},{"role":"user","content":"Create
a focused execution plan for the following task:\n\n## Task\nResearch the current
state of the AI agent market:\n1. Search for recent information about AI agents
and their market trends\n2. Read detailed content from a relevant industry source\n3.
Generate a brief report summarizing the key findings\n\nUse the available tools
for each step.\n\n## Expected Output\nComplete the task successfully\n\n## Available
Tools\nweb_search, read_website, generate_report\n\n## Instructions\nCreate
ONLY the essential steps needed to complete this task. Use the MINIMUM number
of steps required - do NOT pad your plan with unnecessary steps. Most tasks
need only 2-5 steps.\n\nFor each step:\n- State the specific action to take\n-
Specify which tool to use (if any)\n- Note dependencies on previous steps if
this step requires their output\n- If a step involves multiple items (e.g.,
research 3 competitors), note this explicitly\n\nDo NOT include:\n- Setup or
preparation steps that are obvious\n- Verification steps unless critical\n-
Documentation or cleanup steps unless explicitly required\n- Generic steps like
\"review results\" or \"finalize output\"\n\nAfter your plan, state:\n- \"READY:
I am ready to execute the task.\" if the plan is complete\n- \"NOT READY: I
need to refine my plan because [reason].\" if you need more thinking"}],"model":"gpt-4o","tool_choice":"auto","tools":[{"type":"function","function":{"name":"create_reasoning_plan","description":"Create
or refine a reasoning plan for a task with structured steps","strict":true,"parameters":{"type":"object","properties":{"plan":{"type":"string","description":"A
brief summary of the overall plan."},"steps":{"type":"array","description":"List
of discrete steps to execute the plan","items":{"type":"object","properties":{"step_number":{"type":"integer","description":"Step
number (1-based)"},"description":{"type":"string","description":"What to do
in this step"},"tool_to_use":{"type":["string","null"],"description":"Tool to
use for this step, or null if no tool needed"},"depends_on":{"type":"array","items":{"type":"integer"},"description":"Step
numbers this step depends on (empty array if none)"}},"required":["step_number","description","tool_to_use","depends_on"],"additionalProperties":false}},"ready":{"type":"boolean","description":"Whether
the agent is ready to execute the task."}},"required":["plan","steps","ready"],"additionalProperties":false}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '2619'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D5Fu3HzCCoZJXtY9WqBmBv4QA4PS8\",\n \"object\":
\"chat.completion\",\n \"created\": 1770145143,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_bVEiQHpiVX9FEfuWVwTe8XGj\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"create_reasoning_plan\",\n
\ \"arguments\": \"{\\\"plan\\\":\\\"Research the current state
of the AI agent market by gathering recent market trend data, reading in-depth
content from a reliable industry source, and generating a concise report.\\\",\\\"steps\\\":[{\\\"step_number\\\":1,\\\"description\\\":\\\"Search
for recent information about AI agents and their market trends.\\\",\\\"tool_to_use\\\":\\\"web_search\\\",\\\"depends_on\\\":[]},{\\\"step_number\\\":2,\\\"description\\\":\\\"Read
detailed content from a relevant industry source found during the search.\\\",\\\"tool_to_use\\\":\\\"read_website\\\",\\\"depends_on\\\":[1]},{\\\"step_number\\\":3,\\\"description\\\":\\\"Generate
a brief report summarizing the key findings from the gathered information.\\\",\\\"tool_to_use\\\":\\\"generate_report\\\",\\\"depends_on\\\":[1,2]}],\\\"ready\\\":true}\"\n
\ }\n }\n ],\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"tool_calls\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 480,\n \"completion_tokens\":
153,\n \"total_tokens\": 633,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_fa7f5b168b\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 18:59:05 GMT
Server:
- cloudflare
Set-Cookie:
- SET-COOKIE-XXX
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '2629'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are Research Analyst. An experienced
analyst skilled at gathering information and synthesizing findings into actionable
insights.\nYour personal goal is: Conduct thorough research and produce insightful
reports"},{"role":"user","content":"\nCurrent Task: Research the current state
of the AI agent market:\n1. Search for recent information about AI agents and
their market trends\n2. Read detailed content from a relevant industry source\n3.
Generate a brief report summarizing the key findings\n\nUse the available tools
for each step."}],"model":"gpt-4o","tool_choice":"auto","tools":[{"type":"function","function":{"name":"web_search","description":"Search
the web for information on a given topic.\n\nArgs:\n query: The search query
to look up.\n\nReturns:\n Search results as a string.","strict":true,"parameters":{"properties":{"query":{"title":"Query","type":"string"}},"required":["query"],"type":"object","additionalProperties":false}}},{"type":"function","function":{"name":"read_website","description":"Read
and extract content from a website URL.\n\nArgs:\n url: The URL of the website
to read.\n\nReturns:\n The extracted content from the website.","strict":true,"parameters":{"properties":{"url":{"title":"Url","type":"string"}},"required":["url"],"type":"object","additionalProperties":false}}},{"type":"function","function":{"name":"generate_report","description":"Generate
a structured report based on research findings.\n\nArgs:\n title: The title
of the report.\n findings: The research findings to include.\n\nReturns:\n A
formatted report string.","strict":true,"parameters":{"properties":{"title":{"title":"Title","type":"string"},"findings":{"title":"Findings","type":"string"}},"required":["title","findings"],"type":"object","additionalProperties":false}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '1849'
content-type:
- application/json
cookie:
- COOKIE-XXX
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D5Fu6H5Oz7CA6xtmPwoBDIAr59nyJ\",\n \"object\":
\"chat.completion\",\n \"created\": 1770145146,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_QlnPEA94TbaFA83eRDhOHXRY\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"web_search\",\n
\ \"arguments\": \"{\\\"query\\\":\\\"current state of AI agent
market 2023\\\"}\"\n }\n }\n ],\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"tool_calls\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
267,\n \"completion_tokens\": 22,\n \"total_tokens\": 289,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_fa7f5b168b\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 18:59:07 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '752'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are Research Analyst. An experienced
analyst skilled at gathering information and synthesizing findings into actionable
insights.\nYour personal goal is: Conduct thorough research and produce insightful
reports"},{"role":"user","content":"\nCurrent Task: Research the current state
of the AI agent market:\n1. Search for recent information about AI agents and
their market trends\n2. Read detailed content from a relevant industry source\n3.
Generate a brief report summarizing the key findings\n\nUse the available tools
for each step."},{"role":"assistant","content":null,"tool_calls":[{"id":"call_QlnPEA94TbaFA83eRDhOHXRY","type":"function","function":{"name":"web_search","arguments":"{\"query\":\"current
state of AI agent market 2023\"}"}}]},{"role":"tool","tool_call_id":"call_QlnPEA94TbaFA83eRDhOHXRY","name":"web_search","content":"Search
results for ''current state of AI agent market 2023'': Found 3 relevant articles
about the topic including market analysis, competitor data, and industry trends."}],"model":"gpt-4o","tool_choice":"auto","tools":[{"type":"function","function":{"name":"web_search","description":"Search
the web for information on a given topic.\n\nArgs:\n query: The search query
to look up.\n\nReturns:\n Search results as a string.","strict":true,"parameters":{"properties":{"query":{"title":"Query","type":"string"}},"required":["query"],"type":"object","additionalProperties":false}}},{"type":"function","function":{"name":"read_website","description":"Read
and extract content from a website URL.\n\nArgs:\n url: The URL of the website
to read.\n\nReturns:\n The extracted content from the website.","strict":true,"parameters":{"properties":{"url":{"title":"Url","type":"string"}},"required":["url"],"type":"object","additionalProperties":false}}},{"type":"function","function":{"name":"generate_report","description":"Generate
a structured report based on research findings.\n\nArgs:\n title: The title
of the report.\n findings: The research findings to include.\n\nReturns:\n A
formatted report string.","strict":true,"parameters":{"properties":{"title":{"title":"Title","type":"string"},"findings":{"title":"Findings","type":"string"}},"required":["title","findings"],"type":"object","additionalProperties":false}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '2320'
content-type:
- application/json
cookie:
- COOKIE-XXX
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D5Fu7QFl2h9pGJ0uhX6g4Fi4MMzMX\",\n \"object\":
\"chat.completion\",\n \"created\": 1770145147,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_sOaxpAdq5dvpRhUJMAct2oNP\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"read_website\",\n
\ \"arguments\": \"{\\\"url\\\": \\\"https://www.example.com/ai-agent-market-analysis-2023\\\"}\"\n
\ }\n },\n {\n \"id\": \"call_1GRSbggp4SYHg5WqAUQx5Dce\",\n
\ \"type\": \"function\",\n \"function\": {\n \"name\":
\"read_website\",\n \"arguments\": \"{\\\"url\\\": \\\"https://www.example.com/ai-agent-competitor-data-2023\\\"}\"\n
\ }\n },\n {\n \"id\": \"call_43s9ebATowN3hA5piPjL2z5N\",\n
\ \"type\": \"function\",\n \"function\": {\n \"name\":
\"read_website\",\n \"arguments\": \"{\\\"url\\\": \\\"https://www.example.com/ai-agent-industry-trends-2023\\\"}\"\n
\ }\n }\n ],\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"tool_calls\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 330,\n \"completion_tokens\":
101,\n \"total_tokens\": 431,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_fa7f5b168b\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 18:59:09 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '1885'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are Research Analyst. An experienced
analyst skilled at gathering information and synthesizing findings into actionable
insights.\nYour personal goal is: Conduct thorough research and produce insightful
reports"},{"role":"user","content":"\nCurrent Task: Research the current state
of the AI agent market:\n1. Search for recent information about AI agents and
their market trends\n2. Read detailed content from a relevant industry source\n3.
Generate a brief report summarizing the key findings\n\nUse the available tools
for each step."},{"role":"assistant","content":null,"tool_calls":[{"id":"call_QlnPEA94TbaFA83eRDhOHXRY","type":"function","function":{"name":"web_search","arguments":"{\"query\":\"current
state of AI agent market 2023\"}"}}]},{"role":"tool","tool_call_id":"call_QlnPEA94TbaFA83eRDhOHXRY","name":"web_search","content":"Search
results for ''current state of AI agent market 2023'': Found 3 relevant articles
about the topic including market analysis, competitor data, and industry trends."},{"role":"assistant","content":null,"tool_calls":[{"id":"call_sOaxpAdq5dvpRhUJMAct2oNP","type":"function","function":{"name":"read_website","arguments":"{\"url\":
\"https://www.example.com/ai-agent-market-analysis-2023\"}"}},{"id":"call_1GRSbggp4SYHg5WqAUQx5Dce","type":"function","function":{"name":"read_website","arguments":"{\"url\":
\"https://www.example.com/ai-agent-competitor-data-2023\"}"}},{"id":"call_43s9ebATowN3hA5piPjL2z5N","type":"function","function":{"name":"read_website","arguments":"{\"url\":
\"https://www.example.com/ai-agent-industry-trends-2023\"}"}}]},{"role":"tool","tool_call_id":"call_sOaxpAdq5dvpRhUJMAct2oNP","name":"read_website","content":"Content
from https://www.example.com/ai-agent-market-analysis-2023: This article discusses
key insights about the topic including market size ($50B), growth rate (15%
YoY), and major players in the industry."},{"role":"tool","tool_call_id":"call_1GRSbggp4SYHg5WqAUQx5Dce","name":"read_website","content":"Content
from https://www.example.com/ai-agent-competitor-data-2023: This article discusses
key insights about the topic including market size ($50B), growth rate (15%
YoY), and major players in the industry."},{"role":"tool","tool_call_id":"call_43s9ebATowN3hA5piPjL2z5N","name":"read_website","content":"Content
from https://www.example.com/ai-agent-industry-trends-2023: This article discusses
key insights about the topic including market size ($50B), growth rate (15%
YoY), and major players in the industry."}],"model":"gpt-4o","tool_choice":"auto","tools":[{"type":"function","function":{"name":"web_search","description":"Search
the web for information on a given topic.\n\nArgs:\n query: The search query
to look up.\n\nReturns:\n Search results as a string.","strict":true,"parameters":{"properties":{"query":{"title":"Query","type":"string"}},"required":["query"],"type":"object","additionalProperties":false}}},{"type":"function","function":{"name":"read_website","description":"Read
and extract content from a website URL.\n\nArgs:\n url: The URL of the website
to read.\n\nReturns:\n The extracted content from the website.","strict":true,"parameters":{"properties":{"url":{"title":"Url","type":"string"}},"required":["url"],"type":"object","additionalProperties":false}}},{"type":"function","function":{"name":"generate_report","description":"Generate
a structured report based on research findings.\n\nArgs:\n title: The title
of the report.\n findings: The research findings to include.\n\nReturns:\n A
formatted report string.","strict":true,"parameters":{"properties":{"title":{"title":"Title","type":"string"},"findings":{"title":"Findings","type":"string"}},"required":["title","findings"],"type":"object","additionalProperties":false}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '3811'
content-type:
- application/json
cookie:
- COOKIE-XXX
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D5Fu95gX9R1zxpiJUa1wSOzIGA9CL\",\n \"object\":
\"chat.completion\",\n \"created\": 1770145149,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_QjEKbvT6OnKkCOTvwTu0TmAK\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"generate_report\",\n
\ \"arguments\": \"{\\\"title\\\":\\\"Current State of the AI
Agent Market 2023\\\",\\\"findings\\\":\\\"1. Market Size: The AI agent market
is currently valued at $50 billion.\\\\n2. Growth Rate: The industry is experiencing
a growth rate of 15% year-over-year.\\\\n3. Major Players: Significant companies
in this space include tech giants and specialized AI startups.\\\\n4. Market
Trends: The demand for AI agents is being driven by improvements in machine
learning algorithms and increasing adoption in customer service and automation
processes.\\\\n5. Competitive Landscape: The market is competitive with ongoing
innovation and investment in developing more advanced AI capabilities.\\\\n6.
Future Prospects: Continued growth is expected as businesses further integrate
AI agents into their operations for efficiency gains and customer engagement.\\\"}\"\n
\ }\n }\n ],\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"tool_calls\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 587,\n \"completion_tokens\":
163,\n \"total_tokens\": 750,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_fa7f5b168b\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 18:59:12 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '3444'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are Research Analyst. An experienced
analyst skilled at gathering information and synthesizing findings into actionable
insights.\nYour personal goal is: Conduct thorough research and produce insightful
reports"},{"role":"user","content":"\nCurrent Task: Research the current state
of the AI agent market:\n1. Search for recent information about AI agents and
their market trends\n2. Read detailed content from a relevant industry source\n3.
Generate a brief report summarizing the key findings\n\nUse the available tools
for each step."},{"role":"assistant","content":null,"tool_calls":[{"id":"call_QlnPEA94TbaFA83eRDhOHXRY","type":"function","function":{"name":"web_search","arguments":"{\"query\":\"current
state of AI agent market 2023\"}"}}]},{"role":"tool","tool_call_id":"call_QlnPEA94TbaFA83eRDhOHXRY","name":"web_search","content":"Search
results for ''current state of AI agent market 2023'': Found 3 relevant articles
about the topic including market analysis, competitor data, and industry trends."},{"role":"assistant","content":null,"tool_calls":[{"id":"call_sOaxpAdq5dvpRhUJMAct2oNP","type":"function","function":{"name":"read_website","arguments":"{\"url\":
\"https://www.example.com/ai-agent-market-analysis-2023\"}"}},{"id":"call_1GRSbggp4SYHg5WqAUQx5Dce","type":"function","function":{"name":"read_website","arguments":"{\"url\":
\"https://www.example.com/ai-agent-competitor-data-2023\"}"}},{"id":"call_43s9ebATowN3hA5piPjL2z5N","type":"function","function":{"name":"read_website","arguments":"{\"url\":
\"https://www.example.com/ai-agent-industry-trends-2023\"}"}}]},{"role":"tool","tool_call_id":"call_sOaxpAdq5dvpRhUJMAct2oNP","name":"read_website","content":"Content
from https://www.example.com/ai-agent-market-analysis-2023: This article discusses
key insights about the topic including market size ($50B), growth rate (15%
YoY), and major players in the industry."},{"role":"tool","tool_call_id":"call_1GRSbggp4SYHg5WqAUQx5Dce","name":"read_website","content":"Content
from https://www.example.com/ai-agent-competitor-data-2023: This article discusses
key insights about the topic including market size ($50B), growth rate (15%
YoY), and major players in the industry."},{"role":"tool","tool_call_id":"call_43s9ebATowN3hA5piPjL2z5N","name":"read_website","content":"Content
from https://www.example.com/ai-agent-industry-trends-2023: This article discusses
key insights about the topic including market size ($50B), growth rate (15%
YoY), and major players in the industry."},{"role":"assistant","content":null,"tool_calls":[{"id":"call_QjEKbvT6OnKkCOTvwTu0TmAK","type":"function","function":{"name":"generate_report","arguments":"{\"title\":\"Current
State of the AI Agent Market 2023\",\"findings\":\"1. Market Size: The AI agent
market is currently valued at $50 billion.\\n2. Growth Rate: The industry is
experiencing a growth rate of 15% year-over-year.\\n3. Major Players: Significant
companies in this space include tech giants and specialized AI startups.\\n4.
Market Trends: The demand for AI agents is being driven by improvements in machine
learning algorithms and increasing adoption in customer service and automation
processes.\\n5. Competitive Landscape: The market is competitive with ongoing
innovation and investment in developing more advanced AI capabilities.\\n6.
Future Prospects: Continued growth is expected as businesses further integrate
AI agents into their operations for efficiency gains and customer engagement.\"}"}}]},{"role":"tool","tool_call_id":"call_QjEKbvT6OnKkCOTvwTu0TmAK","name":"generate_report","content":"#
Current State of the AI Agent Market 2023\n\n## Executive Summary\n1. Market
Size: The AI agent market is currently valued at $50 billion.\n2. Growth Rate:
The industry is experiencing a growth rate of 15% year-over-year.\n3. Major
Players: Significant companies in this space include tech giants and specialized
AI startups.\n4. Market Trends: The demand for AI agents is being driven by
improvements in machine learning algorithms and increasing adoption in customer
service and automation processes.\n5. Competitive Landscape: The market is competitive
with ongoing innovation and investment in developing more advanced AI capabilities.\n6.
Future Prospects: Continued growth is expected as businesses further integrate
AI agents into their operations for efficiency gains and customer engagement.\n\n##
Conclusion\nBased on the analysis, the market shows strong growth potential."}],"model":"gpt-4o","tool_choice":"auto","tools":[{"type":"function","function":{"name":"web_search","description":"Search
the web for information on a given topic.\n\nArgs:\n query: The search query
to look up.\n\nReturns:\n Search results as a string.","strict":true,"parameters":{"properties":{"query":{"title":"Query","type":"string"}},"required":["query"],"type":"object","additionalProperties":false}}},{"type":"function","function":{"name":"read_website","description":"Read
and extract content from a website URL.\n\nArgs:\n url: The URL of the website
to read.\n\nReturns:\n The extracted content from the website.","strict":true,"parameters":{"properties":{"url":{"title":"Url","type":"string"}},"required":["url"],"type":"object","additionalProperties":false}}},{"type":"function","function":{"name":"generate_report","description":"Generate
a structured report based on research findings.\n\nArgs:\n title: The title
of the report.\n findings: The research findings to include.\n\nReturns:\n A
formatted report string.","strict":true,"parameters":{"properties":{"title":{"title":"Title","type":"string"},"findings":{"title":"Findings","type":"string"}},"required":["title","findings"],"type":"object","additionalProperties":false}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '5771'
content-type:
- application/json
cookie:
- COOKIE-XXX
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D5FuCTJhAII96iBV05ECT71cV6QmJ\",\n \"object\":
\"chat.completion\",\n \"created\": 1770145152,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"The research on the current state of
the AI agent market indicates a robust and rapidly growing industry. Here\u2019s
a summary of the findings:\\n\\n1. **Market Size**: The AI agent market is
currently valued at $50 billion.\\n2. **Growth Rate**: It is experiencing
a significant growth rate of 15% year-over-year.\\n3. **Major Players**: The
market is dominated by tech giants and specialized AI startups.\\n4. **Market
Trends**: The increasing adoption of machine learning algorithms and the integration
of AI in customer service and automation are key drivers.\\n5. **Competitive
Landscape**: There is substantial competition and continuous innovation in
AI capabilities.\\n6. **Future Prospects**: The sector is expected to keep
growing as businesses increasingly use AI agents for efficiency and improved
customer engagement.\\n\\nThis analysis highlights significant opportunities
in the AI agent sector, underlining its importance in future technological
advancements.\",\n \"refusal\": null,\n \"annotations\": []\n
\ },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n
\ ],\n \"usage\": {\n \"prompt_tokens\": 920,\n \"completion_tokens\":
182,\n \"total_tokens\": 1102,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_fa7f5b168b\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 03 Feb 2026 18:59:16 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '3456'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1
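The cassette above captures the plan-then-execute flow: the first interaction asks gpt-4o to call create_reasoning_plan against a strict JSON schema (plan, steps, ready), and the later interactions execute those steps with web_search, read_website, and generate_report. As a rough sketch only, the structured arguments of that planning call could be modeled as below; the class names are illustrative and crewAI's actual planning types may be organized differently.

from pydantic import BaseModel, Field


class PlannedStep(BaseModel):
    """Mirrors one entry of the "steps" array in the recorded tool schema."""

    step_number: int
    description: str
    tool_to_use: str | None = None
    depends_on: list[int] = Field(default_factory=list)


class ReasoningPlan(BaseModel):
    """Mirrors the arguments of the recorded create_reasoning_plan call."""

    plan: str
    steps: list[PlannedStep]
    ready: bool


# The tool call in the cassette carries its arguments as a JSON string,
# so it parses directly into the model (shortened example payload):
raw_arguments = (
    '{"plan": "Research the AI agent market.", "ready": true, '
    '"steps": [{"step_number": 1, "description": "Search for market trends", '
    '"tool_to_use": "web_search", "depends_on": []}]}'
)
plan = ReasoningPlan.model_validate_json(raw_arguments)
assert plan.ready and plan.steps[0].tool_to_use == "web_search"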


@@ -958,34 +958,6 @@ def test_azure_endpoint_detection_flags():
assert llm_other.is_azure_openai_endpoint == False
def test_azure_endpoint_detection_ignores_spoofed_urls():
"""
Test that endpoint detection does not trust spoofed host/path substrings
"""
with patch.dict(os.environ, {
"AZURE_API_KEY": "test-key",
"AZURE_ENDPOINT": (
"https://evil.example.com/?redirect="
"https://test.openai.azure.com/openai/deployments/gpt-4"
),
}):
llm_query_spoof = LLM(model="azure/gpt-4")
assert llm_query_spoof.is_azure_openai_endpoint == False
assert "model" in llm_query_spoof._prepare_completion_params(
messages=[{"role": "user", "content": "test"}]
)
with patch.dict(os.environ, {
"AZURE_API_KEY": "test-key",
"AZURE_ENDPOINT": "https://test.openai.azure.com.evil/openai/deployments/gpt-4",
}):
llm_host_spoof = LLM(model="azure/gpt-4")
assert llm_host_spoof.is_azure_openai_endpoint == False
assert "model" in llm_host_spoof._prepare_completion_params(
messages=[{"role": "user", "content": "test"}]
)
def test_azure_improved_error_messages():
"""
Test that improved error messages are provided for common HTTP errors


@@ -0,0 +1,389 @@
"""Tests for planning types (PlanStep, TodoItem, TodoList)."""
import pytest
from uuid import UUID
from crewai.utilities.planning_types import (
PlanStep,
TodoItem,
TodoList,
TodoStatus,
)
class TestPlanStep:
"""Tests for the PlanStep model."""
def test_plan_step_with_required_fields(self):
"""Test PlanStep creation with only required fields."""
step = PlanStep(
step_number=1,
description="Research the topic",
)
assert step.step_number == 1
assert step.description == "Research the topic"
assert step.tool_to_use is None
assert step.depends_on == []
def test_plan_step_with_all_fields(self):
"""Test PlanStep creation with all fields."""
step = PlanStep(
step_number=2,
description="Search for information",
tool_to_use="search_tool",
depends_on=[1],
)
assert step.step_number == 2
assert step.description == "Search for information"
assert step.tool_to_use == "search_tool"
assert step.depends_on == [1]
def test_plan_step_with_multiple_dependencies(self):
"""Test PlanStep with multiple dependencies."""
step = PlanStep(
step_number=4,
description="Synthesize results",
depends_on=[1, 2, 3],
)
assert step.depends_on == [1, 2, 3]
def test_plan_step_requires_step_number(self):
"""Test that step_number is required."""
with pytest.raises(ValueError):
PlanStep(description="Missing step number")
def test_plan_step_requires_description(self):
"""Test that description is required."""
with pytest.raises(ValueError):
PlanStep(step_number=1)
def test_plan_step_serialization(self):
"""Test PlanStep can be serialized to dict."""
step = PlanStep(
step_number=1,
description="Test step",
tool_to_use="test_tool",
depends_on=[],
)
data = step.model_dump()
assert data["step_number"] == 1
assert data["description"] == "Test step"
assert data["tool_to_use"] == "test_tool"
assert data["depends_on"] == []
class TestTodoItem:
"""Tests for the TodoItem model."""
def test_todo_item_with_required_fields(self):
"""Test TodoItem creation with only required fields."""
todo = TodoItem(
step_number=1,
description="First task",
)
assert todo.step_number == 1
assert todo.description == "First task"
assert todo.status == "pending"
assert todo.tool_to_use is None
assert todo.depends_on == []
assert todo.result is None
# ID should be auto-generated
assert todo.id is not None
# Verify it's a valid UUID
UUID(todo.id)
def test_todo_item_with_all_fields(self):
"""Test TodoItem creation with all fields."""
todo = TodoItem(
id="custom-id-123",
step_number=2,
description="Second task",
tool_to_use="search_tool",
status="running",
depends_on=[1],
result="Task completed",
)
assert todo.id == "custom-id-123"
assert todo.step_number == 2
assert todo.description == "Second task"
assert todo.tool_to_use == "search_tool"
assert todo.status == "running"
assert todo.depends_on == [1]
assert todo.result == "Task completed"
def test_todo_item_status_values(self):
"""Test all valid status values."""
for status in ["pending", "running", "completed"]:
todo = TodoItem(
step_number=1,
description="Test",
status=status,
)
assert todo.status == status
def test_todo_item_auto_generates_unique_ids(self):
"""Test that each TodoItem gets a unique auto-generated ID."""
todo1 = TodoItem(step_number=1, description="Task 1")
todo2 = TodoItem(step_number=2, description="Task 2")
assert todo1.id != todo2.id
def test_todo_item_serialization(self):
"""Test TodoItem can be serialized to dict."""
todo = TodoItem(
step_number=1,
description="Test task",
status="pending",
)
data = todo.model_dump()
assert "id" in data
assert data["step_number"] == 1
assert data["description"] == "Test task"
assert data["status"] == "pending"
class TestTodoList:
"""Tests for the TodoList model."""
@pytest.fixture
def empty_todo_list(self):
"""Create an empty TodoList."""
return TodoList()
@pytest.fixture
def sample_todo_list(self):
"""Create a TodoList with sample items."""
return TodoList(
items=[
TodoItem(step_number=1, description="Step 1", status="completed"),
TodoItem(step_number=2, description="Step 2", status="running"),
TodoItem(step_number=3, description="Step 3", status="pending"),
TodoItem(step_number=4, description="Step 4", status="pending"),
]
)
def test_empty_todo_list(self, empty_todo_list):
"""Test empty TodoList properties."""
assert empty_todo_list.items == []
assert empty_todo_list.current_todo is None
assert empty_todo_list.next_pending is None
assert empty_todo_list.is_complete is False
assert empty_todo_list.pending_count == 0
assert empty_todo_list.completed_count == 0
def test_current_todo_property(self, sample_todo_list):
"""Test current_todo returns the running item."""
current = sample_todo_list.current_todo
assert current is not None
assert current.step_number == 2
assert current.status == "running"
def test_current_todo_returns_none_when_no_running(self):
"""Test current_todo returns None when no running items."""
todo_list = TodoList(
items=[
TodoItem(step_number=1, description="Step 1", status="completed"),
TodoItem(step_number=2, description="Step 2", status="pending"),
]
)
assert todo_list.current_todo is None
def test_next_pending_property(self, sample_todo_list):
"""Test next_pending returns the first pending item."""
next_item = sample_todo_list.next_pending
assert next_item is not None
assert next_item.step_number == 3
assert next_item.status == "pending"
def test_next_pending_returns_none_when_no_pending(self):
"""Test next_pending returns None when no pending items."""
todo_list = TodoList(
items=[
TodoItem(step_number=1, description="Step 1", status="completed"),
TodoItem(step_number=2, description="Step 2", status="completed"),
]
)
assert todo_list.next_pending is None
def test_is_complete_property_when_complete(self):
"""Test is_complete returns True when all items completed."""
todo_list = TodoList(
items=[
TodoItem(step_number=1, description="Step 1", status="completed"),
TodoItem(step_number=2, description="Step 2", status="completed"),
]
)
assert todo_list.is_complete is True
def test_is_complete_property_when_not_complete(self, sample_todo_list):
"""Test is_complete returns False when items are pending."""
assert sample_todo_list.is_complete is False
def test_is_complete_false_for_empty_list(self, empty_todo_list):
"""Test is_complete returns False for empty list."""
assert empty_todo_list.is_complete is False
def test_pending_count(self, sample_todo_list):
"""Test pending_count returns correct count."""
assert sample_todo_list.pending_count == 2
def test_completed_count(self, sample_todo_list):
"""Test completed_count returns correct count."""
assert sample_todo_list.completed_count == 1
def test_get_by_step_number(self, sample_todo_list):
"""Test get_by_step_number returns correct item."""
item = sample_todo_list.get_by_step_number(3)
assert item is not None
assert item.step_number == 3
assert item.description == "Step 3"
def test_get_by_step_number_returns_none_for_missing(self, sample_todo_list):
"""Test get_by_step_number returns None for non-existent step."""
item = sample_todo_list.get_by_step_number(99)
assert item is None
def test_mark_running(self, sample_todo_list):
"""Test mark_running changes status correctly."""
sample_todo_list.mark_running(3)
item = sample_todo_list.get_by_step_number(3)
assert item.status == "running"
def test_mark_running_does_nothing_for_missing(self, sample_todo_list):
"""Test mark_running handles missing step gracefully."""
# Should not raise an error
sample_todo_list.mark_running(99)
def test_mark_completed(self, sample_todo_list):
"""Test mark_completed changes status correctly."""
sample_todo_list.mark_completed(3)
item = sample_todo_list.get_by_step_number(3)
assert item.status == "completed"
assert item.result is None
def test_mark_completed_with_result(self, sample_todo_list):
"""Test mark_completed with result."""
sample_todo_list.mark_completed(3, result="Task output")
item = sample_todo_list.get_by_step_number(3)
assert item.status == "completed"
assert item.result == "Task output"
def test_mark_completed_does_nothing_for_missing(self, sample_todo_list):
"""Test mark_completed handles missing step gracefully."""
# Should not raise an error
sample_todo_list.mark_completed(99, result="Some result")
def test_todo_list_workflow(self):
"""Test a complete workflow through TodoList."""
# Create a todo list with 3 items
todo_list = TodoList(
items=[
TodoItem(
step_number=1,
description="Research",
tool_to_use="search_tool",
),
TodoItem(
step_number=2,
description="Analyze",
depends_on=[1],
),
TodoItem(
step_number=3,
description="Report",
depends_on=[1, 2],
),
]
)
# Initial state
assert todo_list.pending_count == 3
assert todo_list.completed_count == 0
assert todo_list.is_complete is False
# Start first task
todo_list.mark_running(1)
assert todo_list.current_todo.step_number == 1
assert todo_list.next_pending.step_number == 2
# Complete first task
todo_list.mark_completed(1, result="Research done")
assert todo_list.current_todo is None
assert todo_list.completed_count == 1
# Start and complete second task
todo_list.mark_running(2)
todo_list.mark_completed(2, result="Analysis complete")
assert todo_list.completed_count == 2
# Start and complete third task
todo_list.mark_running(3)
todo_list.mark_completed(3, result="Report generated")
# Final state
assert todo_list.is_complete is True
assert todo_list.pending_count == 0
assert todo_list.completed_count == 3
assert todo_list.current_todo is None
assert todo_list.next_pending is None
class TestTodoFromPlanStep:
"""Tests for converting PlanStep to TodoItem."""
def test_convert_plan_step_to_todo_item(self):
"""Test converting a PlanStep to TodoItem."""
step = PlanStep(
step_number=1,
description="Search for information",
tool_to_use="search_tool",
depends_on=[],
)
todo = TodoItem(
step_number=step.step_number,
description=step.description,
tool_to_use=step.tool_to_use,
depends_on=step.depends_on,
status="pending",
)
assert todo.step_number == step.step_number
assert todo.description == step.description
assert todo.tool_to_use == step.tool_to_use
assert todo.depends_on == step.depends_on
assert todo.status == "pending"
def test_convert_multiple_plan_steps_to_todo_list(self):
"""Test converting multiple PlanSteps to a TodoList."""
steps = [
PlanStep(step_number=1, description="Step 1", tool_to_use="tool1"),
PlanStep(step_number=2, description="Step 2", depends_on=[1]),
PlanStep(step_number=3, description="Step 3", depends_on=[1, 2]),
]
todos = []
for step in steps:
todo = TodoItem(
step_number=step.step_number,
description=step.description,
tool_to_use=step.tool_to_use,
depends_on=step.depends_on,
status="pending",
)
todos.append(todo)
todo_list = TodoList(items=todos)
assert len(todo_list.items) == 3
assert todo_list.pending_count == 3
assert todo_list.items[0].tool_to_use == "tool1"
assert todo_list.items[1].depends_on == [1]
assert todo_list.items[2].depends_on == [1, 2]
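# The two conversion tests above repeat the same field-by-field mapping; a
# helper along the lines of the sketch below could express it once. This is an
# illustrative sketch only: plan_steps_to_todo_list is a hypothetical name, not
# part of the library, and it assumes the PlanStep/TodoItem fields exercised in
# these tests.
def plan_steps_to_todo_list(steps: list[PlanStep]) -> TodoList:
    """Convert structured PlanSteps into a TodoList of pending items."""
    return TodoList(
        items=[
            TodoItem(
                step_number=step.step_number,
                description=step.description,
                tool_to_use=step.tool_to_use,
                depends_on=step.depends_on,
                status="pending",
            )
            for step in steps
        ]
    )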


@@ -0,0 +1,698 @@
"""Tests for structured planning with steps and todo generation.
These tests verify that the planning system correctly generates structured
PlanStep objects and converts them to TodoItems across different LLM providers.
"""
import json
import os
from unittest.mock import MagicMock, Mock, patch
import pytest
from crewai import Agent, PlanningConfig, Task
from crewai.llm import LLM
from crewai.utilities.planning_types import PlanStep, TodoItem, TodoList
from crewai.utilities.reasoning_handler import (
FUNCTION_SCHEMA,
AgentReasoning,
ReasoningPlan,
)
class TestFunctionSchema:
"""Tests for the FUNCTION_SCHEMA used in structured planning."""
def test_schema_has_required_structure(self):
"""Test that FUNCTION_SCHEMA has the correct structure."""
assert FUNCTION_SCHEMA["type"] == "function"
assert "function" in FUNCTION_SCHEMA
assert FUNCTION_SCHEMA["function"]["name"] == "create_reasoning_plan"
def test_schema_parameters_structure(self):
"""Test that parameters have correct structure."""
params = FUNCTION_SCHEMA["function"]["parameters"]
assert params["type"] == "object"
assert "properties" in params
assert "required" in params
def test_schema_has_plan_property(self):
"""Test that schema includes plan property."""
props = FUNCTION_SCHEMA["function"]["parameters"]["properties"]
assert "plan" in props
assert props["plan"]["type"] == "string"
def test_schema_has_steps_property(self):
"""Test that schema includes steps array property."""
props = FUNCTION_SCHEMA["function"]["parameters"]["properties"]
assert "steps" in props
assert props["steps"]["type"] == "array"
def test_schema_steps_items_structure(self):
"""Test that steps items have correct structure."""
items = FUNCTION_SCHEMA["function"]["parameters"]["properties"]["steps"]["items"]
assert items["type"] == "object"
assert "properties" in items
assert "required" in items
assert "additionalProperties" in items
assert items["additionalProperties"] is False
def test_schema_step_properties(self):
"""Test that step items have all required properties."""
step_props = FUNCTION_SCHEMA["function"]["parameters"]["properties"]["steps"]["items"]["properties"]
assert "step_number" in step_props
assert step_props["step_number"]["type"] == "integer"
assert "description" in step_props
assert step_props["description"]["type"] == "string"
assert "tool_to_use" in step_props
# tool_to_use should be nullable
assert step_props["tool_to_use"]["type"] == ["string", "null"]
assert "depends_on" in step_props
assert step_props["depends_on"]["type"] == "array"
def test_schema_step_required_fields(self):
"""Test that step required fields are correct."""
required = FUNCTION_SCHEMA["function"]["parameters"]["properties"]["steps"]["items"]["required"]
assert "step_number" in required
assert "description" in required
assert "tool_to_use" in required
assert "depends_on" in required
def test_schema_has_ready_property(self):
"""Test that schema includes ready property."""
props = FUNCTION_SCHEMA["function"]["parameters"]["properties"]
assert "ready" in props
assert props["ready"]["type"] == "boolean"
def test_schema_top_level_required(self):
"""Test that top-level required fields are correct."""
required = FUNCTION_SCHEMA["function"]["parameters"]["required"]
assert "plan" in required
assert "steps" in required
assert "ready" in required
def test_schema_top_level_additional_properties(self):
"""Test that additionalProperties is False at top level."""
params = FUNCTION_SCHEMA["function"]["parameters"]
assert params["additionalProperties"] is False
class TestReasoningPlan:
"""Tests for the ReasoningPlan model with structured steps."""
def test_reasoning_plan_with_empty_steps(self):
"""Test ReasoningPlan can be created with empty steps."""
plan = ReasoningPlan(
plan="Simple plan",
steps=[],
ready=True,
)
assert plan.plan == "Simple plan"
assert plan.steps == []
assert plan.ready is True
def test_reasoning_plan_with_steps(self):
"""Test ReasoningPlan with structured steps."""
steps = [
PlanStep(step_number=1, description="First step", tool_to_use="tool1"),
PlanStep(step_number=2, description="Second step", depends_on=[1]),
]
plan = ReasoningPlan(
plan="Multi-step plan",
steps=steps,
ready=True,
)
assert plan.plan == "Multi-step plan"
assert len(plan.steps) == 2
assert plan.steps[0].step_number == 1
assert plan.steps[1].depends_on == [1]
class TestAgentReasoningWithMockedLLM:
"""Tests for AgentReasoning with mocked LLM responses."""
@pytest.fixture
def mock_agent(self):
"""Create a mock agent for testing."""
agent = MagicMock()
agent.role = "Test Agent"
agent.goal = "Test goal"
agent.backstory = "Test backstory"
agent.verbose = False
agent.planning_config = PlanningConfig()
agent.i18n = MagicMock()
agent.i18n.retrieve.return_value = "Test prompt: {description}"
# Mock the llm attribute
agent.llm = MagicMock()
agent.llm.supports_function_calling.return_value = True
return agent
def test_parse_steps_from_function_response(self, mock_agent):
"""Test that steps are correctly parsed from LLM function response."""
# Mock the LLM response with structured steps
mock_response = json.dumps({
"plan": "Research and analyze",
"steps": [
{
"step_number": 1,
"description": "Search for information",
"tool_to_use": "search_tool",
"depends_on": [],
},
{
"step_number": 2,
"description": "Analyze results",
"tool_to_use": None,
"depends_on": [1],
},
],
"ready": True,
})
mock_agent.llm.call.return_value = mock_response
handler = AgentReasoning(
agent=mock_agent,
task=None,
description="Test task",
expected_output="Test output",
)
# Call the function parsing method
plan, steps, ready = handler._call_with_function(
prompt="Test prompt",
plan_type="create_plan",
)
assert plan == "Research and analyze"
assert len(steps) == 2
assert steps[0].step_number == 1
assert steps[0].tool_to_use == "search_tool"
assert steps[1].depends_on == [1]
assert ready is True
def test_parse_steps_handles_missing_optional_fields(self, mock_agent):
"""Test that missing optional fields are handled correctly."""
mock_response = json.dumps({
"plan": "Simple plan",
"steps": [
{
"step_number": 1,
"description": "Do something",
"tool_to_use": None,
"depends_on": [],
},
],
"ready": True,
})
mock_agent.llm.call.return_value = mock_response
handler = AgentReasoning(
agent=mock_agent,
task=None,
description="Test task",
expected_output="Test output",
)
plan, steps, ready = handler._call_with_function(
prompt="Test prompt",
plan_type="create_plan",
)
assert len(steps) == 1
assert steps[0].tool_to_use is None
assert steps[0].depends_on == []
def test_parse_steps_with_missing_fields_uses_defaults(self, mock_agent):
"""Test that steps with missing fields get default values."""
mock_response = json.dumps({
"plan": "Plan with step missing fields",
"steps": [
{"step_number": 1, "description": "Valid step", "tool_to_use": None, "depends_on": []},
{"step_number": 2}, # Missing description, tool_to_use, depends_on
{"step_number": 3, "description": "Another valid", "tool_to_use": None, "depends_on": []},
],
"ready": True,
})
mock_agent.llm.call.return_value = mock_response
handler = AgentReasoning(
agent=mock_agent,
task=None,
description="Test task",
expected_output="Test output",
)
plan, steps, ready = handler._call_with_function(
prompt="Test prompt",
plan_type="create_plan",
)
# All 3 steps should be parsed, with defaults for missing fields
assert len(steps) == 3
assert steps[0].step_number == 1
assert steps[0].description == "Valid step"
assert steps[1].step_number == 2
assert steps[1].description == "" # Default value
assert steps[2].step_number == 3
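# The defaulting behavior asserted above (missing description -> "", missing
# tool_to_use -> None, missing depends_on -> []) could be produced by a parse
# helper along these lines. A minimal sketch under those assumptions; it is not
# the reasoning handler's actual parsing code.
def parse_plan_step(raw: dict) -> PlanStep:
    """Build a PlanStep from a possibly incomplete step dict."""
    return PlanStep(
        step_number=raw.get("step_number", 0),
        description=raw.get("description", ""),
        tool_to_use=raw.get("tool_to_use"),
        depends_on=raw.get("depends_on", []),
    )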
class TestTodoCreationFromPlan:
"""Tests for converting plan steps to todo items."""
def test_create_todos_from_plan_steps(self):
"""Test creating TodoList from PlanSteps."""
steps = [
PlanStep(
step_number=1,
description="Research competitors",
tool_to_use="search_tool",
depends_on=[],
),
PlanStep(
step_number=2,
description="Analyze data",
tool_to_use=None,
depends_on=[1],
),
PlanStep(
step_number=3,
description="Generate report",
tool_to_use="write_tool",
depends_on=[1, 2],
),
]
# Convert steps to todos (mirroring agent_executor._create_todos_from_plan)
todos = []
for step in steps:
todo = TodoItem(
step_number=step.step_number,
description=step.description,
tool_to_use=step.tool_to_use,
depends_on=step.depends_on,
status="pending",
)
todos.append(todo)
todo_list = TodoList(items=todos)
assert len(todo_list.items) == 3
assert todo_list.pending_count == 3
assert todo_list.completed_count == 0
# Verify todo properties match step properties
assert todo_list.items[0].description == "Research competitors"
assert todo_list.items[0].tool_to_use == "search_tool"
assert todo_list.items[1].depends_on == [1]
assert todo_list.items[2].depends_on == [1, 2]
# =============================================================================
# Provider-Specific Integration Tests (VCR recorded)
# =============================================================================
# Common test tools used across provider tests
def create_research_tools():
"""Create research tools for testing structured planning."""
from crewai.tools import tool
@tool
def web_search(query: str) -> str:
"""Search the web for information on a given topic.
Args:
query: The search query to look up.
Returns:
Search results as a string.
"""
# Simulated search results for testing
return f"Search results for '{query}': Found 3 relevant articles about the topic including market analysis, competitor data, and industry trends."
@tool
def read_website(url: str) -> str:
"""Read and extract content from a website URL.
Args:
url: The URL of the website to read.
Returns:
The extracted content from the website.
"""
# Simulated website content for testing
return f"Content from {url}: This article discusses key insights about the topic including market size ($50B), growth rate (15% YoY), and major players in the industry."
@tool
def generate_report(title: str, findings: str) -> str:
"""Generate a structured report based on research findings.
Args:
title: The title of the report.
findings: The research findings to include.
Returns:
A formatted report string.
"""
return f"# {title}\n\n## Executive Summary\n{findings}\n\n## Conclusion\nBased on the analysis, the market shows strong growth potential."
return web_search, read_website, generate_report
RESEARCH_TASK = """Research the current state of the AI agent market:
1. Search for recent information about AI agents and their market trends
2. Read detailed content from a relevant industry source
3. Generate a brief report summarizing the key findings
Use the available tools for each step."""
class TestOpenAIStructuredPlanning:
"""Integration tests for OpenAI structured planning with research workflow."""
@pytest.mark.vcr()
def test_openai_research_workflow_generates_steps(self):
"""Test that OpenAI generates structured plan steps for a research task."""
web_search, read_website, generate_report = create_research_tools()
llm = LLM(model="gpt-4o")
agent = Agent(
role="Research Analyst",
goal="Conduct thorough research and produce insightful reports",
backstory="An experienced analyst skilled at gathering information and synthesizing findings into actionable insights.",
llm=llm,
tools=[web_search, read_website, generate_report],
planning_config=PlanningConfig(max_attempts=1),
verbose=False,
)
result = agent.kickoff(RESEARCH_TASK)
# Verify result exists
assert result is not None
assert result.raw is not None
# The result should contain some report-like content
assert len(str(result.raw)) > 50
class TestAnthropicStructuredPlanning:
"""Integration tests for Anthropic structured planning with research workflow."""
@pytest.fixture(autouse=True)
def mock_anthropic_api_key(self):
"""Mock API key if not set."""
if "ANTHROPIC_API_KEY" not in os.environ:
with patch.dict(os.environ, {"ANTHROPIC_API_KEY": "test-key"}):
yield
else:
yield
@pytest.mark.vcr()
def test_anthropic_research_workflow_generates_steps(self):
"""Test that Anthropic generates structured plan steps for a research task."""
web_search, read_website, generate_report = create_research_tools()
llm = LLM(model="anthropic/claude-sonnet-4-20250514")
agent = Agent(
role="Research Analyst",
goal="Conduct thorough research and produce insightful reports",
backstory="An experienced analyst skilled at gathering information and synthesizing findings into actionable insights.",
llm=llm,
tools=[web_search, read_website, generate_report],
planning_config=PlanningConfig(max_attempts=1),
verbose=False,
)
result = agent.kickoff(RESEARCH_TASK)
# Verify result exists
assert result is not None
assert result.raw is not None
# The result should contain some report-like content
assert len(str(result.raw)) > 50
class TestGeminiStructuredPlanning:
"""Integration tests for Google Gemini structured planning with research workflow."""
@pytest.fixture(autouse=True)
def mock_google_api_key(self):
"""Mock API key if not set."""
if "GOOGLE_API_KEY" not in os.environ and "GEMINI_API_KEY" not in os.environ:
with patch.dict(os.environ, {"GOOGLE_API_KEY": "test-key"}):
yield
else:
yield
@pytest.mark.vcr()
def test_gemini_research_workflow_generates_steps(self):
"""Test that Gemini generates structured plan steps for a research task."""
web_search, read_website, generate_report = create_research_tools()
llm = LLM(model="gemini/gemini-2.5-flash")
agent = Agent(
role="Research Analyst",
goal="Conduct thorough research and produce insightful reports",
backstory="An experienced analyst skilled at gathering information and synthesizing findings into actionable insights.",
llm=llm,
tools=[web_search, read_website, generate_report],
planning_config=PlanningConfig(max_attempts=1),
verbose=False,
)
result = agent.kickoff(RESEARCH_TASK)
# Verify result exists
assert result is not None
assert result.raw is not None
# The result should contain some report-like content
assert len(str(result.raw)) > 50
class TestAzureStructuredPlanning:
"""Integration tests for Azure OpenAI structured planning with research workflow."""
@pytest.fixture(autouse=True)
def mock_azure_credentials(self):
"""Mock Azure credentials for tests."""
if "AZURE_API_KEY" not in os.environ:
with patch.dict(os.environ, {
"AZURE_API_KEY": "test-key",
"AZURE_ENDPOINT": "https://test.openai.azure.com"
}):
yield
else:
yield
@pytest.mark.vcr()
def test_azure_research_workflow_generates_steps(self):
"""Test that Azure OpenAI generates structured plan steps for a research task."""
web_search, read_website, generate_report = create_research_tools()
llm = LLM(model="azure/gpt-4o")
agent = Agent(
role="Research Analyst",
goal="Conduct thorough research and produce insightful reports",
backstory="An experienced analyst skilled at gathering information and synthesizing findings into actionable insights.",
llm=llm,
tools=[web_search, read_website, generate_report],
planning_config=PlanningConfig(max_attempts=1),
verbose=False,
)
result = agent.kickoff(RESEARCH_TASK)
# Verify result exists
assert result is not None
assert result.raw is not None
# The result should contain some report-like content
assert len(str(result.raw)) > 50
# =============================================================================
# Unit Tests with Mocked LLM Providers
# =============================================================================
class TestStructuredPlanningWithMockedProviders:
"""Unit tests with mocked LLM providers for faster execution."""
def _create_mock_plan_response(self, steps_data):
"""Helper to create mock plan response."""
return json.dumps({
"plan": "Test plan",
"steps": steps_data,
"ready": True,
})
def test_openai_mock_structured_response(self):
"""Test parsing OpenAI structured response."""
steps_data = [
{"step_number": 1, "description": "Search", "tool_to_use": "search", "depends_on": []},
{"step_number": 2, "description": "Analyze", "tool_to_use": None, "depends_on": [1]},
]
response = self._create_mock_plan_response(steps_data)
parsed = json.loads(response)
assert len(parsed["steps"]) == 2
assert parsed["steps"][0]["tool_to_use"] == "search"
assert parsed["steps"][1]["depends_on"] == [1]
def test_anthropic_mock_structured_response(self):
"""Test parsing Anthropic structured response (same format)."""
steps_data = [
{"step_number": 1, "description": "Research", "tool_to_use": "web_search", "depends_on": []},
{"step_number": 2, "description": "Summarize", "tool_to_use": None, "depends_on": [1]},
{"step_number": 3, "description": "Report", "tool_to_use": "write_file", "depends_on": [1, 2]},
]
response = self._create_mock_plan_response(steps_data)
parsed = json.loads(response)
assert len(parsed["steps"]) == 3
assert parsed["steps"][2]["depends_on"] == [1, 2]
def test_gemini_mock_structured_response(self):
"""Test parsing Gemini structured response (same format)."""
steps_data = [
{"step_number": 1, "description": "Gather data", "tool_to_use": "data_tool", "depends_on": []},
{"step_number": 2, "description": "Process", "tool_to_use": None, "depends_on": [1]},
]
response = self._create_mock_plan_response(steps_data)
parsed = json.loads(response)
assert len(parsed["steps"]) == 2
assert parsed["ready"] is True
def test_azure_mock_structured_response(self):
"""Test parsing Azure OpenAI structured response (same format as OpenAI)."""
steps_data = [
{"step_number": 1, "description": "Initialize", "tool_to_use": None, "depends_on": []},
{"step_number": 2, "description": "Execute", "tool_to_use": "executor", "depends_on": [1]},
{"step_number": 3, "description": "Finalize", "tool_to_use": None, "depends_on": [1, 2]},
]
response = self._create_mock_plan_response(steps_data)
parsed = json.loads(response)
assert len(parsed["steps"]) == 3
assert parsed["steps"][0]["tool_to_use"] is None
class TestTodoListIntegration:
"""Integration tests for TodoList with plan execution simulation."""
def test_full_plan_execution_workflow(self):
"""Test complete workflow from plan to todos to execution."""
# Simulate plan steps from LLM
plan_steps = [
PlanStep(
step_number=1,
description="Research the topic",
tool_to_use="search_tool",
depends_on=[],
),
PlanStep(
step_number=2,
description="Compile findings",
tool_to_use=None,
depends_on=[1],
),
PlanStep(
step_number=3,
description="Generate summary",
tool_to_use="summarize_tool",
depends_on=[1, 2],
),
]
# Convert to todos (like agent_executor._create_todos_from_plan)
todos = [
TodoItem(
step_number=step.step_number,
description=step.description,
tool_to_use=step.tool_to_use,
depends_on=step.depends_on,
status="pending",
)
for step in plan_steps
]
todo_list = TodoList(items=todos)
# Verify initial state
assert todo_list.pending_count == 3
assert todo_list.is_complete is False
# Simulate execution
for i in range(1, 4):
todo_list.mark_running(i)
assert todo_list.current_todo.step_number == i
todo_list.mark_completed(i, result=f"Step {i} completed")
# Verify final state
assert todo_list.is_complete is True
assert todo_list.completed_count == 3
assert all(item.result is not None for item in todo_list.items)
def test_dependency_aware_execution(self):
"""Test that dependencies are respected in execution order."""
steps = [
PlanStep(step_number=1, description="Base step", depends_on=[]),
PlanStep(step_number=2, description="Depends on 1", depends_on=[1]),
PlanStep(step_number=3, description="Depends on 1", depends_on=[1]),
PlanStep(step_number=4, description="Depends on 2 and 3", depends_on=[2, 3]),
]
todos = [
TodoItem(
step_number=s.step_number,
description=s.description,
depends_on=s.depends_on,
)
for s in steps
]
todo_list = TodoList(items=todos)
# Helper to check if dependencies are satisfied
def can_execute(todo: TodoItem) -> bool:
for dep in todo.depends_on:
dep_todo = todo_list.get_by_step_number(dep)
if dep_todo and dep_todo.status != "completed":
return False
return True
# Step 1 has no dependencies
assert can_execute(todo_list.items[0]) is True
# Steps 2 and 3 depend on 1 (not yet done)
assert can_execute(todo_list.items[1]) is False
assert can_execute(todo_list.items[2]) is False
# Complete step 1
todo_list.mark_completed(1)
# Now steps 2 and 3 can execute
assert can_execute(todo_list.items[1]) is True
assert can_execute(todo_list.items[2]) is True
# Step 4 still can't (depends on 2 and 3)
assert can_execute(todo_list.items[3]) is False
# Complete steps 2 and 3
todo_list.mark_completed(2)
todo_list.mark_completed(3)
# Now step 4 can execute
assert can_execute(todo_list.items[3]) is True
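# Taken together, can_execute and the TodoList helpers suggest a simple
# dependency-aware driver loop: pick a pending todo whose dependencies are all
# completed, mark it running, do the work, then mark it completed. The sketch
# below is illustrative only, not the executor's real loop, and execute_step is
# a hypothetical callable standing in for actual tool use.
def run_todo_list(todo_list: TodoList, execute_step) -> None:
    """Execute todos in an order that respects their depends_on relationships."""

    def deps_completed(item: TodoItem) -> bool:
        return all(
            (dep := todo_list.get_by_step_number(d)) is not None
            and dep.status == "completed"
            for d in item.depends_on
        )

    while not todo_list.is_complete:
        runnable = next(
            (i for i in todo_list.items if i.status == "pending" and deps_completed(i)),
            None,
        )
        if runnable is None:  # remaining todos are blocked on unmet dependencies
            break
        todo_list.mark_running(runnable.step_number)
        todo_list.mark_completed(runnable.step_number, result=execute_step(runnable))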