mirror of
https://github.com/crewAIInc/crewAI.git
synced 2025-12-16 04:18:35 +00:00
Gl/feat/a2a refactor (#3793)
* feat: agent metaclass, refactor a2a to wrappers
* feat: a2a schemas and utils
* chore: move agent class, update imports
* refactor: organize imports to avoid circularity, add a2a to console
* feat: pass response_model through call chain
* feat: add standard openapi spec serialization to tools and structured output
* feat: a2a events
* chore: add a2a to pyproject
* docs: minimal base for learn docs
* fix: adjust a2a conversation flow, allow llm to decide exit until max_retries
* fix: inject agent skills into initial prompt
* fix: format agent card as json in prompt
* refactor: simplify A2A agent prompt formatting and improve skill display
* chore: wide cleanup
* chore: cleanup logic, add auth cache, use json for messages in prompt
* chore: update docs
* fix: doc snippets formatting
* feat: optimize A2A agent card fetching and improve error reporting
* chore: move imports to top of file
* chore: refactor hasattr check
* chore: add httpx-auth, update lockfile
* feat: create base public api
* chore: cleanup modules, add docstrings, types
* fix: exclude extra fields in prompt
* chore: update docs
* tests: update to correct import
* chore: lint for ruff, add missing import
* fix: tweak openai streaming logic for response model
* tests: add reimport for test
* tests: add reimport for test
* fix: don't set a2a attr if not set
* fix: don't set a2a attr if not set
* chore: update cassettes
* tests: fix tests
* fix: use instructor and dont pass response_format for litellm
* chore: consolidate event listeners, add typing
* fix: address race condition in test, update cassettes
* tests: add correct mocks, rerun cassette for json
* tests: update cassette
* chore: regenerate cassette after new run
* fix: make token manager access-safe
* fix: make token manager access-safe
* merge
* chore: update test and cassette for output pydantic
* fix: tweak to disallow deadlock
* chore: linter
* fix: adjust event ordering for threading
* fix: use conditional for batch check
* tests: tweak for emission
* tests: simplify api + event check
* fix: ensure non-function calling llms see json formatted string
* tests: tweak message comparison
* fix: use internal instructor for litellm structure responses

Co-authored-by: Mike Plachta <mike@crewai.com>
291
docs/en/learn/a2a-agent-delegation.mdx
Normal file
@@ -0,0 +1,291 @@
---
title: Agent-to-Agent (A2A) Protocol
description: Enable CrewAI agents to delegate tasks to remote A2A-compliant agents for specialized handling
icon: network-wired
mode: "wide"
---

## A2A Agent Delegation

CrewAI supports the Agent-to-Agent (A2A) protocol, allowing agents to delegate tasks to remote specialized agents. The agent's LLM automatically decides whether to handle a task directly or delegate to an A2A agent based on the task requirements.

<Note>
A2A delegation requires the `a2a-sdk` package. Install with: `uv add 'crewai[a2a]'` or `pip install 'crewai[a2a]'`
</Note>
## How It Works

When an agent is configured with A2A capabilities:

1. The LLM analyzes each task
2. It decides to either:
   - Handle the task directly using its own capabilities
   - Delegate to a remote A2A agent for specialized handling
3. If delegating, the agent communicates with the remote A2A agent through the protocol
4. Results are returned to the CrewAI workflow
## Basic Configuration

Configure an agent for A2A delegation by setting the `a2a` parameter:

```python Code
from crewai import Agent, Crew, Task
from crewai.a2a import A2AConfig

agent = Agent(
    role="Research Coordinator",
    goal="Coordinate research tasks efficiently",
    backstory="Expert at delegating to specialized research agents",
    llm="gpt-4o",
    a2a=A2AConfig(
        endpoint="https://example.com/.well-known/agent-card.json",
        timeout=120,
        max_turns=10
    )
)

task = Task(
    description="Research the latest developments in quantum computing",
    expected_output="A comprehensive research report",
    agent=agent
)

crew = Crew(agents=[agent], tasks=[task], verbose=True)
result = crew.kickoff()
```
## Configuration Options

The `A2AConfig` class accepts the following parameters:

<ParamField path="endpoint" type="str" required>
  The A2A agent endpoint URL (typically points to `.well-known/agent-card.json`)
</ParamField>

<ParamField path="auth" type="AuthScheme" default="None">
  Authentication scheme for the A2A agent. Supports Bearer tokens, OAuth2, API keys, and HTTP authentication.
</ParamField>

<ParamField path="timeout" type="int" default="120">
  Request timeout in seconds
</ParamField>

<ParamField path="max_turns" type="int" default="10">
  Maximum number of conversation turns with the A2A agent
</ParamField>

<ParamField path="response_model" type="type[BaseModel]" default="None">
  Optional Pydantic model for requesting structured output from an A2A agent. The A2A protocol does not enforce this, so an A2A agent is not required to honor the request.
</ParamField>

<ParamField path="fail_fast" type="bool" default="True">
  Whether to raise an error immediately if an agent connection fails. When `False`, the agent continues with available agents and informs the LLM about unavailable ones.
</ParamField>
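To request structured output, point `response_model` at any Pydantic model. A minimal sketch — the `ResearchSummary` model here is a hypothetical example, not part of CrewAI, and since the protocol does not enforce structured output the result should still be validated:

```python Code
from crewai import Agent
from crewai.a2a import A2AConfig
from pydantic import BaseModel


class ResearchSummary(BaseModel):
    """Hypothetical output model, for illustration only."""

    topic: str
    key_findings: list[str]


agent = Agent(
    role="Research Coordinator",
    goal="Coordinate research tasks efficiently",
    backstory="Expert at delegating to specialized research agents",
    llm="gpt-4o",
    a2a=A2AConfig(
        endpoint="https://example.com/.well-known/agent-card.json",
        response_model=ResearchSummary,
    ),
)
```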
## Authentication

For A2A agents that require authentication, use one of the provided auth schemes:

<Tabs>
<Tab title="Bearer Token">
```python Code
from crewai import Agent
from crewai.a2a import A2AConfig
from crewai.a2a.auth import BearerTokenAuth

agent = Agent(
    role="Secure Coordinator",
    goal="Coordinate tasks with secured agents",
    backstory="Manages secure agent communications",
    llm="gpt-4o",
    a2a=A2AConfig(
        endpoint="https://secure-agent.example.com/.well-known/agent-card.json",
        auth=BearerTokenAuth(token="your-bearer-token"),
        timeout=120
    )
)
```
</Tab>

<Tab title="API Key">
```python Code
from crewai import Agent
from crewai.a2a import A2AConfig
from crewai.a2a.auth import APIKeyAuth

agent = Agent(
    role="API Coordinator",
    goal="Coordinate with API-based agents",
    backstory="Manages API-authenticated communications",
    llm="gpt-4o",
    a2a=A2AConfig(
        endpoint="https://api-agent.example.com/.well-known/agent-card.json",
        auth=APIKeyAuth(
            api_key="your-api-key",
            location="header",  # or "query" or "cookie"
            name="X-API-Key"
        ),
        timeout=120
    )
)
```
</Tab>

<Tab title="OAuth2">
```python Code
from crewai import Agent
from crewai.a2a import A2AConfig
from crewai.a2a.auth import OAuth2ClientCredentials

agent = Agent(
    role="OAuth Coordinator",
    goal="Coordinate with OAuth-secured agents",
    backstory="Manages OAuth-authenticated communications",
    llm="gpt-4o",
    a2a=A2AConfig(
        endpoint="https://oauth-agent.example.com/.well-known/agent-card.json",
        auth=OAuth2ClientCredentials(
            token_url="https://auth.example.com/oauth/token",
            client_id="your-client-id",
            client_secret="your-client-secret",
            scopes=["read", "write"]
        ),
        timeout=120
    )
)
```
</Tab>

<Tab title="HTTP Basic">
```python Code
from crewai import Agent
from crewai.a2a import A2AConfig
from crewai.a2a.auth import HTTPBasicAuth

agent = Agent(
    role="Basic Auth Coordinator",
    goal="Coordinate with basic auth agents",
    backstory="Manages basic authentication communications",
    llm="gpt-4o",
    a2a=A2AConfig(
        endpoint="https://basic-agent.example.com/.well-known/agent-card.json",
        auth=HTTPBasicAuth(
            username="your-username",
            password="your-password"
        ),
        timeout=120
    )
)
```
</Tab>
</Tabs>
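The OAuth2 Authorization Code flow needs one extra wiring step because it requires interactive authorization: the scheme exposes `set_authorization_callback` for supplying the authorization code obtained from the user's browser redirect. A hedged sketch — the URLs are placeholders and the callback body is an assumption you must implement (for example with a local callback server):

```python Code
from crewai import Agent
from crewai.a2a import A2AConfig
from crewai.a2a.auth import OAuth2AuthorizationCode

auth = OAuth2AuthorizationCode(
    authorization_url="https://auth.example.com/oauth/authorize",
    token_url="https://auth.example.com/oauth/token",
    client_id="your-client-id",
    client_secret="your-client-secret",
    redirect_uri="https://your-app.example.com/callback",
    scopes=["read"],
)


async def obtain_code(auth_url: str) -> str:
    # Placeholder: direct the user to auth_url, then return the
    # authorization code captured from the redirect.
    raise NotImplementedError


auth.set_authorization_callback(obtain_code)

agent = Agent(
    role="OAuth Coordinator",
    goal="Coordinate with OAuth-secured agents",
    backstory="Manages OAuth-authenticated communications",
    llm="gpt-4o",
    a2a=A2AConfig(
        endpoint="https://oauth-agent.example.com/.well-known/agent-card.json",
        auth=auth,
    ),
)
```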
## Multiple A2A Agents

Configure multiple A2A agents for delegation by passing a list:

```python Code
from crewai import Agent
from crewai.a2a import A2AConfig
from crewai.a2a.auth import BearerTokenAuth

agent = Agent(
    role="Multi-Agent Coordinator",
    goal="Coordinate with multiple specialized agents",
    backstory="Expert at delegating to the right specialist",
    llm="gpt-4o",
    a2a=[
        A2AConfig(
            endpoint="https://research.example.com/.well-known/agent-card.json",
            timeout=120
        ),
        A2AConfig(
            endpoint="https://data.example.com/.well-known/agent-card.json",
            auth=BearerTokenAuth(token="data-token"),
            timeout=90
        )
    ]
)
```

The LLM will automatically choose which A2A agent to delegate to based on the task requirements.
## Error Handling

Control how agent connection failures are handled using the `fail_fast` parameter:

```python Code
from crewai import Agent
from crewai.a2a import A2AConfig

# Fail immediately on connection errors (default)
agent = Agent(
    role="Research Coordinator",
    goal="Coordinate research tasks",
    backstory="Expert at delegation",
    llm="gpt-4o",
    a2a=A2AConfig(
        endpoint="https://research.example.com/.well-known/agent-card.json",
        fail_fast=True
    )
)

# Continue with available agents
agent = Agent(
    role="Multi-Agent Coordinator",
    goal="Coordinate with multiple agents",
    backstory="Expert at working with available resources",
    llm="gpt-4o",
    a2a=[
        A2AConfig(
            endpoint="https://primary.example.com/.well-known/agent-card.json",
            fail_fast=False
        ),
        A2AConfig(
            endpoint="https://backup.example.com/.well-known/agent-card.json",
            fail_fast=False
        )
    ]
)
```

When `fail_fast=False`:

- If some agents fail, the LLM is informed which agents are unavailable and can delegate to working agents
- If all agents fail, the LLM receives a notice about unavailable agents and handles the task directly
- Connection errors are captured and included in the context for better decision-making
## Best Practices

<CardGroup cols={2}>
  <Card title="Set Appropriate Timeouts" icon="clock">
    Configure timeouts based on expected A2A agent response times. Longer-running tasks may need higher timeout values.
  </Card>

  <Card title="Limit Conversation Turns" icon="comments">
    Use `max_turns` to prevent excessive back-and-forth. The agent will automatically conclude conversations before hitting the limit.
  </Card>

  <Card title="Use Resilient Error Handling" icon="shield-check">
    Set `fail_fast=False` for production environments with multiple agents to gracefully handle connection failures and maintain workflow continuity.
  </Card>

  <Card title="Secure Your Credentials" icon="lock">
    Store authentication tokens and credentials as environment variables, not in code.
  </Card>

  <Card title="Monitor Delegation Decisions" icon="eye">
    Use verbose mode to observe when the LLM chooses to delegate versus handle tasks directly.
  </Card>
</CardGroup>
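Following the credentials guidance above, tokens can be read from the environment rather than hard-coded. A minimal sketch, assuming a deployment-defined `A2A_BEARER_TOKEN` variable (the name is an example, not a CrewAI convention):

```python Code
import os

from crewai import Agent
from crewai.a2a import A2AConfig
from crewai.a2a.auth import BearerTokenAuth

# Fails fast at startup if the variable is missing.
token = os.environ["A2A_BEARER_TOKEN"]

agent = Agent(
    role="Secure Coordinator",
    goal="Coordinate tasks with secured agents",
    backstory="Manages secure agent communications",
    llm="gpt-4o",
    a2a=A2AConfig(
        endpoint="https://secure-agent.example.com/.well-known/agent-card.json",
        auth=BearerTokenAuth(token=token),
    ),
)
```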
## Supported Authentication Methods

- **Bearer Token** - Simple token-based authentication
- **OAuth2 Client Credentials** - OAuth2 flow for machine-to-machine communication
- **OAuth2 Authorization Code** - OAuth2 flow requiring user authorization
- **API Key** - Key-based authentication (header, query param, or cookie)
- **HTTP Basic** - Username/password authentication
- **HTTP Digest** - Digest authentication (requires `httpx-auth` package)
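Both OAuth2 schemes cache the access token and refresh it 60 seconds before the server-reported expiry (from the token response's `expires_in`, defaulting to 3600 seconds). The bookkeeping can be sketched in plain Python:

```python
import time


class TokenCache:
    """Sketch of the expiry bookkeeping used by the OAuth2 schemes."""

    def __init__(self) -> None:
        self.access_token: str | None = None
        self.expires_at: float | None = None

    def store(self, access_token: str, expires_in: float) -> None:
        # Refresh 60 seconds before the server-reported expiry.
        self.access_token = access_token
        self.expires_at = time.time() + expires_in - 60

    def needs_refresh(self) -> bool:
        return (
            self.access_token is None
            or self.expires_at is None
            or time.time() >= self.expires_at
        )


cache = TokenCache()
cache.store("token-abc", expires_in=3600)
print(cache.needs_refresh())  # token still valid for ~59 minutes
```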
## Learn More

For more information about the A2A protocol and reference implementations:

- [A2A Protocol Documentation](https://a2a-protocol.org)
- [A2A Sample Implementations](https://github.com/a2aproject/a2a-samples)
- [A2A Python SDK](https://github.com/a2aproject/a2a-python)
@@ -93,10 +93,11 @@ azure-ai-inference = [
anthropic = [
    "anthropic>=0.69.0",
]
# a2a = [
#     "a2a-sdk~=0.3.9",
#     "httpx-sse>=0.4.0",
# ]
a2a = [
    "a2a-sdk~=0.3.10",
    "httpx-auth>=0.23.1",
    "httpx-sse>=0.4.0",
]

[project.scripts]
@@ -3,7 +3,7 @@ from typing import Any
import urllib.request
import warnings

from crewai.agent import Agent
from crewai.agent.core import Agent
from crewai.crew import Crew
from crewai.crews.crew_output import CrewOutput
from crewai.flow.flow import Flow
6
lib/crewai/src/crewai/a2a/__init__.py
Normal file
@@ -0,0 +1,6 @@
"""Agent-to-Agent (A2A) protocol communication module for CrewAI."""

from crewai.a2a.config import A2AConfig


__all__ = ["A2AConfig"]
20
lib/crewai/src/crewai/a2a/auth/__init__.py
Normal file
@@ -0,0 +1,20 @@
"""A2A authentication schemas."""

from crewai.a2a.auth.schemas import (
    APIKeyAuth,
    BearerTokenAuth,
    HTTPBasicAuth,
    HTTPDigestAuth,
    OAuth2AuthorizationCode,
    OAuth2ClientCredentials,
)


__all__ = [
    "APIKeyAuth",
    "BearerTokenAuth",
    "HTTPBasicAuth",
    "HTTPDigestAuth",
    "OAuth2AuthorizationCode",
    "OAuth2ClientCredentials",
]
392
lib/crewai/src/crewai/a2a/auth/schemas.py
Normal file
@@ -0,0 +1,392 @@
"""Authentication schemes for A2A protocol agents.

Supported authentication methods:
- Bearer tokens
- OAuth2 (Client Credentials, Authorization Code)
- API Keys (header, query, cookie)
- HTTP Basic authentication
- HTTP Digest authentication
"""

from __future__ import annotations

from abc import ABC, abstractmethod
import base64
from collections.abc import Awaitable, Callable, MutableMapping
import time
from typing import Literal
import urllib.parse

import httpx
from httpx import DigestAuth
from pydantic import BaseModel, Field, PrivateAttr


class AuthScheme(ABC, BaseModel):
    """Base class for authentication schemes."""

    @abstractmethod
    async def apply_auth(
        self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
    ) -> MutableMapping[str, str]:
        """Apply authentication to request headers.

        Args:
            client: HTTP client for making auth requests.
            headers: Current request headers.

        Returns:
            Updated headers with authentication applied.
        """
        ...


class BearerTokenAuth(AuthScheme):
    """Bearer token authentication (Authorization: Bearer <token>).

    Attributes:
        token: Bearer token for authentication.
    """

    token: str = Field(description="Bearer token")

    async def apply_auth(
        self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
    ) -> MutableMapping[str, str]:
        """Apply Bearer token to Authorization header.

        Args:
            client: HTTP client for making auth requests.
            headers: Current request headers.

        Returns:
            Updated headers with Bearer token in Authorization header.
        """
        headers["Authorization"] = f"Bearer {self.token}"
        return headers


class HTTPBasicAuth(AuthScheme):
    """HTTP Basic authentication.

    Attributes:
        username: Username for Basic authentication.
        password: Password for Basic authentication.
    """

    username: str = Field(description="Username")
    password: str = Field(description="Password")

    async def apply_auth(
        self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
    ) -> MutableMapping[str, str]:
        """Apply HTTP Basic authentication.

        Args:
            client: HTTP client for making auth requests.
            headers: Current request headers.

        Returns:
            Updated headers with Basic auth in Authorization header.
        """
        credentials = f"{self.username}:{self.password}"
        encoded = base64.b64encode(credentials.encode()).decode()
        headers["Authorization"] = f"Basic {encoded}"
        return headers


class HTTPDigestAuth(AuthScheme):
    """HTTP Digest authentication.

    Note: Uses httpx-auth library for digest implementation.

    Attributes:
        username: Username for Digest authentication.
        password: Password for Digest authentication.
    """

    username: str = Field(description="Username")
    password: str = Field(description="Password")

    async def apply_auth(
        self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
    ) -> MutableMapping[str, str]:
        """Digest auth is handled by httpx auth flow, not headers.

        Args:
            client: HTTP client for making auth requests.
            headers: Current request headers.

        Returns:
            Unchanged headers (Digest auth handled by httpx auth flow).
        """
        return headers

    def configure_client(self, client: httpx.AsyncClient) -> None:
        """Configure client with Digest auth.

        Args:
            client: HTTP client to configure with Digest authentication.
        """
        client.auth = DigestAuth(self.username, self.password)


class APIKeyAuth(AuthScheme):
    """API Key authentication (header, query, or cookie).

    Attributes:
        api_key: API key value for authentication.
        location: Where to send the API key (header, query, or cookie).
        name: Parameter name for the API key (default: X-API-Key).
    """

    api_key: str = Field(description="API key value")
    location: Literal["header", "query", "cookie"] = Field(
        default="header", description="Where to send the API key"
    )
    name: str = Field(default="X-API-Key", description="Parameter name for the API key")

    async def apply_auth(
        self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
    ) -> MutableMapping[str, str]:
        """Apply API key authentication.

        Args:
            client: HTTP client for making auth requests.
            headers: Current request headers.

        Returns:
            Updated headers with API key (for header/cookie locations).
        """
        if self.location == "header":
            headers[self.name] = self.api_key
        elif self.location == "cookie":
            headers["Cookie"] = f"{self.name}={self.api_key}"
        return headers

    def configure_client(self, client: httpx.AsyncClient) -> None:
        """Configure client for query param API keys.

        Args:
            client: HTTP client to configure with query param API key hook.
        """
        if self.location == "query":

            async def _add_api_key_param(request: httpx.Request) -> None:
                url = httpx.URL(request.url)
                request.url = url.copy_add_param(self.name, self.api_key)

            client.event_hooks["request"].append(_add_api_key_param)


class OAuth2ClientCredentials(AuthScheme):
    """OAuth2 Client Credentials flow authentication.

    Attributes:
        token_url: OAuth2 token endpoint URL.
        client_id: OAuth2 client identifier.
        client_secret: OAuth2 client secret.
        scopes: List of required OAuth2 scopes.
    """

    token_url: str = Field(description="OAuth2 token endpoint")
    client_id: str = Field(description="OAuth2 client ID")
    client_secret: str = Field(description="OAuth2 client secret")
    scopes: list[str] = Field(
        default_factory=list, description="Required OAuth2 scopes"
    )

    _access_token: str | None = PrivateAttr(default=None)
    _token_expires_at: float | None = PrivateAttr(default=None)

    async def apply_auth(
        self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
    ) -> MutableMapping[str, str]:
        """Apply OAuth2 access token to Authorization header.

        Args:
            client: HTTP client for making token requests.
            headers: Current request headers.

        Returns:
            Updated headers with OAuth2 access token in Authorization header.
        """
        if (
            self._access_token is None
            or self._token_expires_at is None
            or time.time() >= self._token_expires_at
        ):
            await self._fetch_token(client)

        if self._access_token:
            headers["Authorization"] = f"Bearer {self._access_token}"

        return headers

    async def _fetch_token(self, client: httpx.AsyncClient) -> None:
        """Fetch OAuth2 access token using client credentials flow.

        Args:
            client: HTTP client for making token request.

        Raises:
            httpx.HTTPStatusError: If token request fails.
        """
        data = {
            "grant_type": "client_credentials",
            "client_id": self.client_id,
            "client_secret": self.client_secret,
        }

        if self.scopes:
            data["scope"] = " ".join(self.scopes)

        response = await client.post(self.token_url, data=data)
        response.raise_for_status()

        token_data = response.json()
        self._access_token = token_data["access_token"]
        expires_in = token_data.get("expires_in", 3600)
        self._token_expires_at = time.time() + expires_in - 60


class OAuth2AuthorizationCode(AuthScheme):
    """OAuth2 Authorization Code flow authentication.

    Note: Requires interactive authorization.

    Attributes:
        authorization_url: OAuth2 authorization endpoint URL.
        token_url: OAuth2 token endpoint URL.
        client_id: OAuth2 client identifier.
        client_secret: OAuth2 client secret.
        redirect_uri: OAuth2 redirect URI for callback.
        scopes: List of required OAuth2 scopes.
    """

    authorization_url: str = Field(description="OAuth2 authorization endpoint")
    token_url: str = Field(description="OAuth2 token endpoint")
    client_id: str = Field(description="OAuth2 client ID")
    client_secret: str = Field(description="OAuth2 client secret")
    redirect_uri: str = Field(description="OAuth2 redirect URI")
    scopes: list[str] = Field(
        default_factory=list, description="Required OAuth2 scopes"
    )

    _access_token: str | None = PrivateAttr(default=None)
    _refresh_token: str | None = PrivateAttr(default=None)
    _token_expires_at: float | None = PrivateAttr(default=None)
    _authorization_callback: Callable[[str], Awaitable[str]] | None = PrivateAttr(
        default=None
    )

    def set_authorization_callback(
        self, callback: Callable[[str], Awaitable[str]] | None
    ) -> None:
        """Set callback to handle authorization URL.

        Args:
            callback: Async function that receives authorization URL and returns auth code.
        """
        self._authorization_callback = callback

    async def apply_auth(
        self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
    ) -> MutableMapping[str, str]:
        """Apply OAuth2 access token to Authorization header.

        Args:
            client: HTTP client for making token requests.
            headers: Current request headers.

        Returns:
            Updated headers with OAuth2 access token in Authorization header.

        Raises:
            ValueError: If authorization callback is not set.
        """
        if self._access_token is None:
            if self._authorization_callback is None:
                msg = "Authorization callback not set. Use set_authorization_callback()"
                raise ValueError(msg)
            await self._fetch_initial_token(client)
        elif self._token_expires_at and time.time() >= self._token_expires_at:
            await self._refresh_access_token(client)

        if self._access_token:
            headers["Authorization"] = f"Bearer {self._access_token}"

        return headers

    async def _fetch_initial_token(self, client: httpx.AsyncClient) -> None:
        """Fetch initial access token using authorization code flow.

        Args:
            client: HTTP client for making token request.

        Raises:
            ValueError: If authorization callback is not set.
            httpx.HTTPStatusError: If token request fails.
        """
        params = {
            "response_type": "code",
            "client_id": self.client_id,
            "redirect_uri": self.redirect_uri,
            "scope": " ".join(self.scopes),
        }
        auth_url = f"{self.authorization_url}?{urllib.parse.urlencode(params)}"

        if self._authorization_callback is None:
            msg = "Authorization callback not set"
            raise ValueError(msg)
        auth_code = await self._authorization_callback(auth_url)

        data = {
            "grant_type": "authorization_code",
            "code": auth_code,
            "client_id": self.client_id,
            "client_secret": self.client_secret,
            "redirect_uri": self.redirect_uri,
        }

        response = await client.post(self.token_url, data=data)
        response.raise_for_status()

        token_data = response.json()
        self._access_token = token_data["access_token"]
        self._refresh_token = token_data.get("refresh_token")

        expires_in = token_data.get("expires_in", 3600)
        self._token_expires_at = time.time() + expires_in - 60

    async def _refresh_access_token(self, client: httpx.AsyncClient) -> None:
        """Refresh the access token using refresh token.

        Args:
            client: HTTP client for making token request.

        Raises:
            httpx.HTTPStatusError: If token refresh request fails.
        """
        if not self._refresh_token:
            await self._fetch_initial_token(client)
            return

        data = {
            "grant_type": "refresh_token",
            "refresh_token": self._refresh_token,
            "client_id": self.client_id,
            "client_secret": self.client_secret,
        }

        response = await client.post(self.token_url, data=data)
        response.raise_for_status()

        token_data = response.json()
        self._access_token = token_data["access_token"]
        if "refresh_token" in token_data:
            self._refresh_token = token_data["refresh_token"]

        expires_in = token_data.get("expires_in", 3600)
        self._token_expires_at = time.time() + expires_in - 60
236
lib/crewai/src/crewai/a2a/auth/utils.py
Normal file
@@ -0,0 +1,236 @@
|
||||
"""Authentication utilities for A2A protocol agent communication.
|
||||
|
||||
Provides validation and retry logic for various authentication schemes including
|
||||
OAuth2, API keys, and HTTP authentication methods.
|
||||
"""
|
||||
|
||||
import asyncio
|
||||
from collections.abc import Awaitable, Callable, MutableMapping
|
||||
import re
|
||||
from typing import Final
|
||||
|
||||
from a2a.client.errors import A2AClientHTTPError
|
||||
from a2a.types import (
|
||||
APIKeySecurityScheme,
|
||||
AgentCard,
|
||||
HTTPAuthSecurityScheme,
|
||||
OAuth2SecurityScheme,
|
||||
)
|
||||
from httpx import AsyncClient, Response
|
||||
|
||||
from crewai.a2a.auth.schemas import (
|
||||
APIKeyAuth,
|
||||
AuthScheme,
|
||||
BearerTokenAuth,
|
||||
HTTPBasicAuth,
|
||||
HTTPDigestAuth,
|
||||
OAuth2AuthorizationCode,
|
||||
OAuth2ClientCredentials,
|
||||
)
|
||||
|
||||
|
||||
_auth_store: dict[int, AuthScheme | None] = {}
|
||||
|
||||
_SCHEME_PATTERN: Final[re.Pattern[str]] = re.compile(r"(\w+)\s+(.+?)(?=,\s*\w+\s+|$)")
|
||||
_PARAM_PATTERN: Final[re.Pattern[str]] = re.compile(r'(\w+)=(?:"([^"]*)"|([^\s,]+))')
|
||||
|
||||
_SCHEME_AUTH_MAPPING: Final[dict[type, tuple[type[AuthScheme], ...]]] = {
|
||||
OAuth2SecurityScheme: (
|
||||
OAuth2ClientCredentials,
|
||||
OAuth2AuthorizationCode,
|
||||
BearerTokenAuth,
|
||||
),
|
||||
APIKeySecurityScheme: (APIKeyAuth,),
|
||||
}
|
||||
|
||||
_HTTP_SCHEME_MAPPING: Final[dict[str, type[AuthScheme]]] = {
|
||||
"basic": HTTPBasicAuth,
|
||||
"digest": HTTPDigestAuth,
|
||||
"bearer": BearerTokenAuth,
|
||||
}
|
||||
|
||||
|
||||
def _raise_auth_mismatch(
|
||||
expected_classes: type[AuthScheme] | tuple[type[AuthScheme], ...],
|
||||
provided_auth: AuthScheme,
|
||||
) -> None:
|
||||
"""Raise authentication mismatch error.
|
||||
|
||||
Args:
|
||||
expected_classes: Expected authentication class or tuple of classes.
|
||||
provided_auth: Actually provided authentication instance.
|
||||
|
||||
Raises:
|
||||
A2AClientHTTPError: Always raises with 401 status code.
|
||||
"""
|
||||
if isinstance(expected_classes, tuple):
|
||||
if len(expected_classes) == 1:
|
||||
required = expected_classes[0].__name__
|
||||
else:
|
||||
names = [cls.__name__ for cls in expected_classes]
|
||||
required = f"one of ({', '.join(names)})"
|
||||
else:
|
||||
required = expected_classes.__name__
|
||||
|
||||
msg = (
|
||||
f"AgentCard requires {required} authentication, "
|
||||
f"but {type(provided_auth).__name__} was provided"
|
||||
)
|
||||
raise A2AClientHTTPError(401, msg)
|
||||
|
||||
|
||||
def parse_www_authenticate(header_value: str) -> dict[str, dict[str, str]]:
|
||||
"""Parse WWW-Authenticate header into auth challenges.
|
||||
|
||||
Args:
|
||||
header_value: The WWW-Authenticate header value.
|
||||
|
||||
Returns:
|
||||
Dictionary mapping auth scheme to its parameters.
|
||||
Example: {"Bearer": {"realm": "api", "scope": "read write"}}
|
||||
"""
|
||||
if not header_value:
|
||||
return {}
|
||||
|
||||
challenges: dict[str, dict[str, str]] = {}
|
||||
|
||||
for match in _SCHEME_PATTERN.finditer(header_value):
|
||||
scheme = match.group(1)
|
||||
params_str = match.group(2)
|
||||
|
||||
params: dict[str, str] = {}
|
||||
|
||||
for param_match in _PARAM_PATTERN.finditer(params_str):
|
||||
key = param_match.group(1)
|
||||
value = param_match.group(2) or param_match.group(3)
|
||||
params[key] = value
|
||||
|
||||
challenges[scheme] = params
|
||||
|
||||
return challenges


def validate_auth_against_agent_card(
    agent_card: AgentCard, auth: AuthScheme | None
) -> None:
    """Validate that provided auth matches AgentCard security requirements.

    Args:
        agent_card: The A2A AgentCard containing security requirements.
        auth: User-provided authentication scheme (or None).

    Raises:
        A2AClientHTTPError: If auth doesn't match AgentCard requirements (status_code=401).
    """
    if not agent_card.security or not agent_card.security_schemes:
        return

    if not auth:
        msg = "AgentCard requires authentication but no auth scheme provided"
        raise A2AClientHTTPError(401, msg)

    first_security_req = agent_card.security[0] if agent_card.security else {}

    for scheme_name in first_security_req.keys():
        security_scheme_wrapper = agent_card.security_schemes.get(scheme_name)
        if not security_scheme_wrapper:
            continue

        scheme = security_scheme_wrapper.root

        if allowed_classes := _SCHEME_AUTH_MAPPING.get(type(scheme)):
            if not isinstance(auth, allowed_classes):
                _raise_auth_mismatch(allowed_classes, auth)
            return

        if isinstance(scheme, HTTPAuthSecurityScheme):
            if required_class := _HTTP_SCHEME_MAPPING.get(scheme.scheme.lower()):
                if not isinstance(auth, required_class):
                    _raise_auth_mismatch(required_class, auth)
                return

    msg = "Could not validate auth against AgentCard security requirements"
    raise A2AClientHTTPError(401, msg)


async def retry_on_401(
    request_func: Callable[[], Awaitable[Response]],
    auth_scheme: AuthScheme | None,
    client: AsyncClient,
    headers: MutableMapping[str, str],
    max_retries: int = 3,
) -> Response:
    """Retry a request on 401 authentication error.

    Handles 401 errors by:
    1. Parsing WWW-Authenticate header
    2. Re-acquiring credentials
    3. Retrying the request

    Args:
        request_func: Async function that makes the HTTP request.
        auth_scheme: Authentication scheme to refresh credentials with.
        client: HTTP client for making requests.
        headers: Request headers to update with new auth.
        max_retries: Maximum number of retry attempts (default: 3).

    Returns:
        HTTP response from the request.

    Raises:
        httpx.HTTPStatusError: If retries are exhausted or auth scheme is None.
    """
    last_response: Response | None = None
    last_challenges: dict[str, dict[str, str]] = {}

    for attempt in range(max_retries):
        response = await request_func()

        if response.status_code != 401:
            return response

        last_response = response

        if auth_scheme is None:
            response.raise_for_status()
            return response

        www_authenticate = response.headers.get("WWW-Authenticate", "")
        challenges = parse_www_authenticate(www_authenticate)
        last_challenges = challenges

        if attempt >= max_retries - 1:
            break

        backoff_time = 2**attempt
        await asyncio.sleep(backoff_time)

        await auth_scheme.apply_auth(client, headers)

    if last_response:
        last_response.raise_for_status()
        return last_response

    msg = "retry_on_401 failed without making any requests"
    if last_challenges:
        challenge_info = ", ".join(
            f"{scheme} (realm={params.get('realm', 'N/A')})"
            for scheme, params in last_challenges.items()
        )
        msg = f"{msg}. Server challenges: {challenge_info}"
    raise RuntimeError(msg)
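The retry loop above sleeps `2**attempt` seconds between 401s, capped at `max_retries` attempts. A minimal sketch of the same pattern with a fake request function (hypothetical names; backoffs are recorded rather than slept, for illustration):

```python
import asyncio

async def retry_with_backoff(request, max_retries: int = 3):
    """Retry `request` until it stops returning 401 (sketch of the pattern)."""
    backoffs = []  # recorded instead of slept, for illustration
    status = 401
    for attempt in range(max_retries):
        status = await request()
        if status != 401:
            return status, backoffs
        if attempt >= max_retries - 1:
            break
        backoffs.append(2**attempt)  # 1s, 2s, 4s, ...
    return status, backoffs

# A fake request that fails twice, then succeeds.
calls = {"n": 0}
async def fake_request():
    calls["n"] += 1
    return 401 if calls["n"] < 3 else 200

status, backoffs = asyncio.run(retry_with_backoff(fake_request))
# status == 200, backoffs == [1, 2]
```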


def configure_auth_client(
    auth: HTTPDigestAuth | APIKeyAuth, client: AsyncClient
) -> None:
    """Configure HTTP client with auth-specific settings.

    Only HTTPDigestAuth and APIKeyAuth need client configuration.

    Args:
        auth: Authentication scheme that requires client configuration.
        client: HTTP client to configure.
    """
    auth.configure_client(client)

lib/crewai/src/crewai/a2a/config.py (new file, 59 lines)
@@ -0,0 +1,59 @@
"""A2A configuration types.

This module is separate from experimental.a2a to avoid circular imports.
"""

from __future__ import annotations

from typing import Annotated

from pydantic import (
    BaseModel,
    BeforeValidator,
    Field,
    HttpUrl,
    TypeAdapter,
)

from crewai.a2a.auth.schemas import AuthScheme


http_url_adapter = TypeAdapter(HttpUrl)

Url = Annotated[
    str,
    BeforeValidator(
        lambda value: str(http_url_adapter.validate_python(value, strict=True))
    ),
]
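The `Url` alias validates endpoint strings as HTTP URLs at model-construction time via pydantic's `HttpUrl`. A rough stdlib approximation of that check, for illustration only (not the pydantic behavior):

```python
from urllib.parse import urlparse

def validate_url(value: str) -> str:
    """Reject strings that are not absolute http(s) URLs (rough sketch)."""
    parts = urlparse(value)
    if parts.scheme not in ("http", "https") or not parts.netloc:
        raise ValueError(f"invalid HTTP URL: {value!r}")
    return value

validate_url("https://agents.example.com/a2a")  # passes through unchanged
```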


class A2AConfig(BaseModel):
    """Configuration for A2A protocol integration.

    Attributes:
        endpoint: A2A agent endpoint URL.
        auth: Authentication scheme (Bearer, OAuth2, API Key, HTTP Basic/Digest).
        timeout: Request timeout in seconds (default: 120).
        max_turns: Maximum conversation turns with A2A agent (default: 10).
        response_model: Optional Pydantic model for structured A2A agent responses.
        fail_fast: If True, raise error when agent unreachable; if False, skip and continue (default: True).
    """

    endpoint: Url = Field(description="A2A agent endpoint URL")
    auth: AuthScheme | None = Field(
        default=None,
        description="Authentication scheme (Bearer, OAuth2, API Key, HTTP Basic/Digest)",
    )
    timeout: int = Field(default=120, description="Request timeout in seconds")
    max_turns: int = Field(
        default=10, description="Maximum conversation turns with A2A agent"
    )
    response_model: type[BaseModel] | None = Field(
        default=None,
        description="Optional Pydantic model for structured A2A agent responses. When specified, the A2A agent is expected to return JSON matching this schema.",
    )
    fail_fast: bool = Field(
        default=True,
        description="If True, raise an error immediately when the A2A agent is unreachable. If False, skip the A2A agent and continue execution.",
    )

lib/crewai/src/crewai/a2a/templates.py (new file, 29 lines)
@@ -0,0 +1,29 @@
"""String templates for A2A (Agent-to-Agent) protocol messaging and status."""

from string import Template
from typing import Final


AVAILABLE_AGENTS_TEMPLATE: Final[Template] = Template(
    "\n<AVAILABLE_A2A_AGENTS>\n $available_a2a_agents\n</AVAILABLE_A2A_AGENTS>\n"
)
PREVIOUS_A2A_CONVERSATION_TEMPLATE: Final[Template] = Template(
    "\n<PREVIOUS_A2A_CONVERSATION>\n"
    " $previous_a2a_conversation"
    "\n</PREVIOUS_A2A_CONVERSATION>\n"
)
CONVERSATION_TURN_INFO_TEMPLATE: Final[Template] = Template(
    "\n<CONVERSATION_PROGRESS>\n"
    ' turn="$turn_count"\n'
    ' max_turns="$max_turns"\n'
    " $warning"
    "\n</CONVERSATION_PROGRESS>\n"
)
UNAVAILABLE_AGENTS_NOTICE_TEMPLATE: Final[Template] = Template(
    "\n<A2A_AGENTS_STATUS>\n"
    " NOTE: A2A agents were configured but are currently unavailable.\n"
    " You cannot delegate to remote agents for this task.\n\n"
    " Unavailable Agents:\n"
    " $unavailable_agents"
    "\n</A2A_AGENTS_STATUS>\n"
)
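These are plain `string.Template` objects, so rendering them is a `substitute` call; a minimal illustration with an assumed turn count and warning string:

```python
from string import Template

# Same template as in crewai/a2a/templates.py, reproduced for a runnable example.
CONVERSATION_TURN_INFO_TEMPLATE = Template(
    "\n<CONVERSATION_PROGRESS>\n"
    ' turn="$turn_count"\n'
    ' max_turns="$max_turns"\n'
    " $warning"
    "\n</CONVERSATION_PROGRESS>\n"
)

rendered = CONVERSATION_TURN_INFO_TEMPLATE.substitute(
    turn_count=3, max_turns=10, warning="7 turns remaining."
)
# rendered now contains turn="3" and max_turns="10"
```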

lib/crewai/src/crewai/a2a/types.py (new file, 38 lines)
@@ -0,0 +1,38 @@
"""Type definitions for A2A protocol message parts."""

from typing import Any, Literal, Protocol, TypedDict, runtime_checkable

from typing_extensions import NotRequired


@runtime_checkable
class AgentResponseProtocol(Protocol):
    """Protocol for the dynamically created AgentResponse model."""

    a2a_ids: tuple[str, ...]
    message: str
    is_a2a: bool


class PartsMetadataDict(TypedDict, total=False):
    """Metadata for A2A message parts.

    Attributes:
        mimeType: MIME type for the part content.
        schema: JSON schema for the part content.
    """

    mimeType: Literal["application/json"]
    schema: dict[str, Any]


class PartsDict(TypedDict):
    """A2A message part containing text and optional metadata.

    Attributes:
        text: The text content of the message part.
        metadata: Optional metadata describing the part content.
    """

    text: str
    metadata: NotRequired[PartsMetadataDict]

lib/crewai/src/crewai/a2a/utils.py (new file, 755 lines)
@@ -0,0 +1,755 @@
"""Utility functions for A2A (Agent-to-Agent) protocol delegation."""

from __future__ import annotations

import asyncio
from collections.abc import AsyncIterator, MutableMapping
from contextlib import asynccontextmanager
from functools import lru_cache
import time
from typing import TYPE_CHECKING, Any
import uuid

from a2a.client import Client, ClientConfig, ClientFactory
from a2a.client.errors import A2AClientHTTPError
from a2a.types import (
    AgentCard,
    Message,
    Part,
    Role,
    TaskArtifactUpdateEvent,
    TaskState,
    TaskStatusUpdateEvent,
    TextPart,
    TransportProtocol,
)
import httpx
from pydantic import BaseModel, Field, create_model

from crewai.a2a.auth.schemas import APIKeyAuth, HTTPDigestAuth
from crewai.a2a.auth.utils import (
    _auth_store,
    configure_auth_client,
    retry_on_401,
    validate_auth_against_agent_card,
)
from crewai.a2a.config import A2AConfig
from crewai.a2a.types import PartsDict, PartsMetadataDict
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.a2a_events import (
    A2AConversationStartedEvent,
    A2ADelegationCompletedEvent,
    A2ADelegationStartedEvent,
    A2AMessageSentEvent,
    A2AResponseReceivedEvent,
)
from crewai.types.utils import create_literals_from_strings


if TYPE_CHECKING:
    from a2a.types import Message, Task as A2ATask

    from crewai.a2a.auth.schemas import AuthScheme


@lru_cache()
def _fetch_agent_card_cached(
    endpoint: str,
    auth_hash: int,
    timeout: int,
    _ttl_hash: int,
) -> AgentCard:
    """Cached version of fetch_agent_card with auth support.

    Args:
        endpoint: A2A agent endpoint URL
        auth_hash: Hash of the auth object
        timeout: Request timeout
        _ttl_hash: Time-based hash for cache invalidation (unused in body)

    Returns:
        Cached AgentCard
    """
    auth = _auth_store.get(auth_hash)

    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        return loop.run_until_complete(
            _fetch_agent_card_async(endpoint=endpoint, auth=auth, timeout=timeout)
        )
    finally:
        loop.close()


def fetch_agent_card(
    endpoint: str,
    auth: AuthScheme | None = None,
    timeout: int = 30,
    use_cache: bool = True,
    cache_ttl: int = 300,
) -> AgentCard:
    """Fetch AgentCard from an A2A endpoint with optional caching.

    Args:
        endpoint: A2A agent endpoint URL (AgentCard URL)
        auth: Optional AuthScheme for authentication
        timeout: Request timeout in seconds
        use_cache: Whether to use caching (default True)
        cache_ttl: Cache TTL in seconds (default 300 = 5 minutes)

    Returns:
        AgentCard object with agent capabilities and skills

    Raises:
        httpx.HTTPStatusError: If the request fails
        A2AClientHTTPError: If authentication fails
    """
    if use_cache:
        auth_hash = hash((type(auth).__name__, id(auth))) if auth else 0
        _auth_store[auth_hash] = auth
        ttl_hash = int(time.time() // cache_ttl)
        return _fetch_agent_card_cached(endpoint, auth_hash, timeout, ttl_hash)

    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        return loop.run_until_complete(
            _fetch_agent_card_async(endpoint=endpoint, auth=auth, timeout=timeout)
        )
    finally:
        loop.close()
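The `ttl_hash = int(time.time() // cache_ttl)` argument is what gives `lru_cache` a time-to-live: the extra parameter changes value once per TTL window, so stale entries stop matching and force a fresh fetch. A self-contained sketch of the trick (hypothetical names):

```python
import time
from functools import lru_cache

CALLS = []  # records real invocations, i.e. cache misses

@lru_cache
def expensive_lookup(key: str, _ttl_hash: int) -> int:
    CALLS.append(key)  # only runs on a cache miss
    return len(key)

def cached_lookup(key: str, ttl: int = 300) -> int:
    # The ttl_hash changes once per `ttl` seconds, invalidating old entries.
    return expensive_lookup(key, int(time.time() // ttl))

cached_lookup("agent-card")
cached_lookup("agent-card")  # same TTL window -> cache hit, CALLS unchanged
```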


async def _fetch_agent_card_async(
    endpoint: str,
    auth: AuthScheme | None,
    timeout: int,
) -> AgentCard:
    """Async implementation of AgentCard fetching.

    Args:
        endpoint: A2A agent endpoint URL
        auth: Optional AuthScheme for authentication
        timeout: Request timeout in seconds

    Returns:
        AgentCard object
    """
    if "/.well-known/agent-card.json" in endpoint:
        base_url = endpoint.replace("/.well-known/agent-card.json", "")
        agent_card_path = "/.well-known/agent-card.json"
    else:
        url_parts = endpoint.split("/", 3)
        base_url = f"{url_parts[0]}//{url_parts[2]}"
        agent_card_path = f"/{url_parts[3]}" if len(url_parts) > 3 else "/"

    headers: MutableMapping[str, str] = {}
    if auth:
        async with httpx.AsyncClient(timeout=timeout) as temp_auth_client:
            if isinstance(auth, (HTTPDigestAuth, APIKeyAuth)):
                configure_auth_client(auth, temp_auth_client)
            headers = await auth.apply_auth(temp_auth_client, {})

    async with httpx.AsyncClient(timeout=timeout, headers=headers) as temp_client:
        if auth and isinstance(auth, (HTTPDigestAuth, APIKeyAuth)):
            configure_auth_client(auth, temp_client)

        agent_card_url = f"{base_url}{agent_card_path}"

        async def _fetch_agent_card_request() -> httpx.Response:
            return await temp_client.get(agent_card_url)

        try:
            response = await retry_on_401(
                request_func=_fetch_agent_card_request,
                auth_scheme=auth,
                client=temp_client,
                headers=temp_client.headers,
                max_retries=2,
            )
            response.raise_for_status()

            return AgentCard.model_validate(response.json())

        except httpx.HTTPStatusError as e:
            if e.response.status_code == 401:
                error_details = ["Authentication failed"]
                www_auth = e.response.headers.get("WWW-Authenticate")
                if www_auth:
                    error_details.append(f"WWW-Authenticate: {www_auth}")
                if not auth:
                    error_details.append("No auth scheme provided")
                msg = " | ".join(error_details)
                raise A2AClientHTTPError(401, msg) from e
            raise
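The `endpoint.split("/", 3)` branch splits an arbitrary URL into an origin and a path: the first three pieces are `"https:"`, `""`, and the host, and whatever remains is the path. A quick illustration of that splitting:

```python
def split_endpoint(endpoint: str) -> tuple[str, str]:
    """Split a URL into (base_url, path) the same way the branch above does."""
    url_parts = endpoint.split("/", 3)
    # url_parts: ["https:", "", "host", "rest/of/path"]
    base_url = f"{url_parts[0]}//{url_parts[2]}"
    path = f"/{url_parts[3]}" if len(url_parts) > 3 else "/"
    return base_url, path

split_endpoint("https://agents.example.com/a2a/v1")
# -> ("https://agents.example.com", "/a2a/v1")
```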


def execute_a2a_delegation(
    endpoint: str,
    auth: AuthScheme | None,
    timeout: int,
    task_description: str,
    context: str | None = None,
    context_id: str | None = None,
    task_id: str | None = None,
    reference_task_ids: list[str] | None = None,
    metadata: dict[str, Any] | None = None,
    extensions: dict[str, Any] | None = None,
    conversation_history: list[Message] | None = None,
    agent_id: str | None = None,
    agent_role: Role | None = None,
    agent_branch: Any | None = None,
    response_model: type[BaseModel] | None = None,
    turn_number: int | None = None,
) -> dict[str, Any]:
    """Execute a task delegation to a remote A2A agent with multi-turn support.

    Handles:
    - AgentCard discovery
    - Authentication setup
    - Message creation and sending
    - Response parsing
    - Multi-turn conversations

    Args:
        endpoint: A2A agent endpoint URL (AgentCard URL)
        auth: Optional AuthScheme for authentication (Bearer, OAuth2, API Key, HTTP Basic/Digest)
        timeout: Request timeout in seconds
        task_description: The task to delegate
        context: Optional context information
        context_id: Context ID for correlating messages/tasks
        task_id: Specific task identifier
        reference_task_ids: List of related task IDs
        metadata: Additional metadata (external_id, request_id, etc.)
        extensions: Protocol extensions for custom fields
        conversation_history: Previous Message objects from conversation
        agent_id: Agent identifier for logging
        agent_role: Role of the CrewAI agent delegating the task
        agent_branch: Optional agent tree branch for logging
        response_model: Optional Pydantic model for structured outputs
        turn_number: Optional turn number for multi-turn conversations

    Returns:
        Dictionary with:
        - status: "completed", "input_required", "failed", etc.
        - result: Result string (if completed)
        - error: Error message (if failed)
        - history: List of new Message objects from this exchange

    Raises:
        ImportError: If a2a-sdk is not installed
    """
    is_multiturn = bool(conversation_history and len(conversation_history) > 0)
    if turn_number is None:
        turn_number = (
            len([m for m in (conversation_history or []) if m.role == Role.user]) + 1
        )
    crewai_event_bus.emit(
        agent_branch,
        A2ADelegationStartedEvent(
            endpoint=endpoint,
            task_description=task_description,
            agent_id=agent_id,
            is_multiturn=is_multiturn,
            turn_number=turn_number,
        ),
    )

    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        result = loop.run_until_complete(
            _execute_a2a_delegation_async(
                endpoint=endpoint,
                auth=auth,
                timeout=timeout,
                task_description=task_description,
                context=context,
                context_id=context_id,
                task_id=task_id,
                reference_task_ids=reference_task_ids,
                metadata=metadata,
                extensions=extensions,
                conversation_history=conversation_history or [],
                is_multiturn=is_multiturn,
                turn_number=turn_number,
                agent_branch=agent_branch,
                agent_id=agent_id,
                agent_role=agent_role,
                response_model=response_model,
            )
        )

        crewai_event_bus.emit(
            agent_branch,
            A2ADelegationCompletedEvent(
                status=result["status"],
                result=result.get("result"),
                error=result.get("error"),
                is_multiturn=is_multiturn,
            ),
        )

        return result
    finally:
        loop.close()


async def _execute_a2a_delegation_async(
    endpoint: str,
    auth: AuthScheme | None,
    timeout: int,
    task_description: str,
    context: str | None,
    context_id: str | None,
    task_id: str | None,
    reference_task_ids: list[str] | None,
    metadata: dict[str, Any] | None,
    extensions: dict[str, Any] | None,
    conversation_history: list[Message],
    is_multiturn: bool = False,
    turn_number: int = 1,
    agent_branch: Any | None = None,
    agent_id: str | None = None,
    agent_role: str | None = None,
    response_model: type[BaseModel] | None = None,
) -> dict[str, Any]:
    """Async implementation of A2A delegation with multi-turn support.

    Args:
        endpoint: A2A agent endpoint URL
        auth: Optional AuthScheme for authentication
        timeout: Request timeout in seconds
        task_description: Task to delegate
        context: Optional context
        context_id: Context ID for correlation
        task_id: Specific task identifier
        reference_task_ids: Related task IDs
        metadata: Additional metadata
        extensions: Protocol extensions
        conversation_history: Previous Message objects
        is_multiturn: Whether this is a multi-turn conversation
        turn_number: Current turn number
        agent_branch: Agent tree branch for logging
        agent_id: Agent identifier for logging
        agent_role: Agent role for logging
        response_model: Optional Pydantic model for structured outputs

    Returns:
        Dictionary with status, result/error, and new history
    """
    agent_card = await _fetch_agent_card_async(endpoint, auth, timeout)

    validate_auth_against_agent_card(agent_card, auth)

    headers: MutableMapping[str, str] = {}
    if auth:
        async with httpx.AsyncClient(timeout=timeout) as temp_auth_client:
            if isinstance(auth, (HTTPDigestAuth, APIKeyAuth)):
                configure_auth_client(auth, temp_auth_client)
            headers = await auth.apply_auth(temp_auth_client, {})

    a2a_agent_name = None
    if agent_card.name:
        a2a_agent_name = agent_card.name

    if turn_number == 1:
        agent_id_for_event = agent_id or endpoint
        crewai_event_bus.emit(
            agent_branch,
            A2AConversationStartedEvent(
                agent_id=agent_id_for_event,
                endpoint=endpoint,
                a2a_agent_name=a2a_agent_name,
            ),
        )

    message_parts = []

    if context:
        message_parts.append(f"Context:\n{context}\n\n")
    message_parts.append(f"{task_description}")
    message_text = "".join(message_parts)

    if is_multiturn and conversation_history and not task_id:
        if first_task_id := conversation_history[0].task_id:
            task_id = first_task_id

    parts: PartsDict = {"text": message_text}
    if response_model:
        parts.update(
            {
                "metadata": PartsMetadataDict(
                    mimeType="application/json",
                    schema=response_model.model_json_schema(),
                )
            }
        )

    message = Message(
        role=Role.user,
        message_id=str(uuid.uuid4()),
        parts=[Part(root=TextPart(**parts))],
        context_id=context_id,
        task_id=task_id,
        reference_task_ids=reference_task_ids,
        metadata=metadata,
        extensions=extensions,
    )

    transport_protocol = TransportProtocol("JSONRPC")
    new_messages: list[Message] = [*conversation_history, message]
    crewai_event_bus.emit(
        None,
        A2AMessageSentEvent(
            message=message_text,
            turn_number=turn_number,
            is_multiturn=is_multiturn,
            agent_role=agent_role,
        ),
    )

    async with _create_a2a_client(
        agent_card=agent_card,
        transport_protocol=transport_protocol,
        timeout=timeout,
        headers=headers,
        streaming=True,
        auth=auth,
    ) as client:
        result_parts: list[str] = []
        final_result: dict[str, Any] | None = None
        event_stream = client.send_message(message)

        try:
            async for event in event_stream:
                if isinstance(event, Message):
                    new_messages.append(event)
                    for part in event.parts:
                        if part.root.kind == "text":
                            text = part.root.text
                            result_parts.append(text)

                elif isinstance(event, tuple):
                    a2a_task, update = event

                    if isinstance(update, TaskArtifactUpdateEvent):
                        artifact = update.artifact
                        result_parts.extend(
                            part.root.text
                            for part in artifact.parts
                            if part.root.kind == "text"
                        )

                    is_final_update = False
                    if isinstance(update, TaskStatusUpdateEvent):
                        is_final_update = update.final

                    if not is_final_update and a2a_task.status.state not in [
                        TaskState.completed,
                        TaskState.input_required,
                        TaskState.failed,
                        TaskState.rejected,
                        TaskState.auth_required,
                        TaskState.canceled,
                    ]:
                        continue

                    if a2a_task.status.state == TaskState.completed:
                        extracted_parts = _extract_task_result_parts(a2a_task)
                        result_parts.extend(extracted_parts)
                        if a2a_task.history:
                            new_messages.extend(a2a_task.history)

                        response_text = " ".join(result_parts) if result_parts else ""
                        crewai_event_bus.emit(
                            None,
                            A2AResponseReceivedEvent(
                                response=response_text,
                                turn_number=turn_number,
                                is_multiturn=is_multiturn,
                                status="completed",
                                agent_role=agent_role,
                            ),
                        )

                        final_result = {
                            "status": "completed",
                            "result": response_text,
                            "history": new_messages,
                            "agent_card": agent_card,
                        }
                        break

                    if a2a_task.status.state == TaskState.input_required:
                        if a2a_task.history:
                            new_messages.extend(a2a_task.history)

                        response_text = _extract_error_message(
                            a2a_task, "Additional input required"
                        )
                        if response_text and not a2a_task.history:
                            agent_message = Message(
                                role=Role.agent,
                                message_id=str(uuid.uuid4()),
                                parts=[Part(root=TextPart(text=response_text))],
                                context_id=a2a_task.context_id
                                if hasattr(a2a_task, "context_id")
                                else None,
                                task_id=a2a_task.task_id
                                if hasattr(a2a_task, "task_id")
                                else None,
                            )
                            new_messages.append(agent_message)
                        crewai_event_bus.emit(
                            None,
                            A2AResponseReceivedEvent(
                                response=response_text,
                                turn_number=turn_number,
                                is_multiturn=is_multiturn,
                                status="input_required",
                                agent_role=agent_role,
                            ),
                        )

                        final_result = {
                            "status": "input_required",
                            "error": response_text,
                            "history": new_messages,
                            "agent_card": agent_card,
                        }
                        break

                    if a2a_task.status.state in [TaskState.failed, TaskState.rejected]:
                        error_msg = _extract_error_message(
                            a2a_task, "Task failed without error message"
                        )
                        if a2a_task.history:
                            new_messages.extend(a2a_task.history)
                        final_result = {
                            "status": "failed",
                            "error": error_msg,
                            "history": new_messages,
                        }
                        break

                    if a2a_task.status.state == TaskState.auth_required:
                        error_msg = _extract_error_message(
                            a2a_task, "Authentication required"
                        )
                        final_result = {
                            "status": "auth_required",
                            "error": error_msg,
                            "history": new_messages,
                        }
                        break

                    if a2a_task.status.state == TaskState.canceled:
                        error_msg = _extract_error_message(
                            a2a_task, "Task was canceled"
                        )
                        final_result = {
                            "status": "canceled",
                            "error": error_msg,
                            "history": new_messages,
                        }
                        break
        except Exception as e:
            current_exception: Exception | BaseException | None = e
            while current_exception:
                if hasattr(current_exception, "response"):
                    response = current_exception.response
                    if hasattr(response, "text"):
                        break
                if current_exception and hasattr(current_exception, "__cause__"):
                    current_exception = current_exception.__cause__
            raise
        finally:
            if hasattr(event_stream, "aclose"):
                await event_stream.aclose()

    if final_result:
        return final_result

    return {
        "status": "completed",
        "result": " ".join(result_parts) if result_parts else "",
        "history": new_messages,
    }


@asynccontextmanager
async def _create_a2a_client(
    agent_card: AgentCard,
    transport_protocol: TransportProtocol,
    timeout: int,
    headers: MutableMapping[str, str],
    streaming: bool,
    auth: AuthScheme | None = None,
) -> AsyncIterator[Client]:
    """Create and configure an A2A client.

    Args:
        agent_card: The A2A agent card
        transport_protocol: Transport protocol to use
        timeout: Request timeout in seconds
        headers: HTTP headers (already with auth applied)
        streaming: Enable streaming responses
        auth: Optional AuthScheme for client configuration

    Yields:
        Configured A2A client instance
    """
    async with httpx.AsyncClient(
        timeout=timeout,
        headers=headers,
    ) as httpx_client:
        if auth and isinstance(auth, (HTTPDigestAuth, APIKeyAuth)):
            configure_auth_client(auth, httpx_client)

        config = ClientConfig(
            httpx_client=httpx_client,
            supported_transports=[str(transport_protocol.value)],
            streaming=streaming,
            accepted_output_modes=["application/json"],
        )

        factory = ClientFactory(config)
        client = factory.create(agent_card)
        yield client


def _extract_task_result_parts(a2a_task: A2ATask) -> list[str]:
    """Extract result parts from A2A task history and artifacts.

    Args:
        a2a_task: A2A Task object with history and artifacts

    Returns:
        List of result text parts
    """
    result_parts: list[str] = []

    if a2a_task.history:
        for history_msg in reversed(a2a_task.history):
            if history_msg.role == Role.agent:
                result_parts.extend(
                    part.root.text
                    for part in history_msg.parts
                    if part.root.kind == "text"
                )
                break

    if a2a_task.artifacts:
        result_parts.extend(
            part.root.text
            for artifact in a2a_task.artifacts
            for part in artifact.parts
            if part.root.kind == "text"
        )

    return result_parts


def _extract_error_message(a2a_task: A2ATask, default: str) -> str:
    """Extract error message from A2A task.

    Args:
        a2a_task: A2A Task object
        default: Default message if no error found

    Returns:
        Error message string
    """
    if a2a_task.status and a2a_task.status.message:
        msg = a2a_task.status.message
        if msg:
            for part in msg.parts:
                if part.root.kind == "text":
                    return str(part.root.text)
            return str(msg)

    if a2a_task.history:
        for history_msg in reversed(a2a_task.history):
            for part in history_msg.parts:
                if part.root.kind == "text":
                    return str(part.root.text)

    return default


def create_agent_response_model(agent_ids: tuple[str, ...]) -> type[BaseModel]:
    """Create a dynamic AgentResponse model with Literal types for agent IDs.

    Args:
        agent_ids: List of available A2A agent IDs

    Returns:
        Dynamically created Pydantic model with Literal-constrained a2a_ids field
    """
    DynamicLiteral = create_literals_from_strings(agent_ids)  # noqa: N806

    return create_model(
        "AgentResponse",
        a2a_ids=(
            tuple[DynamicLiteral, ...],  # type: ignore[valid-type]
            Field(
                default_factory=tuple,
                max_length=len(agent_ids),
                description="A2A agent IDs to delegate to.",
            ),
        ),
        message=(
            str,
            Field(
                description="The message content. If is_a2a=true, this is sent to the A2A agent. If is_a2a=false, this is your final answer ending the conversation."
            ),
        ),
        is_a2a=(
            bool,
            Field(
                description="Set to true to continue the conversation by sending this message to the A2A agent and awaiting their response. Set to false ONLY when you are completely done and providing your final answer (not when asking questions)."
            ),
        ),
        __base__=BaseModel,
    )
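`create_literals_from_strings` is defined elsewhere in the PR; presumably it builds a `typing.Literal` from the runtime tuple of agent IDs, so the generated model can only select known endpoints. A stdlib sketch of that idea (not the crewAI helper):

```python
from typing import Literal, get_args

def literals_from_strings(values: tuple[str, ...]):
    """Build Literal[...] from runtime strings (sketch; not the crewAI helper)."""
    # Subscripting Literal with a tuple expands to Literal[v1, v2, ...].
    return Literal[values]

AgentId = literals_from_strings(("https://a.example/a2a", "https://b.example/a2a"))
get_args(AgentId)  # -> the original tuple of IDs
```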


def extract_a2a_agent_ids_from_config(
    a2a_config: list[A2AConfig] | A2AConfig | None,
) -> tuple[list[A2AConfig], tuple[str, ...]]:
    """Extract A2A agent configs and their endpoint IDs from A2A configuration.

    Args:
        a2a_config: A2A configuration (single config, list of configs, or None)

    Returns:
        Tuple of (list of A2A configs, tuple of agent endpoint IDs)
    """
    if a2a_config is None:
        return [], ()

    if isinstance(a2a_config, A2AConfig):
        a2a_agents = [a2a_config]
    else:
        a2a_agents = a2a_config
    return a2a_agents, tuple(config.endpoint for config in a2a_agents)
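The single-vs-list normalization above is a common pattern: accept one config, a list, or `None`, and always return the same shape. A self-contained sketch with a stand-in dataclass (hypothetical; not `A2AConfig`):

```python
from dataclasses import dataclass

@dataclass
class Config:  # stand-in for A2AConfig
    endpoint: str

def normalize(config):
    """Accept one Config, a list of Configs, or None; always return (list, ids)."""
    if config is None:
        return [], ()
    configs = [config] if isinstance(config, Config) else config
    return configs, tuple(c.endpoint for c in configs)

normalize(Config("https://a.example/a2a"))
# -> ([Config(endpoint='https://a.example/a2a')], ('https://a.example/a2a',))
```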
|
||||
|
||||
|
||||
def get_a2a_agents_and_response_model(
|
||||
a2a_config: list[A2AConfig] | A2AConfig | None,
|
||||
) -> tuple[list[A2AConfig], type[BaseModel]]:
|
||||
"""Get A2A agent IDs and response model.
|
||||
|
||||
Args:
|
||||
a2a_config: A2A configuration
|
||||
|
||||
Returns:
|
||||
Tuple of A2A agent IDs and response model
|
||||
"""
|
||||
a2a_agents, agent_ids = extract_a2a_agent_ids_from_config(a2a_config=a2a_config)
|
||||
return a2a_agents, create_agent_response_model(agent_ids)
|
||||
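The dynamic-model trick above can be reproduced in isolation: subscripting `typing.Literal` with a tuple of strings at runtime yields a Literal constrained to exactly those values, and `pydantic.create_model` builds the response schema around it. A minimal sketch under the assumption that Pydantic v2 is installed (the endpoint values are illustrative, not from CrewAI):

```python
from typing import Literal

from pydantic import BaseModel, Field, ValidationError, create_model

# Hypothetical endpoints standing in for fetched A2A agent IDs.
agent_ids = ("http://agent-a.local", "http://agent-b.local")

# Subscripting Literal with a tuple at runtime is equivalent to
# Literal["http://agent-a.local", "http://agent-b.local"] — the same idea
# as create_literals_from_strings in the module above.
IdLiteral = Literal[agent_ids]  # type: ignore[valid-type]

AgentResponse = create_model(
    "AgentResponse",
    a2a_ids=(tuple[IdLiteral, ...], Field(default_factory=tuple)),
    message=(str, ...),
    is_a2a=(bool, ...),
    __base__=BaseModel,
)

# Known IDs pass validation...
ok = AgentResponse(a2a_ids=("http://agent-a.local",), message="hi", is_a2a=True)

# ...while unknown IDs are rejected at parse time, which is the point:
# the LLM cannot delegate to an agent that was never configured.
try:
    AgentResponse(a2a_ids=("http://rogue.local",), message="hi", is_a2a=True)
    rejected = False
except ValidationError:
    rejected = True
```

Because the constraint lives in the schema, invalid delegation targets fail during response parsing rather than surfacing later as a failed network call.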
570
lib/crewai/src/crewai/a2a/wrapper.py
Normal file
@@ -0,0 +1,570 @@
"""A2A agent wrapping logic for metaclass integration.

Wraps agent classes with A2A delegation capabilities.
"""

from __future__ import annotations

from collections.abc import Callable
from concurrent.futures import ThreadPoolExecutor, as_completed
from functools import wraps
from types import MethodType
from typing import TYPE_CHECKING, Any, cast

from a2a.types import Role
from pydantic import BaseModel, ValidationError

from crewai.a2a.config import A2AConfig
from crewai.a2a.templates import (
    AVAILABLE_AGENTS_TEMPLATE,
    CONVERSATION_TURN_INFO_TEMPLATE,
    PREVIOUS_A2A_CONVERSATION_TEMPLATE,
    UNAVAILABLE_AGENTS_NOTICE_TEMPLATE,
)
from crewai.a2a.types import AgentResponseProtocol
from crewai.a2a.utils import (
    execute_a2a_delegation,
    fetch_agent_card,
    get_a2a_agents_and_response_model,
)
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.a2a_events import (
    A2AConversationCompletedEvent,
    A2AMessageSentEvent,
)


if TYPE_CHECKING:
    from a2a.types import AgentCard, Message

    from crewai.agent.core import Agent
    from crewai.task import Task
    from crewai.tools.base_tool import BaseTool


def wrap_agent_with_a2a_instance(agent: Agent) -> None:
    """Wrap an agent instance's execute_task method with A2A support.

    This function modifies the agent instance by wrapping its execute_task
    method to add A2A delegation capabilities. It should only be called when
    the agent has an a2a configuration set.

    Args:
        agent: The agent instance to wrap.
    """
    original_execute_task = agent.execute_task.__func__

    @wraps(original_execute_task)
    def execute_task_with_a2a(
        self: Agent,
        task: Task,
        context: str | None = None,
        tools: list[BaseTool] | None = None,
    ) -> str:
        """Execute a task with A2A delegation support.

        Args:
            self: The agent instance.
            task: The task to execute.
            context: Optional context for task execution.
            tools: Optional tools available to the agent.

        Returns:
            Task execution result.
        """
        if not self.a2a:
            return original_execute_task(self, task, context, tools)

        a2a_agents, agent_response_model = get_a2a_agents_and_response_model(self.a2a)

        return _execute_task_with_a2a(
            self=self,
            a2a_agents=a2a_agents,
            original_fn=original_execute_task,
            task=task,
            agent_response_model=agent_response_model,
            context=context,
            tools=tools,
        )

    object.__setattr__(agent, "execute_task", MethodType(execute_task_with_a2a, agent))
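The per-instance wrapping above is a standard-library pattern: pull the plain function out of the bound method via `__func__`, wrap it with `functools.wraps`, and rebind the wrapper to that single instance with `types.MethodType`. A stdlib-only sketch (the class and method names are illustrative, not CrewAI's):

```python
from functools import wraps
from types import MethodType


class Agent:
    def execute_task(self, task: str) -> str:
        return f"executed {task}"


def wrap_instance(agent: Agent) -> None:
    # Unwrap the bound method to get the underlying plain function.
    original = agent.execute_task.__func__

    @wraps(original)
    def execute_with_prefix(self: Agent, task: str) -> str:
        result = original(self, task)
        return f"[wrapped] {result}"

    # Rebind on this instance only; other Agent instances are untouched.
    agent.execute_task = MethodType(execute_with_prefix, agent)


a = Agent()
b = Agent()
wrap_instance(a)
```

Calling `a.execute_task("t")` now goes through the wrapper while `b.execute_task("t")` does not; `@wraps` preserves the original `__name__` and docstring. The real code uses `object.__setattr__` for the rebinding only because Pydantic models intercept ordinary attribute assignment.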
def _fetch_card_from_config(
    config: A2AConfig,
) -> tuple[A2AConfig, AgentCard | Exception]:
    """Fetch an agent card from an A2A config.

    Args:
        config: A2A configuration.

    Returns:
        Tuple of (config, card or exception).
    """
    try:
        card = fetch_agent_card(
            endpoint=config.endpoint,
            auth=config.auth,
            timeout=config.timeout,
        )
        return config, card
    except Exception as e:
        return config, e


def _fetch_agent_cards_concurrently(
    a2a_agents: list[A2AConfig],
) -> tuple[dict[str, AgentCard], dict[str, str]]:
    """Fetch agent cards concurrently for multiple A2A agents.

    Args:
        a2a_agents: List of A2A agent configurations.

    Returns:
        Tuple of (agent_cards dict, failed_agents dict mapping endpoint to error message).
    """
    agent_cards: dict[str, AgentCard] = {}
    failed_agents: dict[str, str] = {}

    with ThreadPoolExecutor(max_workers=len(a2a_agents)) as executor:
        futures = {
            executor.submit(_fetch_card_from_config, config): config
            for config in a2a_agents
        }
        for future in as_completed(futures):
            config, result = future.result()
            if isinstance(result, Exception):
                if config.fail_fast:
                    raise RuntimeError(
                        f"Failed to fetch agent card from {config.endpoint}. "
                        f"Ensure the A2A agent is running and accessible. Error: {result}"
                    ) from result
                failed_agents[config.endpoint] = str(result)
            else:
                agent_cards[config.endpoint] = result

    return agent_cards, failed_agents
def _execute_task_with_a2a(
    self: Agent,
    a2a_agents: list[A2AConfig],
    original_fn: Callable[..., str],
    task: Task,
    agent_response_model: type[BaseModel],
    context: str | None,
    tools: list[BaseTool] | None,
) -> str:
    """Wrap execute_task with A2A delegation logic.

    Args:
        self: The agent instance.
        a2a_agents: List of A2A agent configurations.
        original_fn: The original execute_task method.
        task: The task to execute.
        agent_response_model: Response model for parsing LLM output.
        context: Optional context for task execution.
        tools: Optional tools available to the agent.

    Returns:
        Task execution result (either from the LLM or an A2A agent).
    """
    original_description: str = task.description
    original_output_pydantic = task.output_pydantic
    original_response_model = task.response_model

    agent_cards, failed_agents = _fetch_agent_cards_concurrently(a2a_agents)

    if not agent_cards and a2a_agents and failed_agents:
        unavailable_agents_text = ""
        for endpoint, error in failed_agents.items():
            unavailable_agents_text += f" - {endpoint}: {error}\n"

        notice = UNAVAILABLE_AGENTS_NOTICE_TEMPLATE.substitute(
            unavailable_agents=unavailable_agents_text
        )
        task.description = f"{original_description}{notice}"

        try:
            return original_fn(self, task, context, tools)
        finally:
            task.description = original_description

    task.description = _augment_prompt_with_a2a(
        a2a_agents=a2a_agents,
        task_description=original_description,
        agent_cards=agent_cards,
        failed_agents=failed_agents,
    )
    task.response_model = agent_response_model

    try:
        raw_result = original_fn(self, task, context, tools)
        agent_response = _parse_agent_response(
            raw_result=raw_result, agent_response_model=agent_response_model
        )

        if isinstance(agent_response, BaseModel) and isinstance(
            agent_response, AgentResponseProtocol
        ):
            if agent_response.is_a2a:
                return _delegate_to_a2a(
                    self,
                    agent_response=agent_response,
                    task=task,
                    original_fn=original_fn,
                    context=context,
                    tools=tools,
                    agent_cards=agent_cards,
                    original_task_description=original_description,
                )
            return str(agent_response.message)

        return raw_result
    finally:
        task.description = original_description
        task.output_pydantic = original_output_pydantic
        task.response_model = original_response_model


def _augment_prompt_with_a2a(
    a2a_agents: list[A2AConfig],
    task_description: str,
    agent_cards: dict[str, AgentCard],
    conversation_history: list[Message] | None = None,
    turn_num: int = 0,
    max_turns: int | None = None,
    failed_agents: dict[str, str] | None = None,
) -> str:
    """Add A2A delegation instructions to the prompt.

    Args:
        a2a_agents: List of A2A agent configurations.
        task_description: Original task description.
        agent_cards: Dictionary mapping agent IDs to AgentCards.
        conversation_history: Previous A2A messages from the conversation.
        turn_num: Current turn number (0-indexed).
        max_turns: Maximum allowed turns (from config).
        failed_agents: Dictionary mapping failed agent endpoints to error messages.

    Returns:
        Augmented task description with A2A instructions.
    """
    if not agent_cards:
        return task_description

    agents_text = ""

    for config in a2a_agents:
        if config.endpoint in agent_cards:
            card = agent_cards[config.endpoint]
            agents_text += f"\n{card.model_dump_json(indent=2, exclude_none=True, include={'description', 'url', 'skills'})}\n"

    failed_agents = failed_agents or {}
    if failed_agents:
        agents_text += "\n<!-- Unavailable Agents -->\n"
        for endpoint, error in failed_agents.items():
            agents_text += f"\n<!-- Agent: {endpoint}\n Status: Unavailable\n Error: {error} -->\n"

    agents_text = AVAILABLE_AGENTS_TEMPLATE.substitute(available_a2a_agents=agents_text)

    history_text = ""
    if conversation_history:
        for msg in conversation_history:
            history_text += f"\n{msg.model_dump_json(indent=2, exclude_none=True, exclude={'message_id'})}\n"

        history_text = PREVIOUS_A2A_CONVERSATION_TEMPLATE.substitute(
            previous_a2a_conversation=history_text
        )
    turn_info = ""

    if max_turns is not None and conversation_history:
        turn_count = turn_num + 1
        warning = ""
        if turn_count >= max_turns:
            warning = (
                "CRITICAL: This is the FINAL turn. You MUST conclude the conversation now.\n"
                "Set is_a2a=false and provide your final response to complete the task."
            )
        elif turn_count == max_turns - 1:
            warning = "WARNING: Next turn will be the last. Consider wrapping up the conversation."

        turn_info = CONVERSATION_TURN_INFO_TEMPLATE.substitute(
            turn_count=turn_count,
            max_turns=max_turns,
            warning=warning,
        )

    return f"""{task_description}

IMPORTANT: You have the ability to delegate this task to remote A2A agents.

{agents_text}
{history_text}{turn_info}


"""


def _parse_agent_response(
    raw_result: str | dict[str, Any], agent_response_model: type[BaseModel]
) -> BaseModel | str:
    """Parse LLM output as an AgentResponse, or return the raw result.

    Args:
        raw_result: Raw output from the LLM.
        agent_response_model: The agent response model.

    Returns:
        Parsed AgentResponse, or the raw string if validation fails.
    """
    if agent_response_model:
        try:
            if isinstance(raw_result, str):
                return agent_response_model.model_validate_json(raw_result)
            if isinstance(raw_result, dict):
                return agent_response_model.model_validate(raw_result)
        except ValidationError:
            return cast(str, raw_result)
    return cast(str, raw_result)
def _handle_agent_response_and_continue(
    self: Agent,
    a2a_result: dict[str, Any],
    agent_id: str,
    agent_cards: dict[str, AgentCard] | None,
    a2a_agents: list[A2AConfig],
    original_task_description: str,
    conversation_history: list[Message],
    turn_num: int,
    max_turns: int,
    task: Task,
    original_fn: Callable[..., str],
    context: str | None,
    tools: list[BaseTool] | None,
    agent_response_model: type[BaseModel],
) -> tuple[str | None, str | None]:
    """Handle an A2A result and get the CrewAI agent's response.

    Args:
        self: The agent instance.
        a2a_result: Result from A2A delegation.
        agent_id: ID of the A2A agent.
        agent_cards: Pre-fetched agent cards.
        a2a_agents: List of A2A configurations.
        original_task_description: Original task description.
        conversation_history: Conversation history.
        turn_num: Current turn number.
        max_turns: Maximum turns allowed.
        task: The task being executed.
        original_fn: Original execute_task method.
        context: Optional context.
        tools: Optional tools.
        agent_response_model: Response model for parsing.

    Returns:
        Tuple of (final_result, current_request) where:
        - final_result is not None if the conversation should end
        - current_request is the next message to send if continuing
    """
    agent_cards_dict = agent_cards or {}
    if "agent_card" in a2a_result and agent_id not in agent_cards_dict:
        agent_cards_dict[agent_id] = a2a_result["agent_card"]

    task.description = _augment_prompt_with_a2a(
        a2a_agents=a2a_agents,
        task_description=original_task_description,
        conversation_history=conversation_history,
        turn_num=turn_num,
        max_turns=max_turns,
        agent_cards=agent_cards_dict,
    )

    raw_result = original_fn(self, task, context, tools)
    llm_response = _parse_agent_response(
        raw_result=raw_result, agent_response_model=agent_response_model
    )

    if isinstance(llm_response, BaseModel) and isinstance(
        llm_response, AgentResponseProtocol
    ):
        if not llm_response.is_a2a:
            final_turn_number = turn_num + 1
            crewai_event_bus.emit(
                None,
                A2AMessageSentEvent(
                    message=str(llm_response.message),
                    turn_number=final_turn_number,
                    is_multiturn=True,
                    agent_role=self.role,
                ),
            )
            crewai_event_bus.emit(
                None,
                A2AConversationCompletedEvent(
                    status="completed",
                    final_result=str(llm_response.message),
                    error=None,
                    total_turns=final_turn_number,
                ),
            )
            return str(llm_response.message), None
        return None, str(llm_response.message)

    return str(raw_result), None


def _delegate_to_a2a(
    self: Agent,
    agent_response: AgentResponseProtocol,
    task: Task,
    original_fn: Callable[..., str],
    context: str | None,
    tools: list[BaseTool] | None,
    agent_cards: dict[str, AgentCard] | None = None,
    original_task_description: str | None = None,
) -> str:
    """Delegate to an A2A agent with multi-turn conversation support.

    Args:
        self: The agent instance.
        agent_response: The AgentResponse indicating delegation.
        task: The task being executed (for extracting A2A fields).
        original_fn: The original execute_task method for follow-ups.
        context: Optional context for task execution.
        tools: Optional tools available to the agent.
        agent_cards: Pre-fetched agent cards from _execute_task_with_a2a.
        original_task_description: The original task description before A2A augmentation.

    Returns:
        Result from the A2A agent.

    Raises:
        ImportError: If a2a-sdk is not installed.
    """
    a2a_agents, agent_response_model = get_a2a_agents_and_response_model(self.a2a)
    agent_ids = tuple(config.endpoint for config in a2a_agents)
    current_request = str(agent_response.message)
    agent_id = agent_response.a2a_ids[0]

    if agent_id not in agent_ids:
        raise ValueError(
            f"Unknown A2A agent ID(s): {agent_response.a2a_ids} not in {agent_ids}"
        )

    agent_config = next(filter(lambda x: x.endpoint == agent_id, a2a_agents))
    task_config = task.config or {}
    context_id = task_config.get("context_id")
    task_id_config = task_config.get("task_id")
    reference_task_ids = task_config.get("reference_task_ids")
    metadata = task_config.get("metadata")
    extensions = task_config.get("extensions")

    if original_task_description is None:
        original_task_description = task.description

    conversation_history: list[Message] = []
    max_turns = agent_config.max_turns

    try:
        for turn_num in range(max_turns):
            console_formatter = getattr(crewai_event_bus, "_console", None)
            agent_branch = None
            if console_formatter:
                agent_branch = getattr(
                    console_formatter, "current_agent_branch", None
                ) or getattr(console_formatter, "current_task_branch", None)

            a2a_result = execute_a2a_delegation(
                endpoint=agent_config.endpoint,
                auth=agent_config.auth,
                timeout=agent_config.timeout,
                task_description=current_request,
                context_id=context_id,
                task_id=task_id_config,
                reference_task_ids=reference_task_ids,
                metadata=metadata,
                extensions=extensions,
                conversation_history=conversation_history,
                agent_id=agent_id,
                agent_role=Role.user,
                agent_branch=agent_branch,
                response_model=agent_config.response_model,
                turn_number=turn_num + 1,
            )

            conversation_history = a2a_result.get("history", [])

            if a2a_result["status"] in ["completed", "input_required"]:
                final_result, next_request = _handle_agent_response_and_continue(
                    self=self,
                    a2a_result=a2a_result,
                    agent_id=agent_id,
                    agent_cards=agent_cards,
                    a2a_agents=a2a_agents,
                    original_task_description=original_task_description,
                    conversation_history=conversation_history,
                    turn_num=turn_num,
                    max_turns=max_turns,
                    task=task,
                    original_fn=original_fn,
                    context=context,
                    tools=tools,
                    agent_response_model=agent_response_model,
                )

                if final_result is not None:
                    return final_result

                if next_request is not None:
                    current_request = next_request

                continue

            error_msg = a2a_result.get("error", "Unknown error")
            crewai_event_bus.emit(
                None,
                A2AConversationCompletedEvent(
                    status="failed",
                    final_result=None,
                    error=error_msg,
                    total_turns=turn_num + 1,
                ),
            )
            raise Exception(f"A2A delegation failed: {error_msg}")

        if conversation_history:
            for msg in reversed(conversation_history):
                if msg.role == Role.agent:
                    text_parts = [
                        part.root.text for part in msg.parts if part.root.kind == "text"
                    ]
                    final_message = (
                        " ".join(text_parts) if text_parts else "Conversation completed"
                    )
                    crewai_event_bus.emit(
                        None,
                        A2AConversationCompletedEvent(
                            status="completed",
                            final_result=final_message,
                            error=None,
                            total_turns=max_turns,
                        ),
                    )
                    return final_message

        crewai_event_bus.emit(
            None,
            A2AConversationCompletedEvent(
                status="failed",
                final_result=None,
                error=f"Conversation exceeded maximum turns ({max_turns})",
                total_turns=max_turns,
            ),
        )
        raise Exception(f"A2A conversation exceeded maximum turns ({max_turns})")

    finally:
        task.description = original_task_description
5
lib/crewai/src/crewai/agent/__init__.py
Normal file
@@ -0,0 +1,5 @@
from crewai.agent.core import Agent
from crewai.utilities.training_handler import CrewTrainingHandler


__all__ = ["Agent", "CrewTrainingHandler"]
@@ -2,27 +2,27 @@ from __future__ import annotations

import asyncio
from collections.abc import Sequence
import json
import shutil
import subprocess
import time
from typing import (
    TYPE_CHECKING,
    Any,
    Final,
    Literal,
    cast,
)
from urllib.parse import urlparse

from pydantic import BaseModel, Field, InstanceOf, PrivateAttr, model_validator
from typing_extensions import Self

from crewai.a2a.config import A2AConfig
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.agents.cache.cache_handler import CacheHandler
from crewai.agents.crew_agent_executor import CrewAgentExecutor
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.agent_events import (
    AgentExecutionCompletedEvent,
    AgentExecutionErrorEvent,
    AgentExecutionStartedEvent,
)
from crewai.events.types.knowledge_events import (
    KnowledgeQueryCompletedEvent,
    KnowledgeQueryFailedEvent,
@@ -70,14 +70,14 @@ if TYPE_CHECKING:


# MCP Connection timeout constants (in seconds)
MCP_CONNECTION_TIMEOUT = 10
MCP_TOOL_EXECUTION_TIMEOUT = 30
MCP_DISCOVERY_TIMEOUT = 15
MCP_MAX_RETRIES = 3
MCP_CONNECTION_TIMEOUT: Final[int] = 10
MCP_TOOL_EXECUTION_TIMEOUT: Final[int] = 30
MCP_DISCOVERY_TIMEOUT: Final[int] = 15
MCP_MAX_RETRIES: Final[int] = 3

# Simple in-memory cache for MCP tool schemas (duration: 5 minutes)
_mcp_schema_cache = {}
_cache_ttl = 300  # 5 minutes
_mcp_schema_cache: dict[str, Any] = {}
_cache_ttl: Final[int] = 300  # 5 minutes


class Agent(BaseAgent):
@@ -197,6 +197,10 @@ class Agent(BaseAgent):
    guardrail_max_retries: int = Field(
        default=3, description="Maximum number of retries when guardrail fails"
    )
    a2a: list[A2AConfig] | A2AConfig | None = Field(
        default=None,
        description="A2A (Agent-to-Agent) configuration for delegating tasks to remote agents. Can be a single A2AConfig or a list of configs.",
    )

    @model_validator(mode="before")
    def validate_from_repository(cls, v: Any) -> dict[str, Any] | None | Any:  # noqa: N805
@@ -305,17 +309,19 @@ class Agent(BaseAgent):
        # If the task requires output in JSON or Pydantic format,
        # append specific instructions to the task prompt to ensure
        # that the final answer does not include any code block markers
        if task.output_json or task.output_pydantic:
        # Skip this if task.response_model is set, as native structured outputs handle schema automatically
        if (task.output_json or task.output_pydantic) and not task.response_model:
            # Generate the schema based on the output format
            if task.output_json:
                # schema = json.dumps(task.output_json, indent=2)
                schema = generate_model_description(task.output_json)
                schema_dict = generate_model_description(task.output_json)
                schema = json.dumps(schema_dict["json_schema"]["schema"], indent=2)
                task_prompt += "\n" + self.i18n.slice(
                    "formatted_task_instructions"
                ).format(output_format=schema)

            elif task.output_pydantic:
                schema = generate_model_description(task.output_pydantic)
                schema_dict = generate_model_description(task.output_pydantic)
                schema = json.dumps(schema_dict["json_schema"]["schema"], indent=2)
                task_prompt += "\n" + self.i18n.slice(
                    "formatted_task_instructions"
                ).format(output_format=schema)
@@ -438,6 +444,13 @@ class Agent(BaseAgent):
        else:
            task_prompt = self._use_trained_data(task_prompt=task_prompt)

        # Import agent events locally to avoid circular imports
        from crewai.events.types.agent_events import (
            AgentExecutionCompletedEvent,
            AgentExecutionErrorEvent,
            AgentExecutionStartedEvent,
        )

        try:
            crewai_event_bus.emit(
                self,
@@ -618,6 +631,7 @@ class Agent(BaseAgent):
                self._rpm_controller.check_or_wait if self._rpm_controller else None
            ),
            callbacks=[TokenCalcHandler(self._token_process)],
            response_model=task.response_model if task else None,
        )

    def get_delegation_tools(self, agents: list[BaseAgent]) -> list[BaseTool]:
@@ -709,7 +723,7 @@ class Agent(BaseAgent):
                        f"Specific tool '{specific_tool}' not found on MCP server: {server_url}",
                    )

            return tools
            return cast(list[BaseTool], tools)

        except Exception as e:
            self._logger.log(
@@ -739,9 +753,9 @@ class Agent(BaseAgent):

        return tools

    def _extract_server_name(self, server_url: str) -> str:
    @staticmethod
    def _extract_server_name(server_url: str) -> str:
        """Extract clean server name from URL for tool prefixing."""
        from urllib.parse import urlparse

        parsed = urlparse(server_url)
        domain = parsed.netloc.replace(".", "_")
@@ -778,7 +792,9 @@ class Agent(BaseAgent):
        )
        return {}

    async def _get_mcp_tool_schemas_async(self, server_params: dict) -> dict[str, dict]:
    async def _get_mcp_tool_schemas_async(
        self, server_params: dict[str, Any]
    ) -> dict[str, dict]:
        """Async implementation of MCP tool schema retrieval with timeouts and retries."""
        server_url = server_params["url"]
        return await self._retry_mcp_discovery(
@@ -787,7 +803,7 @@ class Agent(BaseAgent):

    async def _retry_mcp_discovery(
        self, operation_func, server_url: str
    ) -> dict[str, dict]:
    ) -> dict[str, dict[str, Any]]:
        """Retry MCP discovery operation with exponential backoff, avoiding try-except in loop."""
        last_error = None

@@ -815,9 +831,10 @@ class Agent(BaseAgent):
            f"Failed to discover MCP tools after {MCP_MAX_RETRIES} attempts: {last_error}"
        )

    @staticmethod
    async def _attempt_mcp_discovery(
        self, operation_func, server_url: str
    ) -> tuple[dict[str, dict] | None, str, bool]:
        operation_func, server_url: str
    ) -> tuple[dict[str, dict[str, Any]] | None, str, bool]:
        """Attempt single MCP discovery operation and return (result, error_message, should_retry)."""
        try:
            result = await operation_func(server_url)
@@ -851,13 +868,13 @@ class Agent(BaseAgent):

    async def _discover_mcp_tools_with_timeout(
        self, server_url: str
    ) -> dict[str, dict]:
    ) -> dict[str, dict[str, Any]]:
        """Discover MCP tools with timeout wrapper."""
        return await asyncio.wait_for(
            self._discover_mcp_tools(server_url), timeout=MCP_DISCOVERY_TIMEOUT
        )

    async def _discover_mcp_tools(self, server_url: str) -> dict[str, dict]:
    async def _discover_mcp_tools(self, server_url: str) -> dict[str, dict[str, Any]]:
        """Discover tools from MCP server with proper timeout handling."""
        from mcp import ClientSession
        from mcp.client.streamable_http import streamablehttp_client
@@ -889,7 +906,9 @@ class Agent(BaseAgent):
        }
        return schemas

    def _json_schema_to_pydantic(self, tool_name: str, json_schema: dict) -> type:
    def _json_schema_to_pydantic(
        self, tool_name: str, json_schema: dict[str, Any]
    ) -> type:
        """Convert JSON Schema to Pydantic model for tool arguments.

        Args:
@@ -926,7 +945,7 @@ class Agent(BaseAgent):
        model_name = f"{tool_name.replace('-', '_').replace(' ', '_')}Schema"
        return create_model(model_name, **field_definitions)

    def _json_type_to_python(self, field_schema: dict) -> type:
    def _json_type_to_python(self, field_schema: dict[str, Any]) -> type:
        """Convert JSON Schema type to Python type.

        Args:
@@ -935,7 +954,6 @@ class Agent(BaseAgent):
        Returns:
            Python type
        """
        from typing import Any

        json_type = field_schema.get("type")

@@ -965,13 +983,15 @@ class Agent(BaseAgent):

        return type_mapping.get(json_type, Any)

    def _fetch_amp_mcp_servers(self, mcp_name: str) -> list[dict]:
    @staticmethod
    def _fetch_amp_mcp_servers(mcp_name: str) -> list[dict]:
        """Fetch MCP server configurations from CrewAI AMP API."""
        # TODO: Implement AMP API call to "integrations/mcps" endpoint
        # Should return list of server configs with URLs
        return []

    def get_multimodal_tools(self) -> Sequence[BaseTool]:
    @staticmethod
    def get_multimodal_tools() -> Sequence[BaseTool]:
        from crewai.tools.agent_tools.add_image_tool import AddImageTool

        return [AddImageTool()]
@@ -991,8 +1011,9 @@ class Agent(BaseAgent):
        )
        return []

    @staticmethod
    def get_output_converter(
        self, llm: BaseLLM, text: str, model: type[BaseModel], instructions: str
        llm: BaseLLM, text: str, model: type[BaseModel], instructions: str
    ) -> Converter:
        return Converter(llm=llm, text=text, model=model, instructions=instructions)

@@ -1022,7 +1043,8 @@ class Agent(BaseAgent):
        )
        return task_prompt

    def _render_text_description(self, tools: list[Any]) -> str:
    @staticmethod
    def _render_text_description(tools: list[Any]) -> str:
        """Render the tool name and description in plain text.

        Output will be in the format of:
0
lib/crewai/src/crewai/agent/internal/__init__.py
Normal file
76
lib/crewai/src/crewai/agent/internal/meta.py
Normal file
@@ -0,0 +1,76 @@
"""Generic metaclass for agent extensions.

This metaclass enables extension capabilities for agents by detecting
extension fields in class annotations and applying appropriate wrappers.
"""

import warnings
from functools import wraps
from typing import Any

from pydantic import model_validator
from pydantic._internal._model_construction import ModelMetaclass


class AgentMeta(ModelMetaclass):
    """Generic metaclass for agent extensions.

    Detects extension fields (like 'a2a') in class annotations and applies
    the appropriate wrapper logic to enable extension functionality.
    """

    def __new__(
        mcs,
        name: str,
        bases: tuple[type, ...],
        namespace: dict[str, Any],
        **kwargs: Any,
    ) -> type:
        """Create a new class with extension support.

        Args:
            name: The name of the class being created
            bases: Base classes
            namespace: Class namespace dictionary
            **kwargs: Additional keyword arguments

        Returns:
            The newly created class with extension support if applicable
        """
        orig_post_init_setup = namespace.get("post_init_setup")

        if orig_post_init_setup is not None:
            original_func = (
                orig_post_init_setup.wrapped
                if hasattr(orig_post_init_setup, "wrapped")
                else orig_post_init_setup
            )

            def post_init_setup_with_extensions(self: Any) -> Any:
                """Wrap post_init_setup to apply extensions after initialization.

                Args:
                    self: The agent instance

                Returns:
                    The agent instance
                """
                result = original_func(self)

                a2a_value = getattr(self, "a2a", None)
                if a2a_value is not None:
                    from crewai.a2a.wrapper import wrap_agent_with_a2a_instance

                    wrap_agent_with_a2a_instance(self)

                return result

            with warnings.catch_warnings():
                warnings.filterwarnings(
                    "ignore", message=".*overrides an existing Pydantic.*"
                )
                namespace["post_init_setup"] = model_validator(mode="after")(
                    post_init_setup_with_extensions
                )

        return super().__new__(mcs, name, bases, namespace, **kwargs)
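The `AgentMeta` hunk above intercepts `post_init_setup` in the class namespace and re-registers a wrapped version as a Pydantic validator. The core trick works with any metaclass; a minimal self-contained sketch (illustrative names, not the crewAI API):

```python
from typing import Any


class ExtensionMeta(type):
    """Wrap a hook method found in the class namespace so extensions run after it."""

    def __new__(mcs, name: str, bases: tuple[type, ...], namespace: dict[str, Any]):
        original = namespace.get("setup")
        if original is not None:

            def setup_with_extensions(self: Any) -> Any:
                result = original(self)
                ext = getattr(self, "extension", None)  # extension field, if any
                if ext is not None:
                    ext(self)  # apply the extension wrapper to the instance
                return result

            namespace["setup"] = setup_with_extensions
        return super().__new__(mcs, name, bases, namespace)


class Agent(metaclass=ExtensionMeta):
    def __init__(self, extension=None) -> None:
        self.extension = extension
        self.setup()

    def setup(self) -> "Agent":
        self.ready = True
        return self


calls: list = []
agent = Agent(extension=calls.append)  # the extension records the instance it wrapped
```

The real implementation does the same rewrite but registers the wrapper through `model_validator(mode="after")` so Pydantic invokes it post-construction.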
@@ -18,6 +18,7 @@ from pydantic import (
 from pydantic_core import PydanticCustomError
 from typing_extensions import Self

+from crewai.agent.internal.meta import AgentMeta
 from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
 from crewai.agents.cache.cache_handler import CacheHandler
 from crewai.agents.tools_handler import ToolsHandler
@@ -56,7 +57,7 @@ PlatformApp = Literal[
 PlatformAppOrAction = PlatformApp | str


-class BaseAgent(BaseModel, ABC):
+class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
     """Abstract Base Class for all third party agents compatible with CrewAI.

     Attributes:
@@ -9,7 +9,7 @@ from __future__ import annotations
 from collections.abc import Callable
 from typing import TYPE_CHECKING, Any, Literal, cast

-from pydantic import GetCoreSchemaHandler
+from pydantic import BaseModel, GetCoreSchemaHandler
 from pydantic_core import CoreSchema, core_schema

 from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
@@ -65,7 +65,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):

     def __init__(
         self,
-        llm: BaseLLM | Any,
+        llm: BaseLLM | Any | None,
         task: Task,
         crew: Crew,
         agent: Agent,
@@ -82,6 +82,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
         respect_context_window: bool = False,
         request_within_rpm_limit: Callable[[], bool] | None = None,
         callbacks: list[Any] | None = None,
+        response_model: type[BaseModel] | None = None,
     ) -> None:
         """Initialize executor.

@@ -103,6 +104,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
             respect_context_window: Respect context limits.
             request_within_rpm_limit: RPM limit check function.
             callbacks: Optional callbacks list.
+            response_model: Optional Pydantic model for structured outputs.
         """
         self._i18n: I18N = I18N()
         self.llm = llm
@@ -123,6 +125,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
         self.function_calling_llm = function_calling_llm
         self.respect_context_window = respect_context_window
         self.request_within_rpm_limit = request_within_rpm_limit
+        self.response_model = response_model
         self.ask_for_human_input = False
         self.messages: list[LLMMessage] = []
         self.iterations = 0
@@ -221,6 +224,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
             printer=self._printer,
             from_task=self.task,
             from_agent=self.agent,
+            response_model=self.response_model,
         )
         formatted_answer = process_llm_response(answer, self.use_stop_words)

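This hunk threads an optional `response_model` from the executor down to the LLM call so the reply can be validated against a schema instead of returned as raw text. A minimal stdlib sketch of the same pattern (the real code chain uses Pydantic models; `Answer` and `call_llm` here are hypothetical stand-ins):

```python
import json
from dataclasses import dataclass


@dataclass
class Answer:
    title: str
    score: int


def call_llm(prompt: str, response_model=None):
    """Return raw text, or a validated object when a response model is threaded through."""
    raw = '{"title": "ok", "score": 7}'  # stand-in for the model's reply
    if response_model is None:
        return raw
    # Parse the JSON reply into the requested structured type.
    return response_model(**json.loads(raw))


structured = call_llm("rate this", response_model=Answer)
```

Passing `response_model=None` preserves the old raw-text behavior, which is why the parameter can default to `None` at every layer of the chain.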
@@ -3,10 +3,17 @@ import json
 import os
 from pathlib import Path
+import sys
+from typing import BinaryIO, cast

 from cryptography.fernet import Fernet


+if sys.platform == "win32":
+    import msvcrt
+else:
+    import fcntl


 class TokenManager:
     def __init__(self, file_path: str = "tokens.enc") -> None:
         """
@@ -18,16 +25,69 @@ class TokenManager:
         self.key = self._get_or_create_key()
         self.fernet = Fernet(self.key)

+    @staticmethod
+    def _acquire_lock(file_handle: BinaryIO) -> None:
+        """
+        Acquire an exclusive lock on a file handle.
+
+        Args:
+            file_handle: Open file handle to lock.
+        """
+        if sys.platform == "win32":
+            msvcrt.locking(file_handle.fileno(), msvcrt.LK_LOCK, 1)
+        else:
+            fcntl.flock(file_handle.fileno(), fcntl.LOCK_EX)
+
+    @staticmethod
+    def _release_lock(file_handle: BinaryIO) -> None:
+        """
+        Release the lock on a file handle.
+
+        Args:
+            file_handle: Open file handle to unlock.
+        """
+        if sys.platform == "win32":
+            msvcrt.locking(file_handle.fileno(), msvcrt.LK_UNLCK, 1)
+        else:
+            fcntl.flock(file_handle.fileno(), fcntl.LOCK_UN)
+
     def _get_or_create_key(self) -> bytes:
         """
-        Get or create the encryption key.
+        Get or create the encryption key with file locking to prevent race conditions.

-        :return: The encryption key.
+        Returns:
+            The encryption key.
         """
         key_filename = "secret.key"
-        key = self.read_secure_file(key_filename)
+        storage_path = self.get_secure_storage_path()

-        if key is not None:
+        key = self.read_secure_file(key_filename)
+        if key is not None and len(key) == 44:
             return key

+        lock_file_path = storage_path / f"{key_filename}.lock"
+
+        try:
+            lock_file_path.touch()
+
+            with open(lock_file_path, "r+b") as lock_file:
+                self._acquire_lock(lock_file)
+                try:
+                    key = self.read_secure_file(key_filename)
+                    if key is not None and len(key) == 44:
+                        return key
+
+                    new_key = Fernet.generate_key()
+                    self.save_secure_file(key_filename, new_key)
+                    return new_key
+                finally:
+                    try:
+                        self._release_lock(lock_file)
+                    except OSError:
+                        pass
+        except OSError:
+            key = self.read_secure_file(key_filename)
+            if key is not None and len(key) == 44:
+                return key
+
         new_key = Fernet.generate_key()
@@ -59,14 +119,14 @@ class TokenManager:
         if encrypted_data is None:
             return None

-        decrypted_data = self.fernet.decrypt(encrypted_data)  # type: ignore
+        decrypted_data = self.fernet.decrypt(encrypted_data)
         data = json.loads(decrypted_data)

         expiration = datetime.fromisoformat(data["expiration"])
         if expiration <= datetime.now():
             return None

-        return data["access_token"]
+        return cast(str | None, data["access_token"])

     def clear_tokens(self) -> None:
         """
@@ -74,20 +134,18 @@ class TokenManager:
         """
         self.delete_secure_file(self.file_path)

-    def get_secure_storage_path(self) -> Path:
+    @staticmethod
+    def get_secure_storage_path() -> Path:
         """
         Get the secure storage path based on the operating system.

         :return: The secure storage path.
         """
         if sys.platform == "win32":
             # Windows: Use %LOCALAPPDATA%
             base_path = os.environ.get("LOCALAPPDATA")
         elif sys.platform == "darwin":
             # macOS: Use ~/Library/Application Support
             base_path = os.path.expanduser("~/Library/Application Support")
         else:
             # Linux and other Unix-like: Use ~/.local/share
             base_path = os.path.expanduser("~/.local/share")

         app_name = "crewai/credentials"
@@ -110,7 +168,6 @@ class TokenManager:
         with open(file_path, "wb") as f:
             f.write(content)

-        # Set appropriate permissions (read/write for owner only)
         os.chmod(file_path, 0o600)

     def read_secure_file(self, filename: str) -> bytes | None:
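The locking change above implements double-checked key creation: check without the lock, re-check after acquiring an exclusive advisory lock, and fall back to a lock-free re-read if locking itself fails. A standalone sketch of the same pattern (`get_or_create` and the helper names are illustrative; it mirrors the `fcntl`/`msvcrt` split from the hunk):

```python
import os
import sys
import tempfile
from collections.abc import Callable

if sys.platform == "win32":
    import msvcrt
else:
    import fcntl


def _lock(fh) -> None:
    """Take an exclusive lock on an open file handle."""
    if sys.platform == "win32":
        msvcrt.locking(fh.fileno(), msvcrt.LK_LOCK, 1)
    else:
        fcntl.flock(fh.fileno(), fcntl.LOCK_EX)


def _unlock(fh) -> None:
    """Release the lock taken by _lock."""
    if sys.platform == "win32":
        msvcrt.locking(fh.fileno(), msvcrt.LK_UNLCK, 1)
    else:
        fcntl.flock(fh.fileno(), fcntl.LOCK_UN)


def get_or_create(path: str, make: Callable[[], bytes]) -> bytes:
    """Double-checked creation: check, lock, re-check, then create."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            return f.read()
    with open(path + ".lock", "a+b") as lock_file:
        _lock(lock_file)
        try:
            if os.path.exists(path):  # another process won the race
                with open(path, "rb") as f:
                    return f.read()
            value = make()
            with open(path, "wb") as f:
                f.write(value)
            return value
        finally:
            _unlock(lock_file)


key_path = os.path.join(tempfile.mkdtemp(), "secret.key")
first = get_or_create(key_path, lambda: b"KEY-A")
second = get_or_create(key_path, lambda: b"KEY-B")  # existing key wins
```

The re-check after acquiring the lock is what closes the window where two processes both see a missing key and each generate their own.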
@@ -8,21 +8,15 @@ This module provides the event infrastructure that allows users to:
 - Declare handler dependencies for ordered execution
 """

+from __future__ import annotations
+
+from typing import TYPE_CHECKING
+
 from crewai.events.base_event_listener import BaseEventListener
 from crewai.events.depends import Depends
 from crewai.events.event_bus import crewai_event_bus
 from crewai.events.handler_graph import CircularDependencyError
-from crewai.events.types.agent_events import (
-    AgentEvaluationCompletedEvent,
-    AgentEvaluationFailedEvent,
-    AgentEvaluationStartedEvent,
-    AgentExecutionCompletedEvent,
-    AgentExecutionErrorEvent,
-    AgentExecutionStartedEvent,
-    LiteAgentExecutionCompletedEvent,
-    LiteAgentExecutionErrorEvent,
-    LiteAgentExecutionStartedEvent,
-)
 from crewai.events.types.crew_events import (
     CrewKickoffCompletedEvent,
     CrewKickoffFailedEvent,
@@ -100,6 +94,20 @@ from crewai.events.types.tool_usage_events import (
 )


+if TYPE_CHECKING:
+    from crewai.events.types.agent_events import (
+        AgentEvaluationCompletedEvent,
+        AgentEvaluationFailedEvent,
+        AgentEvaluationStartedEvent,
+        AgentExecutionCompletedEvent,
+        AgentExecutionErrorEvent,
+        AgentExecutionStartedEvent,
+        LiteAgentExecutionCompletedEvent,
+        LiteAgentExecutionErrorEvent,
+        LiteAgentExecutionStartedEvent,
+    )


 __all__ = [
     "AgentEvaluationCompletedEvent",
     "AgentEvaluationFailedEvent",
@@ -170,3 +178,27 @@ __all__ = [
     "ToolValidateInputErrorEvent",
     "crewai_event_bus",
 ]

+_AGENT_EVENT_MAPPING = {
+    "AgentEvaluationCompletedEvent": "crewai.events.types.agent_events",
+    "AgentEvaluationFailedEvent": "crewai.events.types.agent_events",
+    "AgentEvaluationStartedEvent": "crewai.events.types.agent_events",
+    "AgentExecutionCompletedEvent": "crewai.events.types.agent_events",
+    "AgentExecutionErrorEvent": "crewai.events.types.agent_events",
+    "AgentExecutionStartedEvent": "crewai.events.types.agent_events",
+    "LiteAgentExecutionCompletedEvent": "crewai.events.types.agent_events",
+    "LiteAgentExecutionErrorEvent": "crewai.events.types.agent_events",
+    "LiteAgentExecutionStartedEvent": "crewai.events.types.agent_events",
+}
+
+
+def __getattr__(name: str):
+    """Lazy import for agent events to avoid circular imports."""
+    if name in _AGENT_EVENT_MAPPING:
+        import importlib
+
+        module_path = _AGENT_EVENT_MAPPING[name]
+        module = importlib.import_module(module_path)
+        return getattr(module, name)
+    msg = f"module {__name__!r} has no attribute {name!r}"
+    raise AttributeError(msg)
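The `__getattr__` hunk relies on PEP 562 module-level attribute hooks: the `agent_events` import is deferred until first attribute access, which breaks the import cycle while keeping `__all__` stable. A self-contained demonstration using an in-memory module (`demo_events` and the `sqrt` mapping are illustrative):

```python
import importlib
import sys
import types

# Build a module in memory; on disk this would just be module-level code.
mod = types.ModuleType("demo_events")
_LAZY = {"sqrt": "math"}  # attribute name -> module that provides it


def _module_getattr(name: str):
    """PEP 562 hook: import the providing module only on first access."""
    if name in _LAZY:
        return getattr(importlib.import_module(_LAZY[name]), name)
    raise AttributeError(f"module 'demo_events' has no attribute {name!r}")


mod.__getattr__ = _module_getattr
sys.modules["demo_events"] = mod

import demo_events

root = demo_events.sqrt(9)  # resolved lazily through the module __getattr__
```

The `TYPE_CHECKING` import block in the hunk complements this: static type checkers still see the real symbols, while at runtime nothing is imported until it is used.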
@@ -1,16 +1,26 @@
+"""Base event listener for CrewAI event system."""
+
 from abc import ABC, abstractmethod

 from crewai.events.event_bus import CrewAIEventsBus, crewai_event_bus


 class BaseEventListener(ABC):
+    """Abstract base class for event listeners."""
+
     verbose: bool = False

-    def __init__(self):
+    def __init__(self) -> None:
+        """Initialize the event listener and register handlers."""
         super().__init__()
         self.setup_listeners(crewai_event_bus)
+        crewai_event_bus.validate_dependencies()

     @abstractmethod
-    def setup_listeners(self, crewai_event_bus: CrewAIEventsBus):
+    def setup_listeners(self, crewai_event_bus: CrewAIEventsBus) -> None:
+        """Setup event listeners on the event bus.
+
+        Args:
+            crewai_event_bus: The event bus to register listeners on.
+        """
         pass
@@ -1,12 +1,21 @@
 from __future__ import annotations

 from io import StringIO
-from typing import Any
+import threading
+from typing import TYPE_CHECKING, Any

 from pydantic import Field, PrivateAttr

 from crewai.events.base_event_listener import BaseEventListener
-from crewai.events.listeners.memory_listener import MemoryListener
 from crewai.events.listeners.tracing.trace_listener import TraceCollectionListener
+from crewai.events.types.a2a_events import (
+    A2AConversationCompletedEvent,
+    A2AConversationStartedEvent,
+    A2ADelegationCompletedEvent,
+    A2ADelegationStartedEvent,
+    A2AMessageSentEvent,
+    A2AResponseReceivedEvent,
+)
 from crewai.events.types.agent_events import (
     AgentExecutionCompletedEvent,
     AgentExecutionStartedEvent,
@@ -79,6 +88,10 @@ from crewai.utilities import Logger
 from crewai.utilities.constants import EMITTER_COLOR


+if TYPE_CHECKING:
+    from crewai.events.event_bus import CrewAIEventsBus


 class EventListener(BaseEventListener):
     _instance = None
     _telemetry: Telemetry = PrivateAttr(default_factory=lambda: Telemetry())
@@ -105,19 +118,24 @@ class EventListener(BaseEventListener):
             self.method_branches = {}
             self._initialized = True
             self.formatter = ConsoleFormatter(verbose=True)
+            self._crew_tree_lock = threading.Condition()

-            MemoryListener(formatter=self.formatter)
+            # Initialize trace listener with formatter for memory event handling
             trace_listener = TraceCollectionListener()
+            trace_listener.formatter = self.formatter

     # ----------- CREW EVENTS -----------

-    def setup_listeners(self, crewai_event_bus):
+    def setup_listeners(self, crewai_event_bus: CrewAIEventsBus) -> None:
         @crewai_event_bus.on(CrewKickoffStartedEvent)
-        def on_crew_started(source, event: CrewKickoffStartedEvent):
+        def on_crew_started(source, event: CrewKickoffStartedEvent) -> None:
+            with self._crew_tree_lock:
                 self.formatter.create_crew_tree(event.crew_name or "Crew", source.id)
                 self._telemetry.crew_execution_span(source, event.inputs)
+                self._crew_tree_lock.notify_all()

         @crewai_event_bus.on(CrewKickoffCompletedEvent)
-        def on_crew_completed(source, event: CrewKickoffCompletedEvent):
+        def on_crew_completed(source, event: CrewKickoffCompletedEvent) -> None:
             # Handle telemetry
             final_string_output = event.output.raw
             self._telemetry.end_crew(source, final_string_output)
@@ -131,7 +149,7 @@ class EventListener(BaseEventListener):
             )

         @crewai_event_bus.on(CrewKickoffFailedEvent)
-        def on_crew_failed(source, event: CrewKickoffFailedEvent):
+        def on_crew_failed(source, event: CrewKickoffFailedEvent) -> None:
             self.formatter.update_crew_tree(
                 self.formatter.current_crew_tree,
                 event.crew_name or "Crew",
@@ -140,23 +158,23 @@ class EventListener(BaseEventListener):
             )

         @crewai_event_bus.on(CrewTrainStartedEvent)
-        def on_crew_train_started(source, event: CrewTrainStartedEvent):
+        def on_crew_train_started(source, event: CrewTrainStartedEvent) -> None:
             self.formatter.handle_crew_train_started(
                 event.crew_name or "Crew", str(event.timestamp)
             )

         @crewai_event_bus.on(CrewTrainCompletedEvent)
-        def on_crew_train_completed(source, event: CrewTrainCompletedEvent):
+        def on_crew_train_completed(source, event: CrewTrainCompletedEvent) -> None:
             self.formatter.handle_crew_train_completed(
                 event.crew_name or "Crew", str(event.timestamp)
             )

         @crewai_event_bus.on(CrewTrainFailedEvent)
-        def on_crew_train_failed(source, event: CrewTrainFailedEvent):
+        def on_crew_train_failed(source, event: CrewTrainFailedEvent) -> None:
             self.formatter.handle_crew_train_failed(event.crew_name or "Crew")

         @crewai_event_bus.on(CrewTestResultEvent)
-        def on_crew_test_result(source, event: CrewTestResultEvent):
+        def on_crew_test_result(source, event: CrewTestResultEvent) -> None:
             self._telemetry.individual_test_result_span(
                 source.crew,
                 event.quality,
@@ -167,11 +185,19 @@ class EventListener(BaseEventListener):
         # ----------- TASK EVENTS -----------

         @crewai_event_bus.on(TaskStartedEvent)
-        def on_task_started(source, event: TaskStartedEvent):
+        def on_task_started(source, event: TaskStartedEvent) -> None:
             span = self._telemetry.task_started(crew=source.agent.crew, task=source)
             self.execution_spans[source] = span
-            # Pass both task ID and task name (if set)
-            task_name = source.name if hasattr(source, "name") and source.name else None

+            with self._crew_tree_lock:
+                self._crew_tree_lock.wait_for(
+                    lambda: self.formatter.current_crew_tree is not None, timeout=5.0
+                )
+
+            if self.formatter.current_crew_tree is not None:
+                task_name = (
+                    source.name if hasattr(source, "name") and source.name else None
+                )
+                self.formatter.create_task_branch(
+                    self.formatter.current_crew_tree, source.id, task_name
+                )
@@ -533,5 +559,61 @@ class EventListener(BaseEventListener):
                 event.verbose,
             )

+        @crewai_event_bus.on(A2ADelegationStartedEvent)
+        def on_a2a_delegation_started(source, event: A2ADelegationStartedEvent):
+            self.formatter.handle_a2a_delegation_started(
+                event.endpoint,
+                event.task_description,
+                event.agent_id,
+                event.is_multiturn,
+                event.turn_number,
+            )
+
+        @crewai_event_bus.on(A2ADelegationCompletedEvent)
+        def on_a2a_delegation_completed(source, event: A2ADelegationCompletedEvent):
+            self.formatter.handle_a2a_delegation_completed(
+                event.status,
+                event.result,
+                event.error,
+                event.is_multiturn,
+            )
+
+        @crewai_event_bus.on(A2AConversationStartedEvent)
+        def on_a2a_conversation_started(source, event: A2AConversationStartedEvent):
+            # Store A2A agent name for display in conversation tree
+            if event.a2a_agent_name:
+                self.formatter._current_a2a_agent_name = event.a2a_agent_name
+
+            self.formatter.handle_a2a_conversation_started(
+                event.agent_id,
+                event.endpoint,
+            )
+
+        @crewai_event_bus.on(A2AMessageSentEvent)
+        def on_a2a_message_sent(source, event: A2AMessageSentEvent):
+            self.formatter.handle_a2a_message_sent(
+                event.message,
+                event.turn_number,
+                event.agent_role,
+            )
+
+        @crewai_event_bus.on(A2AResponseReceivedEvent)
+        def on_a2a_response_received(source, event: A2AResponseReceivedEvent):
+            self.formatter.handle_a2a_response_received(
+                event.response,
+                event.turn_number,
+                event.status,
+                event.agent_role,
+            )
+
+        @crewai_event_bus.on(A2AConversationCompletedEvent)
+        def on_a2a_conversation_completed(source, event: A2AConversationCompletedEvent):
+            self.formatter.handle_a2a_conversation_completed(
+                event.status,
+                event.final_result,
+                event.error,
+                event.total_turns,
+            )


 event_listener = EventListener()
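The `_crew_tree_lock` changes above address a race: a task-started handler can fire on another thread before the crew-started handler has built the tree, so it now blocks on `Condition.wait_for` until the producer notifies (or the timeout expires). The pattern in isolation:

```python
import threading
import time

cond = threading.Condition()
state = {"tree": None}


def crew_started() -> None:
    """Producer: builds the shared tree, then wakes any waiters."""
    time.sleep(0.05)  # simulate the producer handler arriving late
    with cond:
        state["tree"] = "crew-tree"
        cond.notify_all()


def task_started(results: list) -> None:
    """Consumer: blocks until the tree exists (or the timeout expires)."""
    with cond:
        ok = cond.wait_for(lambda: state["tree"] is not None, timeout=5.0)
    results.append(state["tree"] if ok else None)


results: list = []
consumer = threading.Thread(target=task_started, args=(results,))
producer = threading.Thread(target=crew_started)
consumer.start()
producer.start()
consumer.join()
producer.join()
```

`wait_for` re-checks the predicate on every wakeup, so it is safe against spurious notifications, and the timeout keeps a missing producer from deadlocking the consumer.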
@@ -1,106 +0,0 @@
from crewai.events.base_event_listener import BaseEventListener
from crewai.events.types.memory_events import (
    MemoryQueryCompletedEvent,
    MemoryQueryFailedEvent,
    MemoryRetrievalCompletedEvent,
    MemoryRetrievalStartedEvent,
    MemorySaveCompletedEvent,
    MemorySaveFailedEvent,
    MemorySaveStartedEvent,
)


class MemoryListener(BaseEventListener):
    def __init__(self, formatter):
        super().__init__()
        self.formatter = formatter
        self.memory_retrieval_in_progress = False
        self.memory_save_in_progress = False

    def setup_listeners(self, crewai_event_bus):
        @crewai_event_bus.on(MemoryRetrievalStartedEvent)
        def on_memory_retrieval_started(source, event: MemoryRetrievalStartedEvent):
            if self.memory_retrieval_in_progress:
                return

            self.memory_retrieval_in_progress = True

            self.formatter.handle_memory_retrieval_started(
                self.formatter.current_agent_branch,
                self.formatter.current_crew_tree,
            )

        @crewai_event_bus.on(MemoryRetrievalCompletedEvent)
        def on_memory_retrieval_completed(source, event: MemoryRetrievalCompletedEvent):
            if not self.memory_retrieval_in_progress:
                return

            self.memory_retrieval_in_progress = False
            self.formatter.handle_memory_retrieval_completed(
                self.formatter.current_agent_branch,
                self.formatter.current_crew_tree,
                event.memory_content,
                event.retrieval_time_ms,
            )

        @crewai_event_bus.on(MemoryQueryCompletedEvent)
        def on_memory_query_completed(source, event: MemoryQueryCompletedEvent):
            if not self.memory_retrieval_in_progress:
                return

            self.formatter.handle_memory_query_completed(
                self.formatter.current_agent_branch,
                event.source_type,
                event.query_time_ms,
                self.formatter.current_crew_tree,
            )

        @crewai_event_bus.on(MemoryQueryFailedEvent)
        def on_memory_query_failed(source, event: MemoryQueryFailedEvent):
            if not self.memory_retrieval_in_progress:
                return

            self.formatter.handle_memory_query_failed(
                self.formatter.current_agent_branch,
                self.formatter.current_crew_tree,
                event.error,
                event.source_type,
            )

        @crewai_event_bus.on(MemorySaveStartedEvent)
        def on_memory_save_started(source, event: MemorySaveStartedEvent):
            if self.memory_save_in_progress:
                return

            self.memory_save_in_progress = True

            self.formatter.handle_memory_save_started(
                self.formatter.current_agent_branch,
                self.formatter.current_crew_tree,
            )

        @crewai_event_bus.on(MemorySaveCompletedEvent)
        def on_memory_save_completed(source, event: MemorySaveCompletedEvent):
            if not self.memory_save_in_progress:
                return

            self.memory_save_in_progress = False

            self.formatter.handle_memory_save_completed(
                self.formatter.current_agent_branch,
                self.formatter.current_crew_tree,
                event.save_time_ms,
                event.source_type,
            )

        @crewai_event_bus.on(MemorySaveFailedEvent)
        def on_memory_save_failed(source, event: MemorySaveFailedEvent):
            if not self.memory_save_in_progress:
                return

            self.formatter.handle_memory_save_failed(
                self.formatter.current_agent_branch,
                event.error,
                event.source_type,
                self.formatter.current_crew_tree,
            )
@@ -73,15 +73,19 @@ class FirstTimeTraceHandler:
         self.is_first_time = should_auto_collect_first_time_traces()
         return self.is_first_time

-    def set_batch_manager(self, batch_manager: TraceBatchManager):
-        """Set reference to batch manager for sending events."""
+    def set_batch_manager(self, batch_manager: TraceBatchManager) -> None:
+        """Set reference to batch manager for sending events.
+
+        Args:
+            batch_manager: The trace batch manager instance.
+        """
         self.batch_manager = batch_manager

-    def mark_events_collected(self):
+    def mark_events_collected(self) -> None:
         """Mark that events have been collected during execution."""
         self.collected_events = True

-    def handle_execution_completion(self):
+    def handle_execution_completion(self) -> None:
         """Handle the completion flow as shown in your diagram."""
         if not self.is_first_time or not self.collected_events:
             return
@@ -44,6 +44,7 @@ class TraceBatchManager:

     def __init__(self) -> None:
         self._init_lock = Lock()
+        self._batch_ready_cv = Condition(self._init_lock)
         self._pending_events_lock = Lock()
         self._pending_events_cv = Condition(self._pending_events_lock)
         self._pending_events_count = 0
@@ -94,6 +95,8 @@ class TraceBatchManager:
             )
             self.backend_initialized = True

+            self._batch_ready_cv.notify_all()
+
         return self.current_batch

     def _initialize_backend_batch(
@@ -161,13 +164,13 @@ class TraceBatchManager:
                 f"Error initializing trace batch: {e}. Continuing without tracing."
             )

-    def begin_event_processing(self):
-        """Mark that an event handler started processing (for synchronization)"""
+    def begin_event_processing(self) -> None:
+        """Mark that an event handler started processing (for synchronization)."""
         with self._pending_events_lock:
             self._pending_events_count += 1

-    def end_event_processing(self):
-        """Mark that an event handler finished processing (for synchronization)"""
+    def end_event_processing(self) -> None:
+        """Mark that an event handler finished processing (for synchronization)."""
         with self._pending_events_cv:
             self._pending_events_count -= 1
             if self._pending_events_count == 0:
@@ -385,6 +388,22 @@ class TraceBatchManager:
         """Check if batch is initialized"""
         return self.current_batch is not None

+    def wait_for_batch_initialization(self, timeout: float = 2.0) -> bool:
+        """Wait for batch to be initialized.
+
+        Args:
+            timeout: Maximum time to wait in seconds (default: 2.0)
+
+        Returns:
+            True if batch was initialized, False if timeout occurred
+        """
+        with self._batch_ready_cv:
+            if self.current_batch is not None:
+                return True
+            return self._batch_ready_cv.wait_for(
+                lambda: self.current_batch is not None, timeout=timeout
+            )
+
     def record_start_time(self, key: str):
         """Record start time for duration calculation"""
         self.execution_start_times[key] = datetime.now(timezone.utc)
@@ -1,10 +1,16 @@
|
||||
"""Trace collection listener for orchestrating trace collection."""
|
||||
|
||||
import os
|
||||
from typing import Any, ClassVar
|
||||
import uuid
|
||||
|
||||
from typing_extensions import Self
|
||||
|
||||
from crewai.cli.authentication.token import AuthError, get_auth_token
|
||||
from crewai.cli.version import get_crewai_version
|
||||
from crewai.events.base_event_listener import BaseEventListener
|
||||
from crewai.events.event_bus import CrewAIEventsBus
|
||||
from crewai.events.utils.console_formatter import ConsoleFormatter
|
||||
from crewai.events.listeners.tracing.first_time_trace_handler import (
|
||||
FirstTimeTraceHandler,
|
||||
)
|
||||
@@ -53,6 +59,8 @@ from crewai.events.types.memory_events import (
|
||||
MemoryQueryCompletedEvent,
|
||||
MemoryQueryFailedEvent,
|
||||
MemoryQueryStartedEvent,
|
||||
MemoryRetrievalCompletedEvent,
|
||||
MemoryRetrievalStartedEvent,
|
||||
MemorySaveCompletedEvent,
|
||||
MemorySaveFailedEvent,
|
||||
MemorySaveStartedEvent,
|
||||
@@ -75,9 +83,7 @@ from crewai.events.types.tool_usage_events import (
|
||||
|
||||
|
||||
class TraceCollectionListener(BaseEventListener):
|
||||
"""
|
||||
Trace collection listener that orchestrates trace collection
|
||||
"""
|
||||
"""Trace collection listener that orchestrates trace collection."""
|
||||
|
||||
complex_events: ClassVar[list[str]] = [
|
||||
"task_started",
|
||||
@@ -88,11 +94,12 @@ class TraceCollectionListener(BaseEventListener):
|
||||
"agent_execution_completed",
|
||||
]
|
||||
|
||||
_instance = None
|
||||
_initialized = False
|
||||
_listeners_setup = False
|
||||
_instance: Self | None = None
|
||||
_initialized: bool = False
|
||||
_listeners_setup: bool = False
|
||||
|
||||
def __new__(cls, batch_manager: TraceBatchManager | None = None):
|
||||
def __new__(cls, batch_manager: TraceBatchManager | None = None) -> Self:
|
||||
"""Create or return singleton instance."""
|
||||
if cls._instance is None:
|
||||
cls._instance = super().__new__(cls)
|
||||
return cls._instance
|
||||
@@ -100,7 +107,14 @@ class TraceCollectionListener(BaseEventListener):
|
||||
def __init__(
|
||||
self,
|
||||
batch_manager: TraceBatchManager | None = None,
|
||||
):
|
||||
formatter: ConsoleFormatter | None = None,
|
||||
) -> None:
|
||||
"""Initialize trace collection listener.
|
||||
|
||||
Args:
|
||||
batch_manager: Optional trace batch manager instance.
|
||||
formatter: Optional console formatter for output.
|
||||
"""
|
||||
if self._initialized:
|
||||
return
|
||||
|
||||
@@ -108,19 +122,22 @@ class TraceCollectionListener(BaseEventListener):
|
||||
self.batch_manager = batch_manager or TraceBatchManager()
|
||||
self._initialized = True
|
||||
self.first_time_handler = FirstTimeTraceHandler()
|
||||
self.formatter = formatter
|
||||
self.memory_retrieval_in_progress = False
|
||||
self.memory_save_in_progress = False
|
||||
|
||||
if self.first_time_handler.initialize_for_first_time_user():
|
||||
self.first_time_handler.set_batch_manager(self.batch_manager)
|
||||
|
||||
def _check_authenticated(self) -> bool:
|
||||
"""Check if tracing should be enabled"""
|
||||
"""Check if tracing should be enabled."""
|
||||
try:
|
||||
return bool(get_auth_token())
|
||||
except AuthError:
|
||||
return False
|
||||
|
||||
def _get_user_context(self) -> dict[str, str]:
|
||||
"""Extract user context for tracing"""
|
||||
"""Extract user context for tracing."""
|
||||
return {
|
||||
"user_id": os.getenv("CREWAI_USER_ID", "anonymous"),
|
||||
"organization_id": os.getenv("CREWAI_ORG_ID", ""),
|
||||
@@ -128,9 +145,12 @@ class TraceCollectionListener(BaseEventListener):
|
||||
"trace_id": str(uuid.uuid4()),
|
||||
}
|
||||
|
||||
def setup_listeners(self, crewai_event_bus):
|
||||
"""Setup event listeners - delegates to specific handlers"""
|
||||
def setup_listeners(self, crewai_event_bus: CrewAIEventsBus) -> None:
|
||||
"""Setup event listeners - delegates to specific handlers.
|
||||
|
||||
Args:
|
||||
            crewai_event_bus: The event bus to register listeners on.
         """
         if self._listeners_setup:
             return
@@ -140,50 +160,52 @@ class TraceCollectionListener(BaseEventListener):

         self._listeners_setup = True

-    def _register_flow_event_handlers(self, event_bus):
-        """Register handlers for flow events"""
+    def _register_flow_event_handlers(self, event_bus: CrewAIEventsBus) -> None:
+        """Register handlers for flow events."""

         @event_bus.on(FlowCreatedEvent)
-        def on_flow_created(source, event):
+        def on_flow_created(source: Any, event: FlowCreatedEvent) -> None:
             pass

         @event_bus.on(FlowStartedEvent)
-        def on_flow_started(source, event):
+        def on_flow_started(source: Any, event: FlowStartedEvent) -> None:
             if not self.batch_manager.is_batch_initialized():
                 self._initialize_flow_batch(source, event)
             self._handle_trace_event("flow_started", source, event)

         @event_bus.on(MethodExecutionStartedEvent)
-        def on_method_started(source, event):
+        def on_method_started(source: Any, event: MethodExecutionStartedEvent) -> None:
             self._handle_trace_event("method_execution_started", source, event)

         @event_bus.on(MethodExecutionFinishedEvent)
-        def on_method_finished(source, event):
+        def on_method_finished(
+            source: Any, event: MethodExecutionFinishedEvent
+        ) -> None:
             self._handle_trace_event("method_execution_finished", source, event)

         @event_bus.on(MethodExecutionFailedEvent)
-        def on_method_failed(source, event):
+        def on_method_failed(source: Any, event: MethodExecutionFailedEvent) -> None:
             self._handle_trace_event("method_execution_failed", source, event)

         @event_bus.on(FlowFinishedEvent)
-        def on_flow_finished(source, event):
+        def on_flow_finished(source: Any, event: FlowFinishedEvent) -> None:
             self._handle_trace_event("flow_finished", source, event)

         @event_bus.on(FlowPlotEvent)
-        def on_flow_plot(source, event):
+        def on_flow_plot(source: Any, event: FlowPlotEvent) -> None:
             self._handle_action_event("flow_plot", source, event)
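All of the handlers above follow the same decorator-registration shape: `@event_bus.on(EventType)` wraps a local `handler(source, event)` function. A rough standalone sketch of that pattern follows; `MiniEventBus` and its `emit` method are illustrative stand-ins, not CrewAI's actual bus API, and the `FlowStartedEvent` here is a hypothetical simplification of the real event class.

```python
from collections import defaultdict
from typing import Any, Callable


class FlowStartedEvent:
    """Illustrative stand-in for crewai's FlowStartedEvent."""

    def __init__(self, flow_name: str) -> None:
        self.flow_name = flow_name


class MiniEventBus:
    """Minimal sketch of decorator-based handler registration."""

    def __init__(self) -> None:
        # One handler list per event type
        self._handlers: dict[type, list[Callable[[Any, Any], None]]] = defaultdict(list)

    def on(self, event_type: type) -> Callable:
        def decorator(handler: Callable[[Any, Any], None]) -> Callable:
            self._handlers[event_type].append(handler)
            return handler

        return decorator

    def emit(self, source: Any, event: Any) -> None:
        # Dispatch to every handler registered for this event's exact type
        for handler in self._handlers[type(event)]:
            handler(source, event)


bus = MiniEventBus()
seen: list[str] = []


@bus.on(FlowStartedEvent)
def on_flow_started(source: Any, event: FlowStartedEvent) -> None:
    seen.append(f"flow_started:{event.flow_name}")


bus.emit(None, FlowStartedEvent("demo"))
```

The closure-based handlers capture `self` in the real listener, which is why registration happens inside instance methods rather than at module level.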
-    def _register_context_event_handlers(self, event_bus):
-        """Register handlers for context events (start/end)"""
+    def _register_context_event_handlers(self, event_bus: CrewAIEventsBus) -> None:
+        """Register handlers for context events (start/end)."""

         @event_bus.on(CrewKickoffStartedEvent)
-        def on_crew_started(source, event):
+        def on_crew_started(source: Any, event: CrewKickoffStartedEvent) -> None:
             if not self.batch_manager.is_batch_initialized():
                 self._initialize_crew_batch(source, event)
             self._handle_trace_event("crew_kickoff_started", source, event)

         @event_bus.on(CrewKickoffCompletedEvent)
-        def on_crew_completed(source, event):
+        def on_crew_completed(source: Any, event: CrewKickoffCompletedEvent) -> None:
             self._handle_trace_event("crew_kickoff_completed", source, event)
             if self.batch_manager.batch_owner_type == "crew":
                 if self.first_time_handler.is_first_time:
@@ -193,7 +215,7 @@ class TraceCollectionListener(BaseEventListener):
                 self.batch_manager.finalize_batch()

         @event_bus.on(CrewKickoffFailedEvent)
-        def on_crew_failed(source, event):
+        def on_crew_failed(source: Any, event: CrewKickoffFailedEvent) -> None:
             self._handle_trace_event("crew_kickoff_failed", source, event)
             if self.first_time_handler.is_first_time:
                 self.first_time_handler.mark_events_collected()
@@ -202,134 +224,245 @@ class TraceCollectionListener(BaseEventListener):
                 self.batch_manager.finalize_batch()

         @event_bus.on(TaskStartedEvent)
-        def on_task_started(source, event):
+        def on_task_started(source: Any, event: TaskStartedEvent) -> None:
             self._handle_trace_event("task_started", source, event)

         @event_bus.on(TaskCompletedEvent)
-        def on_task_completed(source, event):
+        def on_task_completed(source: Any, event: TaskCompletedEvent) -> None:
             self._handle_trace_event("task_completed", source, event)

         @event_bus.on(TaskFailedEvent)
-        def on_task_failed(source, event):
+        def on_task_failed(source: Any, event: TaskFailedEvent) -> None:
             self._handle_trace_event("task_failed", source, event)

         @event_bus.on(AgentExecutionStartedEvent)
-        def on_agent_started(source, event):
+        def on_agent_started(source: Any, event: AgentExecutionStartedEvent) -> None:
             self._handle_trace_event("agent_execution_started", source, event)

         @event_bus.on(AgentExecutionCompletedEvent)
-        def on_agent_completed(source, event):
+        def on_agent_completed(
+            source: Any, event: AgentExecutionCompletedEvent
+        ) -> None:
             self._handle_trace_event("agent_execution_completed", source, event)

         @event_bus.on(LiteAgentExecutionStartedEvent)
-        def on_lite_agent_started(source, event):
+        def on_lite_agent_started(
+            source: Any, event: LiteAgentExecutionStartedEvent
+        ) -> None:
             self._handle_trace_event("lite_agent_execution_started", source, event)

         @event_bus.on(LiteAgentExecutionCompletedEvent)
-        def on_lite_agent_completed(source, event):
+        def on_lite_agent_completed(
+            source: Any, event: LiteAgentExecutionCompletedEvent
+        ) -> None:
             self._handle_trace_event("lite_agent_execution_completed", source, event)

         @event_bus.on(LiteAgentExecutionErrorEvent)
-        def on_lite_agent_error(source, event):
+        def on_lite_agent_error(
+            source: Any, event: LiteAgentExecutionErrorEvent
+        ) -> None:
             self._handle_trace_event("lite_agent_execution_error", source, event)

         @event_bus.on(AgentExecutionErrorEvent)
-        def on_agent_error(source, event):
+        def on_agent_error(source: Any, event: AgentExecutionErrorEvent) -> None:
             self._handle_trace_event("agent_execution_error", source, event)

         @event_bus.on(LLMGuardrailStartedEvent)
-        def on_guardrail_started(source, event):
+        def on_guardrail_started(source: Any, event: LLMGuardrailStartedEvent) -> None:
             self._handle_trace_event("llm_guardrail_started", source, event)

         @event_bus.on(LLMGuardrailCompletedEvent)
-        def on_guardrail_completed(source, event):
+        def on_guardrail_completed(
+            source: Any, event: LLMGuardrailCompletedEvent
+        ) -> None:
             self._handle_trace_event("llm_guardrail_completed", source, event)

-    def _register_action_event_handlers(self, event_bus):
-        """Register handlers for action events (LLM calls, tool usage)"""
+    def _register_action_event_handlers(self, event_bus: CrewAIEventsBus) -> None:
+        """Register handlers for action events (LLM calls, tool usage)."""

         @event_bus.on(LLMCallStartedEvent)
-        def on_llm_call_started(source, event):
+        def on_llm_call_started(source: Any, event: LLMCallStartedEvent) -> None:
             self._handle_action_event("llm_call_started", source, event)

         @event_bus.on(LLMCallCompletedEvent)
-        def on_llm_call_completed(source, event):
+        def on_llm_call_completed(source: Any, event: LLMCallCompletedEvent) -> None:
             self._handle_action_event("llm_call_completed", source, event)

         @event_bus.on(LLMCallFailedEvent)
-        def on_llm_call_failed(source, event):
+        def on_llm_call_failed(source: Any, event: LLMCallFailedEvent) -> None:
             self._handle_action_event("llm_call_failed", source, event)

         @event_bus.on(ToolUsageStartedEvent)
-        def on_tool_started(source, event):
+        def on_tool_started(source: Any, event: ToolUsageStartedEvent) -> None:
             self._handle_action_event("tool_usage_started", source, event)

         @event_bus.on(ToolUsageFinishedEvent)
-        def on_tool_finished(source, event):
+        def on_tool_finished(source: Any, event: ToolUsageFinishedEvent) -> None:
             self._handle_action_event("tool_usage_finished", source, event)

         @event_bus.on(ToolUsageErrorEvent)
-        def on_tool_error(source, event):
+        def on_tool_error(source: Any, event: ToolUsageErrorEvent) -> None:
             self._handle_action_event("tool_usage_error", source, event)

         @event_bus.on(MemoryQueryStartedEvent)
-        def on_memory_query_started(source, event):
+        def on_memory_query_started(
+            source: Any, event: MemoryQueryStartedEvent
+        ) -> None:
             self._handle_action_event("memory_query_started", source, event)

         @event_bus.on(MemoryQueryCompletedEvent)
-        def on_memory_query_completed(source, event):
+        def on_memory_query_completed(
+            source: Any, event: MemoryQueryCompletedEvent
+        ) -> None:
             self._handle_action_event("memory_query_completed", source, event)
             if self.formatter and self.memory_retrieval_in_progress:
                 self.formatter.handle_memory_query_completed(
                     self.formatter.current_agent_branch,
                     event.source_type or "memory",
                     event.query_time_ms,
                     self.formatter.current_crew_tree,
                 )

         @event_bus.on(MemoryQueryFailedEvent)
-        def on_memory_query_failed(source, event):
+        def on_memory_query_failed(source: Any, event: MemoryQueryFailedEvent) -> None:
             self._handle_action_event("memory_query_failed", source, event)
             if self.formatter and self.memory_retrieval_in_progress:
                 self.formatter.handle_memory_query_failed(
                     self.formatter.current_agent_branch,
                     self.formatter.current_crew_tree,
                     event.error,
                     event.source_type or "memory",
                 )

         @event_bus.on(MemorySaveStartedEvent)
-        def on_memory_save_started(source, event):
+        def on_memory_save_started(source: Any, event: MemorySaveStartedEvent) -> None:
             self._handle_action_event("memory_save_started", source, event)
             if self.formatter:
                 if self.memory_save_in_progress:
                     return

                 self.memory_save_in_progress = True

                 self.formatter.handle_memory_save_started(
                     self.formatter.current_agent_branch,
                     self.formatter.current_crew_tree,
                 )

         @event_bus.on(MemorySaveCompletedEvent)
-        def on_memory_save_completed(source, event):
+        def on_memory_save_completed(
+            source: Any, event: MemorySaveCompletedEvent
+        ) -> None:
             self._handle_action_event("memory_save_completed", source, event)
             if self.formatter:
                 if not self.memory_save_in_progress:
                     return

                 self.memory_save_in_progress = False

                 self.formatter.handle_memory_save_completed(
                     self.formatter.current_agent_branch,
                     self.formatter.current_crew_tree,
                     event.save_time_ms,
                     event.source_type or "memory",
                 )

         @event_bus.on(MemorySaveFailedEvent)
-        def on_memory_save_failed(source, event):
+        def on_memory_save_failed(source: Any, event: MemorySaveFailedEvent) -> None:
             self._handle_action_event("memory_save_failed", source, event)
             if self.formatter and self.memory_save_in_progress:
                 self.formatter.handle_memory_save_failed(
                     self.formatter.current_agent_branch,
                     event.error,
                     event.source_type or "memory",
                     self.formatter.current_crew_tree,
                 )

         @event_bus.on(MemoryRetrievalStartedEvent)
         def on_memory_retrieval_started(
             source: Any, event: MemoryRetrievalStartedEvent
         ) -> None:
             if self.formatter:
                 if self.memory_retrieval_in_progress:
                     return

                 self.memory_retrieval_in_progress = True

                 self.formatter.handle_memory_retrieval_started(
                     self.formatter.current_agent_branch,
                     self.formatter.current_crew_tree,
                 )

         @event_bus.on(MemoryRetrievalCompletedEvent)
         def on_memory_retrieval_completed(
             source: Any, event: MemoryRetrievalCompletedEvent
         ) -> None:
             if self.formatter:
                 if not self.memory_retrieval_in_progress:
                     return

                 self.memory_retrieval_in_progress = False
                 self.formatter.handle_memory_retrieval_completed(
                     self.formatter.current_agent_branch,
                     self.formatter.current_crew_tree,
                     event.memory_content,
                     event.retrieval_time_ms,
                 )

         @event_bus.on(AgentReasoningStartedEvent)
-        def on_agent_reasoning_started(source, event):
+        def on_agent_reasoning_started(
+            source: Any, event: AgentReasoningStartedEvent
+        ) -> None:
             self._handle_action_event("agent_reasoning_started", source, event)

         @event_bus.on(AgentReasoningCompletedEvent)
-        def on_agent_reasoning_completed(source, event):
+        def on_agent_reasoning_completed(
+            source: Any, event: AgentReasoningCompletedEvent
+        ) -> None:
             self._handle_action_event("agent_reasoning_completed", source, event)

         @event_bus.on(AgentReasoningFailedEvent)
-        def on_agent_reasoning_failed(source, event):
+        def on_agent_reasoning_failed(
+            source: Any, event: AgentReasoningFailedEvent
+        ) -> None:
             self._handle_action_event("agent_reasoning_failed", source, event)

         @event_bus.on(KnowledgeRetrievalStartedEvent)
-        def on_knowledge_retrieval_started(source, event):
+        def on_knowledge_retrieval_started(
+            source: Any, event: KnowledgeRetrievalStartedEvent
+        ) -> None:
             self._handle_action_event("knowledge_retrieval_started", source, event)

         @event_bus.on(KnowledgeRetrievalCompletedEvent)
-        def on_knowledge_retrieval_completed(source, event):
+        def on_knowledge_retrieval_completed(
+            source: Any, event: KnowledgeRetrievalCompletedEvent
+        ) -> None:
             self._handle_action_event("knowledge_retrieval_completed", source, event)

         @event_bus.on(KnowledgeQueryStartedEvent)
-        def on_knowledge_query_started(source, event):
+        def on_knowledge_query_started(
+            source: Any, event: KnowledgeQueryStartedEvent
+        ) -> None:
             self._handle_action_event("knowledge_query_started", source, event)

         @event_bus.on(KnowledgeQueryCompletedEvent)
-        def on_knowledge_query_completed(source, event):
+        def on_knowledge_query_completed(
+            source: Any, event: KnowledgeQueryCompletedEvent
+        ) -> None:
             self._handle_action_event("knowledge_query_completed", source, event)

         @event_bus.on(KnowledgeQueryFailedEvent)
-        def on_knowledge_query_failed(source, event):
+        def on_knowledge_query_failed(
+            source: Any, event: KnowledgeQueryFailedEvent
+        ) -> None:
             self._handle_action_event("knowledge_query_failed", source, event)
-    def _initialize_crew_batch(self, source: Any, event: Any):
-        """Initialize trace batch"""
+    def _initialize_crew_batch(self, source: Any, event: Any) -> None:
+        """Initialize trace batch.
+
+        Args:
+            source: Source object that triggered the event.
+            event: Event object containing crew information.
+        """
         user_context = self._get_user_context()
         execution_metadata = {
             "crew_name": getattr(event, "crew_name", "Unknown Crew"),
@@ -342,8 +475,13 @@ class TraceCollectionListener(BaseEventListener):

         self._initialize_batch(user_context, execution_metadata)

-    def _initialize_flow_batch(self, source: Any, event: Any):
-        """Initialize trace batch for Flow execution"""
+    def _initialize_flow_batch(self, source: Any, event: Any) -> None:
+        """Initialize trace batch for Flow execution.
+
+        Args:
+            source: Source object that triggered the event.
+            event: Event object containing flow information.
+        """
         user_context = self._get_user_context()
         execution_metadata = {
             "flow_name": getattr(event, "flow_name", "Unknown Flow"),
@@ -359,21 +497,32 @@ class TraceCollectionListener(BaseEventListener):

     def _initialize_batch(
         self, user_context: dict[str, str], execution_metadata: dict[str, Any]
-    ):
-        """Initialize trace batch - auto-enable ephemeral for first-time users."""
+    ) -> None:
+        """Initialize trace batch - auto-enable ephemeral for first-time users.
+
+        Args:
+            user_context: User context information.
+            execution_metadata: Metadata about the execution.
+        """
         if self.first_time_handler.is_first_time:
-            return self.batch_manager.initialize_batch(
+            self.batch_manager.initialize_batch(
                 user_context, execution_metadata, use_ephemeral=True
            )
+            return

         use_ephemeral = not self._check_authenticated()
-        return self.batch_manager.initialize_batch(
+        self.batch_manager.initialize_batch(
             user_context, execution_metadata, use_ephemeral=use_ephemeral
         )

-    def _handle_trace_event(self, event_type: str, source: Any, event: Any):
-        """Generic handler for context end events"""
+    def _handle_trace_event(self, event_type: str, source: Any, event: Any) -> None:
+        """Generic handler for context end events.
+
+        Args:
+            event_type: Type of the event.
+            source: Source object that triggered the event.
+            event: Event object.
+        """
         self.batch_manager.begin_event_processing()
         try:
             trace_event = self._create_trace_event(event_type, source, event)
@@ -381,9 +530,14 @@ class TraceCollectionListener(BaseEventListener):
         finally:
             self.batch_manager.end_event_processing()

-    def _handle_action_event(self, event_type: str, source: Any, event: Any):
-        """Generic handler for action events (LLM calls, tool usage)"""
+    def _handle_action_event(self, event_type: str, source: Any, event: Any) -> None:
+        """Generic handler for action events (LLM calls, tool usage).
+
+        Args:
+            event_type: Type of the event.
+            source: Source object that triggered the event.
+            event: Event object.
+        """
         if not self.batch_manager.is_batch_initialized():
             user_context = self._get_user_context()
             execution_metadata = {
lib/crewai/src/crewai/events/types/a2a_events.py (new file, 141 lines)
@@ -0,0 +1,141 @@
+"""Events for A2A (Agent-to-Agent) delegation.
+
+This module defines events emitted during A2A protocol delegation,
+including both single-turn and multiturn conversation flows.
+"""
+
+from typing import Any, Literal
+
+from crewai.events.base_events import BaseEvent
+
+
+class A2AEventBase(BaseEvent):
+    """Base class for A2A events with task/agent context."""
+
+    from_task: Any | None = None
+    from_agent: Any | None = None
+
+    def __init__(self, **data):
+        """Initialize A2A event, extracting task and agent metadata."""
+        if data.get("from_task"):
+            task = data["from_task"]
+            data["task_id"] = str(task.id)
+            data["task_name"] = task.name or task.description
+            data["from_task"] = None
+
+        if data.get("from_agent"):
+            agent = data["from_agent"]
+            data["agent_id"] = str(agent.id)
+            data["agent_role"] = agent.role
+            data["from_agent"] = None
+
+        super().__init__(**data)
+
+
+class A2ADelegationStartedEvent(A2AEventBase):
+    """Event emitted when A2A delegation starts.
+
+    Attributes:
+        endpoint: A2A agent endpoint URL (AgentCard URL)
+        task_description: Task being delegated to the A2A agent
+        agent_id: A2A agent identifier
+        is_multiturn: Whether this is part of a multiturn conversation
+        turn_number: Current turn number (1-indexed, 1 for single-turn)
+    """
+
+    type: str = "a2a_delegation_started"
+    endpoint: str
+    task_description: str
+    agent_id: str
+    is_multiturn: bool = False
+    turn_number: int = 1
+
+
+class A2ADelegationCompletedEvent(A2AEventBase):
+    """Event emitted when A2A delegation completes.
+
+    Attributes:
+        status: Completion status (completed, input_required, failed, etc.)
+        result: Result message if status is completed
+        error: Error/response message (error for failed, response for input_required)
+        is_multiturn: Whether this is part of a multiturn conversation
+    """
+
+    type: str = "a2a_delegation_completed"
+    status: str
+    result: str | None = None
+    error: str | None = None
+    is_multiturn: bool = False
+
+
+class A2AConversationStartedEvent(A2AEventBase):
+    """Event emitted when a multiturn A2A conversation starts.
+
+    This is emitted once at the beginning of a multiturn conversation,
+    before the first message exchange.
+
+    Attributes:
+        agent_id: A2A agent identifier
+        endpoint: A2A agent endpoint URL
+        a2a_agent_name: Name of the A2A agent from agent card
+    """
+
+    type: str = "a2a_conversation_started"
+    agent_id: str
+    endpoint: str
+    a2a_agent_name: str | None = None
+
+
+class A2AMessageSentEvent(A2AEventBase):
+    """Event emitted when a message is sent to the A2A agent.
+
+    Attributes:
+        message: Message content sent to the A2A agent
+        turn_number: Current turn number (1-indexed)
+        is_multiturn: Whether this is part of a multiturn conversation
+        agent_role: Role of the CrewAI agent sending the message
+    """
+
+    type: str = "a2a_message_sent"
+    message: str
+    turn_number: int
+    is_multiturn: bool = False
+    agent_role: str | None = None
+
+
+class A2AResponseReceivedEvent(A2AEventBase):
+    """Event emitted when a response is received from the A2A agent.
+
+    Attributes:
+        response: Response content from the A2A agent
+        turn_number: Current turn number (1-indexed)
+        is_multiturn: Whether this is part of a multiturn conversation
+        status: Response status (input_required, completed, etc.)
+        agent_role: Role of the CrewAI agent (for display)
+    """
+
+    type: str = "a2a_response_received"
+    response: str
+    turn_number: int
+    is_multiturn: bool = False
+    status: str
+    agent_role: str | None = None
+
+
+class A2AConversationCompletedEvent(A2AEventBase):
+    """Event emitted when a multiturn A2A conversation completes.
+
+    This is emitted once at the end of a multiturn conversation.
+
+    Attributes:
+        status: Final status (completed, failed, etc.)
+        final_result: Final result if completed successfully
+        error: Error message if failed
+        total_turns: Total number of turns in the conversation
+    """
+
+    type: str = "a2a_conversation_completed"
+    status: Literal["completed", "failed"]
+    final_result: str | None = None
+    error: str | None = None
+    total_turns: int
@@ -17,9 +17,16 @@ class ConsoleFormatter:
     current_method_branch: Tree | None = None
     current_lite_agent_branch: Tree | None = None
     tool_usage_counts: ClassVar[dict[str, int]] = {}
-    current_reasoning_branch: Tree | None = None  # Track reasoning status
+    current_reasoning_branch: Tree | None = None
     _live_paused: bool = False
     current_llm_tool_tree: Tree | None = None
+    current_a2a_conversation_branch: Tree | None = None
+    current_a2a_turn_count: int = 0
+    _pending_a2a_message: str | None = None
+    _pending_a2a_agent_role: str | None = None
+    _pending_a2a_turn_number: int | None = None
+    _a2a_turn_branches: ClassVar[dict[int, Tree]] = {}
+    _current_a2a_agent_name: str | None = None

     def __init__(self, verbose: bool = False):
         self.console = Console(width=None)

@@ -192,6 +199,11 @@ class ConsoleFormatter:
             style,
             ID=source_id,
         )

+        if status == "failed" and final_string_output:
+            content.append("Error:\n", style="white bold")
+            content.append(f"{final_string_output}\n", style="red")
+        else:
+            content.append(f"Final Output: {final_string_output}\n", style="white")

         self.print_panel(content, title, style)

@@ -1474,14 +1486,29 @@ class ConsoleFormatter:
             self.print()

         elif isinstance(formatted_answer, AgentFinish):
-            # Create content for the finish panel
+            is_a2a_delegation = False
+            try:
+                output_data = json.loads(formatted_answer.output)
+                if isinstance(output_data, dict):
+                    if output_data.get("is_a2a") is True:
+                        is_a2a_delegation = True
+                    elif "output" in output_data:
+                        nested_output = output_data["output"]
+                        if (
+                            isinstance(nested_output, dict)
+                            and nested_output.get("is_a2a") is True
+                        ):
+                            is_a2a_delegation = True
+            except (json.JSONDecodeError, TypeError, ValueError):
+                pass
+
+            if not is_a2a_delegation:
                 content = Text()
                 content.append("Agent: ", style="white")
                 content.append(f"{agent_role}\n\n", style="bright_green bold")
                 content.append("Final Answer:\n", style="white")
                 content.append(f"{formatted_answer.output}", style="bright_green")

                 # Create and display the finish panel
                 finish_panel = Panel(
                     content,
                     title="✅ Agent Final Answer",

@@ -1789,3 +1816,435 @@ class ConsoleFormatter:
                 Attempts=f"{retry_count + 1}",
             )
             self.print_panel(content, "🛡️ Guardrail Failed", "red")
+
+    def handle_a2a_delegation_started(
+        self,
+        endpoint: str,
+        task_description: str,
+        agent_id: str,
+        is_multiturn: bool = False,
+        turn_number: int = 1,
+    ) -> None:
+        """Handle A2A delegation started event.
+
+        Args:
+            endpoint: A2A agent endpoint URL
+            task_description: Task being delegated
+            agent_id: A2A agent identifier
+            is_multiturn: Whether this is part of a multiturn conversation
+            turn_number: Current turn number in conversation (1-indexed)
+        """
+        branch_to_use = self.current_lite_agent_branch or self.current_task_branch
+        tree_to_use = self.current_crew_tree or branch_to_use
+        a2a_branch: Tree | None = None
+
+        if is_multiturn:
+            if self.current_a2a_turn_count == 0 and not isinstance(
+                self.current_a2a_conversation_branch, Tree
+            ):
+                if branch_to_use is not None and tree_to_use is not None:
+                    self.current_a2a_conversation_branch = branch_to_use.add("")
+                    self.update_tree_label(
+                        self.current_a2a_conversation_branch,
+                        "💬",
+                        f"Multiturn A2A Conversation ({agent_id})",
+                        "cyan",
+                    )
+                    self.print(tree_to_use)
+                    self.print()
+                else:
+                    self.current_a2a_conversation_branch = "MULTITURN_NO_TREE"
+
+                    content = Text()
+                    content.append(
+                        "Multiturn A2A Conversation Started\n\n", style="cyan bold"
+                    )
+                    content.append("Agent ID: ", style="white")
+                    content.append(f"{agent_id}\n", style="cyan")
+                    content.append("Note: ", style="white dim")
+                    content.append(
+                        "Conversation will be tracked in tree view", style="cyan dim"
+                    )
+
+                    panel = self.create_panel(
+                        content, "💬 Multiturn Conversation", "cyan"
+                    )
+                    self.print(panel)
+                    self.print()
+
+            self.current_a2a_turn_count = turn_number
+
+            return (
+                self.current_a2a_conversation_branch
+                if isinstance(self.current_a2a_conversation_branch, Tree)
+                else None
+            )
+
+        if branch_to_use is not None and tree_to_use is not None:
+            a2a_branch = branch_to_use.add("")
+            self.update_tree_label(
+                a2a_branch,
+                "🔗",
+                f"Delegating to A2A Agent ({agent_id})",
+                "cyan",
+            )
+
+            self.print(tree_to_use)
+            self.print()
+
+        content = Text()
+        content.append("A2A Delegation Started\n\n", style="cyan bold")
+        content.append("Agent ID: ", style="white")
+        content.append(f"{agent_id}\n", style="cyan")
+        content.append("Endpoint: ", style="white")
+        content.append(f"{endpoint}\n\n", style="cyan dim")
+        content.append("Task Description:\n", style="white")
+
+        task_preview = (
+            task_description
+            if len(task_description) <= 200
+            else task_description[:197] + "..."
+        )
+        content.append(task_preview, style="cyan")
+
+        panel = self.create_panel(content, "🔗 A2A Delegation", "cyan")
+        self.print(panel)
+        self.print()
+
+        return a2a_branch
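The 200- and 500-character previews in these handlers all follow the same truncation pattern (keep the text if it fits, otherwise cut to `limit - 3` and append an ellipsis). A tiny helper capturing it, purely illustrative — the PR inlines the expressions rather than defining such a function:

```python
def preview(text: str, limit: int) -> str:
    """Truncate text to at most `limit` chars, reserving 3 for an ellipsis."""
    return text if len(text) <= limit else text[: limit - 3] + "..."


short = preview("abc", 200)        # fits, returned unchanged
clipped = preview("x" * 300, 200)  # cut to 197 chars plus "..."
```

Reserving three characters for the ellipsis keeps the total output at exactly `limit`, which matters when the preview feeds a fixed-width console panel.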
+
+    def handle_a2a_delegation_completed(
+        self,
+        status: str,
+        result: str | None = None,
+        error: str | None = None,
+        is_multiturn: bool = False,
+    ) -> None:
+        """Handle A2A delegation completed event.
+
+        Args:
+            status: Completion status
+            result: Optional result message
+            error: Optional error message (or response for input_required)
+            is_multiturn: Whether this is part of a multiturn conversation
+        """
+        tree_to_use = self.current_crew_tree or self.current_task_branch
+        a2a_branch = None
+
+        if is_multiturn and self.current_a2a_conversation_branch:
+            has_tree = isinstance(self.current_a2a_conversation_branch, Tree)
+
+            if status == "input_required" and error:
+                pass
+            elif status == "completed":
+                if has_tree:
+                    final_turn = self.current_a2a_conversation_branch.add("")
+                    self.update_tree_label(
+                        final_turn,
+                        "✅",
+                        "Conversation Completed",
+                        "green",
+                    )
+
+                    if tree_to_use:
+                        self.print(tree_to_use)
+                        self.print()
+
+                self.current_a2a_conversation_branch = None
+                self.current_a2a_turn_count = 0
+            elif status == "failed":
+                if has_tree:
+                    error_turn = self.current_a2a_conversation_branch.add("")
+                    error_msg = (
+                        error[:150] + "..." if error and len(error) > 150 else error
+                    )
+                    self.update_tree_label(
+                        error_turn,
+                        "❌",
+                        f"Failed: {error_msg}" if error else "Conversation Failed",
+                        "red",
+                    )
+
+                    if tree_to_use:
+                        self.print(tree_to_use)
+                        self.print()
+
+                self.current_a2a_conversation_branch = None
+                self.current_a2a_turn_count = 0
+
+            return
+
+        if a2a_branch and tree_to_use:
+            if status == "completed":
+                self.update_tree_label(
+                    a2a_branch,
+                    "✅",
+                    "A2A Delegation Completed",
+                    "green",
+                )
+            elif status == "failed":
+                self.update_tree_label(
+                    a2a_branch,
+                    "❌",
+                    "A2A Delegation Failed",
+                    "red",
+                )
+            else:
+                self.update_tree_label(
+                    a2a_branch,
+                    "⚠️",
+                    f"A2A Delegation {status.replace('_', ' ').title()}",
+                    "yellow",
+                )
+
+            self.print(tree_to_use)
+            self.print()
+
+        if status == "completed" and result:
+            content = Text()
+            content.append("A2A Delegation Completed\n\n", style="green bold")
+            content.append("Result:\n", style="white")
+
+            result_preview = result if len(result) <= 500 else result[:497] + "..."
+            content.append(result_preview, style="green")
+
+            panel = self.create_panel(content, "✅ A2A Success", "green")
+            self.print(panel)
+            self.print()
+        elif status == "input_required" and error:
+            content = Text()
+            content.append("A2A Response\n\n", style="cyan bold")
+            content.append("Message:\n", style="white")
+
+            response_preview = error if len(error) <= 500 else error[:497] + "..."
+            content.append(response_preview, style="cyan")
+
+            panel = self.create_panel(content, "💬 A2A Response", "cyan")
+            self.print(panel)
+            self.print()
+        elif error:
+            content = Text()
+            content.append(
+                "A2A Delegation Issue\n\n",
+                style="red bold" if status == "failed" else "yellow bold",
+            )
+            content.append("Status: ", style="white")
+            content.append(
+                f"{status}\n\n", style="red" if status == "failed" else "yellow"
+            )
+            content.append("Message:\n", style="white")
+            content.append(error, style="red" if status == "failed" else "yellow")
+
+            panel_style = "red" if status == "failed" else "yellow"
+            panel_title = "❌ A2A Failed" if status == "failed" else "⚠️ A2A Status"
+            panel = self.create_panel(content, panel_title, panel_style)
+            self.print(panel)
+            self.print()
+
+    def handle_a2a_conversation_started(
+        self,
+        agent_id: str,
+        endpoint: str,
+    ) -> None:
+        """Handle A2A conversation started event.
+
+        Args:
+            agent_id: A2A agent identifier
+            endpoint: A2A agent endpoint URL
+        """
+        branch_to_use = self.current_lite_agent_branch or self.current_task_branch
+        tree_to_use = self.current_crew_tree or branch_to_use
+
+        if not isinstance(self.current_a2a_conversation_branch, Tree):
+            if branch_to_use is not None and tree_to_use is not None:
+                self.current_a2a_conversation_branch = branch_to_use.add("")
+                self.update_tree_label(
+                    self.current_a2a_conversation_branch,
+                    "💬",
+                    f"Multiturn A2A Conversation ({agent_id})",
+                    "cyan",
+                )
+                self.print(tree_to_use)
+                self.print()
+            else:
+                self.current_a2a_conversation_branch = "MULTITURN_NO_TREE"
+
+    def handle_a2a_message_sent(
+        self,
+        message: str,
+        turn_number: int,
+        agent_role: str | None = None,
+    ) -> None:
+        """Handle A2A message sent event.
+
+        Args:
+            message: Message content sent to the A2A agent
+            turn_number: Current turn number
+            agent_role: Role of the CrewAI agent sending the message
+        """
+        self._pending_a2a_message = message
+        self._pending_a2a_agent_role = agent_role
+        self._pending_a2a_turn_number = turn_number
+
+    def handle_a2a_response_received(
+        self,
+        response: str,
+        turn_number: int,
+        status: str,
+        agent_role: str | None = None,
+    ) -> None:
+        """Handle A2A response received event.
+
+        Args:
+            response: Response content from the A2A agent
+            turn_number: Current turn number
+            status: Response status (input_required, completed, etc.)
+            agent_role: Role of the CrewAI agent (for display)
+        """
+        if self.current_a2a_conversation_branch and isinstance(
+            self.current_a2a_conversation_branch, Tree
+        ):
+            if turn_number in self._a2a_turn_branches:
+                turn_branch = self._a2a_turn_branches[turn_number]
+            else:
+                turn_branch = self.current_a2a_conversation_branch.add("")
+                self.update_tree_label(
+                    turn_branch,
+                    "💬",
+                    f"Turn {turn_number}",
+                    "cyan",
+                )
+                self._a2a_turn_branches[turn_number] = turn_branch
+
+            crewai_agent_role = self._pending_a2a_agent_role or agent_role or "User"
+            message_content = self._pending_a2a_message or "sent message"
+
+            message_preview = (
+                message_content[:100] + "..."
+                if len(message_content) > 100
+                else message_content
+            )
+
+            user_node = turn_branch.add("")
+            self.update_tree_label(
+                user_node,
+                f"{crewai_agent_role} 👤 : ",
+                f'"{message_preview}"',
+                "blue",
+            )
+
+            agent_node = turn_branch.add("")
|
||||
response_preview = (
|
||||
response[:100] + "..." if len(response) > 100 else response
|
||||
)
|
||||
|
||||
a2a_agent_display = f"{self._current_a2a_agent_name} \U0001f916: "
|
||||
|
||||
if status == "completed":
|
||||
response_color = "green"
|
||||
status_indicator = "✓"
|
||||
elif status == "input_required":
|
||||
response_color = "yellow"
|
||||
status_indicator = "❓"
|
||||
elif status == "failed":
|
||||
response_color = "red"
|
||||
status_indicator = "✗"
|
||||
elif status == "auth_required":
|
||||
response_color = "magenta"
|
||||
status_indicator = "🔒"
|
||||
elif status == "canceled":
|
||||
response_color = "dim"
|
||||
status_indicator = "⊘"
|
||||
else:
|
||||
response_color = "cyan"
|
||||
status_indicator = ""
|
||||
|
||||
label = f'"{response_preview}"'
|
||||
if status_indicator:
|
||||
label = f"{status_indicator} {label}"
|
||||
|
||||
self.update_tree_label(
|
||||
agent_node,
|
||||
a2a_agent_display,
|
||||
label,
|
||||
response_color,
|
||||
)
|
||||
|
||||
self._pending_a2a_message = None
|
||||
self._pending_a2a_agent_role = None
|
||||
self._pending_a2a_turn_number = None
|
||||
|
||||
tree_to_use = self.current_crew_tree or self.current_task_branch
|
||||
if tree_to_use:
|
||||
self.print(tree_to_use)
|
||||
self.print()
|
||||
|
||||
def handle_a2a_conversation_completed(
|
||||
self,
|
||||
status: str,
|
||||
final_result: str | None,
|
||||
error: str | None,
|
||||
total_turns: int,
|
||||
) -> None:
|
||||
"""Handle A2A conversation completed event.
|
||||
|
||||
Args:
|
||||
status: Final status (completed, failed, etc.)
|
||||
final_result: Final result if completed successfully
|
||||
error: Error message if failed
|
||||
total_turns: Total number of turns in the conversation
|
||||
"""
|
||||
if self.current_a2a_conversation_branch and isinstance(
|
||||
self.current_a2a_conversation_branch, Tree
|
||||
):
|
||||
if status == "completed":
|
||||
if self._pending_a2a_message and self._pending_a2a_agent_role:
|
||||
if total_turns in self._a2a_turn_branches:
|
||||
turn_branch = self._a2a_turn_branches[total_turns]
|
||||
else:
|
||||
turn_branch = self.current_a2a_conversation_branch.add("")
|
||||
self.update_tree_label(
|
||||
turn_branch,
|
||||
"💬",
|
||||
f"Turn {total_turns}",
|
||||
"cyan",
|
||||
)
|
||||
self._a2a_turn_branches[total_turns] = turn_branch
|
||||
|
||||
crewai_agent_role = self._pending_a2a_agent_role
|
||||
message_content = self._pending_a2a_message
|
||||
|
||||
message_preview = (
|
||||
message_content[:100] + "..."
|
||||
if len(message_content) > 100
|
||||
else message_content
|
||||
)
|
||||
|
||||
user_node = turn_branch.add("")
|
||||
self.update_tree_label(
|
||||
user_node,
|
||||
f"{crewai_agent_role} 👤 : ",
|
||||
f'"{message_preview}"',
|
||||
"green",
|
||||
)
|
||||
|
||||
self._pending_a2a_message = None
|
||||
self._pending_a2a_agent_role = None
|
||||
self._pending_a2a_turn_number = None
|
||||
elif status == "failed":
|
||||
error_turn = self.current_a2a_conversation_branch.add("")
|
||||
error_msg = error[:150] + "..." if error and len(error) > 150 else error
|
||||
self.update_tree_label(
|
||||
error_turn,
|
||||
"❌",
|
||||
f"Failed: {error_msg}" if error else "Conversation Failed",
|
||||
"red",
|
||||
)
|
||||
|
||||
tree_to_use = self.current_crew_tree or self.current_task_branch
|
||||
if tree_to_use:
|
||||
self.print(tree_to_use)
|
||||
self.print()
|
||||
|
||||
self.current_a2a_conversation_branch = None
|
||||
self.current_a2a_turn_count = 0
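The handlers above all truncate message and response previews with the same rule: keep the first 100 characters and append an ellipsis. Isolated as a helper (the function name is ours, purely illustrative), the rule is:

```python
def preview(text: str, limit: int = 100) -> str:
    """Truncate long text for tree labels, matching the formatter's rule."""
    return text[:limit] + "..." if len(text) > limit else text
```

Note the truncated form is `limit + 3` characters long, since the ellipsis is appended after slicing.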

@@ -1,65 +0,0 @@
"""A2A (Agent-to-Agent) Protocol adapter for CrewAI.

This module provides integration with A2A protocol-compliant agents,
enabling CrewAI to orchestrate external agents like ServiceNow, Bedrock Agents,
Glean, and other A2A-compliant systems.

Example:
    ```python
    from crewai.experimental.a2a import A2AAgentAdapter

    # Create A2A agent
    servicenow_agent = A2AAgentAdapter(
        agent_card_url="https://servicenow.example.com/.well-known/agent-card.json",
        auth_token="your-token",
        role="ServiceNow Incident Manager",
        goal="Create and manage IT incidents",
        backstory="Expert at incident management",
    )

    # Use in crew
    crew = Crew(agents=[servicenow_agent], tasks=[task])
    ```
"""

from crewai.experimental.a2a.a2a_adapter import A2AAgentAdapter
from crewai.experimental.a2a.auth import (
    APIKeyAuth,
    AuthScheme,
    BearerTokenAuth,
    HTTPBasicAuth,
    HTTPDigestAuth,
    OAuth2AuthorizationCode,
    OAuth2ClientCredentials,
    create_auth_from_agent_card,
)
from crewai.experimental.a2a.exceptions import (
    A2AAuthenticationError,
    A2AConfigurationError,
    A2AConnectionError,
    A2AError,
    A2AInputRequiredError,
    A2ATaskCanceledError,
    A2ATaskFailedError,
)


__all__ = [
    "A2AAgentAdapter",
    "A2AAuthenticationError",
    "A2AConfigurationError",
    "A2AConnectionError",
    "A2AError",
    "A2AInputRequiredError",
    "A2ATaskCanceledError",
    "A2ATaskFailedError",
    "APIKeyAuth",
    # Authentication
    "AuthScheme",
    "BearerTokenAuth",
    "HTTPBasicAuth",
    "HTTPDigestAuth",
    "OAuth2AuthorizationCode",
    "OAuth2ClientCredentials",
    "create_auth_from_agent_card",
]
File diff suppressed because it is too large
@@ -1,424 +0,0 @@
"""Authentication schemes for A2A protocol agents.

This module provides support for various authentication methods:
- Bearer tokens (existing)
- OAuth2 (Client Credentials, Authorization Code)
- API Keys (header, query, cookie)
- HTTP Basic authentication
- HTTP Digest authentication
"""

from __future__ import annotations

from abc import ABC, abstractmethod
import base64
from collections.abc import Awaitable, Callable
from typing import TYPE_CHECKING, Any, Literal

import httpx
from pydantic import BaseModel, Field


if TYPE_CHECKING:
    from a2a.types import AgentCard


class AuthScheme(ABC, BaseModel):
    """Base class for authentication schemes."""

    @abstractmethod
    async def apply_auth(
        self, client: httpx.AsyncClient, headers: dict[str, str]
    ) -> dict[str, str]:
        """Apply authentication to request headers.

        Args:
            client: HTTP client for making auth requests.
            headers: Current request headers.

        Returns:
            Updated headers with authentication applied.
        """
        ...

    @abstractmethod
    def configure_client(self, client: httpx.AsyncClient) -> None:
        """Configure the HTTP client for this auth scheme.

        Args:
            client: HTTP client to configure.
        """
        ...


class BearerTokenAuth(AuthScheme):
    """Bearer token authentication (Authorization: Bearer <token>)."""

    token: str = Field(description="Bearer token")

    async def apply_auth(
        self, client: httpx.AsyncClient, headers: dict[str, str]
    ) -> dict[str, str]:
        """Apply Bearer token to Authorization header."""
        headers["Authorization"] = f"Bearer {self.token}"
        return headers

    def configure_client(self, client: httpx.AsyncClient) -> None:
        """No client configuration needed for Bearer tokens."""


class HTTPBasicAuth(AuthScheme):
    """HTTP Basic authentication."""

    username: str = Field(description="Username")
    password: str = Field(description="Password")

    async def apply_auth(
        self, client: httpx.AsyncClient, headers: dict[str, str]
    ) -> dict[str, str]:
        """Apply HTTP Basic authentication."""
        credentials = f"{self.username}:{self.password}"
        encoded = base64.b64encode(credentials.encode()).decode()
        headers["Authorization"] = f"Basic {encoded}"
        return headers

    def configure_client(self, client: httpx.AsyncClient) -> None:
        """No client configuration needed for Basic auth."""
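The Basic scheme above reduces to base64-encoding `username:password` into the `Authorization` header (RFC 7617). A self-contained sketch of the same `apply_auth` contract — the class here is redefined in miniature so the snippet runs without the module's pydantic base:

```python
import asyncio
import base64
from typing import Any


class BasicAuthSketch:
    """Standalone stand-in for HTTPBasicAuth.apply_auth above."""

    def __init__(self, username: str, password: str) -> None:
        self.username = username
        self.password = password

    async def apply_auth(self, client: Any, headers: dict[str, str]) -> dict[str, str]:
        # Encode "username:password" and prefix with the Basic scheme name.
        credentials = f"{self.username}:{self.password}"
        encoded = base64.b64encode(credentials.encode()).decode()
        headers["Authorization"] = f"Basic {encoded}"
        return headers


# The client argument is unused for Basic auth, so None suffices here.
headers = asyncio.run(BasicAuthSketch("user", "pass").apply_auth(None, {}))
```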


class HTTPDigestAuth(AuthScheme):
    """HTTP Digest authentication.

    Note: Uses httpx-auth library for proper digest implementation.
    """

    username: str = Field(description="Username")
    password: str = Field(description="Password")

    async def apply_auth(
        self, client: httpx.AsyncClient, headers: dict[str, str]
    ) -> dict[str, str]:
        """Digest auth is handled by httpx auth flow, not headers."""
        return headers

    def configure_client(self, client: httpx.AsyncClient) -> None:
        """Configure client with Digest auth."""
        try:
            from httpx_auth import DigestAuth  # type: ignore[import-not-found]

            client.auth = DigestAuth(self.username, self.password)
        except ImportError as e:
            msg = "httpx-auth required for Digest authentication. Install with: pip install httpx-auth"
            raise ImportError(msg) from e


class APIKeyAuth(AuthScheme):
    """API Key authentication (header, query, or cookie)."""

    api_key: str = Field(description="API key value")
    location: Literal["header", "query", "cookie"] = Field(
        default="header", description="Where to send the API key"
    )
    name: str = Field(default="X-API-Key", description="Parameter name for the API key")

    async def apply_auth(
        self, client: httpx.AsyncClient, headers: dict[str, str]
    ) -> dict[str, str]:
        """Apply API key authentication."""
        if self.location == "header":
            headers[self.name] = self.api_key
        elif self.location == "cookie":
            headers["Cookie"] = f"{self.name}={self.api_key}"
        # Query params are handled in configure_client via event hooks
        return headers

    def configure_client(self, client: httpx.AsyncClient) -> None:
        """Configure client for query param API keys."""
        if self.location == "query":
            # Add API key to all requests via event hook
            async def add_api_key_param(request: httpx.Request) -> None:
                url = httpx.URL(request.url)
                request.url = url.copy_add_param(self.name, self.api_key)

            client.event_hooks["request"].append(add_api_key_param)


class OAuth2ClientCredentials(AuthScheme):
    """OAuth2 Client Credentials flow authentication."""

    token_url: str = Field(description="OAuth2 token endpoint")
    client_id: str = Field(description="OAuth2 client ID")
    client_secret: str = Field(description="OAuth2 client secret")
    scopes: list[str] = Field(
        default_factory=list, description="Required OAuth2 scopes"
    )

    _access_token: str | None = None
    _token_expires_at: float | None = None

    async def apply_auth(
        self, client: httpx.AsyncClient, headers: dict[str, str]
    ) -> dict[str, str]:
        """Apply OAuth2 access token to Authorization header."""
        # Get or refresh token if needed
        import time

        if (
            self._access_token is None
            or self._token_expires_at is None
            or time.time() >= self._token_expires_at
        ):
            await self._fetch_token(client)

        if self._access_token:
            headers["Authorization"] = f"Bearer {self._access_token}"

        return headers

    async def _fetch_token(self, client: httpx.AsyncClient) -> None:
        """Fetch OAuth2 access token using client credentials flow."""
        import time

        data = {
            "grant_type": "client_credentials",
            "client_id": self.client_id,
            "client_secret": self.client_secret,
        }

        if self.scopes:
            data["scope"] = " ".join(self.scopes)

        response = await client.post(self.token_url, data=data)
        response.raise_for_status()

        token_data = response.json()
        self._access_token = token_data["access_token"]

        # Calculate expiration time (default to 3600 seconds if not provided)
        expires_in = token_data.get("expires_in", 3600)
        self._token_expires_at = time.time() + expires_in - 60  # 60s buffer

    def configure_client(self, client: httpx.AsyncClient) -> None:
        """No client configuration needed for OAuth2."""


class OAuth2AuthorizationCode(AuthScheme):
    """OAuth2 Authorization Code flow authentication.

    Note: This requires interactive authorization and is typically used
    for user-facing applications. For server-to-server, use ClientCredentials.
    """

    authorization_url: str = Field(description="OAuth2 authorization endpoint")
    token_url: str = Field(description="OAuth2 token endpoint")
    client_id: str = Field(description="OAuth2 client ID")
    client_secret: str = Field(description="OAuth2 client secret")
    redirect_uri: str = Field(description="OAuth2 redirect URI")
    scopes: list[str] = Field(
        default_factory=list, description="Required OAuth2 scopes"
    )

    _access_token: str | None = None
    _refresh_token: str | None = None
    _token_expires_at: float | None = None
    _authorization_callback: Callable[[str], Awaitable[str]] | None = None

    def set_authorization_callback(
        self, callback: Callable[[str], Awaitable[str]] | None
    ) -> None:
        """Set callback to handle authorization URL.

        The callback receives the authorization URL and should return
        the authorization code after user completes the flow.
        """
        self._authorization_callback = callback

    async def apply_auth(
        self, client: httpx.AsyncClient, headers: dict[str, str]
    ) -> dict[str, str]:
        """Apply OAuth2 access token to Authorization header."""
        import time

        # Get or refresh token if needed
        if self._access_token is None:
            if self._authorization_callback is None:
                msg = "Authorization callback not set. Use set_authorization_callback()"
                raise ValueError(msg)
            await self._fetch_initial_token(client)
        elif self._token_expires_at and time.time() >= self._token_expires_at:
            await self._refresh_access_token(client)

        if self._access_token:
            headers["Authorization"] = f"Bearer {self._access_token}"

        return headers

    async def _fetch_initial_token(self, client: httpx.AsyncClient) -> None:
        """Fetch initial access token using authorization code flow."""
        import time
        import urllib.parse

        # Build authorization URL
        params = {
            "response_type": "code",
            "client_id": self.client_id,
            "redirect_uri": self.redirect_uri,
            "scope": " ".join(self.scopes),
        }
        auth_url = f"{self.authorization_url}?{urllib.parse.urlencode(params)}"

        # Get authorization code from callback
        if self._authorization_callback is None:
            msg = "Authorization callback not set"
            raise ValueError(msg)
        auth_code = await self._authorization_callback(auth_url)

        # Exchange code for token
        data = {
            "grant_type": "authorization_code",
            "code": auth_code,
            "client_id": self.client_id,
            "client_secret": self.client_secret,
            "redirect_uri": self.redirect_uri,
        }

        response = await client.post(self.token_url, data=data)
        response.raise_for_status()

        token_data = response.json()
        self._access_token = token_data["access_token"]
        self._refresh_token = token_data.get("refresh_token")

        expires_in = token_data.get("expires_in", 3600)
        self._token_expires_at = time.time() + expires_in - 60

    async def _refresh_access_token(self, client: httpx.AsyncClient) -> None:
        """Refresh the access token using refresh token."""
        import time

        if not self._refresh_token:
            # Re-authorize if no refresh token
            await self._fetch_initial_token(client)
            return

        data = {
            "grant_type": "refresh_token",
            "refresh_token": self._refresh_token,
            "client_id": self.client_id,
            "client_secret": self.client_secret,
        }

        response = await client.post(self.token_url, data=data)
        response.raise_for_status()

        token_data = response.json()
        self._access_token = token_data["access_token"]
        if "refresh_token" in token_data:
            self._refresh_token = token_data["refresh_token"]

        expires_in = token_data.get("expires_in", 3600)
        self._token_expires_at = time.time() + expires_in - 60

    def configure_client(self, client: httpx.AsyncClient) -> None:
        """No client configuration needed for OAuth2."""


def create_auth_from_agent_card(
    agent_card: AgentCard, credentials: dict[str, Any]
) -> AuthScheme | None:
    """Create an appropriate authentication scheme from AgentCard security config.

    Args:
        agent_card: The A2A AgentCard containing security requirements.
        credentials: User-provided credentials (passwords, tokens, keys, etc.).

    Returns:
        Configured AuthScheme, or None if no authentication required.

    Example:
        ```python
        # For OAuth2
        credentials = {
            "client_id": "my-app",
            "client_secret": "secret123",
        }
        auth = create_auth_from_agent_card(agent_card, credentials)

        # For API Key
        credentials = {"api_key": "key-12345"}
        auth = create_auth_from_agent_card(agent_card, credentials)

        # For HTTP Basic
        credentials = {"username": "user", "password": "pass"}
        auth = create_auth_from_agent_card(agent_card, credentials)
        ```
    """
    if not agent_card.security or not agent_card.security_schemes:
        return None

    # Get the first required security scheme
    first_security_req = agent_card.security[0] if agent_card.security else {}

    for scheme_name, _scopes in first_security_req.items():
        security_scheme_obj = agent_card.security_schemes.get(scheme_name)
        if not security_scheme_obj:
            continue

        # SecurityScheme is a dict-like object
        security_scheme = dict(security_scheme_obj)  # type: ignore[arg-type]
        scheme_type = str(security_scheme.get("type", "")).lower()

        # OAuth2
        if scheme_type == "oauth2":
            flows = security_scheme.get("flows", {})

            if "clientCredentials" in flows:
                flow = flows["clientCredentials"]
                return OAuth2ClientCredentials(
                    token_url=str(flow["tokenUrl"]),
                    client_id=str(credentials.get("client_id", "")),
                    client_secret=str(credentials.get("client_secret", "")),
                    scopes=list(flow.get("scopes", {}).keys()),
                )

            if "authorizationCode" in flows:
                flow = flows["authorizationCode"]
                return OAuth2AuthorizationCode(
                    authorization_url=str(flow["authorizationUrl"]),
                    token_url=str(flow["tokenUrl"]),
                    client_id=str(credentials.get("client_id", "")),
                    client_secret=str(credentials.get("client_secret", "")),
                    redirect_uri=str(credentials.get("redirect_uri", "")),
                    scopes=list(flow.get("scopes", {}).keys()),
                )

        # API Key
        elif scheme_type == "apikey":
            location = str(security_scheme.get("in", "header"))
            name = str(security_scheme.get("name", "X-API-Key"))
            return APIKeyAuth(
                api_key=str(credentials.get("api_key", "")),
                location=location,  # type: ignore[arg-type]
                name=name,
            )

        # HTTP Auth
        elif scheme_type == "http":
            http_scheme = str(security_scheme.get("scheme", "")).lower()

            if http_scheme == "basic":
                return HTTPBasicAuth(
                    username=str(credentials.get("username", "")),
                    password=str(credentials.get("password", "")),
                )

            if http_scheme == "digest":
                return HTTPDigestAuth(
                    username=str(credentials.get("username", "")),
                    password=str(credentials.get("password", "")),
                )

            if http_scheme == "bearer":
                return BearerTokenAuth(token=str(credentials.get("token", "")))

    return None

@@ -1,56 +0,0 @@
"""Custom exceptions for A2A Agent Adapter."""


class A2AError(Exception):
    """Base exception for A2A adapter errors."""


class A2ATaskFailedError(A2AError):
    """Raised when A2A agent task fails or is rejected.

    This exception is raised when the A2A agent reports a task
    in the 'failed' or 'rejected' state.
    """


class A2AInputRequiredError(A2AError):
    """Raised when A2A agent requires additional input.

    This exception is raised when the A2A agent reports a task
    in the 'input_required' state, indicating that it needs more
    information to complete the task.
    """


class A2AConfigurationError(A2AError):
    """Raised when A2A adapter configuration is invalid.

    This exception is raised during initialization or setup when
    the adapter configuration is invalid or incompatible.
    """


class A2AConnectionError(A2AError):
    """Raised when connection to A2A agent fails.

    This exception is raised when the adapter cannot establish
    a connection to the A2A agent or when network errors occur.
    """


class A2AAuthenticationError(A2AError):
    """Raised when A2A agent requires authentication.

    This exception is raised when the A2A agent reports a task
    in the 'auth_required' state, indicating that authentication
    is needed before the task can continue.
    """


class A2ATaskCanceledError(A2AError):
    """Raised when A2A task is canceled.

    This exception is raised when the A2A agent reports a task
    in the 'canceled' state, indicating the task was canceled
    either by the user or the system.
    """
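Each exception above corresponds to an A2A task state ('failed'/'rejected', 'input_required', 'auth_required', 'canceled'). A hedged sketch of how a caller might route states to the hierarchy — the mapping and `raise_for_state` helper are illustrative, not part of the module, and the classes are stubbed so the snippet is self-contained:

```python
class A2AError(Exception): ...
class A2ATaskFailedError(A2AError): ...
class A2AInputRequiredError(A2AError): ...
class A2ATaskCanceledError(A2AError): ...

# Illustrative state-to-exception table mirroring the docstrings above.
STATE_TO_ERROR: dict[str, type[A2AError]] = {
    "failed": A2ATaskFailedError,
    "rejected": A2ATaskFailedError,
    "input_required": A2AInputRequiredError,
    "canceled": A2ATaskCanceledError,
}


def raise_for_state(state: str, detail: str = "") -> None:
    """Raise the matching A2A exception, if any, for a terminal task state."""
    exc = STATE_TO_ERROR.get(state)
    if exc is not None:
        raise exc(detail or state)


try:
    raise_for_state("input_required", "need a ticket number")
except A2AInputRequiredError as e:
    handled = str(e)
```

Because everything derives from `A2AError`, callers can catch the base class when they do not care which state occurred.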

@@ -1,56 +0,0 @@
"""Type protocols for A2A SDK components.

These protocols define the expected interfaces for A2A SDK types,
allowing for type checking without requiring the SDK to be installed.
"""

from collections.abc import AsyncIterator
from typing import Any, Protocol, runtime_checkable


@runtime_checkable
class AgentCardProtocol(Protocol):
    """Protocol for A2A AgentCard."""

    name: str
    version: str
    description: str
    skills: list[Any]
    capabilities: Any


@runtime_checkable
class ClientProtocol(Protocol):
    """Protocol for A2A Client."""

    async def send_message(self, message: Any) -> AsyncIterator[Any]:
        """Send message to A2A agent."""
        ...

    async def get_card(self) -> AgentCardProtocol:
        """Get agent card."""
        ...

    async def close(self) -> None:
        """Close client connection."""
        ...


@runtime_checkable
class MessageProtocol(Protocol):
    """Protocol for A2A Message."""

    role: Any
    message_id: str
    parts: list[Any]


@runtime_checkable
class TaskProtocol(Protocol):
    """Protocol for A2A Task."""

    id: str
    context_id: str
    status: Any
    history: list[Any] | None
    artifacts: list[Any] | None
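The `@runtime_checkable` decorator is what lets these protocols be used in `isinstance()` checks: the check is structural (attribute presence only, not types), so any object exposing the right attributes passes, SDK installed or not. A minimal demonstration with a two-attribute protocol (the `FakeCard` class is ours, for illustration):

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class CardLike(Protocol):
    name: str
    version: str


class FakeCard:
    def __init__(self) -> None:
        self.name = "demo"
        self.version = "1.0"


class Unrelated:
    pass


# Structural check: FakeCard never subclasses CardLike, yet it matches.
card_ok = isinstance(FakeCard(), CardLike)
card_bad = isinstance(Unrelated(), CardLike)
```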

@@ -2,7 +2,7 @@ from collections.abc import Sequence
import threading
from typing import Any

from crewai.agent import Agent
from crewai.agent.core import Agent
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.agent_events import (

@@ -21,6 +21,7 @@ from typing import (

from dotenv import load_dotenv
from pydantic import BaseModel, Field
from typing_extensions import Self

from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.llm_events import (

@@ -36,30 +37,34 @@ from crewai.events.types.tool_usage_events import (
    ToolUsageStartedEvent,
)
from crewai.llms.base_llm import BaseLLM
from crewai.utilities import InternalInstructor
from crewai.utilities.exceptions.context_window_exceeding_exception import (
    LLMContextLengthExceededError,
)
from crewai.utilities.logger_utils import suppress_warnings
from crewai.utilities.types import LLMMessage


if TYPE_CHECKING:
    from litellm import Choices
    from litellm.exceptions import ContextWindowExceededError
    from litellm.litellm_core_utils.get_supported_openai_params import (
        get_supported_openai_params,
    )
    from litellm.types.utils import ChatCompletionDeltaToolCall, ModelResponse
    from litellm.types.utils import ChatCompletionDeltaToolCall, Choices, ModelResponse
    from litellm.utils import supports_response_schema

    from crewai.agent.core import Agent
    from crewai.task import Task
    from crewai.tools.base_tool import BaseTool
    from crewai.utilities.types import LLMMessage

try:
    import litellm
    from litellm import Choices, CustomLogger
    from litellm.exceptions import ContextWindowExceededError
    from litellm.integrations.custom_logger import CustomLogger
    from litellm.litellm_core_utils.get_supported_openai_params import (
        get_supported_openai_params,
    )
    from litellm.types.utils import ChatCompletionDeltaToolCall, ModelResponse
    from litellm.types.utils import ChatCompletionDeltaToolCall, Choices, ModelResponse
    from litellm.utils import supports_response_schema

    LITELLM_AVAILABLE = True
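The try/except block above follows the standard optional-dependency pattern: attempt the import once at module load and record a flag, so callers branch on the flag instead of triggering an `ImportError` deep in a call path. In miniature:

```python
# Minimal sketch of the optional-import pattern used above; litellm is
# assumed optional and may legitimately be absent.
try:
    import litellm

    LITELLM_AVAILABLE = True
except ImportError:
    litellm = None  # type: ignore[assignment]
    LITELLM_AVAILABLE = False
```

Code elsewhere then guards litellm-specific paths with `if LITELLM_AVAILABLE:` rather than re-importing.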

@@ -72,6 +77,7 @@ except ImportError:
    ChatCompletionDeltaToolCall = None  # type: ignore
    ModelResponse = None  # type: ignore
    supports_response_schema = None  # type: ignore
    CustomLogger = None  # type: ignore


load_dotenv()

@@ -104,11 +110,13 @@ class FilteredStream(io.TextIOBase):

        return self._original_stream.write(s)

    def flush(self):
    def flush(self) -> None:
        if self._lock:
            with self._lock:
                return self._original_stream.flush()
        return None

    def __getattr__(self, name):
    def __getattr__(self, name: str) -> Any:
        """Delegate attribute access to the wrapped original stream.

        This ensures compatibility with libraries (e.g., Rich) that rely on

@@ -122,16 +130,16 @@ class FilteredStream(io.TextIOBase):
    # confuses Rich). These explicit pass-throughs ensure the wrapped Console
    # still sees a fully-featured stream.
    @property
    def encoding(self):
    def encoding(self) -> str | Any:  # type: ignore[override]
        return getattr(self._original_stream, "encoding", "utf-8")

    def isatty(self):
    def isatty(self) -> bool:
        return self._original_stream.isatty()

    def fileno(self):
    def fileno(self) -> int:
        return self._original_stream.fileno()

    def writable(self):
    def writable(self) -> bool:
        return True
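The explicit `isatty`/`fileno`/`writable` pass-throughs exist because `io.TextIOBase` already defines those names, so `__getattr__` alone never fires for them. A stripped-down sketch of the delegation pattern (independent of the real `FilteredStream`, which additionally filters content and takes a lock):

```python
import io


class PassthroughStream(io.TextIOBase):
    """Forward everything to a wrapped stream; override what the base defines."""

    def __init__(self, original: io.TextIOBase) -> None:
        self._original_stream = original

    def write(self, s: str) -> int:
        return self._original_stream.write(s)

    def writable(self) -> bool:
        return True

    def __getattr__(self, name: str):
        # Only reached for names the base class does not define.
        return getattr(self._original_stream, name)


buf = io.StringIO()
stream = PassthroughStream(buf)
stream.write("hello")
```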


@@ -312,7 +320,7 @@ class AccumulatedToolArgs(BaseModel):
class LLM(BaseLLM):
    completion_cost: float | None = None

    def __new__(cls, model: str, is_litellm: bool = False, **kwargs) -> LLM:
    def __new__(cls, model: str, is_litellm: bool = False, **kwargs: Any) -> LLM:
        """Factory method that routes to native SDK or falls back to LiteLLM."""
        if not model or not isinstance(model, str):
            raise ValueError("Model must be a non-empty string")

@@ -323,7 +331,9 @@ class LLM(BaseLLM):
        if native_class and not is_litellm and provider in SUPPORTED_NATIVE_PROVIDERS:
            try:
                model_string = model.partition("/")[2] if "/" in model else model
                return native_class(model=model_string, provider=provider, **kwargs)
                return cast(
                    Self, native_class(model=model_string, provider=provider, **kwargs)
                )
            except Exception as e:
                raise ImportError(f"Error importing native provider: {e}") from e

@@ -393,13 +403,21 @@ class LLM(BaseLLM):
        callbacks: list[Any] | None = None,
        reasoning_effort: Literal["none", "low", "medium", "high"] | None = None,
        stream: bool = False,
        **kwargs,
    ):
        **kwargs: Any,
    ) -> None:
        """Initialize LLM instance.

        Note: This __init__ method is only called for fallback instances.
        Native provider instances handle their own initialization in their respective classes.
        """
        super().__init__(
            model=model,
            temperature=temperature,
            api_key=api_key,
            base_url=base_url,
            timeout=timeout,
            **kwargs,
        )
        self.model = model
        self.timeout = timeout
        self.temperature = temperature

@@ -454,7 +472,7 @@ class LLM(BaseLLM):
    def _prepare_completion_params(
        self,
        messages: str | list[LLMMessage],
        tools: list[dict] | None = None,
        tools: list[dict[str, BaseTool]] | None = None,
    ) -> dict[str, Any]:
        """Prepare parameters for the completion call.
|
||||
|
||||
@@ -505,9 +523,10 @@ class LLM(BaseLLM):
|
||||
params: dict[str, Any],
|
||||
callbacks: list[Any] | None = None,
|
||||
available_functions: dict[str, Any] | None = None,
|
||||
from_task: Any | None = None,
|
||||
from_agent: Any | None = None,
|
||||
) -> str:
|
||||
from_task: Task | None = None,
|
||||
from_agent: Agent | None = None,
|
||||
response_model: type[BaseModel] | None = None,
|
||||
) -> Any:
|
||||
"""Handle a streaming response from the LLM.
|
||||
|
||||
Args:
|
||||
@@ -516,6 +535,7 @@ class LLM(BaseLLM):
|
||||
available_functions: Dict of available functions
|
||||
from_task: Optional task object
|
||||
from_agent: Optional agent object
|
||||
response_model: Optional response model
|
||||
|
||||
Returns:
|
||||
str: The complete response text
|
||||
@@ -716,14 +736,30 @@ class LLM(BaseLLM):
                 tool_calls = message.tool_calls
             except Exception as e:
                 logging.debug(f"Error checking for tool calls: {e}")
         # --- 8) If no tool calls or no available functions, return the text response directly

         if not tool_calls or not available_functions:
             # Track token usage and log callbacks if available in streaming mode
             if usage_info:
                 self._track_token_usage_internal(usage_info)
                 self._handle_streaming_callbacks(callbacks, usage_info, last_chunk)
             # Emit completion event and return response

+            if response_model and self.is_litellm:
+                instructor_instance = InternalInstructor(
+                    content=full_response,
+                    model=response_model,
+                    llm=self,
+                )
+                result = instructor_instance.to_pydantic()
+                structured_response = result.model_dump_json()
+                self._handle_emit_call_events(
+                    response=structured_response,
+                    call_type=LLMCallType.LLM_CALL,
+                    from_task=from_task,
+                    from_agent=from_agent,
+                    messages=params["messages"],
+                )
+                return structured_response
+
             self._handle_emit_call_events(
                 response=full_response,
                 call_type=LLMCallType.LLM_CALL,
@@ -784,9 +820,9 @@ class LLM(BaseLLM):
         tool_calls: list[ChatCompletionDeltaToolCall],
         accumulated_tool_args: defaultdict[int, AccumulatedToolArgs],
         available_functions: dict[str, Any] | None = None,
-        from_task: Any | None = None,
-        from_agent: Any | None = None,
-    ) -> None | str:
+        from_task: Task | None = None,
+        from_agent: Agent | None = None,
+    ) -> Any:
         for tool_call in tool_calls:
             current_tool_accumulator = accumulated_tool_args[tool_call.index]
@@ -869,8 +905,9 @@ class LLM(BaseLLM):
         params: dict[str, Any],
         callbacks: list[Any] | None = None,
         available_functions: dict[str, Any] | None = None,
-        from_task: Any | None = None,
-        from_agent: Any | None = None,
+        from_task: Task | None = None,
+        from_agent: Agent | None = None,
+        response_model: type[BaseModel] | None = None,
     ) -> str | Any:
         """Handle a non-streaming response from the LLM.

@@ -880,23 +917,69 @@ class LLM(BaseLLM):
             available_functions: Dict of available functions
             from_task: Optional Task that invoked the LLM
             from_agent: Optional Agent that invoked the LLM
+            response_model: Optional Response model

         Returns:
             str: The response text
         """
-        # --- 1) Make the completion call
+        # --- 1) Handle response_model with InternalInstructor for LiteLLM
+        if response_model and self.is_litellm:
+            from crewai.utilities.internal_instructor import InternalInstructor
+
+            messages = params.get("messages", [])
+            if not messages:
+                raise ValueError("Messages are required when using response_model")
+
+            # Combine all message content for InternalInstructor
+            combined_content = "\n\n".join(
+                f"{msg['role'].upper()}: {msg['content']}" for msg in messages
+            )
+
+            instructor_instance = InternalInstructor(
+                content=combined_content,
+                model=response_model,
+                llm=self,
+            )
+            result = instructor_instance.to_pydantic()
+            structured_response = result.model_dump_json()
+            self._handle_emit_call_events(
+                response=structured_response,
+                call_type=LLMCallType.LLM_CALL,
+                from_task=from_task,
+                from_agent=from_agent,
+                messages=params["messages"],
+            )
+            return structured_response
+
         try:
             # Attempt to make the completion call, but catch context window errors
             # and convert them to our own exception type for consistent handling
            # across the codebase. This allows CrewAgentExecutor to handle context
            # length issues appropriately.
+            if response_model:
+                params["response_model"] = response_model
             response = litellm.completion(**params)

         except ContextWindowExceededError as e:
             # Convert litellm's context window error to our own exception type
             # for consistent handling in the rest of the codebase
             raise LLMContextLengthExceededError(str(e)) from e
-        # --- 2) Extract response message and content
+
+        # --- 2) Handle structured output response (when response_model is provided)
+        if response_model is not None:
+            # When using instructor/response_model, litellm returns a Pydantic model instance
+            if isinstance(response, BaseModel):
+                structured_response = response.model_dump_json()
+                self._handle_emit_call_events(
+                    response=structured_response,
+                    call_type=LLMCallType.LLM_CALL,
+                    from_task=from_task,
+                    from_agent=from_agent,
+                    messages=params["messages"],
+                )
+                return structured_response
+
+        # --- 3) Extract response message and content (standard response)
         response_message = cast(Choices, cast(ModelResponse, response).choices)[
             0
         ].message
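The LiteLLM fallback above flattens the chat history into one prompt string before handing it to the instructor-based parser, then validates the structured reply and re-serializes it for the caller. A stdlib-only sketch of those two steps (`validate_structured` is an illustrative helper, not part of the codebase):

```python
import json


def combine_messages(messages: list[dict[str, str]]) -> str:
    """Flatten role/content messages into one prompt string,
    as the LiteLLM fallback does before structured parsing."""
    return "\n\n".join(f"{m['role'].upper()}: {m['content']}" for m in messages)


def validate_structured(raw: str, required: set[str]) -> dict:
    """Parse a JSON reply and check required fields, mirroring what a
    response_model validation step enforces before re-serializing."""
    data = json.loads(raw)
    missing = required - data.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return data
```

In the real code the validation is done by Pydantic (`model_validate` / `model_dump_json`); the helper above just shows the parse-then-check shape.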
@@ -951,9 +1034,9 @@ class LLM(BaseLLM):
         self,
         tool_calls: list[Any],
         available_functions: dict[str, Any] | None = None,
-        from_task: Any | None = None,
-        from_agent: Any | None = None,
-    ) -> str | None:
+        from_task: Task | None = None,
+        from_agent: Agent | None = None,
+    ) -> Any:
         """Handle a tool call from the LLM.

         Args:
@@ -1039,11 +1122,12 @@ class LLM(BaseLLM):
     def call(
         self,
         messages: str | list[LLMMessage],
-        tools: list[dict] | None = None,
+        tools: list[dict[str, BaseTool]] | None = None,
         callbacks: list[Any] | None = None,
         available_functions: dict[str, Any] | None = None,
-        from_task: Any | None = None,
-        from_agent: Any | None = None,
+        from_task: Task | None = None,
+        from_agent: Agent | None = None,
+        response_model: type[BaseModel] | None = None,
     ) -> str | Any:
         """High-level LLM call method.

@@ -1060,6 +1144,7 @@ class LLM(BaseLLM):
                 that can be invoked by the LLM.
             from_task: Optional Task that invoked the LLM
             from_agent: Optional Agent that invoked the LLM
+            response_model: Optional Model that contains a pydantic response model.

         Returns:
             Union[str, Any]: Either a text response from the LLM (str) or
@@ -1105,11 +1190,21 @@ class LLM(BaseLLM):
             # --- 7) Make the completion call and handle response
             if self.stream:
                 return self._handle_streaming_response(
-                    params, callbacks, available_functions, from_task, from_agent
+                    params=params,
+                    callbacks=callbacks,
+                    available_functions=available_functions,
+                    from_task=from_task,
+                    from_agent=from_agent,
+                    response_model=response_model,
                 )

             return self._handle_non_streaming_response(
-                params, callbacks, available_functions, from_task, from_agent
+                params=params,
+                callbacks=callbacks,
+                available_functions=available_functions,
+                from_task=from_task,
+                from_agent=from_agent,
+                response_model=response_model,
             )
         except LLMContextLengthExceededError:
             # Re-raise LLMContextLengthExceededError as it should be handled
@@ -1141,6 +1236,7 @@ class LLM(BaseLLM):
                 available_functions=available_functions,
                 from_task=from_task,
                 from_agent=from_agent,
+                response_model=response_model,
             )

             crewai_event_bus.emit(
@@ -1155,10 +1251,10 @@ class LLM(BaseLLM):
         self,
         response: Any,
         call_type: LLMCallType,
-        from_task: Any | None = None,
-        from_agent: Any | None = None,
-        messages: str | list[dict[str, Any]] | None = None,
-    ):
+        from_task: Task | None = None,
+        from_agent: Agent | None = None,
+        messages: str | list[LLMMessage] | None = None,
+    ) -> None:
         """Handle the events for the LLM call.

         Args:
@@ -1324,7 +1420,7 @@ class LLM(BaseLLM):
         return self.context_window_size

     @staticmethod
-    def set_callbacks(callbacks: list[Any]):
+    def set_callbacks(callbacks: list[Any]) -> None:
         """
         Attempt to keep a single set of callbacks in litellm by removing old
         duplicates and adding new ones.
@@ -1377,7 +1473,7 @@ class LLM(BaseLLM):
         litellm.success_callback = success_callbacks
         litellm.failure_callback = failure_callbacks

-    def __copy__(self):
+    def __copy__(self) -> LLM:
         """Create a shallow copy of the LLM instance."""
         # Filter out parameters that are already explicitly passed to avoid conflicts
         filtered_params = {
@@ -1437,7 +1533,7 @@ class LLM(BaseLLM):
             **filtered_params,
         )

-    def __deepcopy__(self, memo):
+    def __deepcopy__(self, memo: dict[int, Any] | None) -> LLM:
         """Create a deep copy of the LLM instance."""
         import copy

@@ -10,6 +10,7 @@ from abc import ABC, abstractmethod
 from datetime import datetime
 import json
 import logging
+import re
 from typing import TYPE_CHECKING, Any, Final

 from pydantic import BaseModel
@@ -31,11 +32,15 @@ from crewai.types.usage_metrics import UsageMetrics


 if TYPE_CHECKING:
+    from crewai.agent.core import Agent
+    from crewai.task import Task
+    from crewai.tools.base_tool import BaseTool
     from crewai.utilities.types import LLMMessage


 DEFAULT_CONTEXT_WINDOW_SIZE: Final[int] = 4096
 DEFAULT_SUPPORTS_STOP_WORDS: Final[bool] = True
+_JSON_EXTRACTION_PATTERN: Final[re.Pattern[str]] = re.compile(r"\{.*}", re.DOTALL)


 class BaseLLM(ABC):
@@ -65,9 +70,8 @@ class BaseLLM(ABC):
         temperature: float | None = None,
         api_key: str | None = None,
         base_url: str | None = None,
-        timeout: float | None = None,
         provider: str | None = None,
-        **kwargs,
+        **kwargs: Any,
     ) -> None:
         """Initialize the BaseLLM with default attributes.

@@ -93,8 +97,10 @@ class BaseLLM(ABC):
             self.stop: list[str] = []
         elif isinstance(stop, str):
             self.stop = [stop]
-        else:
+        elif isinstance(stop, list):
             self.stop = stop
+        else:
+            self.stop = []
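The tightened `stop` normalization above (None, a bare string, a list, or anything else) reduces to a small total function; a standalone sketch of the same branching:

```python
def normalize_stop(stop) -> list[str]:
    """Coerce a stop argument to a list of stop sequences:
    None -> [], "END" -> ["END"], a list passes through, anything else -> []."""
    if stop is None:
        return []
    if isinstance(stop, str):
        return [stop]
    if isinstance(stop, list):
        return stop
    return []
```

The new trailing `else` branch is the point of the change: before it, a non-string, non-list value would have been assigned as-is.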

         self._token_usage = {
             "total_tokens": 0,
@@ -118,11 +124,12 @@ class BaseLLM(ABC):
     def call(
         self,
         messages: str | list[LLMMessage],
-        tools: list[dict] | None = None,
+        tools: list[dict[str, BaseTool]] | None = None,
         callbacks: list[Any] | None = None,
         available_functions: dict[str, Any] | None = None,
-        from_task: Any | None = None,
-        from_agent: Any | None = None,
+        from_task: Task | None = None,
+        from_agent: Agent | None = None,
+        response_model: type[BaseModel] | None = None,
     ) -> str | Any:
         """Call the LLM with the given messages.

@@ -139,6 +146,7 @@ class BaseLLM(ABC):
                 that can be invoked by the LLM.
             from_task: Optional task caller to be used for the LLM call.
             from_agent: Optional agent caller to be used for the LLM call.
+            response_model: Optional response model to be used for the LLM call.

         Returns:
             Either a text response from the LLM (str) or
@@ -150,7 +158,9 @@ class BaseLLM(ABC):
             RuntimeError: If the LLM request fails for other reasons.
         """

-    def _convert_tools_for_interference(self, tools: list[dict]) -> list[dict]:
+    def _convert_tools_for_interference(
+        self, tools: list[dict[str, BaseTool]]
+    ) -> list[dict[str, BaseTool]]:
         """Convert tools to a format that can be used for interference.

         Args:
@@ -237,11 +247,11 @@ class BaseLLM(ABC):
     def _emit_call_started_event(
         self,
         messages: str | list[LLMMessage],
-        tools: list[dict] | None = None,
+        tools: list[dict[str, BaseTool]] | None = None,
         callbacks: list[Any] | None = None,
         available_functions: dict[str, Any] | None = None,
-        from_task: Any | None = None,
-        from_agent: Any | None = None,
+        from_task: Task | None = None,
+        from_agent: Agent | None = None,
     ) -> None:
         """Emit LLM call started event."""
         if not hasattr(crewai_event_bus, "emit"):
@@ -264,8 +274,8 @@ class BaseLLM(ABC):
         self,
         response: Any,
         call_type: LLMCallType,
-        from_task: Any | None = None,
-        from_agent: Any | None = None,
+        from_task: Task | None = None,
+        from_agent: Agent | None = None,
         messages: str | list[dict[str, Any]] | None = None,
     ) -> None:
         """Emit LLM call completed event."""
@@ -284,8 +294,8 @@ class BaseLLM(ABC):
     def _emit_call_failed_event(
         self,
         error: str,
-        from_task: Any | None = None,
-        from_agent: Any | None = None,
+        from_task: Task | None = None,
+        from_agent: Agent | None = None,
     ) -> None:
         """Emit LLM call failed event."""
         if not hasattr(crewai_event_bus, "emit"):
@@ -303,8 +313,8 @@ class BaseLLM(ABC):
     def _emit_stream_chunk_event(
         self,
         chunk: str,
-        from_task: Any | None = None,
-        from_agent: Any | None = None,
+        from_task: Task | None = None,
+        from_agent: Agent | None = None,
         tool_call: dict[str, Any] | None = None,
     ) -> None:
         """Emit stream chunk event."""
@@ -326,8 +336,8 @@ class BaseLLM(ABC):
         function_name: str,
         function_args: dict[str, Any],
         available_functions: dict[str, Any],
-        from_task: Any | None = None,
-        from_agent: Any | None = None,
+        from_task: Task | None = None,
+        from_agent: Agent | None = None,
     ) -> str | None:
         """Handle tool execution with proper event emission.

@@ -443,10 +453,10 @@ class BaseLLM(ABC):
                     f"Message at index {i} must have 'role' and 'content' keys"
                 )

-        return messages  # type: ignore[return-value]
+        return messages

+    @staticmethod
     def _validate_structured_output(
-        self,
         response: str,
         response_format: type[BaseModel] | None,
     ) -> str | BaseModel:
@@ -471,10 +481,7 @@ class BaseLLM(ABC):
             data = json.loads(response)
             return response_format.model_validate(data)

-        # Try to extract JSON from response
-        import re
-
-        json_match = re.search(r"\{.*\}", response, re.DOTALL)
+        json_match = _JSON_EXTRACTION_PATTERN.search(response)
         if json_match:
             data = json.loads(json_match.group())
             return response_format.model_validate(data)
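Hoisting the pattern to a module-level `re.compile` avoids recompiling it on every validation call. A minimal sketch of the extraction step, using the same greedy DOTALL pattern as the constant introduced above:

```python
import json
import re

# Compiled once at import time; greedy, so a match spans from the
# first "{" to the last "}" in the response.
_JSON_EXTRACTION_PATTERN = re.compile(r"\{.*}", re.DOTALL)


def extract_json_object(response: str) -> dict:
    """Pull a JSON object out of a chatty model reply and parse it."""
    match = _JSON_EXTRACTION_PATTERN.search(response)
    if match is None:
        raise ValueError("no JSON object found in response")
    return json.loads(match.group())
```

The greedy match is deliberate: it tolerates prose before and after the object, though it assumes the reply contains only one JSON object.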
@@ -487,7 +494,8 @@ class BaseLLM(ABC):
                 f"Failed to parse response into {response_format.__name__}: {e}"
             ) from e

-    def _extract_provider(self, model: str) -> str:
+    @staticmethod
+    def _extract_provider(model: str) -> str:
         """Extract provider from model string.

         Args:

@@ -1,7 +1,13 @@
 from __future__ import annotations

+import json
 import logging
 import os
 from typing import Any, cast

+from pydantic import BaseModel
+
 from crewai.events.types.llm_events import LLMCallType
 from crewai.llms.base_llm import BaseLLM
 from crewai.utilities.agent_utils import is_context_length_exceeded
@@ -109,6 +115,7 @@ class AnthropicCompletion(BaseLLM):
         available_functions: dict[str, Any] | None = None,
         from_task: Any | None = None,
         from_agent: Any | None = None,
+        response_model: type[BaseModel] | None = None,
     ) -> str | Any:
         """Call Anthropic messages API.

@@ -147,11 +154,19 @@ class AnthropicCompletion(BaseLLM):
             # Handle streaming vs non-streaming
             if self.stream:
                 return self._handle_streaming_completion(
-                    completion_params, available_functions, from_task, from_agent
+                    completion_params,
+                    available_functions,
+                    from_task,
+                    from_agent,
+                    response_model,
                 )

             return self._handle_completion(
-                completion_params, available_functions, from_task, from_agent
+                completion_params,
+                available_functions,
+                from_task,
+                from_agent,
+                response_model,
             )

         except Exception as e:
@@ -290,8 +305,19 @@ class AnthropicCompletion(BaseLLM):
         available_functions: dict[str, Any] | None = None,
         from_task: Any | None = None,
         from_agent: Any | None = None,
+        response_model: type[BaseModel] | None = None,
     ) -> str | Any:
         """Handle non-streaming message completion."""
+        if response_model:
+            structured_tool = {
+                "name": "structured_output",
+                "description": "Returns structured data according to the schema",
+                "input_schema": response_model.model_json_schema(),
+            }
+
+            params["tools"] = [structured_tool]
+            params["tool_choice"] = {"type": "tool", "name": "structured_output"}
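Anthropic has no first-class `response_format`, so the hunk above forces a single synthetic tool whose `input_schema` is the Pydantic model's JSON schema; the model can then only answer by "calling" that tool. The request assembly is plain dict construction, sketched here in isolation (the schema and model name are hand-written for illustration rather than produced by `model_json_schema()`):

```python
def force_structured_output(params: dict, schema: dict) -> dict:
    """Constrain an Anthropic-style request to a single 'structured_output'
    tool so the reply must match the given JSON schema."""
    structured_tool = {
        "name": "structured_output",
        "description": "Returns structured data according to the schema",
        "input_schema": schema,
    }
    params["tools"] = [structured_tool]
    params["tool_choice"] = {"type": "tool", "name": "structured_output"}
    return params


schema = {"type": "object", "properties": {"score": {"type": "integer"}}}
request = force_structured_output({"model": "claude-x", "messages": []}, schema)
```

On the response side, the handler then looks for a `ToolUseBlock` named `structured_output` and serializes its `input` as the structured JSON result.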

         try:
             response: Message = self.client.messages.create(**params)

@@ -304,6 +330,24 @@ class AnthropicCompletion(BaseLLM):
             usage = self._extract_anthropic_token_usage(response)
             self._track_token_usage_internal(usage)

+            if response_model and response.content:
+                tool_uses = [
+                    block for block in response.content if isinstance(block, ToolUseBlock)
+                ]
+                if tool_uses and tool_uses[0].name == "structured_output":
+                    structured_data = tool_uses[0].input
+                    structured_json = json.dumps(structured_data)
+
+                    self._emit_call_completed_event(
+                        response=structured_json,
+                        call_type=LLMCallType.LLM_CALL,
+                        from_task=from_task,
+                        from_agent=from_agent,
+                        messages=params["messages"],
+                    )
+
+                    return structured_json
+
             # Check if Claude wants to use tools
             if response.content and available_functions:
                 tool_uses = [
@@ -349,8 +393,19 @@ class AnthropicCompletion(BaseLLM):
         available_functions: dict[str, Any] | None = None,
         from_task: Any | None = None,
         from_agent: Any | None = None,
+        response_model: type[BaseModel] | None = None,
     ) -> str:
         """Handle streaming message completion."""
+        if response_model:
+            structured_tool = {
+                "name": "structured_output",
+                "description": "Returns structured data according to the schema",
+                "input_schema": response_model.model_json_schema(),
+            }
+
+            params["tools"] = [structured_tool]
+            params["tool_choice"] = {"type": "tool", "name": "structured_output"}
+
         full_response = ""

         # Remove 'stream' parameter as messages.stream() doesn't accept it
@@ -374,6 +429,26 @@ class AnthropicCompletion(BaseLLM):
             usage = self._extract_anthropic_token_usage(final_message)
             self._track_token_usage_internal(usage)

+            if response_model and final_message.content:
+                tool_uses = [
+                    block
+                    for block in final_message.content
+                    if isinstance(block, ToolUseBlock)
+                ]
+                if tool_uses and tool_uses[0].name == "structured_output":
+                    structured_data = tool_uses[0].input
+                    structured_json = json.dumps(structured_data)
+
+                    self._emit_call_completed_event(
+                        response=structured_json,
+                        call_type=LLMCallType.LLM_CALL,
+                        from_task=from_task,
+                        from_agent=from_agent,
+                        messages=params["messages"],
+                    )
+
+                    return structured_json
+
             if final_message.content and available_functions:
                 tool_uses = [
                     block

@@ -1,7 +1,11 @@
 from __future__ import annotations

+import json
 import logging
 import os
-from typing import Any
+from typing import Any, TYPE_CHECKING

+from pydantic import BaseModel
+
 from crewai.utilities.agent_utils import is_context_length_exceeded
 from crewai.utilities.exceptions.context_window_exceeding_exception import (
@@ -9,6 +13,9 @@ from crewai.utilities.exceptions.context_window_exceeding_exception import (
 )
 from crewai.utilities.types import LLMMessage

+if TYPE_CHECKING:
+    from crewai.tools.base_tool import BaseTool
+

 try:
     from azure.ai.inference import (  # type: ignore[import-not-found]
@@ -157,11 +164,12 @@ class AzureCompletion(BaseLLM):
     def call(
         self,
         messages: str | list[LLMMessage],
-        tools: list[dict] | None = None,
+        tools: list[dict[str, BaseTool]] | None = None,
         callbacks: list[Any] | None = None,
         available_functions: dict[str, Any] | None = None,
         from_task: Any | None = None,
         from_agent: Any | None = None,
+        response_model: type[BaseModel] | None = None,
     ) -> str | Any:
         """Call Azure AI Inference chat completions API.

@@ -192,17 +200,25 @@ class AzureCompletion(BaseLLM):

             # Prepare completion parameters
             completion_params = self._prepare_completion_params(
-                formatted_messages, tools
+                formatted_messages, tools, response_model
             )

             # Handle streaming vs non-streaming
             if self.stream:
                 return self._handle_streaming_completion(
-                    completion_params, available_functions, from_task, from_agent
+                    completion_params,
+                    available_functions,
+                    from_task,
+                    from_agent,
+                    response_model,
                 )

             return self._handle_completion(
-                completion_params, available_functions, from_task, from_agent
+                completion_params,
+                available_functions,
+                from_task,
+                from_agent,
+                response_model,
             )

         except HttpResponseError as e:
@@ -234,12 +250,14 @@ class AzureCompletion(BaseLLM):
         self,
         messages: list[LLMMessage],
         tools: list[dict] | None = None,
+        response_model: type[BaseModel] | None = None,
     ) -> dict[str, Any]:
         """Prepare parameters for Azure AI Inference chat completion.

         Args:
             messages: Formatted messages for Azure
             tools: Tool definitions
+            response_model: Pydantic model for structured output

         Returns:
             Parameters dictionary for Azure API
@@ -249,6 +267,15 @@ class AzureCompletion(BaseLLM):
             "stream": self.stream,
         }

+        if response_model and self.is_openai_model:
+            params["response_format"] = {
+                "type": "json_schema",
+                "json_schema": {
+                    "name": response_model.__name__,
+                    "schema": response_model.model_json_schema(),
+                },
+            }
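For OpenAI-compatible Azure deployments the same goal is reached through `response_format` with a `json_schema` payload rather than a forced tool. The nesting is easy to get wrong, so here is the shape in isolation (a sketch; the name and schema are hand-written for illustration):

```python
def json_schema_response_format(name: str, schema: dict) -> dict:
    """Build an OpenAI-style response_format payload for structured output."""
    return {
        "type": "json_schema",
        "json_schema": {"name": name, "schema": schema},
    }


response_format = json_schema_response_format(
    "Answer", {"type": "object", "properties": {"city": {"type": "string"}}}
)
```

In the diff, `name` comes from `response_model.__name__` and `schema` from `response_model.model_json_schema()`.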

         # Only include model parameter for non-Azure OpenAI endpoints
         # Azure OpenAI endpoints have the deployment name in the URL
         if not self.is_azure_openai_endpoint:
@@ -334,6 +361,7 @@ class AzureCompletion(BaseLLM):
         available_functions: dict[str, Any] | None = None,
         from_task: Any | None = None,
         from_agent: Any | None = None,
+        response_model: type[BaseModel] | None = None,
     ) -> str | Any:
         """Handle non-streaming chat completion."""
         # Make API call
@@ -350,6 +378,26 @@ class AzureCompletion(BaseLLM):
         usage = self._extract_azure_token_usage(response)
         self._track_token_usage_internal(usage)

+        if response_model and self.is_openai_model:
+            content = message.content or ""
+            try:
+                structured_data = response_model.model_validate_json(content)
+                structured_json = structured_data.model_dump_json()
+
+                self._emit_call_completed_event(
+                    response=structured_json,
+                    call_type=LLMCallType.LLM_CALL,
+                    from_task=from_task,
+                    from_agent=from_agent,
+                    messages=params["messages"],
+                )
+
+                return structured_json
+            except Exception as e:
+                error_msg = f"Failed to validate structured output with model {response_model.__name__}: {e}"
+                logging.error(error_msg)
+                raise ValueError(error_msg) from e
+
         # Handle tool calls
         if message.tool_calls and available_functions:
             tool_call = message.tool_calls[0]  # Handle first tool call
@@ -409,6 +457,7 @@ class AzureCompletion(BaseLLM):
         available_functions: dict[str, Any] | None = None,
         from_task: Any | None = None,
         from_agent: Any | None = None,
+        response_model: type[BaseModel] | None = None,
     ) -> str:
         """Handle streaming chat completion."""
         full_response = ""

@@ -5,6 +5,7 @@ import logging
 import os
 from typing import TYPE_CHECKING, Any, TypedDict, cast

+from pydantic import BaseModel
 from typing_extensions import Required

 from crewai.events.types.llm_events import LLMCallType
@@ -240,6 +241,7 @@ class BedrockCompletion(BaseLLM):
         available_functions: dict[str, Any] | None = None,
         from_task: Any | None = None,
         from_agent: Any | None = None,
+        response_model: type[BaseModel] | None = None,
     ) -> str | Any:
         """Call AWS Bedrock Converse API."""
         try:

@@ -1,7 +1,10 @@
 import json
 import logging
 import os
 from typing import Any, cast

+from pydantic import BaseModel
+
 from crewai.events.types.llm_events import LLMCallType
 from crewai.llms.base_llm import BaseLLM
 from crewai.utilities.agent_utils import is_context_length_exceeded
@@ -173,6 +176,7 @@ class GeminiCompletion(BaseLLM):
         available_functions: dict[str, Any] | None = None,
         from_task: Any | None = None,
         from_agent: Any | None = None,
+        response_model: type[BaseModel] | None = None,
     ) -> str | Any:
         """Call Google Gemini generate content API.

@@ -202,7 +206,9 @@ class GeminiCompletion(BaseLLM):
                 messages  # type: ignore[arg-type]
             )

-            config = self._prepare_generation_config(system_instruction, tools)
+            config = self._prepare_generation_config(
+                system_instruction, tools, response_model
+            )

             if self.stream:
                 return self._handle_streaming_completion(
@@ -211,6 +217,7 @@ class GeminiCompletion(BaseLLM):
                     available_functions,
                     from_task,
                     from_agent,
+                    response_model,
                 )

             return self._handle_completion(
@@ -220,6 +227,7 @@ class GeminiCompletion(BaseLLM):
                 available_functions,
                 from_task,
                 from_agent,
+                response_model,
             )

         except APIError as e:
@@ -241,12 +249,14 @@ class GeminiCompletion(BaseLLM):
         self,
         system_instruction: str | None = None,
         tools: list[dict] | None = None,
+        response_model: type[BaseModel] | None = None,
     ) -> types.GenerateContentConfig:
         """Prepare generation config for Google Gemini API.

         Args:
             system_instruction: System instruction for the model
             tools: Tool definitions
+            response_model: Pydantic model for structured output

         Returns:
             GenerateContentConfig object for Gemini API
@@ -274,6 +284,10 @@ class GeminiCompletion(BaseLLM):
         if self.stop_sequences:
             config_params["stop_sequences"] = self.stop_sequences

+        if response_model:
+            config_params["response_mime_type"] = "application/json"
+            config_params["response_schema"] = response_model.model_json_schema()
+
         # Handle tools for supported models
         if tools and self.supports_tools:
             config_params["tools"] = self._convert_tools_for_interference(tools)
@@ -358,6 +372,7 @@ class GeminiCompletion(BaseLLM):
         available_functions: dict[str, Any] | None = None,
         from_task: Any | None = None,
         from_agent: Any | None = None,
+        response_model: type[BaseModel] | None = None,
     ) -> str | Any:
         """Handle non-streaming content generation."""
         api_params = {
@@ -423,6 +438,7 @@ class GeminiCompletion(BaseLLM):
         available_functions: dict[str, Any] | None = None,
         from_task: Any | None = None,
         from_agent: Any | None = None,
+        response_model: type[BaseModel] | None = None,
     ) -> str:
         """Handle streaming content generation."""
         full_response = ""

@@ -1,8 +1,10 @@
 from __future__ import annotations

 from collections.abc import Iterator
+import json
 import logging
 import os
-from typing import Any
+from typing import TYPE_CHECKING, Any

 from openai import APIConnectionError, NotFoundError, OpenAI
 from openai.types.chat import ChatCompletion, ChatCompletionChunk
@@ -19,6 +21,12 @@ from crewai.utilities.exceptions.context_window_exceeding_exception import (
 from crewai.utilities.types import LLMMessage


+if TYPE_CHECKING:
+    from crewai.agent.core import Agent
+    from crewai.task import Task
+    from crewai.tools.base_tool import BaseTool
+
+
 class OpenAICompletion(BaseLLM):
     """OpenAI native completion implementation.

@@ -51,8 +59,8 @@ class OpenAICompletion(BaseLLM):
         top_logprobs: int | None = None,
         reasoning_effort: str | None = None,
         provider: str | None = None,
-        **kwargs,
-    ):
+        **kwargs: Any,
+    ) -> None:
         """Initialize OpenAI chat completion client."""

         if provider is None:
@@ -129,11 +137,12 @@ class OpenAICompletion(BaseLLM):
     def call(
         self,
         messages: str | list[LLMMessage],
-        tools: list[dict] | None = None,
+        tools: list[dict[str, BaseTool]] | None = None,
         callbacks: list[Any] | None = None,
         available_functions: dict[str, Any] | None = None,
-        from_task: Any | None = None,
-        from_agent: Any | None = None,
+        from_task: Task | None = None,
+        from_agent: Agent | None = None,
+        response_model: type[BaseModel] | None = None,
     ) -> str | Any:
         """Call OpenAI chat completion API.

@@ -144,13 +153,14 @@ class OpenAICompletion(BaseLLM):
             available_functions: Available functions for tool calling
             from_task: Task that initiated the call
             from_agent: Agent that initiated the call
+            response_model: Response model for structured output.

         Returns:
             Chat completion response or tool call result
         """
         try:
             self._emit_call_started_event(
-                messages=messages,  # type: ignore[arg-type]
+                messages=messages,
                 tools=tools,
                 callbacks=callbacks,
                 available_functions=available_functions,
@@ -158,19 +168,27 @@ class OpenAICompletion(BaseLLM):
                 from_agent=from_agent,
             )

-            formatted_messages = self._format_messages(messages)  # type: ignore[arg-type]
+            formatted_messages = self._format_messages(messages)

             completion_params = self._prepare_completion_params(
-                formatted_messages, tools
+                messages=formatted_messages, tools=tools
             )

             if self.stream:
                 return self._handle_streaming_completion(
-                    completion_params, available_functions, from_task, from_agent
+                    params=completion_params,
+                    available_functions=available_functions,
+                    from_task=from_task,
+                    from_agent=from_agent,
+                    response_model=response_model,
                 )

             return self._handle_completion(
-                completion_params, available_functions, from_task, from_agent
+                params=completion_params,
+                available_functions=available_functions,
+                from_task=from_task,
+                from_agent=from_agent,
+                response_model=response_model,
             )

         except Exception as e:
@@ -182,14 +200,15 @@ class OpenAICompletion(BaseLLM):
             raise

     def _prepare_completion_params(
-        self, messages: list[LLMMessage], tools: list[dict] | None = None
+        self, messages: list[LLMMessage], tools: list[dict[str, BaseTool]] | None = None
     ) -> dict[str, Any]:
         """Prepare parameters for OpenAI chat completion."""
-        params = {
+        params: dict[str, Any] = {
             "model": self.model,
             "messages": messages,
-            "stream": self.stream,
         }
+        if self.stream:
+            params["stream"] = self.stream

         params.update(self.additional_params)

@@ -216,22 +235,6 @@ class OpenAICompletion(BaseLLM):
         if self.is_o1_model and self.reasoning_effort:
|
||||
params["reasoning_effort"] = self.reasoning_effort
|
||||
|
||||
# Handle response format for structured outputs
|
||||
if self.response_format:
|
||||
if isinstance(self.response_format, type) and issubclass(
|
||||
self.response_format, BaseModel
|
||||
):
|
||||
# Convert Pydantic model to OpenAI response format
|
||||
params["response_format"] = {
|
||||
"type": "json_schema",
|
||||
"json_schema": {
|
||||
"name": self.response_format.__name__,
|
||||
"schema": self.response_format.model_json_schema(),
|
||||
},
|
||||
}
|
||||
else:
|
||||
params["response_format"] = self.response_format
|
||||
|
||||
if tools:
|
||||
params["tools"] = self._convert_tools_for_interference(tools)
|
||||
params["tool_choice"] = "auto"
|
||||
@@ -251,7 +254,9 @@ class OpenAICompletion(BaseLLM):
|
||||
|
||||
return {k: v for k, v in params.items() if k not in crewai_specific_params}
|
||||
|
||||
def _convert_tools_for_interference(self, tools: list[dict]) -> list[dict]:
|
||||
def _convert_tools_for_interference(
|
||||
self, tools: list[dict[str, BaseTool]]
|
||||
) -> list[dict[str, Any]]:
|
||||
"""Convert CrewAI tool format to OpenAI function calling format."""
|
||||
from crewai.llms.providers.utils.common import safe_tool_conversion
|
||||
|
||||
@@ -283,9 +288,35 @@ class OpenAICompletion(BaseLLM):
|
||||
available_functions: dict[str, Any] | None = None,
|
||||
from_task: Any | None = None,
|
||||
from_agent: Any | None = None,
|
||||
response_model: type[BaseModel] | None = None,
|
||||
) -> str | Any:
|
||||
"""Handle non-streaming chat completion."""
|
||||
try:
|
||||
if response_model:
|
||||
parsed_response = self.client.beta.chat.completions.parse(
|
||||
**params,
|
||||
response_format=response_model,
|
||||
)
|
||||
math_reasoning = parsed_response.choices[0].message
|
||||
|
||||
if math_reasoning.refusal:
|
||||
pass
|
||||
|
||||
usage = self._extract_openai_token_usage(parsed_response)
|
||||
self._track_token_usage_internal(usage)
|
||||
|
||||
parsed_object = parsed_response.choices[0].message.parsed
|
||||
if parsed_object:
|
||||
structured_json = parsed_object.model_dump_json()
|
||||
self._emit_call_completed_event(
|
||||
response=structured_json,
|
||||
call_type=LLMCallType.LLM_CALL,
|
||||
from_task=from_task,
|
||||
from_agent=from_agent,
|
||||
messages=params["messages"],
|
||||
)
|
||||
return structured_json
|
||||
|
||||
response: ChatCompletion = self.client.chat.completions.create(**params)
|
||||
|
||||
usage = self._extract_openai_token_usage(response)
|
||||
@@ -380,12 +411,57 @@ class OpenAICompletion(BaseLLM):
|
||||
available_functions: dict[str, Any] | None = None,
|
||||
from_task: Any | None = None,
|
||||
from_agent: Any | None = None,
|
||||
response_model: type[BaseModel] | None = None,
|
||||
) -> str:
|
||||
"""Handle streaming chat completion."""
|
||||
full_response = ""
|
||||
tool_calls = {}
|
||||
|
||||
# Make streaming API call
|
||||
if response_model:
|
||||
completion_stream: Iterator[ChatCompletionChunk] = (
|
||||
self.client.chat.completions.create(**params)
|
||||
)
|
||||
|
||||
accumulated_content = ""
|
||||
for chunk in completion_stream:
|
||||
if not chunk.choices:
|
||||
continue
|
||||
|
||||
choice = chunk.choices[0]
|
||||
delta: ChoiceDelta = choice.delta
|
||||
|
||||
if delta.content:
|
||||
accumulated_content += delta.content
|
||||
self._emit_stream_chunk_event(
|
||||
chunk=delta.content,
|
||||
from_task=from_task,
|
||||
from_agent=from_agent,
|
||||
)
|
||||
|
||||
try:
|
||||
parsed_object = response_model.model_validate_json(accumulated_content)
|
||||
structured_json = parsed_object.model_dump_json()
|
||||
|
||||
self._emit_call_completed_event(
|
||||
response=structured_json,
|
||||
call_type=LLMCallType.LLM_CALL,
|
||||
from_task=from_task,
|
||||
from_agent=from_agent,
|
||||
messages=params["messages"],
|
||||
)
|
||||
|
||||
return structured_json
|
||||
except Exception as e:
|
||||
logging.error(f"Failed to parse structured output from stream: {e}")
|
||||
self._emit_call_completed_event(
|
||||
response=accumulated_content,
|
||||
call_type=LLMCallType.LLM_CALL,
|
||||
from_task=from_task,
|
||||
from_agent=from_agent,
|
||||
messages=params["messages"],
|
||||
)
|
||||
return accumulated_content
|
||||
|
||||
stream: Iterator[ChatCompletionChunk] = self.client.chat.completions.create(
|
||||
**params
|
||||
)
|
||||
@@ -395,20 +471,18 @@ class OpenAICompletion(BaseLLM):
|
||||
continue
|
||||
|
||||
choice = chunk.choices[0]
|
||||
delta: ChoiceDelta = choice.delta
|
||||
chunk_delta: ChoiceDelta = choice.delta
|
||||
|
||||
# Handle content streaming
|
||||
if delta.content:
|
||||
full_response += delta.content
|
||||
if chunk_delta.content:
|
||||
full_response += chunk_delta.content
|
||||
self._emit_stream_chunk_event(
|
||||
chunk=delta.content,
|
||||
chunk=chunk_delta.content,
|
||||
from_task=from_task,
|
||||
from_agent=from_agent,
|
||||
)
|
||||
|
||||
# Handle tool call streaming
|
||||
if delta.tool_calls:
|
||||
for tool_call in delta.tool_calls:
|
||||
if chunk_delta.tool_calls:
|
||||
for tool_call in chunk_delta.tool_calls:
|
||||
call_id = tool_call.id or "default"
|
||||
if call_id not in tool_calls:
|
||||
tool_calls[call_id] = {
|
||||
@@ -454,10 +528,8 @@ class OpenAICompletion(BaseLLM):
|
||||
if result is not None:
|
||||
return result
|
||||
|
||||
# Apply stop words to full response
|
||||
full_response = self._apply_stop_words(full_response)
|
||||
|
||||
# Emit completion event and return full response
|
||||
self._emit_call_completed_event(
|
||||
response=full_response,
|
||||
call_type=LLMCallType.LLM_CALL,
|
||||
@@ -523,12 +595,9 @@ class OpenAICompletion(BaseLLM):
|
||||
}
|
||||
return {"total_tokens": 0}
|
||||
|
||||
def _format_messages( # type: ignore[override]
|
||||
self, messages: str | list[LLMMessage]
|
||||
) -> list[LLMMessage]:
|
||||
def _format_messages(self, messages: str | list[LLMMessage]) -> list[LLMMessage]:
|
||||
"""Format messages for OpenAI API."""
|
||||
# Use base class formatting first
|
||||
base_formatted = super()._format_messages(messages) # type: ignore[arg-type]
|
||||
base_formatted = super()._format_messages(messages)
|
||||
|
||||
# Apply OpenAI-specific formatting
|
||||
formatted_messages: list[LLMMessage] = []
|
||||
|
||||
@@ -32,7 +32,7 @@ from pydantic_core import PydanticCustomError

 from crewai.agents.agent_builder.base_agent import BaseAgent
 from crewai.events.event_bus import crewai_event_bus
-from crewai.events.event_types import (
+from crewai.events.types.task_events import (
     TaskCompletedEvent,
     TaskFailedEvent,
     TaskStartedEvent,
@@ -123,6 +123,10 @@ class Task(BaseModel):
         description="A Pydantic model to be used to create a Pydantic output.",
         default=None,
     )
+    response_model: type[BaseModel] | None = Field(
+        description="A Pydantic model for structured LLM outputs using native provider features.",
+        default=None,
+    )
     output_file: str | None = Field(
         description="A file path to be used to create a file output.",
         default=None,
25  lib/crewai/src/crewai/types/utils.py  Normal file
@@ -0,0 +1,25 @@
+"""Utilities for creating and manipulating types."""
+
+from typing import Annotated, Final, Literal
+
+from typing_extensions import TypeAliasType
+
+
+_DYNAMIC_LITERAL_ALIAS: Final[Literal["DynamicLiteral"]] = "DynamicLiteral"
+
+
+def create_literals_from_strings(
+    values: Annotated[
+        tuple[str, ...], "Should contain unique strings; duplicates will be removed"
+    ],
+) -> type:
+    """Create a Literal type for each A2A agent ID.
+
+    Args:
+        values: a tuple of the A2A agent IDs
+
+    Returns:
+        Literal type for each A2A agent ID
+    """
+    unique_values: tuple[str, ...] = tuple(dict.fromkeys(values))
+    return Literal.__getitem__(unique_values)
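The new `create_literals_from_strings` utility builds a `Literal` type at runtime from a tuple of agent IDs, deduplicating while preserving order. A standalone sketch of the same two-line approach, re-implemented here so it runs without crewai installed:

```python
from typing import Literal, get_args


def create_literals_from_strings(values: tuple[str, ...]) -> type:
    # Mirrors the new crewai.types.utils helper: dedupe while preserving
    # insertion order, then index Literal with the resulting tuple.
    unique_values = tuple(dict.fromkeys(values))
    return Literal.__getitem__(unique_values)


# Duplicate "billing" is dropped; order of first appearance is kept.
AgentId = create_literals_from_strings(("billing", "search", "billing"))
```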
@@ -5,6 +5,7 @@ import json
 import re
 from typing import TYPE_CHECKING, Any, Final, Literal, TypedDict

+from pydantic import BaseModel
 from rich.console import Console

 from crewai.agents.constants import FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE
@@ -226,6 +227,7 @@ def get_llm_response(
     printer: Printer,
     from_task: Task | None = None,
     from_agent: Agent | LiteAgent | None = None,
+    response_model: type[BaseModel] | None = None,
 ) -> str:
     """Call the LLM and return the response, handling any invalid responses.

@@ -236,6 +238,7 @@ def get_llm_response(
         printer: Printer instance for output
         from_task: Optional task context for the LLM call
         from_agent: Optional agent context for the LLM call
+        response_model: Optional Pydantic model for structured outputs

     Returns:
         The response from the LLM as a string
@@ -250,6 +253,7 @@ def get_llm_response(
             callbacks=callbacks,
             from_task=from_task,
             from_agent=from_agent,
+            response_model=response_model,
         )
     except Exception as e:
         raise e
@@ -1,8 +1,10 @@
 from __future__ import annotations

+from collections.abc import Callable
+from copy import deepcopy
 import json
 import re
-from typing import TYPE_CHECKING, Any, Final, TypedDict, Union, get_args, get_origin
+from typing import TYPE_CHECKING, Any, Final, TypedDict

 from pydantic import BaseModel, ValidationError
 from typing_extensions import Unpack
@@ -53,7 +55,14 @@ class Converter(OutputConverter):
         """
         try:
             if self.llm.supports_function_calling():
-                result = self._create_instructor().to_pydantic()
+                response = self.llm.call(
+                    messages=[
+                        {"role": "system", "content": self.instructions},
+                        {"role": "user", "content": self.text},
+                    ],
+                    response_model=self.model,
+                )
+                result = self.model.model_validate_json(response)
             else:
                 response = self.llm.call(
                     [
@@ -66,7 +75,7 @@ class Converter(OutputConverter):
                     result = self.model.model_validate_json(response)
                 except ValidationError:
                     # If direct validation fails, attempt to extract valid JSON
-                    result = handle_partial_json(
+                    result = handle_partial_json(  # type: ignore[assignment]
                         result=response,
                         model=self.model,
                         is_json_output=False,
@@ -131,7 +140,7 @@ class Converter(OutputConverter):
                 return self.to_json(current_attempt + 1)
             return ConverterError(f"Failed to convert text into JSON, error: {e}.")

-    def _create_instructor(self) -> InternalInstructor:
+    def _create_instructor(self) -> InternalInstructor[Any]:
         """Create an instructor."""

         return InternalInstructor(
@@ -264,7 +273,7 @@ def convert_with_instructions(
     is_json_output: bool,
     agent: Agent | BaseAgent | None,
     converter_cls: type[Converter] | None = None,
-) -> dict | BaseModel | str:
+) -> dict[str, Any] | BaseModel | str:
     """Convert a result string to a Pydantic model or JSON using instructions.

     Args:
@@ -336,13 +345,14 @@ def get_conversion_instructions(
         model_schema = PydanticSchemaParser(model=model).get_schema()
         instructions += (
             f"\n\nOutput ONLY the valid JSON and nothing else.\n\n"
-            f"The JSON must follow this schema exactly:\n```json\n{model_schema}\n```"
+            f"Use this format exactly:\n```json\n{model_schema}\n```"
         )
     else:
         model_description = generate_model_description(model)
+        schema_json = json.dumps(model_description["json_schema"]["schema"], indent=2)
         instructions += (
             f"\n\nOutput ONLY the valid JSON and nothing else.\n\n"
-            f"The JSON must follow this format exactly:\n{model_description}"
+            f"Use this format exactly:\n```json\n{schema_json}\n```"
         )
     return instructions
@@ -399,57 +409,222 @@ def create_converter(
     if not converter:
         raise Exception("No output converter found or set.")

-    return converter
+    return converter  # type: ignore[no-any-return]
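With this change, `Converter.to_pydantic` no longer routes function-calling LLMs through instructor; it passes `response_model` to `llm.call` and validates the returned JSON itself. A rough sketch of that control flow with a stubbed LLM; `StubLLM`, `Person`, and the stub's canned reply are hypothetical stand-ins, not crewAI APIs:

```python
import json

from pydantic import BaseModel


class Person(BaseModel):
    name: str
    age: int


class StubLLM:
    """Hypothetical stand-in for the real LLM client used by Converter."""

    def supports_function_calling(self) -> bool:
        return True

    def call(self, messages, response_model=None):
        # A real provider would return JSON conforming to response_model;
        # we fake a conforming payload here.
        return json.dumps({"name": "Ada", "age": 36})


llm = StubLLM()
if llm.supports_function_calling():
    response = llm.call(
        messages=[
            {"role": "system", "content": "Convert the text to the schema."},
            {"role": "user", "content": "Ada, 36"},
        ],
        response_model=Person,
    )
    result = Person.model_validate_json(response)
```

In the real converter a `ValidationError` on the non-function-calling branch falls through to `handle_partial_json` instead of raising.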
-def generate_model_description(model: type[BaseModel]) -> str:
-    """Generate a string description of a Pydantic model's fields and their types.
-
-    This function takes a Pydantic model class and returns a string that describes
-    the model's fields and their respective types. The description includes handling
-    of complex types such as `Optional`, `List`, and `Dict`, as well as nested Pydantic
-    models.
-
-    Args:
-        model: A Pydantic model class.
-
-    Returns:
-        A string representation of the model's fields and types.
-    """
-
-    def describe_field(field_type: Any) -> str:
-        """Recursively describe a field's type.
-
-        Args:
-            field_type: The type of the field to describe.
-
-        Returns:
-            A string representation of the field's type.
-        """
-        origin = get_origin(field_type)
-        args = get_args(field_type)
-
-        if origin is Union or (origin is None and len(args) > 0):
-            # Handle both Union and the new '|' syntax
-            non_none_args = [arg for arg in args if arg is not type(None)]
-            if len(non_none_args) == 1:
-                return f"Optional[{describe_field(non_none_args[0])}]"
-            return f"Optional[Union[{', '.join(describe_field(arg) for arg in non_none_args)}]]"
-        if origin is list:
-            return f"List[{describe_field(args[0])}]"
-        if origin is dict:
-            key_type = describe_field(args[0])
-            value_type = describe_field(args[1])
-            return f"Dict[{key_type}, {value_type}]"
-        if isinstance(field_type, type) and issubclass(field_type, BaseModel):
-            return generate_model_description(field_type)
-        if hasattr(field_type, "__name__"):
-            return field_type.__name__
-        return str(field_type)
-
-    fields = model.model_fields
-    field_descriptions = [
-        f'"{name}": {describe_field(field.annotation)}'
-        for name, field in fields.items()
-    ]
-    return "{\n  " + ",\n  ".join(field_descriptions) + "\n}"
+def resolve_refs(schema: dict[str, Any]) -> dict[str, Any]:
+    """Recursively resolve all local $refs in the given JSON Schema using $defs as the source.
+
+    This is needed because Pydantic generates $ref-based schemas that
+    some consumers (e.g. LLMs, tool frameworks) don't handle well.
+
+    Args:
+        schema: JSON Schema dict that may contain "$refs" and "$defs".
+
+    Returns:
+        A new schema dictionary with all local $refs replaced by their definitions.
+    """
+    defs = schema.get("$defs", {})
+    schema_copy = deepcopy(schema)
+
+    def _resolve(node: Any) -> Any:
+        if isinstance(node, dict):
+            ref = node.get("$ref")
+            if isinstance(ref, str) and ref.startswith("#/$defs/"):
+                def_name = ref.replace("#/$defs/", "")
+                if def_name in defs:
+                    return _resolve(deepcopy(defs[def_name]))
+                raise KeyError(f"Definition '{def_name}' not found in $defs.")
+            return {k: _resolve(v) for k, v in node.items()}
+
+        if isinstance(node, list):
+            return [_resolve(i) for i in node]
+
+        return node
+
+    return _resolve(schema_copy)  # type: ignore[no-any-return]
+
+
+def add_key_in_dict_recursively(
+    d: dict[str, Any], key: str, value: Any, criteria: Callable[[dict[str, Any]], bool]
+) -> dict[str, Any]:
+    """Recursively adds a key/value pair to all nested dicts matching `criteria`."""
+    if isinstance(d, dict):
+        if criteria(d) and key not in d:
+            d[key] = value
+        for v in d.values():
+            add_key_in_dict_recursively(v, key, value, criteria)
+    elif isinstance(d, list):
+        for i in d:
+            add_key_in_dict_recursively(i, key, value, criteria)
+    return d
+
+
+def fix_discriminator_mappings(schema: dict[str, Any]) -> dict[str, Any]:
+    """Replace '#/$defs/...' references in discriminator.mapping with just the model name."""
+    output = schema.get("properties", {}).get("output")
+    if not output:
+        return schema
+
+    disc = output.get("discriminator")
+    if not disc or "mapping" not in disc:
+        return schema
+
+    disc["mapping"] = {k: v.split("/")[-1] for k, v in disc["mapping"].items()}
+    return schema
+
+
+def add_const_to_oneof_variants(schema: dict[str, Any]) -> dict[str, Any]:
+    """Add const fields to oneOf variants for discriminated unions.
+
+    The json_schema_to_pydantic library requires each oneOf variant to have
+    a const field for the discriminator property. This function adds those
+    const fields based on the discriminator mapping.
+
+    Args:
+        schema: JSON Schema dict that may contain discriminated unions
+
+    Returns:
+        Modified schema with const fields added to oneOf variants
+    """
+
+    def _process_oneof(node: dict[str, Any]) -> dict[str, Any]:
+        """Process a single node that might contain a oneOf with discriminator."""
+        if not isinstance(node, dict):
+            return node
+
+        if "oneOf" in node and "discriminator" in node:
+            discriminator = node["discriminator"]
+            property_name = discriminator.get("propertyName")
+            mapping = discriminator.get("mapping", {})
+
+            if property_name and mapping:
+                one_of_variants = node.get("oneOf", [])
+
+                for variant in one_of_variants:
+                    if isinstance(variant, dict) and "properties" in variant:
+                        variant_title = variant.get("title", "")
+
+                        matched_disc_value = None
+                        for disc_value, schema_name in mapping.items():
+                            if variant_title == schema_name or variant_title.endswith(
+                                schema_name
+                            ):
+                                matched_disc_value = disc_value
+                                break
+
+                        if matched_disc_value is not None:
+                            props = variant["properties"]
+                            if property_name in props:
+                                props[property_name]["const"] = matched_disc_value
+
+        for key, value in node.items():
+            if isinstance(value, dict):
+                node[key] = _process_oneof(value)
+            elif isinstance(value, list):
+                node[key] = [
+                    _process_oneof(item) if isinstance(item, dict) else item
+                    for item in value
+                ]

+        return node
+
+    return _process_oneof(deepcopy(schema))
+
+
+def convert_oneof_to_anyof(schema: dict[str, Any]) -> dict[str, Any]:
+    """Convert oneOf to anyOf for OpenAI compatibility.
+
+    OpenAI's Structured Outputs support anyOf better than oneOf.
+    This recursively converts all oneOf occurrences to anyOf.
+
+    Args:
+        schema: JSON schema dictionary.
+
+    Returns:
+        Modified schema with anyOf instead of oneOf.
+    """
+    if isinstance(schema, dict):
+        if "oneOf" in schema:
+            schema["anyOf"] = schema.pop("oneOf")
+
+        for value in schema.values():
+            if isinstance(value, dict):
+                convert_oneof_to_anyof(value)
+            elif isinstance(value, list):
+                for item in value:
+                    if isinstance(item, dict):
+                        convert_oneof_to_anyof(item)
+
+    return schema
+
+
+def ensure_all_properties_required(schema: dict[str, Any]) -> dict[str, Any]:
+    """Ensure all properties are in the required array for OpenAI strict mode.
+
+    OpenAI's strict structured outputs require all properties to be listed
+    in the required array. This recursively updates all objects to include
+    all their properties in required.
+
+    Args:
+        schema: JSON schema dictionary.
+
+    Returns:
+        Modified schema with all properties marked as required.
+    """
+    if isinstance(schema, dict):
+        if schema.get("type") == "object" and "properties" in schema:
+            properties = schema["properties"]
+            if properties:
+                schema["required"] = list(properties.keys())
+
+        for value in schema.values():
+            if isinstance(value, dict):
+                ensure_all_properties_required(value)
+            elif isinstance(value, list):
+                for item in value:
+                    if isinstance(item, dict):
+                        ensure_all_properties_required(item)
+
+    return schema
+
+
+def generate_model_description(model: type[BaseModel]) -> dict[str, Any]:
+    """Generate JSON schema description of a Pydantic model.
+
+    This function takes a Pydantic model class and returns its JSON schema,
+    which includes full type information, discriminators, and all metadata.
+    The schema is dereferenced to inline all $ref references for better LLM understanding.
+
+    Args:
+        model: A Pydantic model class.
+
+    Returns:
+        A JSON schema dictionary representation of the model.
+    """
+    json_schema = model.model_json_schema(ref_template="#/$defs/{model}")
+
+    json_schema = add_key_in_dict_recursively(
+        json_schema,
+        key="additionalProperties",
+        value=False,
+        criteria=lambda d: d.get("type") == "object"
+        and "additionalProperties" not in d,
+    )
+
+    json_schema = resolve_refs(json_schema)
+
+    json_schema.pop("$defs", None)
+    json_schema = fix_discriminator_mappings(json_schema)
+    json_schema = convert_oneof_to_anyof(json_schema)
+    json_schema = ensure_all_properties_required(json_schema)
+
+    return {
+        "type": "json_schema",
+        "json_schema": {
+            "name": model.__name__,
+            "strict": True,
+            "schema": json_schema,
+        },
+    }
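The heart of the new schema pipeline is `resolve_refs`, which inlines every local `#/$defs/...` reference so the emitted schema carries no indirection. The same logic, runnable standalone with a small hand-written schema as input:

```python
from copy import deepcopy
from typing import Any


def resolve_refs(schema: dict[str, Any]) -> dict[str, Any]:
    # Same approach as the new converter utility: walk the schema and
    # replace each local "#/$defs/..." reference with a deep copy of
    # its definition, recursing into the substituted subtree.
    defs = schema.get("$defs", {})
    schema_copy = deepcopy(schema)

    def _resolve(node: Any) -> Any:
        if isinstance(node, dict):
            ref = node.get("$ref")
            if isinstance(ref, str) and ref.startswith("#/$defs/"):
                def_name = ref.replace("#/$defs/", "")
                if def_name in defs:
                    return _resolve(deepcopy(defs[def_name]))
                raise KeyError(f"Definition '{def_name}' not found in $defs.")
            return {k: _resolve(v) for k, v in node.items()}
        if isinstance(node, list):
            return [_resolve(i) for i in node]
        return node

    return _resolve(schema_copy)


# A schema of the shape Pydantic emits: a $ref pointing into $defs.
schema = {
    "$defs": {
        "Address": {"type": "object", "properties": {"city": {"type": "string"}}}
    },
    "type": "object",
    "properties": {"address": {"$ref": "#/$defs/Address"}},
}
resolved = resolve_refs(schema)
```

Note that `resolve_refs` leaves `$defs` in place; in the PR, `generate_model_description` pops it afterwards before applying the discriminator, `anyOf`, and required-properties fixups.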
@@ -1,10 +1,10 @@
 from typing import Any

 import pytest
-from crewai.agent import BaseAgent
+from crewai.agents.agent_builder.base_agent import BaseAgent
 from crewai.agents.agent_adapters.base_agent_adapter import BaseAgentAdapter
 from crewai.tools.base_tool import BaseTool
-from crewai.utilities.token_counter_callback import TokenProcess
+from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
 from pydantic import BaseModel


@@ -1342,7 +1342,7 @@ def test_ensure_first_task_allow_crewai_trigger_context_is_false_does_not_inject
     assert "Trigger Payload: Context data" in second_prompt


-@patch("crewai.agent.CrewTrainingHandler")
+@patch("crewai.agent.core.CrewTrainingHandler")
 def test_agent_training_handler(crew_training_handler):
     task_prompt = "What is 1 + 1?"
     agent = Agent(
@@ -1351,7 +1351,7 @@ def test_agent_training_handler(crew_training_handler):
         backstory="test backstory",
         verbose=True,
     )
-    crew_training_handler().load.return_value = {
+    crew_training_handler.return_value.load.return_value = {
         f"{agent.id!s}": {"0": {"human_feedback": "good"}}
     }

@@ -1360,11 +1360,11 @@ def test_agent_training_handler(crew_training_handler):
     assert result == "What is 1 + 1?\n\nYou MUST follow these instructions: \n good"

     crew_training_handler.assert_has_calls(
-        [mock.call(), mock.call("training_data.pkl"), mock.call().load()]
+        [mock.call("training_data.pkl"), mock.call().load()]
     )


-@patch("crewai.agent.CrewTrainingHandler")
+@patch("crewai.agent.core.CrewTrainingHandler")
 def test_agent_use_trained_data(crew_training_handler):
     task_prompt = "What is 1 + 1?"
     agent = Agent(
@@ -1373,7 +1373,7 @@ def test_agent_use_trained_data(crew_training_handler):
         backstory="test backstory",
         verbose=True,
     )
-    crew_training_handler().load.return_value = {
+    crew_training_handler.return_value.load.return_value = {
         agent.role: {
             "suggestions": [
                 "The result of the math operation must be right.",
@@ -1389,7 +1389,7 @@ def test_agent_use_trained_data(crew_training_handler):
         " - The result of the math operation must be right.\n - Result must be better than 1."
     )
     crew_training_handler.assert_has_calls(
-        [mock.call(), mock.call("trained_agents_data.pkl"), mock.call().load()]
+        [mock.call("trained_agents_data.pkl"), mock.call().load()]
     )
111  lib/crewai/tests/agents/test_agent_a2a_wrapping.py  Normal file
@@ -0,0 +1,111 @@
+"""Test A2A wrapper is only applied when a2a is passed to Agent."""
+
+from unittest.mock import patch
+
+import pytest
+
+from crewai import Agent
+from crewai.a2a.config import A2AConfig
+
+
+try:
+    import a2a  # noqa: F401
+
+    A2A_SDK_INSTALLED = True
+except ImportError:
+    A2A_SDK_INSTALLED = False
+
+
+def test_agent_without_a2a_has_no_wrapper():
+    """Verify that agents without a2a don't get the wrapper applied."""
+    agent = Agent(
+        role="test role",
+        goal="test goal",
+        backstory="test backstory",
+    )
+
+    assert agent.a2a is None
+    assert callable(agent.execute_task)
+
+
+@pytest.mark.skipif(
+    True,
+    reason="Requires a2a-sdk to be installed. This test verifies wrapper is applied when a2a is set.",
+)
+def test_agent_with_a2a_has_wrapper():
+    """Verify that agents with a2a get the wrapper applied."""
+    a2a_config = A2AConfig(
+        endpoint="http://test-endpoint.com",
+    )
+
+    agent = Agent(
+        role="test role",
+        goal="test goal",
+        backstory="test backstory",
+        a2a=a2a_config,
+    )
+
+    assert agent.a2a is not None
+    assert agent.a2a.endpoint == "http://test-endpoint.com"
+    assert callable(agent.execute_task)
+
+
+@pytest.mark.skipif(not A2A_SDK_INSTALLED, reason="Requires a2a-sdk to be installed")
+def test_agent_with_a2a_creates_successfully():
+    """Verify that creating an agent with a2a succeeds and applies wrapper."""
+    a2a_config = A2AConfig(
+        endpoint="http://test-endpoint.com",
+    )
+
+    agent = Agent(
+        role="test role",
+        goal="test goal",
+        backstory="test backstory",
+        a2a=a2a_config,
+    )
+
+    assert agent.a2a is not None
+    assert agent.a2a.endpoint == "http://test-endpoint.com/"
+    assert callable(agent.execute_task)
+    assert hasattr(agent.execute_task, "__wrapped__")
+
+
+def test_multiple_agents_without_a2a():
+    """Verify that multiple agents without a2a work correctly."""
+    agent1 = Agent(
+        role="agent 1",
+        goal="test goal",
+        backstory="test backstory",
+    )
+
+    agent2 = Agent(
+        role="agent 2",
+        goal="test goal",
+        backstory="test backstory",
+    )
+
+    assert agent1.a2a is None
+    assert agent2.a2a is None
+    assert callable(agent1.execute_task)
+    assert callable(agent2.execute_task)
+
+
+@pytest.mark.skipif(not A2A_SDK_INSTALLED, reason="Requires a2a-sdk to be installed")
+def test_wrapper_is_applied_differently_per_instance():
+    """Verify that agents with and without a2a have different execute_task methods."""
+    agent_without_a2a = Agent(
+        role="agent without a2a",
+        goal="test goal",
+        backstory="test backstory",
+    )
+
+    a2a_config = A2AConfig(endpoint="http://test-endpoint.com")
+    agent_with_a2a = Agent(
+        role="agent with a2a",
+        goal="test goal",
+        backstory="test backstory",
+        a2a=a2a_config,
+    )
+
+    assert agent_without_a2a.execute_task.__func__ is not agent_with_a2a.execute_task.__func__
+    assert not hasattr(agent_without_a2a.execute_task, "__wrapped__")
+    assert hasattr(agent_with_a2a.execute_task, "__wrapped__")
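The `__wrapped__` assertions in these tests rely on the A2A wrapper using `functools.wraps`, which stores the original function on the wrapper. A minimal sketch of the pattern; the wrapper body and function names here are stand-ins, not the real A2A routing:

```python
import functools


def wrap_execute_task(func):
    """Sketch of an A2A-style wrapper; functools.wraps exposes ``__wrapped__``."""

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # A real wrapper would route the call through the A2A protocol here.
        return func(*args, **kwargs)

    return wrapper


def execute_task(task: str) -> str:
    return f"done: {task}"


# Only the wrapped callable carries the __wrapped__ attribute, which is
# exactly what the tests above use to tell the two cases apart.
wrapped = wrap_execute_task(execute_task)
```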
@@ -103,7 +103,7 @@ def test_lite_agent_created_with_correct_parameters(monkeypatch, verbose):
             super().__init__(**kwargs)

     # Patch the LiteAgent class
-    monkeypatch.setattr("crewai.agent.LiteAgent", MockLiteAgent)
+    monkeypatch.setattr("crewai.agent.core.LiteAgent", MockLiteAgent)

     # Call kickoff to create the LiteAgent
     agent.kickoff("Test query")
@@ -123,8 +123,6 @@ def test_lite_agent_created_with_correct_parameters(monkeypatch, verbose):
     assert created_lite_agent["response_format"] is None

     # Test with a response_format
-    monkeypatch.setattr("crewai.agent.LiteAgent", MockLiteAgent)
-
     class TestResponse(BaseModel):
         test_field: str

@@ -527,6 +525,7 @@ def test_lite_agent_with_custom_llm_and_guardrails():
         available_functions=None,
         from_task=None,
         from_agent=None,
+        response_model=None,
     ) -> str:
         self.call_count += 1
@@ -1285,4 +1285,312 @@ interactions:
     status:
       code: 200
       message: OK
+- request:
+    body: '{"messages":[{"role":"system","content":"Convert all responses into valid
+      JSON output."},{"role":"user","content":"Assess the quality of the task completed
+      based on the description, expected output, and actual results.\n\nTask Description:\nPerform
+      a search on specific topics.\n\nExpected Output:\nA list of relevant URLs based
+      on the search query.\n\nActual Output:\nI now can give a great answer. \n\nFinal
+      Answer: Here are some relevant URLs based on your search query. Please visit
+      the following links for comprehensive information on the specified topics:\n\n1.
+      **Artificial Intelligence Ethics**\n - https://www.aaai.org/Ethics/AIEthics.pdf\n -
+      https://plato.stanford.edu/entries/ethics-ai/\n\n2. **Impact of 5G Technology**\n -
+      https://www.itu.int/en/ITU-T/focusgroups/5g/Documents/FG-5G-DOC-1830.zip\n -
+      https://www.gsma.com/5g/\n\n3. **Quantum Computing Developments**\n - https://www.ibm.com/quantum-computing/\n -
+      https://www.microsoft.com/en-us/quantum\n\n4. **Cybersecurity Trends 2023**\n -
+      https://www.csoonline.com/article/3642552/cybersecurity-trends-2023.html\n -
+      https://www.forbes.com/sites/bernardmarr/2023/01/03/top-5-cybersecurity-trends-in-2023/\n\n5.
+      **Sustainable Technology Innovations**\n - https://www.weforum.org/agenda/2023/01/10-innovations-sustainability/\n -
+      https://www.greenbiz.com/article/13-sustainable-tech-solutions-watch-2023\n\nFeel
+      free to explore these URLs for detailed content on each topic.\n\nPlease provide:\n-
+      Bullet points suggestions to improve future similar tasks\n- A score from 0
+      to 10 evaluating on completion, quality, and overall performance- Entities extracted
+      from the task output, if any, their type, description, and relationships"}],"model":"gpt-4.1-mini"}'
+    headers:
+      accept:
+      - application/json
+      accept-encoding:
+      - gzip, deflate, zstd
+      connection:
+      - keep-alive
+      content-length:
+      - '1741'
+      content-type:
+      - application/json
+      host:
+      - api.openai.com
+      user-agent:
+      - OpenAI/Python 1.109.1
+      x-stainless-arch:
+      - arm64
+      x-stainless-async:
+      - 'false'
+      x-stainless-lang:
+      - python
+      x-stainless-os:
+      - MacOS
+      x-stainless-package-version:
+      - 1.109.1
+      x-stainless-read-timeout:
+      - '600'
+      x-stainless-retry-count:
+      - '0'
+      x-stainless-runtime:
+      - CPython
+      x-stainless-runtime-version:
+      - 3.12.10
+    method: POST
+    uri: https://api.openai.com/v1/chat/completions
+  response:
+    body:
+      string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//pFddbxu3En33rxjss1Yr2Zbq+i1wPmqgwG173Ra4dSBQ5OxqHC65lxxK
|
||||
VQL/94vhrqRVkhskaR4CmUOemTlzdjj8cAFQkCluodAbxbrtbHn3539+uV+8frXbvt789ofST/RU
|
||||
v1r+9PSk/dN9MZETfv2Emg+nptq3nUUm73qzDqgYBXX+w3J+88PN8mqZDa03aOVY03F5PZ2XLTkq
|
||||
L2eXi3J2Xc6vh+MbTxpjcQt/XQAAfMj/S6DO4N/FLcwmh5UWY1QNFrfHTQBF8FZWChUjRVaOi8nJ
|
||||
qL1jdDn2D48O4LHArbJJSfSPgiOLsnxKSpZ/nBzW/5uUJd7L4s1x0W8xKGtXHYbah1Y5jecbtG9b
|
||||
dBxl9bF42CCwiu9gpyJYFRq0exgcogEVAf/uUMvv9R664LdkyDWgIKDFrXIMliKDr+H3336O4B3w
|
||||
BiF2qKkmNMC+Ix2nII584i4xUARtUYUJ+NAoR+/RTEA5A6jiHthD7a31uyn85He4xTDJkL5DJ54j
|
||||
Cm0aI1DvS2lOyh7AVUCIvsXdRjE06DCQzuDaJ2tgjdD6gKC90xQRfADfEnOOFNbIjAFaRM7Qx9wH
|
||||
8MxoTvYpxT5zCUkyn8ILY0iKpKzdT2CLgWrSuZpywJJ7B1tlyRDvxa3BqAN1Yo9DcNQKwQhDYaeP
|
||||
hRTtedKrI6amwZj3r2ofVsNuKabU8q9DgV9sPRlQJtcpOYdapBn2QI6DN0mzD/sRjSxMGQ/Os9DC
|
||||
itxQyyDcSkZri0Cuz568mz4WRzndO22TQVgHwvo8KR8gprZVgTAKdYBKbwRaqD7kmiIGSM5gkA9E
|
||||
Yh6j/yEs7nMtckhrrKV6B+J5g62AoYspoPy57wXAZC0oLZmTBC8COAh2jP+6LyifxBk5kGa7F+mr
|
||||
c21T3XNF8VwcgypGSWmrgpRZ3HYBNcWPWLvzLpLBAJTpk1xaZGUUi893ORUwilGcd2ltj0oKEH0K
|
||||
GkEHNLQmEcoZuYNq3g6qQcfEhHEkkaGxHK37vhW8CCyKJWXh3jFaS40IBF7xhnQ8Bi/HeN/h0D/k
|
||||
8z6zjSTQb8nnlQUlTYGjZKROrmjk6gwnoJXWvUrBjoPvjRvmLt5W1W63myqlaOpDU/WBVi/u+x/T
|
||||
ztQjxPGxzir2UxFc7YOZokkVOhadVpjPloqqnkf597b/8Tz5In/3bad0VsviDTyg3jhvfbP/B8TV
|
||||
dSZMVGRwi9Z3uXPDQI0obvEGGC1KU0/uIBL+vPOvZpQ4Tclxha66f/i9fKhqr1Nsgk9drBZN9dLr
|
||||
lCOpXr8pF2/Kl/+6K+c3V7Ppe+r+D+MC28RWyQ0tEN9K7q9JOU4t3Pm2S/nbfzli5Ps5/lkxRgZl
|
||||
tnJT9vSSkw6c3emDu+8kct3mhAe48ghXfYGmlnTw0dd5mqnQlSkeAL6Vtbv9GkNEnXI3egjoTITL
|
||||
2eXV9xN2l0JAx8A92CfaJAf6zKv0pk9cfjWDOnrvLDnMZEjf0Barq+X15WJxWZ15KvuQZIi7mm64
|
||||
tV/guPZhjTFjRmKM1RqDU8G0KoRKAKrZvJpdVey7clF+1g257OmblfzvFOWKzXfqqUvAvXN+mz/f
|
||||
f6DmEUiejk7w+QuWQcZBPARA/fT4PWXZyT2c2tx2VYPOqCNr81lJpzDKc29f0n0TEN2a3p9Ven51
|
||||
QrBYSkZl9Db14DvFelP26vq4CvkSfHTP45E7YJ2ikrnfJWtHBuWc5z5iSfvtYHk+jvfWN13w6/jR
|
||||
0aImR3GzCqiidzLKR/Zdka3PF3ILyzMinb0Mii74tuMV+3eY3V3Plz3eaNI/WZfX14OVPSt7Msxn
|
||||
y+H5cY64MsiKbBw9RQqt9AbN6ezp3aKSIT8yXIzy/jSez2H3uZNrvgb+ZNAaO9FaJ3OMPs/5tC3g
|
||||
Ux6yPr/tyHMOuIgYtqRxxYRBamGwVsn2j64i7iNju6rJNRi6QP3Lq+5W1/ryZjGvb5aXxcXzxf8A
|
||||
AAD//wMAyfZbKYgOAAA=
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 996fcec5ed410df7-MXP
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Fri, 31 Oct 2025 02:44:05 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=A3QgDjhHusi3NstcRRXwQT2i7SMfD0OcenI1BlEy_v4-1761878645-1.0.1.1-_WmHHgBT0.tfSicqDzwM4WLpV34LuUoxs1uDx7zuOfyTCxX_caKAj3anb.qP2fsys5ruIhcwg6IeTGgXGXgpsuS7jIqGPsOhKxfZw1xwNa0;
|
||||
path=/; expires=Fri, 31-Oct-25 03:14:05 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=C2GzMTMsYw0c9cZ482nxxNogRgIpj2ICJMMTk0RCMY8-1761878645829-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '9162'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '9193'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '149999595'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999595'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 0s
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_6cf164b9001f417ab620f7c0d5ca8e06
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"Convert all responses into valid
|
||||
JSON output."},{"role":"user","content":"Assess the quality of the task completed
|
||||
based on the description, expected output, and actual results.\n\nTask Description:\nPerform
|
||||
a search on specific topics.\n\nExpected Output:\nA list of relevant URLs based
|
||||
on the search query.\n\nActual Output:\nI now can give a great answer. \n\nFinal
|
||||
Answer: Here are some relevant URLs based on your search query. Please visit
|
||||
the following links for comprehensive information on the specified topics:\n\n1.
|
||||
**Artificial Intelligence Ethics**\n - https://www.aaai.org/Ethics/AIEthics.pdf\n -
|
||||
https://plato.stanford.edu/entries/ethics-ai/\n\n2. **Impact of 5G Technology**\n -
|
||||
https://www.itu.int/en/ITU-T/focusgroups/5g/Documents/FG-5G-DOC-1830.zip\n -
|
||||
https://www.gsma.com/5g/\n\n3. **Quantum Computing Developments**\n - https://www.ibm.com/quantum-computing/\n -
|
||||
https://www.microsoft.com/en-us/quantum\n\n4. **Cybersecurity Trends 2023**\n -
|
||||
https://www.csoonline.com/article/3642552/cybersecurity-trends-2023.html\n -
|
||||
https://www.forbes.com/sites/bernardmarr/2023/01/03/top-5-cybersecurity-trends-in-2023/\n\n5.
|
||||
**Sustainable Technology Innovations**\n - https://www.weforum.org/agenda/2023/01/10-innovations-sustainability/\n -
|
||||
https://www.greenbiz.com/article/13-sustainable-tech-solutions-watch-2023\n\nFeel
|
||||
free to explore these URLs for detailed content on each topic.\n\nPlease provide:\n-
|
||||
Bullet points suggestions to improve future similar tasks\n- A score from 0
|
||||
to 10 evaluating on completion, quality, and overall performance- Entities extracted
|
||||
from the task output, if any, their type, description, and relationships"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"$defs":{"Entity":{"properties":{"name":{"description":"The
|
||||
name of the entity.","title":"Name","type":"string"},"type":{"description":"The
|
||||
type of the entity.","title":"Type","type":"string"},"description":{"description":"Description
|
||||
of the entity.","title":"Description","type":"string"},"relationships":{"description":"Relationships
|
||||
of the entity.","items":{"type":"string"},"title":"Relationships","type":"array"}},"required":["name","type","description","relationships"],"title":"Entity","type":"object","additionalProperties":false}},"properties":{"suggestions":{"description":"Suggestions
|
||||
to improve future similar tasks.","items":{"type":"string"},"title":"Suggestions","type":"array"},"quality":{"description":"A
|
||||
score from 0 to 10 evaluating on completion, quality, and overall performance,
|
||||
all taking into account the task description, expected output, and the result
|
||||
of the task.","title":"Quality","type":"number"},"entities":{"description":"Entities
|
||||
extracted from the task output.","items":{"$ref":"#/$defs/Entity"},"title":"Entities","type":"array"}},"required":["suggestions","quality","entities"],"title":"TaskEvaluation","type":"object","additionalProperties":false},"name":"TaskEvaluation","strict":true}},"stream":false}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '3047'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=A3QgDjhHusi3NstcRRXwQT2i7SMfD0OcenI1BlEy_v4-1761878645-1.0.1.1-_WmHHgBT0.tfSicqDzwM4WLpV34LuUoxs1uDx7zuOfyTCxX_caKAj3anb.qP2fsys5ruIhcwg6IeTGgXGXgpsuS7jIqGPsOhKxfZw1xwNa0;
|
||||
_cfuvid=C2GzMTMsYw0c9cZ482nxxNogRgIpj2ICJMMTk0RCMY8-1761878645829-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.109.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-helper-method:
|
||||
- chat.completions.parse
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.109.1
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.10
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//lFbBbhs3EL37K4g9a7WSbDmGboGdBm6LunWcBkhkCCNydndaLsmQQyuK
|
||||
4X8vyJUsqUoB9WIYGr43bx6Hs/N8JkRBqpiJQrbAsnO6vP70+fcPcXJ/dwf1L/6yXsfVz5+Q3ahu
|
||||
PnMxSAi7/Aslb1FDaTunkcmaPiw9AmNiHb+5HF+9ubq8uMyBzirUCdY4Li+G47IjQ+VkNJmWo4ty
|
||||
fLGBt5YkhmImvpwJIcRz/puEGoXfipkYDba/dBgCNFjMXg8JUXir0y8FhECBwfSiN0FpDaPJ2p/n
|
||||
RYhNgyEpD/Ni9mVe3JBHyXotnLdPpFBwi8KjxicwLD7e/xrEiri1kYXzqKkjA34tovGoU80iMDB2
|
||||
aDgItqIDMgxkhNTgidcCjBK1lTEM58VgXrwzIfo+SeYGjyK6hFTAmE+/JmebzwWHkmqSgq0jGYTH
|
||||
rxEDo+oZb43UUaEAsfSEtVAYpCeXShTWixC7Lgm2tUCQbcqaiFvUTsSAPohoFPpkmxKrFlhsDEun
|
||||
8JtDyX2eO9+Aoe8b3VIjeL3uwT1zlpdRpgUjk4ugYEmaeN1T/Ime6rXglAW03jkAUmIItNS9AxpB
|
||||
JSLpUeUfg41eYrLwcTAvvkZInPNidjWYF2iYmDBf5/O8MNDhvJjNi7eek2sEWtwaRq2pwaTqHbck
|
||||
Q9bDa9effUjS80977vUsoibU6ZajWpNp8o1gogAtqHOaJORuysKbSAo1GQwiRO9tNCphYCeF9qT0
|
||||
puQ+SgwtuU1PtswuzKpqtVoNAYCG1jdVr7t6e9v/M3SqzvjtYaeB7TDdY229GqKKFRr2hKHKekMJ
|
||||
VM2Lx5fBvku3nQPJqTum78UDytZYbZv1qfY8JDfqGiX3BoB6SlffP4ftC2GbyFeUGjsEIW3XRbPx
|
||||
TfBrzpPsII5DMlyhqW4fPpYPVX5bjbfRhWraVDdWxpy9+ul9OX1f3txdl+Or89HwO7kDuxJZEzpI
|
||||
sywBj5z5I4Lh2Ilr27nI6Rpv8Am1dZn+/xiUXAhpfNjGJwP6Rx4QvGwFmdxSfZfZWnzdpJXbtKfZ
|
||||
suxyIRt0+YqujoruSHobbJ2neIWmjGELO/Lger1EH1DGPMoePBoVxGQ0OT+1/HuUeZT0yFQ4dc76
|
||||
NKNFdGnkheSAPMhTW589WSP4nO0kC2Sw1qS3l+tKT05qrM4vLybT6aQ6yFD2esrM3XKnj0yqrV9i
|
||||
yEyBGEO1RG/Aqw68rxKsGo2r0XnF1pXT8ofkZDL/cWN9iCF9ISANtt2LE7fG2CfYfJlOc/c3XGVP
|
||||
sUPf5OG0pSMMAqhDJYBF2CbMo7hHmCfy1qRWBp06k1Em2pOcXmFtfezyVIIGjYJXS8ajknZ1lIeZ
|
||||
j3ux8YhmSd8Prmx8vsNpLFNNZbA69pQrYNmWfQ8+vjy+7H/qPdYxQNo3TNR6LwDGWO41pSXjcRN5
|
||||
eV0rtG2ct8vwL2hRk6HQLjxCsCatEIGtK3L05UyIx7y+xIONpHDedo4XbP/GnG6zC+VVZLs27aIX
|
||||
k+kmypZB7wLj0Zvx4AeMC4UMpMPeClRIkC2qHXa3L0FUZPcCZ3t1H+v5EXdfO5nmFPpdQEp0jGrh
|
||||
0jdcHta8O+Yx7ZX/dezV5yy4COifSOKCCX26C4U1RL3ZUMM6MHaLmkyD3nnqN77aLS7k5Go6rq8u
|
||||
J8XZy9k/AAAA//8DAPxRBgAACwAA
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 996fcf012a0a0df7-MXP
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Fri, 31 Oct 2025 02:44:13 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '7076'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '7246'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '149999595'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999595'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 0s
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_2d55304787e14773ba48a58c82053156
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1

@@ -1533,4 +1533,343 @@ interactions:
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"Convert all responses into valid
|
||||
JSON output."},{"role":"user","content":"Assess the quality of the task completed
|
||||
based on the description, expected output, and actual results.\n\nTask Description:\nPerform
|
||||
a search on specific topics.\n\nExpected Output:\nA list of relevant URLs based
|
||||
on the search query.\n\nActual Output:\nI now can give a great answer \nFinal
|
||||
Answer: \n\n1. **Artificial Intelligence in Healthcare**\n - URL: [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7073215/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7073215/)
|
||||
- This article explores various applications of AI in healthcare, including
|
||||
diagnostics and treatment personalization.\n\n2. **Blockchain Technology and
|
||||
Its Impact on Supply Chain**\n - URL: [https://www.researchgate.net/publication/341717309_Blockchain_Technology_in_Supply_Chain_Management](https://www.researchgate.net/publication/341717309_Blockchain_Technology_in_Supply_Chain_Management)
|
||||
- This research paper discusses the potential of blockchain in enhancing supply
|
||||
chain transparency and efficiency.\n\n3. **Cybersecurity Trends for 2023**\n -
|
||||
URL: [https://www.cybersecurity-insiders.com/cybersecurity-trends-2023/](https://www.cybersecurity-insiders.com/cybersecurity-trends-2023/)
|
||||
- This resource outlines the major cybersecurity trends expected to shape the
|
||||
industry in 2023, including emerging threats and mitigation strategies.\n\n4.
|
||||
**The Impact of Remote Work on Productivity**\n - URL: [https://www.mitpressjournals.org/doi/full/10.1162/99608f92.2020.12.01](https://www.mitpressjournals.org/doi/full/10.1162/99608f92.2020.12.01)
|
||||
- This journal article provides insights into how remote work affects productivity,
|
||||
work-life balance, and organizational dynamics.\n\n5. **Quantum Computing: A
|
||||
Beginner''s Guide**\n - URL: [https://www.ibm.com/quantum-computing/learn/what-is-quantum-computing/](https://www.ibm.com/quantum-computing/learn/what-is-quantum-computing/)
|
||||
- This resource serves as an introduction to quantum computing, detailing its
|
||||
principles and potential applications.\n\n6. **Sustainable Energy Technologies
|
||||
for the Future**\n - URL: [https://www.energy.gov/eere/solar/articles/sustainable-energy-technology-future](https://www.energy.gov/eere/solar/articles/sustainable-energy-technology-future)
|
||||
- This article discusses various sustainable energy technologies that could
|
||||
play a crucial role in future energy landscapes.\n\n7. **5G Technology and Its
|
||||
Implications**\n - URL: [https://www.qualcomm.com/invention/5g/what-is-5g](https://www.qualcomm.com/invention/5g/what-is-5g)
|
||||
- This page explains what 5G technology is and explores its potential implications
|
||||
for various sectors including telecommunications and the Internet of Things
|
||||
(IoT). \n\nThese resources have been carefully selected to meet the specified
|
||||
topics and provide comprehensive insights.\n\nPlease provide:\n- Bullet points
|
||||
suggestions to improve future similar tasks\n- A score from 0 to 10 evaluating
|
||||
on completion, quality, and overall performance- Entities extracted from the
|
||||
task output, if any, their type, description, and relationships"}],"model":"gpt-4.1-mini"}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '3168'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- _cfuvid=XLC52GLAWCOeWn2vI379CnSGKjPa7f.qr2vSAQ_R66M-1744489610542-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.109.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.109.1
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.10
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAA6RWWW8bNxB+z68Y6CUvklaSncj2m+McNVA3l9MgrQuF4s7ujs0l1+RQ8trofy9m
|
||||
D0lGm0JJXgSIx3zHDGf24QnAgNLBCQx0oViXlRmdff7j7X3xZfL+4n5N7v1vbz7dvCyyL3fr+4P7
|
||||
28FQbrjlNWrub421KyuDTM6229qjYpSo0/nz6dH86Pl03myULkUj1/KKR4fj6agkS6PZZPZsNDkc
|
||||
TQ+764UjjWFwAn8+AQB4aH6FqE3xbnACk2G/UmIIKsfByeYQwMA7IysDFQIFVpYHw+2mdpbRNty/
|
||||
fv16HZy9sg9XFuBqEGKeYxAZ4UrAZVXWP2DpVgjOg8eq8CogcIFAlr1Lo2bnawgoYTXC03Owbg1a
|
||||
WchphaAgFzdA2bBG/xRUAGJIHQawjkGlKayUiQjsmqguchXleAqlqkE7m8UO0KNK0Y+vBsOe2Wvn
|
||||
S8Xw6cOvQU6KXLRsalgTFy4ylMrfpG5tIdSW1R2sC7S7MBQA7yrUjKkwq4wi24ZzHkKFmrK6OZ+1
|
||||
SP3hXRLnVpuYitKlJ8wgxLJUvpazEo5s3rE3uFJikcsAlS4Ep1cdUHldALuKtGCBNqg8emjSdce7
|
||||
eB93aO3ea6QY0iQGiBjlWbC2cocCZ8jegMfgotcYoHQeWzRT92xuI3rCABV60f1Y7mmaAlOJgVVZ
|
||||
hbYq2BOulIFUMYaGf+MhO6i8W1GKvQ5wrf06eo9W1z0/sq3B5Owu1qeAO3ltiqKhCjaWS/RirfOw
|
||||
jMYgQ+XIctikPlqLWp6Hr7dlkKJ2voFpaS6RGX1TWWpJhrjehf9dGRJJoIzZKEIbom8KsgblEZQW
|
||||
FFoabPhJUS+9u0ERIoH+GnavSzuP8q6OugVxLDZcZPWhB902E1k+3pC5jUr4yeJ8s+hW6JUxiy5T
|
||||
Ul4NhOz/3eNYJibcfdMdmOxaVTZXrgannikjTcrAuWU0hvLmRZOFX1AZLrQSAcPtXa6r7u6lFOCj
|
||||
vRSD9lT1Oq4Gp5UUZ+e9y+D0XCIXm8gQoi6kblNSuXWBpaLFUZb+UUr+K/TBWWXovrNtB8+jkZ67
|
||||
iN60eAVzFU6SZL1ej61e0tiacmypGOdulVSlTpRn0gZD8u7ibD6ZH8ymz5I2Zb133zDqhXH6RhfS
|
||||
Ki5RF9YZl9cN1XMOcF5WSjeF/jFWlanhTE7+uG8fnGlaxnKLShZCG7v9z17ZUKn2RQkPzCSR8nd/
|
||||
jzy2zSRXjGOLnFRx2ScsOTiczqfzg8nxYit+sRW/ILto1S4atYsLZVWOkrS9HD2rl+gD6uiJa7j0
|
||||
aNP2fc4ms4Mft+5CXUsrfRScm+BDwBJ93vZmqa8wbEcOMeWNZAjsFWNO+A0m/2vmI8wR2UAp+iAf
|
||||
CsnjrZaOfAIc7Fd8lwVuSiwDGc2M8Nn5G6m4d+1AplXTKH7UtldZhpqbR+rb+OsufrUTf9isjgxl
|
||||
CEtlpPW0HjqfK9s9UZkJtVUl6bC/eSVx5TGEaxe9VSaMnc+T1FGSRWOS6WQ8nT6fJcfHzydH2fFs
|
||||
PJvMJuPpbDyZ7mXg+6gsxxLOXFlFJpufwCm8wJysRf80wJtI6U+0ufP+q0hqiB3cdnC6hxsCcYDK
|
||||
k9VUGewKr3Iy4aT3qp0+ub9ntCyb6urgRhu4ROalTdaF4hGF0b/39zLtYwysyCqZcq8s+rzetr7+
|
||||
hcgcfx05/syQeElBxxDEOydfUltUbFF5F7X7pOIGPmug+3PBmfidFrY3m/GA6DEJzii/nRI7ZEbt
|
||||
0dGGTD3KOuF7ePnszTfGxn+n/bv8e7tCvyJc/wMAAP//lJdNUsQgEIX3OYXFBTRTk3E8gRdwORZB
|
||||
0mSwSEPxY5WL3N1qEgPRWej6QdPddMH7qHvdc+nWcg4Nnt6+pwgGpJ2mhNu3nJOxL3cByNX/o3dk
|
||||
TihWnkGNHzTLFu+7cRu8btyak13RBecL9n1fs4kHlYIgQMJkTCUIRBuXHMnEvK7KvHGQsaPz9i38
|
||||
2MqURh2u3IMIFol5QrSOZXVuyJsRb6UdQjHn7eQij+ThKOBju/JWZc2KenrqVjXaKEwR2uPDt7KL
|
||||
yAeIQptQMRuTQl5hKHsL4Ik0aFsJTVX373xuxV5q1zj+JXwRpARH1+08DFruay7LPLxnHrq9bOtz
|
||||
TpgFmk0JPGrwdBcDKJHMQqcsfIYIE1caR/D0OmZEVY4f5eHctep8OrBmbr4AAAD//wMA8hX+pLEP
|
||||
AAA=
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 996fce4e98d5edbb-MXP
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Fri, 31 Oct 2025 02:43:48 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=SO9He1GVTuDOBFVy7UPgpAiqXZwuXeli0wC9daB0knQ-1761878628-1.0.1.1-jldZtxPfeAswr22lzzVxN.W_5nEflvghqpz9M59LR9olhJD78hYz4EAWr3TuFJZgs12EnzNPJXbS01lMEU5ycEqvCgqSUlH4VgvAmfcEaAA;
|
||||
path=/; expires=Fri, 31-Oct-25 03:13:48 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=7kM8M9HCcESw20u.sW4KgamO892RwyAOg8qAz9JDbJc-1761878628218-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '10632'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '10664'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '149999237'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999237'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 0s
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_38a6e11e8cf24680943544e359fb348b
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"Convert all responses into valid
|
||||
JSON output."},{"role":"user","content":"Assess the quality of the task completed
|
||||
based on the description, expected output, and actual results.\n\nTask Description:\nPerform
|
||||
a search on specific topics.\n\nExpected Output:\nA list of relevant URLs based
|
||||
on the search query.\n\nActual Output:\nI now can give a great answer \nFinal
|
||||
Answer: \n\n1. **Artificial Intelligence in Healthcare**\n - URL: [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7073215/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7073215/)
|
||||
- This article explores various applications of AI in healthcare, including
|
||||
diagnostics and treatment personalization.\n\n2. **Blockchain Technology and
|
||||
Its Impact on Supply Chain**\n - URL: [https://www.researchgate.net/publication/341717309_Blockchain_Technology_in_Supply_Chain_Management](https://www.researchgate.net/publication/341717309_Blockchain_Technology_in_Supply_Chain_Management)
|
||||
- This research paper discusses the potential of blockchain in enhancing supply
|
||||
chain transparency and efficiency.\n\n3. **Cybersecurity Trends for 2023**\n -
|
||||
URL: [https://www.cybersecurity-insiders.com/cybersecurity-trends-2023/](https://www.cybersecurity-insiders.com/cybersecurity-trends-2023/)
|
||||
- This resource outlines the major cybersecurity trends expected to shape the
|
||||
industry in 2023, including emerging threats and mitigation strategies.\n\n4.
|
||||
**The Impact of Remote Work on Productivity**\n - URL: [https://www.mitpressjournals.org/doi/full/10.1162/99608f92.2020.12.01](https://www.mitpressjournals.org/doi/full/10.1162/99608f92.2020.12.01)
|
||||
- This journal article provides insights into how remote work affects productivity,
|
||||
work-life balance, and organizational dynamics.\n\n5. **Quantum Computing: A
|
||||
Beginner''s Guide**\n - URL: [https://www.ibm.com/quantum-computing/learn/what-is-quantum-computing/](https://www.ibm.com/quantum-computing/learn/what-is-quantum-computing/)
|
||||
- This resource serves as an introduction to quantum computing, detailing its
|
||||
principles and potential applications.\n\n6. **Sustainable Energy Technologies
|
||||
for the Future**\n - URL: [https://www.energy.gov/eere/solar/articles/sustainable-energy-technology-future](https://www.energy.gov/eere/solar/articles/sustainable-energy-technology-future)
|
||||
- This article discusses various sustainable energy technologies that could
|
||||
play a crucial role in future energy landscapes.\n\n7. **5G Technology and Its
|
||||
Implications**\n - URL: [https://www.qualcomm.com/invention/5g/what-is-5g](https://www.qualcomm.com/invention/5g/what-is-5g)
|
||||
- This page explains what 5G technology is and explores its potential implications
|
||||
for various sectors including telecommunications and the Internet of Things
|
||||
(IoT). \n\nThese resources have been carefully selected to meet the specified
|
||||
topics and provide comprehensive insights.\n\nPlease provide:\n- Bullet points
|
||||
suggestions to improve future similar tasks\n- A score from 0 to 10 evaluating
|
||||
on completion, quality, and overall performance- Entities extracted from the
|
||||
task output, if any, their type, description, and relationships"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"$defs":{"Entity":{"properties":{"name":{"description":"The
|
||||
name of the entity.","title":"Name","type":"string"},"type":{"description":"The
|
||||
type of the entity.","title":"Type","type":"string"},"description":{"description":"Description
|
||||
of the entity.","title":"Description","type":"string"},"relationships":{"description":"Relationships
|
||||
of the entity.","items":{"type":"string"},"title":"Relationships","type":"array"}},"required":["name","type","description","relationships"],"title":"Entity","type":"object","additionalProperties":false}},"properties":{"suggestions":{"description":"Suggestions
|
||||
to improve future similar tasks.","items":{"type":"string"},"title":"Suggestions","type":"array"},"quality":{"description":"A
|
||||
score from 0 to 10 evaluating on completion, quality, and overall performance,
|
||||
all taking into account the task description, expected output, and the result
|
||||
of the task.","title":"Quality","type":"number"},"entities":{"description":"Entities
|
||||
extracted from the task output.","items":{"$ref":"#/$defs/Entity"},"title":"Entities","type":"array"}},"required":["suggestions","quality","entities"],"title":"TaskEvaluation","type":"object","additionalProperties":false},"name":"TaskEvaluation","strict":true}},"stream":false}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '4474'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- _cfuvid=7kM8M9HCcESw20u.sW4KgamO892RwyAOg8qAz9JDbJc-1761878628218-0.0.1.1-604800000;
|
||||
__cf_bm=SO9He1GVTuDOBFVy7UPgpAiqXZwuXeli0wC9daB0knQ-1761878628-1.0.1.1-jldZtxPfeAswr22lzzVxN.W_5nEflvghqpz9M59LR9olhJD78hYz4EAWr3TuFJZgs12EnzNPJXbS01lMEU5ycEqvCgqSUlH4VgvAmfcEaAA
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.109.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-helper-method:
|
||||
- chat.completions.parse
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.109.1
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.10
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFZNcyM3Dr37V6D6koukluRv3Tze7MS7ca0zcZLKRi4VxEZ3w2aTHBIt
|
||||
j2bK/32LpCzJ42yVLqpSg3gAHh9AfDsCKLgqZlCoFkV1Tg+v//jv3Yf76ar//C/36ePj19XlxdWf
|
||||
ze2/725v//FLMYgedvlISl69Rsp2TpOwNdmsPKFQRJ2cn00uzi/OppfJ0NmKdHRrnAxPRpNhx4aH
|
||||
0/H0dDg+GU5ONu6tZUWhmMFfRwAA39JvTNRU9KWYwXjw+qWjELChYrY9BFB4q+OXAkPgIGikGOyM
|
||||
yhohk3L/Ni9C3zQUYuZhXsz+mhc3Rum+IkBYeqYaQt916NdgPQTDzpFA7W0HhKqF3z79DGKh4RVB
|
||||
H8gHQAPcdVQxCkFvKvIxgYpNA88sre0FlGb1FD9IS6DZPIXRvBjMi2uNnut1+hwcKa5ZgVjHKsTo
|
||||
T7R+tr4KMSCZ0HvKJwm9asFT6LUEQE/QctPqNXjStEIjgKYCQd+QUJVD/U4pUjT0roqp/vbp5++Q
|
||||
1wkLlaIQeKkpnTZWYOntExlASfGFOwJbQ0WaV+TXOcB/fIOGv25wl2tQKNRYz5RqkZY6ClBbD4SB
|
||||
yYPBFTcY7yHFWZII+cQp0BdHnskoythXVQUdCVYoCKFXLWAA1y81qwyQCor3ZXuvCJSnipesWda5
|
||||
whaNyuStUPcp+/jHU3YI4LxdcZXIehjMi889Rud5MbsczAsywsKU5PJtXhjsaF7M5sWVl3hjjBpu
|
||||
jJDW3MScgQ38RKilVegpFSBrl13u4+WmTxUF5dnF9DOYc6/lhJjf1U3Eabc4wEmmUUUVY2NskCiT
|
||||
dNOx9zoyAo58sAY1f01AmT1POsO27DaSb0VcmJXl8/PzyKglj4zuRobbUWNXpetUiV5YaQrl3e31
+fj8eDo5LefFw8tgv/4P2qon1SIbuCfVGqttkyV2IwFuOodKwBr4tXdOr+E6njyUjjsbWzZSa2tY
7gKx2VxnJCJk4GwRjyY49GRUToLqeDnx70E8eMqN1aDQyJCUewIrj08m55Pz4/HlYlf0Ylf0gs0i
V7lIVS5u0WBD8U7esXa9XpIPpHof5XnvyVS5L6bj6fGh9Nzio/Wg3kBJhorNo4SqqPzQosu6Z1P1
Qfw6EhgDDfb0RB35Jo+nqKQsqo7ltT2D+NjLTOEgIt9kNWQTOM7E+GKUb0054fgWHL8X131LWwnV
8Ik6KwR/WP8UFXXnbdUr4VVq0sMo+8k+g88wzxEG65qUpNbfYg2Saai5JliijlNjkNiwebqlslFD
tTbYsTqMj47FeQrh0fbeoA4j65uyslzWvdblZDyaTM6m5eXl2fiivpyOpuPpeDSZjsaTd5z80qOR
voNr27le2DQzuIIP1LAx5H8I8LHn6uCBc2NkU7k1USufN+DqFXwAFQmyjsrgRBQbxU5TFojbdiju
ja6DGOFll+SwCTnchiw1oTflc4sy5DB8b39Hya99EGSD8bn60ZBv1rtZxJsHJ+r/n730hw/j39Gz
7QOEPXTK6LKP7jSuIz0IcQGJvVWnOK+HNZoqKHQHdk72SkOYyFMZrEa/m8V72Qzz0eE2m/Ww3lT4
HUGnH//PcN5e2aGk/PjFaTR5JNgaTj/uuMjAUSS8B5zJJ03Kdl1vtp9TEvb+IEriSxzdk17YrKLk
rClPm61ITptY88PL/sbnqe4DxrXT9FrvGdAYKzle3DUfNpaX7XapbeO8XYbvXIuaDYd24QmDNXGT
DGJdkawvRwAPaYvt3yymhfO2c7KQuDxFwMuTzRZb7LbnnfXk4nRjFSuod4bJyfTV8gZxkdsz7G3C
hULVUrXz3a3N2Fds9wxHe3W/z+fvsHPtbJpD4HcGpcgJVQsXlzL1tubdMU+P6dX6+2NbnlPCRSC/
YkULYfLxLiqqsdd55y/COgh1i5pNQz7OrLT4125xoqYXp5P64mxaHL0c/Q8AAP//AwBEU5mQBw0A
AA==
headers:
CF-RAY:
- 996fce9299c8edbb-MXP
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 31 Oct 2025 02:43:55 GMT
Server:
- cloudflare
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '6791'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '7015'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-project-tokens:
- '150000000'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-project-tokens:
- '149999237'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999237'
x-ratelimit-reset-project-tokens:
- 0s
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_c2800884308c4f16a5f097a079e5b09c
status:
code: 200
message: OK
version: 1

@@ -1896,4 +1896,787 @@ interactions:
status:
code: 200
message: OK
- request:
body: '{"trace_id": "3f34b857-57cb-4019-85f5-24ea07008c41", "execution_type":
"crew", "user_identifier": null, "execution_context": {"crew_fingerprint": null,
"crew_name": "crew", "flow_name": null, "crewai_version": "1.2.1", "privacy_level":
"standard"}, "execution_metadata": {"expected_duration_estimate": 300, "agent_count":
0, "task_count": 0, "flow_method_count": 0, "execution_started_at": "2025-10-31T01:21:47.051114+00:00"},
"ephemeral_trace_id": "3f34b857-57cb-4019-85f5-24ea07008c41"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate, zstd
Connection:
- keep-alive
Content-Length:
- '488'
Content-Type:
- application/json
User-Agent:
- CrewAI-CLI/1.2.1
X-Crewai-Organization-Id:
- 73c2b193-f579-422c-84c7-76a39a1da77f
X-Crewai-Version:
- 1.2.1
method: POST
uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches
response:
body:
string: '{"id":"55d9a1b4-f578-4046-9ec2-5e0ae5b5c4ac","ephemeral_trace_id":"3f34b857-57cb-4019-85f5-24ea07008c41","execution_type":"crew","crew_name":"crew","flow_name":null,"status":"running","duration_ms":null,"crewai_version":"1.2.1","total_events":0,"execution_context":{"crew_fingerprint":null,"crew_name":"crew","flow_name":null,"crewai_version":"1.2.1","privacy_level":"standard"},"created_at":"2025-10-31T01:21:47.937Z","updated_at":"2025-10-31T01:21:47.937Z","access_code":"TRACE-c44455c874","user_identifier":null}'
headers:
Connection:
- keep-alive
Content-Length:
- '515'
Content-Type:
- application/json; charset=utf-8
Date:
- Fri, 31 Oct 2025 01:21:47 GMT
cache-control:
- no-store
content-security-policy:
- 'default-src ''self'' *.app.crewai.com app.crewai.com; script-src ''self''
''unsafe-inline'' *.app.crewai.com app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts
https://www.gstatic.com https://run.pstmn.io https://apis.google.com https://apis.google.com/js/api.js
https://accounts.google.com https://accounts.google.com/gsi/client https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.min.css.map
https://*.google.com https://docs.google.com https://slides.google.com https://js.hs-scripts.com
https://js.sentry-cdn.com https://browser.sentry-cdn.com https://www.googletagmanager.com
https://js-na1.hs-scripts.com https://js.hubspot.com http://js-na1.hs-scripts.com
https://bat.bing.com https://cdn.amplitude.com https://cdn.segment.com https://d1d3n03t5zntha.cloudfront.net/
https://descriptusercontent.com https://edge.fullstory.com https://googleads.g.doubleclick.net
https://js.hs-analytics.net https://js.hs-banner.com https://js.hsadspixel.net
https://js.hscollectedforms.net https://js.usemessages.com https://snap.licdn.com
https://static.cloudflareinsights.com https://static.reo.dev https://www.google-analytics.com
https://share.descript.com/; style-src ''self'' ''unsafe-inline'' *.app.crewai.com
app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts; img-src ''self'' data:
*.app.crewai.com app.crewai.com https://zeus.tools.crewai.com https://dashboard.tools.crewai.com
https://cdn.jsdelivr.net https://forms.hsforms.com https://track.hubspot.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://www.google.com
https://www.google.com.br; font-src ''self'' data: *.app.crewai.com app.crewai.com;
connect-src ''self'' *.app.crewai.com app.crewai.com https://zeus.tools.crewai.com
https://connect.useparagon.com/ https://zeus.useparagon.com/* https://*.useparagon.com/*
https://run.pstmn.io https://connect.tools.crewai.com/ https://*.sentry.io
https://www.google-analytics.com https://edge.fullstory.com https://rs.fullstory.com
https://api.hubspot.com https://forms.hscollectedforms.net https://api.hubapi.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://google.com/pagead/form-data/16713662509
https://google.com/ccm/form-data/16713662509 https://www.google.com/ccm/collect
https://worker-actionkit.tools.crewai.com https://api.reo.dev; frame-src ''self''
*.app.crewai.com app.crewai.com https://connect.useparagon.com/ https://zeus.tools.crewai.com
https://zeus.useparagon.com/* https://connect.tools.crewai.com/ https://docs.google.com
https://drive.google.com https://slides.google.com https://accounts.google.com
https://*.google.com https://app.hubspot.com/ https://td.doubleclick.net https://www.googletagmanager.com/
https://www.youtube.com https://share.descript.com'
etag:
- W/"85f3e6adb7d3fa5a12f02b27ada57896"
expires:
- '0'
permissions-policy:
- camera=(), microphone=(self), geolocation=()
pragma:
- no-cache
referrer-policy:
- strict-origin-when-cross-origin
strict-transport-security:
- max-age=63072000; includeSubDomains
vary:
- Accept
x-content-type-options:
- nosniff
x-frame-options:
- SAMEORIGIN
x-permitted-cross-domain-policies:
- none
x-request-id:
- 6ec96f4d-08c8-4db0-b7b5-91974fbbd05c
x-runtime:
- '0.243555'
x-xss-protection:
- 1; mode=block
status:
code: 201
message: Created
- request:
body: "{\"messages\":[{\"role\":\"system\",\"content\":\"I'm gonna convert this
raw text into valid JSON.\"},{\"role\":\"user\",\"content\":\"Assess the quality
of the training data based on the llm output, human feedback , and llm output
improved result.\\n\\nIteration: 0\\nInitial Output:\\n- **The Evolution of
AI Agents in Customer Service**\\n  The landscape of customer service has radically
transformed with the advent of AI agents. This article could delve into how
businesses are employing AI to enhance customer experiences, reduce wait times,
and streamline operations. Highlighting real-world case studies from leading
companies, it would explore the measures taken to train these AI agents, the
challenges of human emotional intelligence versus AI, and the future implications
of AI agents in understanding and predicting customer needs. This combination
would provide a comprehensive look at the efficiencies gained, while also addressing
ethical considerations in AI communication.\\n\\n- **AI Agents as Companions:
The Future of Personal Relationships**\\n  As technology blurs the lines between
human and machine interaction, the concept of AI agents as companions is becoming
increasingly popular. This article could explore the psychological and social
implications of forming bonds with AI, looking at current AI companion projects
such as virtual pets and virtual therapists. It would question if these AI relationships
can indeed fulfill emotional needs, and what it means for human connection in
an increasingly digital world. This exploration would not only gather diverse
viewpoints but also touch upon ethical considerations surrounding companionship
and loneliness.\\n\\n- **AI Agents in Creative Arts: Redefining Creativity**\\n
\ The infusion of AI into creative arts poses unique questions about authorship
and originality. This topic could lead to a compelling article that examines
how AI agents are being used to create paintings, compose music, and write literature.
By analyzing specific tools such as OpenAI\u2019s Muse and Google\u2019s DeepDream,
the article would aim to untangle the perceived boundaries of creativity, discussing
whether AI can ever genuinely create or merely mimic human artists. This perspective
would bring readers into the fascinating intersection of technology and art,
raising questions about the future of creative expression.\\n\\n- **Ethical
Considerations of AI Agents in Decision Making**\\n  With AI increasingly taking
on decision-making roles in various sectors\u2014from finance to healthcare
and even law\u2014it\u2019s critical to scrutinize the ethical ramifications
of these technologies. An article focused on this topic could explore how AI
agents analyze vast datasets to inform decisions, the potential biases encoded
in their algorithms, and the accountability measures necessary to monitor their
outputs. It would invite a discussion on what role human oversight should play,
making the case for a balanced integration of AI agents that respects both efficiency
and ethical standards.\\n\\n- **The Financial Impacts of AI Agents on Startups**\\n
\ Startups are typically characterized by limited resources and the need for
agile approaches to market challenges. This article could investigate how AI
agents are reshaping financial strategies in startups, from automating mundane
tasks to offering insights through data analysis. By highlighting successful
startup case studies and quantifying improvements brought about by integrating
AI agents, readers would grasp the potential cost savings and increased efficiency
that AI can offer. It would further explore how these startups leverage their
AI capabilities for competitive advantages, marking a crucial study for entrepreneurs
and investors alike.\\n\\nEach idea presents an opportunity to deeply analyze
the impacts of AI agents across various facets of modern life, emphasizing not
only their technological advancements but also the accompanying ethical and
social implications.\\n\\nHuman Feedback:\\nGreat work!\\n\\nImproved Output:\\n-
**The Evolution of AI Agents in Customer Service**\\n  The landscape of customer
service has radically transformed with the advent of AI agents. This article
could delve into how businesses are employing AI to enhance customer experiences,
reduce wait times, and streamline operations. By featuring in-depth case studies
from leading companies such as Amazon and Zappos, it would explore the measures
taken to train these AI agents while addressing the challenges of human emotional
intelligence versus AI. The article would further investigate the future implications
of AI agents in understanding and predicting customer needs, ensuring a comprehensive
look at the efficiencies gained, and addressing the ethical considerations surrounding
AI communication.\\n\\n- **AI Agents as Companions: The Future of Personal Relationships**\\n
\ As technology blurs the lines between human and machine interaction, the concept
of AI agents as companions is becoming increasingly popular. This article could
explore the psychological and social implications of forming bonds with AI,
spotlighting current AI companion projects like Replika and virtual pets such
as Aibo. It would examine whether these AI relationships can genuinely fulfill
emotional needs and consider what this trend means for human connection in an
increasingly digital world. Through interviews with users and specialists, the
article would offer diverse viewpoints and address the ethical considerations
of companionship and loneliness, ultimately provoking thought on our future
interactions with technology.\\n\\n- **AI Agents in Creative Arts: Redefining
Creativity**\\n  The infusion of AI into creative arts poses unique questions
about authorship and originality. This compelling article could investigate
how AI agents are actively creating paintings, composing music, and even writing
prose, utilizing tools such as OpenAI\u2019s Muse and Google\u2019s DeepDream.
It would challenge readers to consider whether AI can genuinely create or merely
mimic human artistry, delving into the fascinating intersection of technology
and art. The article would include perspectives from artists collaborating with
AI, offering insights on the evolving landscape of creativity and raising important
questions about the future of artistic expression in an AI-enhanced world.\\n\\n-
**Ethical Considerations of AI Agents in Decision Making**\\n  With AI increasingly
taking on decision-making roles in various sectors\u2014from finance and healthcare
to law\u2014scrutinizing the ethical ramifications of these technologies is
imperative. This investigative article could explore how AI agents analyze vast
datasets to inform crucial decisions while uncovering potential biases embedded
in their algorithms. It would articulate necessary accountability measures for
monitoring AI outputs and invite thoughtful discussion on the importance of
human oversight. By spotlighting thought leaders in ethics and technology, the
article would ensure a balanced perspective on integrating AI agents that respects
both efficiency and ethical standards.\\n\\n- **The Financial Impacts of AI
Agents on Startups**\\n  Startups, characterized by limited resources and the
need for agile strategies, are leveraging AI agents to reshape financial decision-making.
This article could investigate the ways in which startups are harnessing AI
to automate administrative tasks, gain insights through predictive analytics,
and improve customer engagement. By presenting in-depth case studies, the article
would quantify the improvements brought about by integrating AI agents, demonstrating
the potential cost savings and increased efficiency they offer. Furthermore,
the article could explore how startups leverage their AI capabilities for competitive
advantages, ultimately serving as a crucial resource for entrepreneurs and investors
looking to navigate the evolving business landscape.\\n\\nThese ideas not only
highlight the transformative impact of AI agents across various domains but
also address the underlying ethical, social, and financial dynamics that are
essential for a thorough understanding of their role in society today.\\n\\n------------------------------------------------\\n\\nIteration:
1\\nInitial Output:\\n- **The Rise of AI Agents in Remote Work Environments**
\ \\nIn the wake of the pandemic, remote work has become a part of our daily
lives. AI agents are stepping into this new workspace, facilitating communication,
task management, and team collaboration like never before. By exploring the
impact of AI agents in virtual settings, the article could delve into how these
intelligent systems optimize productivity, reduce operational costs, and enhance
employee satisfaction. The journey of these agents\u2014from simple chatbots
to sophisticated virtual assistants\u2014presents a fascinating evolution that
reshapes not just how we work but also how we connect.\\n\\n- **Ethical Implications
of AI Decision-Making**  \\nAs AI systems gain prominence in decision-making
processes across various sectors\u2014from healthcare to finance\u2014the ethical
implications become more pressing. An insightful article could dissect the moral
dilemmas posed by AI, such as biases embedded in algorithms, data privacy concerns,
and the accountability of AI's actions. By interrogating the complexities of
trust in AI systems, the piece could highlight the critical need for transparent
frameworks and governance, ensuring that AI serves humanity fairly and justly
in an increasingly automated world.\\n\\n- **AI Agents as Creative Collaborators**
\ \\nAI's foray into creative domains is revolutionizing fields such as art,
music, and writing. An engaging article could showcase how AI agents function
alongside human creators, pushing the boundaries of creativity and innovation.
This exploration could highlight notable collaborations between humans and AI,
celebrating the intriguing ways in which technology enhances artistic expression.
By illustrating the symbiotic relationship between human intuition and AI's
analytical prowess, the article could underscore that creativity is no longer
solely a human domain but a collective endeavor involving intelligent machines.\\n\\n-
**Personalized Learning Experiences through AI Agents**  \\nEducation is undergoing
a transformation with the integration of AI agents into personalized learning
environments. An article could examine how these agents tailor educational content
to individual student's needs, moving away from one-size-fits-all approaches
toward truly customized learning experiences. By interviewing educators and
students, the piece can illustrate the profound impact of AI on student engagement,
academic performance, and emotional support, making a strong case for the necessity
of AI-driven methods in modern education.\\n\\n- **The Future of AI in Mental
Health Support**  \\nAs mental health becomes an increasingly crucial area of
focus, AI agents are emerging as supplementary tools for psychological support.
An article could explore the role of AI in providing real-time assistance, stigma
reduction, and accessibility to mental health resources. By sharing success
stories and evidence from trials, the piece can encapsulate the potential for
AI agents to act as companions that offer kindness and empathy, alongside professional
support. This discussion may ultimately lead to a greater appreciation of AI's
capacity to enhance mental well-being in an era where mental health challenges
are more prevalent than ever.\\n\\nNotes: Each of these ideas addresses a timely
and relevant intersection between technology and human experience, making them
compelling subjects for in-depth articles. The potential for engaging readers
with real-world applications, ethical considerations, and innovative collaborations
ensures that these topics will resonate widely and inspire thoughtful discussion.\\n\\nHuman
Feedback:\\nGreat work!\\n\\nImproved Output:\\n- **The Rise of AI Agents in
Remote Work Environments**  \\nIn the wake of the pandemic, remote work has
become a staple in our daily routines, reshaping how businesses operate globally.
AI agents are uniquely positioned to enhance this dynamic, stepping into virtual
offices to facilitate seamless communication, task management, and collaborative
projects. An article exploring the impact of AI agents in remote settings could
investigate how these intelligent assistants optimize productivity, reduce operational
costs, and improve employee morale by mitigating the isolation often felt in
remote work. The journey of AI\u2014from basic scheduling tools to multifunctional
virtual colleagues\u2014offers a captivating narrative on technology's role
in redefining our professional landscape and fostering human connections in
digital spaces.\\n\\n- **Ethical Implications of AI Decision-Making**  \\nAs
AI systems increasingly influence critical decision-making in various sectors
such as healthcare, finance, and criminal justice, the ethical implications
become a hotbed for discussion. A thorough investigation into the moral dilemmas
surrounding AI could scrutinize issues such as system biases, data privacy,
and the ambiguity of accountability for AI-driven outcomes. This article could
engage with thought leaders and ethicists to illuminate the pressing need for
ethical frameworks and governance models that ensure AI technologies promote
equity and are designed to serve human interests, fostering a society where
trust in AI can flourish.\\n\\n- **AI Agents as Creative Collaborators**  \\nThe
canvas of creativity is expanding with the emergence of AI agents as collaborators
in artistic domains like visual arts, music production, and literary composition.
An engaging article could celebrate the symbiotic relationship between human
creators and AI, illustrating how these intelligent systems are not mere tools
but creative partners that inspire innovation. By profiling groundbreaking collaborations
where AI and human artists co-create, this piece would delve into the possibilities
that arise when technology enhances human expression, evolving the narrative
around creativity as a shared endeavor rather than a solitary pursuit.\\n\\n-
**Personalized Learning Experiences through AI Agents**  \\nEducation is at
a transformational crossroads, with AI agents revolutionizing how students learn
through personalized educational experiences. A compelling article could explore
how these intelligent systems adapt learning materials to meet each student's
unique needs, thereby moving away from traditional, uniform teaching methods.
Interviews with educators and students could reveal powerful testimonials that
attest to the positive effects of AI-driven personalized learning, including
improved engagement and academic success, establishing a strong argument for
the integration of intelligent technologies in classrooms everywhere.\\n\\n-
**The Future of AI in Mental Health Support**  \\nAs society increasingly acknowledges
mental health issues, AI agents are emerging as vital tools in providing mental
health support and resources. An insightful article could examine the potential
of AI as a supplemental resource for individuals seeking emotional assistance,
highlighting innovations in real-time supportive interactions and stigma reduction.
By showcasing case studies of successful AI applications in therapy, the piece
would underscore the transformative role these technologies can play in making
mental health care accessible and efficient, positioning AI not only as a technological
advancement but as a crucial ally in promoting overall well-being in modern
life.\\n\\nNotes: Each idea serves as a portal into crucial conversations about
technology's role in human experiences, categorized within a contemporary context.
The integration of personal stories, expert interviews, and ethical considerations
enriches these topics, ensuring they resonate widely, inspire curiosity, and
engage readers in meaningful dialogue. This comprehensive approach will enhance
the overall quality and relevance of the articles while appealing to a diverse
audience.\\n\\n------------------------------------------------\\n\\nPlease
provide:\\n- Provide a list of clear, actionable instructions derived from the
Human Feedbacks to enhance the Agent's performance. Analyze the differences
between Initial Outputs and Improved Outputs to generate specific action items
for future tasks. Ensure all key and specificpoints from the human feedback
are incorporated into these instructions.\\n- A score from 0 to 10 evaluating
on completion, quality, and overall performance from the improved output to
the initial output based on the human feedback\\n\"}],\"model\":\"gpt-4.1-mini\"}"
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '16697'
content-type:
- application/json
cookie:
- _cfuvid=m.ZI0jUJJ4xpeJ9MnVSjtXyq990VBTzugjakItyO6Cs-1761055572454-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.109.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.109.1
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.10
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//dFZNbxs3EL37Vwz2vApkR4lS34w2hxRNkaItgrQOjBE5uzs1l8OQs7LV
wP+9GK5kyW5yEbAccua9N1/6egbQsG8uoXEDqhtTWPz48dM6/vZn/PD2l5/i+o+/fv2y6q/Gj5v3
y0/vf25aeyGbf8jp4dULJ2MKpCxxNrtMqGRez9evz9+sX66X62oYxVOwZ33SxerF+WLkyIuL5cWr
xXK1OF/tnw/CjkpzCX+fAQB8rb8GNHq6by5h2R5ORioFe2ouHy8BNFmCnTRYChfFqE17NDqJSrFi
/3odAa6bK2fIcRPoXSyap/pZri28XbArb+OA0RGURI47dqw72OyAowuT59iDk+gyKQHdo2lRWiiT
GwALmDgYdxBxtOOUxbQ7fEp+dAoqEkoLKtBnmaIHHQgwK7tAwJ6wwCiZoOM8BosOmTAs7iQHD5XX
vZYX1017gP0uOslJMiqB5y3lQpAoW0DlrYU/MuColLdMdwXuWAeg+0RZSwtToTwD1UGmflAIhL6e
qQB6D56SDoDRg8vkecPB9FGp+Pd6n8K68j5TKUA6sMNgVwp7yliFBxcIc9hVhxxnyp4UObQwcD8E
7gc1yHGypJRHpTeMpQV0TqaoOMMwwXmLbtdWf4aoiGMMwGNCpyAdXL0DJTdECdIzPVHw7X06PIuY
DeGWjNmsG0EmR1HB05aCpJGiljlOpugL6IAKgePtMcBulneYRoyzyEzR0WkqaJRakaG62sP1PFIs
JtApvj8LAcUee459C1vesoeAsZ+wp/m1St4phWCOKwj+MlExDiPeGpfErkCmgGotMKdRxrR/0kkG
hE0W9ICTr1hPAXzIsmVPc9JMpxGVHXQZxxrRBCisEyoVIHTDHLGKwHF2TLmSJMXQnmTC2adkICdR
RnaPJd6Cp1GsV7HWQaZA29qfVXoeKXCk8rwVasKSWDWaoG7AECj21gaeA40jzmVuYbJYt/AsVCdF
Ke/LzYPn4qZSUwEZdSDjjREMj5VmJ/kOs7VFcZmTPs/ZlfcQxfSw5p/GEXMNZFJl4thJdlRLLk05
SZlpYUqEwcr1ZCqUFiiWKc9S0w5s3HmQSa0f9v26SFm2cmt3DsnNNFgxbZ+k8j1yVORo+cYwc1WJ
dEDmpI/8LxXYiA4VxYYidbwv+cAj676F56bClAK7+ajdq1iHZWatjW8lYLieV/Q8hadMvoUkgctA
3gZn2WMpo4gOBTRjLDyHnKnFSE7LfljuiyjsQIdsUpguVVemWsUW83M7b4HfnWSyof/DdXw4XRiZ
uqmgba04hXBiwBhlT9h2xee95eFxOQXpU5ZNefa06ThyGW4yYZFoi6iopKZaH84MkS3B6clea1KW
MemNyi3VcBfr1Xp22By374n59cGqohiOhpfL81X7DZc384QtJ5u0cegG8idOl6s3jyRsGMjRtjw7
4f5/SN9yP/Pn2J94+a77o8E5Skr+JtmucU9pH69lsi37vWuPWlfATbHN5+hGmbLlw1OHU5j/NjRl
V5TGm45jTzllnv87dOlm5S7evDrv3ry+aM4ezv4DAAD//wMAYqeVu0oJAAA=
headers:
CF-RAY:
- 996f566c0d47eda4-MXP
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 31 Oct 2025 01:21:50 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=hk.dUrCne4r20h6mq0lNOA1fyN8qNNN2wDRfIxVmRrg-1761873710-1.0.1.1-DYHNFwh3pzCCnEiUAjr8eQb_Le1gJp6eIBCaTHjkXuGf6lL2exJ6dig0Rv.r1XAEkni.IO8K2OiJiY9S1Pd29Hf1NsRPKkYXAYc8brdr5Zs;
path=/; expires=Fri, 31-Oct-25 01:51:50 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=_ywFekSfflLNT4n4CAra7U6FQ81CokpzhqfwrjWPQmA-1761873710747-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '3361'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '3376'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-project-tokens:
- '150000000'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-project-tokens:
- '149995870'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149995870'
x-ratelimit-reset-project-tokens:
- 1ms
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 1ms
x-request-id:
- req_3c80ffd7d4584d2abbfecd157a8d97bf
status:
code: 200
message: OK
- request:
body: "{\"messages\":[{\"role\":\"system\",\"content\":\"I'm gonna convert this
raw text into valid JSON.\"},{\"role\":\"user\",\"content\":\"Assess the quality
of the training data based on the llm output, human feedback , and llm output
improved result.\\n\\nIteration: 0\\nInitial Output:\\n- **The Evolution of
AI Agents in Customer Service**\\n  The landscape of customer service has radically
transformed with the advent of AI agents. This article could delve into how
businesses are employing AI to enhance customer experiences, reduce wait times,
and streamline operations. Highlighting real-world case studies from leading
companies, it would explore the measures taken to train these AI agents, the
challenges of human emotional intelligence versus AI, and the future implications
of AI agents in understanding and predicting customer needs. This combination
would provide a comprehensive look at the efficiencies gained, while also addressing
ethical considerations in AI communication.\\n\\n- **AI Agents as Companions:
The Future of Personal Relationships**\\n  As technology blurs the lines between
human and machine interaction, the concept of AI agents as companions is becoming
increasingly popular. This article could explore the psychological and social
implications of forming bonds with AI, looking at current AI companion projects
such as virtual pets and virtual therapists. It would question if these AI relationships
can indeed fulfill emotional needs, and what it means for human connection in
an increasingly digital world. This exploration would not only gather diverse
viewpoints but also touch upon ethical considerations surrounding companionship
and loneliness.\\n\\n- **AI Agents in Creative Arts: Redefining Creativity**\\n
\ The infusion of AI into creative arts poses unique questions about authorship
and originality. This topic could lead to a compelling article that examines
how AI agents are being used to create paintings, compose music, and write literature.
By analyzing specific tools such as OpenAI\u2019s Muse and Google\u2019s DeepDream,
the article would aim to untangle the perceived boundaries of creativity, discussing
whether AI can ever genuinely create or merely mimic human artists. This perspective
would bring readers into the fascinating intersection of technology and art,
raising questions about the future of creative expression.\\n\\n- **Ethical
Considerations of AI Agents in Decision Making**\\n  With AI increasingly taking
on decision-making roles in various sectors\u2014from finance to healthcare
and even law\u2014it\u2019s critical to scrutinize the ethical ramifications
of these technologies. An article focused on this topic could explore how AI
agents analyze vast datasets to inform decisions, the potential biases encoded
in their algorithms, and the accountability measures necessary to monitor their
outputs. It would invite a discussion on what role human oversight should play,
making the case for a balanced integration of AI agents that respects both efficiency
and ethical standards.\\n\\n- **The Financial Impacts of AI Agents on Startups**\\n
\ Startups are typically characterized by limited resources and the need for
agile approaches to market challenges. This article could investigate how AI
agents are reshaping financial strategies in startups, from automating mundane
tasks to offering insights through data analysis. By highlighting successful
startup case studies and quantifying improvements brought about by integrating
AI agents, readers would grasp the potential cost savings and increased efficiency
that AI can offer. It would further explore how these startups leverage their
AI capabilities for competitive advantages, marking a crucial study for entrepreneurs
and investors alike.\\n\\nEach idea presents an opportunity to deeply analyze
the impacts of AI agents across various facets of modern life, emphasizing not
only their technological advancements but also the accompanying ethical and
social implications.\\n\\nHuman Feedback:\\nGreat work!\\n\\nImproved Output:\\n-
**The Evolution of AI Agents in Customer Service**\\n  The landscape of customer
service has radically transformed with the advent of AI agents. This article
|
||||
could delve into how businesses are employing AI to enhance customer experiences,
|
||||
reduce wait times, and streamline operations. By featuring in-depth case studies
|
||||
from leading companies such as Amazon and Zappos, it would explore the measures
|
||||
taken to train these AI agents while addressing the challenges of human emotional
|
||||
intelligence versus AI. The article would further investigate the future implications
|
||||
of AI agents in understanding and predicting customer needs, ensuring a comprehensive
|
||||
look at the efficiencies gained, and addressing the ethical considerations surrounding
|
||||
AI communication.\\n\\n- **AI Agents as Companions: The Future of Personal Relationships**\\n
|
||||
\ As technology blurs the lines between human and machine interaction, the concept
|
||||
of AI agents as companions is becoming increasingly popular. This article could
|
||||
explore the psychological and social implications of forming bonds with AI,
|
||||
spotlighting current AI companion projects like Replika and virtual pets such
|
||||
as Aibo. It would examine whether these AI relationships can genuinely fulfill
|
||||
emotional needs and consider what this trend means for human connection in an
|
||||
increasingly digital world. Through interviews with users and specialists, the
|
||||
article would offer diverse viewpoints and address the ethical considerations
|
||||
of companionship and loneliness, ultimately provoking thought on our future
|
||||
interactions with technology.\\n\\n- **AI Agents in Creative Arts: Redefining
|
||||
Creativity**\\n The infusion of AI into creative arts poses unique questions
|
||||
about authorship and originality. This compelling article could investigate
|
||||
how AI agents are actively creating paintings, composing music, and even writing
|
||||
prose, utilizing tools such as OpenAI\u2019s Muse and Google\u2019s DeepDream.
|
||||
It would challenge readers to consider whether AI can genuinely create or merely
|
||||
mimic human artistry, delving into the fascinating intersection of technology
|
||||
and art. The article would include perspectives from artists collaborating with
|
||||
AI, offering insights on the evolving landscape of creativity and raising important
|
||||
questions about the future of artistic expression in an AI-enhanced world.\\n\\n-
|
||||
**Ethical Considerations of AI Agents in Decision Making**\\n With AI increasingly
|
||||
taking on decision-making roles in various sectors\u2014from finance and healthcare
|
||||
to law\u2014scrutinizing the ethical ramifications of these technologies is
|
||||
imperative. This investigative article could explore how AI agents analyze vast
|
||||
datasets to inform crucial decisions while uncovering potential biases embedded
|
||||
in their algorithms. It would articulate necessary accountability measures for
|
||||
monitoring AI outputs and invite thoughtful discussion on the importance of
|
||||
human oversight. By spotlighting thought leaders in ethics and technology, the
|
||||
article would ensure a balanced perspective on integrating AI agents that respects
|
||||
both efficiency and ethical standards.\\n\\n- **The Financial Impacts of AI
|
||||
Agents on Startups**\\n Startups, characterized by limited resources and the
|
||||
need for agile strategies, are leveraging AI agents to reshape financial decision-making.
|
||||
This article could investigate the ways in which startups are harnessing AI
|
||||
to automate administrative tasks, gain insights through predictive analytics,
|
||||
and improve customer engagement. By presenting in-depth case studies, the article
|
||||
would quantify the improvements brought about by integrating AI agents, demonstrating
|
||||
the potential cost savings and increased efficiency they offer. Furthermore,
|
||||
the article could explore how startups leverage their AI capabilities for competitive
|
||||
advantages, ultimately serving as a crucial resource for entrepreneurs and investors
|
||||
looking to navigate the evolving business landscape.\\n\\nThese ideas not only
|
||||
highlight the transformative impact of AI agents across various domains but
|
||||
also address the underlying ethical, social, and financial dynamics that are
|
||||
essential for a thorough understanding of their role in society today.\\n\\n------------------------------------------------\\n\\nIteration:
|
||||
1\\nInitial Output:\\n- **The Rise of AI Agents in Remote Work Environments**
|
||||
\ \\nIn the wake of the pandemic, remote work has become a part of our daily
|
||||
lives. AI agents are stepping into this new workspace, facilitating communication,
|
||||
task management, and team collaboration like never before. By exploring the
|
||||
impact of AI agents in virtual settings, the article could delve into how these
|
||||
intelligent systems optimize productivity, reduce operational costs, and enhance
|
||||
employee satisfaction. The journey of these agents\u2014from simple chatbots
|
||||
to sophisticated virtual assistants\u2014presents a fascinating evolution that
|
||||
reshapes not just how we work but also how we connect.\\n\\n- **Ethical Implications
|
||||
of AI Decision-Making** \\nAs AI systems gain prominence in decision-making
|
||||
processes across various sectors\u2014from healthcare to finance\u2014the ethical
|
||||
implications become more pressing. An insightful article could dissect the moral
|
||||
dilemmas posed by AI, such as biases embedded in algorithms, data privacy concerns,
|
||||
and the accountability of AI's actions. By interrogating the complexities of
|
||||
trust in AI systems, the piece could highlight the critical need for transparent
|
||||
frameworks and governance, ensuring that AI serves humanity fairly and justly
|
||||
in an increasingly automated world.\\n\\n- **AI Agents as Creative Collaborators**
|
||||
\ \\nAI's foray into creative domains is revolutionizing fields such as art,
|
||||
music, and writing. An engaging article could showcase how AI agents function
|
||||
alongside human creators, pushing the boundaries of creativity and innovation.
|
||||
This exploration could highlight notable collaborations between humans and AI,
|
||||
celebrating the intriguing ways in which technology enhances artistic expression.
|
||||
By illustrating the symbiotic relationship between human intuition and AI's
|
||||
analytical prowess, the article could underscore that creativity is no longer
|
||||
solely a human domain but a collective endeavor involving intelligent machines.\\n\\n-
|
||||
**Personalized Learning Experiences through AI Agents** \\nEducation is undergoing
|
||||
a transformation with the integration of AI agents into personalized learning
|
||||
environments. An article could examine how these agents tailor educational content
|
||||
to individual student's needs, moving away from one-size-fits-all approaches
|
||||
toward truly customized learning experiences. By interviewing educators and
|
||||
students, the piece can illustrate the profound impact of AI on student engagement,
|
||||
academic performance, and emotional support, making a strong case for the necessity
|
||||
of AI-driven methods in modern education.\\n\\n- **The Future of AI in Mental
|
||||
Health Support** \\nAs mental health becomes an increasingly crucial area of
|
||||
focus, AI agents are emerging as supplementary tools for psychological support.
|
||||
An article could explore the role of AI in providing real-time assistance, stigma
|
||||
reduction, and accessibility to mental health resources. By sharing success
|
||||
stories and evidence from trials, the piece can encapsulate the potential for
|
||||
AI agents to act as companions that offer kindness and empathy, alongside professional
|
||||
support. This discussion may ultimately lead to a greater appreciation of AI's
|
||||
capacity to enhance mental well-being in an era where mental health challenges
|
||||
are more prevalent than ever.\\n\\nNotes: Each of these ideas addresses a timely
|
||||
and relevant intersection between technology and human experience, making them
|
||||
compelling subjects for in-depth articles. The potential for engaging readers
|
||||
with real-world applications, ethical considerations, and innovative collaborations
|
||||
ensures that these topics will resonate widely and inspire thoughtful discussion.\\n\\nHuman
|
||||
Feedback:\\nGreat work!\\n\\nImproved Output:\\n- **The Rise of AI Agents in
|
||||
Remote Work Environments** \\nIn the wake of the pandemic, remote work has
|
||||
become a staple in our daily routines, reshaping how businesses operate globally.
|
||||
AI agents are uniquely positioned to enhance this dynamic, stepping into virtual
|
||||
offices to facilitate seamless communication, task management, and collaborative
|
||||
projects. An article exploring the impact of AI agents in remote settings could
|
||||
investigate how these intelligent assistants optimize productivity, reduce operational
|
||||
costs, and improve employee morale by mitigating the isolation often felt in
|
||||
remote work. The journey of AI\u2014from basic scheduling tools to multifunctional
|
||||
virtual colleagues\u2014offers a captivating narrative on technology's role
|
||||
in redefining our professional landscape and fostering human connections in
|
||||
digital spaces.\\n\\n- **Ethical Implications of AI Decision-Making** \\nAs
|
||||
AI systems increasingly influence critical decision-making in various sectors
|
||||
such as healthcare, finance, and criminal justice, the ethical implications
|
||||
become a hotbed for discussion. A thorough investigation into the moral dilemmas
|
||||
surrounding AI could scrutinize issues such as system biases, data privacy,
|
||||
and the ambiguity of accountability for AI-driven outcomes. This article could
|
||||
engage with thought leaders and ethicists to illuminate the pressing need for
|
||||
ethical frameworks and governance models that ensure AI technologies promote
|
||||
equity and are designed to serve human interests, fostering a society where
|
||||
trust in AI can flourish.\\n\\n- **AI Agents as Creative Collaborators** \\nThe
|
||||
canvas of creativity is expanding with the emergence of AI agents as collaborators
|
||||
in artistic domains like visual arts, music production, and literary composition.
|
||||
An engaging article could celebrate the symbiotic relationship between human
|
||||
creators and AI, illustrating how these intelligent systems are not mere tools
|
||||
but creative partners that inspire innovation. By profiling groundbreaking collaborations
|
||||
where AI and human artists co-create, this piece would delve into the possibilities
|
||||
that arise when technology enhances human expression, evolving the narrative
|
||||
around creativity as a shared endeavor rather than a solitary pursuit.\\n\\n-
|
||||
**Personalized Learning Experiences through AI Agents** \\nEducation is at
|
||||
a transformational crossroads, with AI agents revolutionizing how students learn
|
||||
through personalized educational experiences. A compelling article could explore
|
||||
how these intelligent systems adapt learning materials to meet each student's
|
||||
unique needs, thereby moving away from traditional, uniform teaching methods.
|
||||
Interviews with educators and students could reveal powerful testimonials that
|
||||
attest to the positive effects of AI-driven personalized learning, including
|
||||
improved engagement and academic success, establishing a strong argument for
|
||||
the integration of intelligent technologies in classrooms everywhere.\\n\\n-
|
||||
**The Future of AI in Mental Health Support** \\nAs society increasingly acknowledges
|
||||
mental health issues, AI agents are emerging as vital tools in providing mental
|
||||
health support and resources. An insightful article could examine the potential
|
||||
of AI as a supplemental resource for individuals seeking emotional assistance,
|
||||
highlighting innovations in real-time supportive interactions and stigma reduction.
|
||||
By showcasing case studies of successful AI applications in therapy, the piece
|
||||
would underscore the transformative role these technologies can play in making
|
||||
mental health care accessible and efficient, positioning AI not only as a technological
|
||||
advancement but as a crucial ally in promoting overall well-being in modern
|
||||
life.\\n\\nNotes: Each idea serves as a portal into crucial conversations about
|
||||
technology's role in human experiences, categorized within a contemporary context.
|
||||
The integration of personal stories, expert interviews, and ethical considerations
|
||||
enriches these topics, ensuring they resonate widely, inspire curiosity, and
|
||||
engage readers in meaningful dialogue. This comprehensive approach will enhance
|
||||
the overall quality and relevance of the articles while appealing to a diverse
|
||||
audience.\\n\\n------------------------------------------------\\n\\nPlease
|
||||
provide:\\n- Provide a list of clear, actionable instructions derived from the
|
||||
Human Feedbacks to enhance the Agent's performance. Analyze the differences
|
||||
between Initial Outputs and Improved Outputs to generate specific action items
|
||||
for future tasks. Ensure all key and specificpoints from the human feedback
|
||||
are incorporated into these instructions.\\n- A score from 0 to 10 evaluating
|
||||
on completion, quality, and overall performance from the improved output to
|
||||
the initial output based on the human feedback\\n\"}],\"model\":\"gpt-4.1-mini\",\"response_format\":{\"type\":\"json_schema\",\"json_schema\":{\"schema\":{\"properties\":{\"suggestions\":{\"description\":\"List
|
||||
of clear, actionable instructions derived from the Human Feedbacks to enhance
|
||||
the Agent's performance. Analyze the differences between Initial Outputs and
|
||||
Improved Outputs to generate specific action items for future tasks. Ensure
|
||||
all key and specific points from the human feedback are incorporated into these
|
||||
instructions.\",\"items\":{\"type\":\"string\"},\"title\":\"Suggestions\",\"type\":\"array\"},\"quality\":{\"description\":\"A
|
||||
score from 0 to 10 evaluating on completion, quality, and overall performance
|
||||
from the improved output to the initial output based on the human feedback.\",\"title\":\"Quality\",\"type\":\"number\"},\"final_summary\":{\"description\":\"A
|
||||
step by step action items to improve the next Agent based on the human-feedback
|
||||
and improved output.\",\"title\":\"Final Summary\",\"type\":\"string\"}},\"required\":[\"suggestions\",\"quality\",\"final_summary\"],\"title\":\"TrainingTaskEvaluation\",\"type\":\"object\",\"additionalProperties\":false},\"name\":\"TrainingTaskEvaluation\",\"strict\":true}},\"stream\":false}"
|
||||
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '17792'
content-type:
- application/json
cookie:
- _cfuvid=_ywFekSfflLNT4n4CAra7U6FQ81CokpzhqfwrjWPQmA-1761873710747-0.0.1.1-604800000;
__cf_bm=hk.dUrCne4r20h6mq0lNOA1fyN8qNNN2wDRfIxVmRrg-1761873710-1.0.1.1-DYHNFwh3pzCCnEiUAjr8eQb_Le1gJp6eIBCaTHjkXuGf6lL2exJ6dig0Rv.r1XAEkni.IO8K2OiJiY9S1Pd29Hf1NsRPKkYXAYc8brdr5Zs
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.109.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-helper-method:
- chat.completions.parse
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.109.1
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.10
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//nFZNb9tGEL37Vwx46YUSLNmJXd+CIgFyKAIUDYIiMoTR7pCcZrm72VnK
VgP/92KWlEQnDlD0Ypmcna83783y2wVAxba6g8p0mE0f3eK3T3/dJGc+pBV/5NSu3339Y/X+9092
f9W8G6paPcLubzL56LU0oY+OMgc/mk0izKRRVzevV7c3VzerVTH0wZJTtzbmxfVytejZ82J9uX61
uLxerK4n9y6wIanu4PMFAMC38lcL9ZYeqzu4rI9vehLBlqq70yGAKgWnbyoUYcnoc1WfjSb4TL7U
/m1TydC2JFq5bKq7z5vqvTdusAQSyXDDBugRtTkB9BYSoVs8hOQsGBQCyYNlEsgByHfoDUHuCEwi
yzt2nA/FzVLMHYSmGDFlNo6ALaEsN1VdkoYUQ8JM0JMv5ehxHzLuHIHii55JaogpKPRSQ0iQQ3AC
iRzt0edSBZoOcohs9CmmsGer/t4kyuWfTI95zPrGWrC8pyQEe6aHGNhnARlMByjAPlPS9wIPnDug
x0hJEw9CacrfhaHtMjhCS2mCIbGW0BF4TAkz76lg4MhbwCF3IXE+nCpIJAKUOzboapBgWH/Vgfqg
QKDTqoUtaTAFhh6jY8PZHTRhosaRyYCwQ6cTsMVbIUvUkZexAHQH4Qnut7NRWRKTOI6RdwfgMn/2
LXCvaaacTUjQDHlI6rEnF6LOaSSFholBScXo1A1NhuBLM3Rs9aMQ9CERkG+x1fjquuc9W3Do2wFb
LQgz2IQPUoKmAqsOYj7YXwT8oPWP2bmPIWV9PnGpENiHTKJDkqHvMRWWaviJAgM6/ofGsEJZ2VYI
CewhU+oL/yZmmXGC5x6nwkorpEAcUx8xyihfpIYmmEEUCnK4U3oXXD2XIGHIjj3J2F/BxlJGdtME
xybtSS8xhRgE3URHpRklmOQ8QuGF2y7rlO/rTfVVe8yHTXV3W2+qhj267QiGvttUfwYFL4U9Hct+
02qoMOQ4KNHJS2mmo5EWwsErLD/uhqNm/q9Q2xSGiUmWxQxSUrF/tnDGwckS3gytgj4NrKBx1HGk
pNWp6o50njbLCP1J0iHB16FQpEmhHzU9ym6UuVbFujV0zb24z5bw9ixE/G9KxlKczHfTUbX1d5Kd
BruEo1rPKgmwe0lMJynvyxmDsQz1rCJKJHkJzxVSRNFx2znlzneUR13v+nCm+lkGS/iwp4TOzYle
1t+c5vMhHemtGwz9uFrqZ4tmBOyBnFsUSpCdr+YdCllN0w09emiI7A7Nl+WmeppfcYmaQVDvWT84
NzOg1xulJNLL9X6yPJ2uUxfamMJOvnNV8bB0W2VD8Hp1Sg6xKtanC4D7cm0Pz27iKqbQx7zN4QuV
dOtf1zdjwOr8vTAz376erDlkdGfD1Xp1Vb8QcjvCKbO7vzJoOrKzoJfXt6cmdJjhbLu8mPX+Y0kv
hR/7Z9/Oovw0/NlgDMVMdhtVRuZ52+djiXRf/OzYCetScCWqY0PbzJR0HpYaHNz4oVPJQTL124Z9
SykmHr92mri9NuvbV6vm9vW6uni6+BcAAP//AwBX4Bvy/AkAAA==
headers:
CF-RAY:
- 996f56845b23eda4-MXP
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 31 Oct 2025 01:21:54 GMT
Server:
- cloudflare
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '3815'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '3846'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-project-tokens:
- '150000000'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-project-tokens:
- '149995870'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149995867'
x-ratelimit-reset-project-tokens:
- 1ms
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 1ms
x-request-id:
- req_9a753d9290c642af94be4e13bec9b6d8
status:
code: 200
message: OK
version: 1

||||
@@ -1,35 +1,137 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Scorer. You''re an
expert scorer, specialized in scoring titles.\nYour personal goal is: Score
the title\nTo give my best complete final answer to the task use the exact following
body: '{"trace_id": "4ced1ade-0d34-4d28-a47d-61011b1f3582", "execution_type":
"crew", "user_identifier": null, "execution_context": {"crew_fingerprint": null,
"crew_name": "crew", "flow_name": null, "crewai_version": "1.2.1", "privacy_level":
"standard"}, "execution_metadata": {"expected_duration_estimate": 300, "agent_count":
0, "task_count": 0, "flow_method_count": 0, "execution_started_at": "2025-10-31T07:25:08.937105+00:00"},
"ephemeral_trace_id": "4ced1ade-0d34-4d28-a47d-61011b1f3582"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate, zstd
Connection:
- keep-alive
Content-Length:
- '488'
Content-Type:
- application/json
User-Agent:
- CrewAI-CLI/1.2.1
X-Crewai-Organization-Id:
- 73c2b193-f579-422c-84c7-76a39a1da77f
X-Crewai-Version:
- 1.2.1
method: POST
uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches
response:
body:
string: '{"id":"8657c7bd-19a7-4873-b561-7cfc910b1b81","ephemeral_trace_id":"4ced1ade-0d34-4d28-a47d-61011b1f3582","execution_type":"crew","crew_name":"crew","flow_name":null,"status":"running","duration_ms":null,"crewai_version":"1.2.1","total_events":0,"execution_context":{"crew_fingerprint":null,"crew_name":"crew","flow_name":null,"crewai_version":"1.2.1","privacy_level":"standard"},"created_at":"2025-10-31T07:25:09.569Z","updated_at":"2025-10-31T07:25:09.569Z","access_code":"TRACE-7f02e40cd9","user_identifier":null}'
headers:
Connection:
- keep-alive
Content-Length:
- '515'
Content-Type:
- application/json; charset=utf-8
Date:
- Fri, 31 Oct 2025 07:25:09 GMT
cache-control:
- no-store
content-security-policy:
- 'default-src ''self'' *.app.crewai.com app.crewai.com; script-src ''self''
''unsafe-inline'' *.app.crewai.com app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts
https://www.gstatic.com https://run.pstmn.io https://apis.google.com https://apis.google.com/js/api.js
https://accounts.google.com https://accounts.google.com/gsi/client https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.min.css.map
https://*.google.com https://docs.google.com https://slides.google.com https://js.hs-scripts.com
https://js.sentry-cdn.com https://browser.sentry-cdn.com https://www.googletagmanager.com
https://js-na1.hs-scripts.com https://js.hubspot.com http://js-na1.hs-scripts.com
https://bat.bing.com https://cdn.amplitude.com https://cdn.segment.com https://d1d3n03t5zntha.cloudfront.net/
https://descriptusercontent.com https://edge.fullstory.com https://googleads.g.doubleclick.net
https://js.hs-analytics.net https://js.hs-banner.com https://js.hsadspixel.net
https://js.hscollectedforms.net https://js.usemessages.com https://snap.licdn.com
https://static.cloudflareinsights.com https://static.reo.dev https://www.google-analytics.com
https://share.descript.com/; style-src ''self'' ''unsafe-inline'' *.app.crewai.com
app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts; img-src ''self'' data:
*.app.crewai.com app.crewai.com https://zeus.tools.crewai.com https://dashboard.tools.crewai.com
https://cdn.jsdelivr.net https://forms.hsforms.com https://track.hubspot.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://www.google.com
https://www.google.com.br; font-src ''self'' data: *.app.crewai.com app.crewai.com;
connect-src ''self'' *.app.crewai.com app.crewai.com https://zeus.tools.crewai.com
https://connect.useparagon.com/ https://zeus.useparagon.com/* https://*.useparagon.com/*
https://run.pstmn.io https://connect.tools.crewai.com/ https://*.sentry.io
https://www.google-analytics.com https://edge.fullstory.com https://rs.fullstory.com
https://api.hubspot.com https://forms.hscollectedforms.net https://api.hubapi.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://google.com/pagead/form-data/16713662509
https://google.com/ccm/form-data/16713662509 https://www.google.com/ccm/collect
https://worker-actionkit.tools.crewai.com https://api.reo.dev; frame-src ''self''
*.app.crewai.com app.crewai.com https://connect.useparagon.com/ https://zeus.tools.crewai.com
https://zeus.useparagon.com/* https://connect.tools.crewai.com/ https://docs.google.com
https://drive.google.com https://slides.google.com https://accounts.google.com
https://*.google.com https://app.hubspot.com/ https://td.doubleclick.net https://www.googletagmanager.com/
https://www.youtube.com https://share.descript.com'
etag:
- W/"684f9dff2cfefa325ac69ea38dba2309"
expires:
- '0'
permissions-policy:
- camera=(), microphone=(self), geolocation=()
pragma:
- no-cache
referrer-policy:
- strict-origin-when-cross-origin
strict-transport-security:
- max-age=63072000; includeSubDomains
vary:
- Accept
x-content-type-options:
- nosniff
x-frame-options:
- SAMEORIGIN
x-permitted-cross-domain-policies:
- none
x-request-id:
- 630cda16-c991-4ed0-b534-16c03eb2ffca
x-runtime:
- '0.072382'
x-xss-protection:
- 1; mode=block
status:
code: 201
message: Created
- request:
body: '{"messages":[{"role":"system","content":"You are Scorer. You''re an expert
scorer, specialized in scoring titles.\nYour personal goal is: Score the title\nTo
give my best complete final answer to the task respond using the exact following
format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
answer must be the great and the most complete as possible, it must be outcome
described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
"content": "\nCurrent Task: Give me an integer score between 1-5 for the following
title: ''The impact of AI in the future of work''\n\nThis is the expect criteria
for your final answer: The score of the title.\nyou MUST return the actual complete
content as the final answer, not a summary.\n\nBegin! This is VERY important
to you, use the tools available and give your best Final Answer, your job depends
on it!\n\nThought:"}], "model": "gpt-4o"}'
described.\n\nI MUST use these formats, my job depends on it!"},{"role":"user","content":"\nCurrent
Task: Give me an integer score between 1-5 for the following title: ''The impact
of AI in the future of work''\n\nThis is the expected criteria for your final
answer: The score of the title.\nyou MUST return the actual complete content
as the final answer, not a summary.\nEnsure your final answer contains only
the content in the following format: {\n \"properties\": {\n \"score\":
{\n \"title\": \"Score\",\n \"type\": \"integer\"\n }\n },\n \"required\":
[\n \"score\"\n ],\n \"title\": \"ScoreOutput\",\n \"type\": \"object\",\n \"additionalProperties\":
false\n}\n\nEnsure the final output does not include any code block markers
like ```json or ```python.\n\nBegin! This is VERY important to you, use the
tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],"model":"gpt-4.1-mini"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '915'
- '1340'
content-type:
- application/json
cookie:
- __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.47.0
- OpenAI/Python 1.109.1
x-stainless-arch:
- arm64
x-stainless-async:
@@ -39,29 +141,32 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.47.0
x-stainless-raw-response:
- 'true'
- 1.109.1
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
- 3.12.10
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AB7g417Go7DkGG2YvjkT783QSBFRT\",\n \"object\":
\"chat.completion\",\n \"created\": 1727214484,\n \"model\": \"gpt-4o-2024-05-13\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer\\nFinal
Answer: 4\",\n \"refusal\": null\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
186,\n \"completion_tokens\": 13,\n \"total_tokens\": 199,\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0\n }\n },\n \"system_fingerprint\": \"fp_52a7f40b0b\"\n}\n"
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//jFLBbpwwEL3zFaM5LxFQSHa55pReWkXRVlWJkGMP4AZs1zZJq9X+e2XY
LGzaSr34MG/e83szc4gAUAosAXnHPB9MH99+Eer++f7z14c9FR+b/acbdptysW9+3AmBm8DQT9+J
+zfWFdeD6clLrWaYW2Kegmp6c51ud0WR7CZg0IL6QGuNj/OrNB6kknGWZEWc5HGan+idlpwclvAt
AgA4TG8wqgT9xBKSzVtlIOdYS1iemwDQ6j5UkDknnWfK42YBuVae1OT9odNj2/kS7kDpV+BMQStf
CBi0IQAw5V7JVupQKYAKHdeWKiwhr9RxLWmpGR0LudTY9yuAKaU9C3OZwjyekOPZfq9bY/WTe0fF
RirputoSc1oFq85rgxN6jAAepzGNF8nRWD0YX3v9TNN32Tad9XBZz4KmuxPotWf9Uv+QnIZ7qVcL
8kz2bjVo5Ix3JBbqshU2CqlXQLRK/aebv2nPyaVq/0d+ATgn40nUxpKQ/DLx0mYpXO+/2s5Tngyj
I/siOdVekg2bENSwsZ9PCt0v52noG6lassbK+a4aU+c82xZps73OMDpGvwEAAP//AwDHX8XpZgMA
AA==
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c85fa001c351cf3-GRU
- 99716ab4788dea35-FCO
Connection:
- keep-alive
Content-Encoding:
@@ -69,139 +174,340 @@ interactions:
Content-Type:
- application/json
Date:
- Tue, 24 Sep 2024 21:48:05 GMT
- Fri, 31 Oct 2025 07:25:10 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=S.q8_0ONHDHBHNOJdMZHwJDue9lKhWQHpKuP2lsspx4-1761895510-1.0.1.1-QUDxMm9SVfRT2R188bLcvxUd6SXIBmZgnz3D35UF95nNg8zX5Gzdg2OmU.uo29rqaGatjupcLPNMyhfOqeoyhNQ28Zz1ESSQLq0y70x3IvM;
path=/; expires=Fri, 31-Oct-25 07:55:10 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=TvP4GePeQO8E5c_xWNGzJb84f940MFRG_lZ_0hWAc5M-1761895510432-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '187'
- '569'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '587'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-project-tokens:
- '150000000'
x-ratelimit-limit-requests:
- '10000'
- '30000'
x-ratelimit-limit-tokens:
- '30000000'
- '150000000'
x-ratelimit-remaining-project-tokens:
- '149999700'
x-ratelimit-remaining-requests:
- '9999'
- '29999'
x-ratelimit-remaining-tokens:
|
||||
- '29999781'
|
||||
- '149999700'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 0s
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_d5f223e0442a0df22717b3acabffaea0
|
||||
http_version: HTTP/1.1
|
||||
status_code: 200
|
||||
- req_393e029e99d54ab0b4e7c69c5cba099f
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
body: '{"messages": [{"role": "user", "content": "4"}, {"role": "system", "content":
"I''m gonna convert this raw text into valid JSON.\n\nThe json should have the
following structure, with the following keys:\n{\n score: int\n}"}], "model":
"gpt-4o", "tool_choice": {"type": "function", "function": {"name": "ScoreOutput"}},
"tools": [{"type": "function", "function": {"name": "ScoreOutput", "description":
"Correctly extracted `ScoreOutput` with all the required parameters with correct
types", "parameters": {"properties": {"score": {"title": "Score", "type": "integer"}},
"required": ["score"], "type": "object"}}}]}'
body: '{"events": [{"event_id": "ea607d3f-c9ff-4aa8-babb-a84eb6d16663", "timestamp":
"2025-10-31T07:25:08.935640+00:00", "type": "crew_kickoff_started", "event_data":
{"timestamp": "2025-10-31T07:25:08.935640+00:00", "type": "crew_kickoff_started",
"source_fingerprint": null, "source_type": null, "fingerprint_metadata": null,
"task_id": null, "task_name": null, "agent_id": null, "agent_role": null, "crew_name":
"crew", "crew": null, "inputs": null}}, {"event_id": "8e792d78-fe9c-4601-a7b4-7b105fa8fb40",
"timestamp": "2025-10-31T07:25:08.937816+00:00", "type": "task_started", "event_data":
{"task_description": "Give me an integer score between 1-5 for the following
title: ''The impact of AI in the future of work''", "expected_output": "The
score of the title.", "task_name": "Give me an integer score between 1-5 for
the following title: ''The impact of AI in the future of work''", "context":
"", "agent_role": "Scorer", "task_id": "677cf2dd-96a9-4eac-9140-0ecaba9609f7"}},
{"event_id": "a2fcdfee-a395-4dc8-99b8-ba3d8d843a70", "timestamp": "2025-10-31T07:25:08.938816+00:00",
"type": "agent_execution_started", "event_data": {"agent_role": "Scorer", "agent_goal":
"Score the title", "agent_backstory": "You''re an expert scorer, specialized
in scoring titles."}}, {"event_id": "b0ba7582-6ea0-4b66-a64a-0a1e38d57502",
"timestamp": "2025-10-31T07:25:08.938996+00:00", "type": "llm_call_started",
"event_data": {"timestamp": "2025-10-31T07:25:08.938996+00:00", "type": "llm_call_started",
"source_fingerprint": null, "source_type": null, "fingerprint_metadata": null,
"task_id": "677cf2dd-96a9-4eac-9140-0ecaba9609f7", "task_name": "Give me an
integer score between 1-5 for the following title: ''The impact of AI in the
future of work''", "agent_id": "8d6e3481-36fa-4fca-9665-977e6d76a969", "agent_role":
"Scorer", "from_task": null, "from_agent": null, "model": "gpt-4.1-mini", "messages":
[{"role": "system", "content": "You are Scorer. You''re an expert scorer, specialized
in scoring titles.\nYour personal goal is: Score the title\nTo give my best
complete final answer to the task respond using the exact following format:\n\nThought:
I now can give a great answer\nFinal Answer: Your final answer must be the great
and the most complete as possible, it must be outcome described.\n\nI MUST use
these formats, my job depends on it!"}, {"role": "user", "content": "\nCurrent
Task: Give me an integer score between 1-5 for the following title: ''The impact
of AI in the future of work''\n\nThis is the expected criteria for your final
answer: The score of the title.\nyou MUST return the actual complete content
as the final answer, not a summary.\nEnsure your final answer contains only
the content in the following format: {\n \"properties\": {\n \"score\":
{\n \"title\": \"Score\",\n \"type\": \"integer\"\n }\n },\n \"required\":
[\n \"score\"\n ],\n \"title\": \"ScoreOutput\",\n \"type\": \"object\",\n \"additionalProperties\":
false\n}\n\nEnsure the final output does not include any code block markers
like ```json or ```python.\n\nBegin! This is VERY important to you, use the
tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],
"tools": null, "callbacks": ["<crewai.utilities.token_counter_callback.TokenCalcHandler
object at 0x11da36000>"], "available_functions": null}}, {"event_id": "ab6b168b-d954-494f-ae58-d9ef7a1941dc",
"timestamp": "2025-10-31T07:25:10.466669+00:00", "type": "llm_call_completed",
"event_data": {"timestamp": "2025-10-31T07:25:10.466669+00:00", "type": "llm_call_completed",
"source_fingerprint": null, "source_type": null, "fingerprint_metadata": null,
"task_id": "677cf2dd-96a9-4eac-9140-0ecaba9609f7", "task_name": "Give me an
integer score between 1-5 for the following title: ''The impact of AI in the
future of work''", "agent_id": "8d6e3481-36fa-4fca-9665-977e6d76a969", "agent_role":
"Scorer", "from_task": null, "from_agent": null, "messages": [{"role": "system",
"content": "You are Scorer. You''re an expert scorer, specialized in scoring
titles.\nYour personal goal is: Score the title\nTo give my best complete final
answer to the task respond using the exact following format:\n\nThought: I now
can give a great answer\nFinal Answer: Your final answer must be the great and
the most complete as possible, it must be outcome described.\n\nI MUST use these
formats, my job depends on it!"}, {"role": "user", "content": "\nCurrent Task:
Give me an integer score between 1-5 for the following title: ''The impact of
AI in the future of work''\n\nThis is the expected criteria for your final answer:
The score of the title.\nyou MUST return the actual complete content as the
final answer, not a summary.\nEnsure your final answer contains only the content
in the following format: {\n \"properties\": {\n \"score\": {\n \"title\":
\"Score\",\n \"type\": \"integer\"\n }\n },\n \"required\": [\n \"score\"\n ],\n \"title\":
\"ScoreOutput\",\n \"type\": \"object\",\n \"additionalProperties\": false\n}\n\nEnsure
the final output does not include any code block markers like ```json or ```python.\n\nBegin!
This is VERY important to you, use the tools available and give your best Final
Answer, your job depends on it!\n\nThought:"}], "response": "Thought: I now
can give a great answer\n{\n \"score\": 4\n}", "call_type": "<LLMCallType.LLM_CALL:
''llm_call''>", "model": "gpt-4.1-mini"}}, {"event_id": "0b8a17b6-e7d2-464d-a969-56dd705a40ef",
"timestamp": "2025-10-31T07:25:10.466933+00:00", "type": "agent_execution_completed",
"event_data": {"agent_role": "Scorer", "agent_goal": "Score the title", "agent_backstory":
"You''re an expert scorer, specialized in scoring titles."}}, {"event_id": "b835b8e7-992b-4364-9ff8-25c81203ef77",
"timestamp": "2025-10-31T07:25:10.467175+00:00", "type": "task_completed", "event_data":
{"task_description": "Give me an integer score between 1-5 for the following
title: ''The impact of AI in the future of work''", "task_name": "Give me an
integer score between 1-5 for the following title: ''The impact of AI in the
future of work''", "task_id": "677cf2dd-96a9-4eac-9140-0ecaba9609f7", "output_raw":
"Thought: I now can give a great answer\n{\n \"score\": 4\n}", "output_format":
"OutputFormat.PYDANTIC", "agent_role": "Scorer"}}, {"event_id": "a9973b74-9ca6-46c3-b219-0b11ffa9e210",
"timestamp": "2025-10-31T07:25:10.469421+00:00", "type": "crew_kickoff_completed",
"event_data": {"timestamp": "2025-10-31T07:25:10.469421+00:00", "type": "crew_kickoff_completed",
"source_fingerprint": null, "source_type": null, "fingerprint_metadata": null,
"task_id": null, "task_name": null, "agent_id": null, "agent_role": null, "crew_name":
"crew", "crew": null, "output": {"description": "Give me an integer score between
1-5 for the following title: ''The impact of AI in the future of work''", "name":
"Give me an integer score between 1-5 for the following title: ''The impact
of AI in the future of work''", "expected_output": "The score of the title.",
"summary": "Give me an integer score between 1-5 for the following...", "raw":
"Thought: I now can give a great answer\n{\n \"score\": 4\n}", "pydantic":
{}, "json_dict": null, "agent": "Scorer", "output_format": "pydantic"}, "total_tokens":
300}}], "batch_metadata": {"events_count": 8, "batch_sequence": 1, "is_final_batch":
false}}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '615'
content-type:
- application/json
cookie:
- __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AB7g5CniMQJ0VGcH8UKTUvm5YmLv8\",\n \"object\":
\"chat.completion\",\n \"created\": 1727214485,\n \"model\": \"gpt-4o-2024-05-13\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_a5sTjq3Ebf2ePCGDCPDYn6ob\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"ScoreOutput\",\n
\ \"arguments\": \"{\\\"score\\\":4}\"\n }\n }\n
\ ],\n \"refusal\": null\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
100,\n \"completion_tokens\": 5,\n \"total_tokens\": 105,\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c85fa03d9621cf3-GRU
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate, zstd
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Length:
- '7336'
Content-Type:
- application/json
User-Agent:
- CrewAI-CLI/1.2.1
X-Crewai-Organization-Id:
- 73c2b193-f579-422c-84c7-76a39a1da77f
X-Crewai-Version:
- 1.2.1
method: POST
uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches/4ced1ade-0d34-4d28-a47d-61011b1f3582/events
response:
body:
string: '{"events_created":8,"ephemeral_trace_batch_id":"8657c7bd-19a7-4873-b561-7cfc910b1b81"}'
headers:
Connection:
- keep-alive
Content-Length:
- '86'
Content-Type:
- application/json; charset=utf-8
Date:
- Tue, 24 Sep 2024 21:48:05 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '137'
openai-version:
- '2020-10-01'
- Fri, 31 Oct 2025 07:25:11 GMT
cache-control:
- no-store
content-security-policy:
- 'default-src ''self'' *.app.crewai.com app.crewai.com; script-src ''self''
''unsafe-inline'' *.app.crewai.com app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts
https://www.gstatic.com https://run.pstmn.io https://apis.google.com https://apis.google.com/js/api.js
https://accounts.google.com https://accounts.google.com/gsi/client https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.min.css.map
https://*.google.com https://docs.google.com https://slides.google.com https://js.hs-scripts.com
https://js.sentry-cdn.com https://browser.sentry-cdn.com https://www.googletagmanager.com
https://js-na1.hs-scripts.com https://js.hubspot.com http://js-na1.hs-scripts.com
https://bat.bing.com https://cdn.amplitude.com https://cdn.segment.com https://d1d3n03t5zntha.cloudfront.net/
https://descriptusercontent.com https://edge.fullstory.com https://googleads.g.doubleclick.net
https://js.hs-analytics.net https://js.hs-banner.com https://js.hsadspixel.net
https://js.hscollectedforms.net https://js.usemessages.com https://snap.licdn.com
https://static.cloudflareinsights.com https://static.reo.dev https://www.google-analytics.com
https://share.descript.com/; style-src ''self'' ''unsafe-inline'' *.app.crewai.com
app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts; img-src ''self'' data:
*.app.crewai.com app.crewai.com https://zeus.tools.crewai.com https://dashboard.tools.crewai.com
https://cdn.jsdelivr.net https://forms.hsforms.com https://track.hubspot.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://www.google.com
https://www.google.com.br; font-src ''self'' data: *.app.crewai.com app.crewai.com;
connect-src ''self'' *.app.crewai.com app.crewai.com https://zeus.tools.crewai.com
https://connect.useparagon.com/ https://zeus.useparagon.com/* https://*.useparagon.com/*
https://run.pstmn.io https://connect.tools.crewai.com/ https://*.sentry.io
https://www.google-analytics.com https://edge.fullstory.com https://rs.fullstory.com
https://api.hubspot.com https://forms.hscollectedforms.net https://api.hubapi.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://google.com/pagead/form-data/16713662509
https://google.com/ccm/form-data/16713662509 https://www.google.com/ccm/collect
https://worker-actionkit.tools.crewai.com https://api.reo.dev; frame-src ''self''
*.app.crewai.com app.crewai.com https://connect.useparagon.com/ https://zeus.tools.crewai.com
https://zeus.useparagon.com/* https://connect.tools.crewai.com/ https://docs.google.com
https://drive.google.com https://slides.google.com https://accounts.google.com
https://*.google.com https://app.hubspot.com/ https://td.doubleclick.net https://www.googletagmanager.com/
https://www.youtube.com https://share.descript.com'
etag:
- W/"be223998b84365d3a863f942c880adfb"
expires:
- '0'
permissions-policy:
- camera=(), microphone=(self), geolocation=()
pragma:
- no-cache
referrer-policy:
- strict-origin-when-cross-origin
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999947'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
- max-age=63072000; includeSubDomains
vary:
- Accept
x-content-type-options:
- nosniff
x-frame-options:
- SAMEORIGIN
x-permitted-cross-domain-policies:
- none
x-request-id:
- req_f4a8f8fa4736d7f903e91433ec9ff69a
http_version: HTTP/1.1
status_code: 200
- 9c19d6df-9190-4764-afed-f3444939d2e4
x-runtime:
- '0.123911'
x-xss-protection:
- 1; mode=block
status:
code: 200
message: OK
- request:
body: '{"status": "completed", "duration_ms": 2305, "final_event_count": 8}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate, zstd
Connection:
- keep-alive
Content-Length:
- '68'
Content-Type:
- application/json
User-Agent:
- CrewAI-CLI/1.2.1
X-Crewai-Organization-Id:
- 73c2b193-f579-422c-84c7-76a39a1da77f
X-Crewai-Version:
- 1.2.1
method: PATCH
uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches/4ced1ade-0d34-4d28-a47d-61011b1f3582/finalize
response:
body:
string: '{"id":"8657c7bd-19a7-4873-b561-7cfc910b1b81","ephemeral_trace_id":"4ced1ade-0d34-4d28-a47d-61011b1f3582","execution_type":"crew","crew_name":"crew","flow_name":null,"status":"completed","duration_ms":2305,"crewai_version":"1.2.1","total_events":8,"execution_context":{"crew_name":"crew","flow_name":null,"privacy_level":"standard","crewai_version":"1.2.1","crew_fingerprint":null},"created_at":"2025-10-31T07:25:09.569Z","updated_at":"2025-10-31T07:25:11.837Z","access_code":"TRACE-7f02e40cd9","user_identifier":null}'
headers:
Connection:
- keep-alive
Content-Length:
- '517'
Content-Type:
- application/json; charset=utf-8
Date:
- Fri, 31 Oct 2025 07:25:11 GMT
cache-control:
- no-store
content-security-policy:
- 'default-src ''self'' *.app.crewai.com app.crewai.com; script-src ''self''
''unsafe-inline'' *.app.crewai.com app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts
https://www.gstatic.com https://run.pstmn.io https://apis.google.com https://apis.google.com/js/api.js
https://accounts.google.com https://accounts.google.com/gsi/client https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.min.css.map
https://*.google.com https://docs.google.com https://slides.google.com https://js.hs-scripts.com
https://js.sentry-cdn.com https://browser.sentry-cdn.com https://www.googletagmanager.com
https://js-na1.hs-scripts.com https://js.hubspot.com http://js-na1.hs-scripts.com
https://bat.bing.com https://cdn.amplitude.com https://cdn.segment.com https://d1d3n03t5zntha.cloudfront.net/
https://descriptusercontent.com https://edge.fullstory.com https://googleads.g.doubleclick.net
https://js.hs-analytics.net https://js.hs-banner.com https://js.hsadspixel.net
https://js.hscollectedforms.net https://js.usemessages.com https://snap.licdn.com
https://static.cloudflareinsights.com https://static.reo.dev https://www.google-analytics.com
https://share.descript.com/; style-src ''self'' ''unsafe-inline'' *.app.crewai.com
app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts; img-src ''self'' data:
*.app.crewai.com app.crewai.com https://zeus.tools.crewai.com https://dashboard.tools.crewai.com
https://cdn.jsdelivr.net https://forms.hsforms.com https://track.hubspot.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://www.google.com
https://www.google.com.br; font-src ''self'' data: *.app.crewai.com app.crewai.com;
connect-src ''self'' *.app.crewai.com app.crewai.com https://zeus.tools.crewai.com
https://connect.useparagon.com/ https://zeus.useparagon.com/* https://*.useparagon.com/*
https://run.pstmn.io https://connect.tools.crewai.com/ https://*.sentry.io
https://www.google-analytics.com https://edge.fullstory.com https://rs.fullstory.com
https://api.hubspot.com https://forms.hscollectedforms.net https://api.hubapi.com
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://google.com/pagead/form-data/16713662509
https://google.com/ccm/form-data/16713662509 https://www.google.com/ccm/collect
https://worker-actionkit.tools.crewai.com https://api.reo.dev; frame-src ''self''
*.app.crewai.com app.crewai.com https://connect.useparagon.com/ https://zeus.tools.crewai.com
https://zeus.useparagon.com/* https://connect.tools.crewai.com/ https://docs.google.com
https://drive.google.com https://slides.google.com https://accounts.google.com
https://*.google.com https://app.hubspot.com/ https://td.doubleclick.net https://www.googletagmanager.com/
https://www.youtube.com https://share.descript.com'
etag:
- W/"bff97e21bd1971750dcfdb102fba9dcd"
expires:
- '0'
permissions-policy:
- camera=(), microphone=(self), geolocation=()
pragma:
- no-cache
referrer-policy:
- strict-origin-when-cross-origin
strict-transport-security:
- max-age=63072000; includeSubDomains
vary:
- Accept
x-content-type-options:
- nosniff
x-frame-options:
- SAMEORIGIN
x-permitted-cross-domain-policies:
- none
x-request-id:
- 2b6cd38d-78fa-4676-94ff-80e3bcf48a03
x-runtime:
- '0.064858'
x-xss-protection:
- 1; mode=block
status:
code: 200
message: OK
version: 1
@@ -1649,4 +1649,784 @@ interactions:
status:
code: 200
message: OK
- request:
|
||||
body: "{\"messages\":[{\"role\":\"system\",\"content\":\"Convert all responses
|
||||
into valid JSON output.\"},{\"role\":\"user\",\"content\":\"Assess the quality
|
||||
of the task completed based on the description, expected output, and actual
|
||||
results.\\n\\nTask Description:\\nResearch a topic to teach a kid aged 6 about
|
||||
math.\\n\\nExpected Output:\\nA topic, explanation, angle, and examples.\\n\\nActual
|
||||
Output:\\nI now can give a great answer \\nFinal Answer: \\n\\n**Topic: Introduction
|
||||
to Basic Addition**\\n\\n**Explanation:**\\nBasic addition is about combining
|
||||
two or more groups of things together to find out how many there are in total.
|
||||
It's one of the most fundamental concepts in math and is a building block for
|
||||
all other math skills. Teaching addition to a 6-year-old involves using simple
|
||||
numbers and relatable examples that help them visualize and understand the concept
|
||||
of adding together.\\n\\n**Angle:**\\nTo make the concept of addition fun and
|
||||
engaging, we can use everyday objects that a child is familiar with, such as
|
||||
toys, fruits, or drawing items. Incorporating visuals and interactive elements
|
||||
will keep their attention and help reinforce the idea of combining numbers.\\n\\n**Examples:**\\n\\n1.
|
||||
**Using Objects:**\\n - **Scenario:** Let\u2019s say you have 2 apples and
|
||||
your friend gives you 3 more apples.\\n - **Visual**: Arrange the apples in
|
||||
front of the child.\\n - **Question:** \\\"How many apples do you have now?\\\"\\n
|
||||
\ - **Calculation:** 2 apples (your apples) + 3 apples (friend's apples) =
|
||||
5 apples. \\n - **Conclusion:** \\\"You now have 5 apples!\\\"\\n\\n2. **Drawing
|
||||
Pictures:**\\n - **Scenario:** Draw 4 stars on one side of the paper and 2
|
||||
stars on the other side.\\n - **Activity:** Ask the child to count the stars
|
||||
in the first group and then the second group.\\n - **Question:** \\\"If we
|
||||
put them together, how many stars do we have?\\\"\\n - **Calculation:** 4
|
||||
stars + 2 stars = 6 stars. \\n - **Conclusion:** \\\"You drew 6 stars all
|
||||
together!\\\"\\n\\n3. **Story Problems:**\\n - **Scenario:** \\\"You have
|
||||
5 toy cars, and you buy 3 more from the store. How many cars do you have?\\\"\\n
|
||||
\ - **Interaction:** Create a fun story around the toy cars (perhaps the cars
|
||||
are going on an adventure).\\n - **Calculation:** 5 toy cars + 3 toy cars
|
||||
= 8 toy cars. \\n - **Conclusion:** \\\"You now have a total of 8 toy cars
|
||||
for your adventure!\\\"\\n\\n4. **Games:**\\n - **Activity:** Play a simple
|
||||
game where you roll a pair of dice. Each die shows a number.\\n - **Task:**
|
||||
Ask the child to add the numbers on the dice together.\\n - **Example:** If
|
||||
one die shows 2 and the other shows 4, the child will say \u201C2 + 4 = 6!\u201D\\n
|
||||
\ - **Conclusion:** \u201CWhoever gets the highest number wins a point!\u201D\\n\\nIn
|
||||
summary, when teaching a 6-year-old about basic addition, it is essential to
|
||||
use simple numbers, real-life examples, visual aids, and engaging activities.
|
||||
This ensures the child can grasp the concept while having a fun learning experience.
|
||||
Making math relatable to their world helps build a strong foundation for their
|
||||
future learning!\\n\\nPlease provide:\\n- Bullet points suggestions to improve
|
||||
future similar tasks\\n- A score from 0 to 10 evaluating on completion, quality,
|
||||
and overall performance- Entities extracted from the task output, if any, their
|
||||
type, description, and relationships\"}],\"model\":\"gpt-4.1-mini\"}"
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '3303'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- _cfuvid=Q23zZGhbuNaTNh.RPoM_1O4jWXLFM.KtSgSytn2NO.Q-1744492727869-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.109.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.109.1
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.10
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//xFZLbxw3DL77VxBz6WXXyG4c2/EtiYM0QNMGtYMCzQY2LXFmFGukichZ
|
||||
exD4vxeU9uU2TQLEaPewGIgi+fHjQ/y8B1A5W51AZVoU0/V++uKPP1+l+nyWfh+fzz+dHp3XMrzs
|
||||
T2e//vz6zS/VRDXi1UcystbaN7HrPYmLoYhNIhRSq7Ojw9nx0fFs9jgLumjJq1rTy/RgfzbtXHDT
|
||||
+aP5k+mjg+nsYKXeRmeIqxN4vwcA8Dn/K9Bg6bY6gUeT9UlHzNhQdbK5BFCl6PWkQmbHgkGqyVZo
|
||||
YhAKGfvl5eVHjmERPi8CwKKiJfoBNYyFGtRDPWYTE+nJ08n6yMSuoyCsp4vqvCUQ5Gu4QYYVF2Tz
|
||||
V6KWArsl+REwWMC+T7FPDkVP6pgA4XA6EqZp9BZwsI6CoX1Qm3GQfhBwwfjBEgOC8YQJJPbOTIBu
|
||||
e48h450AhQYbFxrA0HiaZGdh6CjFgSGRpyUGAbpFRceA3jWBLNw4aUFaUmNkFHVxWgB4DM2ADYFj
|
||||
YKea2W4ij4JXazcuCCU04pYEHUkbLQMmAh6ahljITuCmdabNh3RryHsKkoMXQtMq6jEOoQHTOm8T
|
||||
hX1440JM4Lo+xSVlqsHEwVu4IujQErgAJgbjmAIxg17ONGUKYOl4QA/o7BqFi0F1ELqo0CQNRoZE
|
||||
FjoMgdL+otLc3k1KJWyVcorfr/P+uqQCEpUKsKRscBySoQyiOGbgQeNl6JMLmSq4iemaWyLJ92qP
|
||||
3BpMlvcX1aaszopbQLhKjmpAZmLW8FfEZtXofbyZDj1kzp2MIBEaHBoqBMIQLCUtfGVj1/7bFJfO
|
||||
EojruRSfxV6UslUJrOsJrpC1FpSwbPMnBu7JuNqZkm/iEogWZFALLKOnXWfvmDRF2oKrdHco2Zl6
|
||||
3lSixHWaIRFavHLeybhr6Jm1gMBtTAI8dB2mUT1f0wiC14Q3ODIwmQxcbX8anLmGRDWl3Ewltx9W
|
||||
uaUgThztJnbV6hvpWPr6ObIz8MxaV2bCZHtNxp5Wza/NeE9miU1y/XqOlPlQD8GiZhI9dChtrl7q
|
||||
BQSHphWlQXOQyd6/Zy53m1Zi6/oC+kMR3k2+iv5Zr/z+C+qXhX74LQ/yr8N/p6XgQsZXu8SbMaKg
|
||||
LXUxsCQUAlwx9S38a9kO9PtXt35fhxXSHZOrWDA1JOubWlYlFi7ZLr+79ef3UXYmmB6aMSYTg91Q
|
||||
pvVp4hCkzGq7Ia2MYpvwxoWG/3MKT4tjeOvyXPwRFs/jCC8enkhpXdryuK7HmDqItc4HiWmEPsUr
|
||||
T51WZn4R6fu76sEpPcuI3hZEP0LoqTP00H0chyTthk19qTBJYbLBLrd2fpz/v6Z+hd03yjBP9UW4
W4TLy8vdHS9RPTDqohkG73cEGEKUAjwP0pXkbrNP+thoCfHfVKvaBcftRSLkGHR3ZIl9laV3e/q2
6N463FtFqz7FrpcLideU3R09nhd71XZf3koPnh6spBIF/VYwm88PJ1+weGFJ0Hne2X0rg6Ylu9Xd
Lsq6WcYdwd5O3P/E8yXbJXYXmu8xvxUYfeXIXvSJrDP3Y95eS/Qxr59fvrbhOQOumNLSGboQR0lz
YanGwZctv+KRhbqL2oWGUl6/9ErdXxyY+fGTWX18OK/27vb+AgAA//8DAJ7NZpf5DAAA
headers:
CF-RAY:
- 996fc202cde1ed4f-MXP
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 31 Oct 2025 02:35:21 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=eO4EWmV.5ZoECkIpnAaY5sSBUK9wFdJdNhKbyTIO478-1761878121-1.0.1.1-gSm1br4q740ZTDBXAgbtjUsTnLBFSxwCDB_yXRSeDzk6jRc5RKIB6wcLCiGioSy3PTKja7Goyu.0qGURIIKtGEBkZGwEMYLmMLerG00d5Rg;
path=/; expires=Fri, 31-Oct-25 03:05:21 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=csCCKW32niSRt5uCN_12uTrv6uFSvpNcPlYFnmVIBrg-1761878121273-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '7373'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '7391'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-project-tokens:
- '150000000'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-project-tokens:
- '149999212'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999210'
x-ratelimit-reset-project-tokens:
- 0s
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_03319b21e980480fbaf69e09f1d9241c
status:
code: 200
message: OK
- request:
body: "{\"messages\":[{\"role\":\"system\",\"content\":\"Convert all responses
into valid JSON output.\"},{\"role\":\"user\",\"content\":\"Assess the quality
of the task completed based on the description, expected output, and actual
results.\\n\\nTask Description:\\nResearch a topic to teach a kid aged 6 about
math.\\n\\nExpected Output:\\nA topic, explanation, angle, and examples.\\n\\nActual
Output:\\nI now can give a great answer \\nFinal Answer: \\n\\n**Topic: Introduction
to Basic Addition**\\n\\n**Explanation:**\\nBasic addition is about combining
two or more groups of things together to find out how many there are in total.
It's one of the most fundamental concepts in math and is a building block for
all other math skills. Teaching addition to a 6-year-old involves using simple
numbers and relatable examples that help them visualize and understand the concept
of adding together.\\n\\n**Angle:**\\nTo make the concept of addition fun and
engaging, we can use everyday objects that a child is familiar with, such as
toys, fruits, or drawing items. Incorporating visuals and interactive elements
will keep their attention and help reinforce the idea of combining numbers.\\n\\n**Examples:**\\n\\n1.
**Using Objects:**\\n - **Scenario:** Let\u2019s say you have 2 apples and
your friend gives you 3 more apples.\\n - **Visual**: Arrange the apples in
front of the child.\\n - **Question:** \\\"How many apples do you have now?\\\"\\n
\ - **Calculation:** 2 apples (your apples) + 3 apples (friend's apples) =
5 apples. \\n - **Conclusion:** \\\"You now have 5 apples!\\\"\\n\\n2. **Drawing
Pictures:**\\n - **Scenario:** Draw 4 stars on one side of the paper and 2
stars on the other side.\\n - **Activity:** Ask the child to count the stars
in the first group and then the second group.\\n - **Question:** \\\"If we
put them together, how many stars do we have?\\\"\\n - **Calculation:** 4
stars + 2 stars = 6 stars. \\n - **Conclusion:** \\\"You drew 6 stars all
together!\\\"\\n\\n3. **Story Problems:**\\n - **Scenario:** \\\"You have
5 toy cars, and you buy 3 more from the store. How many cars do you have?\\\"\\n
\ - **Interaction:** Create a fun story around the toy cars (perhaps the cars
are going on an adventure).\\n - **Calculation:** 5 toy cars + 3 toy cars
= 8 toy cars. \\n - **Conclusion:** \\\"You now have a total of 8 toy cars
for your adventure!\\\"\\n\\n4. **Games:**\\n - **Activity:** Play a simple
game where you roll a pair of dice. Each die shows a number.\\n - **Task:**
Ask the child to add the numbers on the dice together.\\n - **Example:** If
one die shows 2 and the other shows 4, the child will say \u201C2 + 4 = 6!\u201D\\n
\ - **Conclusion:** \u201CWhoever gets the highest number wins a point!\u201D\\n\\nIn
summary, when teaching a 6-year-old about basic addition, it is essential to
use simple numbers, real-life examples, visual aids, and engaging activities.
This ensures the child can grasp the concept while having a fun learning experience.
Making math relatable to their world helps build a strong foundation for their
future learning!\\n\\nPlease provide:\\n- Bullet points suggestions to improve
future similar tasks\\n- A score from 0 to 10 evaluating on completion, quality,
and overall performance- Entities extracted from the task output, if any, their
type, description, and relationships\"}],\"model\":\"gpt-4.1-mini\",\"response_format\":{\"type\":\"json_schema\",\"json_schema\":{\"schema\":{\"$defs\":{\"Entity\":{\"properties\":{\"name\":{\"description\":\"The
name of the entity.\",\"title\":\"Name\",\"type\":\"string\"},\"type\":{\"description\":\"The
type of the entity.\",\"title\":\"Type\",\"type\":\"string\"},\"description\":{\"description\":\"Description
of the entity.\",\"title\":\"Description\",\"type\":\"string\"},\"relationships\":{\"description\":\"Relationships
of the entity.\",\"items\":{\"type\":\"string\"},\"title\":\"Relationships\",\"type\":\"array\"}},\"required\":[\"name\",\"type\",\"description\",\"relationships\"],\"title\":\"Entity\",\"type\":\"object\",\"additionalProperties\":false}},\"properties\":{\"suggestions\":{\"description\":\"Suggestions
to improve future similar tasks.\",\"items\":{\"type\":\"string\"},\"title\":\"Suggestions\",\"type\":\"array\"},\"quality\":{\"description\":\"A
score from 0 to 10 evaluating on completion, quality, and overall performance,
all taking into account the task description, expected output, and the result
of the task.\",\"title\":\"Quality\",\"type\":\"number\"},\"entities\":{\"description\":\"Entities
extracted from the task output.\",\"items\":{\"$ref\":\"#/$defs/Entity\"},\"title\":\"Entities\",\"type\":\"array\"}},\"required\":[\"suggestions\",\"quality\",\"entities\"],\"title\":\"TaskEvaluation\",\"type\":\"object\",\"additionalProperties\":false},\"name\":\"TaskEvaluation\",\"strict\":true}},\"stream\":false}"
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '4609'
content-type:
- application/json
cookie:
- _cfuvid=csCCKW32niSRt5uCN_12uTrv6uFSvpNcPlYFnmVIBrg-1761878121273-0.0.1.1-604800000;
__cf_bm=eO4EWmV.5ZoECkIpnAaY5sSBUK9wFdJdNhKbyTIO478-1761878121-1.0.1.1-gSm1br4q740ZTDBXAgbtjUsTnLBFSxwCDB_yXRSeDzk6jRc5RKIB6wcLCiGioSy3PTKja7Goyu.0qGURIIKtGEBkZGwEMYLmMLerG00d5Rg
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.109.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-helper-method:
- chat.completions.parse
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.109.1
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.10
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//pFbfbxs3DH7PX0HoZRlgB7GTpqnfsg1Ys6JYsbYrsF5g0BJ9p0Ynqvrh
xAvyvw+SfLbTpUWKvtzDUSK/7yNF8u4AQGglZiBkh1H2zox//fDPy+OTi5fm/Wvp/ckf9vzT838/
xFdn5pU9FaN8gxefSMbh1pHk3hmKmm01S08YKXudPD+bnD8/n0ynxdCzIpOvtS6OT48m415bPZ4e
T5+Nj0/Hk4132bGWFMQMPh4AANyVbwZqFd2KGRyPhj89hYAtidn2EIDwbPIfgSHoENFGMdoZJdtI
tmC/a0RIbUshIw+NmH1sxKWVJikChIXXtARekV9pugFeQuwIZKeN+ikA3eoQtW2hx9jBteUbQ6ol
YA/aRvIUYoDIEFEb9uUq3TqDFnMwWFCM5I8aMWrEhVKw0iGhAdQqZBdG2+ty3VPg5CUFiB1GkJyM
gsoLtIVIKLuMokBjK8nFAD17qihQRr0is66BLq1k79hjpOyDQujJxhxPdiSvs58le0hWkc+6qeKZ
ZGf150QFT08YkqcHUrQegxv0iey0rOHeVm0BldKZNBrwZHJh1FOF6ZKN4ZtxcmAoBLYlyiJpoyA5
tsWnttGzSpLUwLEGeON5pRVB1C4U5A492Vj8kkoSI/sAbKHjm+wWFbpYUQ66oXOeUXaAUrKvhPkB
O0Pobf7vUNJRI65Gjfic0Oi4bsTsxagRZKOOmkoB3TXCYk+NmDXiFwxawsWGfUEc167aXueqeZdV
KP8VBem1q+dmjbiAZbIKc3rQDKRzwku1abtis8qYJPcLXdDFG860S+pbz8mFmhFt2yLpUlsFnDb0
ubpNdqNkyUt+BJ12m4fwPpQ8ValgzSmHy5p4srAo1AqYcK2NCY24uh/tk/+zdIgAh+icofDzQ/rv
Bv0vtHpUgL8IzdjoJYGO1AdIGzT1nZg1KOrZhlhqOVMaimwrVuQt3G9z1BboFnMDg8lQxftPdatB
JY3bfH7B+DePN5nSGy1j8hTgMET0383879oJ1MZbT7HjQr0j43YZKMkDtCoDqiI9meb0R2i+jezX
8MbzwuTEHEZeg/wGz3dD/3iU7fuQz1j0HnOngtKdb2vrrFOk9gxcGKr15obI+cFfa/V02ic/Qvt3
7HNKlZYEno3Rtv1qZnPTLe3hEcKXu7YMzuB6L8FkW2xpl2Jtd81nW96x85zabsAABVAug5D6fpgE
nkIy8enKnH6nMlf3+xPV0zIFzGPdJmP2DGgtxxo7z/KrjeV+O70Ntzmd4YurYqmtDt3cEwa2eVKH
yE4U6/0BwFXZEtKDwS+c597FeeRrKuFenE2rP7HbTnbWk+npxlp64c4wmZ6fjR7xOFeUR3nY2zSE
RNmR2t3drSWYlOY9w8Ee7//jecx35a5t+xT3O4PMvY/U3HlSWj7kvDvmKTfnrx3b6lwAi5AXIEnz
qMnnXChaYjJ1pxJhHSL186W2LXnndV2slm5+KqfnzybL87OpOLg/+A8AAP//AwBsA26GZwoAAA==
headers:
CF-RAY:
- 996fc2325f81ed4f-MXP
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 31 Oct 2025 02:35:26 GMT
Server:
- cloudflare
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '4765'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '4807'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-project-tokens:
- '150000000'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-project-tokens:
- '149999212'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999212'
x-ratelimit-reset-project-tokens:
- 0s
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_847e5f52c37f478b9a56a99113ce7d62
status:
code: 200
message: OK
- request:
body: '{"input":["Story Problems (toy cars)(Teaching Technique): Using narrative
contexts to create relatable math problems for kids."],"model":"text-embedding-3-small","encoding_format":"base64"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '189'
content-type:
- application/json
cookie:
- _cfuvid=I1qVn4HwObmpZbHCIfihkYYxjalVXJj8SvhRNmXBdMA-1744492719162-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.109.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.109.1
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.10
method: POST
uri: https://api.openai.com/v1/embeddings
response:
body:
string: !!binary |
H4sIAAAAAAAAA1R6Ww+ySpfm/fcrdvat0xEQqap9xxkEpDiI4mQyAUUERY5VQHX6v3f07emZuTER
SSpWrXrWc1j/fq+//vq7zeviNv39z19/v6tx+vt/fJ/dsyn7+5+//ue//vrrr7/+/ff5/71ZNHlx
v1ef8vf678fqcy+Wv//5i/vvJ//3pX/++lsVp4Sq14xzmTUHKgL264z93lvrNdzfHViP6EW17nCq
SZJuW5BWjzM2hOLI6BjxEmoMLqIX6xOCWTBmEe0TQcKK95AY+8zDBljCOlJHdHtt3iixjTrxjPBx
lVE8HW4PDmzeN4ZPIGgGuskjHWW81GH/xUONhXldwctB9bB2Y1d3uTwSB+XP9kCzWfbi3VmYZbSq
L4k+jMLUplrY2ej40W/06p99MFaOPKP04+1pwEFar/M6EOiyW4IzzzKG1eDHFzgZhoCPSNJirnLs
FUb9rODo3O3B2KWdj+j+mdLU2uhsOdweAnzPqUjNfEndF2+ODqzKJMN+ta3ALg3sFO7EUsSXzYfE
rEoeN9CZCFNH4Fi+zge9R+FhAfjsHhe2orrQ4Xf/cBytlcazWOdgGowyVSpvBHMaGDK4Nw4g4uSr
OefyrET99rSnyclp6+VdRTeQIxZQ/6FSMDmDbYI1lU80t+mcr8o4qOh1HHKqbu9KzZgm6lC92Iws
hyGueeUq+jAsTj1N3tKnXruNyaGwS2MCPloyUHd/beDNgj4RP10JaFcmHnKPFw1fir0Rr4Z3LOEm
yx0aCaar0UN5iFCSpAZNvbeYE58JAUx2u8hfn5ePOzprKsC0Ofn0bH/4mnz8pEcy59cUr9B3OfFx
8CUXcSZNzXXU5okrN796pCEaXZd/6EUJ3wfPwsG4vQ1zmywz2N1eLxrM5AVoZK06wPKzoReOU/OF
35Qm3JpOj918HzFBOxQ6dO1PSQ0+Sly2rS43ON5uFj5GeleztOkCJOR5j1VNe9Yz75gcTIK28t90
igdeYWAFt1TYY98fT7HQiJ8Nyu/WlWz72qi50LAiGNzMkHoyeA80tN9noAg3SDWK8mFur7cCGmpX
YesilfHi38sVviLPpcXgyMMuIB2BFfgIhNsLtcu8+O5D/axdMC6Ojsa43c6Habg86OO73vR2lxfK
99pE5Wi4aMIDBwTppZ/QY4Y/+cS10g3aezBTvFTOsNvH2oyKNj/5qz4ELn/Ffgb9va3gTD4+6uV0
jl/oQ9KZ5vL5w3Z97bV7PD72ZNO+9Lhd+vEEZWXjkFF4uTGfJOUJgQeSqNptsnzeKLkD7b1iUNwd
KpfH4WeDQr8ufNQel5gkbexAWYQm1vaPwOXkIiwlFF996jLXjHn3YUXwGSspPp0WbeA1ZpcI96Sm
h4+xgjWznEw8PgvfT+6Iq+diPUtwaE4CzjrFygXdsG1Y75yKAM0hYBGSgwwtFL+xuQlAPJfW9Qxl
cWNiefOcwRLuyxLF3rnGjvpetKVoZh+4hSoR8uKhu9ibfSntob2lt069uOuyzDd4OcgevX6MiJEP
l6nwh5/p01yG1d6VM7DTxKfB/arHc3s9FVCb3wi796Crl+D0SOFGk0J6WDE/LBcbRrDOPd/f6rpa
c5oUBug5+QnGVfcYFrPb2r/zI0KwspyO7lxCuq9TerAuB7CTAXPQ6m0GAjU61isKHiJ4SMXeB7zy
qL/1NcNVe7yxeWB+zD85i8BDnzGsV8+q7o09DuC33klDxCOYTlchAkNzFvwdyvlhyZ7XE7xuZAUr
wmS6wmVQG5TKIkcvcFtqC7eNT/BUZDZWU3keRtiqK9oANBKejxJt5B2f2+/Xbk/d7IFzgT+KAXrG
Wkr97f41jAaSR2TvNQMrfFzVJKy4APnETOktfAxsFclxA4fBvOLz+dCxMW8rE5xj1lCT9LW2Ylqe
QBl2vT/OCnTX222W0VghBZ9AEebU1A8VLFQk0dvpiUDfiB8IWzCuuADEZF2iVg7Sw5DDwfUW5gI2
lQglsOGxyXe2+8ULE06nE6XXUlPc9R5sfVjKiGFnf6kAdffhC+XBJiLCgZF43nlChCSlyahudyuY
tKNnQ3EvNn/6J3fKlB5lc5BjR9qG2q6SyhewhgqTjdtXMVWHVwrvH6XBj4v/dtlpVwdoadoGW9/z
mIn/alB1cDpqwFDSSDqt8h/8Ad5Yx4y0dQWtfNKoct+o2jwVsg4vV2lH/f05iVdOdBp4d60Ia0Yh
1otA9Qb9zq9Y3Es9HehZRgrwHv77jk7D7hnCFe5aOmKrjA/uzJujDQUheGAfwKWmrRn2iAjpCedW
eq05/XSS4C482dgUMzWeX0EkQu+RtTRbPEPjwZFxUHI3Lo7pqDEuCG0dIs4wqKnfdoBknO7Aj43O
FFfph62NSCGsDnbn87Up57sbrQT44ycXorXumspzgG5X+qKaEkuAjRES4Tl9HrAl3IJ6Tm57HfGZ
Bn1BMu5D61Wq/jtveucSVWNSlJbIQOUZ3/WKz6e4C2dowVSiJ2bYjGNnSYSz4NbYIprt7ohKHTi6
c4D17Pxiy1OdZfi8Xw9YvvY3l3MfWQ9NimofSS7Q5jVOfdRcPi227E07zMVaiGCAPMX2R0rrQT8F
AaqKRsO27mGwc+oaomZSVGxpV0tr6iG4odToL796cdcWXFL0LriE+u3rFRNNeAgAbS9Pv57tfT1k
I8jg+R6M+CFbtGaCkI7gi480ePJXxopqM4OjvF6wbOo1W16CJkHuVpv4sEufMZG1rQflzpEIB4ol
Z4+i5NAXP6l1keR8fbyXBsp1OZLtzkmBoJxVB4CyW6l234ZgkXxqQnQJTKrkR4stV/ugwq27ij5n
Ka+YvmIRwkcZQnrwoJtzDexSVI/bF3WXpWLrxlMg8viRYG/cOu5OfkcBMloTUrs1zsOM25sPrxG8
+mJQufl8mqsKBKL6pvYqzMP08Z8y4u2oIZuj3rhr4XURBNjAWL5wFVuPWsihBIwOjS/yyF4/vjdW
5pkev/1pzNteh9DhqF+OW1gv8yGQ0XZKMpqS1gC9dN/6cLc5mPheDJAN01sW4OeEmc9HbGHzg/dP
4MkmjK+5qLk0lV8ilI9eQ5Z9Iee7NJAzpBjbPVZxf2J8cHpkML1rwI+2g5ezRZFbxHlUoqp0ShiN
xXMJDtQ+f/FOc+cmVs/ooxZ3aqYo/PKjFsLKX3fUUO/nmEvM1AMXnnexuQuFeP59/+IrPTbvCizi
AmWo5uKZ6u59z8haSAH4HMmerOfuCpby/Cyg3jcm9cvPkq/PpBIQ5RhPmLRn9bQ5GRv0uZeGH/eW
OszvviiAA28pdot+chfWvQnaB95Inbx6gHstbG0oxclMZeImOXHQKsFXbArUfMfjsHhL5KO9vsH+
HjSeO+8esABRtuF9knFvbX4+b6KU7PiIhl8842WOyvDb77A81pa2XLGfwtc+A9hJScpYaE8neDu+
bjTa3p/1agO3gEDjGHZtGsTjWYA9iKu2w4dcrN1134kqlFIrpopR8mzJHPEGZuFQYy8Mnxp56yUH
DWXeUSNN45wmrTVDXu1Tsq267dDfBPYCONETfLTOMF6wsJiwf9QTVj51HTMpYR783deL+3oB5lYi
2WPcSz67sas2EbN8obg7VPT2UGR3uZ/e0a/+qCEUE2Pi3naQ7swStTYfkrPzMdggjXANTdA9ArPI
9ybA/Oxh7ZQF9QjDTvrVG1X9IIxnQ64LCN7XBN8GYeNSw37cgFhwiDra0gE24tyD4/4hUQdeT2wF
d3cDa2PlMM77rP4wK1ph2px9X1Daub4ntJbAaxVVej8Mijb/+PypfwY4Pc1n9/3lD9C47zf+6t4c
rQn3iQOH5R1Tk4hHNh+xJgFu5T0sf/Foji1hhNpp6LCin5DbT9q0An0oFppeXshlC1EzRE/WhrqB
5LsCNRMf7rwlpTcAFMDSeNDh/ZxQrIADqL/17kM2vxg+WO/TsOJDvYFVx5XUN59CzQ6HNoVavG6w
/dVzY3EpdfS8rIC6rfcc5rVbTPS5VwY1mZkOvNHcCug80/nP/92xLNhAQVdNeujaKp6THRVheGCA
bEEiDEQxmx48+OlMNpN8BbuoyCAs6m4hG/WcAmq6vgTfoDsRdT2k2mxevBP84t8f/ifAW0BAEJIS
a3b0ACzkPybU6pdDFlLZ8TyfhBHl7lkjrwtL4nGwswIWFr2TvbgaGq/cIwn2/k6gSt0EMTN1pYTT
NrtQG61tPd4oLSTxcK2pQ3kSr+y97SG6iSdsz2o3LOPSOL9+Qz2UvIfpaisq8ma9/Omtein6SILv
xyTjwjQUjZM5KQWDLC+EWYKjLelS6j/9Qr/9P19e3kuAYNtE2D8kdzCNxQfCcrkJNP7yf2E/8xmi
949O2Ge9g5WMYQOk2oHUZ/XBXfaLvSIdV/c/+kcwn8CBH8uX/UTKS9CXMCskxTB8n5d5W5usqFaR
a4089Ta0c2k9GCb0yVbD2nWxNI6GNxW+7L1IjRfcD0vu5w68Eo8nW9nCtQBmx5GuljhTrOvqMFvn
0ERKM8zYKQycL0W8ymjTe2esVtskF4y9FUCm8zG1BVbXazVqI5I7W6KXmKo1bbTegxs/UHHgFp98
EvPcloj7fGCdxayel0tjo6++or/vUynbL+im2YjlMIyHuf58ZBA1cvFHv8+n7NDDyQenP3xESNrc
Bm8BLFgjkZwvhhdI6FVcIsKJvROzohJWSJLxTgPcvbVFK9Uz2g4vnXrCy8157fAsoHtMNFJee6h9
jKl9wea5NvTrh+TsbqQlIFkh48hRzWFZGiGA+rIdfHgW1JivzIjAtglTbFFcgnks2Bnadbbzw9t6
iDkzozK07+aC/btw0XazUbzgaKkTNY/zM562HWug/hZqqtVnO38dsSvBZnQuvvTFI36p85f0Sied
MDvasikPkQCWyNp9+V7G+Ch+Ob/6pfk51Os5umQedF81h1X5/o7XzUN20PuMJXr0plgbRc9V4V1v
rlSvgamt16KoYDvfzthr5LfLXv2Vg8LchvhQak+X6v1io5cUN1/+27lLUEsiOOCL8/VbMJsRmjPk
vz4RNtZA05b7py5AR9wt9q7i6ac3T8DKhNRvl9vRHdXWP0GTK2NyacelnhfUVzB9NoMvHN51/fMH
JO4sivjy3c9FlwwRIuZnZF7tNp4Ph7cM0X3eYove3WFqt0YBkuoqYsd2onqJkOvDL18g4AC6P/wW
GvXlSn/8bpTe+Rn6e0ehefMptV3snmbouNuErCwJwBBlYiAdqHP2a5/HLvPkhwwa2dMprl/y8O2/
6h/+6uTVFrBoCW2kPy2Z4mOnMf74FAjgxnPkj6cnYux8TDewWwSd1PUr1pbJygUgbksOn3ppcNd+
P0vgczoyivNeqnunhyNs5d3Gf9bNnLPPXEMEF2HE8scL4vVWo/SPX/V48GI8cA66AbsqPZoCxc+5
Q3mUoHHxQnyrscNoaE9n0ArmC/vXdoyXu/IxUXMmCLvUZ/Hy9RdALroq2UgVcRevuMnS6dq+qR3b
n3odu8pBRSjl1Lndny7DBi9KizNGNIPXE+CdQTbRsHxiH955K1+Za3Pwq29/9f71K+82pJsIkB29
D8N8H8IKffGf4gF12hxNTIdnzaI+x+J4WL98Ho5q6OA7JM4gHKTUB24XXujdIMRd7Dbtf/3ye7/7
/Kfv0IdkM9lZn5CtX38LfPu9/zGPfk6omXjoYAshEbb+IZ5P99cZjvu7RF0q2/Uu8XUfVkfJw2YN
EzYfaJeC5OPm2OgNI+eXPSzgomCPmpekGmb9smvg/MqO2E6nzl3FHK9wjYMF+0M6xGxXZyXkvEn6
3e/8i/cbeOPYkez3vjSMe0gLSC3TpVmlqjm5pHqK0nFn4cPNJNpsIJtAnjQD1TV9yWe9RytwX0/O
58OrnzOxv0gQka7HeCVavF6X0yhl2U6m2kZohqW6gRdgDzXADluf+fLVA3Akx7ufKdc0/+kRoF2v
Hrb2wQUsguoI8Npf0M9vdokbtw20Et/B+mNuY7opuBlkzuGANT40YvaxPBFcSg9hRROteD1epVE6
Rs/JF3WPsu7rn4HYJMl3/94xSSdJlvjt/YxlDuJh1e1KQF5/C3F2iup4PjnEgUeGX9iQchmsRF0r
xB4S+PKtYFi48W7vt450xHJ0H+PFxPYZRv2qUPWrFzqJG27gy0f8z7ij8RJdfEE6hoNKbe+d5uvn
mXNgbcUtLspPGPOH24X78WeKm2at1xY8Uunx1h7f9UswX3G4Avu+vVFnlse8W3HloK342fnvcgGA
3I63EmTXTUIt67OwxVGeKvr6gfh7P7SJOm4AlsjYYXlXDWCZskiG0kd8EGHPGUwg1zgFHftkWHvC
Fxh5xxSgUSdXfOnNRJubrvb/+LVfvcLmbCv48GpJM/a+/HU1i7SA1tnkqXIYWD2HqVFAxUB7svBx
NdDl0lUQRVWN3YdgME711VVSU62gWth08XKtvNevvxCJ5D0rKCs28MtnfYnzdxrN1aeOls8zJJuY
lzXhlw9Uj6HCVnhocpYix4EVJx7ovT2GOROkpoWjmOZYLpgF5vNhlSDJTwNVoxy7azIgDia5HGOv
U3faFw8cybwWPXYW762RcJ/Y8Itv1Bbz1Z3qm30G+V6ZyP4CZveltv4ZRsb5Q2U1AXVnt0GPjGly
sKNcxfibx/TS9tbVP39p4FiWNJCPaw3b3uVYzz11TNACslK9WLOcxFBvYCOGD6yt5eCuz6Tn4GLu
MrKV9v+n38FKKamWu329xJEoIWHwBVLJdyPeMW3WwTtDd58F3psRMl4b8OUzVG4273rdZ6cSeE93
S+V4R8HsuU//5y/Th3cfXMJLkgq1Y6SThX8e8yUMlBuMFdPC3qzctHXSphn+9OTVPxM2wsR7gdIJ
DHxluZ+z9yXv4XrvK2xi58WWaV4lxE9Exkf8zFy62qYM5boav+f/Bt3srQTqb66mQRiyge1hz4EU
0RUbr1LXVjCrDop2jCPbr96Z223KSd6MXGz4b8vtVTUdoRXcPjjhTJh3t5c9w+Y5N1jvpUFjvVev
cIeCnFR7++7OWrB6cNbGMzYaO4pnxWl1eFms1hfFPHJnYZVauHkXjMCDqeVr2d0CAK7RkZrZ+5Ev
ElcXEMW5jz1ynerxzMINKqzp7kvH58H96kkfRs8IUM8h4TCnyF/h1c87rGNPdbmjWG3gg6fnP/kU
+eoJeNpueqodOyGf7spHh8GaXchmSN24hmnfw0U+P+l3vWHd9UMAkci1uLj1m3yqroEEFYfeqPrl
fysomxb+9O5Vr1UwnQ+fCvZx1mAvMNqcxJWior3T3MkyO2cwfP3qX//94/988SaDh66y6dGyw6H7
6l3w0S42WVJeyJme31sYt9GO6oKVsTW9XG04+7pF8xcmw3L/DDfQPmuemhzWh6XbZD7Ei9zh8MsX
xuVCHMB3vE1/fuaiMbmEo5I9qPbTJ+govxAoh5Xs2jEc2A6dTVjLToNVKxxjVjxuOvz51ZaoP9zV
CRYb8S9fwgZ92GD3bO89+OWB3lXk8jeSnDPQVDEmAnZe4I+e/eorsj129X/le+I72ZH5yV/BWitL
hhwLr1RxuufwvR8e/Po95I//IT+h98dv1F4bgS3zQxShFSYRgcvxxCaZ+8hA+kgPf1LKOl/Hrnf2
P3xwFHcaFp9TevQ9L6zarabNv3wyCMeStF88WL58GT0ZxVg/u5bGz0I4w3dmrl+8K11aSe0LzIfT
iz5Acq6//X3+6UuMdb0algbfgp/e9ddQcYd1mnQBGuTNY9M9hoztvKVFnvC5+jv5XbqsTn0RzO9A
xMrpWjKiaO4GuPutT2Xcvd2ffwR1wx7pcXp92LygqkRQuXS+tD0CNshicIPYUUey/+Y/ff48bsQL
PYU/fxEId8E+w09tn6g6Pjv21fsQbp9+jI1yyYFwGCiEqVmr5JffrfF7ID88/vln7ny42hUkgYmx
yyvbgf345Pc+4wewXCZs+E5GKCprInz1yvwKMhH2tnOgPzxfHSXpf3hH5mjYue2ILwL45S2K0ynD
HPVBhr7+E73uUuXHT0U4ST33y1O0dad43m8/sXN4Cdrw1WuouSUu1U7ZPCwO6Wfwy4e/ed9AWnDJ
gPORNzTtQpp//UkVff1emhxdrZ5LKzyjG4lirJDKzrng00KY343rT8/X67YxSvjLT+yGbNni6ekN
/v2bCviPf/311//6TRg07b14fwcDpmKZ/u2/RwX+bfdvY5O933/GEMiYlcXf//zXBMLf3dA23fS/
p3d5qj+l9lV8xr//+UsQ/swa/D21U/b+f5//67vUf/zrPwEAAP//AwBedP4s4CAAAA==
headers:
CF-RAY:
- 996fc255fec1ed94-MXP
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 31 Oct 2025 02:35:27 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=pbtvo4SJtDJBflp9bAkwF2aOSGVwUv_1kk.LV5Z1BD8-1761878127-1.0.1.1-Lp8CDqx4ZF41xS5B7q3.TqbAczOcLsXkN.80bpc7MSmUHsJTo1Gi5tuYiz1LC7oWjWQZPhRE5g.z.NwEe_FQPowDCsvKZUUzuNNNL8T1BKE;
path=/; expires=Fri, 31-Oct-25 03:05:27 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=OmupBuWMOaSbKIkKtzxmkldESV9dhmGPizW9UT17JA4-1761878127991-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-allow-origin:
- '*'
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-model:
- text-embedding-3-small
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '175'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
via:
- envoy-router-568dcd8c65-kpd72
x-envoy-upstream-service-time:
- '386'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '10000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '9999972'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_99f475d50b2c411eace09a3706a27f7a
status:
code: 200
message: OK
- request:
body: '{"input":["Games (dice rolling)(Teaching Activity): Interactive play method
to engage children in learning addition through rolling dice and summing the
results."],"model":"text-embedding-3-small","encoding_format":"base64"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '224'
content-type:
- application/json
cookie:
- _cfuvid=OmupBuWMOaSbKIkKtzxmkldESV9dhmGPizW9UT17JA4-1761878127991-0.0.1.1-604800000;
__cf_bm=pbtvo4SJtDJBflp9bAkwF2aOSGVwUv_1kk.LV5Z1BD8-1761878127-1.0.1.1-Lp8CDqx4ZF41xS5B7q3.TqbAczOcLsXkN.80bpc7MSmUHsJTo1Gi5tuYiz1LC7oWjWQZPhRE5g.z.NwEe_FQPowDCsvKZUUzuNNNL8T1BKE
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.109.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.109.1
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.10
method: POST
uri: https://api.openai.com/v1/embeddings
response:
body:
string: !!binary |
H4sIAAAAAAAAA1SaW8+ySrulz79fMTNP6S+yk6qaZwgoWykERex0OoCIgIpsqoBaWf+9g+/K6u6T
|
||||
JxHxUWpzj2uMu/7jX3/99Xeb1UU+/v3PX3+/qmH8+3+s1+7pmP79z1//819//fXXX//x+/v/3Vm8
|
||||
s+J+rz7l7/bfm9XnXsx///MX/99X/u9N//z1d1/sejIF3zib59ulhDixErprslM2KHNmgTPyVHzX
|
||||
v0I0mw/Ugn2gi9g1lS+j0/EtIq27OAQoi5NN3PbjocUR9njHC2U2qwFsgZioMQ69C6un41RD4DqT
|
||||
70P3vImm2VFVtN2OJdZBqWaSmk0Qld9K9qfUukSjhJmP9vZg4hsZCnex80SD9vU2kS35HDJROOga
|
||||
dEt+pvl1tGv+gdQJiMmpote7OtfzOw0bcN6dLWp6j10kZj2uIE4uJ1wQhWOD4Vze6N18dXxiWcIm
|
||||
dTfFME7hDkf6MWILPD879FGLr19H3exO6Hk/QKPmC5xX4ymadlqtoOP5s/Pll3gFku4GJZLdVsWX
|
||||
x6zpwq2XDfA1nDdNAH+vibXPeDCeijvePyNcC1f4HeCuvWv4pFwaveWVyICTqVVY66mjM4/5Guye
|
||||
8gf7tXysZ/lzmtCVmS/qdpFeC6Kk+HC2JgPvjtojE6TAS5TyW7zIOGoiu9/Gq4+Ewbn7tY7v2ZLv
|
||||
nikIUvvgQzPwIokIbgEPnyqjun5kbL6NDw9WzFpwuI+Dmh+GYwev5nHwlyz0gCAF7wmm4UiwOmhW
|
||||
JtwGO1BaLqvo4xQ4mfB1OAVacwB8CSmeKz7LuVVYWDb4QTwSTXa7BIi16ZE+4kXq5z21HOTJ1KLq
|
||||
YJyAIOGAB5eb5VE/gTeXGKg7wGfvmfS8045AclL1jPavaYsj0aD1xCu2DHdcmPjSBzlgfnjFGRTl
|
||||
7kXN6qm4VPfuAQToPtF7/nz3xEdmgsZPtcGuL9XR94ruAUSZW1D3kSJ3DvZBCS8bw8KZD8RoRua9
|
||||
g9/ysPiLd2H9EuqpCrFWA+pCrOqkf9ga8od7Qs+HRx1N1qlRkJ47G/Jbv8v+og5QuXxu2E4GH4iO
|
||||
cCHwVOkF3oWPM5A+by2Hu5BuqJWKk87u7YYHNc99iCAdtUjc+Q0HbabeaSJGF5d/3Y0OIgPGNP92
|
||||
Dljn3wM48XvqKoJcL6MoL+h+VwJsfy0zmgu3m6AQX03qzuGhl5z0VqJqd6/xRU237O74Cr9dnM7D
|
||||
ASy7iAimOQFPHi1qrM/DPy+DBezxeMDGfSP280s/ekp1NVTq7aa2ZmH3CKGvvkrCtzBwl1drcTDr
|
||||
zlus67bKWLRzYviqSIt9jYtcwfEVEfamIlK933u9yA1dg67m/oB35l3v50szcxAn1xN23uip94VK
|
||||
O8jlS0rXegPGsg5LKJpQx/m1V3Wh6TeTsq6PdX352VTvzwrUjDOgOrUfPTs6TgCTZpviy3Aa2XSc
|
||||
eg5+S+2B9/ZWBvO63tCInj7Ozsmgz++U56CCK5F61xn2U/E6qciM+pLqoIkyamsKhHyXAlL1ZB+x
|
||||
ueF9lC/HE7Wv+OZOwUOJoUQXk17iOHYJjp4BDEr8JeQIvJpv8tsBzrYrUvcmPHuJRaWPwPXJU+Pw
|
||||
HEC/6XYDPHbJFdv1IGZDB/UBnvL9E6tP6+6y7yXh4UMCkOqidnBZKs0OEgbDwdFUaZFwG/oWXjZW
|
||||
Ru/7/SFbcPQNoSPsC+wXsRqRUcocEFDFJud2U2StuU0H+IlrlzqJaoJhE9gH0HL+1+dJHjHxS585
|
||||
+BreFqt2azO+NgIH3d5lTMN+VGvRNNoAFcLmSaauGUH/iKcQ7YXN4oudo/T0rOEDVIOOo3ucXBnT
|
||||
PcNAQhzw1HT1KRtm15hQYfM5TqVM1yUu7C1Avp5PH9KwuGNmZDkkm4uOg+6T67P77hq4rj98v1R1
|
||||
Lb5aC8J4f5PwweyNbJiYFCsPF9x9FrWfup88EMJqtCqa4ndSf+fbRoFvf7vBmVZs3Qlb2htpRgwI
|
||||
ktWLu1yrLoSSuBdxuL+4YM4zRQNqMDdU94+gn6CDCUyUxqH3TB/0GcTbM1qCLsc7Xkn0+Vf/FBDY
|
||||
9KcvxIdOC+Ht+sXGNgf9hEqFQH4KX9R5c3smzY3aIr4LLjQB9MVmQfqq0A0GF6uwluvpUH0GtGtD
|
||||
m163ctVP2Cw9mC8XidqR+3aXXMshmu7aTP3NJayXZ9kWv/+PPW8WdHZJShkpuM8JM3o/Y6AKVCgs
|
||||
RoozcVLZnO1NDYXbJsZZl4BoOTZCABtJb30hjkwwx2+swHDLMDZI6kQzpC8RFg++pw911uuJVbAC
|
||||
+aIlGB/9kvGXjfMG4+mlU2ObWL3A7EcOtOng03O8IWDpTNVA+kF18O1Iglo8NqkCd9r5iK98amUs
|
||||
2CocIBv/Tg1uO+vLhYkhvMFGxhaYnF4U9lsRVtfsga2XKDEWz54Hgxs80vjsBvqsBnWOuEpEZGaZ
|
||||
zKagfMmoEDoN7zbChy07v4EA3g536jFTzUQcJQbSpsjE1iQ+6omTQQtXPSOSa1z0eV1/2y7HIpH1
|
||||
m+vyRyonylwsETY4kNYMAa+Es60H1OPbr7ugwfLRy2v3NK69PJq/dsmhlrtVVHN4h83d86sAzbj5
|
||||
OEn5Qz2dJBZCVxRUnMGQZMzquVUvCob3D+bWM7NqGd753YGGy2hFyygNZ5ByyYgzKinR/HXOpeJ5
|
||||
E/SFB3P7sd92ImRhcfJBEGv1fEVSC1vNc2lsDrbL5qZW4S0oTeoXiQ3E+w2UcHg1DQ3eVRO928SR
|
||||
obBYIy6mGAMBkq+h0P4Q0/OxpGBiFV8C8q1HImqBnLFhb79BU4olNtf5mz9LVsI0fFk4b2wtEvb0
|
||||
FEMmzxuql/u9LlyackJ/xmtHdTa6bzahfIjO2LrEQ0SaoEvgwW0w9TzG9cvG2HkIE+eD8daqXclJ
|
||||
TyXYWJuFSNei0YftzdagZPMukQ1p30vD4VrCQOAPNB/HrTsX7q2BP/5MP7uzPpeNb0DrEuT0EROj
|
||||
l1pOT9HKe/S61qP58horePSrgohQa/QFIK6B2TlF1DIUBpi3hAUEV++CT1+p0r/hverQ4cM+2PiU
|
||||
G306PK0G6DOMCUL3rqdGXbXwbmkfbFiq5/LKbHMwKUQJW5E0ukOITwoK0tNA7+jBu6xQaQu5u7n1
|
||||
xYft68thkAt4J5iSNlJAtuDzxgdQz0Z6fG5KwI5OVEAwWidqGbKTtdPyTJB+uH6J7GcwIqXdBOj8
|
||||
yRENec/sO9dTJ9AmjzfFCT5mnYFAACEv69QmHV/TFi0+4qrOoP6RU8B4RVKHIkA7aidEzkjWaQta
|
||||
edpH+sdlfAeWEtX0JGKDyU3P+lFt4I8vtOqEWTtq04CMenKp/uPnDuoEhfVmS4/Oh/VTyVEF0olT
|
||||
8aHba4B9L8wH8WxI2K47j/HxNZ+UdT0Racw7l8n5h4dmdN6R4Hlg9SIJtzOMXo1AlNOuYdP506gQ
|
||||
6pFJVdXu2CKYeNp6HvnSw9md9P5ozwf4krvGZxfLrSfDuTfKjrufsEuFidFHPAVweHQfjF3+Fo16
|
||||
4nLKzRkmeuIVWWeBshlAGc0CtQDdM3YrdynMuvsDu+dU6OcnEHgYZ8ejP2FH0lkqIxk6tjbQ1Q9F
|
||||
tDI5CNfnX/mf1rN+XSZ4czaQOvl+U5NQV0U0fj6hL95GFHX04yrwefl0PmRe4gq3ISzAqvc0/7bP
|
||||
jD6uYYdwcpfWetv+4WnghcOWJgh/2RzsvwPk/deFbIU2BJO+jBPsMtGgRovOYFmuRx7qefSiWj59
|
||||
ovnsbTX4gYGPk0g+gUVpZBk4BLYr3+qReL+x8s/+tS63tKagevrwqFQI764Ir/UyymFvmZS8HzbR
|
||||
6fdSaQjsvAPWp9rUF4MzcvRnf1/Hb88UZ84R7S8VzsfxppMipQcoLFil94HTGRHvygK/E07I9nt8
|
||||
uIL7rt6o3zZ7Gt++XTSdpMqA0wKJvzmEni4dnaxQWPiWCCttedVHd4IPd5H85tBkusA/TR+eql3h
|
||||
gzne9sPtwMfwjOoHNfnsqBOAcg90+VEk8O6HGePPWgPFEYY0fVYXfaKZJm49byP485VjbOHpgwOf
|
||||
93fG9mRRl92evgM5+a0SZdkBsHBjYcFVv4ly0v3VHxv8luipTL0wvNdsOhIRnHdphrXRtyPp9vQt
|
||||
8PMn+HBJs+X1PRJ4+MwfaoYGZFTO9zlg3lzS/XA6sp+/QEQPQnrXew6822QMoZSaV3+uYKgLfjxU
|
||||
cOV5/0vWAR32yIIf9azT/EqZO8ZvU9mu9XSdD7sWm9OdgzMPbIxjYtQzVb8qjLMLw4eHTdx2GI4t
|
||||
OKD86i+Vv6t5w05EWN8OhND7+OkHw2YefCuzTLWLCPWuvZkdDO/bGF/2/NMlM1flSIWlSO/BIQRs
|
||||
QX4M9zY/0nX/rU82HZD8HGp6unLROv9JiiA9vUgxXVx3OQ3GG263+RkXRPi4TM2zGO6d/Rab2TPM
|
||||
WCpnFSi/H5cax0fvTm29fcPoxQj+8dHUgWeM1vpA4JVGLjMjzYOXW5b4snjhatZybgoMk3tRbRnb
|
||||
bHzpex+9Gwli7eeH5d0tgLI0LPh8qrbZ4sOXin71y3lzLzbNrjdBTE5Ham63n+h1G9IcikR94V+9
|
||||
mh6otn78ja+Ie7g0lWYLWZczpXnjTf38dfISCsv1TV1f0qOpGiMZjaj2/fLxRWzetZoGX7Jwx+5N
|
||||
6t1FEj8hbLXMwLid7vWkpVkLo1dvEbb6/X6vTBpa9Z9w6/jNB3PnoF9e4iy7DDD/uWhw9R8YrzzH
|
||||
4ukpAj+uc6qNJ6kf37fJh89L6GJtH1XRmOfcAlEpV/h8h1YkrPwIVbVMcPDc+4B/XlIOgp1/8EWo
|
||||
GfqkarvVT/vEp23/AmNXkVZBmV2s+9eJlp3vNPCjlhAfnZejMyrkHVx5htAFw3qUAiNFOy5IaOLc
|
||||
QsYn1CuhlGJEvcZ12TyGBx9cNh6hZ04y+9W/GPBVLcAHP74fhn0H0zafqYWOZTSlxY7A6PW1aGB/
|
||||
9vp8G+oO9HVr4/M2DV3GhXwMwTVo6U+PBK5QW1Tf8Nl/00/giktc56irXEZ4kpSMzQ30YbWLEdnW
|
||||
Qxx9u+rdwpRLR2xw25PO9mDnQds8nLA1TLW77HztDeISnjDeunN0J22rglV/Sd+drWzZ+o8DKB5L
|
||||
svJ/0v/qL6yF/dF/4ajUJ6JnB+gQrqVadgH6UqTUAOo72eHCyo+11JPjGewfAPtzk80R27JBU/Rc
|
||||
u9E9jkk025KugjJiAnl+lSDi7fZrwMTWn1S132omWGLbKitf+7sd1/St8z13sDtECmGloUZDH90s
|
||||
oKXWiRbf7MEmw+5U4KvnM00MiCJaUieFnNxG1A3RhjE1F8if8RM1n7ClkhMfPtz2hO9v9dUvz3Lb
|
||||
ge22+hAl3z9qNhY4gMLgT9TgZC+b+vNFBmD3kqlx38T1lFAiw4v+yKnWL0m2PJ9cqpyeU7j6O7kf
|
||||
k7MVo3GXONg/bztGo11WQV8tFKxH3UlneJu8lfGT3rA7n56M1Q8vhp7X6BTH9BuNu1ZTUW+KNXVO
|
||||
2nfl80BBK99SrR/LfvlOmxSKvCYQhPI9YLonddAeLwPex6G+6puXw6TgJZ/f5lk/51kpok1BBOwT
|
||||
wXRnZu1KCKl6o+F4bGq2B3DVp6EhUvVM9TV/qpDskghHukfduaveHaTz6YG9avD6hRuqN7JmQ8Mh
|
||||
IjygV4tXIXqamc9fd0mW2K0SgG96yvzFYW93se8viGZBp4R7xa1OzXDW4Po+3c8s09lemVTAvE7B
|
||||
u68yRbMa9DnkKhBgv7iRftlf6gPKD4ZHHy3h3cFulRDUt6viNysP/HjwDx/vKris+QmEkPcp+OUJ
|
||||
jNC3QsCqt9RjXqJPoyYTqM+yjo9r/X+/vscBdpl8xUf9/dHn81HzwC0AW4zX71vOTBtgvuCTzwmC
|
||||
z+aykv8rHwhPcxMtqlhDSDb37x9/xWu31gDhttRxIbBXtPLDtFXA84JdysxM/PEO1H3eZ0VLs+my
|
||||
cRq45svUsPQ0mo6zfwbH7mzgX55GJy+Q0SlbNH/TTvd+fGkVj5AoFj+/p5PmdOHgWj+otea7U4hP
|
||||
MrhbJ+ZfZErZr54i54Zf1LsSEs3MfhTwOVsGTrnbM6Lf+NLAT5y+qJWyHRCq6NUh3v9cqPN8HMCE
|
||||
rWhBax7s8598cJm39B08fMqMemHA6zSVBRlqk8+Toshkd9ZUXYX6wbhjTT5tspm0sw/7esjxo+1f
|
||||
TMCEa6Hr8E9yUmKLzXWXtFBLnZPPr/Vz9XcVPLiArOtdrWdM8lB5Xl4dPvZP3118hBMQlHudXhd4
|
||||
ZP3jKmjwamoMu3Y6sl9+Bqw5BP42IUm2rPoPNk3zWD9P9MURLoPcJvc3vlmenn3P3qyitR798fPz
|
||||
mseCbc0abPWnms3bTS1DN1gKHz03KpDOmmlAFSYBzne0Zn/2x/GcWP5t5S86eYny411/e8U3feDk
|
||||
WwDJN3jS8y9P2OyP3FYYbl96zIwAdLtj64OXTCTqfdtdRm2dlODHl/uHcNHn5AwU+Pt9++fp4P7h
|
||||
HaMWC2q/TK2WhGFU4OXmPfwNN9wAu1pWAV0RRNiuW/2PnwRZ50dYfdTPjG0Xudh+4PmJ1/oVjXX3
|
||||
5eUPTHgCq053+bXe/fQEW5ECou8ydbIYpO6BAEVI+hmSpwFXP4Vt65Tpo3p6aHBxpD11TbnMvmte
|
||||
AwJBPPhwN1n1IhzGVHmcpBRbzm0Br4dXxDDl4tQHK29NWsqpkHnNB+v6MQK02ED1l39iY3utGcl3
|
||||
yxv21p5i9XmIaqkfT9OPV3BR3EjNFKcMUT7cFH8O80MvSLhSoR+nIvWqcY7YI3kkIF+uEj2+Dbke
|
||||
omBfwKM/d9htpIUtknA6o8vmoWDLgPdsKe3sAOzRKHFMmx0YzWKSoX2NHOx28cwmTq4KWNgKJfyh
|
||||
tPTvrZ8MRVXfPbW1sdHX/F2Gv/2G4+Var/2dBuaLmvgtIE93mp0qgWuei534af386huB3SPER/17
|
||||
yRiuMSev/Im97/et99HOOcPH6e1jC5Cdu+6XDkLd8MnKv/0ciUfuz/PtKsVa873gjMxvLuJ7//T1
|
||||
STjN5E9/gq3j24fdIwBpWxv+86qoQDJqK4TbLS2p+QHPrH9c7QKuekMU9eVFy1pfULU7Ix96o+YK
|
||||
townMJnH7A+Pzb0Sp9ARzOIPX6z+U4R7e6sScc1DZ/djQHiDvUu2hyVjzEmtGNaCeSSkuPn1eDDT
|
||||
WJGfku0LQxjWc+FWE5Kl7oEPZ/vQ//QNqTCvMebaN6P1g5A//GKOjySaeOB3UBKWK/bcUGCzgzcN
|
||||
8DwBUw3173rpzC4HF/0s4nTN89f+TwjXvM2H/KeJ2LCvOegQ5UKPa56w/Pxel+8D6ozQcEVVtBvw
|
||||
6zfcl8Ls55fGa5B8M3P1I74+57nuweVd7df1yfp57X/BOOV2hDsruO7NKEp++Tt1so0GJFybEIq8
|
||||
nf2ZTzHUe2/ryZ3w2+/ueGdBg0h0d+nDpbuevY7TWVlf4zwMeHfxodMBe3QY+QptyKQ17wbu00TU
|
||||
ON6f9ejHjgUFNXPp0XkTNp+PwgTp/mDStb9R//qdcM0nqT5VWiZGmT1BrwKLDxXLYyO4isYvz8La
|
||||
Qo7uvExq9UcvvBDTaErvcgW7anf0JdcQ3BmSJYD9fXlQc7z7jNXFF4JwO2PswpNdi49ZVVBgHB1s
|
||||
3IFeC9tNOIBfXv3r97FU2lqQbK46UQ7aDsyrPihxpp2pVV5wtvYvD7+8EJ+31xos+8tJRi9P5DCW
|
||||
hkVf82kC4r3PYW/tJ7LqoHgwaSae3iL3oLMFTAlc8xfsNV5QMy4qBhCB1KUObqi7eMd9BTaN1K/j
|
||||
L7iLcHglSs3zO+p5UxDNzEoJ/KaOgA9KbQCe1YXy68/RW3GUMsaFJw5tlMLCHrNttiSKysPxQ2Ws
|
||||
X9wtWBjJfKBN1tNXpGKO2h9/hFsCCdo7hTtjoh+QwysvbOz20GULtBso2aKL988YgbdwGBMYCNyA
|
||||
9ZV/xcfM56g3+Zq6N/Gw9sfsCX7g3aS+9QCAaK+ng8rvy8W3yeyiBcCyQw7Ph6v++9mkzJEFl6Cx
|
||||
8am5p4w2olXCzAiuuHjFrcv4M+fAVf8xVvkZzGYhy2DNC/ya2pt+6oBSwfH00amWnVp3ahtngbK0
|
||||
7emf/Ikr1A6dJJHHquHd3J9eov1L8tZ8e1NPi/uxlJsjxdSRFS0SVUFO4N+/UwH/+a+//vpfvxMG
|
||||
7/ZevNaDAWMxj//+76MC/5b+PbzT1+vPMQQypGXx9z//dQLh72/fvr/j/x7bpvgMf//zlwj+nDX4
|
||||
e2zH9PX/Xv/X+lX/+a//AwAA//8DAPH/mWrgIAAA
headers:
CF-RAY:
- 996fcf368d2ced1a-MXP
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 31 Oct 2025 02:44:14 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-allow-origin:
- '*'
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-model:
- text-embedding-3-small
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '53'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
via:
- envoy-router-568dcd8c65-4dhs8
x-envoy-upstream-service-time:
- '81'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '10000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '9999963'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_457f533e12f84f9ab20f97d4416ea060
status:
code: 200
message: OK
version: 1

@@ -1966,4 +1966,556 @@ interactions:
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"Convert all responses into valid
JSON output."},{"role":"user","content":"Assess the quality of the task completed
based on the description, expected output, and actual results.\n\nTask Description:\nResearch
a topic to teach a kid aged 6 about math.\n\nExpected Output:\nA topic, explanation,
angle, and examples.\n\nActual Output:\nI now can give a great answer \nFinal
Answer: \n\n**Topic:** Introduction to Addition\n\n**Explanation:** \nAddition
is one of the simplest operations in math. It''s all about putting things together.
When we add, we combine two or more numbers to find out how many we have in
total. For a 6-year-old, it can be visualized as combining different groups
of objects. Here''s how we can teach it:\n\n1. **Basic Concept**: Explain that
addition means bringing two amounts together to get a new total. Use simple
language like, \"If you have 2 apples and I give you 3 more apples, how many
apples do you have now?\"\n\n2. **Visual Aids**: Use physical objects like blocks,
beads, or fruit. Show the child one group with 2 blocks and another group with
3 blocks. Next, combine them and count the total together.\n\n3. **Symbols of
Addition**: Introduce the plus sign (+) and the equals sign (=). For instance,
you can explain that \"2 + 3 = 5\" means that when adding 2 and 3 together,
they make 5.\n\n**Angle:** \nMake it fun and interactive! Use games and stories
to keep the child engaged. For example, you could create a story about a little
monster who collects candies. Every time he meets a friend, he adds more candies
to his pile. \n\n**Examples:**\n- **Using Blocks**: Start with 4 blocks. If
you add 2 more, how many blocks do you have? (4 + 2 = 6)\n- **Finger Counting**:
Have the child count fingers. Hold up 3 fingers on one hand and 2 on the other
hand. Ask, \"How many fingers are up?\" and help them see that when they count
all the fingers together, they get 5.\n- **Story Problem**: \"You have 1 toy
car, and your friend gives you 3 more toy cars. How many do you have now?\"
Write it down for them: 1 + 3 = ? And help them count to find the answer is
4.\n\nMake sure to encourage the child as they explore addition! Celebrate their
successes and provide help as needed. This will help foster a positive attitude
towards math, making it not just an academic subject, but an enjoyable activity.\n\nPlease
provide:\n- Bullet points suggestions to improve future similar tasks\n- A score
from 0 to 10 evaluating on completion, quality, and overall performance- Entities
extracted from the task output, if any, their type, description, and relationships"}],"model":"gpt-4.1-mini"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '2663'
content-type:
- application/json
cookie:
- _cfuvid=Dmt0PIbI1kDJNEjEUHh50Vv4Y9ZcWX6w2Uku_CIohuk-1751391377646-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.109.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.109.1
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.10
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAA8xXUW8cNw5+z68g5uVadG3EaS7xBehDrujhCrRIrnVQXG4LmytxZ1hrJEWU7OwF
|
||||
/u8HSjO7s03jFMgBrR9sYyhR/D5+JKV3DwA6tt0z6MyA2YzRnXz90+t/Pn39g3B83b/898X58FTO
|
||||
f+LzL199968vQ7fSHWHzC5k87zo1YYyOMgffzCYRZlKvZ0+fnJ0/PT979LdqGIMlp9v6mE8en56d
|
||||
jOz55NHDR389efj45OzxtH0IbEi6Z/CfBwAA7+pvDdRbets9g4er+ctIIthT92y/CKBLwemXDkVY
|
||||
MvrcrQ5GE3wmX2O/urr6RYJf+3drD7DueIwp3NBIPl9K6XsShSRrDURX6JpvvXHFEiAYR5gogeRU
|
||||
TC6JYLMDehsdG85uBw435Nj3QGgGiJgyfHYRIpsVfPM2OvSo3lfw3PeO9BsqifI5YIY8EGyoZ+/V
|
||||
wTYkIJQdsCWfecumbj1dd6tFWCHFkDATjCERWL6hJAQ0uQX2N8HdqDvL2y0l8hnEkMfEQSAkaDkV
|
||||
yAEMZkr6zw0mJgvsMyWSLIDeguKucUneOZJlGM+tBYRNYtqCD5kgeEAREtH1isoM7OxfBIq3lDQ5
|
||||
Vi0htTPQZL4heFMm6mswA5lrIN9jX1MDtqTZW1Zu2ffLGF4JgbCCTuDQ9wV70gNkCElhCakATKWk
|
||||
5mvKhKzABC9s6eAeU08ZsFjWHcACCE9OdoTpJDj7qwRUXSx0U/M2Fpd5JMsIyFZWIMUMgAI3bKkR
|
||||
jzHKCvKAGQx6aMVUkcZhJ2zQ7XOjHjeUFcaBkGUY36tAggfBLeUdZI71iBlYAwq3A3koNSfvHeH4
|
||||
mmDjgrmuOzeEVgBTKN7CLhTftxQmOtLfyxQUEKAHRT+qDsO2kcgjgSeyZDWdNWXNoMWgnwZyEW45
|
||||
D6DJ8DP7jkSOVf5ChdtAKRMRVcU1TLLFYA5JVHEZ2YX3NAIRDQEaE1LVnMayEGQiicFL1bOe9/Oq
|
||||
NQUxIZG2gPPpQyMRHV1uQ7rcm9fdRVWMXKtMbsm5KZOZbAM3tYyGe7WU3gqwNQEtr6qY6AhiLQfN
|
||||
zVzEK0DnQApn3DiqHBzpETSEqcFpFFUjilX95iGUfsjb4lbAVa21OXkTSppLS9ftCzH4U/g2gwnF
|
||||
WdgQTO3Rap+rPWZudhrIiDmz71eAcKtKa60j71QFi/C91X6JmXqmWt2tOxy3gznl607VnJmWLfhd
|
||||
+7O37hr5z61lDXmvFl2Qd3HKzfeYB3gRpwo4WmRJTOLYvs953KCwAUych5EyGwjzXgVkwrjhptPb
|
||||
oPKrdPgybihVVLVrQA4Z3enRYYlcq8GBY0P1czPere6FF10REO49fPbF5/dh/HE3boK7H2BbA0Va
|
||||
RbK3OlEIcOLwYyHPtkW0i6VkLz+SmPf9Xh5Q/EAxkfZoaXXYfu7mf38fXfSmoJsJ++r/RdhEVC0b
|
||||
PYBV3kk7R3FZp4l2vzcF/0QczsGF7Sew+fc6DT5A4st5gLyoA+R+Il+p5Orwk4KuTsTDTJjlp/2l
|
||||
T6HEuXOZUHw+mvJ/AKcX8xhBttrwPoHPf7DvKcHXE6wPELs/8ILM4PlNoY9xW++K1XebjzNvyjjC
|
||||
SHkIteAb+/zfP0nF74HmGegn8vtjDmkHuAlFu7DjnJ3eir3ovckE58hUWoyOG/qQsL853DjrFf1+
|
||||
9p+D6KmZXL3zH8ge8frAM2yLr5KuVOyH+P4e8ofm4dU0D9rNUvvZfHW6Nxf1srT2d2t/dXW1fGkl
|
||||
2hZBfe754tzCgN6H3CKoE3Cy3O1fdS70MYWN/Gprt2XPMlwmQgleX3CSQ+yq9e6BXtn09ViOHoRd
|
||||
TGGM+TKHa6rHPXk4vR67w6t1YX38ZLLW4X0wnD3aW448XlrS66YsXqCdQTOQPew9PFf1JREWhgcL
|
||||
3O/H81u+G3b2/e9xfzAYQ1H1ERNZrne7/wEAAP//wqasKBVUhONSBg9nsIOVilOLyjKTU+NLMlOL
|
||||
QHGRkpqWWJoD6WsrFVcWl6TmxkNKo4KiTEiHO60g3iTZyMLUMM3CzEiJq5YLAAAA//8DAJuHmHV/
|
||||
EAAA
headers:
CF-RAY:
- 996fc2662b62bab1-MXP
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 31 Oct 2025 02:35:39 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=GCnzowRk3BBtL4hThExLBaTMoCuPX2iDYiXpVFdUP00-1761878139-1.0.1.1-XLODyX0MQKS_p7.OT8NQGYtBAEoNV5jjkXr.7wBXtRTDsCzL487WWm2eDTtkhfUnOPLSw0b3ttpMmgPZc26O86CB2NAaNHDAENdlxghjQL8;
path=/; expires=Fri, 31-Oct-25 03:05:39 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=7F8vpJeCnAeLp1RRxe6VMVnO1Uwd.ucHtiVvA_sGMd0-1761878139796-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '9968'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '9983'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-project-tokens:
- '150000000'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-project-tokens:
- '149999367'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999365'
x-ratelimit-reset-project-tokens:
- 0s
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_2eaa8f840d3346c1ad94db49b3227441
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"Convert all responses into valid
JSON output."},{"role":"user","content":"Assess the quality of the task completed
based on the description, expected output, and actual results.\n\nTask Description:\nResearch
a topic to teach a kid aged 6 about math.\n\nExpected Output:\nA topic, explanation,
angle, and examples.\n\nActual Output:\nI now can give a great answer \nFinal
Answer: \n\n**Topic:** Introduction to Addition\n\n**Explanation:** \nAddition
is one of the simplest operations in math. It''s all about putting things together.
When we add, we combine two or more numbers to find out how many we have in
total. For a 6-year-old, it can be visualized as combining different groups
of objects. Here''s how we can teach it:\n\n1. **Basic Concept**: Explain that
addition means bringing two amounts together to get a new total. Use simple
language like, \"If you have 2 apples and I give you 3 more apples, how many
apples do you have now?\"\n\n2. **Visual Aids**: Use physical objects like blocks,
beads, or fruit. Show the child one group with 2 blocks and another group with
3 blocks. Next, combine them and count the total together.\n\n3. **Symbols of
Addition**: Introduce the plus sign (+) and the equals sign (=). For instance,
you can explain that \"2 + 3 = 5\" means that when adding 2 and 3 together,
they make 5.\n\n**Angle:** \nMake it fun and interactive! Use games and stories
to keep the child engaged. For example, you could create a story about a little
monster who collects candies. Every time he meets a friend, he adds more candies
to his pile. \n\n**Examples:**\n- **Using Blocks**: Start with 4 blocks. If
you add 2 more, how many blocks do you have? (4 + 2 = 6)\n- **Finger Counting**:
Have the child count fingers. Hold up 3 fingers on one hand and 2 on the other
hand. Ask, \"How many fingers are up?\" and help them see that when they count
all the fingers together, they get 5.\n- **Story Problem**: \"You have 1 toy
car, and your friend gives you 3 more toy cars. How many do you have now?\"
Write it down for them: 1 + 3 = ? And help them count to find the answer is
4.\n\nMake sure to encourage the child as they explore addition! Celebrate their
successes and provide help as needed. This will help foster a positive attitude
towards math, making it not just an academic subject, but an enjoyable activity.\n\nPlease
provide:\n- Bullet points suggestions to improve future similar tasks\n- A score
from 0 to 10 evaluating on completion, quality, and overall performance- Entities
extracted from the task output, if any, their type, description, and relationships"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"$defs":{"Entity":{"properties":{"name":{"description":"The
name of the entity.","title":"Name","type":"string"},"type":{"description":"The
type of the entity.","title":"Type","type":"string"},"description":{"description":"Description
of the entity.","title":"Description","type":"string"},"relationships":{"description":"Relationships
of the entity.","items":{"type":"string"},"title":"Relationships","type":"array"}},"required":["name","type","description","relationships"],"title":"Entity","type":"object","additionalProperties":false}},"properties":{"suggestions":{"description":"Suggestions
to improve future similar tasks.","items":{"type":"string"},"title":"Suggestions","type":"array"},"quality":{"description":"A
score from 0 to 10 evaluating on completion, quality, and overall performance,
all taking into account the task description, expected output, and the result
of the task.","title":"Quality","type":"number"},"entities":{"description":"Entities
extracted from the task output.","items":{"$ref":"#/$defs/Entity"},"title":"Entities","type":"array"}},"required":["suggestions","quality","entities"],"title":"TaskEvaluation","type":"object","additionalProperties":false},"name":"TaskEvaluation","strict":true}},"stream":false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '3969'
content-type:
- application/json
cookie:
- _cfuvid=7F8vpJeCnAeLp1RRxe6VMVnO1Uwd.ucHtiVvA_sGMd0-1761878139796-0.0.1.1-604800000;
__cf_bm=GCnzowRk3BBtL4hThExLBaTMoCuPX2iDYiXpVFdUP00-1761878139-1.0.1.1-XLODyX0MQKS_p7.OT8NQGYtBAEoNV5jjkXr.7wBXtRTDsCzL487WWm2eDTtkhfUnOPLSw0b3ttpMmgPZc26O86CB2NAaNHDAENdlxghjQL8
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.109.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-helper-method:
- chat.completions.parse
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.109.1
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.10
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//nFZRb9tGDH7PryDuacXsoE7TNDOwh6wo0D5sKdpsA1YHBn2iJC6nu9vx
|
||||
5MQN8t8HnmzL6dKh24sAHXkfv4+kSN0fARiuzByMbTHbLrrp69//ePuOX6yv3n4In/nszeXLXy/X
|
||||
73+Z3X64+KkxE70RVn+SzbtbxzZ00VHm4AezTYSZFHX26mx2/up8dvq8GLpQkdNrTczT0+PZtGPP
|
||||
05PnJy+nz0+ns9Pt9TawJTFz+HQEAHBfnkrUV3Rn5lDAyklHItiQme+dAEwKTk8MirBk9NlMRqMN
|
||||
PpMv3O8XRvqmIVHmsjDzTwvzzlvXVwQIQlbP4ZZzCzHoJUYHf/Vbf8gtgW3ZVdBx02ZAuQH0Fdzi
|
||||
RiAHQC+3lNSt09cV5UwJ0Ge2HDETOMLk2TdgW3SOfENyvDCTQiKkGJI6db3L3FHFCAdkwfENwZor
|
||||
CgIhAcYoID1nXDmCOqSBWSKvoROxr0OyVDjnENkOgd6noBiAHhS406qBhisQhLaFrE8liTbzmvNG
|
||||
AVtyEaJDXwAdiQQ/IF5UFSCsElMNdKcuWLKoiWw3I1puWQYmwALcxZC0UiUuwtl0Q5imwVUareQJ
|
||||
Oswt1KH3VUFEB3LDzm1T9jp44YoScCmghlhjYhyyxR7oDrVJtWyYwaLWIgeoeE1JCEpb3OWSTPaZ
|
||||
Eom+1Ps8Hi/M9WRh/urRcd4szPyHycJoT2Sm0jv3C+Oxo4WZlyywRi7U8iYOpz+rgstICfe2isQm
|
||||
jsP7fGEuPWlQzapw4Zsh7G4UHZqGCbBfB7cuvRO6FZcuyrdB2XchEfi+W1EqbViz15LkkNENuUrk
|
||||
BryW47btf2Pp0QFyJcXl46ZbBVcSgIda3myzuDDXD5NDxVuAix3AXvTVruJXIbgnNb9vN8IWHQxT
|
||||
ZdvbKxfsjUxgRVjJRIXVqecMvVC1b8F1icqfac+yfHhD0b4qdqzOFyIOVP9LBbdeT2q5agmi6wWE
|
||||
Gw/fff+sTATSrtmd/fhsryFRTCTk80h/LPZ/p3/hG0dfyf5FjCmgbZ9kfQEd5TYUTuULHfmwB4S6
|
||||
90VH+TDKHCCdctCLQjfYkRS75JCYStfdEMWDCUm+wYb+R0nGhjuU9c65XrImak3wyOVLaR8jWa51
|
||||
ynhdBZZkn/0ynlTfqHU3OgZhuxas2TeUwIbeZ/bNZK91AzGFlaPu24p1/XC4iBLVvaBuQ987d2BA
|
||||
70MecHQFXm8tD/ul50KjceWLq6Zmz9IuE6EErwtOcoimWB+OAK7Lcu0f7UsTU+hiXuZwQyXc+Yvt
|
||||
cjXjUh+tJ+dnW2sZJ6NhNpvtLI8QlxVlZCcHC9pYtC1V491xm2NfcTgwHB3o/iefp7AH7eybb4Ef
|
||||
DdZSzFQtY6KK7WPNo1sinU9fc9vnuRA2QmnNlpaZKWktKqqxd8OviJGNZOqWQ2PFxMP/SB2Xp/bk
|
||||
/OWsPj87MUcPR38DAAD//wMAZsj3gp4JAAA=
headers:
CF-RAY:
- 996fc2a68c7dbab1-MXP
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 31 Oct 2025 02:35:43 GMT
Server:
- cloudflare
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '3333'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '3358'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-project-tokens:
- '150000000'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-project-tokens:
- '149999367'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999365'
x-ratelimit-reset-project-tokens:
- 0s
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_e56794132fe345d0bfcee4804d1ea766
status:
code: 200
message: OK
- request:
body: '{"input":["Examples(Illustrative Examples): Specific instances used to
explain addition including using blocks, finger counting, and story problems."],"model":"text-embedding-3-small","encoding_format":"base64"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '211'
content-type:
- application/json
cookie:
- _cfuvid=vLbBcLMQoKUtOCugPnUg_H9aADRheAVHbrMDJqmikBA-1751391361577-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.109.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.109.1
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.10
method: POST
uri: https://api.openai.com/v1/embeddings
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//VJpJ06rMloXn3684cabWDekkkzOjkx4SAVErKioAEelEugTyxv3vFfp+
|
||||
catq4gAxCJO991rPyvznX79+/e7SKs+m339+/W7Kcfr9H59r92RKfv/59Z9//fr169c/v5//7868
|
||||
TfP7vXwV39u/X5ave77+/vOL+veV/73pz6/fdyxv3psFZ0DxAtfBvUZsb1kMVBHXFmeob+cC2VeY
|
||||
kPp5fi9Q4d4TeojUWK1zDDZYZZKIratE22vyuBqC3N4gstfiYndBZXRwgozsHfbOrGzWY+rgGdU0
|
||||
utLFLdzwRCghHC8AKdAewbYX3xxY/ZuAtPJiVoyFnhq4QqQjy92/bUJJrwCw/WPAWkrfFNLspUJo
|
||||
yzpGruRdCHF6sEDrvT69YmQvIXaJDEHUFQsOVE0e2DG7MGCrexE9nBiF2+Ww7QSp9xXsW89nyMZO
|
||||
kUHz3CbeSEBYLRmkLGEO+L232XqXzqHrBJA6SEcUDq8lXLw88oTBOb+R5wAxpez3ogqWv2jI9+Vl
|
||||
GI3z3YBCnt+xFO/eZLXFxRN8NL2RL4tqRWFD5mBzjFlkpKlLaHCUfQHTxoICWzdSBh/XGvr3LfSG
|
||||
xQ8GsgevTsCLkeM7PnLK8jBzGZpgiJG7agNZwWpq8HSHJ+SRZ5euKTvwwq2/BPMBZGK6nC/jAhvy
|
||||
kLEysHLFPIhSgDD29zg3EqWi2cxXBbVdOZS6vRUyPKATwceUhqT7HgHqojO+IOTZHWUyAfZK0s6B
|
||||
914ucOI8gpQ1epuHNXeKsXSg1JS6GVcNXD0VIhVqRbidjEMHnPsR4ltjAHt0es+CDPJybztqEWC5
|
||||
/HaFe2k54LTHVbi8tMoSLq0yY1T3t3RzKbEWbsytR4/9zgek2ZslvBnhwyPU7UKWNHv58Ob5PDal
|
||||
UqwW26UpoZmuR3zNoKCsKwh9oaBb47ueFZ2kYS3YCl9h0Zf9YWn6YAPSCY3ejtmKahEdURaU49PH
|
||||
ViqV1ZAdGgcW4uOGPRgign3lncBiX484AscdIQ1QIUTVECDjvWOqiandHI5rd5nBzZmUCZ49Bg5M
|
||||
PiDt3vLD5lJGC69IO2LFCMSBOVFcxmPKfGHz8RrAerovidBHDo/swg1t4l/KUugM2sfytx7b7lJA
|
||||
jowSThrFqxhoSJ7gXk8derx6jVAPOmMg1qQMyRySbGovE0e4A93y6HGnVVs/2TuwSMhAdjT4IXv1
|
||||
iSpcJjHAibdPQ1Y+yCWM4lzAysFqATG3UIXQ9F3sEualTPXThdBpTBsp90YdmPjWasLdFhiPL4JH
|
||||
tXrrZRZGo8yxHHaNvZGkrPmTxjk4ABimZKTOhTDXu23mXm5JOhAgHnYQMug87rRhvQ8HCKWr5yNd
|
||||
zGh7qlS6Fxh4LtD1yF2qhXbdBGYOkebdKOY2K58MH/KbaM00ux9TopwDjg8cLfOWw8EnVBOf+m//
|
||||
YemYkmFL5ocKz9w8IyuXu2qJK67gndaIPOYY+AOhDyslHE+cghPV0wFjEn/mTyrgvTt1URXmwJab
|
||||
oNjFgPP2MQNC3cYdfHndhP1czUK6buQCxi9KxsmnfsiBmSwQZdkVP159C9ZrqCYQ1V2Kz93dJIx4
|
||||
5A0YSCpGl/tapNvCKCL4zptjORohw4fEECzARtg5M0a6rM65FGBwemOPL+dwe2VTDaYMN3Mjt+90
|
||||
VreCgcpuMBC6GmpITY92A4maRchmn321SZazgHPcA6xNBydcYHl1wI29kvkgjJdwSNxrBAb+TnlF
|
||||
cVwUwsiSJjyPw3vepsmv6MI/FYJ6Go/Y37prNbXnSBbETEqRTGbVHlHQB3CcmwT7vFnZpNNnCt5X
|
||||
rcUOoqNqu9C7BOa1q6Ds3DLK4sgKJ9isJOKgGtSKZsU7A59SXyAn0vbpdOG5GjJC9Pbsi30ISQgt
|
||||
Eda7YPXoR+rbQ5veOJgxnIKCz/O3yj5YwgpKHd+b2rRJ2IkcDM8nBj0Ejx7WurFK6D58B9viNa8+
|
||||
/VRCT0sJUnjrZrOVRQq+eSW9t2+TMl2Ghz/DSyk33kxOsb09dHUUPvMGSZYjV5RYSJYQbdX5p99X
|
||||
/91HUEOKiuQ+49KFM8te0JvTHocRRVfffhAoXvbmulwjwlR4x/DzW1BRPhURwGGtRoLCDRP2JNNO
|
||||
qbjiStitexPpirOk0/3CUYKljA3WstZIF0/LS+gUxgEFy64ja2gdI2g8Mwofa7hPp0G7J3Csawal
|
||||
ynW0x/PQ8FA6uSNK7B0dzojKO7iCQkdoHkawxfe+h7uwn5F2DJZqlVjJERbdqWYUX2TCaBvDw2HN
|
||||
b0gfo7fCLIwtQp4VM+ykr3e1KugsQzc/KfjyDiuFiAuJBD8mGLk2skPm239hebl49zJcwHKNZFmA
|
||||
7PU60zjH1fZ9H4wfmh59S+CwffyAUDg5QoZ7kxXaM+8ZnN6rhhX9tLeXlzZYUCiKE05e5E2WgBwd
|
||||
QXk2HdZdD4aDOecFPFngPK88FQ/jSeMi4V5JEBnejQoJaHcdvJpqj6+1ew4pubtGQqLmERJ3iTv8
|
||||
1Os6Pe7zzrDWcNnxFeSZLr7gfDspgH14hids6rPC8SFFyuKV1w1eAdphZe6YEN8mJRG2oHXRqSyc
|
||||
ipRxk//U4y2/vNLNqwMV3FCgYIsc+bCvIzES7jqcPKGjn+l45/Y1vxwuLHLc8zCQOeozOJ8ODLr5
|
||||
RzMlRRBQcMvqFUnNqpLPehhQGV9nj9KzGoy4EGthF3YzVqRCIFulU7VABPY2O8nlqDCq1stwt8dH
|
||||
ZLqqktLP8c3wL398zhxV6PYYpHoh+G9qmPfm+Z0yZTiXsIyZxdvOZpQuK/Q5QX/6DDoGb4Mwd5nX
|
||||
oNUv/dzyoCGEHCsKioHMztRcmYCRSjkTrJVZsIds1t4uBx5CZapzpL2vePjoryhwZr3/6J2RsnNU
|
||||
ZoIUeDTyzL5T5mBkI/iZz0hvNm4gRiwvcLHdcX4d3dpe6hyJ8OOXZqC963Bp5GshUCU0cS5UUfrq
|
||||
9JmBMjoznrAwt5/6gtSR67wXc7RC9njvVegLLIWdmy4OjMSLlnCdHB89InZUllnhAqHhHhFSOdr4
|
||||
rF8pw/RTZyRWeXsl4UsF0tGOPEHZERunZrgJYHX26HxkpWprHORBv6ZuXvHSaLAW1MPhE6F2UHCM
|
||||
X/ZbIk7HQ5j0P/3Nxtkkw/VyfeLLIS6HuQj9HMwhb3jkYGmEeSpPCNVxDZEsLL6ydC0nwhx7C1YZ
|
||||
KkqHs3oKgC46Dr7lOq28lOnaQrA5Bopv50XZ/K6eBeYR7mfmkGIbT2NhCXdG5fB9ZxzTtQ9QDKnX
|
||||
LsVSs9Zk7Q6ggPmMXl4VhlpKUerOAdiVRWyzp2s4BHjnw3nexzP/0b9JjKlYaNrbE6mjIgI2XXMZ
|
||||
GsV98JbiXKbkoSEVJCiTsAe2gKwBb4yAMVwe58hmFXyw2gjetusTy1prD8t+5kVAvWCKPFjJytav
|
||||
Yim8bhPEcpY41fbuDhFUutr53D9UIypnCrTPxkVScJ/sZVe2AczczULysjMITT02BpprvyHxo2dE
|
||||
HJkNnMmcYceKjhXm5juEyqFjsDHjplqCh6p99RV77cMjuB3iDBp+YiFX9I7VGuZaBnjpMGFPINpA
|
||||
yFp28NBSOTqHb6CQsb5CyOXNCV8+86k7qswG4JoI3vT5PfPV7+c4zVi7t8lAe3QWwHigbWyIOpMu
|
||||
nSVskJgTwceLdazW07oWUOdigBXq3dudcbzWkFPrCbnyrgBLIuYQAt84YxUTL6Uu+s6H56yPkU7R
|
||||
DVnzzM4BwLbtgXpX2ltoohE890uPDfAcwSSsD4uXL7cjMluutjf+Bf1vv3147T1g+s7W8GgyFHK9
|
||||
oqpGQ3hcQeTfS49sZkFWcqk6oaBrA4uYrcFKhnqBhZZRKIPCFLbROcwFqQ++eqHY1EnjYohOrwcW
|
||||
J90cWErCAWTZp4O/errG8csQ7pUCPZZqSDUGldjDKZsaj+irMiydRS8/fh05MQ6nlc1naJql7NHq
|
||||
xQLr5I0tTydHjI8Z0JWRbhQNplEX4JBPhnSN+uBvvtHfykK2/ViMwnV1E2wT0SXrYkuUoPoz9Ba3
|
||||
1FMiP1XqIPQF7X38c8qYyj2Cn3r0GJe4KcUfoAZfD3zB+kHz0/V6MXagUncAKdP1RKiz3nmwPgQP
|
||||
bKjcMWUDzPhQZKw7lozRJou5ThmsKd9F2vh+gUGfxUzIPUvziqC8DThz5wQQlt1hF16zcIpjbMB3
|
||||
qBCv9akwbQ7UwIA9z63IUw7aMOZ3SRNG2KtIPnGZQvRZzAVXcivsaaZQfecrQN1QI3HPq+H09F81
|
||||
BMCtkPjxg/NSSjz03OsFaw9tTdtkvqjw8GQcb0dOsbJNQhnBQ0OfkZLTpr1QCy3C5pDf8SnDQ4ir
|
||||
cvWERowaHFP8lm5Ke9cgjjQNa2zvVttoZrEwXw4nZDLpXiHySQzALY1S7G7xki4N/4iAzYkmSsgx
|
||||
SbEuS73Aqe00k46l7Y6Vwgwe133t0fSjSpunj1sg7J47D370cxnBW4a7YGg8uoIj2RQ+977vZx5S
|
||||
2KVbwviecHeh4dWOQNnr5NU1jFlQzeCV2/byqW9+cTwa60dXVSjtbThwAtTBaxmLAVto6iO8KaHr
|
||||
kQulKGNadwm8L8f7R28OQ+fErgi7lHNwhIkXkgZpBjjR8gVrTshUy2neamFUSgd5EbxWCyul+Y9/
|
||||
v6UaUoj7kJhvvSL5nuwA3pqYgxFKWCxpoA/Jd16VNFTx1U86u6sZdYTv4Sj98APzIHYBd07dYQvq
|
||||
kfIzH5Y56Dz6sLhknEPIwOBlD8hFi6Gwuix1P/7i5//apy0ReCjP8/txuVVEaS75N+9A4id/oU/H
|
||||
bYYW3iA+Bu+O4OF69ODGDebPvKLyu6lCoX4m2BpUWVmOxUGEt+YsIq/FkkJmWM8gsq4NPsqtGc79
|
||||
TaaERx6rWIdMrlD6U9dAQ2U2Ppnjzv7yHA/xxUSmvDX2MqSZAbAuNn/Pp/Xp+Fz8YmSvkbStGtO6
|
||||
SARPTnp0/PAmObRdC2ZuD+agdqSBZYJjB7lt5pDymSfsca0c6BTWAVuf/l79ptmg0JQVEqPKTXF2
|
||||
Kndwza4EBU2mKYx71WK4O+AeycdYVxh628UAuPp1buSYVcZrrbVf/cTaUz9+13/39ZfIFq+7Ydnx
|
||||
ww6crMN5JoHyJD/+dAhTjMXl+iSL6BgiWGa/Qxc3n6stvpe94PHPGxJ9eak2ZjA+7uHp49gJ44qy
|
||||
T/wVvp/qHafRsISrIcZXyFm1gWVhWextauoOvNTe9Kr+MoBRrtMNAkWlkB4iTKbrIGswumQaDqdO
|
||||
VbZWPVGwPFE5tt71EK7SQykF7IjsfDLHXMFckjHwkCwadtLLJVy2aJzBuAweMlSuSb/zhtda3fK2
|
||||
zZvChbthB/L53vXq98aD1X+XkfDRB2wlz92XP1XB6c071ofqBjbJk2IYThcVK9m5+Pg9XYbPw4rQ
|
||||
8XRT0w8f5HCUnwq6nc4uaPW93IOnPagz/RjKkHz9R8weKnQ8809l6ycFCtRBOXqvm7imbyR1mvDh
|
||||
LY+hrXrYBN9PAMkWjK0ieAzMu6Y2ARZxiN2dcQynqS19uJNbFTkSMcDyvkglvKl+7JXGZVL6r1/+
|
||||
5BHYrnOh2mLzWUDXcHRkPi6Hgbwr1oJJW+rocXnpKeOLZQYJx+rYfuWDvdjCwQH1dgJI88hjWA9G
|
||||
IwohnYTIbWpTmQNV0ACWJGuGEunAmoIuE75+L6Kje7WZaSoerptXYe3LJxHIErDG6RVZx3wMf+qF
|
||||
FYoXCj98gBfXtuC9lo5YpWv0s97Ch1e8EQpTuqxcFAt7mn/88Ng3X4KffvIORkiF60IcDWQL12HJ
|
||||
csqBuKkbwHpANEa+drdJ5S2qQJ/UyFsP9WZvJ2rJYQIbZeYIvgDWD5sdZCzP8ZijvAsnVRlU3nmo
|
||||
3oxr3q7IcpAgdCP/gd1EblPSvBQKrH4qfPNiZWWvaXlgqvg4s1PtDqu7mh3wqOcOaUFkKUQ+WAVw
|
||||
b8E671oEhxYudg4zCqw/fD8LXNAJAKAKmcJ4SdfUnlRwWuQjNvOLnlI1LwUHzmoN7Ozvcbh8/CVP
|
||||
Z5qErHdth+ThiZ7QnfkVI+NYkzFiLwz88DNWg92rWp6nyfvy4byvzN5e3aPFwIzhFezs7qo90ZHB
|
||||
C0n4sJGiiFS1unexgPblMCKFurFkOrKrxn/zWPnEQWXcz5sI88D2Z8g6AyBBSlvwOeIZHVmcKPRR
|
||||
F0ZomaP4M09Zcn8WQuuXFZJgwylTmHs5NF+di8NYTezVytoZStg/I+eiOWTV8B6CxMs4fBX1OFy0
|
||||
k1WCO6tKyLg7KKTnHvKgvjgjdvpEHRbtLTqC8zRkj9kOeUW4LSl/eO1RHlgyLotrQAFXOvrq3Zqk
|
||||
aQ2CvXdAGsUH4cf/z3BPHWJkUQwgPeVxPdh31A6bTPpQ+rZ7lPBNLB9p1cuzZ63kY9CB/TTPB5mt
|
||||
PnnN+PVHH//mk/VTPzDorB+eq5hgZGNIlTtz3m0nhSy7XVdALVAlnFq3QPnkTxTom1ONTAGaA8Wf
|
||||
VgqGVVwgb5PWYQ6f7hXim3rF5qkRbeo08+2XDzz6Pc3h+7y9rzy7FfyX/9MpITcffvNR6pOH//jf
|
||||
/f6gYLM/wJC0SuTAb74RMRSVbqZ+4mFqv/ivHxhGubvG8EltwjcPqXB2aDyALi3/4TEVsOtT9WFA
|
||||
uQ5STbdNl7diXKEx5k/smOpTweT+LMGjSSxsr4uZ0q/74EBGdVZs77ezsgDZjuB0n+e5zHl6wA9N
|
||||
1yB/3W/zum+aqvvwHezqQ4eUjj0rC6XuPOjRlY7Na9vay8MNWkisEeLrh4+2xLolEAVth52nwZHh
|
||||
LEbjD49aPYXJ6jfTAj55C7oN5jyQok13/Jf/fcF624TLT1fBOi9XFN9bvqq//v7TDx4PL2248hyT
|
||||
Q+loRl8+IfjhiY5AV4aOLsyxD7fRjGJYVvwJW1l5AuvTxzX8+oFv3rDSkcgLh4FZvOBh6WRLtIzi
|
||||
O7hjZmKVl3A9EljAtyVeUCYRg2xauUUCnakSSnebDJav32ueKfImeZ4rclZvPmTSgPmuf7hJjF1D
|
||||
sdKfWA8fpb3UnpWDuKBzpA5lRVpf7HMY7NAy75B9sYmrq5bwCOML0sSVEJI8T7lQBXI2UzRV2ttn
|
||||
PwZ+9ZJE1LkiTrv34Gc/C2tO1ZFF7ebkxz879VtJ16IOum9ejFRM5nANLTeGtzg2kbXXpWrpvC0G
|
||||
L2MIsOMcdWW7Bd5y+PAR8oocpJu4cjtI2ZyCr+9dXA3KOxKBeBdrlLyjm9IqfOzBwFGzDz93YN7a
|
||||
qIRXVyfecHNcBQf20n7zDWSbpjvMGr/IQmo3PNZoq67Iamo19Nzkgo8nc/vkFYsM4fs8Y72vXsNG
|
||||
ts7/9v+84oCAZfUDFXYuW3r9i5iABTK3gx99Q4+bwVQrHRkc4G87gKXIm4dvHg6Hx7X55onD/NCd
|
||||
kT9dIONxdwelK8VyBaD2YoEUwJfK+lrrGEb+o8TSeW7SLnX6kf/4cWRrbzWluwMo4eG2CMgI+JP9
|
||||
zR/gNTIP2KZW/EnxYwj7xDlj6eN/NlY/ReDMjTNGbSL/5OuQGvb1Dy9tN9lyoNE+NG/j0NNm8cIv
|
||||
0DzXCVbO6guM+sFZ4F5eZGS+D709fvdzXuUWYaksxoG8AfzRO+x4Z4H07tWL4bv97B+sEQgXp9cs
|
||||
IYsY/ye/XS46E8DGupjILStFoc/JMMJg8RSsPtey2hjkjeCblzyuhpoygpNSMMfOghLVe4GVKH0B
|
||||
tUxt5q3PruHm+y8ZFrumQvk3D3dCMgrPElvz7uz0YD3avA8+vPj1O8OWaBED0TIT7MZPANaoTxz+
|
||||
Mz+9ddru9iJodwgbLRaRIhV3UpvrlH95CZm3lguxOecl9HBuzVitk5S+D+tOWPiz+NnvZcP+ibuF
|
||||
Z5Vbi775Ac3UbgaBrgtz31bHtKtb0RKocTchu3+s9uoLQw6/PGV6Kj0sDzOWoTjrHjKoQldom9uL
|
||||
sBHjZt5bZ1CRm6BDyJFZwpJqY2U564Xz5dmPPo7kpz9Zrjyj7KO/m0muI0S4N2aGs62Bjal+BOGS
|
||||
SMjdYj8ke6WjgNNaEfrM/7QvXqdWeFufelv8bSAg5rXD3d4zyGSAUlHlnYPg44ewdd+YlHzzkFsa
|
||||
p/N3f305p89YsC9g9LYsGQfmW1+F9RCw8uGH1b0pUPjkcdihRhn0ezj58Pf3VMC//vr167++Jwza
|
||||
7p43n4MBU75O//j3UYF/sP8Y26Rpfo4hzGNS5L///H0C4fd76Nr39N9TV+ev8fefXwz3c9bg99RN
|
||||
SfN/r//1edS//vofAAAA//8DADmCS/LgIAAA
headers:
CF-RAY:
- 996fc2bdfba74c68-MXP
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 31 Oct 2025 02:35:43 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=.S.0fRlkXW0.BA2KmS9TTms9JPq5SLnXktKdk0f0xho-1761878143-1.0.1.1-FYQzY6Kr.UXjSIXdnFMpEPUn.35ba4Hk8i16kCdAKgJwCLZiQAN8v9XzelGaNBPwPS9rIX_MqRctKhBDHgbMD_f_8fk0YOHhnCFfbGi56A8;
path=/; expires=Fri, 31-Oct-25 03:05:43 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=fhKPFQ8oWIy28FS88b8siDfqJGAFSqTpIwMwmdY2q_s-1761878143910-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-allow-origin:
- '*'
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-model:
- text-embedding-3-small
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '94'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
via:
- envoy-router-568dcd8c65-k74d4
x-envoy-upstream-service-time:
- '122'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '10000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '9999966'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_d2945ad09e9c45ca99f752686167bb97
status:
code: 200
message: OK
version: 1

@@ -204,4 +204,235 @@ interactions:
- req_d24b98d762df8198d3d365639be80fe4
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages":[{"role":"system","content":"Please convert the following text
into valid JSON.\n\nOutput ONLY the valid JSON and nothing else.\n\nThe JSON
must follow this schema exactly:\n```json\n{\n score: int\n}\n```"},{"role":"user","content":"4"}],"model":"gpt-4.1-mini"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '277'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.109.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.109.1
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.10
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAA4yST2+cMBDF73wKa85LBITsEm5VKvWUQ0/9RwReM7BOzdi1TbXVar97ZdgspE2l
|
||||
XjjMb97w3nhOEWMgWygZiAP3YjAqfvj0+ajl7iM9dI9ff2w5T+Rx3+fvvjx+OL6HTVDo/TMK/6K6
|
||||
EXowCr3UNGNhkXsMU9PdNi12t0lRTGDQLaog642P85s0HiTJOEuyuzjJ4zS/yA9aCnRQsm8RY4yd
|
||||
pm8wSi0eoWTJ5qUyoHO8RyivTYyB1SpUgDsnnefkYbNAockjTd6bpnl2mio6VRRYBU5oixWULK/o
|
||||
XFHTNGupxW50PPinUakV4ETa85B/Mv10IeerTaV7Y/Xe/SGFTpJ0h9oid5qCJee1gYmeI8aepnWM
|
||||
rxKCsXowvvb6O06/y+/ncbC8wgLT2wv02nO11LfZ5o1pdYueS+VW6wTBxQHbRbnsno+t1CsQrTL/
|
||||
beat2XNuSf3/jF+AEGg8trWx2ErxOvDSZjHc6L/arjueDIND+1MKrL1EG96hxY6Paj4ccL+cx6Hu
|
||||
JPVojZXz9XSmzkVW3KVdsc0gOke/AQAA//8DAILgqohMAwAA
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 996f4750dfd259cb-MXP
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Fri, 31 Oct 2025 01:11:28 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=NFLqe8oMW.d350lBeNJ9PQDQM.Rj0B9eCRBNNKM18qg-1761873088-1.0.1.1-Ipgawg95icfLAihgKfper9rYrjt3ZrKVSv_9lKRqJzx.FBfkZrcDqSW3Zt7TiktUIOSgO9JpX3Ia3Fu9g3DMTwWpaGJtoOj3u0I2USV9.qQ;
|
||||
path=/; expires=Fri, 31-Oct-25 01:41:28 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=dQQqd3jb3DFD.LOIZmhxylJs2Rzp3rGIU3yFiaKkBls-1761873088861-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '481'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '570'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '149999952'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999955'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 0s
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_1b331f2fb8d943249e9c336608e2f2cf
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
    body: '{"messages":[{"role":"system","content":"Please convert the following text
      into valid JSON.\n\nOutput ONLY the valid JSON and nothing else.\n\nThe JSON
      must follow this schema exactly:\n```json\n{\n score: int\n}\n```"},{"role":"user","content":"4"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"score":{"title":"Score","type":"integer"}},"required":["score"],"title":"ScoreOutput","type":"object","additionalProperties":false},"name":"ScoreOutput","strict":true}},"stream":false}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate, zstd
      connection:
      - keep-alive
      content-length:
      - '541'
      content-type:
      - application/json
      cookie:
      - __cf_bm=NFLqe8oMW.d350lBeNJ9PQDQM.Rj0B9eCRBNNKM18qg-1761873088-1.0.1.1-Ipgawg95icfLAihgKfper9rYrjt3ZrKVSv_9lKRqJzx.FBfkZrcDqSW3Zt7TiktUIOSgO9JpX3Ia3Fu9g3DMTwWpaGJtoOj3u0I2USV9.qQ;
        _cfuvid=dQQqd3jb3DFD.LOIZmhxylJs2Rzp3rGIU3yFiaKkBls-1761873088861-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.109.1
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-helper-method:
      - chat.completions.parse
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.109.1
      x-stainless-read-timeout:
      - '600'
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.10
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: !!binary |
        H4sIAAAAAAAAAwAAAP//jFLLbtswELzrK4g9W4HlyI6sWx8IeuulQIs2gUCTK4kpRRLkKnBg+N8L
        SrYl5wH0osPOznBmtIeEMVASSgai5SQ6p9MvP3/t3Tf6JFX79ffL04/b5+/Z/X672zT55x4WkWF3
        TyjozLoRtnMaSVkzwsIjJ4yq2d0mK+5ul8V2ADorUUda4yjNb7K0U0alq+VqnS7zNMtP9NYqgQFK
        9idhjLHD8I1GjcQ9lGy5OE86DIE3COVliTHwVscJ8BBUIG4IFhMorCE0g/fDAwRhPT5AmR/nOx7r
        PvBo1PRazwBujCUegw7uHk/I8eJH28Z5uwuvqFAro0JbeeTBmvh2IOtgQI8JY49D7v4qCjhvO0cV
        2b84PFesRzmY6p7AM0aWuJ7G21NV12KVROJKh1ltILhoUU7MqWPeS2VnQDKL/NbLe9pjbGWa/5Gf
        ACHQEcrKeZRKXOed1jzGW/xo7VLxYBgC+mclsCKFPv4GiTXv9XggEF4CYVfVyjTonVfjldSuysWq
        WGd1sVlBckz+AQAA//8DAKv/0dE0AwAA
    headers:
      CF-RAY:
      - 996f4755989559cb-MXP
      Connection:
      - keep-alive
      Content-Encoding:
      - gzip
      Content-Type:
      - application/json
      Date:
      - Fri, 31 Oct 2025 01:11:29 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - max-age=31536000; includeSubDomains; preload
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - nosniff
      access-control-expose-headers:
      - X-Request-ID
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '400'
      openai-project:
      - proj_xitITlrFeen7zjNSzML82h9x
      openai-version:
      - '2020-10-01'
      x-envoy-upstream-service-time:
      - '659'
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-project-tokens:
      - '150000000'
      x-ratelimit-limit-requests:
      - '30000'
      x-ratelimit-limit-tokens:
      - '150000000'
      x-ratelimit-remaining-project-tokens:
      - '149999955'
      x-ratelimit-remaining-requests:
      - '29999'
      x-ratelimit-remaining-tokens:
      - '149999955'
      x-ratelimit-reset-project-tokens:
      - 0s
      x-ratelimit-reset-requests:
      - 2ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_7829900551634a0db8009042f31db7fc
    status:
      code: 200
      message: OK
version: 1
@@ -1,415 +1,137 @@
interactions:
- request:
    body: '{"messages": [{"role": "system", "content": "You are Scorer. You''re an
      expert scorer, specialized in scoring titles.\nYour personal goal is: Score
      the title\nTo give my best complete final answer to the task use the exact following
      format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
      answer must be the great and the most complete as possible, it must be outcome
      described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
      "content": "\nCurrent Task: Give me an integer score between 1-5 for the following
      title: ''The impact of AI in the future of work''\n\nThis is the expect criteria
      for your final answer: The score of the title.\nyou MUST return the actual complete
      content as the final answer, not a summary.\n\nBegin! This is VERY important
      to you, use the tools available and give your best Final Answer, your job depends
      on it!\n\nThought:"}], "model": "gpt-4-0125-preview"}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '927'
      content-type:
      - application/json
      cookie:
      - __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g;
        _cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.47.0
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.47.0
      x-stainless-raw-response:
      - 'true'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.11.7
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    content: "{\n  \"id\": \"chatcmpl-AB7gEbfUEcEY8uxRqngZ1AHO3Kh8G\",\n  \"object\":
      \"chat.completion\",\n  \"created\": 1727214494,\n  \"model\": \"gpt-4-0125-preview\",\n
      \ \"choices\": [\n    {\n      \"index\": 0,\n      \"message\": {\n        \"role\":
      \"assistant\",\n        \"content\": \"Thought: I now can give a great answer\\nFinal
      Answer: 4\",\n        \"refusal\": null\n      },\n      \"logprobs\": null,\n
      \ \"finish_reason\": \"stop\"\n    }\n  ],\n  \"usage\": {\n    \"prompt_tokens\":
      187,\n    \"completion_tokens\": 15,\n    \"total_tokens\": 202,\n    \"completion_tokens_details\":
      {\n      \"reasoning_tokens\": 0\n    }\n  },\n  \"system_fingerprint\": null\n}\n"
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 8c85fa3b6b9e1cf3-GRU
      Connection:
      - keep-alive
      Content-Encoding:
      - gzip
      Content-Type:
      - application/json
      Date:
      - Tue, 24 Sep 2024 21:48:14 GMT
      Server:
      - cloudflare
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - nosniff
      access-control-expose-headers:
      - X-Request-ID
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '730'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains; preload
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '2000000'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
      - '1999781'
      x-ratelimit-reset-requests:
      - 6ms
      x-ratelimit-reset-tokens:
      - 6ms
      x-request-id:
      - req_7229ec6efc9642277f866a4769b8428c
    http_version: HTTP/1.1
    status_code: 200
- request:
    body: '{"messages": [{"role": "user", "content": "4"}, {"role": "system", "content":
      "I''m gonna convert this raw text into valid JSON.\n\nThe json should have the
      following structure, with the following keys:\n{\n score: int\n}"}], "model":
      "gpt-3.5-turbo-0125", "tool_choice": {"type": "function", "function": {"name":
      "ScoreOutput"}}, "tools": [{"type": "function", "function": {"name": "ScoreOutput",
      "description": "Correctly extracted `ScoreOutput` with all the required parameters
      with correct types", "parameters": {"properties": {"score": {"title": "Score",
      "type": "integer"}}, "required": ["score"], "type": "object"}}}]}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '627'
      content-type:
      - application/json
      cookie:
      - __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g;
        _cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.47.0
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.47.0
      x-stainless-raw-response:
      - 'true'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.11.7
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    content: "{\n  \"id\": \"chatcmpl-AB7gF9MWuZGxknKnrtesloXhXendq\",\n  \"object\":
      \"chat.completion\",\n  \"created\": 1727214495,\n  \"model\": \"gpt-3.5-turbo-0125\",\n
      \ \"choices\": [\n    {\n      \"index\": 0,\n      \"message\": {\n        \"role\":
      \"assistant\",\n        \"content\": null,\n        \"tool_calls\": [\n          {\n
      \           \"id\": \"call_vIxfdg9Ebnr1Z3TthsEcVXby\",\n            \"type\":
      \"function\",\n            \"function\": {\n              \"name\": \"ScoreOutput\",\n
      \             \"arguments\": \"{\\\"score\\\":4}\"\n            }\n          }\n
      \       ],\n        \"refusal\": null\n      },\n      \"logprobs\": null,\n
      \ \"finish_reason\": \"stop\"\n    }\n  ],\n  \"usage\": {\n    \"prompt_tokens\":
      103,\n    \"completion_tokens\": 5,\n    \"total_tokens\": 108,\n    \"completion_tokens_details\":
      {\n      \"reasoning_tokens\": 0\n    }\n  },\n  \"system_fingerprint\": null\n}\n"
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 8c85fa41bc891cf3-GRU
      Connection:
      - keep-alive
      Content-Encoding:
      - gzip
      Content-Type:
      - application/json
      Date:
      - Tue, 24 Sep 2024 21:48:15 GMT
      Server:
      - cloudflare
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - nosniff
      access-control-expose-headers:
      - X-Request-ID
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '253'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains; preload
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '50000000'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
      - '49999946'
      x-ratelimit-reset-requests:
      - 6ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_fe42bae7f8f0d8830aa96ac82a70bb78
    http_version: HTTP/1.1
    status_code: 200
- request:
    body: '{"messages": [{"role": "system", "content": "You are Scorer. You''re an
      expert scorer, specialized in scoring titles.\nYour personal goal is: Score
      the title\nTo give my best complete final answer to the task use the exact following
      format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
      answer must be the great and the most complete as possible, it must be outcome
      described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
      "content": "\nCurrent Task: Given the score the title ''The impact of AI in
      the future of work'' got, give me an integer score between 1-5 for the following
      title: ''Return of the Jedi'', you MUST give it a score, use your best judgment\n\nThis
      is the expect criteria for your final answer: The score of the title.\nyou MUST
      return the actual complete content as the final answer, not a summary.\n\nThis
      is the context you''re working with:\n4\n\nBegin! This is VERY important to
      you, use the tools available and give your best Final Answer, your job depends
      on it!\n\nThought:"}], "model": "gpt-4-0125-preview"}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '1076'
      content-type:
      - application/json
      cookie:
      - __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g;
        _cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.47.0
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.47.0
      x-stainless-raw-response:
      - 'true'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.11.7
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    content: "{\n  \"id\": \"chatcmpl-AB7gFkgb0JsYMhXR8qaHnXKOQfP7B\",\n  \"object\":
      \"chat.completion\",\n  \"created\": 1727214495,\n  \"model\": \"gpt-4-0125-preview\",\n
      \ \"choices\": [\n    {\n      \"index\": 0,\n      \"message\": {\n        \"role\":
      \"assistant\",\n        \"content\": \"Thought: I now can give a great answer\\n\\nFinal
      Answer: 5\",\n        \"refusal\": null\n      },\n      \"logprobs\": null,\n
      \ \"finish_reason\": \"stop\"\n    }\n  ],\n  \"usage\": {\n    \"prompt_tokens\":
      223,\n    \"completion_tokens\": 15,\n    \"total_tokens\": 238,\n    \"completion_tokens_details\":
      {\n      \"reasoning_tokens\": 0\n    }\n  },\n  \"system_fingerprint\": null\n}\n"
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 8c85fa461a4d1cf3-GRU
      Connection:
      - keep-alive
      Content-Encoding:
      - gzip
      Content-Type:
      - application/json
      Date:
      - Tue, 24 Sep 2024 21:48:16 GMT
      Server:
      - cloudflare
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - nosniff
      access-control-expose-headers:
      - X-Request-ID
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '799'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains; preload
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '2000000'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
      - '1999744'
      x-ratelimit-reset-requests:
      - 6ms
      x-ratelimit-reset-tokens:
      - 7ms
      x-request-id:
      - req_7ea6637f8e14c2b260cce6e3e3004cbb
    http_version: HTTP/1.1
    status_code: 200
- request:
    body: !!binary |
      CsITCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkSmRMKEgoQY3Jld2FpLnRl
      bGVtZXRyeRKbAQoQNiHDtik0VJXUvY2TAXlq5xIIVN7ytJgQOTQqClRvb2wgVXNhZ2UwATkgE44P
      bkz4F0GgvJEPbkz4F0oaCg5jcmV3YWlfdmVyc2lvbhIICgYwLjYxLjBKJwoJdG9vbF9uYW1lEhoK
      GEFzayBxdWVzdGlvbiB0byBjb3dvcmtlckoOCghhdHRlbXB0cxICGAF6AhgBhQEAAQAAEpACChB9
      D1jc9xslW6yx+1EBiZyOEgg07DmIaDWZkCoOVGFzayBFeGVjdXRpb24wATnAXgcVbUz4F0H4h/Rb
      bkz4F0ouCghjcmV3X2tleRIiCiA1ZTZlZmZlNjgwYTVkOTdkYzM4NzNiMTQ4MjVjY2ZhM0oxCgdj
      cmV3X2lkEiYKJDFjZWJhZTk5LWYwNmQtNDEzYS05N2ExLWRlZWU1NjU3ZWFjNkouCgh0YXNrX2tl
      eRIiCiAyN2VmMzhjYzk5ZGE0YThkZWQ3MGVkNDA2ZTQ0YWI4NkoxCgd0YXNrX2lkEiYKJDU1MjQ5
      M2IwLWFmNDctNGVmMC04M2NjLWIwYmRjMzUxZWY2N3oCGAGFAQABAAASnAkKEIX+S/hQ6K5kLLu+
      55qXcH8SCOxCl7XWayIeKgxDcmV3IENyZWF0ZWQwATkQYspcbkz4F0G4Ls5cbkz4F0oaCg5jcmV3
      YWlfdmVyc2lvbhIICgYwLjYxLjBKGgoOcHl0aG9uX3ZlcnNpb24SCAoGMy4xMS43Si4KCGNyZXdf
      a2V5EiIKIGQ0MjYwODMzYWIwYzIwYmI0NDkyMmM3OTlhYTk2YjRhSjEKB2NyZXdfaWQSJgokMjE2
      YmRkZDYtYzVhOS00NDk2LWFlYzctYjNlMDBhNzQ5NDVjShwKDGNyZXdfcHJvY2VzcxIMCgpzZXF1
      ZW50aWFsShEKC2NyZXdfbWVtb3J5EgIQAEoaChRjcmV3X251bWJlcl9vZl90YXNrcxICGAJKGwoV
      Y3Jld19udW1iZXJfb2ZfYWdlbnRzEgIYAUrlAgoLY3Jld19hZ2VudHMS1QIK0gJbeyJrZXkiOiAi
      OTJlN2ViMTkxNjY0YzkzNTc4NWVkN2Q0MjQwYTI5NGQiLCAiaWQiOiAiMDUzYWJkMGUtNzc0Ny00
      Mzc5LTg5ZWUtMTc1YjkwYWRjOGFjIiwgInJvbGUiOiAiU2NvcmVyIiwgInZlcmJvc2U/IjogdHJ1
      ZSwgIm1heF9pdGVyIjogMTUsICJtYXhfcnBtIjogbnVsbCwgImZ1bmN0aW9uX2NhbGxpbmdfbGxt
      IjogImdwdC0zLjUtdHVyYm8tMDEyNSIsICJsbG0iOiAiZ3B0LTQtMDEyNS1wcmV2aWV3IiwgImRl
      bGVnYXRpb25fZW5hYmxlZD8iOiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZhbHNl
      LCAibWF4X3JldHJ5X2xpbWl0IjogMiwgInRvb2xzX25hbWVzIjogW119XUrkAwoKY3Jld190YXNr
      cxLVAwrSA1t7ImtleSI6ICIyN2VmMzhjYzk5ZGE0YThkZWQ3MGVkNDA2ZTQ0YWI4NiIsICJpZCI6
      ICIxMTgyYzllZi02NzU3LTQ0ZTktOTA4Yi1jZmE2ZWIzODYxNWEiLCAiYXN5bmNfZXhlY3V0aW9u
      PyI6IGZhbHNlLCAiaHVtYW5faW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjogIlNjb3JlciIs
      ICJhZ2VudF9rZXkiOiAiOTJlN2ViMTkxNjY0YzkzNTc4NWVkN2Q0MjQwYTI5NGQiLCAidG9vbHNf
      bmFtZXMiOiBbXX0sIHsia2V5IjogIjYwOWRlZTM5MTA4OGNkMWM4N2I4ZmE2NmFhNjdhZGJlIiwg
      ImlkIjogImJkZDhiZWYxLWZhNTYtNGQwYy1hYjQ0LTdiMjE0YzY2ODhiNSIsICJhc3luY19leGVj
      dXRpb24/IjogZmFsc2UsICJodW1hbl9pbnB1dD8iOiBmYWxzZSwgImFnZW50X3JvbGUiOiAiU2Nv
      cmVyIiwgImFnZW50X2tleSI6ICI5MmU3ZWIxOTE2NjRjOTM1Nzg1ZWQ3ZDQyNDBhMjk0ZCIsICJ0
      b29sc19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEo4CChCtIlcpdDnI8/HhoLC7gN6iEgje2a5QieRJ
      MSoMVGFzayBDcmVhdGVkMAE58GXmXG5M+BdBKC3nXG5M+BdKLgoIY3Jld19rZXkSIgogZDQyNjA4
      MzNhYjBjMjBiYjQ0OTIyYzc5OWFhOTZiNGFKMQoHY3Jld19pZBImCiQyMTZiZGRkNi1jNWE5LTQ0
      OTYtYWVjNy1iM2UwMGE3NDk0NWNKLgoIdGFza19rZXkSIgogMjdlZjM4Y2M5OWRhNGE4ZGVkNzBl
      ZDQwNmU0NGFiODZKMQoHdGFza19pZBImCiQxMTgyYzllZi02NzU3LTQ0ZTktOTA4Yi1jZmE2ZWIz
      ODYxNWF6AhgBhQEAAQAAEpACChCBZ3BQ5YuuLU2Wn6fiGtU/Egh7U3eIthSUQioOVGFzayBFeGVj
      dXRpb24wATlIe+dcbkz4F0HwIavCbkz4F0ouCghjcmV3X2tleRIiCiBkNDI2MDgzM2FiMGMyMGJi
      NDQ5MjJjNzk5YWE5NmI0YUoxCgdjcmV3X2lkEiYKJDIxNmJkZGQ2LWM1YTktNDQ5Ni1hZWM3LWIz
      ZTAwYTc0OTQ1Y0ouCgh0YXNrX2tleRIiCiAyN2VmMzhjYzk5ZGE0YThkZWQ3MGVkNDA2ZTQ0YWI4
      NkoxCgd0YXNrX2lkEiYKJDExODJjOWVmLTY3NTctNDRlOS05MDhiLWNmYTZlYjM4NjE1YXoCGAGF
      AQABAAASjgIKEOXCP/jH0lAyFChYhl/yRVASCMIALtbkZaYqKgxUYXNrIENyZWF0ZWQwATl4b8vC
      bkz4F0E4x8zCbkz4F0ouCghjcmV3X2tleRIiCiBkNDI2MDgzM2FiMGMyMGJiNDQ5MjJjNzk5YWE5
      NmI0YUoxCgdjcmV3X2lkEiYKJDIxNmJkZGQ2LWM1YTktNDQ5Ni1hZWM3LWIzZTAwYTc0OTQ1Y0ou
      Cgh0YXNrX2tleRIiCiA2MDlkZWUzOTEwODhjZDFjODdiOGZhNjZhYTY3YWRiZUoxCgd0YXNrX2lk
      EiYKJGJkZDhiZWYxLWZhNTYtNGQwYy1hYjQ0LTdiMjE0YzY2ODhiNXoCGAGFAQABAAA=
    body: '{"trace_id": "f4e3d2a7-6f34-4327-afca-c78e71cadd72", "execution_type":
      "crew", "user_identifier": null, "execution_context": {"crew_fingerprint": null,
      "crew_name": "crew", "flow_name": null, "crewai_version": "1.2.1", "privacy_level":
      "standard"}, "execution_metadata": {"expected_duration_estimate": 300, "agent_count":
      0, "task_count": 0, "flow_method_count": 0, "execution_started_at": "2025-10-31T21:52:20.918825+00:00"},
      "ephemeral_trace_id": "f4e3d2a7-6f34-4327-afca-c78e71cadd72"}'
    headers:
      Accept:
      - '*/*'
      Accept-Encoding:
      - gzip, deflate
      - gzip, deflate, zstd
      Connection:
      - keep-alive
      Content-Length:
      - '2501'
      - '488'
      Content-Type:
      - application/x-protobuf
      - application/json
      User-Agent:
      - OTel-OTLP-Exporter-Python/1.27.0
      - CrewAI-CLI/1.2.1
      X-Crewai-Organization-Id:
      - 73c2b193-f579-422c-84c7-76a39a1da77f
      X-Crewai-Version:
      - 1.2.1
    method: POST
    uri: https://telemetry.crewai.com:4319/v1/traces
    uri: https://app.crewai.com/crewai_plus/api/v1/tracing/ephemeral/batches
  response:
    body:
      string: "\n\0"
      string: '{"id":"2adb4334-2adb-4585-90b9-03921447ab54","ephemeral_trace_id":"f4e3d2a7-6f34-4327-afca-c78e71cadd72","execution_type":"crew","crew_name":"crew","flow_name":null,"status":"running","duration_ms":null,"crewai_version":"1.2.1","total_events":0,"execution_context":{"crew_fingerprint":null,"crew_name":"crew","flow_name":null,"crewai_version":"1.2.1","privacy_level":"standard"},"created_at":"2025-10-31T21:52:21.259Z","updated_at":"2025-10-31T21:52:21.259Z","access_code":"TRACE-c984d48836","user_identifier":null}'
    headers:
      Connection:
      - keep-alive
      Content-Length:
      - '2'
      - '515'
      Content-Type:
      - application/x-protobuf
      - application/json; charset=utf-8
      Date:
      - Tue, 24 Sep 2024 21:48:17 GMT
      - Fri, 31 Oct 2025 21:52:21 GMT
      cache-control:
      - no-store
      content-security-policy:
      - 'default-src ''self'' *.app.crewai.com app.crewai.com; script-src ''self''
        ''unsafe-inline'' *.app.crewai.com app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts
        https://www.gstatic.com https://run.pstmn.io https://apis.google.com https://apis.google.com/js/api.js
        https://accounts.google.com https://accounts.google.com/gsi/client https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.min.css.map
        https://*.google.com https://docs.google.com https://slides.google.com https://js.hs-scripts.com
        https://js.sentry-cdn.com https://browser.sentry-cdn.com https://www.googletagmanager.com
        https://js-na1.hs-scripts.com https://js.hubspot.com http://js-na1.hs-scripts.com
        https://bat.bing.com https://cdn.amplitude.com https://cdn.segment.com https://d1d3n03t5zntha.cloudfront.net/
        https://descriptusercontent.com https://edge.fullstory.com https://googleads.g.doubleclick.net
        https://js.hs-analytics.net https://js.hs-banner.com https://js.hsadspixel.net
        https://js.hscollectedforms.net https://js.usemessages.com https://snap.licdn.com
        https://static.cloudflareinsights.com https://static.reo.dev https://www.google-analytics.com
        https://share.descript.com/; style-src ''self'' ''unsafe-inline'' *.app.crewai.com
        app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts; img-src ''self'' data:
        *.app.crewai.com app.crewai.com https://zeus.tools.crewai.com https://dashboard.tools.crewai.com
        https://cdn.jsdelivr.net https://forms.hsforms.com https://track.hubspot.com
        https://px.ads.linkedin.com https://px4.ads.linkedin.com https://www.google.com
        https://www.google.com.br; font-src ''self'' data: *.app.crewai.com app.crewai.com;
        connect-src ''self'' *.app.crewai.com app.crewai.com https://zeus.tools.crewai.com
        https://connect.useparagon.com/ https://zeus.useparagon.com/* https://*.useparagon.com/*
        https://run.pstmn.io https://connect.tools.crewai.com/ https://*.sentry.io
        https://www.google-analytics.com https://edge.fullstory.com https://rs.fullstory.com
        https://api.hubspot.com https://forms.hscollectedforms.net https://api.hubapi.com
        https://px.ads.linkedin.com https://px4.ads.linkedin.com https://google.com/pagead/form-data/16713662509
        https://google.com/ccm/form-data/16713662509 https://www.google.com/ccm/collect
        https://worker-actionkit.tools.crewai.com https://api.reo.dev; frame-src ''self''
        *.app.crewai.com app.crewai.com https://connect.useparagon.com/ https://zeus.tools.crewai.com
        https://zeus.useparagon.com/* https://connect.tools.crewai.com/ https://docs.google.com
        https://drive.google.com https://slides.google.com https://accounts.google.com
        https://*.google.com https://app.hubspot.com/ https://td.doubleclick.net https://www.googletagmanager.com/
        https://www.youtube.com https://share.descript.com'
      etag:
      - W/"de8355cd003b150e7c530e4f15d97140"
      expires:
      - '0'
      permissions-policy:
      - camera=(), microphone=(self), geolocation=()
      pragma:
      - no-cache
      referrer-policy:
      - strict-origin-when-cross-origin
      strict-transport-security:
      - max-age=63072000; includeSubDomains
      vary:
      - Accept
      x-content-type-options:
      - nosniff
      x-frame-options:
      - SAMEORIGIN
      x-permitted-cross-domain-policies:
      - none
      x-request-id:
      - 09d43be3-106a-44dd-a9a2-816d53f91d5d
      x-runtime:
      - '0.066900'
      x-xss-protection:
      - 1; mode=block
    status:
      code: 200
      message: OK
      code: 201
      message: Created
- request:
|
||||
body: '{"messages": [{"role": "user", "content": "5"}, {"role": "system", "content":
|
||||
"I''m gonna convert this raw text into valid JSON.\n\nThe json should have the
|
||||
following structure, with the following keys:\n{\n score: int\n}"}], "model":
|
||||
"gpt-3.5-turbo-0125", "tool_choice": {"type": "function", "function": {"name":
|
||||
"ScoreOutput"}}, "tools": [{"type": "function", "function": {"name": "ScoreOutput",
|
||||
"description": "Correctly extracted `ScoreOutput` with all the required parameters
|
||||
with correct types", "parameters": {"properties": {"score": {"title": "Score",
|
||||
"type": "integer"}}, "required": ["score"], "type": "object"}}}]}'
|
||||
body: '{"messages":[{"role":"system","content":"You are Scorer. You''re an expert
|
||||
scorer, specialized in scoring titles.\nYour personal goal is: Score the title\nTo
|
||||
give my best complete final answer to the task respond using the exact following
|
||||
format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
|
||||
answer must be the great and the most complete as possible, it must be outcome
|
||||
described.\n\nI MUST use these formats, my job depends on it!"},{"role":"user","content":"\nCurrent
|
||||
Task: Give me an integer score between 1-5 for the following title: ''The impact
|
||||
of AI in the future of work''\n\nThis is the expected criteria for your final
|
||||
answer: The score of the title.\nyou MUST return the actual complete content
|
||||
as the final answer, not a summary.\nEnsure your final answer contains only
|
||||
the content in the following format: {\n \"properties\": {\n \"score\":
|
||||
{\n \"title\": \"Score\",\n \"type\": \"integer\"\n }\n },\n \"required\":
|
||||
[\n \"score\"\n ],\n \"title\": \"ScoreOutput\",\n \"type\": \"object\",\n \"additionalProperties\":
|
||||
false\n}\n\nEnsure the final output does not include any code block markers
|
||||
like ```json or ```python.\n\nBegin! This is VERY important to you, use the
|
||||
tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],"model":"gpt-4o"}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '627'
|
||||
- '1334'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g;
|
||||
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.47.0
|
||||
- OpenAI/Python 1.109.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
@@ -419,32 +141,34 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.47.0
x-stainless-raw-response:
- 'true'
- 1.109.1
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
- 3.12.10
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AB7gHtXxJaZ5NZiXZzc0HUAObflqc\",\n \"object\":
\"chat.completion\",\n \"created\": 1727214497,\n \"model\": \"gpt-3.5-turbo-0125\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_hFpl2Plo4DudP1t3SGHby2vo\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"ScoreOutput\",\n
\ \"arguments\": \"{\\\"score\\\":5}\"\n }\n }\n
\ ],\n \"refusal\": null\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
103,\n \"completion_tokens\": 5,\n \"total_tokens\": 108,\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0\n }\n },\n \"system_fingerprint\": null\n}\n"
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//jFPLbtswELz7KxY824WdOPHjFgQokPbQFi2aolEgrMmVzIQiWXLlNAn8
7wElxZLTFuhFAmf2Ncvh8whAaCXWIOQWWVbeTC6vw8fi+pF31dWHO39xap++XH3/ef9kLpc/7sQ4
ZbjNHUl+zXonXeUNsXa2pWUgZEpVZ4vz2Wq+OJvPGqJyikxKKz1P5m5yMj2ZT6bLyfS8S9w6LSmK
NdyMAACem28a0Sr6LdYwHb8iFcWIJYn1IQhABGcSIjBGHRkti3FPSmeZbDP1FVj3ABItlHpHgFCm
iQFtfKCQ2ffaooGL5rSG58wCZMIH5ymwppiJDkxwlC7QAEkYazYNlomvLT0ekI++47RlKikcsYqi
DNqnXbZB37YESU5pSUHTDAoXgLcETRvYYCQFzoLmCIEM7dBKArQKdOVRciba8vv0249bNYF+1TqQ
Sk1u3mhJx9su7q2UTzX7mruRh2JaSxwIVEonEWg+H+2tQBOpizmsbp7Z/fCmAhV1xGQUWxszINBa
x5jqNh657Zj9wRXGlT64TXyTKgptddzmgTA6mxwQ2XnRsPtRUpvcVx8ZKl145Tlnd09Nu5PlrK0n
er/37GrVkewYTY+fLjvPHtfLFTFqEwf+FRLlllSf2psda6XdgBgNVP85zd9qt8q1Lf+nfE9ISZ5J
5T6Q0vJYcR8WKN39v8IOW24GFpHCTkvKWVNIN6GowNq0L1XEx8hU5YW2JQUfdPtcC5/LTTFbLM/O
zhditB+9AAAA//8DAB7xWDm3BAAA
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c85fa4cebc21cf3-GRU
- 99766103c9f57d16-EWR
Connection:
- keep-alive
Content-Encoding:
@@ -452,37 +176,193 @@ interactions:
Content-Type:
- application/json
Date:
- Tue, 24 Sep 2024 21:48:17 GMT
- Fri, 31 Oct 2025 21:52:23 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=M0OyXPOd4vZCE92p.8e.is2jhrt7g6vYTBI3Y2Pg7PE-1761947543-1.0.1.1-orJHNWV50gzMMUsFex2S_O1ofp7KQ_r.9iAzzWwYGyBW1puzUvacw0OkY2KXSZf2mcUI_Rwg6lzRuwAT6WkysTCS52D.rp3oNdgPcSk3JSk;
path=/; expires=Fri, 31-Oct-25 22:22:23 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=LmEPJTcrhfn7YibgpOHVOK1U30pNnM9.PFftLZG98qs-1761947543691-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '196'
- '1824'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '1855'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-project-requests:
- '10000'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '50000000'
- '30000000'
x-ratelimit-remaining-project-requests:
- '9999'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '49999947'
- '29999700'
x-ratelimit-reset-project-requests:
- 6ms
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_deaa35bcd479744c6ce7363ae1b27d9e
http_version: HTTP/1.1
status_code: 200
- req_ef5bf5e7aa51435489f0c9d725916ff7
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are Scorer. You''re an expert
scorer, specialized in scoring titles.\nYour personal goal is: Score the title\nTo
give my best complete final answer to the task respond using the exact following
format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
answer must be the great and the most complete as possible, it must be outcome
described.\n\nI MUST use these formats, my job depends on it!"},{"role":"user","content":"\nCurrent
Task: Given the score the title ''The impact of AI in the future of work'' got,
give me an integer score between 1-5 for the following title: ''Return of the
Jedi'', you MUST give it a score, use your best judgment\n\nThis is the expected
criteria for your final answer: The score of the title.\nyou MUST return the
actual complete content as the final answer, not a summary.\nEnsure your final
answer contains only the content in the following format: {\n \"properties\":
{\n \"score\": {\n \"title\": \"Score\",\n \"type\": \"integer\"\n }\n },\n \"required\":
[\n \"score\"\n ],\n \"title\": \"ScoreOutput\",\n \"type\": \"object\",\n \"additionalProperties\":
false\n}\n\nEnsure the final output does not include any code block markers
like ```json or ```python.\n\nThis is the context you''re working with:\n{\n \"properties\":
{\n \"score\": {\n \"title\": \"Score\",\n \"type\": \"integer\",\n \"description\":
\"The assigned score for the title based on its relevance and impact\"\n }\n },\n \"required\":
[\n \"score\"\n ],\n \"title\": \"ScoreOutput\",\n \"type\": \"object\",\n \"additionalProperties\":
false,\n \"score\": 4\n}\n\nBegin! This is VERY important to you, use the tools
available and give your best Final Answer, your job depends on it!\n\nThought:"}],"model":"gpt-4o"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '1840'
content-type:
- application/json
cookie:
- __cf_bm=M0OyXPOd4vZCE92p.8e.is2jhrt7g6vYTBI3Y2Pg7PE-1761947543-1.0.1.1-orJHNWV50gzMMUsFex2S_O1ofp7KQ_r.9iAzzWwYGyBW1puzUvacw0OkY2KXSZf2mcUI_Rwg6lzRuwAT6WkysTCS52D.rp3oNdgPcSk3JSk;
_cfuvid=LmEPJTcrhfn7YibgpOHVOK1U30pNnM9.PFftLZG98qs-1761947543691-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.109.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.109.1
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.10
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//jJNdb5swFIbv8yssX5OJpCRpuJumrZoqtdO6rRelQo59AG/G9uxDuyji
v1cGGkjWSbsB+Tzn0+/xYUYIlYKmhPKKIa+tmn+4d9fyvr76Wl5cXcer2215s5R3H3c/xPf9DY1C
hNn9BI6vUe+4qa0ClEb3mDtgCCHrYrNebJPNKkk6UBsBKoSVFueJmS/jZTKPL+fxegisjOTgaUoe
ZoQQcui+oUUt4A9NSRy9WmrwnpVA06MTIdQZFSyUeS89Mo00GiE3GkF3XX+rTFNWmJLPRJtnwpkm
pXwCwkgZWidM+2dwmf4kNVPkfXdKySHThGTUOmPBoQSf0cEYzJ4bBxNLsKFE1dkyetfjaAL3dmBS
I5TgMtrDNvzaqK/m4HcjHYjg+XBWKxwfB7/zUrcN2gaHgtNivXZHwISQQTmmvpzMVTDlYfA5jpZk
sp1eqYOi8SwoqhulJoBpbZCFvJ2YjwNpj/IpU1pndv4slBZSS1/lDpg3Okjl0Vja0XYWpg1r0pwo
HwSpLeZofkFXLomXfT46LuZILy8GiAaZmkRdrqI38uUCkEnlJ4tGOeMViDF03ErWCGkmYDaZ+u9u
3srdTy51+T/pR8A5WASRWwdC8tOJRzcHQft/uR1vuWuYenBPkkOOElxQQkDBGtU/Ker3HqHOC6lL
cNbJ/l0VNl+stut1wra7DZ21sxcAAAD//wMAwih5UmAEAAA=
headers:
CF-RAY:
- 99766114cf7c7d16-EWR
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 31 Oct 2025 21:52:25 GMT
Server:
- cloudflare
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '1188'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '1206'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-project-requests:
- '10000'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-project-requests:
- '9999'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999586'
x-ratelimit-reset-project-requests:
- 6ms
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_030ffb3d92bb47589d61d50b48f068d4
status:
code: 200
message: OK
version: 1
@@ -530,4 +530,235 @@ interactions:
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"Please convert the following text
into valid JSON.\n\nOutput ONLY the valid JSON and nothing else.\n\nThe JSON
must follow this schema exactly:\n```json\n{\n score: int\n}\n```"},{"role":"user","content":"4"}],"model":"gpt-4.1-mini"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '277'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.109.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.109.1
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.10
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//jJIxb9swEIV3/QriZiuwFNlxtBadig6ZGrQKJJo8SbQpkiWpwoHh/16Q
ci0lTYEuGu679/TueOeEEBAcSgKsp54NRqafvj2ffh4PX56e2n37+vW7Gj8/n/qjE/64OcAqKPT+
gMz/Ud0xPRiJXmg1YWaRegyu2cM22z3crx/XEQyaowyyzvi0uMvSQSiR5ut8k66LNCuu8l4Lhg5K
8iMhhJBz/IagiuMJShLNYmVA52iHUN6aCAGrZagAdU44T5WH1QyZVh5VzN40zcFpValzpQKrwDFt
sYKSFJW6VKppmqXUYjs6GvKrUcoFoEppT8P8MfTLlVxuMaXujNV7904KrVDC9bVF6rQKkZzXBiK9
JIS8xHWMbyYEY/VgfO31EePvisfJDuZXmGF2f4Veeyrn+jZffeBWc/RUSLdYJzDKeuSzct49HbnQ
C5AsZv47zEfe09xCdf9jPwPG0HjktbHIBXs78NxmMdzov9puO46BwaH9JRjWXqAN78CxpaOcDgfc
q/M41K1QHVpjxXQ9rakLlu82Wbvb5pBckt8AAAD//wMA5Zmg4EwDAAA=
headers:
CF-RAY:
- 996f475b7e3fedda-MXP
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 31 Oct 2025 01:11:31 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=9OwoBJAn84Nsq0RZdCIu06cNB6RLqor4C1.Q58nU28U-1761873091-1.0.1.1-p82_h8Vnxe0NfH5Iv6MFt.SderZj.v9VnCx_ro6ti2MGhlJOLFsPd6XhBxPsnmuV7Vt_4_uqAbE57E5f1Epl1cmGBT.0844N3CLnTwZFWQI;
path=/; expires=Fri, 31-Oct-25 01:41:31 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=E4.xW3I8m58fngo4vkTKo8hmBumar1HkV.yU8KKjlZg-1761873091967-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '1770'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '1998'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-project-tokens:
- '150000000'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-project-tokens:
- '149999955'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999952'
x-ratelimit-reset-project-tokens:
- 0s
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_ba7a12cb40744f648d17844196f9c2c6
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"Please convert the following text
into valid JSON.\n\nOutput ONLY the valid JSON and nothing else.\n\nThe JSON
must follow this schema exactly:\n```json\n{\n score: int\n}\n```"},{"role":"user","content":"4"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"score":{"title":"Score","type":"integer"}},"required":["score"],"title":"ScoreOutput","type":"object","additionalProperties":false},"name":"ScoreOutput","strict":true}},"stream":false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '541'
content-type:
- application/json
cookie:
- __cf_bm=9OwoBJAn84Nsq0RZdCIu06cNB6RLqor4C1.Q58nU28U-1761873091-1.0.1.1-p82_h8Vnxe0NfH5Iv6MFt.SderZj.v9VnCx_ro6ti2MGhlJOLFsPd6XhBxPsnmuV7Vt_4_uqAbE57E5f1Epl1cmGBT.0844N3CLnTwZFWQI;
_cfuvid=E4.xW3I8m58fngo4vkTKo8hmBumar1HkV.yU8KKjlZg-1761873091967-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.109.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-helper-method:
- chat.completions.parse
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.109.1
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.10
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//jFLBbqMwFLzzFdY7hypQklKuVS899NpK2wo59gPcGtuyH92uovz7ypAE
0t1KvXB482Y8M7x9whgoCRUD0XESvdPp3dPzZ/gty77xO7fZPj4MMs/ldYf3Lt/CKjLs7g0FnVhX
wvZOIylrJlh45IRRNbvZZuXN9fo2H4HeStSR1jpKi6ss7ZVRab7ON+m6SLPiSO+sEhigYr8Sxhjb
j99o1Ej8hIqtV6dJjyHwFqE6LzEG3uo4AR6CCsQNwWoGhTWEZvS+f4EgrMcXqIrDcsdjMwQejZpB
6wXAjbHEY9DR3esROZz9aNs6b3fhCxUaZVToao88WBPfDmQdjOghYex1zD1cRAHnbe+oJvuO43Pl
ZpKDue4ZPGFkiet5fHus6lKslkhc6bCoDQQXHcqZOXfMB6nsAkgWkf/18j/tKbYy7U/kZ0AIdISy
dh6lEpd55zWP8Ra/WztXPBqGgP5DCaxJoY+/QWLDBz0dCIQ/gbCvG2Va9M6r6UoaVxciLzdZU25z
SA7JXwAAAP//AwAXjqY4NAMAAA==
headers:
CF-RAY:
- 996f47692b63edda-MXP
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 31 Oct 2025 01:11:33 GMT
Server:
- cloudflare
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '929'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '991'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-project-tokens:
- '150000000'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-project-tokens:
- '149999955'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999955'
x-ratelimit-reset-project-tokens:
- 0s
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_892607f68e764ba3846c431954608c36
status:
code: 200
message: OK
version: 1
@@ -1667,4 +1667,553 @@ interactions:
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"Convert all responses into valid
|
||||
JSON output."},{"role":"user","content":"Assess the quality of the task completed
|
||||
based on the description, expected output, and actual results.\n\nTask Description:\nResearch
|
||||
a topic to teach a kid aged 6 about math.\n\nExpected Output:\nA topic, explanation,
|
||||
angle, and examples.\n\nActual Output:\nI now can give a great answer.\nFinal
|
||||
Answer: \n**Topic**: Basic Addition\n\n**Explanation**:\nAddition is a fundamental
|
||||
concept in math that means combining two or more numbers to get a new total.
|
||||
It''s like putting together pieces of a puzzle to see the whole picture. When
|
||||
we add, we take two or more groups of things and count them all together.\n\n**Angle**:\nUse
|
||||
relatable and engaging real-life scenarios to illustrate addition, making it
|
||||
fun and easier for a 6-year-old to understand and apply.\n\n**Examples**:\n\n1.
|
||||
**Counting Apples**:\n Let''s say you have 2 apples and your friend gives
|
||||
you 3 more apples. How many apples do you have in total?\n - You start with
|
||||
2 apples.\n - Your friend gives you 3 more apples.\n - Now, you count all
|
||||
the apples together: 2 + 3 = 5.\n - So, you have 5 apples in total.\n\n2.
|
||||
**Toy Cars**:\n Imagine you have 4 toy cars and you find 2 more toy cars in
|
||||
your room. How many toy cars do you have now?\n - You start with 4 toy cars.\n -
|
||||
You find 2 more toy cars.\n - You count them all together: 4 + 2 = 6.\n -
|
||||
So, you have 6 toy cars in total.\n\n3. **Drawing Pictures**:\n If you draw
|
||||
3 pictures today and 2 pictures tomorrow, how many pictures will you have drawn
|
||||
in total?\n - You draw 3 pictures today.\n - You draw 2 pictures tomorrow.\n -
|
||||
You add them together: 3 + 2 = 5.\n - So, you will have drawn 5 pictures in
|
||||
total.\n\n4. **Using Fingers**:\n Let''s use your fingers to practice addition.
|
||||
Show 3 fingers on one hand and 1 finger on the other hand. How many fingers
|
||||
are you holding up?\n - 3 fingers on one hand.\n - 1 finger on the other
|
||||
hand.\n - Put them together and count: 3 + 1 = 4.\n - So, you are holding
|
||||
up 4 fingers.\n\nBy using objects that kids are familiar with, such as apples,
|
||||
toy cars, drawings, and even their own fingers, we can make the concept of addition
|
||||
relatable and enjoyable. Practicing with real items helps children visualize
|
||||
the math and understand that addition is simply combining groups to find out
|
||||
how many there are altogether.\n\nPlease provide:\n- Bullet points suggestions
|
||||
to improve future similar tasks\n- A score from 0 to 10 evaluating on completion,
|
||||
quality, and overall performance- Entities extracted from the task output, if
|
||||
any, their type, description, and relationships"}],"model":"gpt-4.1-mini"}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '2711'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- _cfuvid=uF44YidguuLD6X0Fw3uiyzdru2Ad2jXf2Nx1M4V87qI-1749851140865-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.109.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.109.1
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.10
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//rFfbbhw3DH33VxDz1AK7htdxnNRvqZsiLZCmaNMWaDdYcyXODGONNJEo
|
||||
24Mg/15QM3uLc0ObF69HpMjDy6Gkt0cAFdvqAirTopiud/PLv/5+9vzvrsvPfrv7/hneDD/fnbw5
|
||||
bf8Mty9+4GqmO8L6NRnZ7Do2oesdCQc/ik0kFFKri0fni8ePHi/OzoqgC5acbmt6mZ8dL+Yde56f
|
||||
npw+nJ+czRdn0/Y2sKFUXcA/RwAAb8tfBeot3VUXcDLbrHSUEjZUXWyVAKoYnK5UmBInQS/VbCc0
|
||||
wQv5gv3q6up1Cn7p3y49wLLiro/hhjryskq5aShpSGmpQFRDdZ7e9Y4NixugDrFDAWkJQpY+C7AH
|
||||
hCQxG8mRLNziAN/QcXM8g59/f/ELhAgIxhFGEOp6h0LfggRI1GNEIZDQs5kB3fUOPar3GaBvHOmP
|
||||
BbpDTXRS12AcRpbheFnNNuB+8sZlS3DDKaNL6m8vjrJrFAGyTeqZfIveEGRvKWquLPumKA4h+wZM
|
||||
y85G8vtenlgL7IUiGuEbUi8tepvmwUNZYmFKsKYheAuvc5IdcAnQx9AFISDfYFOSvW/8MvjEliJw
|
||||
iUXRvMmbAEIsu3sZY9HMF4CjWfVtCpxItSMjEPyoE7yh/sDNi7omLcY6MtWQctdhHHTrNQ0geE2o
|
||||
tZuqS94Wf5HY1yGae6D/SASJNcIIDn2TsSk46IY8xHboxsjXJEJR4XhFd8vSwvl8IIzz4Gzat/ic
|
||||
vIYM6AfoI0V6kzmxbGMpubj24daRbQg8kSULa6pDJJCWEzhKKRzU7dcYbli7AyPjlND6oDQGFZ8E
|
||||
sKz5IS+gzeq1CkkG1ZraGbNlCXGYwTV7StKSsPn2Q70o3O+qJYSmpVgKicV+8NCG27GAlHRF9ToF
|
||||
wuiA6ppKl7mDRn/qUy6BKmU8ASdNjLKdfKFmZPLWDSNrvAk5YqNRSBtDbtqQp8ruiHa8rNT6q9k4
|
||||
C5IJkZT5300LsWihK4vL6qW6xnStruvs3ADTEFTaa2G3RB8pLW0ong+5Hcmh4NrRPsu77IR7R2BJ
|
||||
kB3teH8ML8d21jDVMzY0x76PodeSUtl+S87Nd0PoGJ6zDxH2hpsmKzttF+jQko6tcZYJ+2a2x22W
|
||||
YRo8W65CcadVPJwfCdfsWMaMb5tmWz5Padvfy0qbW2fEiu5EPZHdm7Fvx5+t3jAm/HtMbOCJtayp
|
||||
2/aCqsnQb6qiyT6QWUomcj/u2RSuzt6iRoNuQyilggndmgtun7s1xRJhQwIIEgTd1CIA72afhPoZ
|
||||
kJejx0/DfI7SUofCBh2Ensb2g46wANxBldugdOqU+BPsL8X527b7IqGbO64JkiGPkUP6WIaVwer3
|
||||
iTbsp2P4I5XDBDt2jBHGS0OaTrKelKNmbKMyFwCnvH0p/qcTLT4C9YPieyB/DDlC6slwzeZgGrJz
|
||||
OUk5lzfILg5spbxebVp5r4EPEN9HfRmyl5LB/j10Hwngnsa9GLThfAOnSk4Frxl+sPmYevjhNqv7
|
||||
mf0M1pdhgEuMXxPkGUgYwGAcYZ7uPieg5/8F6A8Rb9X6r1ym3tcE/AD6yegEePv5fzI7cuNH9g19
|
||||
nfROZBsNAqa9m54CRVtawltYbHCfHeLe/Ptq4l45DZf+3dJfXV3t36Aj1TmhXuN9dm5PgN4HGW8W
|
||||
yoZXR5uMbG7rLjR9DOv03taqZs+pXUXCFLzezJOEvirSd0d6JuurIB9c9KvxIriScE3F3fmD6VVQ
|
||||
7V4jO+nDR4tJWib5TrA4PdlIDiyuxsM37b0sKqPHnt3t3T1D9DoU9gRHe3Hfx/Mh22Ps7JsvMb8T
|
||||
GD1JyK76SJbNYcw7tUg6eT+mts1zAVwlijdsaCVMUWthqcbsxjdUlYYk1K3GNusjjw+pul+dmdPH
|
||||
Dxf14/PT6ujd0b8AAAD//wMAcO55jFcOAAA=
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 996fc2c3bbf5d7df-MXP
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Fri, 31 Oct 2025 02:35:53 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=kgKK5IJqaYXKSVMHugdSipIgres75xcyE7AFoQvJpYQ-1761878153-1.0.1.1-Gs3miwKehE3t4oQeqLEaesnuSTAZMKeqirw5cieEuAcSRSUCmzwzKvXjWzc8yPxfuzLx3j8JOtRH4vqLwl0.G4VN12X8AB5I4TbGRI8pdZ0;
|
||||
path=/; expires=Fri, 31-Oct-25 03:05:53 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=gRWE8NibQIkdP415ySHVelZVNQP_TP1Yiq9t0KwvhpI-1761878153913-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '9127'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '9143'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '149999355'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999357'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 0s
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_42acad04b20045f2807d0d17c11e5ce4
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"Convert all responses into valid
|
||||
JSON output."},{"role":"user","content":"Assess the quality of the task completed
|
||||
based on the description, expected output, and actual results.\n\nTask Description:\nResearch
|
||||
a topic to teach a kid aged 6 about math.\n\nExpected Output:\nA topic, explanation,
|
||||
angle, and examples.\n\nActual Output:\nI now can give a great answer.\nFinal
|
||||
Answer: \n**Topic**: Basic Addition\n\n**Explanation**:\nAddition is a fundamental
|
||||
concept in math that means combining two or more numbers to get a new total.
|
||||
It''s like putting together pieces of a puzzle to see the whole picture. When
|
||||
we add, we take two or more groups of things and count them all together.\n\n**Angle**:\nUse
|
||||
relatable and engaging real-life scenarios to illustrate addition, making it
|
||||
fun and easier for a 6-year-old to understand and apply.\n\n**Examples**:\n\n1.
|
||||
**Counting Apples**:\n Let''s say you have 2 apples and your friend gives
|
||||
you 3 more apples. How many apples do you have in total?\n - You start with
|
||||
2 apples.\n - Your friend gives you 3 more apples.\n - Now, you count all
|
||||
the apples together: 2 + 3 = 5.\n - So, you have 5 apples in total.\n\n2.
|
||||
**Toy Cars**:\n Imagine you have 4 toy cars and you find 2 more toy cars in
|
||||
your room. How many toy cars do you have now?\n - You start with 4 toy cars.\n -
|
||||
You find 2 more toy cars.\n - You count them all together: 4 + 2 = 6.\n -
|
||||
So, you have 6 toy cars in total.\n\n3. **Drawing Pictures**:\n If you draw
|
||||
3 pictures today and 2 pictures tomorrow, how many pictures will you have drawn
|
||||
in total?\n - You draw 3 pictures today.\n - You draw 2 pictures tomorrow.\n -
|
||||
You add them together: 3 + 2 = 5.\n - So, you will have drawn 5 pictures in
|
||||
total.\n\n4. **Using Fingers**:\n Let''s use your fingers to practice addition.
|
||||
Show 3 fingers on one hand and 1 finger on the other hand. How many fingers
|
||||
are you holding up?\n - 3 fingers on one hand.\n - 1 finger on the other
|
||||
hand.\n - Put them together and count: 3 + 1 = 4.\n - So, you are holding
|
||||
up 4 fingers.\n\nBy using objects that kids are familiar with, such as apples,
|
||||
toy cars, drawings, and even their own fingers, we can make the concept of addition
|
||||
relatable and enjoyable. Practicing with real items helps children visualize
|
||||
the math and understand that addition is simply combining groups to find out
|
||||
how many there are altogether.\n\nPlease provide:\n- Bullet points suggestions
|
||||
to improve future similar tasks\n- A score from 0 to 10 evaluating on completion,
|
||||
quality, and overall performance- Entities extracted from the task output, if
|
||||
any, their type, description, and relationships"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"$defs":{"Entity":{"properties":{"name":{"description":"The
|
||||
name of the entity.","title":"Name","type":"string"},"type":{"description":"The
|
||||
type of the entity.","title":"Type","type":"string"},"description":{"description":"Description
|
||||
of the entity.","title":"Description","type":"string"},"relationships":{"description":"Relationships
|
||||
of the entity.","items":{"type":"string"},"title":"Relationships","type":"array"}},"required":["name","type","description","relationships"],"title":"Entity","type":"object","additionalProperties":false}},"properties":{"suggestions":{"description":"Suggestions
|
||||
to improve future similar tasks.","items":{"type":"string"},"title":"Suggestions","type":"array"},"quality":{"description":"A
|
||||
score from 0 to 10 evaluating on completion, quality, and overall performance,
|
||||
all taking into account the task description, expected output, and the result
|
||||
of the task.","title":"Quality","type":"number"},"entities":{"description":"Entities
|
||||
extracted from the task output.","items":{"$ref":"#/$defs/Entity"},"title":"Entities","type":"array"}},"required":["suggestions","quality","entities"],"title":"TaskEvaluation","type":"object","additionalProperties":false},"name":"TaskEvaluation","strict":true}},"stream":false}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '4017'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- _cfuvid=gRWE8NibQIkdP415ySHVelZVNQP_TP1Yiq9t0KwvhpI-1761878153913-0.0.1.1-604800000;
|
||||
__cf_bm=kgKK5IJqaYXKSVMHugdSipIgres75xcyE7AFoQvJpYQ-1761878153-1.0.1.1-Gs3miwKehE3t4oQeqLEaesnuSTAZMKeqirw5cieEuAcSRSUCmzwzKvXjWzc8yPxfuzLx3j8JOtRH4vqLwl0.G4VN12X8AB5I4TbGRI8pdZ0
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.109.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-helper-method:
|
||||
- chat.completions.parse
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.109.1
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.10
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
        H4sIAAAAAAAAAwAAAP//vJZNbyM3DIbv+RWELr3YQex8ub65aYrdQ9EtmiJAdwKDljgzTDSSVh9O
        jCD/vZBmHDupD0WR7sWw9Yrk84oac56PAAQrMQchW4yyc3p8dfvXp9vZ6WT1uz6v4283fLqYfbm+
        /frt6fPNVIxyhF3dk4zbqGNpO6cpsjW9LD1hpJx1cnkxmV3OJudnReisIp3DGhfHZ8eTcceGx9OT
        6fn45Gw8ORvCW8uSgpjD1yMAgOfymUGNoicxh5PRdqWjELAhMX/dBCC81XlFYAgcIpooRjtRWhPJ
        FPbnSoTUNBQyeajE/GslPhupkyJYc0ioA1gPbCJ5lJHXBKSpIxMDRAtkWjSSgEyDTVmG2nrY2GQa
        kC1r5ckcV2JUiT+4c5rrDWg0TcKGgNZkoE4+tuRzsg6jbAHhYrwh9GOr1Q8B8rl6askEtgY0rUn3
        +RZKwbc0gGfGgseRaSCTNvlcZsB26CNLdpgDAI0CT2xq6yWBJvSGTTOQOpIZNLY7Bfp250TWQ3hg
        rXsYoCdHMpIqxnNIJJRtDhmOuU/62UjrnfUYCWoitUL5UKhDoBDKyXUUW6sKfUcYkidIRpHP/VM9
        3d2oEt8Sao6bSsx/HFWCTCyec+ueK2Gwo0rMK/ETBpawUIqz34IQN67XbqxjWZYUBenZ9VvmlVhA
        nYzCjIM6G5DkIrDJvWkhtph/rK1eU+nMisvZxEebrXTWE5jUrcgXEw1FQDD0CNFGHNrmSZcOhJbd
        cN+un5xGg6+cC9NoKt+unzA/VaESdy+jfXeHfV31vAedXR2kbbxNLoCt97lrNgqwhwZp07aFB9Df
        n/I7zKscnGsuXG9jn3Ywd7gPBljrFKLvr6utAYcikELOiCVj/9g8ULl423YVTlxpKrc8oml4pem/
        erixG7hC/6Hw0W5Aov8e+D97fMwlv7CMyX9sD1Sf+3vY+LMU/IVNQx/birpP+f9YuHvZHzue6hQw
        zz6TtN4T0Bgb+2x54N0NysvriNO2cd6uwrtQUbPh0C49YbAmj7MQrRNFfTkCuCujNL2ZjsJ527m4
        jPaBSrnZxTBKxW6E79Tp5VYtfwc7YTI53SpvMi4VRWQd9saxkChbUrvY3ezGpNjuCUd7vv/Jcyh3
        751N82/S7wSZG0xq6Twplm8977Z5ui+z7fC213MuwCKQX7OkZWTyuReKaky6f/EQYRMidcv+tjnP
        /dtH7ZZncjo7n9Szi6k4ejn6GwAA//8DAN8BUTKMCQAA
    headers:
      CF-RAY:
      - 996fc2fe8f28d7df-MXP
      Connection:
      - keep-alive
      Content-Encoding:
      - gzip
      Content-Type:
      - application/json
      Date:
      - Fri, 31 Oct 2025 02:35:57 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - max-age=31536000; includeSubDomains; preload
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - nosniff
      access-control-expose-headers:
      - X-Request-ID
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '3759'
      openai-project:
      - proj_xitITlrFeen7zjNSzML82h9x
      openai-version:
      - '2020-10-01'
      x-envoy-upstream-service-time:
      - '3850'
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-project-tokens:
      - '150000000'
      x-ratelimit-limit-requests:
      - '30000'
      x-ratelimit-limit-tokens:
      - '150000000'
      x-ratelimit-remaining-project-tokens:
      - '149999357'
      x-ratelimit-remaining-requests:
      - '29999'
      x-ratelimit-remaining-tokens:
      - '149999355'
      x-ratelimit-reset-project-tokens:
      - 0s
      x-ratelimit-reset-requests:
      - 2ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_dc7b54ac42e9466fba10bf30986b2bf0
    status:
      code: 200
      message: OK
- request:
    body: '{"input":["Using Fingers(Example): An illustration of addition using fingers
      to make the concept relatable and tangible."],"model":"text-embedding-3-small","encoding_format":"base64"}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate, zstd
      connection:
      - keep-alive
      content-length:
      - '183'
      content-type:
      - application/json
      cookie:
      - _cfuvid=NaNzk_g04ozIFjR4zD0Joz9IJWntjKPTzifJUuegmAo-1756166265356-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.109.1
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.109.1
      x-stainless-read-timeout:
      - '600'
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.10
    method: POST
    uri: https://api.openai.com/v1/embeddings
  response:
    body:
      string: !!binary |
        H4sIAAAAAAAAA1R6WRO6Opvn/fspTp1bp0s2SXLukF3ABAERp6amABEBcWEJkK7+7l347+mZufEC
        KAlJnue35d//9ddff7+zusiHv//56+9n1Q9//4/12i0d0r//+et//uuvv/76699/v//fk0WbFbdb
        9Sp/j/9uVq9bMf/9z1/cf1/5vw/989ff3/zNE7Me93UHrC6X94LLkYObLC7lv7EKZn/rUs8rmpoZ
        J9+HidhMJDWaTz2/CnuBR28k1OT3LuCDuVVQcE0dopN2qJfTxf+iiJcPhBxUN5thU0LY388W5qvZ
        C0VFxxDWzfZFFKXQgbB9hip8XiKeqFuvZ0u6HDmgF1ueGGGkALE0DhvYB+OepopVADafuxLBxh6I
        1hMYTvxdKSB6oAwLfUqy5agtCXKwVWHZYhmYgutFAet4SUyoU/dPAhzYbr4uVYdNyLjLVV9gcy0t
        qquyonHdvd9AOgYSObTHRKOp0MSQpMWVuNExZ91nHDeAFalPtbrGnXi7MRttvhUkx01xcAU9ezTo
        4w1Xqn2vt5rb2M8KjtzNI+cwUph4fpsTXKTgQVV1rjU+lBUJpVgySeg0Vibc3l4Br+r7TZX+UQN2
        qnQdVbau0KuYnTQuVcIW1Q16EcMWDTDtZaNFvHsSMdA7JePfvdQidrmkxHSPd3c+PMs3zJNNiMXJ
        rurZyuYS7oUDR4PyYmlv+Ih7KNf7L7E+nRWKxSBiaNaLQLAzq4B39ahBGbVbcvqIPZhySHVwEjYJ
        DbzEd6n81Ra0cG1D9WdWZ+zalj5K9rc9CaSMaAuTwgl+WqzSsKnvWo8r3UFIlaoRLodrJrpB4COB
        1xSq18+mppxbOSjPvz7NzpqrUb29KVDM6WaUxZulCZx9kWHG3Deehlxzn+Hl3UBQfVWiFEkXTls1
        adBtakaa3b6vml6dXILN4Ok0dO9mLVynXQMb1T6O9SmNs9692JMcLZ5HzrvHPhS/4p7bmdMVUQ8e
        SzArWs+BsAomas9FVy+Tesbo6PVkBORBNO7ZiDIsrwkdBeUyZSzlfRPhJv5SPdtZ2QyfPJaFD7JJ
        EqrXjF8YCqDYORTzzqBnwmfpYnhlHKFm6hlsNl7EgeomOeFxCFqXFcPWg2F33GH5WisdO4eOAvdj
        wdOjafshd3nBFt54USb4VClsRN9shFKCbsTvdAkwpX35SNzJDRbbbZot2cFv0I5YHtG+V1QzP4wC
        AHIxpnbpBzU/wm8Kw933izmZbTKGbV+CwWwdiBsEgzapvC7LmTdy1ImXqmu+pJTR7sbvKbmeDW2W
        ODEA7qOHxAlNI2R1gRwYlJNBk3E4dtyW4yDCkvLC4h17Gl38rwDbJTqQC6I06/lTLwG3UDbknvpN
        NqTv9otg4wzEcVNb49/+E8Oro1R4MW9KJuyaqwKVaeuNUvwx6vG5f+ZIOEOL3G9prAl1dBihLAln
        vCsDu+PlY+yjjVto63zI2XzrFQFGfVaO6OzZ7nKfWYN6PfSps5e8cGZDGaFh+3rh3VXehFM0ow1U
        X+qFWuo76RjvbVKQ7O976umLlU33WwlRmj2OJFwMmS3RFnzBW7kBmpL7zh1ezW0EzYd36aHOZW2R
        so0NRS+RSfIWqbt0MM2hFdc9wTbQQy7C+wT5i85RskR9NrfzLoB27OjkCAZXEzY7rQLCEO3pHfUm
        GLcH6wtTLJtYZoIMqHKqKtSCMKBGcH3UY+86MZDEIaCaqWaA7h9jCxGnH+iF3zw0Tgv6Fuix0uNJ
        tv1afDP6hTggkOrfLKxFdAoaJNo9ovrb22fTMVBMdIx0h1xfG6MTw0Mgw/a7wfT2GjltoWSaIMd2
        iHomtGv2JkyHu4OgkDwXSNahUk3Qg98GePfh9JDP50MJw7FdKE7xmM2OZkcQdC+X4nAONfED1Uqe
        9holpDjz2ZyFDwnSWN4Qgx1EdxE/1xH96v0aHCZ3/Jy1HurWCxBru5u6pVFOGLZH9qaWcjUz4cg7
        JeSOtwM9PIMEzNtN7sAbycxxom0TjlzTBdDG04GcQLGrewc/JZRFOiXkMxC2bOIaI+3k2kR/Zlo4
        K0H+hlOSfcjBTPaMzU0Ww1vM9vREsrSewW4YQTBcXeoxuWXzeKIYvtTuSGzruwHj8P5AeNlNKj2p
        dHBnX1dj5AMN4V17lNwFlE2DptGLMG/tXxr71U+kjxUxHyrHmLrFGHjDJqXG3aDuGAh6BZS3opLf
        +s7z8lBRkgiA6DZPwPK43yP4LSOGxc+UZIKYfmRoPDc22X9ED7BxzEfQTQEhxmF5asw4JQH8ePRK
        7OTehFy0vZbwIKQNdeC21pZzrzkiSv2cptb4ZQvKOgkmm82X7J++H/Jo9HVkmd57nEPrwlY8qlAg
        3TRsrPxgiWV5A40dOvz6GZhYc3Bku73bmE8dwjhPS3zAP64valyuJ8A6y9hAtT/VVGeq704C1UuU
        NRkbxX1k1KN/ezYobZ8G1e7inA19TCVI0vxK8/frVovi8xzB/PxM6cV2olqY7hwHD1eyIYrtqK5A
        iTRBITuGGEzMDwWzG75QcficXrOuY+PGYim6CNfDb307bqv6DSzL5UuDaNvXUygIX3QhmzPJQ1Jl
        w6s599A1Y4XkXfjNmCYbJbr2RUSJe7rVw4sdSxjcdItei/zA5lMleHCpbgLxzsbenc+GHqFmwDqJ
        ryez42rTU+H+lW3/9L/ZfXAeuu089qs/d1JLxf7NL135WDi/wTuBnF4pRBOrZzeY+ijITTGHxFz8
        IxAfNeJg6uID5vaHoJsXdbB/eELNFws6rqgeX8SQYxD7UN66CYdRhOKP0FLFegvhwmsXDMV82BBy
        yLbucJpOE9oKUzbKL2fIJr9jPlLrT0EN2DHth7+o4POAmBHNXG7la+hXryrjlmzi9m8Ovh5XlWRB
        cNRYtQxf2Sja45it+EsllVdgW+8S6u7SHkzdvYGoV54hsVzCa727++jwW8aM7nO/dCfX9jdQ9FJ5
        3EIh6dZ+38LhepcwW/kJ09nUoLAjO/zaR0bHr/0aBkUX482ztcHkXuwFBari/+GTAjk+KhTcTIsa
        39unmx+LW8K3Vc7ENjKvG9RjtoF2kFvkpH29bHokZYASsZ2o973lGv2MIwTr+mFw3nzDGd1LHxm7
        7YG6sCk6AW48DLdNN1DzqCHQ3/HswCUzAF4egRnyqUsUsPmWEE/9o2aT0p0nSVpykxxvrOlYevV6
        KIYLT518CDIqPm8RVKPdh/i3CmlUgm8VnSdFJl6slBk72J0DwQtgqu/ubrjUohAgFy5s/Bi3qqbu
        wzFBHX1r4h4vRyASrozgEOGe2Bz51AzKFofqmCzjjC2rnnEvtuj1yFRCknsMlkTEFWxcIx5BEAwu
        02IfwvkWHukRDlk3SHe5hCHhEE2qfelOczlu4In5GXE2oeBOu6dmIxQeW2oJuzJbJGlfoe3ttND9
        hlzqWdiJEK77gyqyPXVlfJEqCEL5SCx/Prm0P/MLPPnch/hmpIXiQ9n7EMEek6j4PsNJVr8C3JmO
        RskD7utJPSURFNTAoRo8VNlMvbaCvKl0VJPDupvuyiGGenuZxnzizXDJDkkDNNPy8WYITHd4CjNE
        ehBREpVlo7EzX8VQLxBPFV8g3R+8EIbOINZkqzVvd3wLyQdjaioXPxOq14IRXUIZtzsTgiVvJ4zE
        7sZ+88+W0brJUL4rbxqD5dNN2/ZoQ6NojsTPXkM2TOd9AsG40XEbP3fdkgMOoguBZ7wFgq3x1+EV
        wYNsPvCcMt6lQnMwodGRCLO7eMqm0j8mQFoKk6jQ6jNa+PcGZEa3IdrhY7vM7tz8p2/wRLtztvj9
        OQXLmaNEW/nPKG7iHGHsyXRPBzdkWncb4VO4jiTipBEs+5g5KHrZEfWfWA7HrSqPsCklld7nYuNO
        Z2UQ4Oldvyj+PmxXuLycHHJUBhQflXM4PWAuwHyZFLw1T8ewnpssguHy5Ki6Bxyg6ndfwNkAId7I
        uwDMB3lIgZRsb7hZ8UIIdS6F2WXpiV5RIRwlc05gpy5P6jAhZYtWVj6sbFOh3h1QsNCGL+HcdDYx
        bW3oRjwdOABF4Ut0U58BixxqQ8dwrtQ4FlX2B/8zoXrgOfdL7Ws+kx7awewSzO+qWmCW+4XK1T/S
        PVtgtsSHLwfjik103a/anM/7EsrDciM6nrYZs3fAh0WolDS1z1bHb4+eCizYZfQ47Udt+Iw6B2t/
        0/7RI/0guRKUa+2LYZt02hx0JxOqfVgT4ygKWn/U+woYwskb5/wTsv7CJ7a8vYULIf747liTvDFU
        ivo6bsqtF/YQfU0IB0uge/FqdxN/twvYPfGFWHWItMVr0xE+wrokxt5914PYAE92zOFNwz36gumi
        2yp8IMWmpw25dMuPn586qaSGBduwPyXOgiSQEWJnH579+jF0gG/T8yYv66dzmhwghhNPcn15hYvu
        7RaY8rFMzaB8d8t12rXQc8s3uZikYUuKr77cveWImPamB0O/sAC+XTwSY/d4ZMPzU6RAY61Mf3xn
        kf2yQtrRNql17qeM8Z6Qwjs7KdQsTjFgG/RV4A/vf37A6k9AeJvakZgvtnTT5QUbeDS6K97dp5xN
        +/ikQDWDd3Jd++cUXO8KSC3ep/ZpbDM6qEYAL36YjmCUlFo4AcNDkLvPRHnJlsZ7U6/KByXgibs5
        e9oc3lAFGXlnpLiGRONvvcIhLscbiu+HoX59xYMgn9RMotphvGqjexM5edXHmHtwojvP8fn9Z/xq
        8XiHU+9yOQxU1Sd7UFy72Q8WB2CnQsT73qC2mHEpQTlOa6JKGdXY5/rFUNbgTD3feQAqDRKG0ivB
        VB+8uZ4j3vbA4xO5VKkT0DF0Cto/+t+51mW9XJLgjWA/LfToPd1uVBOSwDRodLIvzyd3WfkV2HZ9
        jWEfyux5iCUOenx0JjZQrHDtV7E8GklILa0WwbS93OI/ekkJweROiZH7INY1QAnnuK5Q9ov05//N
        Ob3XQ3gpW7SvbEbOMh+FdBjsHHwbZaK3aT+6s/GybLjyXWqTGLpzUTx9SC5GSR2LAbCQuTCBO+Uz
        lpP3I5xxv21kKY9ici0Du+aiqIjBrx5Uo9prwtoPf/1lHET3XvdLNgag69sTUeIeu3/4+DP8Sj+9
        l/G7IIaA7qKJOCgR2Pz8FAkscSxSDOjDrbJD0kKkyhVRLKLW3NiEKXTEHSHuM+PZZLWqAKdDx1HV
        qB7ufNsbbxTdlyvVmnqrLZeXmiN8bnjqMV2qh+JQlmDVy/TOSFePcUFbIJ3Dkhibx6kWeFWFcPVH
        aPF9vDWWV4ADJ/l9puZN0jRR5AwHltzcjXxcBDUrw3mB9Lvb4vE9pTXLjNyDgIb2Hz7E8Xc7/6NP
        cIpxOERtkiA7KKzR54LFZe8hHcGvX//Wn/Hi3ABnszHoTRYho5ftqYC9G5vUxFyXTbjynN/+I8pn
        eoLpqd5bML4Egxyf9z1bVj4PnlU6kB9/ndb1geqxHolq3spwqaN9D+6RZ5GrG+616SYEDvooekis
        sRPrtd9XYO33WM6GBxCTg/JF7zhoqVXJh5D2y95BsvUdV71Sgumczxg9/XBL7FXPM1WaffjJbIfG
        UfwKF+hB/ecXkSg6OpnAa3cMP5fiSE4hat0BHzQVnh6HgR4LTwqnMFTNn1+DWVNgV9yMiwfnXlOJ
        xgQ746cvdn58hh4fEl9/qNeWqI7eNT2tePO+hTsOVJuKUXcvq+GyfWYKyI9yRjTs6zVn5GGPgHHP
        yV52zHpsE8kBZx/uaUDpPlz07NEiySHvsU7h4k7LBrdwXa91fh9sGaVWlWU37shaHz/96cHYNec/
        /owoLpIEe/3kj4veL4Cp2emN7ixUiHG5zoDVvhHDvFFc6pz3vMt+fuvqHxAnH5ZwvmBWgDh2PuNG
        iV5g1YcQjo1t09uBO3aLd/nKMBLvCTG8WnNZZ+x66E7FTA5HPIHpBMQYRnV+IbEsOiHdU8cBR+rs
        Ke4vLZjo8/WV5Xm+E33wTt1cDDcdmuIsYiGznu73yKslasXXmR5LFGfc/qlwv++nStZvwpnzIh+c
        D41J4m/j19y2PTo/fUWt7mGAj7072fBSJTbNwqMXsh9/uu6sbGTCy8wEodwI8LPwB+opPgBs651k
        xLPyTPY8sbIxK84R/O7bzQjhsWTTkqYKdOHEiC0bb+3Hx2BZfACxxStwBz7TdbB8xAlv9uSl8YFy
        k8Hj+cbkeEcGE53XXYHj1fSJwttK+JbVSgCWmBKKjShxF0qkBba6cMcb77XvJpgdJpgJ5WOU40Wt
        hUqoVMQmTiMWmZWa56Wuhx9t8clBe2mAY83BhlEy1ET9BnG9vMVehT8/8MQTK2SX7TWH+au3ifOM
        MOv23oP7kweQFs5sXP3z3YceXLzUuVVPVkIDGDS5Sux0FOqPSSsVRYftSM1gB8N+vQ9E1DxWPNqz
        SckHCEt0nVa8UeqfXwTnw8tf8e+iTVz8bH78ZdyteMxCuVbQOj56Ua5myM/Wk4NH43Ml++ntgCF9
        pQlUxXOGt1H6ZDRKnF7m032Pp29nuKu/IYP60HTE7us+G23d+8LuxuX0mFqvrNshGICNtZjELTOj
        XnZM4YDgyATvzOQBltXvBwnwBxIexqs7ajQ1geQr0cqPxPpP/vDbb+6DSvXYzI/4N5+j3DsILD5G
        y5/3Y25Thz3bGTLkjvfD2p9rbd0PJjy9H68172CA9sqswniunz++w0SOxj3g1W4Zl5GrtPHZl1+E
        PuqJrvmB1s/lCMH5/d4Sx0uTeunROEJ2395xNe5Cd8lbyYM//ukdBrMWjN0By/ftU8W7cDt2w+qf
        w2nE0fjj52wgbw4+TPn6w4OaCRbqASnsmJyD66ObcKXbaCwLlxx/fuyvPkDmW7RY8XEqzqYJV35I
        danxtDFvJQw+meMQDLhF+/UveeXL4zJoXDcAax+hvN9GRNNHm/VrXoQ21mQSo3m57ozl0Aarv0Ms
        MYjc2bYPCrAO0pNcvXCuSzsIIvShrouXbHLCuVJAAnREv3g0HTMUJHOXwCA4R8RZ/a454hUPXicn
        oNrh83ZnOd0JcNVLIwc/N3e6pXiB4NI6GLZzAebrYVnQ7/uvNlbqNW9pERtpNAprviaOTumgDoIz
        UbLXEK5+aAyrxZup3p+CcMYHVwV5/vZHASyHjp31wEPt2a3x53F8Adaa2gaseIefk6wzfvHt/o+/
        eF759srfc/nnF0PnbQBu93QduPrdlMzXwe13VlfBVd8MU8O3HcVPqQLX9NLiqcSO+ycfGK+6T5w1
        n1nfj2HxTVNisO0zHB5RLYO1XvGDqC2bVz4L1O8YUePdQ23hVWcDzVfB6BFTD/B5tLFh0otwvGsl
        DcfVDwdrHoXhcDhoA+63LdStQCQmer/D7qenUUhaih9y140aT1VAntUJ8zaMwVSIbQLNPAGr3sRs
        5fMJvI23Lf3jz65+KfSUmaf3rLmE489vAQr2CZa+Sidq1scGHq0/mN/JvPvpRAB//gXxqn4ASzS7
        HOTVz0LdVAMhGwalkG+aodPT6n9MT8JseMxyneZubbh841QVbGuQEEsrTCDs5dco78mgUGySSzbV
        Ym7Dx2NzGjd9fmFDcJwLZLExoAa35cH/4YORSP1um7jsmO9TaPMeImFHXMYh6ZGilf+MQ/2ptekE
        tjGI6jlb/dyeLcxy3xBx5uG/xlsP/QbEQ1WPXRAcXb40oA5FxZSI9dh+tWG4qxNqcEH+8AWeubYK
        jVTySW6JL8DsPHyjld+Nm3Bm7hDKtgQNIiTUqYvJXZrGWMCaX41gc+7X/ZLIcM0nsViEHlukTLDh
        bYcZ+emFr8ZTBaz1M84r/i3VS8bQC+QaP673yuU+ywH/8Hlkh4+t/Znf8OJJJHt5dcZyJSwh06Lr
        2Ct+BpYt5wgQTalHtHyZwKwE0Rfq7XmieMW3NW8WwI3nZeL6j23N1ONGh+N5uKx+U+x2F2WYgFp3
        BV7a47OevuQtwccndumPL7HmCgs4K1+LnPfuu1v07NPAixckeIaKkfGCFTZg8ISZaBb7hKNrAx3m
        +Hhf/YIToz+/rTbkEV/pVwX8hvYxWOuTGMo5DZ/gvHN++R61oWC6AjhxC3peYn5ED+lcs9sNOHA/
        5jx1yYO4fHKrffjbf7kLE8YqvNNhNx7KP3kbe8zXBP79OxXwH//666//9Xth0L5vxXM9GDAU8/Bv
        /vdRgX8T/61v0+fzzzGEsU/L4u9//usEwt+f7t1+hv89vJvi1f/9z18C/+eswd/De0if/+/1f62v
        +o9//ScAAP//AwBrHHLA4CAAAA==
    headers:
      CF-RAY:
      - 996fc319cecdedb7-MXP
      Connection:
      - keep-alive
      Content-Encoding:
      - gzip
      Content-Type:
      - application/json
      Date:
      - Fri, 31 Oct 2025 02:35:59 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=EdEqm0c4qXDd37eMDquvqY8nUh6i44aqdC__ePBIwdQ-1761878159-1.0.1.1-Y1V.w1Y5bONiTamPzQiiY1qXjAjFOKz.YIxcoojb6aoHLm_rch0X.0RiwtAKa7Of5uQVYh6zABwFVdBp_nG4IDme6u0HfG2NG.fQ10OUTo4;
        path=/; expires=Fri, 31-Oct-25 03:05:59 GMT; domain=.api.openai.com; HttpOnly;
        Secure; SameSite=None
      - _cfuvid=BvgP1vTLLqluS8718kR0r7eL._6ojjbRzMUW6Yptgfk-1761878159085-0.0.1.1-604800000;
        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - nosniff
      access-control-allow-origin:
      - '*'
      access-control-expose-headers:
      - X-Request-ID
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-model:
      - text-embedding-3-small
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '58'
      openai-project:
      - proj_xitITlrFeen7zjNSzML82h9x
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains; preload
      via:
      - envoy-router-54b578b84c-pcfpl
      x-envoy-upstream-service-time:
      - '227'
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '10000000'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
      - '9999973'
      x-ratelimit-reset-requests:
      - 6ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_1bad6e1c88504b1fb51574bb4f61deba
    status:
      code: 200
      message: OK
version: 1

@@ -29,7 +29,12 @@ class TestTokenManager(unittest.TestCase):
    @patch("crewai.cli.shared.token_manager.Fernet.generate_key")
    @patch("crewai.cli.shared.token_manager.TokenManager.read_secure_file")
    @patch("crewai.cli.shared.token_manager.TokenManager.save_secure_file")
    def test_get_or_create_key_new(self, mock_save, mock_read, mock_generate):
    @patch("crewai.cli.shared.token_manager.TokenManager._acquire_lock")
    @patch("crewai.cli.shared.token_manager.TokenManager._release_lock")
    @patch("builtins.open", new_callable=unittest.mock.mock_open)
    def test_get_or_create_key_new(
        self, mock_open, mock_release_lock, mock_acquire_lock, mock_save, mock_read, mock_generate
    ):
        mock_key = b"new_key"
        mock_read.return_value = None
        mock_generate.return_value = mock_key
@@ -37,9 +42,14 @@ class TestTokenManager(unittest.TestCase):
        result = self.token_manager._get_or_create_key()

        self.assertEqual(result, mock_key)
        mock_read.assert_called_once_with("secret.key")
        # read_secure_file is called twice: once for fast path, once inside lock
        self.assertEqual(mock_read.call_count, 2)
        mock_read.assert_called_with("secret.key")
        mock_generate.assert_called_once()
        mock_save.assert_called_once_with("secret.key", mock_key)
        # Verify lock was acquired and released
        mock_acquire_lock.assert_called_once()
        mock_release_lock.assert_called_once()

    @patch("crewai.cli.shared.token_manager.TokenManager.save_secure_file")
    def test_save_tokens(self, mock_save):
@@ -136,3 +146,21 @@ class TestTokenManager(unittest.TestCase):
        mock_path.__truediv__.return_value.unlink.assert_called_once_with(
            missing_ok=True
        )

    @patch("crewai.cli.shared.token_manager.Fernet.generate_key")
    @patch("crewai.cli.shared.token_manager.TokenManager.read_secure_file")
    @patch("crewai.cli.shared.token_manager.TokenManager.save_secure_file")
    @patch("builtins.open", side_effect=OSError(9, "Bad file descriptor"))
    def test_get_or_create_key_oserror_fallback(
        self, mock_open, mock_save, mock_read, mock_generate
    ):
        """Test that OSError during file locking falls back to lock-free creation."""
        mock_key = Fernet.generate_key()
        mock_read.return_value = None
        mock_generate.return_value = mock_key

        result = self.token_manager._get_or_create_key()

        self.assertEqual(result, mock_key)
        self.assertGreaterEqual(mock_generate.call_count, 1)
        self.assertGreaterEqual(mock_save.call_count, 1)

@@ -78,6 +78,17 @@ def auto_mock_telemetry(request):
    mock_instance = create_mock_telemetry_instance()
    mock_telemetry_class.return_value = mock_instance

    # Create mock for TraceBatchManager
    mock_trace_manager = Mock()
    mock_trace_manager.add_trace = Mock()
    mock_trace_manager.send_batch = Mock()
    mock_trace_manager.stop = Mock()

    # Create mock for BatchSpanProcessor to prevent OpenTelemetry background threads
    mock_batch_processor = Mock()
    mock_batch_processor.shutdown = Mock()
    mock_batch_processor.force_flush = Mock()

    with (
        patch(
            "crewai.events.event_listener.Telemetry",
@@ -86,6 +97,22 @@ def auto_mock_telemetry(request):
        patch("crewai.tools.tool_usage.Telemetry", mock_telemetry_class),
        patch("crewai.cli.command.Telemetry", mock_telemetry_class),
        patch("crewai.cli.create_flow.Telemetry", mock_telemetry_class),
        patch(
            "crewai.events.listeners.tracing.trace_batch_manager.TraceBatchManager",
            return_value=mock_trace_manager,
        ),
        patch(
            "crewai.events.listeners.tracing.trace_listener.TraceBatchManager",
            return_value=mock_trace_manager,
        ),
        patch(
            "crewai.events.listeners.tracing.first_time_trace_handler.TraceBatchManager",
            return_value=mock_trace_manager,
        ),
        patch(
            "opentelemetry.sdk.trace.export.BatchSpanProcessor",
            return_value=mock_batch_processor,
        ),
    ):
        yield mock_instance

@@ -175,8 +202,8 @@ def clear_event_bus_handlers(setup_test_environment):

    yield

    # Shutdown event bus and wait for all handlers to complete
    crewai_event_bus.shutdown(wait=True)
    # Shutdown event bus without waiting to avoid hanging on blocked threads
    crewai_event_bus.shutdown(wait=False)
    crewai_event_bus._initialize()

    callback = EvaluationTraceCallback()

@@ -482,3 +482,48 @@ def test_openai_get_client_params_no_base_url():
    client_params = llm._get_client_params()
    # When no base_url is provided, it should not be in the params (filtered out as None)
    assert "base_url" not in client_params or client_params.get("base_url") is None


def test_openai_streaming_with_response_model():
    """
    Test that streaming with response_model works correctly and doesn't call invalid API methods.
    This test verifies the fix for the bug where streaming with response_model attempted to call
    self.client.responses.stream() with invalid parameters (input, text_format).
    """
    from pydantic import BaseModel

    class TestResponse(BaseModel):
        """Test response model."""

        answer: str
        confidence: float

    llm = LLM(model="openai/gpt-4o", stream=True)

    with patch.object(llm.client.chat.completions, "create") as mock_create:
        mock_chunk1 = MagicMock()
        mock_chunk1.choices = [
            MagicMock(delta=MagicMock(content='{"answer": "test", ', tool_calls=None))
        ]

        mock_chunk2 = MagicMock()
        mock_chunk2.choices = [
            MagicMock(
                delta=MagicMock(content='"confidence": 0.95}', tool_calls=None)
            )
        ]

        mock_create.return_value = iter([mock_chunk1, mock_chunk2])

        result = llm.call("Test question", response_model=TestResponse)

        assert result is not None
        assert isinstance(result, str)

        assert mock_create.called
        call_kwargs = mock_create.call_args[1]
        assert call_kwargs["model"] == "gpt-4o"
        assert call_kwargs["stream"] is True

        assert "input" not in call_kwargs
        assert "text_format" not in call_kwargs

@@ -1,11 +1,13 @@
"""Test Agent creation and execution basic functionality."""

from io import StringIO
import json
import threading
from collections import defaultdict
from concurrent.futures import Future
from hashlib import md5
import re
import sys
from unittest.mock import ANY, MagicMock, call, patch

from crewai.agent import Agent
@@ -2442,37 +2444,51 @@ def test_memory_events_are_emitted():

    @crewai_event_bus.on(MemorySaveStartedEvent)
    def handle_memory_save_started(source, event):
        with condition:
            events["MemorySaveStartedEvent"].append(event)
            condition.notify_all()

    @crewai_event_bus.on(MemorySaveCompletedEvent)
    def handle_memory_save_completed(source, event):
        with condition:
            events["MemorySaveCompletedEvent"].append(event)
            condition.notify_all()

    @crewai_event_bus.on(MemorySaveFailedEvent)
    def handle_memory_save_failed(source, event):
        with condition:
            events["MemorySaveFailedEvent"].append(event)
            condition.notify_all()

    @crewai_event_bus.on(MemoryQueryStartedEvent)
    def handle_memory_query_started(source, event):
        with condition:
            events["MemoryQueryStartedEvent"].append(event)
            condition.notify_all()

    @crewai_event_bus.on(MemoryQueryCompletedEvent)
    def handle_memory_query_completed(source, event):
        with condition:
            events["MemoryQueryCompletedEvent"].append(event)
            condition.notify_all()

    @crewai_event_bus.on(MemoryQueryFailedEvent)
    def handle_memory_query_failed(source, event):
        with condition:
            events["MemoryQueryFailedEvent"].append(event)
            condition.notify_all()

    @crewai_event_bus.on(MemoryRetrievalStartedEvent)
    def handle_memory_retrieval_started(source, event):
        with condition:
            events["MemoryRetrievalStartedEvent"].append(event)
            condition.notify_all()

    @crewai_event_bus.on(MemoryRetrievalCompletedEvent)
    def handle_memory_retrieval_completed(source, event):
        with condition:
            events["MemoryRetrievalCompletedEvent"].append(event)
            condition.notify()
            condition.notify_all()

    math_researcher = Agent(
        role="Researcher",
@@ -2497,10 +2513,17 @@ def test_memory_events_are_emitted():

    with condition:
        success = condition.wait_for(
            lambda: len(events["MemoryRetrievalCompletedEvent"]) >= 1, timeout=5
            lambda: (
                len(events["MemorySaveStartedEvent"]) >= 3
                and len(events["MemorySaveCompletedEvent"]) >= 3
                and len(events["MemoryQueryStartedEvent"]) >= 3
                and len(events["MemoryQueryCompletedEvent"]) >= 3
                and len(events["MemoryRetrievalCompletedEvent"]) >= 1
            ),
            timeout=10,
        )

    assert success, "Timeout waiting for memory events"
    assert success, f"Timeout waiting for memory events. Got: {dict(events)}"
    assert len(events["MemorySaveStartedEvent"]) == 3
    assert len(events["MemorySaveCompletedEvent"]) == 3
    assert len(events["MemorySaveFailedEvent"]) == 0
@@ -2590,19 +2613,16 @@ def test_long_term_memory_with_memory_flag():
        agent=math_researcher,
    )

    with (
        patch("crewai.utilities.printer.Printer.print") as mock_print,
        patch("crewai.memory.long_term.long_term_memory.LongTermMemory.save") as save_memory,
    ):
    crew = Crew(
        agents=[math_researcher],
        tasks=[task1],
        memory=True,
        long_term_memory=LongTermMemory(),
    )

    with (
        patch("crewai.utilities.printer.Printer.print") as mock_print,
        patch(
            "crewai.memory.long_term.long_term_memory.LongTermMemory.save"
        ) as save_memory,
    ):
        crew.kickoff()
        mock_print.assert_not_called()
        save_memory.assert_called_once()
@@ -2855,7 +2875,7 @@ def test_manager_agent_with_tools_raises_exception(researcher, writer):


@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_train_success(researcher, writer):
def test_crew_train_success(researcher, writer, monkeypatch):
    task = Task(
        description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
        expected_output="5 bullet points with a paragraph for each idea.",
@@ -2885,7 +2905,10 @@ def test_crew_train_success(researcher, writer):
            condition.notify()

    # Mock human input to avoid blocking during training
    with patch("builtins.input", return_value="Great work!"):
    # Use StringIO to simulate user input for multiple calls to input()
    mock_inputs = StringIO("Great work!\n" * 10)  # Provide enough inputs for all iterations
    monkeypatch.setattr("sys.stdin", mock_inputs)

    crew.train(
        n_iterations=2, inputs={"topic": "AI"}, filename="trained_agents_data.pkl"
    )

@@ -31,6 +31,7 @@ class CustomLLM(BaseLLM):
        available_functions=None,
        from_task=None,
        from_agent=None,
        response_model=None,
    ):
        """
        Mock LLM call that returns a predefined response.
@@ -162,6 +163,9 @@ class JWTAuthLLM(BaseLLM):
        tools: Optional[List[dict]] = None,
        callbacks: Optional[List[Any]] = None,
        available_functions: Optional[Dict[str, Any]] = None,
        from_task=None,
        from_agent=None,
        response_model=None,
    ) -> Union[str, Any]:
        """Record the call and return a predefined response."""
        self.calls.append(
@@ -241,6 +245,9 @@ class TimeoutHandlingLLM(BaseLLM):
        tools: Optional[List[dict]] = None,
        callbacks: Optional[List[Any]] = None,
        available_functions: Optional[Dict[str, Any]] = None,
        from_task=None,
        from_agent=None,
        response_model=None,
    ) -> Union[str, Any]:
        """Simulate API calls with timeout handling and retry logic.

@@ -340,7 +340,7 @@ def test_output_pydantic_hierarchical():
    )
    result = crew.kickoff()
    assert isinstance(result.pydantic, ScoreOutput)
    assert result.to_dict() == {"score": 4}
    assert result.to_dict() == {"score": 0}


@pytest.mark.vcr(filter_headers=["authorization"])
@@ -574,8 +574,8 @@ def test_output_pydantic_to_another_task():
        goal="Score the title",
        backstory="You're an expert scorer, specialized in scoring titles.",
        allow_delegation=False,
        llm="gpt-4-0125-preview",
        function_calling_llm="gpt-3.5-turbo-0125",
        llm="gpt-4o",
        function_calling_llm="gpt-4o",
        verbose=True,
    )

@@ -599,7 +599,7 @@ def test_output_pydantic_to_another_task():
    assert isinstance(pydantic_result, ScoreOutput), (
        "Expected pydantic result to be of type ScoreOutput"
    )
    assert pydantic_result.score == 5
    assert pydantic_result.score == 4


@pytest.mark.vcr(filter_headers=["authorization"])

@@ -57,6 +57,7 @@ class TestTraceListenerSetup:
        if hasattr(TraceCollectionListener, "_instance"):
            TraceCollectionListener._instance = None
            TraceCollectionListener._initialized = False
            TraceCollectionListener._listeners_setup = False

        # Reset EventListener singleton
        if hasattr(EventListener, "_instance"):
@@ -74,6 +75,7 @@ class TestTraceListenerSetup:
        if hasattr(TraceCollectionListener, "_instance"):
            TraceCollectionListener._instance = None
            TraceCollectionListener._initialized = False
            TraceCollectionListener._listeners_setup = False

        if hasattr(EventListener, "_instance"):
            EventListener._instance = None
@@ -131,25 +133,16 @@ class TestTraceListenerSetup:
        )
        crew = Crew(agents=[agent], tasks=[task], verbose=True)

        from crewai.events.listeners.tracing.trace_listener import TraceCollectionListener
        trace_listener = TraceCollectionListener()
        from crewai.events.event_bus import crewai_event_bus

        trace_listener.setup_listeners(crewai_event_bus)

        with patch.object(
            trace_listener.batch_manager,
            "initialize_batch",
            return_value=None,
        ) as initialize_mock:
            crew.kickoff()

        assert initialize_mock.call_count >= 1
        initialized = trace_listener.batch_manager.wait_for_batch_initialization(timeout=5.0)

        call_args = initialize_mock.call_args_list[0]
        assert len(call_args[0]) == 2  # user_context, execution_metadata
        _, execution_metadata = call_args[0]
        assert isinstance(execution_metadata, dict)
        assert "crew_name" in execution_metadata
        assert initialized, "Batch should have been initialized"
        assert trace_listener.batch_manager.is_batch_initialized()
        assert trace_listener.batch_manager.current_batch is not None

    @pytest.mark.vcr(filter_headers=["authorization"])
    def test_batch_manager_finalizes_batch_clears_buffer(self):
@@ -364,24 +357,21 @@ class TestTraceListenerSetup:
        )
        crew = Crew(agents=[agent], tasks=[task], tracing=True)

        with patch.object(TraceBatchManager, "initialize_batch") as mock_initialize:
        from crewai.events.listeners.tracing.trace_listener import TraceCollectionListener
        trace_listener = TraceCollectionListener()

        crew.kickoff()

        assert mock_initialize.call_count >= 1
        assert mock_initialize.call_args_list[0][1]["use_ephemeral"] is True
        wait_for_event_handlers()

        assert trace_listener.batch_manager.is_batch_initialized(), (
            "Batch should have been initialized for unauthenticated user"
        )

    @pytest.mark.vcr(filter_headers=["authorization"])
    def test_trace_listener_with_authenticated_user(self):
        """Test that trace listener properly handles authenticated batches"""
        with (
            patch.dict(os.environ, {"CREWAI_TRACING_ENABLED": "true"}),
            patch(
                "crewai.events.listeners.tracing.trace_batch_manager.PlusAPI"
            ) as mock_plus_api_class,
        ):
            mock_plus_api_instance = MagicMock()
            mock_plus_api_class.return_value = mock_plus_api_instance

        with patch.dict(os.environ, {"CREWAI_TRACING_ENABLED": "true"}):
            agent = Agent(
                role="Test Agent",
                goal="Test goal",
@@ -394,21 +384,17 @@ class TestTraceListenerSetup:
                agent=agent,
            )

            with (
                patch.object(TraceBatchManager, "initialize_batch") as mock_initialize,
                patch.object(
                    TraceBatchManager, "finalize_batch"
                ) as mock_finalize_backend_batch,
            ):
            from crewai.events.listeners.tracing.trace_listener import TraceCollectionListener
            trace_listener = TraceCollectionListener()

            crew = Crew(agents=[agent], tasks=[task], tracing=True)
            crew.kickoff()

            wait_for_event_handlers()

            mock_plus_api_class.assert_called_with(api_key="mock_token_12345")

            assert mock_initialize.call_count >= 1
            mock_finalize_backend_batch.assert_called_with()
            assert mock_finalize_backend_batch.call_count >= 1
            assert trace_listener.batch_manager.is_batch_initialized(), (
                "Batch should have been initialized for authenticated user"
            )

    # Helper method to ensure cleanup
    def teardown_method(self):
@@ -489,26 +475,15 @@ class TestTraceListenerSetup:
        assert trace_listener.first_time_handler.is_first_time is True
        assert trace_listener.first_time_handler.collected_events is False

        with (
            patch.object(
                trace_listener.first_time_handler,
                "handle_execution_completion",
                wraps=trace_listener.first_time_handler.handle_execution_completion,
            ) as mock_handle_completion,
            patch.object(
                trace_listener.batch_manager,
                "add_event",
                wraps=trace_listener.batch_manager.add_event,
            ) as mock_add_event,
        ):
            trace_listener.batch_manager.batch_owner_type = "crew"

            result = crew.kickoff()
            wait_for_event_handlers()
            assert result is not None

            assert mock_handle_completion.call_count >= 1
            assert mock_add_event.call_count >= 1

            assert trace_listener.first_time_handler.collected_events is True
            assert trace_listener.first_time_handler.collected_events is True, (
                "Events should have been collected"
            )

        mock_prompt.assert_called_once()

@@ -556,9 +531,10 @@ class TestTraceListenerSetup:
        from crewai.events.event_bus import crewai_event_bus

        trace_listener = TraceCollectionListener()
        trace_listener.setup_listeners(crewai_event_bus)

        assert trace_listener.first_time_handler.is_first_time is True
        trace_listener.batch_manager.ephemeral_trace_url = (
            "https://crewai.com/trace/mock-id"
        )

        with (
            patch.object(
@@ -569,26 +545,17 @@ class TestTraceListenerSetup:
            patch.object(
                trace_listener.first_time_handler, "_display_ephemeral_trace_link"
            ) as mock_display_link,
            patch.object(
                trace_listener.first_time_handler,
                "handle_execution_completion",
                wraps=trace_listener.first_time_handler.handle_execution_completion,
            ) as mock_handle_completion,
        ):
            trace_listener.batch_manager.ephemeral_trace_url = (
                "https://crewai.com/trace/mock-id"
            )
            trace_listener.setup_listeners(crewai_event_bus)

            assert trace_listener.first_time_handler.is_first_time is True

            trace_listener.first_time_handler.collected_events = True

            crew.kickoff()
            wait_for_event_handlers()

            assert mock_handle_completion.call_count >= 1, (
                "handle_execution_completion should be called"
            )

            assert trace_listener.first_time_handler.collected_events is True, (
                "Events should be marked as collected"
            )
            trace_listener.first_time_handler.handle_execution_completion()

            mock_init_backend.assert_called_once()

@@ -636,14 +603,13 @@ class TestTraceListenerSetup:
        )
        crew = Crew(agents=[agent], tasks=[task])

        with patch.object(TraceBatchManager, "initialize_batch") as mock_initialize:
            result = crew.kickoff()

            assert trace_listener.batch_manager.wait_for_pending_events(timeout=5.0), (
                "Timeout waiting for trace event handlers to complete"
            wait_for_event_handlers()

            assert trace_listener.batch_manager.is_batch_initialized(), (
                "Batch should have been initialized for first-time user"
            )
            assert mock_initialize.call_count >= 1
            assert mock_initialize.call_args_list[0][1]["use_ephemeral"] is True
            assert result is not None

    def test_first_time_handler_timeout_behavior(self):
@@ -699,60 +665,43 @@ class TestTraceListenerSetup:

        mock_mark_completed.assert_called_once()

    @pytest.mark.vcr(filter_headers=["authorization"])
    def test_trace_batch_marked_as_failed_on_finalize_error(self, mock_plus_api_calls):
    def test_trace_batch_marked_as_failed_on_finalize_error(self):
        """Test that trace batch is marked as failed when finalization returns non-200 status"""
        # Test the error handling logic directly in TraceBatchManager
        batch_manager = TraceBatchManager()

        with patch.dict(os.environ, {"CREWAI_TRACING_ENABLED": "true"}):
            agent = Agent(
                role="Test Agent",
                goal="Test goal",
                backstory="Test backstory",
                llm="gpt-4o-mini",
        # Initialize a batch
        batch_manager.current_batch = batch_manager.initialize_batch(
            user_context={"privacy_level": "standard"},
            execution_metadata={
                "execution_type": "crew",
                "crew_name": "test_crew",
            },
        )
            task = Task(
                description="Say hello to the world",
                expected_output="hello world",
                agent=agent,
            )
            crew = Crew(agents=[agent], tasks=[task], verbose=True)

            trace_listener = TraceCollectionListener()
            from crewai.events.event_bus import crewai_event_bus

            trace_listener.setup_listeners(crewai_event_bus)

            mock_init_response = MagicMock()
            mock_init_response.status_code = 200
            mock_init_response.json.return_value = {"trace_id": "test_batch_id_12345"}
        batch_manager.trace_batch_id = "test_batch_id_12345"
        batch_manager.backend_initialized = True

        # Mock the API responses
        with (
            patch.object(
                trace_listener.batch_manager.plus_api,
                "initialize_trace_batch",
                return_value=mock_init_response,
            ),
            patch.object(
                trace_listener.batch_manager.plus_api,
                batch_manager.plus_api,
                "send_trace_events",
                return_value=MagicMock(status_code=200),
            ),
            patch.object(
                trace_listener.batch_manager.plus_api,
                batch_manager.plus_api,
                "finalize_trace_batch",
                return_value=MagicMock(
                    status_code=500, text="Internal Server Error"
                ),
                return_value=MagicMock(status_code=500, text="Internal Server Error"),
            ),
            patch.object(
                trace_listener.batch_manager.plus_api,
                batch_manager.plus_api,
                "mark_trace_batch_as_failed",
                wraps=mock_plus_api_calls["mark_trace_batch_as_failed"],
            ) as mock_mark_failed,
        ):
            crew.kickoff()
            wait_for_event_handlers()
            # Call finalize_batch directly
            batch_manager.finalize_batch()

            mock_mark_failed.assert_called_once()
            call_args = mock_mark_failed.call_args_list[0]
            assert call_args[0][1] == "Internal Server Error"
            # Verify that mark_trace_batch_as_failed was called with the error message
            mock_mark_failed.assert_called_once_with(
                "test_batch_id_12345", "Internal Server Error"
            )

@@ -1,12 +1,9 @@
interactions:
- request:
    body: '{"messages": [{"role": "user", "content": "Name: Alice, Age: 30"}], "model":
      "gpt-4o-mini", "tool_choice": {"type": "function", "function": {"name": "SimpleModel"}},
      "tools": [{"type": "function", "function": {"name": "SimpleModel", "description":
      "Correctly extracted `SimpleModel` with all the required parameters with correct
      types", "parameters": {"properties": {"name": {"title": "Name", "type": "string"},
      "age": {"title": "Age", "type": "integer"}}, "required": ["age", "name"], "type":
      "object"}}}]}'
    body: '{"messages":[{"role":"system","content":"Please convert the following text
      into valid JSON.\n\nOutput ONLY the valid JSON and nothing else.\n\nThe JSON
      must follow this schema exactly:\n```json\n{\n name: str,\n age: int\n}\n```"},{"role":"user","content":"Name:
      Alice, Age: 30"}],"model":"gpt-4o-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"name":{"title":"Name","type":"string"},"age":{"title":"Age","type":"integer"}},"required":["name","age"],"title":"SimpleModel","type":"object","additionalProperties":false},"name":"SimpleModel","strict":true}},"stream":false}'
    headers:
      accept:
      - application/json
@@ -15,54 +12,49 @@ interactions:
      connection:
      - keep-alive
      content-length:
      - '507'
      - '614'
      content-type:
      - application/json
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.59.6
      - OpenAI/Python 1.109.1
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-helper-method:
      - chat.completions.parse
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.59.6
      x-stainless-raw-response:
      - 'true'
      - 1.109.1
      x-stainless-read-timeout:
      - '600'
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.7
      - 3.12.10
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    content: "{\n  \"id\": \"chatcmpl-Aq4a4xDv8G0i4fbTtPJEI2B8UNBup\",\n  \"object\":
      \"chat.completion\",\n  \"created\": 1736974028,\n  \"model\": \"gpt-4o-mini-2024-07-18\",\n
      \ \"choices\": [\n    {\n      \"index\": 0,\n      \"message\": {\n        \"role\":
      \"assistant\",\n        \"content\": null,\n        \"tool_calls\": [\n          {\n
      \           \"id\": \"call_uO5nec8hTk1fpYINM8TUafhe\",\n            \"type\":
      \"function\",\n            \"function\": {\n              \"name\": \"SimpleModel\",\n
      \             \"arguments\": \"{\\\"name\\\":\\\"Alice\\\",\\\"age\\\":30}\"\n
      \           }\n          }\n        ],\n        \"refusal\": null\n      },\n
      \     \"logprobs\": null,\n      \"finish_reason\": \"stop\"\n    }\n  ],\n
      \ \"usage\": {\n    \"prompt_tokens\": 79,\n    \"completion_tokens\": 10,\n
      \   \"total_tokens\": 89,\n    \"prompt_tokens_details\": {\n      \"cached_tokens\":
      0,\n      \"audio_tokens\": 0\n    },\n    \"completion_tokens_details\": {\n
      \     \"reasoning_tokens\": 0,\n      \"audio_tokens\": 0,\n      \"accepted_prediction_tokens\":
      0,\n      \"rejected_prediction_tokens\": 0\n    }\n  },\n  \"service_tier\":
      \"default\",\n  \"system_fingerprint\": \"fp_72ed7ab54c\"\n}\n"
    body:
      string: !!binary |
        H4sIAAAAAAAAAwAAAP//jJJNa9wwEIbv/hViznaRN/vpW9tDCbmEQkmgDkaRxl519YUkl6bL/vci
        e7N2vqAXH+aZd/y+ozlmhIAUUBHgexa5dqr4end/+7d84nf0dkG71eH6x5dveGPu9fLw/QrypLCP
        v5DHZ9UnbrVTGKU1I+YeWcQ0tdysy+2G7nblALQVqJKsc7FY2kJLI4sFXSwLuinK7Vm9t5JjgIr8
        zAgh5Dh8k08j8A9UhObPFY0hsA6hujQRAt6qVAEWggyRmQj5BLk1Ec1g/ViDYRprqGr4rCTHGvIa
        WJcqV/Q0V3ls+8CSc9MrNQPMGBtZSj74fTiT08Whsp3z9jG8kkIrjQz7xiML1iQ3IVoHAz1lhDwM
        m+hfhAPnrXaxifaAw+9Kuh3nwfQAE92dWbSRqZmo3OTvjGsERiZVmK0SOON7FJN02jvrhbQzkM1C
        vzXz3uwxuDTd/4yfAOfoIorGeRSSvww8tXlM5/lR22XJg2EI6H9Ljk2U6NNDCGxZr8ajgfAUIuqm
        laZD77wcL6d1zWpNWbvG1WoH2Sn7BwAA//8DAFzfDxVHAwAA
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 9028b81aeb1cb05f-ATL
      - 996f142248320e95-MXP
      Connection:
      - keep-alive
      Content-Encoding:
@@ -70,15 +62,17 @@ interactions:
      Content-Type:
      - application/json
      Date:
      - Wed, 15 Jan 2025 20:47:08 GMT
      - Fri, 31 Oct 2025 00:36:32 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=PzayZLF04c14veGc.0ocVg3VHBbpzKRW8Hqox8L9U7c-1736974028-1.0.1.1-mZpK8.SH9l7K2z8Tvt6z.dURiVPjFqEz7zYEITfRwdr5z0razsSebZGN9IRPmI5XC_w5rbZW2Kg6hh5cenXinQ;
        path=/; expires=Wed, 15-Jan-25 21:17:08 GMT; domain=.api.openai.com; HttpOnly;
      - __cf_bm=EsqV2uuHnkXCOCTW4ZgAmdmEKc4Mm3rVQw8twE209RI-1761870992-1.0.1.1-9xJoNnZ.Dpd56yJgZXGBk6iT6jSA7DBzzX2o7PVGP0baco7.cdHEcyfEimiAqgD6HguvoiO.P6i.fx.aeHfpa6fmsTSTXeC5pUlCU_yJcRA;
        path=/; expires=Fri, 31-Oct-25 01:06:32 GMT; domain=.api.openai.com; HttpOnly;
        Secure; SameSite=None
      - _cfuvid=ciwC3n2Srn20xx4JhEUeN6Ap0tNBaE44S95nIilboQ0-1736974028496-0.0.1.1-604800000;
      - _cfuvid=KGFXdIUU9WK3qTOFK_oSCA_E_JdqnOONwqzgqMuyGto-1761870992424-0.0.1.1-604800000;
        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      Strict-Transport-Security:
      - max-age=31536000; includeSubDomains; preload
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
@@ -87,28 +81,41 @@ interactions:
      - X-Request-ID
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '439'
      - '488'
      openai-project:
      - proj_xitITlrFeen7zjNSzML82h9x
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains; preload
      x-envoy-upstream-service-time:
      - '519'
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-project-tokens:
      - '150000000'
      x-ratelimit-limit-requests:
      - '30000'
      x-ratelimit-limit-tokens:
      - '150000000'
      x-ratelimit-remaining-project-tokens:
      - '149999945'
      x-ratelimit-remaining-requests:
      - '29999'
      x-ratelimit-remaining-tokens:
      - '149999978'
      - '149999945'
      x-ratelimit-reset-project-tokens:
      - 0s
      x-ratelimit-reset-requests:
      - 2ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_a468000458b9d2848b7497b2e3d485a3
    http_version: HTTP/1.1
    status_code: 200
      - req_4a7800f3477e434ba981c5ba29a6d7d3
    status:
      code: 200
      message: OK
version: 1

File diff suppressed because one or more lines are too long
@@ -231,7 +231,7 @@ def test_get_conversion_instructions_gpt() -> None:
    expected_instructions = (
        "Please convert the following text into valid JSON.\n\n"
        "Output ONLY the valid JSON and nothing else.\n\n"
        "The JSON must follow this schema exactly:\n```json\n"
        "Use this format exactly:\n```json\n"
        f"{model_schema}\n```"
    )
    assert instructions == expected_instructions
@@ -241,8 +241,14 @@ def test_get_conversion_instructions_non_gpt() -> None:
    llm = LLM(model="ollama/llama3.1", base_url="http://localhost:11434")
    with patch.object(LLM, "supports_function_calling", return_value=False):
        instructions = get_conversion_instructions(SimpleModel, llm)
    assert '"name": str' in instructions
    assert '"age": int' in instructions
    # Check that the JSON schema is properly formatted
    assert "Please convert the following text into valid JSON" in instructions
    assert "Output ONLY the valid JSON and nothing else" in instructions
    assert "Use this format exactly" in instructions
    assert "```json" in instructions
    assert '"type": "object"' in instructions
    assert '"properties"' in instructions
    assert "'type': 'json_schema'" not in instructions


# Tests for is_gpt
@@ -295,16 +301,24 @@ def test_create_converter_fails_without_agent_or_converter_cls() -> None:

def test_generate_model_description_simple_model() -> None:
    description = generate_model_description(SimpleModel)
    expected_description = '{\n "name": str,\n "age": int\n}'
    assert description == expected_description
    # generate_model_description now returns a JSON schema dict
    assert isinstance(description, dict)
    assert description["type"] == "json_schema"
    assert description["json_schema"]["name"] == "SimpleModel"
    assert description["json_schema"]["strict"] is True
    assert "name" in description["json_schema"]["schema"]["properties"]
    assert "age" in description["json_schema"]["schema"]["properties"]


def test_generate_model_description_nested_model() -> None:
    description = generate_model_description(NestedModel)
    expected_description = (
        '{\n "id": int,\n "data": {\n "name": str,\n "age": int\n}\n}'
    )
    assert description == expected_description
    # generate_model_description now returns a JSON schema dict
    assert isinstance(description, dict)
    assert description["type"] == "json_schema"
    assert description["json_schema"]["name"] == "NestedModel"
    assert description["json_schema"]["strict"] is True
    assert "id" in description["json_schema"]["schema"]["properties"]
    assert "data" in description["json_schema"]["schema"]["properties"]


def test_generate_model_description_optional_field() -> None:
@@ -313,8 +327,11 @@ def test_generate_model_description_optional_field() -> None:
        age: int | None

    description = generate_model_description(ModelWithOptionalField)
    expected_description = '{\n "name": str,\n "age": int | None\n}'
    assert description == expected_description
    # generate_model_description now returns a JSON schema dict
    assert isinstance(description, dict)
    assert description["type"] == "json_schema"
    assert description["json_schema"]["name"] == "ModelWithOptionalField"
    assert description["json_schema"]["strict"] is True


def test_generate_model_description_list_field() -> None:
@@ -322,8 +339,11 @@ def test_generate_model_description_list_field() -> None:
        items: list[int]

    description = generate_model_description(ModelWithListField)
    expected_description = '{\n "items": List[int]\n}'
    assert description == expected_description
    # generate_model_description now returns a JSON schema dict
    assert isinstance(description, dict)
    assert description["type"] == "json_schema"
    assert description["json_schema"]["name"] == "ModelWithListField"
    assert description["json_schema"]["strict"] is True


def test_generate_model_description_dict_field() -> None:
@@ -331,8 +351,11 @@ def test_generate_model_description_dict_field() -> None:
        attributes: dict[str, int]

    description = generate_model_description(ModelWithDictField)
    expected_description = '{\n "attributes": Dict[str, int]\n}'
    assert description == expected_description
    # generate_model_description now returns a JSON schema dict
    assert isinstance(description, dict)
    assert description["type"] == "json_schema"
    assert description["json_schema"]["name"] == "ModelWithDictField"
    assert description["json_schema"]["strict"] is True


@pytest.mark.vcr(filter_headers=["authorization"])
@@ -374,9 +397,11 @@ def test_converter_with_llama3_2_model() -> None:
    assert output.age == 30


@pytest.mark.vcr(filter_headers=["authorization"])
def test_converter_with_llama3_1_model() -> None:
    llm = LLM(model="ollama/llama3.1", base_url="http://localhost:11434")
    llm = Mock(spec=LLM)
    llm.supports_function_calling.return_value = True
    llm.call.return_value = '{"name": "Alice Llama", "age": 30}'

    sample_text = "Name: Alice Llama, Age: 30"
    instructions = get_conversion_instructions(SimpleModel, llm)
    converter = Converter(
@@ -570,9 +595,8 @@ def test_converter_with_ambiguous_input() -> None:
def test_converter_with_function_calling() -> None:
    llm = Mock(spec=LLM)
    llm.supports_function_calling.return_value = True

    instructor = Mock()
    instructor.to_pydantic.return_value = SimpleModel(name="Eve", age=35)
    # Mock the llm.call to return a valid JSON string
    llm.call.return_value = '{"name": "Eve", "age": 35}'

    converter = Converter(
        llm=llm,
@@ -581,13 +605,16 @@ def test_converter_with_function_calling() -> None:
        instructions="Convert this text.",
    )

    with patch.object(converter, '_create_instructor', return_value=instructor):
        output = converter.to_pydantic()

    assert isinstance(output, SimpleModel)
    assert output.name == "Eve"
    assert output.age == 35
    instructor.to_pydantic.assert_called_once()

    # Verify llm.call was called with correct parameters
    llm.call.assert_called_once()
    call_args = llm.call.call_args
    assert call_args[1]["response_model"] == SimpleModel


def test_generate_model_description_union_field() -> None:
@@ -595,8 +622,11 @@ def test_generate_model_description_union_field() -> None:
        field: int | str | None

    description = generate_model_description(UnionModel)
    expected_description = '{\n "field": int | str | None\n}'
    assert description == expected_description
    # generate_model_description now returns a JSON schema dict
    assert isinstance(description, dict)
    assert description["type"] == "json_schema"
    assert description["json_schema"]["name"] == "UnionModel"
    assert description["json_schema"]["strict"] is True

def test_internal_instructor_with_openai_provider() -> None:
    """Test InternalInstructor with OpenAI provider using registry pattern."""

45
uv.lock
generated
@@ -59,6 +59,22 @@ dev = [
    { name = "vcrpy", specifier = "==7.0.0" },
]

[[package]]
name = "a2a-sdk"
version = "0.3.10"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "google-api-core" },
    { name = "httpx" },
    { name = "httpx-sse" },
    { name = "protobuf" },
    { name = "pydantic" },
]
sdist = { url = "https://files.pythonhosted.org/packages/de/5a/3634ce054a8985c0d2ca0cb2ed1c8c8fdcd67456ddb6496895483c17fee0/a2a_sdk-0.3.10.tar.gz", hash = "sha256:f2df01935fb589c6ebaf8581aede4fe059a30a72cd38e775035337c78f8b2cca", size = 225974, upload-time = "2025-10-21T20:40:38.423Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/ba/9b/82df9530ed77d30831c49ffffc827222961422d444c0d684101e945ee214/a2a_sdk-0.3.10-py3-none-any.whl", hash = "sha256:b216ccc5ccfd00dcfa42f0f2dc709bc7ba057550717a34b0b1b34a99a76749cf", size = 140291, upload-time = "2025-10-21T20:40:36.929Z" },
]

[[package]]
name = "accelerate"
version = "1.11.0"
@@ -1076,6 +1092,11 @@ dependencies = [
]

[package.optional-dependencies]
a2a = [
    { name = "a2a-sdk" },
    { name = "httpx-auth" },
    { name = "httpx-sse" },
]
anthropic = [
    { name = "anthropic" },
]
@@ -1128,6 +1149,7 @@ watson = [

[package.metadata]
requires-dist = [
    { name = "a2a-sdk", marker = "extra == 'a2a'", specifier = "~=0.3.10" },
    { name = "anthropic", marker = "extra == 'anthropic'", specifier = ">=0.69.0" },
    { name = "appdirs", specifier = ">=1.4.4" },
    { name = "azure-ai-inference", marker = "extra == 'azure-ai-inference'", specifier = ">=1.0.0b9" },
@@ -1138,6 +1160,8 @@ requires-dist = [
    { name = "crewai-tools", marker = "extra == 'tools'", editable = "lib/crewai-tools" },
    { name = "docling", marker = "extra == 'docling'", specifier = ">=2.12.0" },
    { name = "google-genai", marker = "extra == 'google-genai'", specifier = ">=1.2.0" },
    { name = "httpx-auth", marker = "extra == 'a2a'", specifier = ">=0.23.1" },
    { name = "httpx-sse", marker = "extra == 'a2a'", specifier = ">=0.4.0" },
    { name = "ibm-watsonx-ai", marker = "extra == 'watson'", specifier = ">=1.3.39" },
    { name = "instructor", specifier = ">=1.3.3" },
    { name = "json-repair", specifier = "==0.25.2" },
@@ -1169,7 +1193,7 @@ requires-dist = [
    { name = "uv", specifier = ">=0.4.25" },
    { name = "voyageai", marker = "extra == 'voyageai'", specifier = ">=0.3.5" },
]
provides-extras = ["anthropic", "aws", "azure-ai-inference", "bedrock", "docling", "embeddings", "google-genai", "litellm", "mem0", "openpyxl", "pandas", "pdfplumber", "qdrant", "tools", "voyageai", "watson"]
provides-extras = ["a2a", "anthropic", "aws", "azure-ai-inference", "bedrock", "docling", "embeddings", "google-genai", "litellm", "mem0", "openpyxl", "pandas", "pdfplumber", "qdrant", "tools", "voyageai", "watson"]

[[package]]
name = "crewai-devtools"
@@ -2405,18 +2429,17 @@ wheels = [

[[package]]
name = "httpx"
version = "0.27.2"
version = "0.28.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "anyio" },
    { name = "certifi" },
    { name = "httpcore" },
    { name = "idna" },
    { name = "sniffio" },
]
sdist = { url = "https://files.pythonhosted.org/packages/78/82/08f8c936781f67d9e6b9eeb8a0c8b4e406136ea4c3d1f89a5db71d42e0e6/httpx-0.27.2.tar.gz", hash = "sha256:f7c2be1d2f3c3c3160d441802406b206c2b76f5947b11115e6df10c6c65e66c2", size = 144189, upload-time = "2024-08-27T12:54:01.334Z" }
sdist = { url = "https://files.pythonhosted.org/packages/b1/df/48c586a5fe32a0f01324ee087459e112ebb7224f646c0b5023f5e79e9956/httpx-0.28.1.tar.gz", hash = "sha256:75e98c5f16b0f35b567856f597f06ff2270a374470a5c2392242528e3e3e42fc", size = 141406, upload-time = "2024-12-06T15:37:23.222Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/56/95/9377bcb415797e44274b51d46e3249eba641711cf3348050f76ee7b15ffc/httpx-0.27.2-py3-none-any.whl", hash = "sha256:7bb2708e112d8fdd7829cd4243970f0c223274051cb35ee80c03301ee29a3df0", size = 76395, upload-time = "2024-08-27T12:53:59.653Z" },
    { url = "https://files.pythonhosted.org/packages/2a/39/e50c7c3a983047577ee07d2a9e53faf5a69493943ec3f6a384bdc792deb2/httpx-0.28.1-py3-none-any.whl", hash = "sha256:d909fcccc110f8c7faf814ca82a9a4d816bc5a6dbfea25d6591d6985b8ba59ad", size = 73517, upload-time = "2024-12-06T15:37:21.509Z" },
]

[package.optional-dependencies]
@@ -2424,6 +2447,18 @@ http2 = [
    { name = "h2" },
]

[[package]]
name = "httpx-auth"
version = "0.23.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "httpx" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a8/d4/6bd616f89d1ce43f602b62ec274e33beee6c2bce3d68396e692daafdb57d/httpx_auth-0.23.1.tar.gz", hash = "sha256:27b5a6022ad1b41a303b8737fa2e3e4bce6bbbe7ab67fed0b261359be62e0434", size = 121418, upload-time = "2025-01-07T18:47:20.05Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/2f/23/a72f91bea596b522ac297b948ffee6decdedb535c034fca8062bd72981ce/httpx_auth-0.23.1-py3-none-any.whl", hash = "sha256:04f8bd0824efe3d9fb79690cc670b0da98ea809babb7aea04a72f334d4fd5ec5", size = 45328, upload-time = "2025-01-07T18:47:18.694Z" },
]

[[package]]
name = "httpx-sse"
version = "0.4.0"
