Mirror of https://github.com/crewAIInc/crewAI.git (synced 2026-01-10 00:28:31 +00:00)
Lorenze/better tracing events (#3382)
Some checks failed: Notify Downstream / notify-downstream (push) has been cancelled
* feat: implement tool usage limit exception handling
  - Introduced `ToolUsageLimitExceeded` exception to manage maximum usage limits for tools.
  - Enhanced `CrewStructuredTool` to check and raise this exception when the usage limit is reached.
  - Updated `_run` and `_execute` methods to include usage limit checks and handle exceptions appropriately, improving reliability and user feedback.
* feat: enhance PlusAPI and ToolUsage with task metadata
  - Removed the `send_trace_batch` method from PlusAPI to streamline the API.
  - Added timeout parameters to trace event methods in PlusAPI for improved reliability.
  - Updated ToolUsage to include task metadata (task name and ID) in event emissions, enhancing traceability and context during tool usage.
  - Refactored event handling in LLM and ToolUsage events to ensure task information is consistently captured.
* feat: enhance memory and event handling with task and agent metadata
  - Added task and agent metadata to various memory and event classes, improving traceability and context during memory operations.
  - Updated the `ContextualMemory` and `Memory` classes to associate tasks and agents, allowing for better context management.
  - Enhanced event emissions in `LLM`, `ToolUsage`, and memory events to include task and agent information, facilitating improved debugging and monitoring.
  - Refactored event handling to ensure consistent capture of task and agent details across the system.
* drop
* refactor: clean up unused imports in memory and event modules
  - Removed unused TYPE_CHECKING imports from long_term_memory.py to streamline the code.
  - Eliminated an unnecessary import from memory_events.py, enhancing clarity and maintainability.
* fix memory tests
* fix task_completed payload
* fix: remove unused test agent variable in external memory tests
* refactor: remove unused agent parameter from Memory class save method
  - Eliminated the agent parameter from the save method in the Memory class to streamline the code and improve clarity.
  - Updated the TraceBatchManager class by moving initialization of attributes into the constructor for better organization and readability.
* refactor: enhance ExecutionState and ReasoningEvent classes with optional task and agent identifiers
  - Added optional `current_agent_id` and `current_task_id` attributes to the `ExecutionState` class for better tracking of agent and task states.
  - Updated the `from_task` attribute in the `ReasoningEvent` class to use `Optional[Any]` instead of a specific type, improving flexibility in event handling.
* refactor: update ExecutionState class by removing unused agent and task identifiers
  - Removed the `current_agent_id` and `current_task_id` attributes from the `ExecutionState` class to simplify the code and enhance clarity.
  - Adjusted the import statements to include `Optional` for better type handling.
* refactor: streamline LLM event handling in LiteAgent
  - Removed unused LLM event emissions (LLMCallStartedEvent, LLMCallCompletedEvent, LLMCallFailedEvent) from the LiteAgent class to simplify the code and improve performance.
  - Adjusted the flow of LLM response handling by eliminating unnecessary event bus interactions, enhancing clarity and maintainability.
* flow ownership and not emitting events when a crew is done
* refactor: remove unused agent parameter from ShortTermMemory save method
  - Eliminated the agent parameter from the save method in the ShortTermMemory class to streamline the code and improve clarity.
  - This change enhances the maintainability of the memory management system by reducing unnecessary complexity.
* run type check fix
* fixing tests
* fix lints
* fix: update event assertions in test_llm_emits_event_with_lite_agent
  - Adjusted the expected counts for completed and started events in the test to reflect the correct behavior of the LiteAgent.
  - Updated assertions for agent roles and IDs to match the expected values after recent changes in event handling.
* fix: update task name assertions in event tests
  - Modified assertions in `test_stream_llm_emits_event_with_task_and_agent_info` and `test_llm_emits_event_with_task_and_agent_info` to use `task.description` as a fallback for `task.name`. This ensures that the tests correctly validate the task name even when it is not explicitly set.
* fix: update test assertions for output values and improve readability
  - Updated assertions in `test_output_json_dict_hierarchical` to reflect the correct expected score value.
  - Enhanced readability of assertions in `test_output_pydantic_to_another_task` and `test_key` by formatting the error messages for clarity.
  - These changes ensure that the tests accurately validate the expected outputs and improve overall code quality.
* test fixes
* fix crew_test
* added another fixture
* fix: ensure agent and task assignments in contextual memory are conditional
  - Updated the ContextualMemory class to check for the existence of short-term, long-term, external, and entity memory before assigning agent and task attributes. This prevents potential attribute errors when memory types are not initialized.
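
For readers skimming the diff, two small self-contained sketches of the behaviors described above. Both are illustrative stand-ins written for this summary, not crewAI's actual classes; names such as `LimitedTool` and `TracedEvent` are hypothetical, while the field and exception names (`max_usage_count`, `current_usage_count`, `ToolUsageLimitExceeded`, `from_task`, `from_agent`) follow the diff below.

First, the tool usage-limit check introduced on `CrewStructuredTool`:

```python
# Illustrative sketch of the usage-limit behavior added in this PR.
# The standalone class here is a simplified stand-in for CrewStructuredTool.
from typing import Any, Optional


class ToolUsageLimitExceeded(Exception):
    """Raised when a tool has reached its maximum usage limit."""


class LimitedTool:
    def __init__(self, name: str, max_usage_count: Optional[int] = None):
        self.name = name
        self.max_usage_count = max_usage_count
        self.current_usage_count = 0

    def has_reached_max_usage_count(self) -> bool:
        # No limit configured means the tool can always run.
        return (
            self.max_usage_count is not None
            and self.current_usage_count >= self.max_usage_count
        )

    def run(self, **kwargs: Any) -> str:
        # Check the limit before executing, then count the invocation,
        # mirroring the checks added to _run / _execute in the diff.
        if self.has_reached_max_usage_count():
            raise ToolUsageLimitExceeded(
                f"Tool '{self.name}' has reached its maximum usage limit "
                f"of {self.max_usage_count}."
            )
        self.current_usage_count += 1
        return f"ran with {kwargs}"


if __name__ == "__main__":
    tool = LimitedTool("search", max_usage_count=2)
    tool.run(query="first call")
    tool.run(query="second call")
    try:
        tool.run(query="third call")  # exceeds the limit
    except ToolUsageLimitExceeded as exc:
        print(exc)
```

Second, the task/agent metadata propagation added to the event classes: events accept rich `from_task` / `from_agent` objects and derive flat `task_id` / `task_name` / `agent_id` / `agent_role` fields from them, with `task.description` as the fallback when `task.name` is unset (a plain dataclass is used here instead of crewAI's pydantic `BaseEvent`):

```python
# Illustrative sketch of the metadata flattening; a simplified stand-in for
# the event base classes touched in this PR.
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class TracedEvent:
    from_task: Optional[Any] = None
    from_agent: Optional[Any] = None
    task_id: Optional[str] = None
    task_name: Optional[str] = None
    agent_id: Optional[str] = None
    agent_role: Optional[str] = None

    def __post_init__(self) -> None:
        # Mirrors _set_task_params / _set_agent_params from the diff below:
        # flatten the rich objects into serializable identifiers.
        task = self.from_task
        if task is not None:
            self.task_id = str(task.id)
            self.task_name = task.name or task.description
        agent = task.agent if task is not None else self.from_agent
        if agent is not None:
            self.agent_id = str(agent.id)
            self.agent_role = agent.role


if __name__ == "__main__":
    from types import SimpleNamespace

    task = SimpleNamespace(
        id="t-1",
        name=None,
        description="Summarize the report",
        agent=SimpleNamespace(id="a-1", role="Researcher"),
    )
    event = TracedEvent(from_task=task)
    print(event.task_name, event.agent_role)  # Summarize the report Researcher
```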
@@ -320,15 +320,20 @@ class Agent(BaseAgent):
|
||||
event=MemoryRetrievalStartedEvent(
|
||||
task_id=str(task.id) if task else None,
|
||||
source_type="agent",
|
||||
from_agent=self,
|
||||
from_task=task,
|
||||
),
|
||||
)
|
||||
|
||||
start_time = time.time()
|
||||
|
||||
contextual_memory = ContextualMemory(
|
||||
self.crew._short_term_memory,
|
||||
self.crew._long_term_memory,
|
||||
self.crew._entity_memory,
|
||||
self.crew._external_memory,
|
||||
agent=self,
|
||||
task=task,
|
||||
)
|
||||
memory = contextual_memory.build_context_for_task(task, context)
|
||||
if memory.strip() != "":
|
||||
@@ -341,6 +346,8 @@ class Agent(BaseAgent):
|
||||
memory_content=memory,
|
||||
retrieval_time_ms=(time.time() - start_time) * 1000,
|
||||
source_type="agent",
|
||||
from_agent=self,
|
||||
from_task=task,
|
||||
),
|
||||
)
|
||||
knowledge_config = (
|
||||
|
||||
@@ -43,7 +43,6 @@ class CrewAgentExecutorMixin:
|
||||
metadata={
|
||||
"observation": self.task.description,
|
||||
},
|
||||
agent=self.agent.role,
|
||||
)
|
||||
except Exception as e:
|
||||
print(f"Failed to add to short term memory: {e}")
|
||||
@@ -65,7 +64,6 @@ class CrewAgentExecutorMixin:
|
||||
"description": self.task.description,
|
||||
"messages": self.messages,
|
||||
},
|
||||
agent=self.agent.role,
|
||||
)
|
||||
except Exception as e:
|
||||
print(f"Failed to add to external memory: {e}")
|
||||
@@ -158,7 +156,9 @@ class CrewAgentExecutorMixin:
|
||||
self._printer.print(content=prompt, color="bold_yellow")
|
||||
response = input()
|
||||
if response.strip() != "":
|
||||
self._printer.print(content="\nProcessing your feedback...", color="cyan")
|
||||
self._printer.print(
|
||||
content="\nProcessing your feedback...", color="cyan"
|
||||
)
|
||||
return response
|
||||
finally:
|
||||
event_listener.formatter.resume_live_updates()
|
||||
|
||||
@@ -117,9 +117,6 @@ class PlusAPI:
|
||||
def get_organizations(self) -> requests.Response:
|
||||
return self._make_request("GET", self.ORGANIZATIONS_RESOURCE)
|
||||
|
||||
def send_trace_batch(self, payload) -> requests.Response:
|
||||
return self._make_request("POST", self.TRACING_RESOURCE, json=payload)
|
||||
|
||||
def initialize_trace_batch(self, payload) -> requests.Response:
|
||||
return self._make_request(
|
||||
"POST", f"{self.TRACING_RESOURCE}/batches", json=payload
|
||||
@@ -135,6 +132,7 @@ class PlusAPI:
|
||||
"POST",
|
||||
f"{self.TRACING_RESOURCE}/batches/{trace_batch_id}/events",
|
||||
json=payload,
|
||||
timeout=30,
|
||||
)
|
||||
|
||||
def send_ephemeral_trace_events(
|
||||
@@ -144,6 +142,7 @@ class PlusAPI:
|
||||
"POST",
|
||||
f"{self.EPHEMERAL_TRACING_RESOURCE}/batches/{trace_batch_id}/events",
|
||||
json=payload,
|
||||
timeout=30,
|
||||
)
|
||||
|
||||
def finalize_trace_batch(self, trace_batch_id: str, payload) -> requests.Response:
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
import threading
|
||||
from typing import Any
|
||||
from typing import Any, Optional
|
||||
|
||||
from crewai.experimental.evaluation.base_evaluator import AgentEvaluationResult, AggregationStrategy
|
||||
from crewai.agent import Agent
|
||||
@@ -15,10 +15,11 @@ from crewai.utilities.events.agent_events import LiteAgentExecutionCompletedEven
|
||||
from crewai.experimental.evaluation.base_evaluator import AgentAggregatedEvaluationResult, EvaluationScore, MetricCategory
|
||||
|
||||
class ExecutionState:
|
||||
current_agent_id: Optional[str] = None
|
||||
current_task_id: Optional[str] = None
|
||||
|
||||
def __init__(self):
|
||||
self.traces = {}
|
||||
self.current_agent_id: str | None = None
|
||||
self.current_task_id: str | None = None
|
||||
self.iteration = 1
|
||||
self.iterations_results = {}
|
||||
self.agent_evaluators = {}
|
||||
|
||||
@@ -69,12 +69,7 @@ from crewai.utilities.events.agent_events import (
|
||||
LiteAgentExecutionStartedEvent,
|
||||
)
|
||||
from crewai.utilities.events.crewai_event_bus import crewai_event_bus
|
||||
from crewai.utilities.events.llm_events import (
|
||||
LLMCallCompletedEvent,
|
||||
LLMCallFailedEvent,
|
||||
LLMCallStartedEvent,
|
||||
LLMCallType,
|
||||
)
|
||||
|
||||
from crewai.utilities.llm_utils import create_llm
|
||||
from crewai.utilities.printer import Printer
|
||||
from crewai.utilities.token_counter_callback import TokenCalcHandler
|
||||
@@ -519,19 +514,6 @@ class LiteAgent(FlowTrackable, BaseModel):
|
||||
|
||||
enforce_rpm_limit(self.request_within_rpm_limit)
|
||||
|
||||
llm = cast(LLM, self.llm)
|
||||
model = llm.model if hasattr(llm, "model") else "unknown"
|
||||
crewai_event_bus.emit(
|
||||
self,
|
||||
event=LLMCallStartedEvent(
|
||||
messages=self._messages,
|
||||
tools=None,
|
||||
callbacks=self._callbacks,
|
||||
from_agent=self,
|
||||
model=model,
|
||||
),
|
||||
)
|
||||
|
||||
try:
|
||||
answer = get_llm_response(
|
||||
llm=cast(LLM, self.llm),
|
||||
@@ -541,23 +523,7 @@ class LiteAgent(FlowTrackable, BaseModel):
|
||||
from_agent=self,
|
||||
)
|
||||
|
||||
# Emit LLM call completed event
|
||||
crewai_event_bus.emit(
|
||||
self,
|
||||
event=LLMCallCompletedEvent(
|
||||
messages=self._messages,
|
||||
response=answer,
|
||||
call_type=LLMCallType.LLM_CALL,
|
||||
from_agent=self,
|
||||
model=model,
|
||||
),
|
||||
)
|
||||
except Exception as e:
|
||||
# Emit LLM call failed event
|
||||
crewai_event_bus.emit(
|
||||
self,
|
||||
event=LLMCallFailedEvent(error=str(e), from_agent=self),
|
||||
)
|
||||
raise e
|
||||
|
||||
formatted_answer = process_llm_response(answer, self.use_stop_words)
|
||||
|
||||
@@ -851,7 +851,9 @@ class LLM(BaseLLM):
|
||||
return tool_calls
|
||||
|
||||
# --- 7) Handle tool calls if present
|
||||
tool_result = self._handle_tool_call(tool_calls, available_functions)
|
||||
tool_result = self._handle_tool_call(
|
||||
tool_calls, available_functions, from_task, from_agent
|
||||
)
|
||||
if tool_result is not None:
|
||||
return tool_result
|
||||
# --- 8) If tool call handling didn't return a result, emit completion event and return text response
|
||||
@@ -868,6 +870,8 @@ class LLM(BaseLLM):
|
||||
self,
|
||||
tool_calls: List[Any],
|
||||
available_functions: Optional[Dict[str, Any]] = None,
|
||||
from_task: Optional[Any] = None,
|
||||
from_agent: Optional[Any] = None,
|
||||
) -> Optional[str]:
|
||||
"""Handle a tool call from the LLM.
|
||||
|
||||
@@ -902,6 +906,8 @@ class LLM(BaseLLM):
|
||||
event=ToolUsageStartedEvent(
|
||||
tool_name=function_name,
|
||||
tool_args=function_args,
|
||||
from_agent=from_agent,
|
||||
from_task=from_task,
|
||||
),
|
||||
)
|
||||
|
||||
@@ -914,12 +920,17 @@ class LLM(BaseLLM):
|
||||
tool_args=function_args,
|
||||
started_at=started_at,
|
||||
finished_at=datetime.now(),
|
||||
from_task=from_task,
|
||||
from_agent=from_agent,
|
||||
),
|
||||
)
|
||||
|
||||
# --- 3.3) Emit success event
|
||||
self._handle_emit_call_events(
|
||||
response=result, call_type=LLMCallType.TOOL_CALL
|
||||
response=result,
|
||||
call_type=LLMCallType.TOOL_CALL,
|
||||
from_task=from_task,
|
||||
from_agent=from_agent,
|
||||
)
|
||||
return result
|
||||
except Exception as e:
|
||||
@@ -1139,7 +1150,11 @@ class LLM(BaseLLM):
|
||||
|
||||
# TODO: Remove this code after merging PR https://github.com/BerriAI/litellm/pull/10917
|
||||
# Ollama doesn't supports last message to be 'assistant'
|
||||
if "ollama" in self.model.lower() and messages and messages[-1]["role"] == "assistant":
|
||||
if (
|
||||
"ollama" in self.model.lower()
|
||||
and messages
|
||||
and messages[-1]["role"] == "assistant"
|
||||
):
|
||||
return messages + [{"role": "user", "content": ""}]
|
||||
|
||||
# Handle Anthropic models
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
from typing import Optional
|
||||
from typing import Optional, TYPE_CHECKING
|
||||
|
||||
from crewai.memory import (
|
||||
EntityMemory,
|
||||
@@ -7,6 +7,10 @@ from crewai.memory import (
|
||||
ShortTermMemory,
|
||||
)
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from crewai.agent import Agent
|
||||
from crewai.task import Task
|
||||
|
||||
|
||||
class ContextualMemory:
|
||||
def __init__(
|
||||
@@ -15,11 +19,28 @@ class ContextualMemory:
|
||||
ltm: LongTermMemory,
|
||||
em: EntityMemory,
|
||||
exm: ExternalMemory,
|
||||
agent: Optional["Agent"] = None,
|
||||
task: Optional["Task"] = None,
|
||||
):
|
||||
self.stm = stm
|
||||
self.ltm = ltm
|
||||
self.em = em
|
||||
self.exm = exm
|
||||
self.agent = agent
|
||||
self.task = task
|
||||
|
||||
if self.stm is not None:
|
||||
self.stm.agent = self.agent
|
||||
self.stm.task = self.task
|
||||
if self.ltm is not None:
|
||||
self.ltm.agent = self.agent
|
||||
self.ltm.task = self.task
|
||||
if self.em is not None:
|
||||
self.em.agent = self.agent
|
||||
self.em.task = self.task
|
||||
if self.exm is not None:
|
||||
self.exm.agent = self.agent
|
||||
self.exm.task = self.task
|
||||
|
||||
def build_context_for_task(self, task, context) -> str:
|
||||
"""
|
||||
@@ -49,10 +70,7 @@ class ContextualMemory:
|
||||
|
||||
stm_results = self.stm.search(query)
|
||||
formatted_results = "\n".join(
|
||||
[
|
||||
f"- {result['context']}"
|
||||
for result in stm_results
|
||||
]
|
||||
[f"- {result['context']}" for result in stm_results]
|
||||
)
|
||||
return f"Recent Insights:\n{formatted_results}" if stm_results else ""
|
||||
|
||||
@@ -89,10 +107,7 @@ class ContextualMemory:
|
||||
|
||||
em_results = self.em.search(query)
|
||||
formatted_results = "\n".join(
|
||||
[
|
||||
f"- {result['context']}"
|
||||
for result in em_results
|
||||
] # type: ignore # Invalid index type "str" for "str"; expected type "SupportsIndex | slice"
|
||||
[f"- {result['context']}" for result in em_results] # type: ignore # Invalid index type "str" for "str"; expected type "SupportsIndex | slice"
|
||||
)
|
||||
return f"Entities:\n{formatted_results}" if em_results else ""
|
||||
|
||||
|
||||
@@ -35,7 +35,7 @@ class EntityMemory(Memory):
|
||||
raise ImportError(
|
||||
"Mem0 is not installed. Please install it with `pip install mem0ai`."
|
||||
)
|
||||
config = embedder_config.get("config")
|
||||
config = embedder_config.get("config") if embedder_config else None
|
||||
storage = Mem0Storage(type="short_term", crew=crew, config=config)
|
||||
else:
|
||||
storage = (
|
||||
@@ -60,6 +60,8 @@ class EntityMemory(Memory):
|
||||
event=MemorySaveStartedEvent(
|
||||
metadata=item.metadata,
|
||||
source_type="entity_memory",
|
||||
from_agent=self.agent,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
|
||||
@@ -85,6 +87,8 @@ class EntityMemory(Memory):
|
||||
metadata=item.metadata,
|
||||
save_time_ms=(time.time() - start_time) * 1000,
|
||||
source_type="entity_memory",
|
||||
from_agent=self.agent,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
except Exception as e:
|
||||
@@ -94,6 +98,8 @@ class EntityMemory(Memory):
|
||||
metadata=item.metadata,
|
||||
error=str(e),
|
||||
source_type="entity_memory",
|
||||
from_agent=self.agent,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
raise
|
||||
@@ -111,6 +117,8 @@ class EntityMemory(Memory):
|
||||
limit=limit,
|
||||
score_threshold=score_threshold,
|
||||
source_type="entity_memory",
|
||||
from_agent=self.agent,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
|
||||
@@ -129,6 +137,8 @@ class EntityMemory(Memory):
|
||||
score_threshold=score_threshold,
|
||||
query_time_ms=(time.time() - start_time) * 1000,
|
||||
source_type="entity_memory",
|
||||
from_agent=self.agent,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
|
||||
|
||||
src/crewai/memory/external/external_memory.py (22 changes)
@@ -53,7 +53,6 @@ class ExternalMemory(Memory):
|
||||
self,
|
||||
value: Any,
|
||||
metadata: Optional[Dict[str, Any]] = None,
|
||||
agent: Optional[str] = None,
|
||||
) -> None:
|
||||
"""Saves a value into the external storage."""
|
||||
crewai_event_bus.emit(
|
||||
@@ -61,24 +60,30 @@ class ExternalMemory(Memory):
|
||||
event=MemorySaveStartedEvent(
|
||||
value=value,
|
||||
metadata=metadata,
|
||||
agent_role=agent,
|
||||
source_type="external_memory",
|
||||
from_agent=self.agent,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
|
||||
start_time = time.time()
|
||||
try:
|
||||
item = ExternalMemoryItem(value=value, metadata=metadata, agent=agent)
|
||||
super().save(value=item.value, metadata=item.metadata, agent=item.agent)
|
||||
item = ExternalMemoryItem(
|
||||
value=value,
|
||||
metadata=metadata,
|
||||
agent=self.agent.role if self.agent else None,
|
||||
)
|
||||
super().save(value=item.value, metadata=item.metadata)
|
||||
|
||||
crewai_event_bus.emit(
|
||||
self,
|
||||
event=MemorySaveCompletedEvent(
|
||||
value=value,
|
||||
metadata=metadata,
|
||||
agent_role=agent,
|
||||
save_time_ms=(time.time() - start_time) * 1000,
|
||||
source_type="external_memory",
|
||||
from_agent=self.agent,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
except Exception as e:
|
||||
@@ -87,9 +92,10 @@ class ExternalMemory(Memory):
|
||||
event=MemorySaveFailedEvent(
|
||||
value=value,
|
||||
metadata=metadata,
|
||||
agent_role=agent,
|
||||
error=str(e),
|
||||
source_type="external_memory",
|
||||
from_agent=self.agent,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
raise
|
||||
@@ -107,6 +113,8 @@ class ExternalMemory(Memory):
|
||||
limit=limit,
|
||||
score_threshold=score_threshold,
|
||||
source_type="external_memory",
|
||||
from_agent=self.agent,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
|
||||
@@ -125,6 +133,8 @@ class ExternalMemory(Memory):
|
||||
score_threshold=score_threshold,
|
||||
query_time_ms=(time.time() - start_time) * 1000,
|
||||
source_type="external_memory",
|
||||
from_agent=self.agent,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
|
||||
|
||||
@@ -37,13 +37,17 @@ class LongTermMemory(Memory):
|
||||
metadata=item.metadata,
|
||||
agent_role=item.agent,
|
||||
source_type="long_term_memory",
|
||||
from_agent=self.agent,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
|
||||
start_time = time.time()
|
||||
try:
|
||||
metadata = item.metadata
|
||||
metadata.update({"agent": item.agent, "expected_output": item.expected_output})
|
||||
metadata.update(
|
||||
{"agent": item.agent, "expected_output": item.expected_output}
|
||||
)
|
||||
self.storage.save( # type: ignore # BUG?: Unexpected keyword argument "task_description","score","datetime" for "save" of "Storage"
|
||||
task_description=item.task,
|
||||
score=metadata["quality"],
|
||||
@@ -59,6 +63,8 @@ class LongTermMemory(Memory):
|
||||
agent_role=item.agent,
|
||||
save_time_ms=(time.time() - start_time) * 1000,
|
||||
source_type="long_term_memory",
|
||||
from_agent=self.agent,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
except Exception as e:
|
||||
@@ -74,13 +80,19 @@ class LongTermMemory(Memory):
|
||||
)
|
||||
raise
|
||||
|
||||
def search(self, task: str, latest_n: int = 3) -> List[Dict[str, Any]]: # type: ignore # signature of "search" incompatible with supertype "Memory"
|
||||
def search( # type: ignore # signature of "search" incompatible with supertype "Memory"
|
||||
self,
|
||||
task: str,
|
||||
latest_n: int = 3,
|
||||
) -> List[Dict[str, Any]]: # type: ignore # signature of "search" incompatible with supertype "Memory"
|
||||
crewai_event_bus.emit(
|
||||
self,
|
||||
event=MemoryQueryStartedEvent(
|
||||
query=task,
|
||||
limit=latest_n,
|
||||
source_type="long_term_memory",
|
||||
from_agent=self.agent,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
|
||||
@@ -96,6 +108,8 @@ class LongTermMemory(Memory):
|
||||
limit=latest_n,
|
||||
query_time_ms=(time.time() - start_time) * 1000,
|
||||
source_type="long_term_memory",
|
||||
from_agent=self.agent,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
|
||||
|
||||
@@ -1,7 +1,11 @@
|
||||
from typing import Any, Dict, List, Optional
|
||||
from typing import Any, Dict, List, Optional, TYPE_CHECKING
|
||||
|
||||
from pydantic import BaseModel
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from crewai.agent import Agent
|
||||
from crewai.task import Task
|
||||
|
||||
|
||||
class Memory(BaseModel):
|
||||
"""
|
||||
@@ -12,19 +16,38 @@ class Memory(BaseModel):
|
||||
crew: Optional[Any] = None
|
||||
|
||||
storage: Any
|
||||
_agent: Optional["Agent"] = None
|
||||
_task: Optional["Task"] = None
|
||||
|
||||
def __init__(self, storage: Any, **data: Any):
|
||||
super().__init__(storage=storage, **data)
|
||||
|
||||
@property
|
||||
def task(self) -> Optional["Task"]:
|
||||
"""Get the current task associated with this memory."""
|
||||
return self._task
|
||||
|
||||
@task.setter
|
||||
def task(self, task: Optional["Task"]) -> None:
|
||||
"""Set the current task associated with this memory."""
|
||||
self._task = task
|
||||
|
||||
@property
|
||||
def agent(self) -> Optional["Agent"]:
|
||||
"""Get the current agent associated with this memory."""
|
||||
return self._agent
|
||||
|
||||
@agent.setter
|
||||
def agent(self, agent: Optional["Agent"]) -> None:
|
||||
"""Set the current agent associated with this memory."""
|
||||
self._agent = agent
|
||||
|
||||
def save(
|
||||
self,
|
||||
value: Any,
|
||||
metadata: Optional[Dict[str, Any]] = None,
|
||||
agent: Optional[str] = None,
|
||||
) -> None:
|
||||
metadata = metadata or {}
|
||||
if agent:
|
||||
metadata["agent"] = agent
|
||||
|
||||
self.storage.save(value, metadata)
|
||||
|
||||
|
||||
@@ -37,7 +37,7 @@ class ShortTermMemory(Memory):
|
||||
raise ImportError(
|
||||
"Mem0 is not installed. Please install it with `pip install mem0ai`."
|
||||
)
|
||||
config = embedder_config.get("config")
|
||||
config = embedder_config.get("config") if embedder_config else None
|
||||
storage = Mem0Storage(type="short_term", crew=crew, config=config)
|
||||
else:
|
||||
storage = (
|
||||
@@ -57,34 +57,42 @@ class ShortTermMemory(Memory):
|
||||
self,
|
||||
value: Any,
|
||||
metadata: Optional[Dict[str, Any]] = None,
|
||||
agent: Optional[str] = None,
|
||||
) -> None:
|
||||
crewai_event_bus.emit(
|
||||
self,
|
||||
event=MemorySaveStartedEvent(
|
||||
value=value,
|
||||
metadata=metadata,
|
||||
agent_role=agent,
|
||||
source_type="short_term_memory",
|
||||
from_agent=self.agent,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
|
||||
start_time = time.time()
|
||||
try:
|
||||
item = ShortTermMemoryItem(data=value, metadata=metadata, agent=agent)
|
||||
item = ShortTermMemoryItem(
|
||||
data=value,
|
||||
metadata=metadata,
|
||||
agent=self.agent.role if self.agent else None,
|
||||
)
|
||||
if self._memory_provider == "mem0":
|
||||
item.data = f"Remember the following insights from Agent run: {item.data}"
|
||||
item.data = (
|
||||
f"Remember the following insights from Agent run: {item.data}"
|
||||
)
|
||||
|
||||
super().save(value=item.data, metadata=item.metadata, agent=item.agent)
|
||||
super().save(value=item.data, metadata=item.metadata)
|
||||
|
||||
crewai_event_bus.emit(
|
||||
self,
|
||||
event=MemorySaveCompletedEvent(
|
||||
value=value,
|
||||
metadata=metadata,
|
||||
agent_role=agent,
|
||||
# agent_role=agent,
|
||||
save_time_ms=(time.time() - start_time) * 1000,
|
||||
source_type="short_term_memory",
|
||||
from_agent=self.agent,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
except Exception as e:
|
||||
@@ -93,9 +101,10 @@ class ShortTermMemory(Memory):
|
||||
event=MemorySaveFailedEvent(
|
||||
value=value,
|
||||
metadata=metadata,
|
||||
agent_role=agent,
|
||||
error=str(e),
|
||||
source_type="short_term_memory",
|
||||
from_agent=self.agent,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
raise
|
||||
@@ -113,6 +122,8 @@ class ShortTermMemory(Memory):
|
||||
limit=limit,
|
||||
score_threshold=score_threshold,
|
||||
source_type="short_term_memory",
|
||||
from_agent=self.agent,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
|
||||
@@ -131,6 +142,8 @@ class ShortTermMemory(Memory):
|
||||
score_threshold=score_threshold,
|
||||
query_time_ms=(time.time() - start_time) * 1000,
|
||||
source_type="short_term_memory",
|
||||
from_agent=self.agent,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
|
||||
|
||||
@@ -433,7 +433,7 @@ class Task(BaseModel):
|
||||
|
||||
pydantic_output, json_output = self._export_output(result)
|
||||
task_output = TaskOutput(
|
||||
name=self.name,
|
||||
name=self.name or self.description,
|
||||
description=self.description,
|
||||
expected_output=self.expected_output,
|
||||
raw=result,
|
||||
@@ -561,8 +561,8 @@ class Task(BaseModel):
|
||||
should_inject = self.allow_crewai_trigger_context
|
||||
|
||||
if should_inject and self.agent:
|
||||
crew = getattr(self.agent, 'crew', None)
|
||||
if crew and hasattr(crew, '_inputs') and crew._inputs:
|
||||
crew = getattr(self.agent, "crew", None)
|
||||
if crew and hasattr(crew, "_inputs") and crew._inputs:
|
||||
trigger_payload = crew._inputs.get("crewai_trigger_payload")
|
||||
if trigger_payload is not None:
|
||||
description += f"\n\nTrigger Payload: {trigger_payload}"
|
||||
@@ -780,7 +780,9 @@ Follow these guidelines:
|
||||
if self.create_directory and not directory.exists():
|
||||
directory.mkdir(parents=True, exist_ok=True)
|
||||
elif not self.create_directory and not directory.exists():
|
||||
raise RuntimeError(f"Directory {directory} does not exist and create_directory is False")
|
||||
raise RuntimeError(
|
||||
f"Directory {directory} does not exist and create_directory is False"
|
||||
)
|
||||
|
||||
with resolved_path.open("w", encoding="utf-8") as file:
|
||||
if isinstance(result, dict):
|
||||
|
||||
@@ -16,6 +16,12 @@ if TYPE_CHECKING:
|
||||
from crewai.tools.base_tool import BaseTool
|
||||
|
||||
|
||||
class ToolUsageLimitExceeded(Exception):
|
||||
"""Exception raised when a tool has reached its maximum usage limit."""
|
||||
|
||||
pass
|
||||
|
||||
|
||||
class CrewStructuredTool:
|
||||
"""A structured tool that can operate on any number of inputs.
|
||||
|
||||
@@ -227,17 +233,25 @@ class CrewStructuredTool:
|
||||
"""
|
||||
parsed_args = self._parse_args(input)
|
||||
|
||||
if self.has_reached_max_usage_count():
|
||||
raise ToolUsageLimitExceeded(
|
||||
f"Tool '{self.name}' has reached its maximum usage limit of {self.max_usage_count}. You should not use the {self.name} tool again."
|
||||
)
|
||||
|
||||
self._increment_usage_count()
|
||||
|
||||
if inspect.iscoroutinefunction(self.func):
|
||||
return await self.func(**parsed_args, **kwargs)
|
||||
else:
|
||||
# Run sync functions in a thread pool
|
||||
import asyncio
|
||||
try:
|
||||
if inspect.iscoroutinefunction(self.func):
|
||||
return await self.func(**parsed_args, **kwargs)
|
||||
else:
|
||||
# Run sync functions in a thread pool
|
||||
import asyncio
|
||||
|
||||
return await asyncio.get_event_loop().run_in_executor(
|
||||
None, lambda: self.func(**parsed_args, **kwargs)
|
||||
)
|
||||
return await asyncio.get_event_loop().run_in_executor(
|
||||
None, lambda: self.func(**parsed_args, **kwargs)
|
||||
)
|
||||
except Exception:
|
||||
raise
|
||||
|
||||
def _run(self, *args, **kwargs) -> Any:
|
||||
"""Legacy method for compatibility."""
|
||||
@@ -252,12 +266,22 @@ class CrewStructuredTool:
|
||||
"""Main method for tool execution."""
|
||||
parsed_args = self._parse_args(input)
|
||||
|
||||
if self.has_reached_max_usage_count():
|
||||
raise ToolUsageLimitExceeded(
|
||||
f"Tool '{self.name}' has reached its maximum usage limit of {self.max_usage_count}. You should not use the {self.name} tool again."
|
||||
)
|
||||
|
||||
self._increment_usage_count()
|
||||
|
||||
if inspect.iscoroutinefunction(self.func):
|
||||
result = asyncio.run(self.func(**parsed_args, **kwargs))
|
||||
return result
|
||||
|
||||
try:
|
||||
result = self.func(**parsed_args, **kwargs)
|
||||
except Exception:
|
||||
raise
|
||||
|
||||
result = self.func(**parsed_args, **kwargs)
|
||||
|
||||
if asyncio.iscoroutine(result):
|
||||
@@ -265,6 +289,13 @@ class CrewStructuredTool:
|
||||
|
||||
return result
|
||||
|
||||
def has_reached_max_usage_count(self) -> bool:
|
||||
"""Check if the tool has reached its maximum usage count."""
|
||||
return (
|
||||
self.max_usage_count is not None
|
||||
and self.current_usage_count >= self.max_usage_count
|
||||
)
|
||||
|
||||
def _increment_usage_count(self) -> None:
|
||||
"""Increment the usage count."""
|
||||
self.current_usage_count += 1
|
||||
|
||||
@@ -178,8 +178,10 @@ class ToolUsage:
|
||||
|
||||
if self.agent.fingerprint:
|
||||
event_data.update(self.agent.fingerprint)
|
||||
|
||||
crewai_event_bus.emit(self,ToolUsageStartedEvent(**event_data))
|
||||
if self.task:
|
||||
event_data["task_name"] = self.task.name or self.task.description
|
||||
event_data["task_id"] = str(self.task.id)
|
||||
crewai_event_bus.emit(self, ToolUsageStartedEvent(**event_data))
|
||||
|
||||
started_at = time.time()
|
||||
from_cache = False
|
||||
@@ -311,12 +313,15 @@ class ToolUsage:
|
||||
if self.agent and hasattr(self.agent, "tools_results"):
|
||||
self.agent.tools_results.append(data)
|
||||
|
||||
if available_tool and hasattr(available_tool, 'current_usage_count'):
|
||||
if available_tool and hasattr(available_tool, "current_usage_count"):
|
||||
available_tool.current_usage_count += 1
|
||||
if hasattr(available_tool, 'max_usage_count') and available_tool.max_usage_count is not None:
|
||||
if (
|
||||
hasattr(available_tool, "max_usage_count")
|
||||
and available_tool.max_usage_count is not None
|
||||
):
|
||||
self._printer.print(
|
||||
content=f"Tool '{available_tool.name}' usage: {available_tool.current_usage_count}/{available_tool.max_usage_count}",
|
||||
color="blue"
|
||||
color="blue",
|
||||
)
|
||||
|
||||
return result
|
||||
@@ -362,7 +367,7 @@ class ToolUsage:
|
||||
Error message if limit reached, None otherwise
|
||||
"""
|
||||
if (
|
||||
hasattr(tool, 'max_usage_count')
|
||||
hasattr(tool, "max_usage_count")
|
||||
and tool.max_usage_count is not None
|
||||
and tool.current_usage_count >= tool.max_usage_count
|
||||
):
|
||||
@@ -605,6 +610,9 @@ class ToolUsage:
|
||||
"output": result,
|
||||
}
|
||||
)
|
||||
if self.task:
|
||||
event_data["task_id"] = str(self.task.id)
|
||||
event_data["task_name"] = self.task.name or self.task.description
|
||||
crewai_event_bus.emit(self, ToolUsageFinishedEvent(**event_data))
|
||||
|
||||
def _prepare_event_data(
|
||||
|
||||
@@ -11,7 +11,9 @@ class BaseEvent(BaseModel):
|
||||
timestamp: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
|
||||
type: str
|
||||
source_fingerprint: Optional[str] = None # UUID string of the source entity
|
||||
source_type: Optional[str] = None # "agent", "task", "crew", "memory", "entity_memory", "short_term_memory", "long_term_memory", "external_memory"
|
||||
source_type: Optional[str] = (
|
||||
None # "agent", "task", "crew", "memory", "entity_memory", "short_term_memory", "long_term_memory", "external_memory"
|
||||
)
|
||||
fingerprint_metadata: Optional[Dict[str, Any]] = None # Any relevant metadata
|
||||
|
||||
def to_json(self, exclude: set[str] | None = None):
|
||||
@@ -25,3 +27,20 @@ class BaseEvent(BaseModel):
|
||||
dict: A JSON-serializable dictionary.
|
||||
"""
|
||||
return to_serializable(self, exclude=exclude)
|
||||
|
||||
def _set_task_params(self, data: Dict[str, Any]):
|
||||
if "from_task" in data and (task := data["from_task"]):
|
||||
self.task_id = task.id
|
||||
self.task_name = task.name or task.description
|
||||
self.from_task = None
|
||||
|
||||
def _set_agent_params(self, data: Dict[str, Any]):
|
||||
task = data.get("from_task", None)
|
||||
agent = task.agent if task else data.get("from_agent", None)
|
||||
|
||||
if not agent:
|
||||
return
|
||||
|
||||
self.agent_id = agent.id
|
||||
self.agent_role = agent.role
|
||||
self.from_agent = None
|
||||
|
||||
@@ -41,6 +41,12 @@ class TraceBatchManager:
|
||||
"""Single responsibility: Manage batches and event buffering"""
|
||||
|
||||
is_current_batch_ephemeral: bool = False
|
||||
trace_batch_id: Optional[str] = None
|
||||
current_batch: Optional[TraceBatch] = None
|
||||
event_buffer: List[TraceEvent] = []
|
||||
execution_start_times: Dict[str, datetime] = {}
|
||||
batch_owner_type: Optional[str] = None
|
||||
batch_owner_id: Optional[str] = None
|
||||
|
||||
def __init__(self):
|
||||
try:
|
||||
@@ -48,11 +54,6 @@ class TraceBatchManager:
|
||||
except AuthError:
|
||||
self.plus_api = PlusAPI(api_key="")
|
||||
|
||||
self.trace_batch_id: Optional[str] = None # Backend ID
|
||||
self.current_batch: Optional[TraceBatch] = None
|
||||
self.event_buffer: List[TraceEvent] = []
|
||||
self.execution_start_times: Dict[str, datetime] = {}
|
||||
|
||||
def initialize_batch(
|
||||
self,
|
||||
user_context: Dict[str, str],
|
||||
@@ -178,14 +179,16 @@ class TraceBatchManager:
|
||||
if not self.current_batch:
|
||||
return None
|
||||
|
||||
self.current_batch.events = self.event_buffer.copy()
|
||||
if self.event_buffer:
|
||||
self._send_events_to_backend()
|
||||
self._finalize_backend_batch()
|
||||
|
||||
self.current_batch.events = self.event_buffer.copy()
|
||||
|
||||
finalized_batch = self.current_batch
|
||||
|
||||
self.batch_owner_type = None
|
||||
self.batch_owner_id = None
|
||||
|
||||
self.current_batch = None
|
||||
self.event_buffer.clear()
|
||||
self.trace_batch_id = None
|
||||
|
||||
@@ -75,10 +75,18 @@ class TraceCollectionListener(BaseEventListener):
|
||||
Trace collection listener that orchestrates trace collection
|
||||
"""
|
||||
|
||||
complex_events = ["task_started", "llm_call_started", "llm_call_completed"]
|
||||
complex_events = [
|
||||
"task_started",
|
||||
"task_completed",
|
||||
"llm_call_started",
|
||||
"llm_call_completed",
|
||||
"agent_execution_started",
|
||||
"agent_execution_completed",
|
||||
]
|
||||
|
||||
_instance = None
|
||||
_initialized = False
|
||||
_listeners_setup = False
|
||||
|
||||
def __new__(cls, batch_manager=None):
|
||||
if cls._instance is None:
|
||||
@@ -116,10 +124,15 @@ class TraceCollectionListener(BaseEventListener):
|
||||
def setup_listeners(self, crewai_event_bus):
|
||||
"""Setup event listeners - delegates to specific handlers"""
|
||||
|
||||
if self._listeners_setup:
|
||||
return
|
||||
|
||||
self._register_flow_event_handlers(crewai_event_bus)
|
||||
self._register_context_event_handlers(crewai_event_bus)
|
||||
self._register_action_event_handlers(crewai_event_bus)
|
||||
|
||||
self._listeners_setup = True
|
||||
|
||||
def _register_flow_event_handlers(self, event_bus):
|
||||
"""Register handlers for flow events"""
|
||||
|
||||
@@ -148,7 +161,8 @@ class TraceCollectionListener(BaseEventListener):
|
||||
@event_bus.on(FlowFinishedEvent)
|
||||
def on_flow_finished(source, event):
|
||||
self._handle_trace_event("flow_finished", source, event)
|
||||
self.batch_manager.finalize_batch()
|
||||
if self.batch_manager.batch_owner_type == "flow":
|
||||
self.batch_manager.finalize_batch()
|
||||
|
||||
@event_bus.on(FlowPlotEvent)
|
||||
def on_flow_plot(source, event):
|
||||
@@ -166,7 +180,8 @@ class TraceCollectionListener(BaseEventListener):
|
||||
@event_bus.on(CrewKickoffCompletedEvent)
|
||||
def on_crew_completed(source, event):
|
||||
self._handle_trace_event("crew_kickoff_completed", source, event)
|
||||
self.batch_manager.finalize_batch()
|
||||
if self.batch_manager.batch_owner_type == "crew":
|
||||
self.batch_manager.finalize_batch()
|
||||
|
||||
@event_bus.on(CrewKickoffFailedEvent)
|
||||
def on_crew_failed(source, event):
|
||||
@@ -218,7 +233,7 @@ class TraceCollectionListener(BaseEventListener):
|
||||
self._handle_trace_event("llm_guardrail_completed", source, event)
|
||||
|
||||
def _register_action_event_handlers(self, event_bus):
|
||||
"""Register handlers for action events (LLM calls, tool usage, memory)"""
|
||||
"""Register handlers for action events (LLM calls, tool usage)"""
|
||||
|
||||
@event_bus.on(LLMCallStartedEvent)
|
||||
def on_llm_call_started(source, event):
|
||||
@@ -289,6 +304,9 @@ class TraceCollectionListener(BaseEventListener):
|
||||
"crewai_version": get_crewai_version(),
|
||||
}
|
||||
|
||||
self.batch_manager.batch_owner_type = "crew"
|
||||
self.batch_manager.batch_owner_id = getattr(source, "id", str(uuid.uuid4()))
|
||||
|
||||
self._initialize_batch(user_context, execution_metadata)
|
||||
|
||||
def _initialize_flow_batch(self, source: Any, event: Any):
|
||||
@@ -301,6 +319,9 @@ class TraceCollectionListener(BaseEventListener):
|
||||
"execution_type": "flow",
|
||||
}
|
||||
|
||||
self.batch_manager.batch_owner_type = "flow"
|
||||
self.batch_manager.batch_owner_id = getattr(source, "id", str(uuid.uuid4()))
|
||||
|
||||
self._initialize_batch(user_context, execution_metadata)
|
||||
|
||||
def _initialize_batch(
|
||||
@@ -358,12 +379,44 @@ class TraceCollectionListener(BaseEventListener):
|
||||
return {
|
||||
"task_description": event.task.description,
|
||||
"expected_output": event.task.expected_output,
|
||||
"task_name": event.task.name,
|
||||
"task_name": event.task.name or event.task.description,
|
||||
"context": event.context,
|
||||
"agent": source.agent.role,
|
||||
"agent_role": source.agent.role,
|
||||
"task_id": str(event.task.id),
|
||||
}
|
||||
elif event_type == "task_completed":
|
||||
return {
|
||||
"task_description": event.task.description if event.task else None,
|
||||
"task_name": event.task.name or event.task.description
|
||||
if event.task
|
||||
else None,
|
||||
"task_id": str(event.task.id) if event.task else None,
|
||||
"output_raw": event.output.raw if event.output else None,
|
||||
"output_format": str(event.output.output_format)
|
||||
if event.output
|
||||
else None,
|
||||
"agent_role": event.output.agent if event.output else None,
|
||||
}
|
||||
elif event_type == "agent_execution_started":
|
||||
return {
|
||||
"agent_role": event.agent.role,
|
||||
"agent_goal": event.agent.goal,
|
||||
"agent_backstory": event.agent.backstory,
|
||||
}
|
||||
elif event_type == "agent_execution_completed":
|
||||
return {
|
||||
"agent_role": event.agent.role,
|
||||
"agent_goal": event.agent.goal,
|
||||
"agent_backstory": event.agent.backstory,
|
||||
}
|
||||
elif event_type == "llm_call_started":
|
||||
return self._safe_serialize_to_dict(event)
|
||||
event_data = self._safe_serialize_to_dict(event)
|
||||
event_data["task_name"] = (
|
||||
event.task_name or event.task_description
|
||||
if hasattr(event, "task_name") and event.task_name
|
||||
else None
|
||||
)
|
||||
return event_data
|
||||
elif event_type == "llm_call_completed":
|
||||
return self._safe_serialize_to_dict(event)
|
||||
else:
|
||||
|
||||
@@ -13,26 +13,14 @@ class LLMEventBase(BaseEvent):
|
||||
agent_id: Optional[str] = None
|
||||
agent_role: Optional[str] = None
|
||||
|
||||
from_task: Optional[Any] = None
|
||||
from_agent: Optional[Any] = None
|
||||
|
||||
def __init__(self, **data):
|
||||
super().__init__(**data)
|
||||
self._set_agent_params(data)
|
||||
self._set_task_params(data)
|
||||
|
||||
def _set_agent_params(self, data: Dict[str, Any]):
|
||||
task = data.get("from_task", None)
|
||||
agent = task.agent if task else data.get("from_agent", None)
|
||||
|
||||
if not agent:
|
||||
return
|
||||
|
||||
self.agent_id = agent.id
|
||||
self.agent_role = agent.role
|
||||
|
||||
def _set_task_params(self, data: Dict[str, Any]):
|
||||
if "from_task" in data and (task := data["from_task"]):
|
||||
self.task_id = task.id
|
||||
self.task_name = task.name
|
||||
|
||||
|
||||
class LLMCallType(Enum):
|
||||
"""Type of LLM call being made"""
|
||||
|
||||
@@ -3,7 +3,24 @@ from typing import Any, Dict, Optional
|
||||
from crewai.utilities.events.base_events import BaseEvent
|
||||
|
||||
|
||||
class MemoryQueryStartedEvent(BaseEvent):
|
||||
class MemoryBaseEvent(BaseEvent):
|
||||
"""Base event for memory operations"""
|
||||
|
||||
type: str
|
||||
task_id: Optional[str] = None
|
||||
task_name: Optional[str] = None
|
||||
from_task: Optional[Any] = None
|
||||
from_agent: Optional[Any] = None
|
||||
agent_role: Optional[str] = None
|
||||
agent_id: Optional[str] = None
|
||||
|
||||
def __init__(self, **data):
|
||||
super().__init__(**data)
|
||||
self._set_agent_params(data)
|
||||
self._set_task_params(data)
|
||||
|
||||
|
||||
class MemoryQueryStartedEvent(MemoryBaseEvent):
|
||||
"""Event emitted when a memory query is started"""
|
||||
|
||||
type: str = "memory_query_started"
|
||||
@@ -12,7 +29,7 @@ class MemoryQueryStartedEvent(BaseEvent):
|
||||
score_threshold: Optional[float] = None
|
||||
|
||||
|
||||
class MemoryQueryCompletedEvent(BaseEvent):
|
||||
class MemoryQueryCompletedEvent(MemoryBaseEvent):
|
||||
"""Event emitted when a memory query is completed successfully"""
|
||||
|
||||
type: str = "memory_query_completed"
|
||||
@@ -23,7 +40,7 @@ class MemoryQueryCompletedEvent(BaseEvent):
|
||||
query_time_ms: float
|
||||
|
||||
|
||||
class MemoryQueryFailedEvent(BaseEvent):
|
||||
class MemoryQueryFailedEvent(MemoryBaseEvent):
|
||||
"""Event emitted when a memory query fails"""
|
||||
|
||||
type: str = "memory_query_failed"
|
||||
@@ -33,7 +50,7 @@ class MemoryQueryFailedEvent(BaseEvent):
|
||||
error: str
|
||||
|
||||
|
||||
class MemorySaveStartedEvent(BaseEvent):
|
||||
class MemorySaveStartedEvent(MemoryBaseEvent):
|
||||
"""Event emitted when a memory save operation is started"""
|
||||
|
||||
type: str = "memory_save_started"
|
||||
@@ -42,7 +59,7 @@ class MemorySaveStartedEvent(BaseEvent):
|
||||
agent_role: Optional[str] = None
|
||||
|
||||
|
||||
class MemorySaveCompletedEvent(BaseEvent):
|
||||
class MemorySaveCompletedEvent(MemoryBaseEvent):
|
||||
"""Event emitted when a memory save operation is completed successfully"""
|
||||
|
||||
type: str = "memory_save_completed"
|
||||
@@ -52,7 +69,7 @@ class MemorySaveCompletedEvent(BaseEvent):
|
||||
save_time_ms: float
|
||||
|
||||
|
||||
class MemorySaveFailedEvent(BaseEvent):
|
||||
class MemorySaveFailedEvent(MemoryBaseEvent):
|
||||
"""Event emitted when a memory save operation fails"""
|
||||
|
||||
type: str = "memory_save_failed"
|
||||
@@ -62,14 +79,14 @@ class MemorySaveFailedEvent(BaseEvent):
|
||||
error: str
|
||||
|
||||
|
||||
class MemoryRetrievalStartedEvent(BaseEvent):
|
||||
class MemoryRetrievalStartedEvent(MemoryBaseEvent):
|
||||
"""Event emitted when memory retrieval for a task prompt starts"""
|
||||
|
||||
type: str = "memory_retrieval_started"
|
||||
task_id: Optional[str] = None
|
||||
|
||||
|
||||
class MemoryRetrievalCompletedEvent(BaseEvent):
|
||||
class MemoryRetrievalCompletedEvent(MemoryBaseEvent):
|
||||
"""Event emitted when memory retrieval for a task prompt completes successfully"""
|
||||
|
||||
type: str = "memory_retrieval_completed"
|
||||
|
||||
@@ -1,16 +1,34 @@
|
||||
from crewai.utilities.events.base_events import BaseEvent
|
||||
from typing import Any, Optional
|
||||
|
||||
|
||||
class AgentReasoningStartedEvent(BaseEvent):
|
||||
class ReasoningEvent(BaseEvent):
|
||||
"""Base event for reasoning events."""
|
||||
|
||||
type: str
|
||||
attempt: int = 1
|
||||
agent_role: str
|
||||
task_id: str
|
||||
task_name: Optional[str] = None
|
||||
from_task: Optional[Any] = None
|
||||
agent_id: Optional[str] = None
|
||||
from_agent: Optional[Any] = None
|
||||
|
||||
def __init__(self, **data):
|
||||
super().__init__(**data)
|
||||
self._set_task_params(data)
|
||||
self._set_agent_params(data)
|
||||
|
||||
|
||||
class AgentReasoningStartedEvent(ReasoningEvent):
|
||||
"""Event emitted when an agent starts reasoning about a task."""
|
||||
|
||||
type: str = "agent_reasoning_started"
|
||||
agent_role: str
|
||||
task_id: str
|
||||
attempt: int = 1 # The current reasoning/refinement attempt
|
||||
|
||||
|
||||
class AgentReasoningCompletedEvent(BaseEvent):
|
||||
class AgentReasoningCompletedEvent(ReasoningEvent):
|
||||
"""Event emitted when an agent finishes its reasoning process."""
|
||||
|
||||
type: str = "agent_reasoning_completed"
|
||||
@@ -18,14 +36,12 @@ class AgentReasoningCompletedEvent(BaseEvent):
|
||||
task_id: str
|
||||
plan: str
|
||||
ready: bool
|
||||
attempt: int = 1
|
||||
|
||||
|
||||
class AgentReasoningFailedEvent(BaseEvent):
|
||||
class AgentReasoningFailedEvent(ReasoningEvent):
|
||||
"""Event emitted when the reasoning process fails."""
|
||||
|
||||
type: str = "agent_reasoning_failed"
|
||||
agent_role: str
|
||||
task_id: str
|
||||
error: str
|
||||
attempt: int = 1
|
||||
@@ -9,17 +9,24 @@ class ToolUsageEvent(BaseEvent):
|
||||
|
||||
agent_key: Optional[str] = None
|
||||
agent_role: Optional[str] = None
|
||||
agent_id: Optional[str] = None
|
||||
tool_name: str
|
||||
tool_args: Dict[str, Any] | str
|
||||
tool_class: Optional[str] = None
|
||||
run_attempts: int | None = None
|
||||
delegations: int | None = None
|
||||
agent: Optional[Any] = None
|
||||
task_name: Optional[str] = None
|
||||
task_id: Optional[str] = None
|
||||
from_task: Optional[Any] = None
|
||||
from_agent: Optional[Any] = None
|
||||
|
||||
model_config = {"arbitrary_types_allowed": True}
|
||||
|
||||
def __init__(self, **data):
|
||||
super().__init__(**data)
|
||||
self._set_agent_params(data)
|
||||
self._set_task_params(data)
|
||||
# Set fingerprint data from the agent
|
||||
if self.agent and hasattr(self.agent, "fingerprint") and self.agent.fingerprint:
|
||||
self.source_fingerprint = self.agent.fingerprint.uuid_str
|
||||
|
||||
@@ -18,17 +18,20 @@ from crewai.utilities.events.reasoning_events import (
|
||||
|
||||
class ReasoningPlan(BaseModel):
|
||||
"""Model representing a reasoning plan for a task."""
|
||||
|
||||
plan: str = Field(description="The detailed reasoning plan for the task.")
|
||||
ready: bool = Field(description="Whether the agent is ready to execute the task.")
|
||||
|
||||
|
||||
class AgentReasoningOutput(BaseModel):
|
||||
"""Model representing the output of the agent reasoning process."""
|
||||
|
||||
plan: ReasoningPlan = Field(description="The reasoning plan for the task.")
|
||||
|
||||
|
||||
class ReasoningFunction(BaseModel):
|
||||
"""Model for function calling with reasoning."""
|
||||
|
||||
plan: str = Field(description="The detailed reasoning plan for the task.")
|
||||
ready: bool = Field(description="Whether the agent is ready to execute the task.")
|
||||
|
||||
@@ -38,6 +41,7 @@ class AgentReasoning:
|
||||
Handles the agent reasoning process, enabling an agent to reflect and create a plan
|
||||
before executing a task.
|
||||
"""
|
||||
|
||||
def __init__(self, task: Task, agent: Agent):
|
||||
if not task or not agent:
|
||||
raise ValueError("Both task and agent must be provided.")
|
||||
@@ -63,6 +67,7 @@ class AgentReasoning:
|
||||
agent_role=self.agent.role,
|
||||
task_id=str(self.task.id),
|
||||
attempt=1,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
except Exception:
|
||||
@@ -82,6 +87,7 @@ class AgentReasoning:
|
||||
plan=output.plan.plan,
|
||||
ready=output.plan.ready,
|
||||
attempt=1,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
except Exception:
|
||||
@@ -98,6 +104,7 @@ class AgentReasoning:
|
||||
task_id=str(self.task.id),
|
||||
error=str(e),
|
||||
attempt=1,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
except Exception:
|
||||
@@ -135,14 +142,16 @@ class AgentReasoning:
|
||||
system_prompt = self.i18n.retrieve("reasoning", "initial_plan").format(
|
||||
role=self.agent.role,
|
||||
goal=self.agent.goal,
|
||||
backstory=self.__get_agent_backstory()
|
||||
backstory=self.__get_agent_backstory(),
|
||||
)
|
||||
|
||||
response = self.llm.call(
|
||||
[
|
||||
{"role": "system", "content": system_prompt},
|
||||
{"role": "user", "content": reasoning_prompt}
|
||||
]
|
||||
{"role": "user", "content": reasoning_prompt},
|
||||
],
|
||||
from_task=self.task,
|
||||
from_agent=self.agent,
|
||||
)
|
||||
|
||||
return self.__parse_reasoning_response(str(response))
|
||||
@@ -170,6 +179,7 @@ class AgentReasoning:
|
||||
agent_role=self.agent.role,
|
||||
task_id=str(self.task.id),
|
||||
attempt=attempt + 1,
|
||||
from_task=self.task,
|
||||
),
|
||||
)
|
||||
except Exception:
|
||||
@@ -183,14 +193,16 @@ class AgentReasoning:
|
||||
system_prompt = self.i18n.retrieve("reasoning", "refine_plan").format(
|
||||
role=self.agent.role,
|
||||
goal=self.agent.goal,
|
||||
backstory=self.__get_agent_backstory()
|
||||
backstory=self.__get_agent_backstory(),
|
||||
)
|
||||
|
||||
response = self.llm.call(
|
||||
[
|
||||
{"role": "system", "content": system_prompt},
|
||||
{"role": "user", "content": refine_prompt}
|
||||
]
|
||||
{"role": "user", "content": refine_prompt},
|
||||
],
|
||||
from_task=self.task,
|
||||
from_agent=self.agent,
|
||||
)
|
||||
plan, ready = self.__parse_reasoning_response(str(response))
|
||||
|
||||
@@ -227,23 +239,23 @@ class AgentReasoning:
|
||||
"properties": {
|
||||
"plan": {
|
||||
"type": "string",
|
||||
"description": "The detailed reasoning plan for the task."
|
||||
"description": "The detailed reasoning plan for the task.",
|
||||
},
|
||||
"ready": {
|
||||
"type": "boolean",
|
||||
"description": "Whether the agent is ready to execute the task."
|
||||
}
|
||||
"description": "Whether the agent is ready to execute the task.",
|
||||
},
|
||||
},
|
||||
"required": ["plan", "ready"]
|
||||
}
|
||||
}
|
||||
"required": ["plan", "ready"],
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
try:
|
||||
system_prompt = self.i18n.retrieve("reasoning", prompt_type).format(
|
||||
role=self.agent.role,
|
||||
goal=self.agent.goal,
|
||||
backstory=self.__get_agent_backstory()
|
||||
backstory=self.__get_agent_backstory(),
|
||||
)
|
||||
|
||||
# Prepare a simple callable that just returns the tool arguments as JSON
|
||||
@@ -254,10 +266,12 @@ class AgentReasoning:
|
||||
response = self.llm.call(
|
||||
[
|
||||
{"role": "system", "content": system_prompt},
|
||||
{"role": "user", "content": prompt}
|
||||
{"role": "user", "content": prompt},
|
||||
],
|
||||
tools=[function_schema],
|
||||
available_functions={"create_reasoning_plan": _create_reasoning_plan},
|
||||
from_task=self.task,
|
||||
from_agent=self.agent,
|
||||
)
|
||||
|
||||
self.logger.debug(f"Function calling response: {response[:100]}...")
|
||||
@@ -270,30 +284,43 @@ class AgentReasoning:
|
||||
pass
|
||||
|
||||
response_str = str(response)
|
||||
return response_str, "READY: I am ready to execute the task." in response_str
|
||||
return (
|
||||
response_str,
|
||||
"READY: I am ready to execute the task." in response_str,
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
self.logger.warning(f"Error during function calling: {str(e)}. Falling back to text parsing.")
|
||||
self.logger.warning(
|
||||
f"Error during function calling: {str(e)}. Falling back to text parsing."
|
||||
)
|
||||
|
||||
try:
|
||||
system_prompt = self.i18n.retrieve("reasoning", prompt_type).format(
|
||||
role=self.agent.role,
|
||||
goal=self.agent.goal,
|
||||
backstory=self.__get_agent_backstory()
|
||||
backstory=self.__get_agent_backstory(),
|
||||
)
|
||||
|
||||
fallback_response = self.llm.call(
|
||||
[
|
||||
{"role": "system", "content": system_prompt},
|
||||
{"role": "user", "content": prompt}
|
||||
]
|
||||
{"role": "user", "content": prompt},
|
||||
],
|
||||
from_task=self.task,
|
||||
from_agent=self.agent,
|
||||
)
|
||||
|
||||
fallback_str = str(fallback_response)
|
||||
return fallback_str, "READY: I am ready to execute the task." in fallback_str
|
||||
return (
|
||||
fallback_str,
|
||||
"READY: I am ready to execute the task." in fallback_str,
|
||||
)
|
||||
except Exception as inner_e:
|
||||
self.logger.error(f"Error during fallback text parsing: {str(inner_e)}")
|
||||
return "Failed to generate a plan due to an error.", True # Default to ready to avoid getting stuck
|
||||
return (
|
||||
"Failed to generate a plan due to an error.",
|
||||
True,
|
||||
) # Default to ready to avoid getting stuck
|
||||
|
||||
def __get_agent_backstory(self) -> str:
|
||||
"""
|
||||
@@ -319,7 +346,7 @@ class AgentReasoning:
|
||||
backstory=self.__get_agent_backstory(),
|
||||
description=self.task.description,
|
||||
expected_output=self.task.expected_output,
|
||||
tools=available_tools
|
||||
tools=available_tools,
|
||||
)
|
||||
|
||||
def __format_available_tools(self) -> str:
|
||||
@@ -330,7 +357,7 @@ class AgentReasoning:
|
||||
str: Comma-separated list of tool names.
|
||||
"""
|
||||
try:
|
||||
return ', '.join([tool.name for tool in (self.task.tools or [])])
|
||||
return ", ".join([tool.name for tool in (self.task.tools or [])])
|
||||
except (AttributeError, TypeError):
|
||||
return "No tools available"
|
||||
|
||||
@@ -348,7 +375,7 @@ class AgentReasoning:
|
||||
role=self.agent.role,
|
||||
goal=self.agent.goal,
|
||||
backstory=self.__get_agent_backstory(),
|
||||
current_plan=current_plan
|
||||
current_plan=current_plan,
|
||||
)
|
||||
|
||||
def __parse_reasoning_response(self, response: str) -> Tuple[str, bool]:
|
||||
|
||||
@@ -508,4 +508,162 @@ interactions:
|
||||
- req_80eed127ea0361c637657470cf9b647e
|
||||
http_version: HTTP/1.1
|
||||
status_code: 200
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
|
||||
are a seasoned manager with a knack for getting the best out of your team.\nYou
|
||||
are also known for your ability to delegate work to the right people, and to
|
||||
ask the right questions to get the best out of your team.\nEven though you don''t
|
||||
perform tasks by yourself, you have a lot of experience in the field, which
|
||||
allows you to properly evaluate the work of your team members.\nYour personal
|
||||
goal is: Manage the team to complete the task in the best way possible.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
|
||||
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
|
||||
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
|
||||
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
|
||||
''str''}}\nTool Description: Delegate a specific task to one of the following
|
||||
coworkers: Researcher\nThe input to this tool should be the coworker, the task
|
||||
you want them to do, and ALL necessary context to execute the task, they know
|
||||
nothing about the task, so share absolutely everything you know, don''t reference
|
||||
things but instead explain them.\nTool Name: Ask question to coworker\nTool
|
||||
Arguments: {''question'': {''description'': ''The question to ask'', ''type'':
|
||||
''str''}, ''context'': {''description'': ''The context for the question'', ''type'':
|
||||
''str''}, ''coworker'': {''description'': ''The role/name of the coworker to
|
||||
ask'', ''type'': ''str''}}\nTool Description: Ask a specific question to one
|
||||
of the following coworkers: Researcher\nThe input to this tool should be the
|
||||
coworker, the question you have for them, and ALL necessary context to ask the
|
||||
question properly, they know nothing about the question, so share absolutely
|
||||
everything you know, don''t reference things but instead explain them.\n\nIMPORTANT:
|
||||
Use the following format in your response:\n\n```\nThought: you should always
|
||||
think about what to do\nAction: the action to take, only one name of [Delegate
|
||||
work to coworker, Ask question to coworker], just the name, exactly as it''s
|
||||
written.\nAction Input: the input to the action, just a simple JSON object,
|
||||
enclosed in curly braces, using \" to wrap keys and values.\nObservation: the
|
||||
result of the action\n```\n\nOnce all necessary information is gathered, return
|
||||
the following format:\n\n```\nThought: I now know the final answer\nFinal Answer:
|
||||
the final answer to the original input question\n```"}, {"role": "user", "content":
|
||||
"\nCurrent Task: Ask the researched to say hi!\n\nThis is the expected criteria
|
||||
for your final answer: Howdy!\nyou MUST return the actual complete content as
|
||||
the final answer, not a summary.\n\nBegin! This is VERY important to you, use
|
||||
the tools available and give your best Final Answer, your job depends on it!\n\nThought:"},
|
||||
{"role": "assistant", "content": "Thought: To complete the task, I need to ask
|
||||
the researcher to say \"Howdy!\" I will use the \"Ask question to coworker\"
|
||||
tool to instruct the researcher accordingly.\n\nAction: Ask question to coworker\nAction
|
||||
Input: {\"question\": \"Can you please say hi?\", \"context\": \"The expected
|
||||
greeting is: Howdy!\", \"coworker\": \"Researcher\"}\nObservation: Howdy!"}],
|
||||
"model": "gpt-4o", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '3318'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- _cfuvid=g371YzJ.yOdjD9dcZxJ8VI4huWlRJL2j8lbKDhE0qV8-1743463280779-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFJdj9MwEHzPr1j83KL0Ky15A6SKQ7oHdCAh4BS5ziYxdWxjbyjl1P9+
|
||||
stNrcnAn3Ysle3bGM7t7lwAwWbIcmGg4idaq6fvN7nrb/LrObm72/su3d8dP9uux83+3/mr5kU0C
|
||||
w+x+oqAH1mthWquQpNE9LBxywqA6W6+yWZbNs0UEWlOiCrTa0nRppvN0vpymm2manYmNkQI9y+F7
|
||||
AgBwF89gUZf4h+WQTh5eWvSe18jySxEAc0aFF8a9l564JjYZQGE0oY6uPzemqxvK4Qq0OcA+HNQg
|
||||
VFJzBVz7AzqAH3ob72/jPYcP5lAeX40lHVad5yGR7pQaAVxrQzx0JIa5PSOni31lauvMzv9DZZXU
|
||||
0jeFQ+6NDlY9GcsiekoAbmObukfJmXWmtVSQ2WP8bp3Oez02DGZAZ6szSIa4GrFm68kTekWJxKXy
|
||||
o0YzwUWD5UAdpsK7UpoRkIxS/+/mKe0+udT1S+QHQAi0hGVhHZZSPE48lDkMe/tc2aXL0TDz6H5L
|
||||
gQVJdGESJVa8U/1KMX/0hG1RSV2js072e1XZYpO+WWWrxULsWHJK7gEAAP//AwAfm3tPYAMAAA==
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 974f08878cb7239e-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 25 Aug 2025 23:57:43 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=veVLwO9uj9FvIBC_v55vfVlSdU8NHY.wdBapmpmtHn4-1756166263-1.0.1.1-LJ9bNXe6v06jg_mjVZp8vfvLT.9Hf8xUHOf2FempuntDnL5ogQXRuJvIJipz1trGr96_3WUCWlsexhQlkdtveEH6NbFMrm__Y61khA_IyPM;
|
||||
path=/; expires=Tue, 26-Aug-25 00:27:43 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=miqmxIsNDZtXvrtcyhpeSI3TjT_zcUas9Enn6gGtIsI-1756166263603-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '472'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '550'
|
||||
x-ratelimit-limit-project-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-project-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29999222'
|
||||
x-ratelimit-reset-project-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 1ms
|
||||
x-request-id:
|
||||
- req_4f627ee303b9429db117e9699b00656a
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
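The cassette above records a single chat-completions call made on behalf of the hierarchical Crew Manager. For orientation, the recorded request corresponds roughly to the following call with the `openai` Python client (a hedged sketch: the client construction and the truncated message bodies are assumptions; only the model, message roles, and stop sequence come from the recording):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Mirrors the recorded payload: system prompt for the Crew Manager, the task
# prompt, the prior assistant turn, and the ReAct-style stop sequence.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are Crew Manager. ..."},      # truncated
        {"role": "user", "content": "\nCurrent Task: ..."},              # truncated
        {"role": "assistant", "content": "Thought: ...\nAction: ..."},   # truncated
    ],
    stop=["\nObservation:"],
)
print(response.choices[0].message.content)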
@@ -498,4 +498,204 @@ interactions:
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
|
||||
are a seasoned manager with a knack for getting the best out of your team.\nYou
|
||||
are also known for your ability to delegate work to the right people, and to
|
||||
ask the right questions to get the best out of your team.\nEven though you don''t
|
||||
perform tasks by yourself, you have a lot of experience in the field, which
|
||||
allows you to properly evaluate the work of your team members.\nYour personal
|
||||
goal is: Manage the team to complete the task in the best way possible.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
|
||||
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
|
||||
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
|
||||
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
|
||||
''str''}}\nTool Description: Delegate a specific task to one of the following
|
||||
coworkers: First Agent\nThe input to this tool should be the coworker, the task
|
||||
you want them to do, and ALL necessary context to execute the task, they know
|
||||
nothing about the task, so share absolutely everything you know, don''t reference
|
||||
things but instead explain them.\nTool Name: Ask question to coworker\nTool
|
||||
Arguments: {''question'': {''description'': ''The question to ask'', ''type'':
|
||||
''str''}, ''context'': {''description'': ''The context for the question'', ''type'':
|
||||
''str''}, ''coworker'': {''description'': ''The role/name of the coworker to
|
||||
ask'', ''type'': ''str''}}\nTool Description: Ask a specific question to one
|
||||
of the following coworkers: First Agent\nThe input to this tool should be the
|
||||
coworker, the question you have for them, and ALL necessary context to ask the
|
||||
question properly, they know nothing about the question, so share absolutely
|
||||
everything you know, don''t reference things but instead explain them.\n\nIMPORTANT:
|
||||
Use the following format in your response:\n\n```\nThought: you should always
|
||||
think about what to do\nAction: the action to take, only one name of [Delegate
|
||||
work to coworker, Ask question to coworker], just the name, exactly as it''s
|
||||
written.\nAction Input: the input to the action, just a simple JSON object,
|
||||
enclosed in curly braces, using \" to wrap keys and values.\nObservation: the
|
||||
result of the action\n```\n\nOnce all necessary information is gathered, return
|
||||
the following format:\n\n```\nThought: I now know the final answer\nFinal Answer:
|
||||
the final answer to the original input question\n```"}, {"role": "user", "content":
|
||||
"\nCurrent Task: Process initial data\n\nThis is the expected criteria for your
|
||||
final answer: Initial analysis\nyou MUST return the actual complete content
|
||||
as the final answer, not a summary.\n\nBegin! This is VERY important to you,
|
||||
use the tools available and give your best Final Answer, your job depends on
|
||||
it!\n\nThought:"}, {"role": "assistant", "content": "Thought: I need to delegate
|
||||
the task of processing the initial data to the First Agent to ensure we have
|
||||
a thorough and accurate analysis. I will provide them with all the necessary
|
||||
details to complete this task effectively.\n\nAction: Delegate work to coworker\nAction
|
||||
Input: {\"task\": \"Process initial data\", \"context\": \"The task involves
|
||||
analyzing the initial data set we have received. This includes cleaning the
|
||||
data, categorizing it for analysis, identifying any trends or patterns, and
|
||||
summarizing the findings. The goal is to have a clear understanding of what
|
||||
the data indicates and any initial insights that can be drawn from it.\", \"coworker\":
|
||||
\"First Agent\"}\nObservation: To process the initial data set effectively,
|
||||
we will follow several key steps to ensure we clean, categorize, analyze, and
|
||||
summarize our findings comprehensively.\n\n1. **Data Cleaning**: \n - First,
|
||||
we will review the dataset for missing, inconsistent, or erroneous entries.
|
||||
\n - We will handle missing values by either imputing them based on surrounding
|
||||
data or removing records where necessary.\n - Additionally, we will standardize
|
||||
categorical variables to ensure consistency (e.g., ensuring all location names
|
||||
are spelled the same way).\n\n2. **Data Categorization**: \n - Next, we will
|
||||
categorize the data into relevant segments that will aid our analysis. \n -
|
||||
This involves grouping data points based on common characteristics, such as
|
||||
demographics, time periods, or any key performance indicators (KPIs) we are
|
||||
focusing on.\n\n3. **Trend and Pattern Identification**: \n - With the cleaned
|
||||
and categorized data, we will perform a detailed analysis to identify trends
|
||||
and patterns.\n - This will involve using statistical tools and visualizations
|
||||
to uncover relationships within the data. We will look at time series analysis,
|
||||
correlation coefficients, and significant outliers that may require further
|
||||
investigation.\n\n4. **Summarizing Findings**: \n - Finally, we will compile
|
||||
a summary of our findings, including both qualitative insights and quantitative
|
||||
metrics.\n - This summary should encapsulate the key trends identified, any
|
||||
notable patterns, and implications of these findings.\n - We will also document
|
||||
any limitations of the data and suggest areas for further research if necessary.\n\nBy
|
||||
completing these steps, we will not only have a clear understanding of what
|
||||
the data indicates but also provide actionable insights that can guide our next
|
||||
steps. This comprehensive analysis will serve as a solid foundation for any
|
||||
additional exploration or decision-making initiatives related to our project."}],
|
||||
"model": "gpt-4o", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '5593'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- _cfuvid=YeODa6MF5ug3OZUV6ob1dSrBKCM8BXbKkS77TIihYoE-1755550362828-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFXbbtxGDH33VxB6KRCsDV8T129pmgBBH5qiblO0DuzxDCWxHnFUklpn
|
||||
HeTfC472ljYF+rLAisPLOTwkPx0ANJSaK2hiHywOYz58dTn2H18/9T+d/KT8+Otr/uXFu6ffLs/k
|
||||
+rvfY7Nwj3L/J0bbeB3FMowZjQrP5igYDD3qyYuL56enJ+enF9UwlITZ3brRDs/L4enx6fnh8eXh
|
||||
8fO1Y18oojZX8McBAMCn+uslcsKPzRUcLzZfBlQNHTZX20cAjZTsX5qgSmqBrVnsjLGwIdeq7+7u
|
||||
bvi6L1PX2xW8BS6P8OA/1iO0xCFDYH1EueE39d/L+u8KrguMUiKq1qfEZBQypGABFA2wbTEaLTGv
|
||||
FvCI8Eg5w8QJxcIDguISJWSIQkYxZFDDUcEKIOskCAGcS8EeWWmJEDjklZIewQ3f8MkRPHv2ved6
|
||||
lTEwcffs2RW8X6dRC2JwvwJKyEbtiriDwAliEfGquIOBVIm7BRDHwk4Ssi2gCKBIYSyTArIJocIj
|
||||
WU9ccTo+RTuC654UiJclL1Eh9qV4OECyHgVoGCcLrgMY0PqSFNoim6SwDHlC9WSCQ1n6Jy+jagdB
|
||||
MBZJegRvJvFoQxGsHA6TmoPjFCTRE0IMhl2RSuAyCIX7jHOmiaktMpCtjpyv0x1fa5enWp2z9iNH
|
||||
hOg0YlpsQc5M3iMU6QLTEyYgtgIjihEjGyh2A7J5/4NBGyJlsmD7raoZOynT6BA3Ee+DYoLCoH0Q
|
||||
TBD7ICEaCqlRVMj0gJBwKJ2EsfcvRcBoQE9OJekCkPvA0YN6vYIZl8FhlBaIlbreFJKER67ozxz9
|
||||
tSCnqoJ3wQyF4e2sDopbKt6T9VtSHbIzsYAyCbQlTrqWV0+tuVT35fWAKzBPoTXHOOdQWFLwllnF
|
||||
FjJYKXl+siSdQl43Qo/gB1xtxVLzEMc8JZyhK1YpbrjdyTnPMlObEqEu4LGnjKDUcYXGBmWyTCi6
|
||||
bQAXw1Rl0k7mw6ZRJiOetXLubP08DUMQenJob4gTcafO0HWPEEstzE0+tbspSBLaOlwBtPqvZm3U
|
||||
AhgV/nLATsYSd30KuXCnlNDNbBv7gCYUdT1ptfRYlii140Pweax0L7ZcL2bmi0/ydhdlGmiexLVd
|
||||
p65DNWjn4QJBxSCxh0R1N3gvnIaXqUepCiueUnG9oropSGBDbwZYX8R35/6yKuwy3MzRN+rLIK9F
|
||||
pot5B47EPE+9T6kLDSO56+EQHjbbat2cUYrfGBj7oOi13d3d7W9zwXbS4MeEp5z3DIG5rKH7Hfmw
|
||||
tnzeXo5culHKvf7DtWmJSftbwaCF/UqolbGp1s8HAB/qhZq+ODrNKGUY7dbKA9Z0Jycn53PAZncU
|
||||
d+bTi43VioW853f2/HLxlZC3CS1Q1r0z18QQe0w7391NDFOismc42AP+73q+FnsGT9z9n/A7Q4w4
|
||||
GqbbUTBR/BLz7pmgd/S/nm2JrgU3irKkiLdGKN6MhG2Y8nzQG12p4XDbEncoo9B81dvx9vL424vn
|
||||
F2dn8b45+HzwNwAAAP//AwDhfkSS3ggAAA==
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 97544b3fd9c66894-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Tue, 26 Aug 2025 15:17:10 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=AK6x7s00CdjvAhZqoKc.oyU2huXbBJAB_qi1o9cIHkk-1756221430-1.0.1.1-s9cWi1kLPHCBoqRe8BhCYWgaKEG.LQvm0b0NNJkJrpuMMIAUz9sSqijPatK.t2wknR3Qo65.PTew2trnDH5_.mL1l4JewiW1VndksvCWngY;
|
||||
path=/; expires=Tue, 26-Aug-25 15:47:10 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=3NkIk1Ua5GwknkJHax_bb1dBUHU9Yobu11sjZ9yu7Rg-1756221430892-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '5563'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '5651'
|
||||
x-ratelimit-limit-project-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-project-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29998658'
|
||||
x-ratelimit-reset-project-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 2ms
|
||||
x-request-id:
|
||||
- req_8ee5ddbc01374cf487da8763d7dee507
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
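These recordings are VCR.py cassettes (`interactions`, `version: 1`) that the test suite replays instead of hitting api.openai.com. A minimal sketch of how such a cassette might be replayed in a test with `vcrpy` follows; the cassette directory, file name, and test body are assumptions for illustration and are not taken from this diff.

import vcr

# Serve HTTP responses from the recorded cassette; never call the live API.
my_vcr = vcr.VCR(cassette_library_dir="tests/cassettes", record_mode="none")

@my_vcr.use_cassette("test_manager_delegation.yaml")
def test_manager_delegation():
    ...  # placeholder: the real test would kick off the crew here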
@@ -1158,4 +1158,201 @@ interactions:
|
||||
- req_04364e8bad0fc80241f59550b9320360
|
||||
http_version: HTTP/1.1
|
||||
status_code: 200
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
|
||||
are a seasoned manager with a knack for getting the best out of your team.\nYou
|
||||
are also known for your ability to delegate work to the right people, and to
|
||||
ask the right questions to get the best out of your team.\nEven though you don''t
|
||||
perform tasks by yourself, you have a lot of experience in the field, which
|
||||
allows you to properly evaluate the work of your team members.\nYour personal
|
||||
goal is: Manage the team to complete the task in the best way possible.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
|
||||
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
|
||||
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
|
||||
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
|
||||
''str''}}\nTool Description: Delegate a specific task to one of the following
|
||||
coworkers: Senior Writer, Researcher, CEO\nThe input to this tool should be
|
||||
the coworker, the task you want them to do, and ALL necessary context to execute
|
||||
the task, they know nothing about the task, so share absolutely everything you
|
||||
know, don''t reference things but instead explain them.\nTool Name: Ask question
|
||||
to coworker\nTool Arguments: {''question'': {''description'': ''The question
|
||||
to ask'', ''type'': ''str''}, ''context'': {''description'': ''The context for
|
||||
the question'', ''type'': ''str''}, ''coworker'': {''description'': ''The role/name
|
||||
of the coworker to ask'', ''type'': ''str''}}\nTool Description: Ask a specific
|
||||
question to one of the following coworkers: Senior Writer, Researcher, CEO\nThe
|
||||
input to this tool should be the coworker, the question you have for them, and
|
||||
ALL necessary context to ask the question properly, they know nothing about
|
||||
the question, so share absolutely everything you know, don''t reference things
|
||||
but instead explain them.\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought:
|
||||
you should always think about what to do\nAction: the action to take, only one
|
||||
name of [Delegate work to coworker, Ask question to coworker], just the name,
|
||||
exactly as it''s written.\nAction Input: the input to the action, just a simple
|
||||
JSON object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
|
||||
the result of the action\n```\n\nOnce all necessary information is gathered,
|
||||
return the following format:\n\n```\nThought: I now know the final answer\nFinal
|
||||
Answer: the final answer to the original input question\n```"}, {"role": "user",
|
||||
"content": "\nCurrent Task: Write one amazing paragraph about AI.\n\nThis is
|
||||
the expected criteria for your final answer: A single paragraph with 4 sentences.\nyou
|
||||
MUST return the actual complete content as the final answer, not a summary.\n\nThis
|
||||
is the context you''re working with:\n\"Artificial Intelligence (AI) stands
|
||||
at the forefront of technological advancement, utilizing machine learning, neural
|
||||
networks, and natural language processing to replicate human cognitive functions.
|
||||
The transformative power of AI spans across industries, driving revolutionary
|
||||
changes from predictive diagnostics in healthcare to algorithmic trading in
|
||||
finance. By rapidly analyzing large datasets, AI reveals hidden patterns, enhances
|
||||
efficiency, and sparks unprecedented innovation. As we embrace this technology,
|
||||
it is crucial to prioritize ethical considerations and responsible development
|
||||
to fully leverage its potential while addressing associated risks.\"\n\nBegin!
|
||||
This is VERY important to you, use the tools available and give your best Final
|
||||
Answer, your job depends on it!\n\nThought:"}, {"role": "assistant", "content":
|
||||
"Thought: To craft an amazing paragraph about AI as described in the context,
|
||||
I should delegate this task to the Senior Writer and include all the detailed
|
||||
context provided.\n\nAction: Delegate work to coworker\nAction Input: {\"task\":
|
||||
\"Write one amazing paragraph about AI\", \"context\": \"Artificial Intelligence
|
||||
(AI) is a transformative technology that leverages machine learning, neural
|
||||
networks, and natural language processing to perform tasks that traditionally
|
||||
require human intelligence, such as visual perception, speech recognition, and
|
||||
decision-making. Its potential to revolutionize industries\u2014from healthcare
|
||||
with predictive diagnostics to finance with algorithmic trading\u2014underscores
|
||||
its profound impact on our daily lives and the global economy. With the ability
|
||||
to analyze vast amounts of data quickly and accurately, AI can unveil hidden
|
||||
patterns, drive efficiencies, and foster innovation at an unprecedented scale.
|
||||
As we move forward, the ethical and responsible development of AI will be paramount
|
||||
to harness its benefits while mitigating risks.\", \"coworker\": \"Senior Writer\"}\n\nObservation:
|
||||
The Senior Writer has received the task and context.\nObservation: Artificial
|
||||
Intelligence (AI) is a transformative technology that leverages machine learning,
|
||||
neural networks, and natural language processing to perform tasks that traditionally
|
||||
require human intelligence, such as visual perception, speech recognition, and
|
||||
decision-making. Its potential to revolutionize industries\u2014from healthcare
|
||||
with predictive diagnostics to finance with algorithmic trading\u2014underscores
|
||||
its profound impact on our daily lives and the global economy. With the ability
|
||||
to analyze vast amounts of data quickly and accurately, AI can unveil hidden
|
||||
patterns, drive efficiencies, and foster innovation at an unprecedented scale.
|
||||
As we move forward, the ethical and responsible development of AI will be paramount
|
||||
to harness its benefits while mitigating risks."}], "model": "gpt-4o", "stop":
|
||||
["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '5729'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- _cfuvid=Q23zZGhbuNaTNh.RPoM_1O4jWXLFM.KtSgSytn2NO.Q-1744492727869-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFTbbhtHDH33VxD71AKyIcu27PhNCNBCBYICRdACqQOZmuHuspolN0Ou
|
||||
VCUwkI/IF+ZLiln5orYp0JfB7pzhmXPIIT+dAFQcq1uoQoseuj6dvr5Zv1m+e/P6yvI+XL+bCl//
|
||||
+uN08cvPP81u3laTEqHrPyj4U9RZ0K5P5KxygEMmdCqs59dX8/P5fDafj0CnkVIJa3o/vdTT2XR2
|
||||
eTq9OZ3OHwNb5UBW3cLvJwAAn8a1SJRIf1a3MJ087XRkhg1Vt8+HAKqsqexUaMbmKF5NXsCg4iSj
|
||||
6vv7+zt52+rQtH4LSxDdwaYs3hLULJgAxXaUz+7kh/F3Mf7ewiI71xwYEyzFKSVuSALBd4vl98AG
|
||||
CJ5RrNbcofOWwCm0okmbPXiLDom2lLEhgw5Dy0KQCLOwNBMQGjImEPKd5o1NACWCoI+7CaUZsCHo
|
||||
swYyY2nAFXrK5TJwtI0drvCMkUs1MKU9ZPowcCZohw4F+Ej0BGwILaDBlm3AVLgC9SVyAtYThRYy
|
||||
BW2ED3tFTqTAxiqnHW5YmjNYukGvJbMlKa6QaatpKBH8kYAlDuaZyb5+/lJn7aAlTN4GzAQ79hb6
|
||||
TJHDmKzI2Iiac7DCVCpRkjsew9RoZm87DgeH0nz9/GWQSNmCZjLgoiRrrYNE4K7H4KACOmSIyGkP
|
||||
ibdko4tS5ybpGhNQUNFufwa/lVsKgGtO7PuiAAXT/iPBFs0BOx3EDbSGiI7wYeCwSfuREEMYMjql
|
||||
/QQWSwgoMMiWOEHLMZJAj+6UxSYQc7FKdXlGJIHpsdC1mlMGFtEtlvQBOow8faZAkcQpggVMdAYL
|
||||
gx1Bp1uCWvMOc5yM2slbDuPzjZDJehXjdSKItKWkfUfiRf9iCTtOCdYEPeaDseK3xSxkh1SuSagu
|
||||
H7uWE0HHzg16eXaZbWNnd3J/f3/cYJnqwbD0twwpHQEooj5aGlv7/SPy8NzMSZs+69r+EVrVLGzt
|
||||
KhOaSmlcc+2rEX04AXg/Do3hb3Og6rN2va9cNzRedz69fnUgrF7m1BF8efGIujqmI2A2m02+QbmK
|
||||
5MjJjiZPFTC0FF9iX8YUDpH1CDg5Mv5vPd/iPphnaf4P/QsQSidTXD1117Hnl2OZyiD/r2PPiR4F
|
||||
V0Z5y4FWzpRLMSLVOKTDjK1sb07dqmZpKPeZD4O27lc301dX86uLi7CuTh5O/gIAAP//AwAI5M9k
|
||||
cQYAAA==
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 974f089aacfe239e-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 25 Aug 2025 23:57:47 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=x3Z22TL7MiNrhhoqoBAkoerRf2slLEFNFLerhsr8uWg-1756166267-1.0.1.1-YsxFr.4IpdTvhEkob8OJGHggHZj5mRzRdK0Ta9PllabKVSUEp9sdbUxQpYatb5wW12dk9nglKUcebewHsL7RgYB0Y0BPnFRoyX0r5kn14HI;
|
||||
path=/; expires=Tue, 26-Aug-25 00:27:47 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=H4uvKyyXorF4TlJkoLerJYI2PABtQ6M4T.XxqO7FHCQ-1756166267474-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '1314'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '1380'
|
||||
x-ratelimit-limit-project-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-project-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29998623'
|
||||
x-ratelimit-reset-project-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 2ms
|
||||
x-request-id:
|
||||
- req_47e6588839144756828145be5a5a3b63
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
|
||||
@@ -716,4 +716,249 @@ interactions:
|
||||
- req_04659f093830c14dbc5342c119533861
|
||||
http_version: HTTP/1.1
|
||||
status_code: 200
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
|
||||
are a seasoned manager with a knack for getting the best out of your team.\nYou
|
||||
are also known for your ability to delegate work to the right people, and to
|
||||
ask the right questions to get the best out of your team.\nEven though you don''t
|
||||
perform tasks by yourself, you have a lot of experience in the field, which
|
||||
allows you to properly evaluate the work of your team members.\nYour personal
|
||||
goal is: Manage the team to complete the task in the best way possible.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
|
||||
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
|
||||
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
|
||||
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
|
||||
''str''}}\nTool Description: Delegate a specific task to one of the following
|
||||
coworkers: Researcher, Senior Writer\nThe input to this tool should be the coworker,
|
||||
the task you want them to do, and ALL necessary context to execute the task,
|
||||
they know nothing about the task, so share absolutely everything you know, don''t
|
||||
reference things but instead explain them.\nTool Name: Ask question to coworker\nTool
|
||||
Arguments: {''question'': {''description'': ''The question to ask'', ''type'':
|
||||
''str''}, ''context'': {''description'': ''The context for the question'', ''type'':
|
||||
''str''}, ''coworker'': {''description'': ''The role/name of the coworker to
|
||||
ask'', ''type'': ''str''}}\nTool Description: Ask a specific question to one
|
||||
of the following coworkers: Researcher, Senior Writer\nThe input to this tool
|
||||
should be the coworker, the question you have for them, and ALL necessary context
|
||||
to ask the question properly, they know nothing about the question, so share
|
||||
absolutely everything you know, don''t reference things but instead explain
|
||||
them.\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought:
|
||||
you should always think about what to do\nAction: the action to take, only one
|
||||
name of [Delegate work to coworker, Ask question to coworker], just the name,
|
||||
exactly as it''s written.\nAction Input: the input to the action, just a simple
|
||||
JSON object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
|
||||
the result of the action\n```\n\nOnce all necessary information is gathered,
|
||||
return the following format:\n\n```\nThought: I now know the final answer\nFinal
|
||||
Answer: the final answer to the original input question\n```"}, {"role": "user",
|
||||
"content": "\nCurrent Task: Come up with a list of 5 interesting ideas to explore
|
||||
for an article, then write one amazing paragraph highlight for each idea that
|
||||
showcases how good an article about this topic could be. Return the list of
|
||||
ideas with their paragraph and your notes.\n\nThis is the expected criteria
|
||||
for your final answer: 5 bullet points with a paragraph for each idea.\nyou
|
||||
MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
|
||||
This is VERY important to you, use the tools available and give your best Final
|
||||
Answer, your job depends on it!\n\nThought:"}, {"role": "assistant", "content":
|
||||
"```\nThought: To come up with interesting article ideas and write compelling
|
||||
highlights, I should first generate a list of potential topics. The Researcher
|
||||
can help with gathering current trends or interesting topics that might be engaging.
|
||||
Then, I can delegate writing the paragraph highlights to the Senior Writer.\n\nAction:
|
||||
Ask question to coworker\nAction Input: {\"question\": \"What are some current
|
||||
trending topics or subjects that are interesting to explore for an article?\",
|
||||
\"context\": \"I need to generate a list of 5 interesting ideas to explore for
|
||||
an article, each paired with an amazing paragraph highlight. Current trends
|
||||
or unusual insights could be a good source of inspiration for these ideas.\",
|
||||
\"coworker\": \"Researcher\"}\nObservation: **1. The Rise of Autonomous AI Agents
|
||||
in Daily Life** \nAs artificial intelligence technology progresses, the integration
|
||||
of autonomous AI agents into everyday life becomes increasingly prominent. These
|
||||
agents, capable of making decisions without human intervention, are reshaping
|
||||
industries from healthcare to finance. Exploring case studies where autonomous
|
||||
AI has successfully decreased operational costs or improved efficiency can reveal
|
||||
not only the benefits but also the ethical implications of delegating decision-making
|
||||
to machines. This topic offers an exciting opportunity to dive into the AI landscape,
|
||||
showcasing current developments such as AI assistants and autonomous vehicles.\n\n**2.
|
||||
Ethical Implications of Generative AI in Creative Industries** \nThe surge
|
||||
of generative AI tools in creative fields, such as art, music, and writing,
|
||||
has sparked a heated debate about authorship and originality. This article could
|
||||
investigate how these tools are being used by artists and creators, examining
|
||||
both the potential for innovation and the risk of devaluing traditional art
|
||||
forms. Highlighting perspectives from creators, legal experts, and ethicists
|
||||
could provide a comprehensive overview of the challenges faced, including copyright
|
||||
concerns and the emotional impact on human artists. This discussion is vital
|
||||
as the creative landscape evolves alongside technological advancements, making
|
||||
it ripe for exploration.\n\n**3. AI in Climate Change Mitigation: Current Solutions
|
||||
and Future Potential** \nAs the world grapples with climate change, AI technology
|
||||
is increasingly being harnessed to develop innovative solutions for sustainability.
|
||||
From predictive analytics that optimize energy consumption to machine learning
|
||||
algorithms that improve carbon capture methods, AI''s potential in environmental
|
||||
science is vast. This topic invites an exploration of existing AI applications
|
||||
in climate initiatives, with a focus on groundbreaking research and initiatives
|
||||
aimed at reducing humanity''s carbon footprint. Highlighting successful projects
|
||||
and technology partnerships can illustrate the positive impact AI can have on
|
||||
global climate efforts, inspiring further exploration and investment in this
|
||||
area.\n\n**4. The Future of Work: How AI is Reshaping Employment Landscapes** \nThe
|
||||
discussions around AI''s impact on the workforce are both urgent and complex,
|
||||
as advances in automation and machine learning continue to transform the job
|
||||
market. This article could delve into the current trends of AI-driven job displacement
|
||||
alongside opportunities for upskilling and the creation of new job roles. By
|
||||
examining case studies of companies that integrate AI effectively and the resulting
|
||||
workforce adaptations, readers can gain valuable insights into preparing for
|
||||
a future where humans and AI collaborate. This exploration highlights the importance
|
||||
of policies that promote workforce resilience in the face of change.\n\n**5.
|
||||
Decentralized AI: Exploring the Role of Blockchain in AI Development** \nAs
|
||||
blockchain technology sweeps through various sectors, its application in AI
|
||||
development presents a fascinating topic worth examining. Decentralized AI could
|
||||
address issues of data privacy, security, and democratization in AI models by
|
||||
allowing users to retain ownership of data while benefiting from AI''s capabilities.
|
||||
This article could analyze how decentralized networks are disrupting traditional
|
||||
AI development models, featuring innovative projects that harness the synergy
|
||||
between blockchain and AI. Highlighting potential pitfalls and the future landscape
|
||||
of decentralized AI could stimulate discussion among technologists, entrepreneurs,
|
||||
and policymakers alike.\n\nThese topics not only reflect current trends but
|
||||
also probe deeper into ethical and practical considerations, making them timely
|
||||
and relevant for contemporary audiences."}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '7865'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- _cfuvid=icM8L9.yRk22iqPJZhDQeVHgfWZOnl1YiBFMJmVO8_8-1743465529821-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//bFfbbhy5EX3fryjoZQFjJNiO7az1pvUlK8ABAsdBAqweXENWd5fFZhEs
|
||||
do9G+/NBkZyLtPsiaGZYzbqcc+r0Hz8BXLC/uIYLN2FxcwqXH37Zfnz/6N/e/qBZCf3+/X/e3H+5
|
||||
j/8rMYwXG4uQ7Q9y5RB15WROgQpLbD+7TFjInvrq72/fvXr39t379/WHWTwFCxtTuXwjl69fvn5z
|
||||
+fKXy5fveuAk7EgvruH3nwAA/qh/LcXo6eHiGl5uDt/MpIojXVwfDwFcZAn2zQWqshaM5WJz+tFJ
|
||||
LBRr1t+/f7+L3yZZxqlcwy1E2cGEKwFCYC0gA3AslEkLxxGKJHYKGD2UiThDwoxjxjTBxOMUeJyK
|
||||
QpmwwExU7BC4zIUyIwySYaRIGeuzKI442j+YC7tAwJ5Qr+7iXfzMEQPcRN1Rvoa7+OoKXrz4NhF8
|
||||
ZSXL6WYpEmWWReHmFm5GikWBI3xEDnv4wgO9eAFwFwHgRusFAzvGUIsJgUeKjqCQm6IEGfeQsoyZ
|
||||
VEk3NWk7N1qiEu0+fHIfHu4rArRS3nvcQ+CBYEtOZrKfbPLKcQz12TNHiuUKvk2k1OM34DDhNtSC
|
||||
Zry3VnhyrCxRYcdlkqXAtMwY2wxWipbPBjATZNIJk8Vw9IuWzKQwZJlhIgxlcnaoCAwcMTq6gk8P
|
||||
KUi2AIdKoGXxFrKbKNOz+iZU0MU5Uh2WEPaWlpVDHiRR6woGcKJFQTLwnLKs5IEGazNFtweHETKt
|
||||
hAGiFJAY9rWxW4o0cFHYLgUwqNRvqUzsbDxzCuzqBWpt8RRobHA5tOayt6oIzOgmjqTWV9YGTpBh
|
||||
oGwQBXpwXEMlJclliVz2FuZ5pTY8u/rmFgJGrw4TbUAn2bk6OHBLzhQLeFopSJrrzHVxE2BDwYFa
|
||||
jQ9nLVxpMkA3LL827H7q9d0+q+8fnQ5rzYMjfDDBsI+3x6kekWwE0CWPFTDjk8giEioB3CF+YApe
|
||||
N8eEMZcNzIuy29R0d7n2ZtOGnTDfkwc08BTy4GmLhQC3BkFcyiRZJ041UjKPxk8u+974A4GdLMED
|
||||
x9XUYrQHTLKzJiv1DA2UW7LuLoam7b7Gam9hTV6yboAecOZo57ZSpjqnJKZZxmHTEY5R1kbPLkaQ
|
||||
We8bZlYMS4VIRs8drJiLBc56Bb8dlMrOJMqayFnPOn9OWRj2AtBDomx0tYsqUGvCrVgDPnuTS5P+
|
||||
TBNFtfbLSnll2llCVQQnDIHiaJegI78xiQiLr0CTtM+WDziJjnI8CizQLD1/nhM641FXhN63PgHP
|
||||
6hY1egArrFysYu3y2xFxRDnQKsHKxSBxVEv/qIQVpehX04y5qVTnGxfInKh2n6qW1PZXjP/NMN4B
|
||||
HHi2yX+YMI4E/+SKBJZ4DR86of4tYWkMsDI/L2XJBP86jPdcuC3/neTgwXZMCtR0EVy/xNVLNpUB
|
||||
Jy3nZ/rbADdhjibwvkpA4/QRRiuBHrOyCnXRghxxyw3nnw0ZKZPnChXAiGFfbBfWZSep8MyPBMbK
|
||||
cW+D1GVOFZ8noYJAmCuqMYySuUxzj+8KCg7zVqIthtqUmcokxuKb25/1jAEcgeLKWaLNCANoVV2q
|
||||
w0ctTwSR48qFuiIe52bApAdua93ULJ0pkwlJbzFHLlw7pJvWfFvkblGD4phliX6bqUEkkxJm05vo
|
||||
zwMBeTZ1KZDJL65Ow0DMZf+zHmoeRErKbFvyCUFPi8jIZm6r0+NseWMukapCaV08HIKJp+XfpEO5
|
||||
Tq2T6Oa2nqo+x6oIsrV11iumYZDKd46auG7MYclloie47zWa1NkMrGWlaSFh5cSbg2fpAJcB/iv5
|
||||
/hp+k10Ve4Wvxx3+aU5B9vVBXw40fSr8J4bbHdb2BoqTLnSy3A+SHTWlNe20hRFL09fqTh82dR80
|
||||
jtdZ2+qaT1X9Ca3mFzku1U+UjFFNSOt9P2QLs+2O8perwFM4X7WHjVoyRV/3383tpc+8UqxP8qwp
|
||||
YBOeM3E6re/qcSTDkvSeQ6hM6krZZK7hOtKuPs9MsF7Br/uzffLE/MhQe4LRPjQidt9XtyoNQ9sM
|
||||
YX/aMqRLqMA867XHVBp1NpAJvRkQQ9iIHMG2UbV5HLX549qQlClhQ5dkI1WDSfNjlR4N5wZWCQG3
|
||||
BjzqbT4H4hPjXTEuudhsrbwkwRxZr86cqJRzmGRSDl07GoRsPdXGVG2tUH5rUP5IjmLJGPiRLKvr
|
||||
M0dpcV+lOdlfg7h7N1nlHC37jyf/dK7t29O5MyrrjihZttleSWDFzOaolFxbyeYcz6SqX3Fm0ayv
|
||||
Wq0awoDqOOLpzcUKL9MJDVfwvKoOXPTe3gaAVZeGE48FIWVe0e03ls+SueybKfA0i7NpPJ4nVV/x
|
||||
tHqcEGTXTU82WYZMtl1Adl22jjfsJg5Hk1zBYYunMr2+Ldg64qPjfcq2upMem+nyT+qKVGzkzX95
|
||||
1ryk8twgPWtjy34DA2FZcnvPOO7KoxBXVPXNWlGg+7YAt1R2RPF8yg3Mz+3XcaclLgOGcPI+nRAn
|
||||
21K93V+OSwvPSzDWnhkhnMVKPDobNU23aEqZIi25W7r/AwAA//+MWMsOwiAQvPcrDGcPvlr1YwxB
|
||||
WCopAuFx8NB/N0ur4CvxPMuwC7vJzuQRuV3ZkGWDVgM2fdbFtWb2IFNgKNlN0roCmDF2Hn5U66cZ
|
||||
GZ/6XNveeXsOb0eJVEaFC8U9xRrU4iFaRzI6NovFKfsA6UXaExxgF2m0A+Tr1rt2PxGSYj0UuNsd
|
||||
ZzTayHQBNutVt/xCSQU2pg6VmUA44xcQ5WxxHlgSylZAUxX+mc837ql4Zfp/6AvAObgIgj4Wsrrm
|
||||
EuYBm/RX2POhc8Ik4L7OgUYFHj9DgGRJT7YJCbcQ4UqlMj34vKRgiHT0sDq2Xbvd8jNpxuYOAAD/
|
||||
/wMAWvKreUQSAAA=
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 974efac6faf5cf15-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 25 Aug 2025 23:48:25 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=yJXo3QqFAYYxljNSZBr47v.aphUMHM_ulx8v95ohSS4-1756165705-1.0.1.1-hhSjgTF5yD4vPuetC1LZGfC4KDF6RXEOSk8B4k4kOCq0XyfGNZkzt8kMn.F..aqu9T7UeYwKJprb.ZhscrKkokkqPou4V3aYXTAqIb2GZvE;
|
||||
path=/; expires=Tue, 26-Aug-25 00:18:25 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=yh2RjFXBfB0RrOxRjxqkpRIka8SH8Bv0F2PRUGTNLkY-1756165705944-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '6170'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '6192'
|
||||
x-ratelimit-limit-project-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-project-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29998089'
|
||||
x-ratelimit-reset-project-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 3ms
|
||||
x-request-id:
|
||||
- req_f470345c656b4bb7abb0ab09372eabef
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
|
||||
@@ -1384,4 +1384,351 @@ interactions:
|
||||
- req_7fdc097dc5895a4d2d953e8b1a2479d1
|
||||
http_version: HTTP/1.1
|
||||
status_code: 200
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
|
||||
are a seasoned manager with a knack for getting the best out of your team.\nYou
|
||||
are also known for your ability to delegate work to the right people, and to
|
||||
ask the right questions to get the best out of your team.\nEven though you don''t
|
||||
perform tasks by yourself, you have a lot of experience in the field, which
|
||||
allows you to properly evaluate the work of your team members.\nYour personal
|
||||
goal is: Manage the team to complete the task in the best way possible.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
|
||||
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
|
||||
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
|
||||
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
|
||||
''str''}}\nTool Description: Delegate a specific task to one of the following
|
||||
coworkers: Researcher, Senior Writer\nThe input to this tool should be the coworker,
|
||||
the task you want them to do, and ALL necessary context to execute the task,
|
||||
they know nothing about the task, so share absolutely everything you know, don''t
|
||||
reference things but instead explain them.\nTool Name: Ask question to coworker\nTool
|
||||
Arguments: {''question'': {''description'': ''The question to ask'', ''type'':
|
||||
''str''}, ''context'': {''description'': ''The context for the question'', ''type'':
|
||||
''str''}, ''coworker'': {''description'': ''The role/name of the coworker to
|
||||
ask'', ''type'': ''str''}}\nTool Description: Ask a specific question to one
|
||||
of the following coworkers: Researcher, Senior Writer\nThe input to this tool
|
||||
should be the coworker, the question you have for them, and ALL necessary context
|
||||
to ask the question properly, they know nothing about the question, so share
|
||||
absolutely everything you know, don''t reference things but instead explain
|
||||
them.\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought:
|
||||
you should always think about what to do\nAction: the action to take, only one
|
||||
name of [Delegate work to coworker, Ask question to coworker], just the name,
|
||||
exactly as it''s written.\nAction Input: the input to the action, just a simple
|
||||
JSON object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
|
||||
the result of the action\n```\n\nOnce all necessary information is gathered,
|
||||
return the following format:\n\n```\nThought: I now know the final answer\nFinal
|
||||
Answer: the final answer to the original input question\n```"}, {"role": "user",
|
||||
"content": "\nCurrent Task: Come up with a list of 5 interesting ideas to explore
|
||||
for an article, then write one amazing paragraph highlight for each idea that
|
||||
showcases how good an article about this topic could be. Return the list of
|
||||
ideas with their paragraph and your notes.\n\nThis is the expected criteria
|
||||
for your final answer: 5 bullet points with a paragraph for each idea.\nyou
|
||||
MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
|
||||
This is VERY important to you, use the tools available and give your best Final
|
||||
Answer, your job depends on it!\n\nThought:"}, {"role": "assistant", "content":
|
||||
"Thought: The first step is to brainstorm and research interesting ideas for
|
||||
the article. Since the task involves both research and writing, I will need
|
||||
to delegate the brainstorming of ideas to the Researcher and the writing of
|
||||
the paragraphs to the Senior Writer. \n\nAction: Delegate work to coworker\nAction
|
||||
Input: {\"task\": \"Come up with a list of 5 interesting ideas to explore for
|
||||
an article.\", \"context\": \"We need a list of 5 interesting ideas that would
|
||||
make compelling articles. The topics should be relevant, timely, and engaging
|
||||
for a broad audience.\", \"coworker\": \"Researcher\"}\nObservation: 1. **The
|
||||
Rise of AI in Healthcare: Transforming Patient Care and Diagnostics**\n \"Imagine
|
||||
a world where routine check-ups and complex diagnostics are seamlessly conducted
|
||||
by intelligent systems. AI in healthcare is rapidly transforming this vision
|
||||
into reality, enhancing patient care through early and accurate disease detection,
|
||||
personalized treatment plans, and continuous health monitoring. This revolution
|
||||
is not just a futuristic concept; it''s already happening with AI-driven technologies
|
||||
enabling doctors to make more informed decisions and offering patients a more
|
||||
proactive approach to their health. The ethical and privacy concerns, however,
|
||||
pose new challenges that the industry must navigate to ensure responsible advancement.\"\n\n2.
|
||||
**AI Agents in the Workplace: Boosting Productivity and Shaping the Future of
|
||||
Jobs**\n \"The modern workplace is being redefined by the integration of AI
|
||||
agents, intelligent systems designed to handle repetitive tasks, analyze vast
|
||||
amounts of data, and assist in complex decision-making processes. These innovations
|
||||
are boosting productivity, allowing employees to focus on creativity and strategic
|
||||
thinking. The rise of AI in the workspace heralds a future where human and machine
|
||||
collaboration leads to unparalleled efficiencies and novel job opportunities.
|
||||
However, this shift also brings with it challenges related to job displacement
|
||||
and the urgent need for reskilling the workforce to thrive in an AI-augmented
|
||||
environment.\"\n\n3. **Ethical AI: Ensuring Fairness, Accountability, and Transparency**\n \"As
|
||||
AI technology becomes more pervasive, the quest for ethical AI has become critical.
|
||||
Ensuring fairness, accountability, and transparency in AI systems is essential
|
||||
to prevent biases and promote trust. This involves creating robust frameworks
|
||||
and guidelines that govern AI development and deployment. Real-world case studies
|
||||
highlight both the triumphs and the tribulations of tech companies as they navigate
|
||||
the complex moral landscape that AI presents. As society grapples with these
|
||||
challenges, the push for ethical AI continues to drive innovation and policy-making
|
||||
in the tech industry.\"\n\n4. **AI-Powered Startups: Innovators Changing the
|
||||
Tech Landscape**\n \"In the bustling world of tech startups, AI-powered companies
|
||||
are emerging as the vanguards of innovation, disrupting traditional business
|
||||
models and reshaping entire industries. From revolutionizing financial services
|
||||
to pioneering advances in healthcare and beyond, these startups are at the forefront
|
||||
of technological progress. By leveraging machine learning and AI, they are solving
|
||||
complex problems and bringing novel solutions to market at an unprecedented
|
||||
pace. Investors are taking notice, pouring capital into these ventures, and
|
||||
fueling a new wave of growth and competition in the AI ecosystem.\"\n\n5. **The
|
||||
Future of AI in Entertainment: From Movies to Video Games**\n \"The entertainment
|
||||
industry is undergoing a seismic shift driven by AI technology. In movies, AI
|
||||
is revolutionizing special effects, enabling filmmakers to create stunning,
|
||||
hyper-realistic worlds. In video games, AI is enhancing player experiences by
|
||||
creating adaptive, intelligent characters and dynamic storylines that respond
|
||||
to player choices. This transformative power of AI promises to push the boundaries
|
||||
of creativity, offering audiences more immersive and interactive entertainment
|
||||
experiences. As the technology continues to evolve, the future of entertainment
|
||||
looks set to be more exciting and unpredictable than ever before.\""}, {"role":
|
||||
"assistant", "content": "Thought: I''ll proceed to delegate the writing of one
|
||||
amazing paragraph highlight for each of the identified ideas to the Senior Writer,
|
||||
ensuring each topic is adequately detailed to showcase its potential.\n\nAction:
|
||||
Delegate work to coworker\nAction Input: {\"task\": \"Write one amazing paragraph
|
||||
highlight for each of these 5 ideas that showcases how good an article about
|
||||
these topics could be.\", \"context\": \"Here are the 5 ideas:\\n1. The Rise
|
||||
of AI in Healthcare: Transforming Patient Care and Diagnostics - Explore how
|
||||
AI is revolutionizing healthcare through enhanced diagnostic tools, personalized
|
||||
treatment plans, and post-operative care. Discuss current achievements, potential
|
||||
future applications, and ethical considerations.\\n2. AI Agents in the Workplace:
|
||||
Boosting Productivity and Shaping the Future of Jobs - Examine the integration
|
||||
of AI agents in various industries. Highlight how these intelligent agents are
|
||||
improving efficiency, automating mundane tasks, and assisting in decision-making
|
||||
processes. Address potential job displacement and the need for upskilling the
|
||||
workforce.\\n3. Ethical AI: Ensuring Fairness, Accountability, and Transparency
|
||||
- Delve into the ethical implications of AI deployment. Analyze the frameworks
|
||||
and guidelines being proposed to ensure AI systems are fair, transparent, and
|
||||
accountable. Include case studies of ethical dilemmas faced by companies using
|
||||
AI.\\n4. AI-Powered Startups: Innovators Changing the Tech Landscape - Provide
|
||||
an overview of how AI-driven startups are disrupting traditional business models
|
||||
and industries. Profile some of the most innovative startups, their business
|
||||
models, and the unique challenges they face. Discuss the role of venture capital
|
||||
in fostering AI innovation.\\n5. The Future of AI in Entertainment: From Movies
|
||||
to Video Games - Investigate how AI is transforming the entertainment industry,
|
||||
from scriptwriting and special effects in movies to creating more immersive
|
||||
and intelligent experiences in video games. Offer predictions about future trends
|
||||
and possible impacts on content creation and consumption.\", \"coworker\": \"Senior
|
||||
Writer\"}\nObservation:\nObservation: 1. **The Rise of AI in Healthcare: Transforming
|
||||
Patient Care and Diagnostics**\n AI is undeniably revolutionizing the healthcare
|
||||
sector, introducing transformative changes in patient care and diagnostics.
|
||||
With the advent of advanced diagnostic tools powered by AI, healthcare professionals
|
||||
can now detect and monitor diseases with unprecedented accuracy and speed. Personalized
|
||||
treatment plans, which were once a distant dream, are becoming a reality as
|
||||
AI algorithms analyze vast amounts of patient data, tailoring treatments that
|
||||
are specific to individual needs. Post-operative care has also seen significant
|
||||
improvements, with AI-driven monitoring systems ensuring that patients receive
|
||||
timely and effective care. Current achievements like IBM Watson''s oncology
|
||||
platform or Google''s DeepMind in eye care are just the beginning, with future
applications promising to further enhance surgical robotics, predictive analytics,
and remote patient monitoring. However, alongside these advancements, ethical
considerations such as patient privacy, data security, and decision-making transparency
must be rigorously addressed to fully harness AI\u2019s potential in healthcare.\n\n2.
**AI Agents in the Workplace: Boosting Productivity and Shaping the Future of
Jobs**\n The integration of AI agents in the workplace marks a pivotal shift
in how businesses operate and compete. These intelligent agents are revolutionizing
various industries by automating mundane tasks, optimizing workflows, and providing
actionable insights from data analysis. For instance, AI chatbots in customer
service can handle routine inquiries, freeing human agents to tackle more complex
issues. In manufacturing, AI systems predict equipment failures, reducing downtime
and maintaining peak productivity. By enhancing operational efficiency, AI agents
not only boost productivity but also reshape the future of jobs. While the risk
of job displacement is a valid concern, it concurrently heralds opportunities
for upskilling and creating new roles that harness human creativity and strategic
thinking. Preparing the workforce for these changes is crucial, fostering a
symbiotic relationship between AI and human labor to drive innovation and economic
growth.\n\n3. **Ethical AI: Ensuring Fairness, Accountability, and Transparency**\n As
AI technologies become increasingly pervasive, ensuring their ethical deployment
is paramount. Ethical AI demands the formulation and implementation of robust
frameworks that enshrine fairness, accountability, and transparency. Such frameworks
are essential in mitigating biases embedded within AI systems, which can perpetuate
and even exacerbate inequality. Case studies, such as the controversy surrounding
facial recognition systems and their racial biases, highlight the pressing need
for ethical oversight. Transparency in how AI models make decisions, coupled
with accountability measures for their outcomes, establishes trust and integrity
in AI applications. Legislative frameworks from the EU\u2019s GDPR to the IEEE\u2019s
global initiatives for ethical AI provide the scaffolding for these principles.
Companies are now faced with the challenge of integrating these ethical considerations
into their development processes, ensuring that AI advancements benefit all
of society equitably.\n\n4. **AI-Powered Startups: Innovators Changing the Tech
Landscape**\n AI-powered startups are at the forefront of a technological
renaissance, disrupting traditional business models and reshaping industries.
These startups leverage AI to offer innovative solutions, whether it''s automating
financial trading with precision algorithms or enhancing cybersecurity through
predictive analytics. High-profile examples like OpenAI and UiPath illustrate
the transformative potential of AI-driven innovation. These companies face unique
challenges, from securing sufficient funding to navigating the complexities
of large-scale implementation. Venture capital plays a critical role in this
ecosystem, providing the necessary resources and support to turn bold ideas
into impactful realities. The synergy between AI innovation and entrepreneurial
spirit is fostering a dynamic landscape where the boundaries of technology and
commerce are continually expanding.\n\n5. **The Future of AI in Entertainment:
From Movies to Video Games**\n The entertainment industry is undergoing a
metamorphic change with the integration of AI technologies, paving the way for
more immersive and engaging experiences. In movies, AI streamlines the scriptwriting
process, enhances special effects, and even predicts box office success. Machine
learning models analyze vast datasets to create compelling narratives and optimize
marketing strategies. Video games have also been revolutionized by AI, with
non-player characters (NPCs) displaying more lifelike behaviors and adaptive
learning algorithms personalizing gaming experiences. Future trends indicate
a seamless blend of AI with augmented reality (AR) and virtual reality (VR),
creating interactive environments where the line between virtual and real worlds
blur. These advancements are not only transforming content creation but also
revolutionizing how audiences consume and interact with entertainment, predicting
a future where AI is an indispensable tool in the creative arsenal."}], "model":
"gpt-4o", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '14961'
content-type:
- application/json
cookie:
- _cfuvid=Dmt0PIbI1kDJNEjEUHh50Vv4Y9ZcWX6w2Uku_CIohuk-1751391377646-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.93.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.93.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAA3RXwW7cSA69z1cQvgQI2o0km80EvjmOM9OzE8CYeDdYbC5UiZIYl4oastSdzlzm
|
||||
N/b39ksWrJLanWD3YMOWqoqvyMf3qD9+ALjg9uIKLsKAOYxTvLx53bz/2wva/WP/9p+v8PrHt7n/
|
||||
5dme2y9B728uNr5Dms8U8rprG2ScImWWVF8HJczkpz7/8a+vnr969eLV6/JilJaib+unfPlSLl88
|
||||
e/Hy8tnry2evlo2DcCC7uIJ//QAA8Ef57RBTS18uruDZZn0ykhn2dHF1WgRwoRL9yQWasWVM+WLz
|
||||
+DJIypQK6vtB5n7IV7CDJAd48F95IOg4YQRMdiDdwg4G3BP0mAdSassKpd9n9n+4JTTA1EJQ7DK1
|
||||
EESVbJLUcuphQsVecRoMssBIlMt++jJRyOi5MpCuPMtoD9tP6VN6V8Jfl/BXn9LzLTx9ej8Q/MZG
|
||||
vvh6B5zgZ8KYh4BKV3CvmKwTHT3kHWamlOEGlQqyt4x9Essc7OnTTwmgnGAwp5YSYxOPoLSXODsc
|
||||
/upnOJ7hFACMQhbdAKes0s6hLFljYuY9QRgw9WSObFoAhBVA+whgCx85D+V8bPe+Sjr/C1Og84WQ
|
||||
RaLBJIeS8+YI17vNOaRJpSMzloTRIGAqJWwpU8gl6CiJsyi0bIRGBgcPPKdJKVBLyWuFIcyK4Vg2
|
||||
2ETUbuGO1PxQ/uq1dgqPDnOKmGwDh4HDAA4KJAUC9POdYtAq4bgBx9ZQkFIKBCWMnI+A5knH2Ity
|
||||
HkanDMbjV4I9WgYcZU65MGFNXosZN5CRo2hN94LEIA+YSxybKHBXkgWcWt5zO2OERNTaFu7E8qVM
|
||||
pEuBfMfgZI0mYEQJjPvk+x0+j5PKnkqETc3V9e6yVd5TWnPpOOxomUYDSjZXYI5mQW3gyfVomUeK
|
||||
Na/UdRROELZwM6v6DTEMTEtEiPxAsHvzHj5iNklPzNMrUfqjZz47z0AUfhLpIz0xeEs0vefUOt/o
|
||||
uNzOfz7PVlusoZ5T4tQvt+nmPPuaaYocls6bVEa2cguBblbvcKA0OBvBZu05YASVRpy7G5iUWq53
|
||||
KfWrT/2SSqNkOlXvMWFb+FkOtCfdAEZJvXFLjs9opf2ScspDiRYk+RpdINocBmfPevKkvMdw3BSC
|
||||
eGPOyvlYQbQU2DvicsSHU4tOqJTCEUZPTEOg3IvKbF6dtlUyc5779WM8woCayJys//nz395/rpWM
|
||||
0fP82H1Fpl64Ll3v4LovFeRU0v5R9GGKGOgK3oi3sitSEY3M+9IJqYUPA06rzLyrdZEOfpFmFSiX
|
||||
O06Z+pqGRfXwm0iHNRKMqA8GCBPvJWMEG7jLBbEcoJmN/UpkUJuhapKbFWXaeiirwWLkvhCzhnE2
|
||||
fa+Le1SW2SG0s2VlMpcmnLO4DKYexjm1mKqY2wZkyjzWrQ63i3JYCOPdxsUiMPj52ERHYdwP2aBT
|
||||
GWuFC8+MbQvvRH1Bds5sPB1uuo3UhITZsoykYKR7DlQUccDURgKVOXPy0920mGwDnRJ57GEeMa0X
|
||||
zgIZw0MkGEUJqp1/ATabybawSzBimjsMuXR+wbDKwdIZ4L44FcHskOOsHkxpsYxWDsmFoeozcsrI
|
||||
qbgk4YNn5MSSLbw5Lo3o7xcVc1l2OeHATunNGSeSZJAUj9A46b45C5o5V9FTsgEnqh5/ot1nadyW
|
||||
Bo71jbI9LM9d3AvFyo3YObbHyM6eFEjTBjiXv6ukef+QYmydapNonhNn50gnCvNkDxxjqXgZF6gy
|
||||
JtEBfGBZdH1twFqZuurUN5adv70r/sDpocjLndKEunZTYZlooBKz6szJmg2CzoExbqATy6TVo+w4
|
||||
NuwKB0qxys7AEzSUD+4SnuTULoAiNn6uQHEG4JRkX1u0SH2QJCMH6FUOeSgy8ReXidtF3K53V3C7
|
||||
Osc75HLXDVyH4AaIDceTmN2fidc6thQTzRSG5NZQus+t1nF4plzJ4xEm0j0a72lz7lLEetLYlqYo
|
||||
x7WoPqMVA97CI05oacTUWiWL6DjHx3uyd4ZvP6mTSuP62imO5DVYqknJBvXe6053xf9x13Oh3sIH
|
||||
l/yzk1yIXMBOQjxy5r6yp+Ey29DYUNtSW6yO01lnriOL68FEOlGeVwkkt3b6goG08Wec6Pe5jCtb
|
||||
uEEjsDy3RS5WE/Jc+Pzsk4La0S1SZa6TbofOLB8ApHfaSzqJQ7liKYDWRRX1Bgbuh+iKV06e3Ixq
|
||||
S1Bb+LvWq4TzddtvaLFK/PUOyieFwYgPdLJB20CQeYpLWr7LPIyE5gK1dgoryJydTu7GlrGJbIP3
|
||||
pXppS92LJfnmmuPzYWILv1LPFuu0dVa+ouV+v9u/F0/96e3db0VrB4Ld7e1tedhHaUpxOXM5wb5J
|
||||
wPVuMYwqURaw6yTWvJ/6fFJOgafoYn0j44TJO8TZ46Nxh2FNRCnjgDFS6osEnqy2dorR/5tGOFXg
|
||||
rNDSnqJUrZ9UQrHYzXdzoefobMqBhhJ17HIcPa5JYMrH4hqe72NRjJd1sLi8W4b/Dxk1z5Ndwa7K
|
||||
jajBjYvaqnr3FAb41ds14ESnj5zL9fPBlhNKMjCvPU2dyvIFcqYqZeSjhGxWrbZl03nKy0TV8mJE
|
||||
62Cxcq+OgbaMNo8TwjpinEBEHwexpyJmAtJ1pCcl3RPYMnKU3qUyk3J+Yudjhn+hptJKBZGPF15Y
|
||||
/7Yp1D//0hA9c9JwbEjXqRHy8F8AAAD//4xYS28bNxC++1cQOsWAbAR1HcS9BUXbXNIGAepbIHBX
|
||||
syuiFMkMSdU6+L8X35DcpWIXyJn74JAz34vhgF/Vtbfqo5kPN/BZYEZ60gC9qtT/CuQqL/xtPut0
|
||||
UMbaXOipmNlLd7jqSFFyzVes9NEOaVz6Fv2qsjPfctesIl38sepe2JFc5UBSUwWi5JXTpwaQBbJE
|
||||
yRQu9pOymme6iaOG5LqA8lv1SE6EwaiDgZYMVp/B+yObVN2ACDUwcAThFZDbdoIOv3SEgdAMax19
|
||||
5pFKg8QswkDmP7NTg7ctRJDZMsegxzRlW51j6x8Vz454Pves/D35usQUmBxlxknHYNgIwfVkvz87
|
||||
DYa2bVrQY1wubQCWa66ntEzEuQnmI0FYaC4kYFzWsAz0FLScvAzvfUsrVl1f8orfXCKG4MNh/6J+
|
||||
xzV+8if8LHn1aPbk1R/6SL0FoP6dNlLnll3w7EtNR0r66DkczFjFzopzL11Erx+2KujTop30WdBU
|
||||
1K9BtbFMBM521oI2CG4Y6rMpYqmgSGFY9KMFJlSQZhPSv+gbSNwCkdtmL2Nx70XQ0piqLRBOruMY
|
||||
1eCfAA9G3OiI92/VJ/hmR8qSZhHPC/x0gQKsQ6Si6UsMVwxP0Z9OM1eSwT+rSSk+imS3TWiiynI1
|
||||
M66mJGGipQc0YW+P1pRGzt55d4PZIcaVsB4TcVRv/vz8a7yusvpcds+krJlIcGWggz4Z4Ds2pvc6
|
||||
CHwstXawFpacBguzPr64ntqBiQk6DgHJKNJHRdwTkHuwhPqlLYpEyPOxREMtuHnz4cu1bOZkOGVt
|
||||
14XHL9fbVcajz1AltkvuZNi7QnnreKE1lvltnyusoS2UO4zDYDM3MLygzsLj1eWkPvCriWbdi3e9
|
||||
4bm0rxBLGppOGhDcnqsba9svx3Axd2vsIcNWjVMpqwSJ2snpxkAuio9FeNdserUwwI1ITtvbPoxl
|
||||
mnLUyIJdtrZb0M75mo8iBv5aV56X4Nf6ObAf4nevbibjTDzs4Aa8Q8gbkw8bWX2+UuqrBMz5IjPe
|
||||
IAYKaZf8PyS/++nd+4fywc2aaa/LD29/rqsJccO6cHf/cLd95ZO7PSHIi11KvRn1eKD9+u4aaeOC
|
||||
fLdw1RX+cj+vfbsUb9z8I59fF8aRQqL9rl13X/P6GBNC//97bDlo2fCmphG7ZIhxGXuadLYlj98U
|
||||
5txNxs3EEK8Syk9h9/7tw/27+7u7cdhcPV/9BwAA//8DAN7ucS6dGAAA
headers:
CF-RAY:
- 974f08a7bda1239e-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Mon, 25 Aug 2025 23:58:18 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=ivMv06nBtEBQA0RDYIILchv9UWNGtVY5W5EPjY_N5kM-1756166298-1.0.1.1-2Qisdm01V_T5uiCTx3ArSE5G4lPmSJXAC_FziI_DwOsWC_nN5OI4nlAAcT5w8A.DGB6SVkyNXoYzPqHQgSR2pUc8YECmxkJF6LXkXDhzC0A;
path=/; expires=Tue, 26-Aug-25 00:28:18 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=xP6HSAlowr0MDcDirVgpB.BcvTTqAlUD_fl6h1bu7io-1756166298177-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '29910'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '29994'
x-ratelimit-limit-project-requests:
- '10000'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-project-requests:
- '9999'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29996336'
x-ratelimit-reset-project-requests:
- 6ms
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 7ms
x-request-id:
- req_ae1d0ce3d5584bbba38ba76c69a0e541
status:
code: 200
message: OK
version: 1

@@ -1524,4 +1524,362 @@ interactions:
- req_55957b9cd106057878eea339ef651998
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
|
||||
are a seasoned manager with a knack for getting the best out of your team.\nYou
|
||||
are also known for your ability to delegate work to the right people, and to
|
||||
ask the right questions to get the best out of your team.\nEven though you don''t
|
||||
perform tasks by yourself, you have a lot of experience in the field, which
|
||||
allows you to properly evaluate the work of your team members.\nYour personal
|
||||
goal is: Manage the team to complete the task in the best way possible.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
|
||||
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
|
||||
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
|
||||
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
|
||||
''str''}}\nTool Description: Delegate a specific task to one of the following
|
||||
coworkers: Researcher, Senior Writer\nThe input to this tool should be the coworker,
|
||||
the task you want them to do, and ALL necessary context to execute the task,
|
||||
they know nothing about the task, so share absolutely everything you know, don''t
|
||||
reference things but instead explain them.\nTool Name: Ask question to coworker\nTool
|
||||
Arguments: {''question'': {''description'': ''The question to ask'', ''type'':
|
||||
''str''}, ''context'': {''description'': ''The context for the question'', ''type'':
|
||||
''str''}, ''coworker'': {''description'': ''The role/name of the coworker to
|
||||
ask'', ''type'': ''str''}}\nTool Description: Ask a specific question to one
|
||||
of the following coworkers: Researcher, Senior Writer\nThe input to this tool
|
||||
should be the coworker, the question you have for them, and ALL necessary context
|
||||
to ask the question properly, they know nothing about the question, so share
|
||||
absolutely everything you know, don''t reference things but instead explain
|
||||
them.\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought:
|
||||
you should always think about what to do\nAction: the action to take, only one
|
||||
name of [Delegate work to coworker, Ask question to coworker], just the name,
|
||||
exactly as it''s written.\nAction Input: the input to the action, just a simple
|
||||
JSON object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
|
||||
the result of the action\n```\n\nOnce all necessary information is gathered,
|
||||
return the following format:\n\n```\nThought: I now know the final answer\nFinal
|
||||
Answer: the final answer to the original input question\n```"}, {"role": "user",
|
||||
"content": "\nCurrent Task: Come up with a list of 5 interesting ideas to explore
|
||||
for an article, then write one amazing paragraph highlight for each idea that
|
||||
showcases how good an article about this topic could be. Return the list of
|
||||
ideas with their paragraph and your notes.\n\nThis is the expected criteria
|
||||
for your final answer: 5 bullet points with a paragraph for each idea.\nyou
|
||||
MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
|
||||
This is VERY important to you, use the tools available and give your best Final
|
||||
Answer, your job depends on it!\n\nThought:"}, {"role": "assistant", "content":
|
||||
"Thought: To come up with a list of 5 interesting article ideas and write a
|
||||
compelling paragraph for each one, I should first generate the ideas. For this,
|
||||
I will delegate the task to the Researcher and then use the Senior Writer to
|
||||
create the paragraphs.\n\nAction: Delegate work to coworker\nAction Input: {\"task\":
|
||||
\"Generate a list of 5 interesting ideas to explore for an article.\", \"context\":
|
||||
\"We need to come up with 5 interesting and unique article ideas that will captivate
|
||||
readers.\", \"coworker\": \"Researcher\"}\nObservation: 1. **The Rise of AI
|
||||
in Healthcare: Revolutionizing Diagnosis and Treatment**\n - Artificial intelligence
|
||||
is transforming the healthcare industry in unprecedented ways. From enhancing
|
||||
diagnostic accuracy with AI-driven tools to creating personalized treatment
|
||||
plans, AI is bridging gaps in medical care and bringing about a revolution.
|
||||
Imagine a world where diseases are diagnosed at lightning speed, and treatments
|
||||
are tailored to individual genetic makeups. This article will explore these
|
||||
groundbreaking advancements, featuring real-world applications and expert insights,
|
||||
revealing the future of healthcare driven by AI.\n\n2. **AI and Ethics: Navigating
|
||||
the Moral Complexity of Machine Intelligence**\n - As AI technology advances,
|
||||
ethical dilemmas emerge, challenging our understanding of morality and responsibility.
|
||||
This article delves into the complex world of AI ethics, examining pressing
|
||||
issues like algorithmic bias, privacy concerns, and the ethical responsibilities
|
||||
of developers. With input from ethicists, technologists, and policymakers, it
|
||||
offers a comprehensive look at the moral landscape of AI, guiding readers through
|
||||
the critical questions that shape the future of machine intelligence and its
|
||||
impact on society.\n\n3. **AI Agents in the Workplace: Enhancing Productivity
|
||||
or Eroding Jobs?**\n - The introduction of AI agents in the workplace presents
|
||||
a double-edged sword; while these technologies can vastly enhance productivity
|
||||
by automating routine tasks, they also raise concerns about job displacement.
|
||||
This article paints a balanced picture, drawing on data and expert forecasts
|
||||
to explore both sides of the argument. Are we heading towards a future where
|
||||
AI aids human creativity and efficiency, or are we on the brink of a job crisis?
|
||||
The answer may be a nuanced blend of both.\n\n4. **From Science Fiction to Reality:
|
||||
The Evolution of AI Agents in Popular Culture**\n - AI has long been a staple
|
||||
of science fiction, captivating audiences with visions of intelligent machines.
|
||||
This article traces the portrayal of AI agents in movies, books, and video games,
|
||||
juxtaposing these fictional narratives against today\u2019s technological realities.
|
||||
Interviews with creators from the entertainment industry and AI experts illuminate
|
||||
how close\u2014or far\u2014we are from making these sci-fi dreams a reality,
|
||||
providing a fascinating journey through the evolution of AI in popular culture.\n\n5.
|
||||
**The Future of AI in Personalization: Creating Tailored Experiences in Everyday
|
||||
Life**\n - Imagine a world where every experience, from shopping to entertainment,
|
||||
is uniquely tailored to your preferences by intelligent systems. This article
|
||||
explores the cutting-edge of AI-driven personalization, discussing current technologies
|
||||
and future potentials. Featuring insights from industry leaders and technologists,
|
||||
readers will discover how AI is revolutionizing the way we live, offering bespoke
|
||||
experiences that were once the realm of fantasy but are swiftly becoming a part
|
||||
of our daily lives."}, {"role": "assistant", "content": "Thought: Now that I
|
||||
have the list of 5 interesting article ideas from the Researcher, the next step
|
||||
is to ask the Senior Writer to create a compelling paragraph for each idea.
|
||||
This will ensure that each topic is highlighted effectively.\n\nAction: Delegate
|
||||
work to coworker\nAction Input: {\"task\": \"Write one amazing paragraph highlight
|
||||
for each of these 5 article ideas: \\n\\n1. **The Rise of AI in Healthcare:
|
||||
Revolutionizing Diagnosis and Treatment**\\n- This article could explore how
|
||||
AI is making strides in the healthcare industry. From AI-driven diagnostic tools
|
||||
to personalized treatment plans, the piece could highlight real-world applications
|
||||
and the potential future impact. Interviews with healthcare professionals and
|
||||
case studies could add depth and credibility.\\n\\n2. **AI and Ethics: Navigating
|
||||
the Moral Complexity of Machine Intelligence**\\n- Delve into the ethical considerations
|
||||
surrounding AI development and deployment. The article could examine issues
|
||||
such as bias in AI algorithms, privacy concerns, and the ethical responsibility
|
||||
of AI developers. Expert opinions from ethicists, technologists, and policymakers
|
||||
can provide a well-rounded perspective.\\n\\n3. **AI Agents in the Workplace:
|
||||
Enhancing Productivity or Eroding Jobs?**\\n- This piece can analyze the dual
|
||||
impact of AI agents in the professional environment. While some argue that AI
|
||||
can significantly boost productivity by automating mundane tasks, others fear
|
||||
job displacement. The article can present both sides, backed by data and forecasts
|
||||
about labor market trends.\\n\\n4. **From Science Fiction to Reality: The Evolution
|
||||
of AI Agents in Popular Culture**\\n- An engaging piece that traces the depiction
|
||||
of AI agents in movies, books, and video games, and compares these fictional
|
||||
portrayals to the current state of AI technology. This can include interviews
|
||||
with creators from the entertainment industry and AI experts to discuss how
|
||||
close we are to realizing these sci-fi visions.\\n\\n5. **The Future of AI in
|
||||
Personalization: Creating Tailored Experiences in Everyday Life**\\n- Discuss
|
||||
how AI is enhancing personalization in various domains such as shopping, entertainment,
|
||||
education, and personal finance. The focus can be on current technologies and
|
||||
future possibilities, featuring insights from industry leaders and technologists
|
||||
who are driving these innovations.\", \"context\": \"Create compelling paragraphs
|
||||
that highlight the potential and intrigue of each article idea. These should
|
||||
be engaging and showcase how fascinating each article could be.\", \"coworker\":
|
||||
\"Senior Writer\"}\n\nObservation: The Senior Writer provided the following
|
||||
compelling paragraphs for each article idea:\n\n1. **The Rise of AI in Healthcare:
|
||||
Revolutionizing Diagnosis and Treatment**\n - Imagine a world where a doctor
|
||||
can diagnose a condition with pinpoint accuracy, thanks to the help of artificial
|
||||
intelligence. AI is already transforming healthcare with advanced diagnostic
|
||||
tools and personalized treatment plans. This article would delve into real-world
|
||||
cases where AI has made a significant impact, providing insights from healthcare
|
||||
professionals who witness these technological advancements firsthand. The future
|
||||
of medicine is not only in the hands of doctors and nurses but also in the code
|
||||
and algorithms of AI systems.\n\n2. **AI and Ethics: Navigating the Moral Complexity
|
||||
of Machine Intelligence**\n - The rapid development of artificial intelligence
|
||||
brings with it not just technological advancements but also a slew of ethical
|
||||
dilemmas. This article will explore the complex moral landscape AI inhabits,
|
||||
questioning the bias that may be programmed into algorithms, the privacy concerns
|
||||
surrounding data usage, and the ethical responsibilities of those who develop
|
||||
AI technologies. Insightful perspectives from ethicists, technologists, and
|
||||
policymakers will highlight how society can navigate these challenges to ensure
|
||||
that AI benefits humanity as a whole.\n\n3. **AI Agents in the Workplace: Enhancing
|
||||
Productivity or Eroding Jobs?**\n - As AI agents become more integrated into
|
||||
the workforce, the debate over their impact intensifies. This article will examine
|
||||
how AI can boost productivity by automating routine tasks, thereby freeing up
|
||||
human workers for more creative and strategic roles. However, it will also address
|
||||
the fear of job displacement and present data and forecasts about the future
|
||||
labor market. Balanced viewpoints from industry analysts and workers affected
|
||||
by AI implementation will offer a comprehensive understanding of the pros and
|
||||
cons of AI in the workplace.\n\n4. **From Science Fiction to Reality: The Evolution
|
||||
of AI Agents in Popular Culture**\n - From HAL 9000 to Jarvis, AI agents have
|
||||
long captivated our imaginations in books, movies, and video games. This article
|
||||
will take readers on a journey through time, highlighting how AI has been portrayed
|
||||
in popular culture and comparing these fictional depictions to the current state
|
||||
of AI technology. Conversations with creators from the entertainment industry
|
||||
and AI experts will reveal whether these sci-fi visions are becoming our reality
|
||||
and what the future holds for AI in culture.\n\n5. **The Future of AI in Personalization:
|
||||
Creating Tailored Experiences in Everyday Life**\n - In an increasingly digital
|
||||
world, AI is enhancing personalization across various domains, crafting tailored
|
||||
experiences that cater to individual needs. This article will explore how AI
|
||||
is revolutionizing everything from shopping and entertainment to education and
|
||||
personal finance. Featuring insights from industry leaders and technologists,
|
||||
the piece will showcase the current state of personalization technologies and
|
||||
speculate on future advancements, painting a vivid picture of a world where
|
||||
AI knows us better than we know ourselves.\nObservation: 1. **The Rise of AI
|
||||
in Healthcare: Revolutionizing Diagnosis and Treatment**\n - Artificial Intelligence
|
||||
is spearheading a transformative era in healthcare, fundamentally altering how
|
||||
diseases are diagnosed and treated. AI-driven diagnostic tools are already outperforming
|
||||
human experts in identifying ailments from medical images, while personalized
|
||||
treatment plans are becoming a reality through predictive analytics that cater
|
||||
to individual patient needs. This article will dive into real-world applications,
|
||||
such as AI algorithms detecting early-stage cancers or predicting patient outcomes
|
||||
with unprecedented accuracy. Featuring interviews with healthcare professionals
|
||||
and insightful case studies, it will paint a vivid picture of how AI is not
|
||||
only enhancing medical outcomes but also reshaping the entire healthcare landscape
|
||||
for the future.\n\n2. **AI and Ethics: Navigating the Moral Complexity of Machine
|
||||
Intelligence**\n - As Artificial Intelligence continues to evolve, it brings
|
||||
with it a host of ethical dilemmas that society must grapple with. This article
|
||||
will delve into the moral intricacies of AI, focusing on pressing issues such
|
||||
as algorithmic bias, the preservation of privacy, and the ethical duties of
|
||||
AI developers. By examining high-profile cases of biased AI and exploring the
|
||||
perspectives of ethicists, technologists, and policymakers, the piece will offer
|
||||
a comprehensive look at the ethical landscape of machine intelligence. It aims
|
||||
to provoke thought and spark dialogue on how we can responsibly harness AI''s
|
||||
power while safeguarding human values and rights.\n\n3. **AI Agents in the Workplace:
|
||||
Enhancing Productivity or Eroding Jobs?**\n - The integration of AI agents
|
||||
in the workplace presents a paradox of progress and peril. On one hand, AI can
|
||||
drastically enhance productivity by automating repetitive tasks and offering
|
||||
intelligent insights that drive innovation. On the other, there is a growing
|
||||
concern about job displacement and the socioeconomic implications of a heavily
|
||||
automated workforce. This article will explore this dual impact, presenting
|
||||
data and forecasts on labor market trends and featuring opinions from industry
|
||||
experts, economists, and affected workers. By striking a balance between optimism
|
||||
and caution, it will provide a nuanced view of AI''s role in reshaping the future
|
||||
of work.\n\n4. **From Science Fiction to Reality: The Evolution of AI Agents
|
||||
in Popular Culture**\n - The portrayal of AI agents in popular culture has
|
||||
long fascinated and terrified us, from the benevolent, human-like robots of
|
||||
movies and books to the dystopian scenarios of video games. This article will
|
||||
take readers on a journey through the evolution of AI in fiction, contrasting
|
||||
these imaginative narratives with the current capabilities and limitations of
|
||||
real-world AI. Interviews with filmmakers, authors, and game developers, alongside
|
||||
insights from AI researchers, will shed light on how closely today''s technology
|
||||
mirrors these sci-fi visions. The piece aims to captivate readers by exploring
|
||||
the interplay between creativity and technological advancement, revealing the
|
||||
thin line between fantasy and reality.\n\n5. **The Future of AI in Personalization:
|
||||
Creating Tailored Experiences in Everyday Life**\n - AI is revolutionizing
|
||||
the concept of personalization, offering users tailored experiences in areas
|
||||
as diverse as shopping, entertainment, education, and personal finance. This
|
||||
article will explore how AI algorithms are learning from user behaviors and
|
||||
preferences to deliver highly customized content and recommendations, enhancing
|
||||
user satisfaction and engagement. By highlighting current technologies and featuring
|
||||
interviews with industry leaders and technologists, it will forecast future
|
||||
possibilities and innovations. The piece will demonstrate how AI personalization
|
||||
is not just a trend, but a paradigm shift in how we interact with technology
|
||||
in our daily lives, promising a future where every experience can be uniquely
|
||||
tailored to individual needs. \n\nThese paragraphs showcase the potential and
|
||||
intrigue of each topic, demonstrating how engaging and insightful the articles
|
||||
could be."}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '16854'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.93.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.93.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//fFdLb9zIEb7vryjoYsCYEWyvvFnMJVAUCVGQ3TUcBzlEB9d0F8myml1M
|
||||
dXNG3P3zQRXJGUnZ5CJoZtjV9fgexd++A7jgeLGDi9BhDf2Qtjc/4nhz9Y/93dXPN+/i3fheb8bv
|
||||
m/rLk/6714uNnZD9Nwp1PXUZpB8SVZY8/xyUsJJFff+Hjz+8/+Hq44cr/6GXSMmOtUPdXsn2w7sP
|
||||
V9t3P27f/bAc7IQDlYsd/Os7AIDf/K+lmCM9Xezg3Wb9pqdSsKWL3ekhgAuVZN9cYClcKuZ6sTn/
|
||||
GCRXyp71169fH/KXTsa2qzu4hyxHeLQ/tSNoOGMCzOVI+pDv/NO1f9rBQ35/CW/ffukIPnMhkAau
|
||||
74Ez/IUw1S6g0g4+00HSaO3gXzm38GfGNkvhApgjfLHm9JTr27cPGQC2cK2VGw6MCe5zpZS4pRwI
|
||||
uEAZCLUjjBYHoSrm0oj2WPlAQIp2d3e6ewPNmCNadExpAkyV1I52coTIhbBQAVSCOOdE0XOq88Au
|
||||
4fp+G5UPlNcHKgeoImk+hUkJ4wQy1oHUEvHgY48Z6GkgrcUS4ki5cjN50pwsnQKNSg89RQ6YgHts
|
||||
qWzg2HEiGEiLZEz8Ky252BEYEub53j0F6ecWKGHiOkHt1MYHg1pIbwdmTFPlUKB2WCFgJYUqwDny
|
||||
geOICQasbKEzUSyX8KWzoWjlkAiOnBJEC8S5il+0PYqmCDgMiQPaRMsGyhg6wGKDx9SKcu36ApEq
|
||||
hWo5EmqatqViSxAwB9ICoqdEc3vKQsYapKcCR64djHlQCmS9s7GEMCqG6RLuCOvoU+RcSQ9Mx+XE
|
||||
efAwqDRUClsbZ5xxLtx2tRkTBCwEpY6Rredc51IH5FwB4WDNgYFDHdURbWAxVBfIUkFymoByhzlY
|
||||
DusET7nvxwqYijWsdDjYM8YiQ4DS8xQT5lgCDgSN6My00a68fMgP+YPRyjqaI9zWjkPZwc944Bbr
|
||||
GvEnUUxw41rzZBCQBn7C0HGmF7w586r8T2qZFnAeqRg+jK4H8sbsrc9Ld9ma00mpdhFZTmj4SNT3
|
||||
uECsSGCqE/RjqdCq4YT88O9Ci9KKLSun93I4V+WAganMYrKBRsJYrGjJBpri/3Mplu2KvRPwOMCe
|
||||
sWw8pD1NenCgWrRB+YBh2swct6GsVYz1dCFEOlAS4+Al/GkCesKes/Oa225rwDKOBpcOafw6irDM
|
||||
ip6GJLqOyIIM5Gwsp65xqZYfhS5Lknb+aGcHSRymHh9J1wKYwtItaRpSQDBvUeooF2NmEnkErC+K
|
||||
OeNKGugXQPCzaV/CfQXk3oc9qBzkkaDO6u+JlAH10RQvSTuS9d0ocHT6GqwHyYX3aYIONVMx6r8p
|
||||
MMiRdFGwgg21I2o86+EBk03M4qsRsTjOv19wft26KHL2Uv4p+jgkDLSD2xPTPqnE0XrpWFe4VfHw
|
||||
f5V9+eMJ5GZFVmyrp7HbaF6EP67hZ4TYLwgDKkZ5mnEirQFtngopp0v4JYNkgg5z3FhEa0VUNENw
|
||||
a5kVwYXnnOV+Ahyr9DNplQaq7NJcsTzO4X2uq5jNE6qrVi2scgcCzllmKHsyVofUjtSRou6OCK3K
|
||||
0YIFMZ3NgHsZK3yTvbmdl+xOsuLf+CoUJIsRh/uzrFsb0NTqwOlUBEVvXSNqIPpvRs/gNyxxAfcX
|
||||
7gcMdbP22VKLWNETaEQpYKnFAJZwLwo96iNVs7wc5/Y0J7GXgbNn5sbJOY6l6rTa7AaWMk5swqah
|
||||
sKa8krlU5cfZOPeYfGB7qkeiDDJU7rn0fjig7yvPzEHlwJEAIY92LIL5zoyuNwVs0TJ0vdT8Wc/t
|
||||
IcvB8X5leL+zCv4e2KX3zkxQsnHx8+zlOwfx7bo0LRA+M+STDGNChZsxWfwX0B9Eq+JkhvQK+MNy
|
||||
KsynoMMCSXILDZbA2afruCBVbpgijGUzN9tq2VM2X6BcNzOht4kfCVT2Uh0tvRx44fde5NHFxQ7G
|
||||
qVQZGDOUQBmVxR+3bgq02NPv7h0VLThh9G0hA8I3GTXTedFxyXvVI87QzO3cuKU5P+dhFPIlywo1
|
||||
NmVUxVmX3dwsWhhVjR0BB9xz4rrWk7jneubFs03o+v7SjfT5FtJw6lcRx7F2ogsirdhn7rIBtP4X
|
||||
g9WJ8N7v63sDEqGGzp/zjpSOLBPT6EWQQ5JCaYIqEac35WwpE/SsKlqWwkvgbcNwYNuHvN2rtawu
|
||||
EHCofMB67vl+euVlvmwNCacTY/y1Zha6GTiro5kLYTwYT3oHjNLBoL1Eqh1nSOZKa6QGc8Uyh1kW
|
||||
WmfLx/Xl4u7EpHnKn04bso9lBzeeS27hC7JpUIRbEwanmMP/9kA6RZzgb9w8W4l8rdNXLygOBlPQ
|
||||
wXed4eVlm7Nmj8VaVdcr6eWVqGSrSfElWgvZv6WTwfRhYwshaUXOc4sojmEJv7iO3+lvX/n/y+2y
|
||||
nz5bvn29JFRfWxxSlinsqcMDGy78CqWGdM62iq1jlqZvOWmCMJYqvb+BLC+Ky3SC9D3luG7/503Y
|
||||
ryhYuTQ4S5ovRLnF1mHgAuzRDcTuUgvhzsih16r/esU/6X5acPoSea7+q2avBrPq8CCl8Atinz21
|
||||
/AcAAP//jFjLbsMgELz7KxDnHOy2avMxjRA2i01qg8NDVQ/+92oBB5L60PPCwsI+ZqauigxPF6Od
|
||||
t1gU+YWfEmFnBFdEuzxNrVPC/wlNqHEhblLSR1aaIFQMiA8+t51SskoTEywRHAcufoXDsYkTLc2r
|
||||
HMR3HPaA2VzlW4QjPZCg1S3EnrDn5CPfyzzvU0fGX6sBFmRwHMUIHea5MnCtTW5+qENcsmW7Kw+z
|
||||
GVdreve0lUqllZsYFoHRqDLgHKDRujWEXKLCER5EC4oBr5558wXxuNeu7ZJDWkSVYv7odqs3ns/V
|
||||
vnP3cjpwyQTgy7hKJqEDHyYQZW/RVHgQylSGpgr8732OfKfglR7/474YBmw9INjOkeuYyzIL14hw
|
||||
jpfdHzpemCIRUgMwr8DiZwiQPMxJEKLux3lYmFR6BLtalVQhubK3917KFtrhTJut+QUAAP//AwAX
|
||||
W2+SHhMAAA==
headers:
CF-RAY:
- 974ede154cc51701-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Mon, 25 Aug 2025 23:29:03 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=5JVBbOOy3Z42TimGf_AeViTUOxCEjozvdxUWf6hbQmU-1756164543-1.0.1.1-ygsmJWfEEPPXLcyJJ.RM_4EU3eUrQ5umh1p5NnpVok3tJoYTzI_qdTIg85sL3QuSmGfOIM9i8qLgCV4etmoxJoh_wAlBljpVEi27.U3_dz0;
path=/; expires=Mon, 25-Aug-25 23:59:03 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=zAJ0jqazHLpztPpOyEi_EdNt5yCl9A2Gzu_U4E02x2E-1756164543702-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '19204'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '19224'
x-ratelimit-limit-project-requests:
- '10000'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-project-requests:
- '9999'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29995866'
x-ratelimit-reset-project-requests:
- 6ms
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 8ms
x-request-id:
- req_946a3113146e463cb4c0bc9cb8cb9a22
status:
code: 200
message: OK
version: 1

@@ -607,4 +607,163 @@ interactions:
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
|
||||
are a seasoned manager with a knack for getting the best out of your team.\nYou
|
||||
are also known for your ability to delegate work to the right people, and to
|
||||
ask the right questions to get the best out of your team.\nEven though you don''t
|
||||
perform tasks by yourself, you have a lot of experience in the field, which
|
||||
allows you to properly evaluate the work of your team members.\nYour personal
|
||||
goal is: Manage the team to complete the task in the best way possible.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
|
||||
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
|
||||
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
|
||||
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
|
||||
''str''}}\nTool Description: Delegate a specific task to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the task you
|
||||
want them to do, and ALL necessary context to execute the task, they know nothing
|
||||
about the task, so share absolutely everything you know, don''t reference things
|
||||
but instead explain them.\nTool Name: Ask question to coworker\nTool Arguments:
|
||||
{''question'': {''description'': ''The question to ask'', ''type'': ''str''},
|
||||
''context'': {''description'': ''The context for the question'', ''type'': ''str''},
|
||||
''coworker'': {''description'': ''The role/name of the coworker to ask'', ''type'':
|
||||
''str''}}\nTool Description: Ask a specific question to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the question
|
||||
you have for them, and ALL necessary context to ask the question properly, they
|
||||
know nothing about the question, so share absolutely everything you know, don''t
|
||||
reference things but instead explain them.\n\nIMPORTANT: Use the following format
|
||||
in your response:\n\n```\nThought: you should always think about what to do\nAction:
|
||||
the action to take, only one name of [Delegate work to coworker, Ask question
|
||||
to coworker], just the name, exactly as it''s written.\nAction Input: the input
|
||||
to the action, just a simple JSON object, enclosed in curly braces, using \"
|
||||
to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
|
||||
all necessary information is gathered, return the following format:\n\n```\nThought:
|
||||
I now know the final answer\nFinal Answer: the final answer to the original
|
||||
input question\n```"}, {"role": "user", "content": "\nCurrent Task: Give me
|
||||
an integer score between 1-5 for the following title: ''The impact of AI in
|
||||
the future of work''\n\nThis is the expected criteria for your final answer:
|
||||
The score of the title.\nyou MUST return the actual complete content as the
|
||||
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
|
||||
tools available and give your best Final Answer, your job depends on it!\n\nThought:"},
|
||||
{"role": "assistant", "content": "Thought: I need to get an integer score between
|
||||
1-5 for the given title. To do this, I''ll ask the Scorer to evaluate the title
|
||||
based on specific criteria.\n\nAction: Ask question to coworker\nAction Input:
|
||||
{\"question\": \"Can you provide an integer score between 1-5 for the following
|
||||
title: ''The impact of AI in the future of work''?\", \"context\": \"We need
|
||||
a score for the title to assess its impact and relevance. Please evaluate based
|
||||
on factors such as clarity, relevance, and potential interest to readers.\",
|
||||
\"coworker\": \"Scorer\"}\nObservation: The score for the title ''The impact
|
||||
of AI in the future of work'' is 4 out of 5."}], "model": "gpt-4o", "stop":
|
||||
["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '3675'
content-type:
- application/json
cookie:
- _cfuvid=lnKPmS3pW0Tw6qE4ZI8cf4lyVDuLg7ZNxmzxi7CqlME-1756165116673-0.0.1.1-604800000;
__cf_bm=r95s2Hh1kHgI6PI08wwTySLyufxeDnmixIX5GVDEyoA-1756165116-1.0.1.1-X5dML3NG4uvFgDnjqJ4pQWbB9MyxJisxAZJlRivMc7T1f83cCNBH_cy5fJjn4XjKks_UKUUj5PvBGifu2gLkaiqYYWvbSjrhjKseuPN8.Q4
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.93.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.93.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//jFLva9swEP3uv0LoczzsJnYTfyuFsUHHftDA2FJsRT7bamRJSOd1o+R/
|
||||
H5LT2F1b2BfB3bv3dO/uHiNCqKhpQSjvGPLeyPh6vV8e7vHbdvv5dtvfaPhw/WX59dOP71l+dUMX
|
||||
nqH398DxifWO695IQKHVCHMLDMGrppdZnuZZmuYB6HUN0tNag/FKxxfJxSpO1nGSn4idFhwcLcjP
|
||||
iBBCHsPrW1Q1/KYFSRZPmR6cYy3Q4lxECLVa+gxlzgmHTCFdTCDXCkGFrquq2qnbTg9thwX5SJR+
|
||||
IAf/YAekEYpJwpR7ALtT70N0FaKCrHaqqqq5qoVmcMybUoOUM4AppZH5oQQ/dyfkeHYgdWus3rt/
|
||||
qLQRSriutMCcVr5bh9rQgB4jQu7CpIZn5qmxujdYoj5A+O5yk456dNrNhKbrE4gamZzy62SzeEWv
|
||||
rAGZkG42a8oZ76CeqNNi2FALPQOimeuX3bymPToXqv0f+QngHAxCXRoLteDPHU9lFvzpvlV2nnJo
|
||||
mDqwvwSHEgVYv4kaGjbI8aqo++MQ+rIRqgVrrBhPqzHlOtlkebZc8j2NjtFfAAAA//8DAJCAq9Rj
|
||||
AwAA
headers:
CF-RAY:
- 974eec8be8b98486-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Mon, 25 Aug 2025 23:38:37 GMT
Server:
- cloudflare
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '383'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '488'
x-ratelimit-limit-project-requests:
- '10000'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-project-requests:
- '9999'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999132'
x-ratelimit-reset-project-requests:
- 6ms
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 1ms
x-request-id:
- req_58742f78663b48c2be01fcc6497a46da
status:
code: 200
message: OK
version: 1

@@ -449,4 +449,162 @@ interactions:
- req_32907c02e96c6932e9424220bb24bb7a
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are Manager. You''re great
|
||||
at delegating work about scoring.\nYour personal goal is: Coordinate scoring
|
||||
processes\nYou ONLY have access to the following tools, and should NEVER make
|
||||
up tools that are not listed here:\n\nTool Name: Delegate work to coworker\nTool
|
||||
Arguments: {''task'': {''description'': ''The task to delegate'', ''type'':
|
||||
''str''}, ''context'': {''description'': ''The context for the task'', ''type'':
|
||||
''str''}, ''coworker'': {''description'': ''The role/name of the coworker to
|
||||
delegate to'', ''type'': ''str''}}\nTool Description: Delegate a specific task
|
||||
to one of the following coworkers: Scorer\nThe input to this tool should be
|
||||
the coworker, the task you want them to do, and ALL necessary context to execute
|
||||
the task, they know nothing about the task, so share absolutely everything you
|
||||
know, don''t reference things but instead explain them.\nTool Name: Ask question
|
||||
to coworker\nTool Arguments: {''question'': {''description'': ''The question
|
||||
to ask'', ''type'': ''str''}, ''context'': {''description'': ''The context for
|
||||
the question'', ''type'': ''str''}, ''coworker'': {''description'': ''The role/name
|
||||
of the coworker to ask'', ''type'': ''str''}}\nTool Description: Ask a specific
|
||||
question to one of the following coworkers: Scorer\nThe input to this tool should
|
||||
be the coworker, the question you have for them, and ALL necessary context to
|
||||
ask the question properly, they know nothing about the question, so share absolutely
|
||||
everything you know, don''t reference things but instead explain them.\n\nIMPORTANT:
|
||||
Use the following format in your response:\n\n```\nThought: you should always
|
||||
think about what to do\nAction: the action to take, only one name of [Delegate
|
||||
work to coworker, Ask question to coworker], just the name, exactly as it''s
|
||||
written.\nAction Input: the input to the action, just a simple JSON object,
|
||||
enclosed in curly braces, using \" to wrap keys and values.\nObservation: the
|
||||
result of the action\n```\n\nOnce all necessary information is gathered, return
|
||||
the following format:\n\n```\nThought: I now know the final answer\nFinal Answer:
|
||||
the final answer to the original input question\n```"}, {"role": "user", "content":
|
||||
"\nCurrent Task: Give me an integer score between 1-5 for the following title:
|
||||
''The impact of AI in the future of work''\n\nThis is the expected criteria
|
||||
for your final answer: The score of the title.\nyou MUST return the actual complete
|
||||
content as the final answer, not a summary.\n\nBegin! This is VERY important
|
||||
to you, use the tools available and give your best Final Answer, your job depends
|
||||
on it!\n\nThought:"}, {"role": "assistant", "content": "Thought: To provide
|
||||
an accurate score for the title \"The impact of AI in the future of work,\"
|
||||
I need to delegate the task of scoring it to the Scorer, who is specialized
|
||||
in this activity.\n\nAction: Delegate work to coworker\nAction Input: {\"coworker\":
|
||||
\"Scorer\", \"task\": \"Give me an integer score between 1-5 for the following
|
||||
title: ''The impact of AI in the future of work''\", \"context\": \"Your task
|
||||
is to evaluate the provided title on a scale from 1 to 5 based on criteria such
|
||||
as relevance, clarity, and intrigue.\"}\n\nObservation: The scorer has received
|
||||
the task to score the title ''The impact of AI in the future of work''.\nObservation:
|
||||
4"}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '3322'
content-type:
- application/json
cookie:
- _cfuvid=CW_cKQGYWY3cL.S6Xo5z0cmkmWHy5Q50OA_KjPEijNk-1735926034530-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.93.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.93.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAA4xSTWvcMBC9+1cMOq+Dvd/1LTQt5NJLP6C0wWjlsa1G1ghpnLSE/e9F2uzaaVPo
|
||||
RSC9eU/vzcxTBiB0IyoQqpesBmfyt/vDSpvbDx+97L+gftjhzVdX3rzT1/3nrVhEBh1+oOIz60rR
|
||||
4AyyJnuClUfJGFXL3WZbbjdluU7AQA2aSOsc52vKl8VynRf7vHjWVT1phUFU8C0DAHhKZ7RoG/wp
|
||||
KigW55cBQ5AdiupSBCA8mfgiZAg6sLQsFhOoyDLa5PpTT2PXcwW3YOkR7uPBPUKrrTQgbXhEf/Xd
|
||||
vk/X63StYD0X89iOQcYsdjRmBkhriWXsRYpx94wcL8YNdc7TIfxBFa22OvS1RxnIRpOByYmEHjOA
|
||||
u9Sg8UVm4TwNjmume0zf7U59TmHPI5nQC8jE0sxYy/3iFb26QZbahFmLhZKqx2aiTvOQY6NpBmSz
|
||||
1H+7eU37lFzb7n/kJ0ApdIxN7Tw2Wr1MPJV5jBv7r7JLl5NhEdA/aIU1a/RxEg22cjSnZRLhV2Ac
|
||||
6lbbDr3z+rRRrav3xZvNdrNaqYPIjtlvAAAA//8DAAsTiaNaAwAA
headers:
CF-RAY:
- 974eec7c4d438486-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Mon, 25 Aug 2025 23:38:35 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=ULOCzvd5MabFyMl9eSNQ3Sk_hNbVH25aTAfkSaYM6Cw-1756165115-1.0.1.1-jeyMlIwg.jml3h9XQDOYQXVxEFP44CSbUCENlr4W6RpdsZg08M.fWUD5kBWDNcupp2skTxNh5gCYFNop3HIrWq5pzgrydiA3FuSe7U15X1Y;
path=/; expires=Tue, 26-Aug-25 00:08:35 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=e_81pnlfj.JohLmFm8gTRISqkPCT6GwCVpOdPsECI.s-1756165115098-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '680'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '715'
x-ratelimit-limit-project-requests:
- '10000'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-project-requests:
- '9999'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999221'
x-ratelimit-reset-project-requests:
- 6ms
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 1ms
x-request-id:
- req_4006f3362c434ea2a39b9901f3f2e5cb
status:
code: 200
message: OK
version: 1

@@ -728,4 +728,173 @@ interactions:
- req_c065668c78262d2781db5b3b04f70dba
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
|
||||
are a seasoned manager with a knack for getting the best out of your team.\nYou
|
||||
are also known for your ability to delegate work to the right people, and to
|
||||
ask the right questions to get the best out of your team.\nEven though you don''t
|
||||
perform tasks by yourself, you have a lot of experience in the field, which
|
||||
allows you to properly evaluate the work of your team members.\nYour personal
|
||||
goal is: Manage the team to complete the task in the best way possible.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
|
||||
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
|
||||
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
|
||||
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
|
||||
''str''}}\nTool Description: Delegate a specific task to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the task you
|
||||
want them to do, and ALL necessary context to execute the task, they know nothing
|
||||
about the task, so share absolutely everything you know, don''t reference things
|
||||
but instead explain them.\nTool Name: Ask question to coworker\nTool Arguments:
|
||||
{''question'': {''description'': ''The question to ask'', ''type'': ''str''},
|
||||
''context'': {''description'': ''The context for the question'', ''type'': ''str''},
|
||||
''coworker'': {''description'': ''The role/name of the coworker to ask'', ''type'':
|
||||
''str''}}\nTool Description: Ask a specific question to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the question
|
||||
you have for them, and ALL necessary context to ask the question properly, they
|
||||
know nothing about the question, so share absolutely everything you know, don''t
|
||||
reference things but instead explain them.\n\nIMPORTANT: Use the following format
|
||||
in your response:\n\n```\nThought: you should always think about what to do\nAction:
|
||||
the action to take, only one name of [Delegate work to coworker, Ask question
|
||||
to coworker], just the name, exactly as it''s written.\nAction Input: the input
|
||||
to the action, just a simple JSON object, enclosed in curly braces, using \"
|
||||
to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
|
||||
all necessary information is gathered, return the following format:\n\n```\nThought:
|
||||
I now know the final answer\nFinal Answer: the final answer to the original
|
||||
input question\n```"}, {"role": "user", "content": "\nCurrent Task: Give me
|
||||
an integer score between 1-5 for the following title: ''The impact of AI in
|
||||
the future of work'', check examples to based your evaluation.\n\nThis is the
|
||||
expected criteria for your final answer: The score of the title.\nyou MUST return
|
||||
the actual complete content as the final answer, not a summary.\n\nBegin! This
|
||||
is VERY important to you, use the tools available and give your best Final Answer,
|
||||
your job depends on it!\n\nThought:"}, {"role": "assistant", "content": "Thought:
|
||||
To give an accurate score for the title \"The impact of AI in the future of
|
||||
work\" based on the specified criteria, I''ll need the expertise of the Scorer.
|
||||
They specialize in evaluating content, so I must delegate this task to them.
|
||||
I need to provide them with all the necessary context, including the title and
|
||||
the importance of giving a well-considered, integer score between 1-5. Since
|
||||
the request stresses the importance of this task and its implications for my
|
||||
job, I will make sure to communicate the urgency and the criteria very clearly
|
||||
to the Scorer.\n\nAction: Delegate work to coworker\n\nAction Input: {\"task\":
|
||||
\"Evaluate the title ''The impact of AI in the future of work'' and give it
|
||||
an integer score between 1-5, based on your expert assessment.\", \"context\":
|
||||
\"The task is to assign a score to the given title based on its significance,
|
||||
relevance, and impactfulness regarding the future of work influenced by artificial
|
||||
intelligence. This score needs to reflect a thorough evaluation within the range
|
||||
of 1 (being the least impactful or relevant) to 5 (being the most impactful
|
||||
or relevant). The assessment is highly important and could significantly affect
|
||||
our future work, so it''s crucial to consider the current discourse on AI, its
|
||||
potential transformative power on industries and job markets, and general expectations
|
||||
of its future role.\", \"coworker\": \"Scorer\"}\nObservation: The score of
|
||||
the title \"The impact of AI in the future of work\" is 5."}], "model": "gpt-4-0125-preview",
|
||||
"stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '4575'
content-type:
- application/json
cookie:
- _cfuvid=FasG_74jjKfEtA2Mjn2iKdu62tv3zosBxYNYTk1E12M-1734025911900-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.93.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.93.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//jJJNaxsxEIbv/hWDznaIE9t19uYWUkLTQiG3OiyydnZXtlajSLN2m+D/
|
||||
XqSNvZs2hV4E0jPvq/l6GQEIXYgMhKolq8aZyafl5nq7emqWz9+/bO8/Pu9n3+4VL+nz01e8FeOo
|
||||
oM0WFZ9UF4oaZ5A12Q4rj5Ixuk4/zBfTxXw6nSfQUIEmyirHk9nkcno1nziPe42HV2VNWmEQGfwY
|
||||
AQC8pDPmaAv8KTK4HJ9eGgxBViiycxCA8GTii5Ah6MDSshj3UJFltCnth5raquYM7sDSAXbx4Bqh
|
||||
1FYakDYc0K/t2t6m+yrdM3ioEYIij0BlCmfNBmEtItCNk4ojWd2Btp1dy20XfSC/WwvQAeYXw5w8
|
||||
lm2QsSe2NWYApLXEMvY0dePxlRzP9RuqnKdN+EMqSm11qHOPMpCNtQYmJxI9jgAeU5/bN60TzlPj
|
||||
OGfaYfruZnHV+Yl+tD29PkEmlmagupmN3/HLC2SpTRhMSiipaix6aT9W2RaaBmA0qPrvbN7z7irX
|
||||
tvof+x4ohY6xyJ3HQqu3FfdhHuPm/yvs3OWUsAjo91phzhp9nESBpWxNt5Mi/AqMTV5qW6F3XqfF
|
||||
jJMcHUe/AQAA//8DAErhO7yXAwAA
headers:
CF-RAY:
- 974eec827a978486-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Mon, 25 Aug 2025 23:38:36 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=r95s2Hh1kHgI6PI08wwTySLyufxeDnmixIX5GVDEyoA-1756165116-1.0.1.1-X5dML3NG4uvFgDnjqJ4pQWbB9MyxJisxAZJlRivMc7T1f83cCNBH_cy5fJjn4XjKks_UKUUj5PvBGifu2gLkaiqYYWvbSjrhjKseuPN8.Q4;
path=/; expires=Tue, 26-Aug-25 00:08:36 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=lnKPmS3pW0Tw6qE4ZI8cf4lyVDuLg7ZNxmzxi7CqlME-1756165116673-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '1211'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '1305'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '2000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '1998911'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 32ms
x-request-id:
- req_3ee53a55bafb9d8ab8855cf07987362b
status:
code: 200
message: OK
version: 1

@@ -555,4 +555,639 @@ interactions:
- req_d13285e05176425b72094c7e74370ab6
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
|
||||
are a seasoned manager with a knack for getting the best out of your team.\nYou
|
||||
are also known for your ability to delegate work to the right people, and to
|
||||
ask the right questions to get the best out of your team.\nEven though you don''t
|
||||
perform tasks by yourself, you have a lot of experience in the field, which
|
||||
allows you to properly evaluate the work of your team members.\nYour personal
|
||||
goal is: Manage the team to complete the task in the best way possible.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
|
||||
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
|
||||
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
|
||||
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
|
||||
''str''}}\nTool Description: Delegate a specific task to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the task you
|
||||
want them to do, and ALL necessary context to execute the task, they know nothing
|
||||
about the task, so share absolutely everything you know, don''t reference things
|
||||
but instead explain them.\nTool Name: Ask question to coworker\nTool Arguments:
|
||||
{''question'': {''description'': ''The question to ask'', ''type'': ''str''},
|
||||
''context'': {''description'': ''The context for the question'', ''type'': ''str''},
|
||||
''coworker'': {''description'': ''The role/name of the coworker to ask'', ''type'':
|
||||
''str''}}\nTool Description: Ask a specific question to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the question
|
||||
you have for them, and ALL necessary context to ask the question properly, they
|
||||
know nothing about the question, so share absolutely everything you know, don''t
|
||||
reference things but instead explain them.\n\nIMPORTANT: Use the following format
|
||||
in your response:\n\n```\nThought: you should always think about what to do\nAction:
|
||||
the action to take, only one name of [Delegate work to coworker, Ask question
|
||||
to coworker], just the name, exactly as it''s written.\nAction Input: the input
|
||||
to the action, just a simple JSON object, enclosed in curly braces, using \"
|
||||
to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
|
||||
all necessary information is gathered, return the following format:\n\n```\nThought:
|
||||
I now know the final answer\nFinal Answer: the final answer to the original
|
||||
input question\n```"}, {"role": "user", "content": "\nCurrent Task: Give me
|
||||
an integer score between 1-5 for the following title: ''The impact of AI in
|
||||
the future of work''\n\nThis is the expected criteria for your final answer:
|
||||
The score of the title.\nyou MUST return the actual complete content as the
|
||||
final answer, not a summary.\nEnsure your final answer contains only the content
|
||||
in the following format: {\n \"score\": int\n}\n\nEnsure the final output does
|
||||
not include any code block markers like ```json or ```python.\n\nBegin! This
|
||||
is VERY important to you, use the tools available and give your best Final Answer,
|
||||
your job depends on it!\n\nThought:"}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '3194'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.93.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.93.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//jFRNb9swDL3nVxC69JIESZukrW9Dt0MxbNi6HQbMRaBItM1WkTSRTjYU
|
||||
/e+D7KTOvoBdDFmPpN57IvU0AlBkVQHKNFrMNrrJzdVm9vZuER8u7lbz9zfvPlzSbvXmo919ebg7
|
||||
V+OcETYPaOSYNTVhGx0KBd/DJqEWzFXnl8vVfLW4ni06YBssupxWR5kswuR8dr6YzK4ms9UhsQlk
|
||||
kFUBX0cAAE/dN1P0Fr+rAmbj484WmXWNqngJAlApuLyjNDOxaC9qPIAmeEHfsf7chLZupIDPDYJo
|
||||
foSE31pKyKCBTUgIEmCDkAvVHm3+lRxL4hBKlfNoG7URCBW8ugXyHV610ibMe/uQHksFG81oIXjg
|
||||
iIYqMmASCSbSU7gFbkLrLFh0WGtBkIa453M471PmksawbwIQQ0KOwTNtHEIVEuBOu1YL+Rq0tx3z
|
||||
vO5Y8rT0pX9l8rUU8Pp4ROaVq5uQV5iOIXDrYysFPJUqEyhVAaX61FtxFF7A2f8JPyvVGMre8e/S
|
||||
1/qQwo4sgvZAXrDGdHB6g7JH9DCfLAe7SBiwqtAI7dAjcycwocOd9gancJN9sJigCXvYo3NAAkbH
|
||||
TINBS77qLCunxf5oBvJMdSOZwOE+QyTTKzljyO2TBXGjY/bxIKqzDP2OUvBb9MLTo7yDhYNXqVTP
|
||||
pz2XsGpZ55b3rXMngPY+iM4Mu26/PyDPL/3tQh1T2PBvqaoiT9ysE2oOPvcyS4iqQ59HAPfdHLW/
|
||||
jIaKKWyjrCU8Ynfc6nLV11PD5A7o/Pz6gEoQ7QbgarYc/6Xg2qJocnwyispo06AdUoe51a2lcAKM
|
||||
TmT/SedvtXvp5Ov/KT8AxmAUtOuY0JL5VfIQljC/bP8Ke7G5I6wY044MroUw5auwWOnW9Y+O4h8s
|
||||
uF1X5GtMMVH/8lRxfTW7Xq6WFxdmo0bPo58AAAD//wMA5uaSKYIFAAA=
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 974ee75a69b9fa49-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 25 Aug 2025 23:35:08 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=YpzdHjazbzCs_jFT50s5M0hX1JuSBHogZUhvJPwKBxo-1756164908-1.0.1.1-0_bgL0r_yqn68W2fJN2xF5CCACGY7K50iAbPVUOmryv.tRDq7ol7sxSwg426hLg0zfDxbndIK7kH_GroqzjZPEDEvz1epE4gpDLuWX.pkao;
|
||||
path=/; expires=Tue, 26-Aug-25 00:05:08 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=tx8GIgZ811onK882sIkQbg1SC1WHXyFyvZ_UvhApcYI-1756164908485-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '4262'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '4328'
|
||||
x-ratelimit-limit-project-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-project-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29999242'
|
||||
x-ratelimit-reset-project-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 1ms
|
||||
x-request-id:
|
||||
- req_45f89eb33cb2484197c2a2510b2e4416
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Scorer. You''re an
|
||||
expert scorer, specialized in scoring titles.\nYour personal goal is: Score
|
||||
the title\nTo give my best complete final answer to the task respond using the
|
||||
exact following format:\n\nThought: I now can give a great answer\nFinal Answer:
|
||||
Your final answer must be the great and the most complete as possible, it must
|
||||
be outcome described.\n\nI MUST use these formats, my job depends on it!"},
|
||||
{"role": "user", "content": "\nCurrent Task: Score the title: ''The impact of
|
||||
AI in the future of work''\n\nThis is the expected criteria for your final answer:
|
||||
Your best answer to your coworker asking you this, accounting for the context
|
||||
shared.\nyou MUST return the actual complete content as the final answer, not
|
||||
a summary.\n\nThis is the context you''re working with:\nProvide an integer
|
||||
score between 1-5 based on its effectiveness and relevance. Consider how well
|
||||
it captures attention and provides insight into the topic of AI''s role in shaping
|
||||
future work environments.\n\nBegin! This is VERY important to you, use the tools
|
||||
available and give your best Final Answer, your job depends on it!\n\nThought:"},
|
||||
{"role": "assistant", "content": "Thought: The task requires a score to be assigned
|
||||
to the title \"The impact of AI in the future of work\" based on specific criteria.
|
||||
I should delegate this task to the Scorer, who is responsible for evaluating
|
||||
and scoring titles.\n\nAction: Delegate work to coworker\nAction Input: {\"task\":
|
||||
\"Score the title: ''The impact of AI in the future of work''\", \"context\":
|
||||
\"Provide an integer score between 1-5 based on its effectiveness and relevance.
|
||||
Consider how well it captures attention and provides insight into the topic
|
||||
of AI''s role in shaping future work environments.\", \"coworker\": \"Scorer\"}\nObservation:
|
||||
I encountered an error: Action ''Delegate work to coworker'' don''t exist, these
|
||||
are the only available Actions:\n\nMoving on then. I MUST either use a tool
|
||||
(use one at time) OR give my best final answer not both at the same time. When
|
||||
responding, I must use the following format:\n\n```\nThought: you should always
|
||||
think about what to do\nAction: the action to take, should be one of []\nAction
|
||||
Input: the input to the action, dictionary enclosed in curly braces\nObservation:
|
||||
the result of the action\n```\nThis Thought/Action/Action Input/Result can repeat
|
||||
N times. Once I know the final answer, I must return the following format:\n\n```\nThought:
|
||||
I now can give a great answer\nFinal Answer: Your final answer must be the great
|
||||
and the most complete as possible, it must be outcome described\n\n```"}], "model":
|
||||
"gpt-4o", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '2680'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=YpzdHjazbzCs_jFT50s5M0hX1JuSBHogZUhvJPwKBxo-1756164908-1.0.1.1-0_bgL0r_yqn68W2fJN2xF5CCACGY7K50iAbPVUOmryv.tRDq7ol7sxSwg426hLg0zfDxbndIK7kH_GroqzjZPEDEvz1epE4gpDLuWX.pkao;
|
||||
_cfuvid=tx8GIgZ811onK882sIkQbg1SC1WHXyFyvZ_UvhApcYI-1756164908485-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//lFVLi+NGEL77VxS65GIbj/G8nNMQ8hhyCAkT2BAvpt1dkmqnVS26SnbM
|
||||
Mv996ZZlazKzsHsxuOv9fV+VPk8ACnLFGgpbG7VN62c/3e0Wfxj9+/ahWdarp+Xv+pvx/zRPH/68
|
||||
/1AV0xQRdp/Q6hA1t6FpPSoF7s02olFMWa9ur2+ublb3i7tsaIJDn8KqVmerMFsulqvZ4m62uDkF
|
||||
1oEsSrGGfycAAJ/zb2qRHf5XrGExHV4aFDEVFuuzE0ARg08vhREhUcNaTC9GG1iRc9dPdeiqWtfw
|
||||
CIzoQAMYERQBrRGU1CNsiqcagZrWWIVQwsMjEGd72WkXMb0dQnzeFLAzgg4CA6mA2ZEnPaac1rTZ
|
||||
02gqTIHBsIM2hj05BGKhqlYg1pDzpu4vlaQ2LXE1VEulAHlPMXCDrDKHRziQ94B74zujOOo9VQKx
|
||||
xiOUMTRwlbq5noINLOQwprypVyxLtEp75DR7ai6ix71hi/MNb/hqDj+PXdbwdK5hPZroj0DsyBrF
|
||||
hJ3R3MQJ6b49R2I7EajDIU1mcj55B8gpHGqyNZCAAQ0t2WTZxWBcwggjis5zA20djSSGxuxsCpCu
|
||||
qlBUQKhiKskaVrC14QoFQgSHe/ShzfAN1azhzBPtE4SGwXSOkC2C7SKFLhEaOgVFW3PwoSJrPBiX
|
||||
QeqJSEgt5/DXAN0YJRKoqar9cUBWszC6GBNAJ2wosGRxDTWOmQtsWh+OqQYIWg1R5vAgWR7JPW2Z
|
||||
EFeZg9J3yDbx+insoDHxGbVn9I1wpqA1yam/M2QtRiVOtVKQUoP+eCbSOBdRUrHEm+AAah791ySO
|
||||
0/tJi2mg6aBPDoe0XVT1ogyx19Z4B8Mh81DRHsFAlc4HGJYDxg3/Qmw8POR/a7iG2Qjd79hRGql9
|
||||
rHT9EUiHTZX3V1V6sb/d2LNKHx5/kDMLef++urbjkxSx7MSki8id9yODYQ7a45iO4ceT5eV8/nyo
|
||||
2hh28r/QoiQmqbdJGoHTqRMNbZGtLxOAj/nMdq8uZ9HG0LS61fCMudz1atXnKy6H/WJdLk9HuNCg
|
||||
xl8MtzdD2KuEW4dqyMvoUhfW2BrdJfRy1tP2hZFhMhr7bTvv5e5HJ66+Jf3FYC22im7bRnRkX498
|
||||
cYuYPnxfczvDnBsuBOOeLG6VMCYqHJam8/03qZCjKDbbkrjC2EbqP0xlu3Xlorzd3V/f2GLyMvkC
|
||||
AAD//wMAlpQiYaEHAAA=
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 974ee7768809fa49-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 25 Aug 2025 23:35:12 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '4274'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '4339'
|
||||
x-ratelimit-limit-project-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-project-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29999380'
|
||||
x-ratelimit-reset-project-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 1ms
|
||||
x-request-id:
|
||||
- req_9ad5330b210c40cab7dbaa4822ab4144
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
|
||||
are a seasoned manager with a knack for getting the best out of your team.\nYou
|
||||
are also known for your ability to delegate work to the right people, and to
|
||||
ask the right questions to get the best out of your team.\nEven though you don''t
|
||||
perform tasks by yourself, you have a lot of experience in the field, which
|
||||
allows you to properly evaluate the work of your team members.\nYour personal
|
||||
goal is: Manage the team to complete the task in the best way possible.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
|
||||
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
|
||||
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
|
||||
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
|
||||
''str''}}\nTool Description: Delegate a specific task to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the task you
|
||||
want them to do, and ALL necessary context to execute the task, they know nothing
|
||||
about the task, so share absolutely everything you know, don''t reference things
|
||||
but instead explain them.\nTool Name: Ask question to coworker\nTool Arguments:
|
||||
{''question'': {''description'': ''The question to ask'', ''type'': ''str''},
|
||||
''context'': {''description'': ''The context for the question'', ''type'': ''str''},
|
||||
''coworker'': {''description'': ''The role/name of the coworker to ask'', ''type'':
|
||||
''str''}}\nTool Description: Ask a specific question to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the question
|
||||
you have for them, and ALL necessary context to ask the question properly, they
|
||||
know nothing about the question, so share absolutely everything you know, don''t
|
||||
reference things but instead explain them.\n\nIMPORTANT: Use the following format
|
||||
in your response:\n\n```\nThought: you should always think about what to do\nAction:
|
||||
the action to take, only one name of [Delegate work to coworker, Ask question
|
||||
to coworker], just the name, exactly as it''s written.\nAction Input: the input
|
||||
to the action, just a simple JSON object, enclosed in curly braces, using \"
|
||||
to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
|
||||
all necessary information is gathered, return the following format:\n\n```\nThought:
|
||||
I now know the final answer\nFinal Answer: the final answer to the original
|
||||
input question\n```"}, {"role": "user", "content": "\nCurrent Task: Give me
|
||||
an integer score between 1-5 for the following title: ''The impact of AI in
|
||||
the future of work''\n\nThis is the expected criteria for your final answer:
|
||||
The score of the title.\nyou MUST return the actual complete content as the
|
||||
final answer, not a summary.\nEnsure your final answer contains only the content
|
||||
in the following format: {\n \"score\": int\n}\n\nEnsure the final output does
|
||||
not include any code block markers like ```json or ```python.\n\nBegin! This
|
||||
is VERY important to you, use the tools available and give your best Final Answer,
|
||||
your job depends on it!\n\nThought:"}, {"role": "assistant", "content": "Thought:
|
||||
The task requires a score to be assigned to the title \"The impact of AI in
|
||||
the future of work\" based on specific criteria. I should delegate this task
|
||||
to the Scorer, who is responsible for evaluating and scoring titles.\n\nAction:
|
||||
Delegate work to coworker\nAction Input: {\"task\": \"Score the title: ''The
|
||||
impact of AI in the future of work''\", \"context\": \"Provide an integer score
|
||||
between 1-5 based on its effectiveness and relevance. Consider how well it captures
|
||||
attention and provides insight into the topic of AI''s role in shaping future
|
||||
work environments.\", \"coworker\": \"Scorer\"}\nObservation: 5 - The title
|
||||
\"The impact of AI in the future of work\" is effective and relevant; it captures
|
||||
attention and provides clear insight into the topic of AI''s influence on future
|
||||
work environments."}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '4046'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=YpzdHjazbzCs_jFT50s5M0hX1JuSBHogZUhvJPwKBxo-1756164908-1.0.1.1-0_bgL0r_yqn68W2fJN2xF5CCACGY7K50iAbPVUOmryv.tRDq7ol7sxSwg426hLg0zfDxbndIK7kH_GroqzjZPEDEvz1epE4gpDLuWX.pkao;
|
||||
_cfuvid=tx8GIgZ811onK882sIkQbg1SC1WHXyFyvZ_UvhApcYI-1756164908485-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFLLbtswELzrKxY8W4FkW37oFhQomhbopUEPrQKBplYSG4pkyVXSwPC/
|
||||
F5RiS0lTIBcC5OwMZ3b3GAEwWbEcmGg5ic6q+MPukNx+eyJt09/f/fbHzUNF6svXZLX//Klji8Aw
|
||||
h18o6My6EqazCkkaPcLCIScMquk226Sb9T5dDUBnKlSB1liK1yZeJst1nOziZPNMbI0U6FkOPyMA
|
||||
gONwBou6wj8sh2RxfunQe94gyy9FAMwZFV4Y91564prYYgKF0YR6cH3bmr5pKYcb0OYR7sNBLUIt
|
||||
NVfAtX9Ed1Xoj8P1erjmcCw0QMG8MA4LlkNW6NNc32Hdex7i6V6pGcC1NsRDe4Zkd8/I6ZJFmcY6
|
||||
c/CvqKyWWvq2dMi90cG3J2PZgJ4igLuhZ/2LNjDrTGepJHOPw3e7LB312DSlCV2eQTLE1Yy1XS7e
|
||||
0CsrJC6Vn3WdCS5arCbqNCLeV9LMgGiW+l83b2mPyaVu3iM/AUKgJaxK67CS4mXiqcxhWOL/lV26
|
||||
PBhmHt2DFFiSRBcmUWHNezXuF/NPnrAra6kbdNbJcclqW+6SfbbJVitxYNEp+gsAAP//AwDpVFDZ
|
||||
bQMAAA==
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 974ee792a8c6fa49-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 25 Aug 2025 23:35:13 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '570'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '598'
|
||||
x-ratelimit-limit-project-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-project-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29999042'
|
||||
x-ratelimit-reset-project-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 1ms
|
||||
x-request-id:
|
||||
- req_457bdb91af6147b39002e1b4804064fa
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
|
||||
are a seasoned manager with a knack for getting the best out of your team.\nYou
|
||||
are also known for your ability to delegate work to the right people, and to
|
||||
ask the right questions to get the best out of your team.\nEven though you don''t
|
||||
perform tasks by yourself, you have a lot of experience in the field, which
|
||||
allows you to properly evaluate the work of your team members.\nYour personal
|
||||
goal is: Manage the team to complete the task in the best way possible.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
|
||||
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
|
||||
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
|
||||
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
|
||||
''str''}}\nTool Description: Delegate a specific task to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the task you
|
||||
want them to do, and ALL necessary context to execute the task, they know nothing
|
||||
about the task, so share absolutely everything you know, don''t reference things
|
||||
but instead explain them.\nTool Name: Ask question to coworker\nTool Arguments:
|
||||
{''question'': {''description'': ''The question to ask'', ''type'': ''str''},
|
||||
''context'': {''description'': ''The context for the question'', ''type'': ''str''},
|
||||
''coworker'': {''description'': ''The role/name of the coworker to ask'', ''type'':
|
||||
''str''}}\nTool Description: Ask a specific question to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the question
|
||||
you have for them, and ALL necessary context to ask the question properly, they
|
||||
know nothing about the question, so share absolutely everything you know, don''t
|
||||
reference things but instead explain them.\n\nIMPORTANT: Use the following format
|
||||
in your response:\n\n```\nThought: you should always think about what to do\nAction:
|
||||
the action to take, only one name of [Delegate work to coworker, Ask question
|
||||
to coworker], just the name, exactly as it''s written.\nAction Input: the input
|
||||
to the action, just a simple JSON object, enclosed in curly braces, using \"
|
||||
to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
|
||||
all necessary information is gathered, return the following format:\n\n```\nThought:
|
||||
I now know the final answer\nFinal Answer: the final answer to the original
|
||||
input question\n```"}, {"role": "user", "content": "\nCurrent Task: Give me
|
||||
an integer score between 1-5 for the following title: ''The impact of AI in
|
||||
the future of work''\n\nThis is the expected criteria for your final answer:
|
||||
The score of the title.\nyou MUST return the actual complete content as the
|
||||
final answer, not a summary.\nEnsure your final answer contains only the content
|
||||
in the following format: {\n \"score\": int\n}\n\nEnsure the final output does
|
||||
not include any code block markers like ```json or ```python.\n\nBegin! This
|
||||
is VERY important to you, use the tools available and give your best Final Answer,
|
||||
your job depends on it!\n\nThought:"}, {"role": "assistant", "content": "Thought:
|
||||
The task requires a score to be assigned to the title \"The impact of AI in
|
||||
the future of work\" based on specific criteria. I should delegate this task
|
||||
to the Scorer, who is responsible for evaluating and scoring titles.\n\nAction:
|
||||
Delegate work to coworker\nAction Input: {\"task\": \"Score the title: ''The
|
||||
impact of AI in the future of work''\", \"context\": \"Provide an integer score
|
||||
between 1-5 based on its effectiveness and relevance. Consider how well it captures
|
||||
attention and provides insight into the topic of AI''s role in shaping future
|
||||
work environments.\", \"coworker\": \"Scorer\"}\nObservation: {\n \"score\":
|
||||
5\n}"}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '3872'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=YpzdHjazbzCs_jFT50s5M0hX1JuSBHogZUhvJPwKBxo-1756164908-1.0.1.1-0_bgL0r_yqn68W2fJN2xF5CCACGY7K50iAbPVUOmryv.tRDq7ol7sxSwg426hLg0zfDxbndIK7kH_GroqzjZPEDEvz1epE4gpDLuWX.pkao;
|
||||
_cfuvid=tx8GIgZ811onK882sIkQbg1SC1WHXyFyvZ_UvhApcYI-1756164908485-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAA4xSXYvbMBB8969Y9BwXx03c4LfjoKVwlGsphbY+jCKvbfVkSZXWTY+Q/16kfNhp
|
||||
e3AvAml2VjOzu08AmGxYCUz0nMRgVXq72eab/tOXdx/vOudvv+4+PHU/+3vxend3/40tAsNsf6Cg
|
||||
M+uVMINVSNLoIywccsLQdflmXSyLdbYsIjCYBlWgdZbSlUnzLF+l2SbNihOxN1KgZyV8TwAA9vEM
|
||||
EnWDv1kJ2eL8MqD3vENWXooAmDMqvDDuvfTENbHFBAqjCXVU/bk3Y9dTCe9Bmx08hoN6hFZqroBr
|
||||
v0NX6bfxdhNvJewrDVAxL4zDipWwrvRh3t5hO3oe3OlRqRnAtTbEQzrR2MMJOVysKNNZZ7b+Lypr
|
||||
pZa+rx1yb3SQ7clYFtFDAvAQIxuvUmDWmcFSTeYR43eb/BQZm4Y0ofnyBJIhrmas1Rm46lc3SFwq
|
||||
PwudCS56bCbqNCE+NtLMgGTm+l81/+t9dC5195L2EyAEWsKmtg4bKa4dT2UOww4/V3ZJOQpmHt0v
|
||||
KbAmiS5MosGWj+q4Xsw/ecKhbqXu0FknjzvW2npVbNs2w0xsWHJI/gAAAP//AwCoHa0pbAMAAA==
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 974eea15dfd2fa56-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 25 Aug 2025 23:36:57 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '867'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '949'
|
||||
x-ratelimit-limit-project-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-project-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29999086'
|
||||
x-ratelimit-reset-project-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 1ms
|
||||
x-request-id:
|
||||
- req_78fb283974264a01982de445532cac51
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
@@ -579,4 +579,657 @@ interactions:
|
||||
- req_dd38f9b6e9e6ac64ebbb827111e31414
|
||||
http_version: HTTP/1.1
|
||||
status_code: 200
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
|
||||
are a seasoned manager with a knack for getting the best out of your team.\nYou
|
||||
are also known for your ability to delegate work to the right people, and to
|
||||
ask the right questions to get the best out of your team.\nEven though you don''t
|
||||
perform tasks by yourself, you have a lot of experience in the field, which
|
||||
allows you to properly evaluate the work of your team members.\nYour personal
|
||||
goal is: Manage the team to complete the task in the best way possible.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
|
||||
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
|
||||
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
|
||||
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
|
||||
''str''}}\nTool Description: Delegate a specific task to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the task you
|
||||
want them to do, and ALL necessary context to execute the task, they know nothing
|
||||
about the task, so share absolutely everything you know, don''t reference things
|
||||
but instead explain them.\nTool Name: Ask question to coworker\nTool Arguments:
|
||||
{''question'': {''description'': ''The question to ask'', ''type'': ''str''},
|
||||
''context'': {''description'': ''The context for the question'', ''type'': ''str''},
|
||||
''coworker'': {''description'': ''The role/name of the coworker to ask'', ''type'':
|
||||
''str''}}\nTool Description: Ask a specific question to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the question
|
||||
you have for them, and ALL necessary context to ask the question properly, they
|
||||
know nothing about the question, so share absolutely everything you know, don''t
|
||||
reference things but instead explain them.\n\nIMPORTANT: Use the following format
|
||||
in your response:\n\n```\nThought: you should always think about what to do\nAction:
|
||||
the action to take, only one name of [Delegate work to coworker, Ask question
|
||||
to coworker], just the name, exactly as it''s written.\nAction Input: the input
|
||||
to the action, just a simple JSON object, enclosed in curly braces, using \"
|
||||
to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
|
||||
all necessary information is gathered, return the following format:\n\n```\nThought:
|
||||
I now know the final answer\nFinal Answer: the final answer to the original
|
||||
input question\n```"}, {"role": "user", "content": "\nCurrent Task: Give me
|
||||
an integer score between 1-5 for the following title: ''The impact of AI in
|
||||
the future of work''\n\nThis is the expected criteria for your final answer:
|
||||
The score of the title.\nyou MUST return the actual complete content as the
|
||||
final answer, not a summary.\nEnsure your final answer contains only the content
|
||||
in the following format: {\n \"score\": int\n}\n\nEnsure the final output does
|
||||
not include any code block markers like ```json or ```python.\n\nBegin! This
|
||||
is VERY important to you, use the tools available and give your best Final Answer,
|
||||
your job depends on it!\n\nThought:"}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '3194'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=N_ULh2gJbt2dWumbG8h6_rw_QD0TBQ3f1NYS.FsG3w0-1756165097-1.0.1.1-fGULs7H8u7wOLv7QAwRQdlWcZr2zj6dkLXQk5xPa.7SBOn9qj1nh6.VDONMzgxqO2telES4KZZPzeC2G4YFJXV5Q4hemTm4jcMsXA10XFYM;
|
||||
_cfuvid=_3IyqhyxR3x9q67TqmIC1eWhmeXZIMbk_6ChUHMngHM-1756165097262-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFRNb9swDL3nVxC69OIEcdukmW/FdlgOOwwL1mJzESgSbWuVJY+i84Ei
|
||||
/32Q8+F064BeDJuP7/GRpvQyABBGiwyEqiSrurHDj7PVzePj3aJ6UA9f59Xd/H7ifqy/04fPX2gn
|
||||
ksjwq1+o+MQaKV83Ftl4d4AVoWSMqundZJpOJ+n4pgNqr9FGWtnw8NYPr8fXt8PxbDieHomVNwqD
|
||||
yODnAADgpXtGi07jVmQwTk6RGkOQJYrsnAQgyNsYETIEE1g6FkkPKu8YXed6Ufm2rDiDOThEDexB
|
||||
o8VSMgJXCCzDM/gCgvJkXHmIGbYIuVhUCKZupOKYcT8H4zq8aLkljLGNp+dcRNEY/6Y8IY1gDhtj
|
||||
LTTk10Z3ZWrYGK5AWgsOVeyHdtC53HJk4xZVe+FolLvc3as45ww+nfzGajFb+fiGdEqBuWtazuAl
|
||||
F5Gciwxy0Zm5aOfqfd1cgXcgIShpEQryNaSx5GSUiwRycfR8KLE4zY/wd2sIA+Ba2lby60GuZEAd
|
||||
ZQup2FOA0KoKZABCi2vpFCZHYwlIp0FZSYZ3I4j6oWsjVL61GlYI0oFxjCUSrJA3iO5oMIFNhYSQ
|
||||
ggnQeE+d1iR+4Vahtej43MRxfv2gKBf7yw0iLNog4wK71toLQDrnWcapd7v7dET25221vmzIr8Jf
|
||||
VFEYZ0K1JJTBu7iZgX0jOnQ/AHjqTkX7atFFQ75ueMn+Gbty07vpQU/057BH05vJEWXP0vbALE2T
|
||||
NwSXGlkaGy4OllBSVah7an8KZauNvwAGF23/a+ct7UPrxpXvke8BpbBh1MuGUBv1uuU+jTDeU/9L
|
||||
O4+5MywC0tooXLJBir9CYyFbe7hCRNgFxnpZGFciNWQO90jRLG+nq6IY41jNxGA/+AMAAP//AwCd
|
||||
ZJbYUAUAAA==
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 974eec36591c8486-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 25 Aug 2025 23:38:27 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '3819'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '3856'
|
||||
x-ratelimit-limit-project-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-project-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29999242'
|
||||
x-ratelimit-reset-project-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 1ms
|
||||
x-request-id:
|
||||
- req_e91c7ccb367243348f19a41989b6992d
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Scorer. You''re an
|
||||
expert scorer, specialized in scoring titles.\nYour personal goal is: Score
|
||||
the title\nTo give my best complete final answer to the task respond using the
|
||||
exact following format:\n\nThought: I now can give a great answer\nFinal Answer:
|
||||
Your final answer must be the great and the most complete as possible, it must
|
||||
be outcome described.\n\nI MUST use these formats, my job depends on it!"},
|
||||
{"role": "user", "content": "\nCurrent Task: Score the title ''The impact of
|
||||
AI in the future of work'' on a scale from 1 to 5.\n\nThis is the expected criteria
|
||||
for your final answer: Your best answer to your coworker asking you this, accounting
|
||||
for the context shared.\nyou MUST return the actual complete content as the
|
||||
final answer, not a summary.\n\nThis is the context you''re working with:\nThe
|
||||
task requires evaluating the title based on factors such as relevance, impact,
|
||||
and clarity. The score should be an integer between 1 to 5, where 1 is poor
|
||||
and 5 is excellent.\n\nBegin! This is VERY important to you, use the tools available
|
||||
and give your best Final Answer, your job depends on it!\n\nThought:"}, {"role":
|
||||
"assistant", "content": "Thought: I need to delegate the task of scoring the
|
||||
title \"The impact of AI in the future of work\" to the Scorer. I will provide
|
||||
them with all necessary context to execute the task.\n\nAction: Delegate work
|
||||
to coworker\nAction Input: {\"task\": \"Score the title ''The impact of AI in
|
||||
the future of work'' on a scale from 1 to 5.\", \"context\": \"The task requires
|
||||
evaluating the title based on factors such as relevance, impact, and clarity.
|
||||
The score should be an integer between 1 to 5, where 1 is poor and 5 is excellent.\",
|
||||
\"coworker\": \"Scorer\"}\nObservation: I encountered an error: Action ''Delegate
|
||||
work to coworker'' don''t exist, these are the only available Actions:\n\nMoving
|
||||
on then. I MUST either use a tool (use one at time) OR give my best final answer
|
||||
not both at the same time. When responding, I must use the following format:\n\n```\nThought:
|
||||
you should always think about what to do\nAction: the action to take, should
|
||||
be one of []\nAction Input: the input to the action, dictionary enclosed in
|
||||
curly braces\nObservation: the result of the action\n```\nThis Thought/Action/Action
|
||||
Input/Result can repeat N times. Once I know the final answer, I must return
|
||||
the following format:\n\n```\nThought: I now can give a great answer\nFinal
|
||||
Answer: Your final answer must be the great and the most complete as possible,
|
||||
it must be outcome described\n\n```"}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '2627'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=N_ULh2gJbt2dWumbG8h6_rw_QD0TBQ3f1NYS.FsG3w0-1756165097-1.0.1.1-fGULs7H8u7wOLv7QAwRQdlWcZr2zj6dkLXQk5xPa.7SBOn9qj1nh6.VDONMzgxqO2telES4KZZPzeC2G4YFJXV5Q4hemTm4jcMsXA10XFYM;
|
||||
_cfuvid=_3IyqhyxR3x9q67TqmIC1eWhmeXZIMbk_6ChUHMngHM-1756165097262-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//nFVNb9xGDL37VxC6uDG0W8fxOsbejKAfBooWKHwp6sDgzlAS49GMOqR2
|
||||
rQb57wVHXu+mSYCiFwEaDsk3j4/kxxOAin21hsp1qK4fwuLd9ebNJk43dPHTdHH3yx+/Xvrh8a7T
|
||||
H27jm6aqzSNtPpDTvdfSpX4IpJzibHaZUMmivn67unp9tXp9/rYY+uQpmFs76OIyLS7OLy4X59eL
|
||||
86tnxy6xI6nW8OcJAMDH8jWI0dNTtYbzen/Skwi2VK1fLgFUOQU7qVCERTFqVR+MLkWlWFDfdWls
|
||||
O13DLcS0A4cRWt4SILQGHTDKjvJ9/JEjBrgpf2u4S0BbDCMqgXYEyhoITu86Au4HdAqpgZtb4FjM
|
||||
zahjJjvbpfx4WsOOIBJ50AQuRWFPGTIF2mJ0U/0cowaMHlzAzDot4WfKdColHoqQSE9R1/fxPi7g
|
||||
7Oz3Z2+C71bfr16tz87g7gUYC3TcdmHaJ7HYAjdZuWHHGOA2KoXALZUIN7evzAehxw8pg6aBnaHn
|
||||
qJRJtLwreZxOBRoUXdA2hS3HFpRcF1NILTsMEDB6cTjQsoD5nIY5g8tjASBjERJoZ6Q3DTkV8Lyl
|
||||
LAQc/SiamaRQMuTUkAinKBYp+B17qmHXseugx0cSYAUE5Z7CVFx6wsixbcajTAnoaQgp0/I+Asw0
|
||||
3s7V+wqHMrYtiRpmy5/G6Pe1FjbWrNyGT9nxYKK3DMJtNI4xaphAM0ZpUu5LFY0EGdARbCbAUVOP
|
||||
ahx+SBupoXSO/Uba2RE4VGqTkTALA4NStgtd2llkz9Z2GIo/YCYYKFsy8ku4iRimv0uFOhKCIVkD
|
||||
GPGuw9iSzMq3nGECjgWkZeHYhLGoYkiB3WT0ZqlhMwpHMh3OaPYPalJ2x4S+m+X7DVW6QJhnmafo
|
||||
WGgJtwqeMznDMSSOKpBGLfH3lSv63CeVgZwxDKWpnxS++7LjXpke5mJJkWLppJgA+w23I+tUm26M
|
||||
HhOOc6auTSCrIMImJ/SAo2fjoQaOLozeLh+EiEFqEB09RX1mpKVIGQNkQk9ZXpqHfGmffaNYDx1I
|
||||
ThGoH0KarLlBM0UvS+vx37YWLNT/Y9yAuJRL6wA9OQrBQq8Kq6mBVaEcPf1l0yxM4HAw99KcZZo/
|
||||
1dCkvMPsF9pxnGkSGQl2rN1+QNUvA4zm9x9Jf4/yRXXL42mcqRkFbRnEMYQjA8aYFE3VZQ+8f7Z8
|
||||
epn8IbVDThv5l2vVcGTpHjKhpGhTXjQNVbF+OgF4XzbM+NnSqIac+kEfND1SSbe6ejPHqw477WC9
|
||||
uL54tmpSDAfD9eWq/krAB0+KHORoSVUOXUf+4HrYaKa1dGQ4OXr2l3C+Fnt+Osf2v4Q/GJyjQck/
|
||||
DJk8u8+ffLiWydrwW9deaC6AK6G8ZUcPypStFJ4aHMO8jiuZRKl/aDi2lIfM805uhofLq03TnNO5
|
||||
u65OPp38AwAA//8DAJ7ERcucCAAA
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 974eec4fbebf8486-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 25 Aug 2025 23:38:31 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '4218'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '4238'
|
||||
x-ratelimit-limit-project-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-project-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29999394'
|
||||
x-ratelimit-reset-project-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 1ms
|
||||
x-request-id:
|
||||
- req_28f00852660542e88719133a722fce84
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
|
||||
are a seasoned manager with a knack for getting the best out of your team.\nYou
|
||||
are also known for your ability to delegate work to the right people, and to
|
||||
ask the right questions to get the best out of your team.\nEven though you don''t
|
||||
perform tasks by yourself, you have a lot of experience in the field, which
|
||||
allows you to properly evaluate the work of your team members.\nYour personal
|
||||
goal is: Manage the team to complete the task in the best way possible.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
|
||||
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
|
||||
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
|
||||
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
|
||||
''str''}}\nTool Description: Delegate a specific task to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the task you
|
||||
want them to do, and ALL necessary context to execute the task, they know nothing
|
||||
about the task, so share absolutely everything you know, don''t reference things
|
||||
but instead explain them.\nTool Name: Ask question to coworker\nTool Arguments:
|
||||
{''question'': {''description'': ''The question to ask'', ''type'': ''str''},
|
||||
''context'': {''description'': ''The context for the question'', ''type'': ''str''},
|
||||
''coworker'': {''description'': ''The role/name of the coworker to ask'', ''type'':
|
||||
''str''}}\nTool Description: Ask a specific question to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the question
|
||||
you have for them, and ALL necessary context to ask the question properly, they
|
||||
know nothing about the question, so share absolutely everything you know, don''t
|
||||
reference things but instead explain them.\n\nIMPORTANT: Use the following format
|
||||
in your response:\n\n```\nThought: you should always think about what to do\nAction:
|
||||
the action to take, only one name of [Delegate work to coworker, Ask question
|
||||
to coworker], just the name, exactly as it''s written.\nAction Input: the input
|
||||
to the action, just a simple JSON object, enclosed in curly braces, using \"
|
||||
to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
|
||||
all necessary information is gathered, return the following format:\n\n```\nThought:
|
||||
I now know the final answer\nFinal Answer: the final answer to the original
|
||||
input question\n```"}, {"role": "user", "content": "\nCurrent Task: Give me
|
||||
an integer score between 1-5 for the following title: ''The impact of AI in
|
||||
the future of work''\n\nThis is the expected criteria for your final answer:
|
||||
The score of the title.\nyou MUST return the actual complete content as the
|
||||
final answer, not a summary.\nEnsure your final answer contains only the content
|
||||
in the following format: {\n \"score\": int\n}\n\nEnsure the final output does
|
||||
not include any code block markers like ```json or ```python.\n\nBegin! This
|
||||
is VERY important to you, use the tools available and give your best Final Answer,
|
||||
your job depends on it!\n\nThought:"}, {"role": "assistant", "content": "Thought:
|
||||
I need to delegate the task of scoring the title \"The impact of AI in the future
|
||||
of work\" to the Scorer. I will provide them with all necessary context to execute
|
||||
the task.\n\nAction: Delegate work to coworker\nAction Input: {\"task\": \"Score
|
||||
the title ''The impact of AI in the future of work'' on a scale from 1 to 5.\",
|
||||
\"context\": \"The task requires evaluating the title based on factors such
|
||||
as relevance, impact, and clarity. The score should be an integer between 1
|
||||
to 5, where 1 is poor and 5 is excellent.\", \"coworker\": \"Scorer\"}\nObservation:
|
||||
To evaluate the title ''The impact of AI in the future of work'', we need to
|
||||
consider relevancy, impact, and clarity. Here''s the assessment:\n\n- **Relevance
|
||||
(5/5):** The title is highly relevant, as Artificial Intelligence (AI) is a
|
||||
major topic of interest in today''s fast-evolving technological landscape. The
|
||||
future of work is a crucial subject that affects diverse industries and professions
|
||||
worldwide, which makes it a timely and meaningful subject to explore.\n \n-
|
||||
**Impact (5/5):** The title suggests a profound impact since AI is anticipated
|
||||
to significantly transform the workspace by automating jobs, creating new job
|
||||
categories, and altering how traditional jobs are performed. Analyzing these
|
||||
potential changes can greatly inform and influence policymakers, businesses,
|
||||
and the workforce.\n \n- **Clarity (5/5):** The title is clear and concise.
|
||||
It directly points out the subject (AI) and the specific context (the future
|
||||
of work) it impacts. There''s no ambiguity, making it accessible to a broad
|
||||
audience, including professionals, students, and general readers interested
|
||||
in technology''s influence on employment trends.\n\nOverall, the title ''The
|
||||
impact of AI in the future of work'' scores an excellent 5 out of 5. It adequately
|
||||
captures a complex, forward-thinking issue with clarity, relevance, and significant
|
||||
impact potential."}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '5147'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=N_ULh2gJbt2dWumbG8h6_rw_QD0TBQ3f1NYS.FsG3w0-1756165097-1.0.1.1-fGULs7H8u7wOLv7QAwRQdlWcZr2zj6dkLXQk5xPa.7SBOn9qj1nh6.VDONMzgxqO2telES4KZZPzeC2G4YFJXV5Q4hemTm4jcMsXA10XFYM;
|
||||
_cfuvid=_3IyqhyxR3x9q67TqmIC1eWhmeXZIMbk_6ChUHMngHM-1756165097262-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFJda9wwEHz3r1j0fA72fcXxW1oIlPahlFAKdTA6eW2rkSVFWt+1Pe6/
|
||||
F9mXs9Ok0BeBNDujmd09RgBMViwHJlpOorMqfp/tVvXTzVP7ZU2Zv91/fWc/Hu7Ft893+0+/2SIw
|
||||
zO4HCnpmXQnTWYUkjR5h4ZATBtX0erNNt5s0TQegMxWqQGssxWsTL5PlOk6yONmeia2RAj3L4XsE
|
||||
AHAczmBRV/iT5ZAsnl869J43yPJLEQBzRoUXxr2XnrgmtphAYTShHlzft6ZvWsrhA2hzgMdwUItQ
|
||||
S80VcO0P6K4KfTdcb4drDsdCAxTMC+OwYDlsCn2a6zuse89DPN0rNQO41oZ4aM+Q7OGMnC5ZlGms
|
||||
Mzv/F5XVUkvflg65Nzr49mQsG9BTBPAw9Kx/0QZmnekslWQecfguTbLtKMimMU3wMj2DZIirGS1N
|
||||
rhdvKJYVEpfKz/rOBBctVhN3GhLvK2lmQDTL/drOW9pjdqmb/5GfACHQElaldVhJ8TLyVOYwrPG/
|
||||
yi59Hgwzj24vBZYk0YVZVFjzXo0bxvwvT9iVtdQNOuvkuGa1LbPkZrPdrFZix6JT9AcAAP//AwBy
|
||||
ib1VbwMAAA==
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 974eec6aa9b18486-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 25 Aug 2025 23:38:32 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '556'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '614'
|
||||
x-ratelimit-limit-project-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-project-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29998767'
|
||||
x-ratelimit-reset-project-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 2ms
|
||||
x-request-id:
|
||||
- req_92650754cfb84bb9bff203c3b554da6e
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
are a seasoned manager with a knack for getting the best out of your team.\nYou
are also known for your ability to delegate work to the right people, and to
ask the right questions to get the best out of your team.\nEven though you don''t
perform tasks by yourself, you have a lot of experience in the field, which
allows you to properly evaluate the work of your team members.\nYour personal
goal is: Manage the team to complete the task in the best way possible.\nYou
ONLY have access to the following tools, and should NEVER make up tools that
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
''str''}}\nTool Description: Delegate a specific task to one of the following
coworkers: Scorer\nThe input to this tool should be the coworker, the task you
want them to do, and ALL necessary context to execute the task, they know nothing
about the task, so share absolutely everything you know, don''t reference things
but instead explain them.\nTool Name: Ask question to coworker\nTool Arguments:
{''question'': {''description'': ''The question to ask'', ''type'': ''str''},
''context'': {''description'': ''The context for the question'', ''type'': ''str''},
''coworker'': {''description'': ''The role/name of the coworker to ask'', ''type'':
''str''}}\nTool Description: Ask a specific question to one of the following
coworkers: Scorer\nThe input to this tool should be the coworker, the question
you have for them, and ALL necessary context to ask the question properly, they
know nothing about the question, so share absolutely everything you know, don''t
reference things but instead explain them.\n\nIMPORTANT: Use the following format
in your response:\n\n```\nThought: you should always think about what to do\nAction:
the action to take, only one name of [Delegate work to coworker, Ask question
to coworker], just the name, exactly as it''s written.\nAction Input: the input
to the action, just a simple JSON object, enclosed in curly braces, using \"
to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
all necessary information is gathered, return the following format:\n\n```\nThought:
I now know the final answer\nFinal Answer: the final answer to the original
input question\n```"}, {"role": "user", "content": "\nCurrent Task: Give me
an integer score between 1-5 for the following title: ''The impact of AI in
the future of work''\n\nThis is the expected criteria for your final answer:
The score of the title.\nyou MUST return the actual complete content as the
final answer, not a summary.\nEnsure your final answer contains only the content
in the following format: {\n \"score\": int\n}\n\nEnsure the final output does
not include any code block markers like ```json or ```python.\n\nBegin! This
is VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:"}, {"role": "assistant", "content": "Thought:
I need to delegate the task of scoring the title \"The impact of AI in the future
of work\" to the Scorer. I will provide them with all necessary context to execute
the task.\n\nAction: Delegate work to coworker\nAction Input: {\"task\": \"Score
the title ''The impact of AI in the future of work'' on a scale from 1 to 5.\",
\"context\": \"The task requires evaluating the title based on factors such
as relevance, impact, and clarity. The score should be an integer between 1
to 5, where 1 is poor and 5 is excellent.\", \"coworker\": \"Scorer\"}\nObservation:
{\n \"score\": 5\n}"}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '3822'
content-type:
- application/json
cookie:
- _cfuvid=CW_cKQGYWY3cL.S6Xo5z0cmkmWHy5Q50OA_KjPEijNk-1735926034530-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.93.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.93.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//jFLBjpswEL3zFSOfQ0VIIFlu1UpV01MPbaWqrJBjBvCusS17yLaK8u+V
IRvYdiv1Ysl+857fm5lzBMBkzQpgouMkeqvi+/0x++Z2h/RR3X/++N3Lr+r4Kdkc9N3hdGKrwDDH
RxT0wnonTG8VkjR6goVDThhU17ssX+dZmmYj0JsaVaC1luKtidMk3cbJPk7yK7EzUqBnBfyIAADO
4xks6hp/sgKS1ctLj97zFllxKwJgzqjwwrj30hPXxFYzKIwm1KPrL50Z2o4KOIA2z/AUDuoQGqm5
Aq79M7pSfxhv78dbAedSA5TMC+OwZAVkpb4s5R02g+chnR6UWgBca0M8dGcM9nBFLrcoyrTWmaP/
g8oaqaXvKofcGx1sezKWjeglAngYWza86gKzzvSWKjJPOH63T/NJj81DmtF0fQXJEFcL1na3ekOv
qpG4VH7RdCa46LCeqfOE+FBLswCiReq/3bylPSWXuv0f+RkQAi1hXVmHtRSvE89lDsMO/6vs1uXR
MPPoTlJgRRJdmESNDR/UtF7M//KEfdVI3aKzTk471thqn9xlebbZiCOLLtFvAAAA//8DAGD1uEts
AwAA
headers:
CF-RAY:
- 974eef2f6d51dc0d-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Mon, 25 Aug 2025 23:40:25 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=OccMxrmCRox_7bIJtedFlhoPOuuUvZzIFuNich2H1JY-1756165225-1.0.1.1-GZu0FYiwF3GrVFp6V5lDMCNsqJ3lPFJoATnPc50dACgSXHbkzGM7.MHv7GvUoDKXSyRy2Rxf5hGsRH_FPp.tBEzcki6qQDXIBhWde1NEJkE;
path=/; expires=Tue, 26-Aug-25 00:10:25 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=aj4gjWj2skMQt34sZXxfiLoWjDQdxXvxh0I9isfN5II-1756165225584-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '510'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '623'
x-ratelimit-limit-project-requests:
- '10000'
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-project-requests:
- '9999'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999099'
x-ratelimit-reset-project-requests:
- 6ms
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 1ms
x-request-id:
- req_18884789d50842b7bcfffb3752d3fd0f
status:
code: 200
message: OK
version: 1
@@ -561,4 +561,634 @@ interactions:
- req_c8bb2af8f4a63658c3c9316045a413e1
http_version: HTTP/1.1
status_code: 200
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
|
||||
are a seasoned manager with a knack for getting the best out of your team.\nYou
|
||||
are also known for your ability to delegate work to the right people, and to
|
||||
ask the right questions to get the best out of your team.\nEven though you don''t
|
||||
perform tasks by yourself, you have a lot of experience in the field, which
|
||||
allows you to properly evaluate the work of your team members.\nYour personal
|
||||
goal is: Manage the team to complete the task in the best way possible.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
|
||||
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
|
||||
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
|
||||
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
|
||||
''str''}}\nTool Description: Delegate a specific task to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the task you
|
||||
want them to do, and ALL necessary context to execute the task, they know nothing
|
||||
about the task, so share absolutely everything you know, don''t reference things
|
||||
but instead explain them.\nTool Name: Ask question to coworker\nTool Arguments:
|
||||
{''question'': {''description'': ''The question to ask'', ''type'': ''str''},
|
||||
''context'': {''description'': ''The context for the question'', ''type'': ''str''},
|
||||
''coworker'': {''description'': ''The role/name of the coworker to ask'', ''type'':
|
||||
''str''}}\nTool Description: Ask a specific question to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the question
|
||||
you have for them, and ALL necessary context to ask the question properly, they
|
||||
know nothing about the question, so share absolutely everything you know, don''t
|
||||
reference things but instead explain them.\n\nIMPORTANT: Use the following format
|
||||
in your response:\n\n```\nThought: you should always think about what to do\nAction:
|
||||
the action to take, only one name of [Delegate work to coworker, Ask question
|
||||
to coworker], just the name, exactly as it''s written.\nAction Input: the input
|
||||
to the action, just a simple JSON object, enclosed in curly braces, using \"
|
||||
to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
|
||||
all necessary information is gathered, return the following format:\n\n```\nThought:
|
||||
I now know the final answer\nFinal Answer: the final answer to the original
|
||||
input question\n```"}, {"role": "user", "content": "\nCurrent Task: Give me
|
||||
an integer score between 1-5 for the following title: ''The impact of AI in
|
||||
the future of work''\n\nThis is the expected criteria for your final answer:
|
||||
The score of the title.\nyou MUST return the actual complete content as the
|
||||
final answer, not a summary.\nEnsure your final answer contains only the content
|
||||
in the following format: {\n \"score\": int\n}\n\nEnsure the final output does
|
||||
not include any code block markers like ```json or ```python.\n\nBegin! This
|
||||
is VERY important to you, use the tools available and give your best Final Answer,
|
||||
your job depends on it!\n\nThought:"}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '3194'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAA4xUTW/bMAy951cQuvSSFEnbpI1vxYYBxbC1w3rZ5iJQJNrWKouCRLstivz3QY4T
|
||||
u/sAdjEMPvLxPYrS6wRAGC0yEKqSrGpvZ++utud36tZ8jtWHL/rO8O26/fbykb/P3ScnpqmCtj9R
|
||||
8aHqVFHtLbKhHlYBJWNiXVwuV4vVcr5edkBNGm0qKz3PLmh2Nj+7mM2vZvNVX1iRURhFBj8mAACv
|
||||
3TdJdBqfRQbz6SFSY4yyRJEdkwBEIJsiQsZoIkvHYjqAihyj61TfV9SUFWdwAw5RAxNotFhKRuAK
|
||||
gWV8BCogKgrGlfuYYYspk5oAXxUFDFPwgVqj+5QangxX0PV5ZpBbanhUKp0eyCVDJZ0+zV3urlUa
|
||||
XQbvDxKeKDymTorSH4ZDCtw433AGr7lIJLnIIBedlFGbk/sKwdReKk4Wrm/AuA4uGm4CplhiPQFy
|
||||
ICEqaRGKQDUsUsvlaS6mkIvexL7F/UG1cS3ZFiNgK20j+TCb0rToRjaHsUgHxjGWGLphImyRnxAd
|
||||
LLrEJWxlRJ20HB2cRPCUTspI2xuZQkCLrXQKp11d4gwYGSy2aI+a+3ENcwm52I13IGDRRJlW0DXW
|
||||
jgDpHLFMQ+6276FHdsd9s1T6QNv4W6kojDOx2gSUkVzarcjkRYfuJgAP3V43b1ZV+EC15w3TI3bt
|
||||
VperPZ8YbtKALhYXPcrE0g7A5bq/Dm8JNxpZGhtHV0MoqSrUQ+lwj2SjDY2Aycj2n3L+xr23blz5
|
||||
P/QDoBR6Rr3xAbVRby0PaQHTS/OvtOOYO8EiYmiNwg0bDOkoNBaysftHQMSXyFhvCuNKDD6Y/UtQ
|
||||
+M3VfL1cLc/P1VZMdpNfAAAA//8DAFYiBXASBQAA
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 974eec034af08486-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 25 Aug 2025 23:38:17 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=N_ULh2gJbt2dWumbG8h6_rw_QD0TBQ3f1NYS.FsG3w0-1756165097-1.0.1.1-fGULs7H8u7wOLv7QAwRQdlWcZr2zj6dkLXQk5xPa.7SBOn9qj1nh6.VDONMzgxqO2telES4KZZPzeC2G4YFJXV5Q4hemTm4jcMsXA10XFYM;
|
||||
path=/; expires=Tue, 26-Aug-25 00:08:17 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=_3IyqhyxR3x9q67TqmIC1eWhmeXZIMbk_6ChUHMngHM-1756165097262-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '2201'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '2275'
|
||||
x-ratelimit-limit-project-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-project-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29999242'
|
||||
x-ratelimit-reset-project-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 1ms
|
||||
x-request-id:
|
||||
- req_8253241bc24547018d1600b0ec310060
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Scorer. You''re an
|
||||
expert scorer, specialized in scoring titles.\nYour personal goal is: Score
|
||||
the title\nTo give my best complete final answer to the task respond using the
|
||||
exact following format:\n\nThought: I now can give a great answer\nFinal Answer:
|
||||
Your final answer must be the great and the most complete as possible, it must
|
||||
be outcome described.\n\nI MUST use these formats, my job depends on it!"},
|
||||
{"role": "user", "content": "\nCurrent Task: Score the title ''The impact of
|
||||
AI in the future of work'' on a scale from 1 to 5.\n\nThis is the expected criteria
|
||||
for your final answer: Your best answer to your coworker asking you this, accounting
|
||||
for the context shared.\nyou MUST return the actual complete content as the
|
||||
final answer, not a summary.\n\nThis is the context you''re working with:\nThe
|
||||
task involves evaluating the given title and providing an integer score between
|
||||
1 and 5 based on the title''s potential impact, relevance, and interest level.\n\nBegin!
|
||||
This is VERY important to you, use the tools available and give your best Final
|
||||
Answer, your job depends on it!\n\nThought:"}, {"role": "assistant", "content":
|
||||
"Thought: I need to delegate the task of scoring the title to our Scorer, providing
|
||||
them with context about the title and the task at hand.\n\nAction: Delegate
|
||||
work to coworker\nAction Input: {\"task\": \"Score the title ''The impact of
|
||||
AI in the future of work'' on a scale from 1 to 5.\", \"context\": \"The task
|
||||
involves evaluating the given title and providing an integer score between 1
|
||||
and 5 based on the title''s potential impact, relevance, and interest level.\",
|
||||
\"coworker\": \"Scorer\"}\nObservation: I encountered an error: Action ''Delegate
|
||||
work to coworker'' don''t exist, these are the only available Actions:\n\nMoving
|
||||
on then. I MUST either use a tool (use one at time) OR give my best final answer
|
||||
not both at the same time. When responding, I must use the following format:\n\n```\nThought:
|
||||
you should always think about what to do\nAction: the action to take, should
|
||||
be one of []\nAction Input: the input to the action, dictionary enclosed in
|
||||
curly braces\nObservation: the result of the action\n```\nThis Thought/Action/Action
|
||||
Input/Result can repeat N times. Once I know the final answer, I must return
|
||||
the following format:\n\n```\nThought: I now can give a great answer\nFinal
|
||||
Answer: Your final answer must be the great and the most complete as possible,
|
||||
it must be outcome described\n\n```"}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '2548'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=N_ULh2gJbt2dWumbG8h6_rw_QD0TBQ3f1NYS.FsG3w0-1756165097-1.0.1.1-fGULs7H8u7wOLv7QAwRQdlWcZr2zj6dkLXQk5xPa.7SBOn9qj1nh6.VDONMzgxqO2telES4KZZPzeC2G4YFJXV5Q4hemTm4jcMsXA10XFYM;
|
||||
_cfuvid=_3IyqhyxR3x9q67TqmIC1eWhmeXZIMbk_6ChUHMngHM-1756165097262-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAA5xVTY/bRgy9+1cQuvRiL7zfW98WKVoYCNoiSHpoHRjjESUxOxoKQ8peI9j/XnC0
|
||||
lrxpChS9CNJw+PhIPlJfZwAFlcUKCt849W0XFu8edtcfPskfhO/p+Vjq49Ofx19/2/10uPz0Cxdz
|
||||
8+DdF/R68rrw3HYBlTgOZp/QKRrq5f3t3eXd7fLH+2xoucRgbnWnixteXC2vbhbLh8Xy7tWxYfIo
|
||||
xQr+mgEAfM1PoxhLfC5WsJyfTloUcTUWq/ESQJE42EnhREjURS3mk9FzVIyZ9ceG+7rRFazhQCEA
|
||||
7l3onSJog6CkAWHnBEvgCKQCHZsnuQDUds7rHBIG3LvocQ4ulkBRMaEoBNxjACfQYcpoNe0xQo79
|
||||
rBebuIkL+H2EW2e4FXwc424Kex/CAFfwuAaKGanqtU9oZwdOT5sCpK9rFBVwUJL4XoQ4GuWGD+an
|
||||
6JvIgWtCGfLU5KJUnNoM0QXnUeZwaMg3QIYjVEeqyLuoOS+lFsMRlDvycCBt4EAlLhI631CsjWcg
|
||||
76z1cmGpfTjVZchJ+qwUA2+obsLxVDi1Gj2uoXECO8QIFE01QrEOx1zOOpmG7JVh7xJxL+CkQ69y
|
||||
qsEcKFahx+iNyxfegQlAhpYkLLGimFnGshdNhAPH9alZ761ZRpTktfokEOhpSBmcarIunBdlbHTZ
|
||||
o93JLU58sDC+T8RCeszxPUePKULC2qXS7I/rHyQzPHXUcqg4mYha95SZKjiwccIQ7HsofMXpvMND
|
||||
doIu+SYr6h1HoRJT9mhQ8FSo+UngXeI9lWgd9pwwI45azxhnIxH5AN7FLF1wUNs0g4tywLSJP1N0
|
||||
AR7z1wpuYfH/tDsJYrhe9WEcKn07U5bVWbVHDTs1ftNoDgHfiJ7iKB1Th6BXTjKgq0xDbNgca86h
|
||||
xkILuB33+k0C2HaBjy1GvTjfLgmrXpwtt9iHcGZwMbIOE2J77fOr5WXcZIHrLvFOvnEtTLzSbG0q
|
||||
ONrWEuWuyNaXGcDnvDH7N0uw6BK3nW6VnzCHu72+GvCKaUdP1qvl7atVWV2YDPfX9/PvAG5LVEdB
|
||||
zpZu4Z1vsJxcpw3t+pL4zDA7S/ufdL6HPaROsf4v8JPBe+wUy22XsCT/NuXpWkLbTP92bSxzJlwI
|
||||
pj153CphslaUWLk+DL+XQo6i2G4rijWmLtHwj6m67c3drqqWuPQPxexl9jcAAAD//wMAzZYQhWwH
|
||||
AAA=
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 974eec121c168486-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 25 Aug 2025 23:38:21 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '3958'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '4032'
|
||||
x-ratelimit-limit-project-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-project-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29999412'
|
||||
x-ratelimit-reset-project-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 1ms
|
||||
x-request-id:
|
||||
- req_bd8ada32c9474dad85877e84f73b632b
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
|
||||
are a seasoned manager with a knack for getting the best out of your team.\nYou
|
||||
are also known for your ability to delegate work to the right people, and to
|
||||
ask the right questions to get the best out of your team.\nEven though you don''t
|
||||
perform tasks by yourself, you have a lot of experience in the field, which
|
||||
allows you to properly evaluate the work of your team members.\nYour personal
|
||||
goal is: Manage the team to complete the task in the best way possible.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
|
||||
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
|
||||
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
|
||||
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
|
||||
''str''}}\nTool Description: Delegate a specific task to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the task you
|
||||
want them to do, and ALL necessary context to execute the task, they know nothing
|
||||
about the task, so share absolutely everything you know, don''t reference things
|
||||
but instead explain them.\nTool Name: Ask question to coworker\nTool Arguments:
|
||||
{''question'': {''description'': ''The question to ask'', ''type'': ''str''},
|
||||
''context'': {''description'': ''The context for the question'', ''type'': ''str''},
|
||||
''coworker'': {''description'': ''The role/name of the coworker to ask'', ''type'':
|
||||
''str''}}\nTool Description: Ask a specific question to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the question
|
||||
you have for them, and ALL necessary context to ask the question properly, they
|
||||
know nothing about the question, so share absolutely everything you know, don''t
|
||||
reference things but instead explain them.\n\nIMPORTANT: Use the following format
|
||||
in your response:\n\n```\nThought: you should always think about what to do\nAction:
|
||||
the action to take, only one name of [Delegate work to coworker, Ask question
|
||||
to coworker], just the name, exactly as it''s written.\nAction Input: the input
|
||||
to the action, just a simple JSON object, enclosed in curly braces, using \"
|
||||
to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
|
||||
all necessary information is gathered, return the following format:\n\n```\nThought:
|
||||
I now know the final answer\nFinal Answer: the final answer to the original
|
||||
input question\n```"}, {"role": "user", "content": "\nCurrent Task: Give me
|
||||
an integer score between 1-5 for the following title: ''The impact of AI in
|
||||
the future of work''\n\nThis is the expected criteria for your final answer:
|
||||
The score of the title.\nyou MUST return the actual complete content as the
|
||||
final answer, not a summary.\nEnsure your final answer contains only the content
|
||||
in the following format: {\n \"score\": int\n}\n\nEnsure the final output does
|
||||
not include any code block markers like ```json or ```python.\n\nBegin! This
|
||||
is VERY important to you, use the tools available and give your best Final Answer,
|
||||
your job depends on it!\n\nThought:"}, {"role": "assistant", "content": "Thought:
|
||||
I need to delegate the task of scoring the title to our Scorer, providing them
|
||||
with context about the title and the task at hand.\n\nAction: Delegate work
|
||||
to coworker\nAction Input: {\"task\": \"Score the title ''The impact of AI in
|
||||
the future of work'' on a scale from 1 to 5.\", \"context\": \"The task involves
|
||||
evaluating the given title and providing an integer score between 1 and 5 based
|
||||
on the title''s potential impact, relevance, and interest level.\", \"coworker\":
|
||||
\"Scorer\"}\nObservation: 5 - The title \"The impact of AI in the future of
|
||||
work\" is highly impactful, relevant, and interesting due to the transformative
|
||||
potential of AI technologies in various job sectors and its relevance to ongoing
|
||||
discussions about the future of employment."}], "model": "gpt-4o", "stop": ["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '3994'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=N_ULh2gJbt2dWumbG8h6_rw_QD0TBQ3f1NYS.FsG3w0-1756165097-1.0.1.1-fGULs7H8u7wOLv7QAwRQdlWcZr2zj6dkLXQk5xPa.7SBOn9qj1nh6.VDONMzgxqO2telES4KZZPzeC2G4YFJXV5Q4hemTm4jcMsXA10XFYM;
|
||||
_cfuvid=_3IyqhyxR3x9q67TqmIC1eWhmeXZIMbk_6ChUHMngHM-1756165097262-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFLBbtQwEL3nK0Y+b6pkd5OmuRUQEgiJIgE9kCry2pPErGNbtsOCVvvv
|
||||
yMl2k9JW4uLDvHnj997MMQIggpMSCOuoZ72R8dtit/levOkG/u72k9x/zL7d8DuXfjl8vr++I6vA
|
||||
0LufyPwj64rp3kj0QqsJZhapxzA1vc7yNM/SJB2BXnOUgdYaH291vE7W2zgp4iQ/EzstGDpSwo8I
|
||||
AOA4vkGi4viblJCsHis9OkdbJOWlCYBYLUOFUOeE81R5sppBppVHNar+2umh7XwJH0DpA+zD4zuE
|
||||
RigqgSp3QHtVqUq9Hwu3Y6GEY6UAKuKYtliRErJKnZY/WGwGR4NBNUi5AKhS2tMQ0Ojt4YycLm6k
|
||||
bo3VO/cPlTRCCdfVFqnTKih3XhsyoqcI4GFMbXgSBDFW98bXXu9x/K7Ybqd5ZN7TjK7TM+i1p3LB
|
||||
yrPVC/Nqjp4K6Ra5E0ZZh3ymzkuiAxd6AUQL18/VvDR7ci5U+z/jZ4AxNB55bSxywZ46ntsshjN+
|
||||
re2S8iiYOLS/BMPaC7RhExwbOsjpwoj74zz2dSNUi9ZYMZ1ZY+oiucnybLNhOxKdor8AAAD//wMA
|
||||
Q4ZWRG8DAAA=
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 974eec2bee738486-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 25 Aug 2025 23:38:22 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '879'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '923'
|
||||
x-ratelimit-limit-project-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-project-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29999055'
|
||||
x-ratelimit-reset-project-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 1ms
|
||||
x-request-id:
|
||||
- req_5699dd0d905247f6b4b41d76411079e0
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Crew Manager. You
|
||||
are a seasoned manager with a knack for getting the best out of your team.\nYou
|
||||
are also known for your ability to delegate work to the right people, and to
|
||||
ask the right questions to get the best out of your team.\nEven though you don''t
|
||||
perform tasks by yourself, you have a lot of experience in the field, which
|
||||
allows you to properly evaluate the work of your team members.\nYour personal
|
||||
goal is: Manage the team to complete the task in the best way possible.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Delegate work to coworker\nTool Arguments:
|
||||
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
|
||||
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
|
||||
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
|
||||
''str''}}\nTool Description: Delegate a specific task to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the task you
|
||||
want them to do, and ALL necessary context to execute the task, they know nothing
|
||||
about the task, so share absolutely everything you know, don''t reference things
|
||||
but instead explain them.\nTool Name: Ask question to coworker\nTool Arguments:
|
||||
{''question'': {''description'': ''The question to ask'', ''type'': ''str''},
|
||||
''context'': {''description'': ''The context for the question'', ''type'': ''str''},
|
||||
''coworker'': {''description'': ''The role/name of the coworker to ask'', ''type'':
|
||||
''str''}}\nTool Description: Ask a specific question to one of the following
|
||||
coworkers: Scorer\nThe input to this tool should be the coworker, the question
|
||||
you have for them, and ALL necessary context to ask the question properly, they
|
||||
know nothing about the question, so share absolutely everything you know, don''t
|
||||
reference things but instead explain them.\n\nIMPORTANT: Use the following format
|
||||
in your response:\n\n```\nThought: you should always think about what to do\nAction:
|
||||
the action to take, only one name of [Delegate work to coworker, Ask question
|
||||
to coworker], just the name, exactly as it''s written.\nAction Input: the input
|
||||
to the action, just a simple JSON object, enclosed in curly braces, using \"
|
||||
to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
|
||||
all necessary information is gathered, return the following format:\n\n```\nThought:
|
||||
I now know the final answer\nFinal Answer: the final answer to the original
|
||||
input question\n```"}, {"role": "user", "content": "\nCurrent Task: Give me
|
||||
an integer score between 1-5 for the following title: ''The impact of AI in
|
||||
the future of work''\n\nThis is the expected criteria for your final answer:
|
||||
The score of the title.\nyou MUST return the actual complete content as the
|
||||
final answer, not a summary.\nEnsure your final answer contains only the content
|
||||
in the following format: {\n \"score\": int\n}\n\nEnsure the final output does
|
||||
not include any code block markers like ```json or ```python.\n\nBegin! This
|
||||
is VERY important to you, use the tools available and give your best Final Answer,
|
||||
your job depends on it!\n\nThought:"}, {"role": "assistant", "content": "Thought:
|
||||
I need to delegate the task of scoring the title to our Scorer, providing them
|
||||
with context about the title and the task at hand.\n\nAction: Delegate work
|
||||
to coworker\nAction Input: {\"task\": \"Score the title ''The impact of AI in
|
||||
the future of work'' on a scale from 1 to 5.\", \"context\": \"The task involves
|
||||
evaluating the given title and providing an integer score between 1 and 5 based
|
||||
on the title''s potential impact, relevance, and interest level.\", \"coworker\":
|
||||
\"Scorer\"}\nObservation: {\n \"score\": 5\n}"}], "model": "gpt-4o", "stop":
|
||||
["\nObservation:"]}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '3760'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=N_ULh2gJbt2dWumbG8h6_rw_QD0TBQ3f1NYS.FsG3w0-1756165097-1.0.1.1-fGULs7H8u7wOLv7QAwRQdlWcZr2zj6dkLXQk5xPa.7SBOn9qj1nh6.VDONMzgxqO2telES4KZZPzeC2G4YFJXV5Q4hemTm4jcMsXA10XFYM;
|
||||
_cfuvid=_3IyqhyxR3x9q67TqmIC1eWhmeXZIMbk_6ChUHMngHM-1756165097262-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-read-timeout:
|
||||
- '600.0'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFJBbtswELzrFQuerUB2LVXVLQnQoufmEKAKBJpaSawpLkFSdgvDfy9I
|
||||
u5bSpkAuBMjZGc7s7ikBYLJlFTAxcC9Go9LHcrf9dnh+mO7p+VN+3PdPGvcH9+XD7rHs2CowaPcD
|
||||
hf/DuhM0GoVekr7AwiL3GFTXH/NiXeTrIovASC2qQOuNT7eUbrLNNs3KNCuuxIGkQMcq+J4AAJzi
|
||||
GSzqFn+yCqJMfBnROd4jq25FAMySCi+MOyed59qz1QwK0h51dP000NQPvoKvoOkI+3D4AaGTmivg
|
||||
2h3R3tX6c7zex2sFp1oD1MwJslizCvJan5f6FrvJ8RBPT0otAK41eR7aE5O9XJHzLYui3ljaub+o
|
||||
rJNauqGxyB3p4Nt5Miyi5wTgJfZsetUGZiyNxjee9hi/K7P8osfmKc3oZn0FPXmuFqxNsXpDr2nR
|
||||
c6ncoutMcDFgO1PnEfGplbQAkkXqf928pX1JLnX/HvkZEAKNx7YxFlspXieeyyyGJf5f2a3L0TBz
|
||||
aA9SYOMl2jCJFjs+qct+MffLeRybTuoerbHysmSdabbFrusyzETJknPyGwAA//8DAAH6vKZtAwAA
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 974eed9bdf8aeb2a-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 25 Aug 2025 23:39:21 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '1085'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '1107'
|
||||
x-ratelimit-limit-project-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-project-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29999114'
|
||||
x-ratelimit-reset-project-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 1ms
|
||||
x-request-id:
|
||||
- req_1b900c7238124d04835cb54adf6d7aeb
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
@@ -1028,4 +1028,643 @@ interactions:
status:
code: 200
message: OK
- request:
|
||||
body: '{"input": ["Example: Toy Cars(Math Example): An example used to illustrate
|
||||
addition by counting toy cars."], "model": "text-embedding-3-small", "encoding_format":
|
||||
"base64"}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '172'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/embeddings
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAA1SayxKyyral+/spVqwudUIuSmauHncQkOQmYkVFBSAiKKJcEsgT590r8N9xqqpj
|
||||
A1ExYY45xjfzP//1119/d3lTFuPf//z196sexr//x3bslo3Z3//89T//9ddff/31n7/X/+/Mss3L
|
||||
261+V7/Tf2/W71u5/P3PX+x/H/m/J/3z199GidoTsqiXc6z1VOAqVIG3cv6hWV69X6PVmpZJIGeG
|
||||
Ljjqn7CVmAbrIAqdxU01A72LRSRyzxx7wtqhhPiJszzR7rJmNt5ZAV4PM8KnOEjy7j4JIrCyF4+P
|
||||
EpdSrqvADFFdWTjvOTefzfulAL6rIazT4aOuBzS0ULmMJr7QgoL5CpgMKm9HndgDjiPSFYIEMfZP
|
||||
OAwno1nUN8dCsdNSknFE6Yk8OxrIeL0gWms30WLcLzxwGsTjhCOywwY49lAcz0ci3dCN8tTrvkC+
|
||||
5pGHhNxXh+6GM6ivionlopLBaHwkAx33s4hzP0uatatXEdVt3xCj1M4OL/F+gQaN/eJYlUlEVVJp
|
||||
CClH19vRvqW0FDsJkFPwwfHBvOZrEEkQfa+WQqQy7ihHglGC/Wuf4Ys1CM7KsE0L2WhF0yGVS0py
|
||||
oEroeryzxKidrzq7SVKil+z02HH7tV9vPPiC9W0gotkVymd8/jDITjt7EpPb2M9BkDGiWdDJW+Is
|
||||
pJzRnGPw5c2aaNv1LH4Uaig7DhLO2lBqhL1eScBdFmaCB7PIeXEXsOiY8Qou67BW5/mcfmFvqDbJ
|
||||
09TuhcXxakj1x4P4h8sJrFZFGZC3mYTl9GFSLtYEHk7M5UrczzfL16vaTWgVpNcEjX2QC0Omzuh+
|
||||
WjC5F3dZ5bbrgbbl8uRKh2+/lPuqQlrj73CS6cAZ7ezFwyftTJK/+TBin/2ng69Zbogf3s903a0D
|
||||
C5P7fiDGAXMqvYmHCu3Fe0pUt4mb5dl/JQRiVE2cL0r5nKPQQq4KSmxl54Wu73bOEI+EGZvyWXMW
|
||||
cv5WQAsvO6+6IUQXHAw8jNUr56H3lKqcF/AJTCTmNvHZ86GOfhAlyLwtBlHGQ9svmewbKD+rA1ZM
|
||||
S+1ZculEeL9FDTkKbtmze8bjobFjPS85tJJKhtisEdTWdIKOKed8rE0THGK/JXinXMHqJHcXPtVz
|
||||
M4nc22z47f5D1aje+BxncSRkKA8BfYk3j3OBkXOckc8QGDXEGmfd83W6PDpRwK6G02SsKT/mc4fi
|
||||
9XQm4YteGi4x8hbSM+SwHH8UIDxHX0It4r4T/Mp6xL4+QYocU0nIkRFhv3Cur/ypJxPUCljTHftF
|
||||
j8OgYmyBr7MmhlIi7elfSGQN55791F0HC+rNGMdHqJLwiNsDGa0DDgjX9XSIcQUNP42wW7prtIzX
|
||||
pUZj+MxwnJ3nvH19MvunB1in7NOZ7eLpwiJa2ak9YM6huzWaIBeYV68wJTcSHrPji3VqvfFv/WbW
|
||||
/qZwDLUJyxaL6cpZ2R5MeHcierNbnElmJRddBlIQ+0HFZhkKeYbXa14Q3YJuxJKoz6DccZjcU4tX
|
||||
e8h5DLScviVYQrw6p/regzyJWWyH06eZX71vgdIXLZItVdgLdz5Q0PPpOcRPvAPd9LZCY9hm3pAV
|
||||
Vr4mluyiVy1FWFmCXUMfXyzB90BqchQi2q9OkZdQTg9v4hDWiLizHWio6mFEnE6vm7Ve9jYctITB
|
||||
UqsOdJLY6/CrP29ubZnysdYOqLspBpaWWQPC42sq8LRIKzHCdnTGvohCxN/3e6Luwj6axN2VhyZj
|
||||
KSTop6gRQhc8gQD6/TQv1wMdd2zjofr9bDzuzfTOEtmsAZ/xikku9HrOc1a4R2f5hLDq0m+z8rZl
|
||||
wcaQTWLVkU7XrmItxDbTizipj3tKvY8CJl/0cGCxhM5DcUzBpo/kLDSduhyrWvvVC/bG2VbpHB1r
|
||||
KKZdgc2+zcBy1toEnZa6JGf/a0fzxTgOMHwfj7gw/TpaKj4N4TRdF2+3uz4BldYoEZIdH2PpcBnB
|
||||
ylmhCJs2U4kVtiNY7tO0wmIZhymy2Lkn1G4qVE7XHivj0jf0W11ryKnakYSTLDcrhLOG/NNTxuab
|
||||
6en8eo0ZtM2IJyfCmqpQHDQfPeuITgBkx0i4T6SGE2FvOObeer4eq7pAJiAM8bj30aF9YVdg+fon
|
||||
nBifnbMeM2FA6+UjeztQv8B8QCiGfZsZHndUH9GsDG4Br91lmECzi3I2ctcSLHaKcMK9DWc9XUQI
|
||||
4joY8TWVGUoSW4pRxIoHrLo97vly5W2U7NgYy9bAgtlJ3hBSLrniY5p+6Fjy1IfnWb7hm/UsI5Z1
|
||||
HxY0dryHpaT9gtXOWg+qXHrf9EToJ+3TFGjrJ/gMnIvKCsZx/6sfrMv6GA2POe8gc8+xJ/ToRef7
|
||||
JOzhYmeI6MC5OFRvEgNp0fmyfV5QJ/XDMhB764M4PTdEozJoPGwSxpy+yavrV4LbDq3v9kO0cRmi
|
||||
+SIlEG79ceIPbaWu0hy7aML3h4f8pY7WJVBSpKSd+ace1z6L9+jRcTlWkuTcr8fqW4BPxnoeAiFW
|
||||
eYkNBnSqtZrcinuVs6mkl8gch4mcCkmM1sOuT8CvPiNdMxoKdtKEgAF9XJjSkFO9DwxoSMwF6yA/
|
||||
OXO2q1NEE3giR7+eAWXmHYSbXkz78VqAFbGchnbw+MayfH7Q9THbIvym1hf74Z2jvVXBBM5FffMA
|
||||
gY9oeb3GGdLnbSHYotiZI3ctIG3TowdbpY9ojuQZ3Uu3wZ7RQad7v482BBTM0wM0bS/kO6lC4/wI
|
||||
J053lXw9u76L4mZeiDTJFWCf/eMLbmqnYc04xPmcQlOESP5U2E1OaTQd1EpE013ksO6vej78Wf9L
|
||||
LxN73DfNvJ40C+0m+0KM8WCp9FO8KvTJ4hrH3k7NhU/59FF4IacJ+PXSr+37uId+Ib2wN0mVQ5dA
|
||||
meHmL4kDsmM+XgEzA7deEnLSIMlXfCID8KbrC5vp8aCucxDtkZIQQk66q0dUMOcQxU2kE3mX6f2a
|
||||
7YIOGkRssQ5qxVnkIYnhzgtMIjfzrqfjmV1/eoCTMnacKfKqEh3tbJoIyGwqWM8uBjtkvKev6T/6
|
||||
ueTFPRRGG06gwp+eCvqhhudK53BqVyWdH6slAn2pc4Ljx8lZVxekENajP61h6/QFp7z/3V+07HTO
|
||||
Oae88vCxX1QiebsHXW4U+3A5uDciLY+Yst3NTOFN/sQTD5mDuiKeshDfghuRXfoB3bGuCnAOjZe3
|
||||
yKcW0Is5a8DeGwaW4/ACptCfJzT35TgxHvNwqPpmniDN3nCi/vrs5938ShDU5hQHfoTAZL5sCEp0
|
||||
Bn/qk9f62oMwniRsqLLvrM1L6KDspCrxwlFXl2Pd8fB65ecJNaR06Bgxys9/egxkCoeMZ3aGWjyr
|
||||
WMn0Qz8GkfWEn8NgkKIO5Yg/1g8XbfWAPZsE6kwu3R7uT4qL9U6XoqV7LiEsq0Qj2gNe8w5HjQcv
|
||||
k21Nk3JI1VkezhCiRz17B7dJKAld0MJNX7f1MCjnFLkGwRnusd5PnNNxxlME3EOVCfY/lkNPFxDD
|
||||
JeODif3p86f6dkCYrPj3POVLZHGxqD7vd2zS4UNnhmXj3/2eGOWQOgM+PyDs8oeC/TG704X1Kh9d
|
||||
u/NAwlKj/fR8P/eQDPiANY+cosUpAx6daqMmsuDmPVuxgY0YwjfE8b9DvxToqIAhnkNyexqfqNWb
|
||||
twe7w10kzlFt+mGvzylyzmqLf/6TTufehxiHJ6JvfpAX9MVCQukOJC6LVz4+VjmEmWCwWI7rkX7q
|
||||
r+fD2dE3/ehZh86RXIFVqIOJ89dXvshDGYuc0nE49N5BPzIsHMSb8h2w9OpzSs92wIJJYlqiLZ0T
|
||||
rRdNHVBDrwAr4WT0lGCigOYMCU5M6+VQaY5b5FL3S4rWNnIuA2MIPp+1nARruDiCd3nwiFvKM7Fa
|
||||
5QOGrd/8+gnB/bhS6kXPWJT6mk5sWC4REdR1ho/jwSGYvC50mXGxguzoxthIkotDX5+sgsc0Z4ja
|
||||
M3a+SANhD6x6+UwjIzLOciwhD10tWYn3PcoqOxRyBk4yaYlSR01DCzj70Po+SmIt8xsswenMQseU
|
||||
EiIZhxBwXsDEsObedBJbxe65LQ9AE32TCSoLjNaSry34XHGOLciXlP+m7y+YHXPxpsylzgx2gQG9
|
||||
KX95fKsc1Tk/xB1arWHBMkFN8z2doQsco/pOQ+iV0XwKcQHceDZIsaRpPmeo78AOPSOihZMZsdLw
|
||||
LsSxrhQSq8dvNMxn/4sU7rMjpgZPzhBbLIMiQ5Wng0tdsFzVKvnTH0XusjTLHIU+hNpFw8qWD9bm
|
||||
tfuC/AwHIpnyiS4X6cRDrY7eHkkrU/35KRhH5xv2OJw6HUQWC9/wY5PS41tn2fIg+lonZapR0vWr
|
||||
j2PmV6/EbASejge1q1Gav2Jsxouk8naV/tEbfKJF4iyJZbFilEAZb9+njotzD3/9jWCg3sGkf64M
|
||||
3Po/NkxLV/uLurYwTL8nb594EMyJFTBoCAXRA7urGgmR+zVgcJjPxK1DOWfDk8qg/VBY2F/mJ1js
|
||||
sokRXMY91t8EOOSXh2I9OW18QY14CGwWjrWUENMCb2fq6nUPeD97YT29ThERzL0P5es1IrrGXfKB
|
||||
s/09Wtv+Q3TyYtXRfJwM8C69lrjI4KPldFmf6PDx99j9HquIIBaVENKvj8NJfvTUjzIePKNVwGrf
|
||||
HqLJu1Ye+HZGiD3IQHX88YT9UFrEM7rCIaEnJlCraYQlZASAO13EFtzLCWPsiprKioc2hubkZVga
|
||||
PwSQi74k8A0+Kw4YsOu/haDbEJfujciFDOga2zUDlXdHJkTpwxm9iDNQdzLKiXmtvDozM88DemY4
|
||||
D47zMVrIua6hVgdvbNuC3VNpLp7QScqQYFlf1MU/vWOYyOURSyHe0T9++1ZLPnY9popWvSHujydN
|
||||
l6dXq+wUsXugdFZO/HH5bv68nUFvyDY2sqesUqdQEtDinU30yhyiH3+Ck5/V3m3ze4vSmQm0BW3C
|
||||
Se18HXI4kALsyq9KNO79yid5PRZAa8IdNrLCUNcbT7+wyqsdyerIaJZP2ddQ1V9vfJQQpWtkoQme
|
||||
A+eFldqxGs6PMg2o3N7BkXyumxXspBh9322Lj/FSUSKvRx7SNjsSb+u3pOS/NXzqcYSzTS/4Oco6
|
||||
8dlMdyzLp5ZSabhr8BUavUfp+RbRJbBXSG6nAAcuPVJBHT8JdM6P0BNie6cun+obbqngQy47O6CU
|
||||
s6LNL3g3svGnSJiu1Yy6nkmIEV5uzmq2exeC880lxzRgwaqQxUMb78A6fXrqvOMfT+jn5ouoFus3
|
||||
0xBjC4ZCeyOmL8oqN0VcCM/qV/bY8WBGgp1NLTgewpYY+91dXdzM5uFTTyJvnxUtmPvKT+Gtej0J
|
||||
Tv1AXY23PKCTaofE6VuYr4l0H+DxeJ+wTG9CsyyRNcOimQpimZYOOHhwBvirvxN9LirZ/ATUg97B
|
||||
6kmt+kXpcAIds1awFyaZOh8znoFBxurYNv1Hs/nNGgb7GRG5Mu1/+6lkOsQkXdKEzp+CE5HKiQ45
|
||||
FZdXtKT6/gkvpVuRoJBzwFLTFNGWx0jwNJ50EDTHgp/rxcZadnPy9aLvGaA9w8vEcuk5n/f6nKEJ
|
||||
oxPxOLxXF7tKn3DLE5v+js4ffqKHloDPfblz6MZD4MYTiSrEl5zylrzCk2qF2JF13LOCoqXIXSiD
|
||||
tbAdIvIts1WcSLJseY/mw7FkC5RdTk/y428Ca7QKrMQbJjoj5upcIOsr7rwpx0fKvpq1TxMLmt5c
|
||||
bnniDL7ZLvgiUahiEmZn3NC20VbYP3cZPhIo9ewcBSLyfFHBWNYDlV2dUwqHxuUndusvS4FkBaLg
|
||||
bWG85eslP5wlgJE1kos2ImcRDMVH0qDcsbHl0VFgTBec5LH1HhtZmPV3w8C75wNP8MWHOpT8KsLa
|
||||
cndY4dK4mX781CCZSOxML/rVLgYGco96xJs/oJP+jlxov78r/j0v1EkuLSji2cWXN+Ns/qsL4aa/
|
||||
nuBS11kl9jr96hsby2w6lHoPCQnF2OHf+i+cHfiQqWKfWMnLBJxbWskhyHgdu8pSRDPDwgS6Gl9i
|
||||
hz7naGD2VYKC9DCSjXf04+udK0A+rNibk/ZL19tezOCWVyaIzm0+3/lHhhhJjIl2Rfd8mfBuhWNk
|
||||
GNhGnt8sPz+08ePpq8pBPy+RlcLamvqJscni9DgYW7Dx3UnYKTxYV38vgs1//3hzxN35xyr6X5Wb
|
||||
dlVSqbx4IBo8KZaIDZTYzs/f/+FLxrKvnaWrxT96RLb+4FD/bBmgUDmXYOErUX7jt3AM3sq0CC5o
|
||||
hsTIXRha02vL00ouLFhTxI1v4dOb8VQhBbsaFNHM4mLcaxG98VCCX2uasCPkOFoWrPpiz7EKlhlw
|
||||
byg+By3YeNpPr/P1W2YZnG/Kkajp8dIvfXpeYfx0dSJb8LvxcUcBjJxJExc/jhHnpu4fPkTSZFTo
|
||||
1n+qP7zYs4VB3fxrAok3j8T2+Gc/O1kE/6z/CdS6KsjrkQUXZM/ervJeDjHvWBP3PVN4IyOW6rgE
|
||||
igT8/B0Qt4ze+UrOQYbo875gt1aDiJ6uqwYzc/SJ42dJv3DKe4byh39jfeMz3xzEFvzpi7Hx4rmr
|
||||
QAqDz1nG8cbvu2PJlgCw/Ixd9TjmCz5/E9gkRY4di47NKs/2BNx4NYjB+cCZx9QRIXpUM3GE5pvP
|
||||
TpW68FJQltjJy+qHw47VwGKBw8QsfunM37K3QNYeWWLK+uT80XNGTiWc2Lun2mnNjoGW27bEbHYL
|
||||
pfo7f0JzGBMsVxIHJkE/WGDcNYeJt3eaunYVtECyizsSWM8rXZ+vyYZ1MgKiZPq1GfTXywNuJBie
|
||||
GCb7Zv4WzxS+Qb9i6QEFSg7ytwZPjX3jGJXPnDYfVMGXauX4UljUGVQylz+ei60HiOn67OsM6Y8n
|
||||
nRahCBrBDwoN7guFxSpBWjQvTrI/bH6U6Bu/GNuP5EPhRFOsMyJwRoZ/MEhNJED0XTioXWJE3n7j
|
||||
uxPqkeVs840O5m0qYZW8aN71VZrCpNwNWHv1t3zmtaL7M78y7O6qrqs/71F1MnIsA4drRhw8Wdiw
|
||||
eTQxqq+qhNfcUAzN5eXtTckGv/wGf/7ZUK+yM4dHr4R6rd1xfLjgP34J1QkBHvw9X00PRDBo/Jec
|
||||
Nv60PKZb+ONr3iG+YvAn3zxXYhK1usgNzytlB9+T5W39+B3NYSCXPz+95SEBDGB31cDm14hXavt/
|
||||
9/fjYQ6xrL12oGftTIFr95RJUFgzXbe8CDa/h4tw+kYLBHb568fYPiR9Q423Yvz0mRjjclLH8Giw
|
||||
4h06OfbGva6uS3S04SSVIvnND1fjfZyglb15TxyXqZmVWYaQIO1BvOypOZPxsQpoDmTzJ+3Jmfey
|
||||
uIdVUV+I3YZSL6wuWIFVVB72UWKCwXrOE4y4fUB+/JeOF1DBwM6WP/xmbEbRB8/YbyZaWEtPugrW
|
||||
4Oqk7vZ81NHCuakCux4m5Fr4E9h4TCZqcUzwredODn+6rAw8ZHGHXXQ2opW3JQsVIcdufj3Jf/NP
|
||||
6MZ+NvnIcynLK7c9rEQGEKzBWaVnKzJg9318SYFucrRMZyiBLX/jMkxE54+/Oy8Kh6WlO4L+3e5n
|
||||
+LD3DTm5YhPRw66Z4Jyr13/zcWXZx/BSDi4JQDT3a2RxE6JcfMX+Cyx0ucqPFk278Io9x6zV5aJo
|
||||
LcyNasa60Ksq9/pc/+2n1PdujsinfsyQm+GHhGHb97/8CG3eTLCX6aVK73wgodMq2eR4VGs6l/tD
|
||||
ByP9PWKzMk8OPVY1i3KztoiHzm9nFVFuAP2hm9h5emrPxtquBNSoDuTa7Nb+zzz38/FXcpK4LH/7
|
||||
Z0uDHUhDb1fcHyqtl30N7fY4kRMteGddItlG8YrP087NS4f3ccHA6Nycpy1vOlTvrxoU0NPHib3H
|
||||
Dvd+KTO63ybBe8WfZ06XcFZQKsL9tBMiDqzqm3EP1ys747u7Hh32bPsD9AE8bZ8nKu3Lo48cds9j
|
||||
GTRGP2Zg9GEpMRVRmz2Ixt88c+uXk7j5yYXarA+ipDjjYDt/+eWrri++5ObXa07L/Sf8M3+WCdc1
|
||||
VCWdBoN9FGzznXu/OEVkoE93Wf/MG/nN/wIDq8yfPLNQg1/B9v9IuPGxpZoLBd4iTSKRBftm/eWz
|
||||
H5/0XmLg8M1r10FW40dysp5MNKhkX0AtulymGXkDXV+9v0d//3YF/Ne//vrrf/12GLTdrXxtGwPG
|
||||
chn/47+3CvyH8B9Dm71ef7YhTENWlX//8+8dCH9/+q79jP977J7le/j7n784+Gevwd9jN2av//f4
|
||||
v7af+q9//R8AAAD//wMAY2Pk+OAgAAA=
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 974f08951aa4251c-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 25 Aug 2025 23:57:45 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=L28ZVHkSMOSpzu9MoQzdDgOYGHI9uQZnBcGj5Txg9e8-1756166265-1.0.1.1-9494dtPfVD9FwZG8RVJ6EnsQqAuzb2nnmeyRQLpPrX5IK2bR3KadJFV3K8HTAJa0lU9lIhCtu4fezMScnlMYOGAO0zGsIWMyozgrpePziWg;
|
||||
path=/; expires=Tue, 26-Aug-25 00:27:45 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=NaNzk_g04ozIFjR4zD0Joz9IJWntjKPTzifJUuegmAo-1756166265356-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-allow-origin:
|
||||
- '*'
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-model:
|
||||
- text-embedding-3-small
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '69'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
via:
|
||||
- envoy-router-74b9c68cd4-sld2z
|
||||
x-envoy-upstream-service-time:
|
||||
- '183'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '10000000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '9999977'
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_f779ba78781c452aa13e52ea0ca6a682
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"input": ["Example: Using Fingers(Math Example): An interactive way to
|
||||
teach addition using fingers as counting tools."], "model": "text-embedding-3-small",
|
||||
"encoding_format": "base64"}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '186'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=L28ZVHkSMOSpzu9MoQzdDgOYGHI9uQZnBcGj5Txg9e8-1756166265-1.0.1.1-9494dtPfVD9FwZG8RVJ6EnsQqAuzb2nnmeyRQLpPrX5IK2bR3KadJFV3K8HTAJa0lU9lIhCtu4fezMScnlMYOGAO0zGsIWMyozgrpePziWg;
|
||||
_cfuvid=NaNzk_g04ozIFjR4zD0Joz9IJWntjKPTzifJUuegmAo-1756166265356-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/embeddings
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAA1R6SROySrPm/vsVJ86WviGTVHJ2CIjIVCiD2NHRASoIDghIQdWN+9878P3idvfG
|
||||
BZZaVmXmM2T+57/++uvvtmhul+/f//z197Mevn//j+XZNf/mf//z1//8119//fXXf/5e/7+Vt1dx
|
||||
u17rd/Vb/nuzfl9v89///MX/95P/u+ifv/4+7q12rFMtdNkoCLF6MLQjPr2iJ6Mxv4lhdz6V440L
|
||||
2oZt+SpE3fqj4XQ4Dw2TQT+qqlXkwaHICpdOl3elDvaT4O0W6mhqonMNt7N3widmYTZrrpIDn14I
|
||||
Nl/tjvFW+M3AfT4fOLqah0KUXHGA+Lb7EizNpsGCioTIGXcpdvcacafkIteArnZBLg/njMj2eMxU
|
||||
E102eHPaqBEbvEeGsnaNiEMUpR+K88qEM6oK4im9485pWg5yn1g+PivjhdFMxzVszTQfBbGMDD4V
|
||||
LQob4e0HpZCqxcgSFEAhfWzsNaxA3zQ9jfB6SFesV3Bg8yF/TGgVqdy4mpSDyztM19SNnj1xJndW
|
||||
T56rxlQz4bIl4U3XItHjVzdwpU2B4xntitl/3I8Aj/YzynzUMkH00QCllSd4u2NOL3yuuq2Ij01E
|
||||
gmK4ugMeeVBtGj3IfjCdhueVoIbQFTlscPLdZSjSLDUuvY7gZtq70naURZXbPYzxHdOqmDxla0K6
|
||||
X1d4f0oYo8/yGECNhxU53PBYsMu9qEA+8wXeSVe/kTykWDDsXBlbe83oJa5IJ3XzGh18vEhGQe3L
|
||||
y0L1fkzIYUt3xmSbm04V/E4nW5qbPR/yjxxOh/OIz/t65XbAOxM6cDQj4eP9KAZSfWqV1ttbsOLM
|
||||
DPHokwzq4ZyIBDe7i9EqrW6pI+/E5PYsqfs9gNLC9BB47GaZatBKsiw4lzQeucxAxtQEwgAPs3BH
|
||||
4dwHxvy8oEzFZjyTrC620dTMxkMlTikTbMvIYCziPFjf088Igjr2k8zNJtKzWcDO+xZEwkZwU+Ao
|
||||
9oP1jCiiTJUrWPKDxPprZtS/8Dzyg3s7ytfvnomB8+ngvsq3IyGG3dPY83TALjt+q5d3LRjE0QgH
|
||||
0VNwGZHA5bvqSCH9ym6g1NMRUZtDHHDh7UK0c2wysRE3gborlIgYp5WEWL+9p8g+cN2oRFcWtSWy
|
||||
TXDSTiD4yeGIr42bCPtaT/E1bC/GLPZPCs6lL7GxT7Y9f7GyUN1bISa26MzuSOm5RdvLTsDuuPYj
|
||||
4UDfOfqEmU7Cct02gjQngUrm3gvmzT53pQRzF7TEH/bCYjTmQz5MMNhvMnJfxBVEyiqqbqdzQWw1
|
||||
q10qhscKGDLqQAYiGNParF7woTYjucFSly/iXau6D/cbrJppbwxKMx5hXcozTi71vaBzG2TKqjRD
|
||||
nDhRZYzqWgmgczcuDp7l0RAcoRL/3JfPd9Sd8iOvoejsxCMHpyoaz4QPVb9US5zwrcOE8RoqarHa
|
||||
fYLX00X9tHUEUNF82eAChrqg+6GlQLYPlyTyd11MnZxk6rNYZyQS2Bu1yhY8sNm3IC7kzKDmcNRV
|
||||
PrxsyfF2gWK2dbOD9kB9Epw4L5ptq7qoPvDNL18i4cFzOkhF8CKRPxiFpJ+QhQzgPXL4rt2GiR2C
|
||||
Xz3BeIkP0a+HEIRLs8Wm27uGFILGq1l/I8Sfnk0zb/tAhHy9ibD92osGXV+3Ovh8tCX67XKJunbm
|
||||
HeAgqDGelNkgkc1yUG0uGHmQ9gabPWbDyu/O40poh36OXm4HuLJkstv7h4Z/6gcO4hPdE7+TrsaS
|
||||
/xRiKz6SdK8ZjXD+Xi/wScmLaNvog4gKB12VOCseH46r9QKXA4eqq1KTKw+biGcylUGhvBdw1vXF
|
||||
BnPfxbCaBB2XvvxkLA5qXv3Vb7x3nIhJ4pFHJxAV7KbMQKxvigxkjTOIdbe+xszpSox09Gkw7hLf
|
||||
YGru2tDJeI8Dp06KaXplnPqr/6dCexVTMpUxWs+fBh9uOCikoFMq5Deci/HhJfZULtY58vtswKZ3
|
||||
ttB8+w4B1IXtkYNGL8VQaK0ITtW5AfWQiGg7mIMa4OEQrMqzW4zX54oCq/cO1o15ZPOkW+YvnvAp
|
||||
P62LobdDR71qx4gcTXtgkzaH8i9fyfZ23jdzWZx1lHWajDdq07JJ5fmHOjthSmJGcc/TkXTwHToL
|
||||
m1V+R/N8tCd1imslUISDUVCdBSma1VkOVD7iIwpkBHTnIjsQHcll3534HSVXzwAfsk0bMX015epL
|
||||
yg74HKzf/ZRqR1NVtl0QcP1qV4jPTeuAmudbrK97O6Kf81zDYa9DwBDhmtmKvAAJfqvjY81yg1/w
|
||||
VoGvtyM362qxmZ1dHnjqiSRTs9ro27GpVPl0eC/73zC+FIcQttyqwZuZyw12xVqMTlw4E89/5Mv/
|
||||
hxeIfcnjjZhI6Mvf2BEeiaRj414QYx6n1QjVNB1I2rmvaAreawVIKefklI+S218HJVPzW7YnJy1/
|
||||
N1/AN/MPHmEhvRbEOz881VoZmHiPy97gj5oRQzf6jBzfzGTCb//JlS9x9qoiNPuVf0OC3kzB9PLU
|
||||
aKr92oKy3VukNJ1VTzfJ5ajan7NLEg9LLsND4KDGTp3l+x5oer+xBRxPN/ioH8yCXd46QCoHBb5W
|
||||
D7sX/DzV1KG/XonV6iPqNrMyQJ9WJinXKyeaPK5VlHL9UHHqjXHx42Nqb3QpDlnLR6/L4+EgYZ3v
|
||||
sRtvymhOEtNUf/np1c+q//E5NOihS8pr3RiTxI6e+vs+9632/fB1Vjnit5qMj/096SUaWBOIq0sZ
|
||||
UNsvClasbzYc8fZInOmQ9QLajx2oaLPGtslFBbUO2AMze0XYfnzcRsKv24jaercbif58Fe1NCFtV
|
||||
PyUc2SRMdyUQ7w/VTY4dcTjTj6hDM09lzXEXyHJ2KsRVS1vVfb4f2Cn12uV5xaqBs0oL+52kGiRV
|
||||
+0DZZxstUCOjRq/7gyrK9M5bsl+Nu559u/Si+ofiFDw9fDLo9FImSO3rhyQvIUG0OPtH9FgbHbHr
|
||||
4lnQck5btKoqGzvgP40x6zwe8lNYE2s4e7149l85eGe0x87bK13aV/xLFdKqwJc60JkU2EMH6R5V
|
||||
I2ztsZ/OhypTfSHXsU7Jq2eqpAZQpZcNtvXp6xLF2Obq997wRAtjxSDP+6zBwvewv5JrNm0kO4aO
|
||||
BmXAxmeKhJNdpepVCyMSJzu9mNtDE0J54j3sGCw1GKfbFXrMz/04zfNg0Lm1MpimaBM0dzogGj2C
|
||||
FrlefiCmRj5LPqQmtGodYeNZXfu+ofJLtenhgf1bWhdztkeAkucTEe2sC81QZjRQ4bgTgs1pc42m
|
||||
+HBo1T/1odu4rui/NQu4Mrkt+XpCfSkfQtU03HlkkLls8i6Rrj6VIAqYvAmML47uo+quXuugzzpv
|
||||
iZ+dB+trohLv4ZwZ1ZkVw+lY9SQ3ndidVVXnock+JvaR8y7oTq1itTn5Pt58ah/RIfjkAGqzIdYl
|
||||
mxumYkkG+uiOxNsYlTF3WjOCJRhKIKz8sRmbwFBgt0MtNpqH1Qjc0/aQfDiqGJvOqhmWfIN6596J
|
||||
Ufotm9y0f4Bm0GmEXWsgiStuFNm7Nif2tch76rKGR+R6egV0o1jNjAL9guRSWPB4e+sJj1AF9Oxv
|
||||
8WEvCQ1DNohKernEJAxf+4LhwbJBUHgNx0jTGjEhzxo+1Qhjde7rXqoddlSPPaRYH7MWMR3FL7V7
|
||||
zUds5fePOwTbhw6Hb3wn+cIn521v8dB/4YmNLli7805ba+C4uB2RlJz7Hx6rzrhNxxW/WjU/fFWU
|
||||
d7bCVvCE5inN1wAN0agT7aZXxUTHdwvnujXG9b4u3c9FwYCyg13hWPnUjJzju64khuDgzfcNUaMY
|
||||
fg42HNfYUps4orf+7anHb+Zjs8xTRLhSeSHirglOWLtCI061Wr1+rk9ysDW/GLeO6YCoH2viRjD1
|
||||
TPT3Cmxke0PSvp8Qq06lBgt+j3yZi2w6ZD4HH0OQyd7ATsGW85afTnkn+1rR+0FYqRyyO0HHG/H9
|
||||
dmknSwF8rHwgpqd/3MlK9h4yqdIE0oABPXWFUvAbcEfxHTNj2joqKOtTEhFj3noF/cYZ//s83lc2
|
||||
59Jc6Af0vN087IeKW8wyqi6qJWwU4q/uW1c6GfkIVbT2sVkkdjR1VU6RxBfrgE92dSTJtyyAPOH7
|
||||
5by1gg89Yq5Xk6QTL9nvmz98bSjsBv/w7bN9lhnErr8it3Xlu7xhKMoffXxiFmGsv60eSFpVLgnQ
|
||||
[base64-encoded gzip embeddings response body omitted]
headers:
|
||||
CF-RAY:
|
||||
- 974f0896accb251c-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 25 Aug 2025 23:57:45 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-allow-origin:
|
||||
- '*'
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-model:
|
||||
- text-embedding-3-small
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '86'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
via:
|
||||
- envoy-router-74b9c68cd4-l7mvz
|
||||
x-envoy-upstream-service-time:
|
||||
- '120'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '10000000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '9999974'
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_3118ad98ecf3443a93df58599e50e305
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"input": ["Example: Using Fingers(Math Example): An interactive way to
|
||||
teach addition using fingers as counting tools."], "model": "text-embedding-3-small",
|
||||
"encoding_format": "base64"}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '186'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=L28ZVHkSMOSpzu9MoQzdDgOYGHI9uQZnBcGj5Txg9e8-1756166265-1.0.1.1-9494dtPfVD9FwZG8RVJ6EnsQqAuzb2nnmeyRQLpPrX5IK2bR3KadJFV3K8HTAJa0lU9lIhCtu4fezMScnlMYOGAO0zGsIWMyozgrpePziWg;
|
||||
_cfuvid=NaNzk_g04ozIFjR4zD0Joz9IJWntjKPTzifJUuegmAo-1756166265356-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.93.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.93.0
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/embeddings
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
[base64-encoded gzip embeddings response body omitted]
headers:
|
||||
CF-RAY:
|
||||
- 974f1417bd93ed39-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Tue, 26 Aug 2025 00:05:36 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-allow-origin:
|
||||
- '*'
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-model:
|
||||
- text-embedding-3-small
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '67'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
via:
|
||||
- envoy-router-6855f85975-l2j69
|
||||
x-envoy-upstream-service-time:
|
||||
- '85'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '10000000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '9999974'
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_415fc597e216418bba813b4bb2f0e130
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
@@ -222,13 +222,12 @@ def test_external_memory_custom_storage(custom_storage, crew_with_external_memor
|
||||
|
||||
test_value = "test value"
|
||||
test_metadata = {"source": "test"}
|
||||
test_agent = "test_agent"
|
||||
external_memory.save(value=test_value, metadata=test_metadata, agent=test_agent)
|
||||
external_memory.save(value=test_value, metadata=test_metadata)
|
||||
|
||||
results = external_memory.search("test")
|
||||
assert len(results) == 1
|
||||
assert results[0]["value"] == test_value
|
||||
assert results[0]["metadata"] == test_metadata | {"agent": test_agent}
|
||||
assert results[0]["metadata"] == test_metadata
|
||||
|
||||
external_memory.reset()
|
||||
results = external_memory.search("test")
|
||||
@@ -259,7 +258,6 @@ def test_external_memory_search_events(
|
||||
|
||||
assert len(events["MemoryQueryStartedEvent"]) == 1
|
||||
assert len(events["MemoryQueryCompletedEvent"]) == 1
|
||||
assert len(events["MemoryQueryFailedEvent"]) == 0
|
||||
|
||||
assert dict(events["MemoryQueryStartedEvent"][0]) == {
|
||||
"timestamp": ANY,
|
||||
@@ -267,6 +265,12 @@ def test_external_memory_search_events(
|
||||
"source_fingerprint": None,
|
||||
"source_type": "external_memory",
|
||||
"fingerprint_metadata": None,
|
||||
"task_id": None,
|
||||
"task_name": None,
|
||||
"from_task": None,
|
||||
"from_agent": None,
|
||||
"agent_role": None,
|
||||
"agent_id": None,
|
||||
"query": "test value",
|
||||
"limit": 3,
|
||||
"score_threshold": 0.35,
|
||||
@@ -278,6 +282,12 @@ def test_external_memory_search_events(
|
||||
"source_fingerprint": None,
|
||||
"source_type": "external_memory",
|
||||
"fingerprint_metadata": None,
|
||||
"task_id": None,
|
||||
"task_name": None,
|
||||
"from_task": None,
|
||||
"from_agent": None,
|
||||
"agent_role": None,
|
||||
"agent_id": None,
|
||||
"query": "test value",
|
||||
"results": [],
|
||||
"limit": 3,
|
||||
@@ -306,12 +316,10 @@ def test_external_memory_save_events(
|
||||
external_memory_with_mocked_config.save(
|
||||
value="saving value",
|
||||
metadata={"task": "test_task"},
|
||||
agent="test_agent",
|
||||
)
|
||||
|
||||
assert len(events["MemorySaveStartedEvent"]) == 1
|
||||
assert len(events["MemorySaveCompletedEvent"]) == 1
|
||||
assert len(events["MemorySaveFailedEvent"]) == 0
|
||||
|
||||
assert dict(events["MemorySaveStartedEvent"][0]) == {
|
||||
"timestamp": ANY,
|
||||
@@ -319,9 +327,14 @@ def test_external_memory_save_events(
|
||||
"source_fingerprint": None,
|
||||
"source_type": "external_memory",
|
||||
"fingerprint_metadata": None,
|
||||
"task_id": None,
|
||||
"task_name": None,
|
||||
"from_task": None,
|
||||
"from_agent": None,
|
||||
"agent_role": None,
|
||||
"agent_id": None,
|
||||
"value": "saving value",
|
||||
"metadata": {"task": "test_task"},
|
||||
"agent_role": "test_agent",
|
||||
}
|
||||
|
||||
assert dict(events["MemorySaveCompletedEvent"][0]) == {
|
||||
@@ -330,8 +343,13 @@ def test_external_memory_save_events(
|
||||
"source_fingerprint": None,
|
||||
"source_type": "external_memory",
|
||||
"fingerprint_metadata": None,
|
||||
"task_id": None,
|
||||
"task_name": None,
|
||||
"from_task": None,
|
||||
"from_agent": None,
|
||||
"agent_role": None,
|
||||
"agent_id": None,
|
||||
"value": "saving value",
|
||||
"metadata": {"task": "test_task", "agent": "test_agent"},
|
||||
"agent_role": "test_agent",
|
||||
"metadata": {"task": "test_task"},
|
||||
"save_time_ms": ANY,
|
||||
}
|
||||
|
||||
@@ -51,9 +51,14 @@ def test_long_term_memory_save_events(long_term_memory):
|
||||
"source_fingerprint": None,
|
||||
"source_type": "long_term_memory",
|
||||
"fingerprint_metadata": None,
|
||||
"task_id": None,
|
||||
"task_name": None,
|
||||
"from_task": None,
|
||||
"from_agent": None,
|
||||
"agent_role": "test_agent",
|
||||
"agent_id": None,
|
||||
"value": "test_task",
|
||||
"metadata": {"task": "test_task", "quality": 0.5},
|
||||
"agent_role": "test_agent",
|
||||
}
|
||||
assert dict(events["MemorySaveCompletedEvent"][0]) == {
|
||||
"timestamp": ANY,
|
||||
@@ -61,6 +66,12 @@ def test_long_term_memory_save_events(long_term_memory):
|
||||
"source_fingerprint": None,
|
||||
"source_type": "long_term_memory",
|
||||
"fingerprint_metadata": None,
|
||||
"task_id": None,
|
||||
"task_name": None,
|
||||
"from_task": None,
|
||||
"from_agent": None,
|
||||
"agent_role": "test_agent",
|
||||
"agent_id": None,
|
||||
"value": "test_task",
|
||||
"metadata": {
|
||||
"task": "test_task",
|
||||
@@ -68,7 +79,6 @@ def test_long_term_memory_save_events(long_term_memory):
|
||||
"agent": "test_agent",
|
||||
"expected_output": "test_output",
|
||||
},
|
||||
"agent_role": "test_agent",
|
||||
"save_time_ms": ANY,
|
||||
}
|
||||
|
||||
@@ -100,6 +110,12 @@ def test_long_term_memory_search_events(long_term_memory):
|
||||
"source_fingerprint": None,
|
||||
"source_type": "long_term_memory",
|
||||
"fingerprint_metadata": None,
|
||||
"task_id": None,
|
||||
"task_name": None,
|
||||
"from_task": None,
|
||||
"from_agent": None,
|
||||
"agent_role": None,
|
||||
"agent_id": None,
|
||||
"query": "test query",
|
||||
"limit": 5,
|
||||
"score_threshold": None,
|
||||
@@ -111,6 +127,12 @@ def test_long_term_memory_search_events(long_term_memory):
|
||||
"source_fingerprint": None,
|
||||
"source_type": "long_term_memory",
|
||||
"fingerprint_metadata": None,
|
||||
"task_id": None,
|
||||
"task_name": None,
|
||||
"from_task": None,
|
||||
"from_agent": None,
|
||||
"agent_role": None,
|
||||
"agent_id": None,
|
||||
"query": "test query",
|
||||
"results": None,
|
||||
"limit": 5,
|
||||
|
||||
@@ -57,7 +57,6 @@ def test_short_term_memory_search_events(short_term_memory):
|
||||
|
||||
assert len(events["MemoryQueryStartedEvent"]) == 1
|
||||
assert len(events["MemoryQueryCompletedEvent"]) == 1
|
||||
assert len(events["MemoryQueryFailedEvent"]) == 0
|
||||
|
||||
assert dict(events["MemoryQueryStartedEvent"][0]) == {
|
||||
"timestamp": ANY,
|
||||
@@ -65,6 +64,12 @@ def test_short_term_memory_search_events(short_term_memory):
|
||||
"source_fingerprint": None,
|
||||
"source_type": "short_term_memory",
|
||||
"fingerprint_metadata": None,
|
||||
"task_id": None,
|
||||
"task_name": None,
|
||||
"from_task": None,
|
||||
"from_agent": None,
|
||||
"agent_role": None,
|
||||
"agent_id": None,
|
||||
"query": "test value",
|
||||
"limit": 3,
|
||||
"score_threshold": 0.35,
|
||||
@@ -76,6 +81,12 @@ def test_short_term_memory_search_events(short_term_memory):
|
||||
"source_fingerprint": None,
|
||||
"source_type": "short_term_memory",
|
||||
"fingerprint_metadata": None,
|
||||
"task_id": None,
|
||||
"task_name": None,
|
||||
"from_task": None,
|
||||
"from_agent": None,
|
||||
"agent_role": None,
|
||||
"agent_id": None,
|
||||
"query": "test value",
|
||||
"results": [],
|
||||
"limit": 3,
|
||||
@@ -99,12 +110,10 @@ def test_short_term_memory_save_events(short_term_memory):
|
||||
short_term_memory.save(
|
||||
value="test value",
|
||||
metadata={"task": "test_task"},
|
||||
agent="test_agent",
|
||||
)
|
||||
|
||||
assert len(events["MemorySaveStartedEvent"]) == 1
|
||||
assert len(events["MemorySaveCompletedEvent"]) == 1
|
||||
assert len(events["MemorySaveFailedEvent"]) == 0
|
||||
|
||||
assert dict(events["MemorySaveStartedEvent"][0]) == {
|
||||
"timestamp": ANY,
|
||||
@@ -112,9 +121,14 @@ def test_short_term_memory_save_events(short_term_memory):
|
||||
"source_fingerprint": None,
|
||||
"source_type": "short_term_memory",
|
||||
"fingerprint_metadata": None,
|
||||
"task_id": None,
|
||||
"task_name": None,
|
||||
"from_task": None,
|
||||
"from_agent": None,
|
||||
"agent_role": None,
|
||||
"agent_id": None,
|
||||
"value": "test value",
|
||||
"metadata": {"task": "test_task"},
|
||||
"agent_role": "test_agent",
|
||||
}
|
||||
|
||||
assert dict(events["MemorySaveCompletedEvent"][0]) == {
|
||||
@@ -123,9 +137,14 @@ def test_short_term_memory_save_events(short_term_memory):
|
||||
"source_fingerprint": None,
|
||||
"source_type": "short_term_memory",
|
||||
"fingerprint_metadata": None,
|
||||
"task_id": None,
|
||||
"task_name": None,
|
||||
"from_task": None,
|
||||
"from_agent": None,
|
||||
"agent_role": None,
|
||||
"agent_id": None,
|
||||
"value": "test value",
|
||||
"metadata": {"task": "test_task", "agent": "test_agent"},
|
||||
"agent_role": "test_agent",
|
||||
"metadata": {"task": "test_task"},
|
||||
"save_time_ms": ANY,
|
||||
}
|
||||
|
||||
|
||||
@@ -366,7 +366,7 @@ def test_hierarchical_process(researcher, writer):
|
||||
|
||||
assert (
|
||||
result.raw
|
||||
== "**1. The Rise of Autonomous AI Agents in Daily Life** \nAs artificial intelligence technology progresses, the integration of autonomous AI agents into everyday life becomes increasingly prominent. These agents, capable of making decisions without human intervention, are reshaping industries from healthcare to finance. Exploring case studies where autonomous AI has successfully decreased operational costs or improved efficiency can reveal not only the benefits but also the ethical implications of delegating decision-making to machines. This topic offers an exciting opportunity to dive into the AI landscape, showcasing current developments such as AI assistants and autonomous vehicles.\n\n**2. Ethical Implications of Generative AI in Creative Industries** \nThe surge of generative AI tools in creative fields, such as art, music, and writing, has sparked a heated debate about authorship and originality. This article could investigate how these tools are being used by artists and creators, examining both the potential for innovation and the risk of devaluing traditional art forms. Highlighting perspectives from creators, legal experts, and ethicists could provide a comprehensive overview of the challenges faced, including copyright concerns and the emotional impact on human artists. This discussion is vital as the creative landscape evolves alongside technological advancements, making it ripe for exploration.\n\n**3. AI in Climate Change Mitigation: Current Solutions and Future Potential** \nAs the world grapples with climate change, AI technology is increasingly being harnessed to develop innovative solutions for sustainability. From predictive analytics that optimize energy consumption to machine learning algorithms that improve carbon capture methods, AI's potential in environmental science is vast. This topic invites an exploration of existing AI applications in climate initiatives, with a focus on groundbreaking research and initiatives aimed at reducing humanity's carbon footprint. Highlighting successful projects and technology partnerships can illustrate the positive impact AI can have on global climate efforts, inspiring further exploration and investment in this area.\n\n**4. The Future of Work: How AI is Reshaping Employment Landscapes** \nThe discussions around AI's impact on the workforce are both urgent and complex, as advances in automation and machine learning continue to transform the job market. This article could delve into the current trends of AI-driven job displacement alongside opportunities for upskilling and the creation of new job roles. By examining case studies of companies that integrate AI effectively and the resulting workforce adaptations, readers can gain valuable insights into preparing for a future where humans and AI collaborate. This exploration highlights the importance of policies that promote workforce resilience in the face of change.\n\n**5. Decentralized AI: Exploring the Role of Blockchain in AI Development** \nAs blockchain technology sweeps through various sectors, its application in AI development presents a fascinating topic worth examining. Decentralized AI could address issues of data privacy, security, and democratization in AI models by allowing users to retain ownership of data while benefiting from AI's capabilities. This article could analyze how decentralized networks are disrupting traditional AI development models, featuring innovative projects that harness the synergy between blockchain and AI. 
Highlighting potential pitfalls and the future landscape of decentralized AI could stimulate discussion among technologists, entrepreneurs, and policymakers alike.\n\nThese topics not only reflect current trends but also probe deeper into ethical and practical considerations, making them timely and relevant for contemporary audiences."
|
||||
== "1. **The Rise of Autonomous AI Agents in Daily Life** \n As artificial intelligence technology progresses, the integration of autonomous AI agents into everyday life becomes increasingly prominent. These agents, capable of making decisions without human intervention, are reshaping industries from healthcare to finance. Exploring case studies where autonomous AI has successfully decreased operational costs or improved efficiency can reveal not only the benefits but also the ethical implications of delegating decision-making to machines. This topic offers an exciting opportunity to dive into the AI landscape, showcasing current developments such as AI assistants and autonomous vehicles.\n\n2. **Ethical Implications of Generative AI in Creative Industries** \n The surge of generative AI tools in creative fields, such as art, music, and writing, has sparked a heated debate about authorship and originality. This article could investigate how these tools are being used by artists and creators, examining both the potential for innovation and the risk of devaluing traditional art forms. Highlighting perspectives from creators, legal experts, and ethicists could provide a comprehensive overview of the challenges faced, including copyright concerns and the emotional impact on human artists. This discussion is vital as the creative landscape evolves alongside technological advancements, making it ripe for exploration.\n\n3. **AI in Climate Change Mitigation: Current Solutions and Future Potential** \n As the world grapples with climate change, AI technology is increasingly being harnessed to develop innovative solutions for sustainability. From predictive analytics that optimize energy consumption to machine learning algorithms that improve carbon capture methods, AI's potential in environmental science is vast. This topic invites an exploration of existing AI applications in climate initiatives, with a focus on groundbreaking research and initiatives aimed at reducing humanity's carbon footprint. Highlighting successful projects and technology partnerships can illustrate the positive impact AI can have on global climate efforts, inspiring further exploration and investment in this area.\n\n4. **The Future of Work: How AI is Reshaping Employment Landscapes** \n The discussions around AI's impact on the workforce are both urgent and complex, as advances in automation and machine learning continue to transform the job market. This article could delve into the current trends of AI-driven job displacement alongside opportunities for upskilling and the creation of new job roles. By examining case studies of companies that integrate AI effectively and the resulting workforce adaptations, readers can gain valuable insights into preparing for a future where humans and AI collaborate. This exploration highlights the importance of policies that promote workforce resilience in the face of change.\n\n5. **Decentralized AI: Exploring the Role of Blockchain in AI Development** \n As blockchain technology sweeps through various sectors, its application in AI development presents a fascinating topic worth examining. Decentralized AI could address issues of data privacy, security, and democratization in AI models by allowing users to retain ownership of data while benefiting from AI's capabilities. This article could analyze how decentralized networks are disrupting traditional AI development models, featuring innovative projects that harness the synergy between blockchain and AI. 
Highlighting potential pitfalls and the future landscape of decentralized AI could stimulate discussion among technologists, entrepreneurs, and policymakers alike."
|
||||
)
|
||||
|
||||
|
||||
@@ -624,12 +624,12 @@ def test_crew_with_delegating_agents_should_not_override_task_tools(ceo, writer)
|
||||
_, kwargs = mock_execute_sync.call_args
|
||||
tools = kwargs["tools"]
|
||||
|
||||
assert any(
|
||||
isinstance(tool, TestTool) for tool in tools
|
||||
), "TestTool should be present"
|
||||
assert any(
|
||||
"delegate" in tool.name.lower() for tool in tools
|
||||
), "Delegation tool should be present"
|
||||
assert any(isinstance(tool, TestTool) for tool in tools), (
|
||||
"TestTool should be present"
|
||||
)
|
||||
assert any("delegate" in tool.name.lower() for tool in tools), (
|
||||
"Delegation tool should be present"
|
||||
)
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
@@ -688,12 +688,12 @@ def test_crew_with_delegating_agents_should_not_override_agent_tools(ceo, writer
|
||||
_, kwargs = mock_execute_sync.call_args
|
||||
tools = kwargs["tools"]
|
||||
|
||||
assert any(
|
||||
isinstance(tool, TestTool) for tool in new_ceo.tools
|
||||
), "TestTool should be present"
|
||||
assert any(
|
||||
"delegate" in tool.name.lower() for tool in tools
|
||||
), "Delegation tool should be present"
|
||||
assert any(isinstance(tool, TestTool) for tool in new_ceo.tools), (
|
||||
"TestTool should be present"
|
||||
)
|
||||
assert any("delegate" in tool.name.lower() for tool in tools), (
|
||||
"Delegation tool should be present"
|
||||
)
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
@@ -817,17 +817,17 @@ def test_task_tools_override_agent_tools_with_allow_delegation(researcher, write
|
||||
used_tools = kwargs["tools"]
|
||||
|
||||
# Confirm AnotherTestTool is present but TestTool is not
|
||||
assert any(
|
||||
isinstance(tool, AnotherTestTool) for tool in used_tools
|
||||
), "AnotherTestTool should be present"
|
||||
assert not any(
|
||||
isinstance(tool, TestTool) for tool in used_tools
|
||||
), "TestTool should not be present among used tools"
|
||||
assert any(isinstance(tool, AnotherTestTool) for tool in used_tools), (
|
||||
"AnotherTestTool should be present"
|
||||
)
|
||||
assert not any(isinstance(tool, TestTool) for tool in used_tools), (
|
||||
"TestTool should not be present among used tools"
|
||||
)
|
||||
|
||||
# Confirm delegation tool(s) are present
|
||||
assert any(
|
||||
"delegate" in tool.name.lower() for tool in used_tools
|
||||
), "Delegation tool should be present"
|
||||
assert any("delegate" in tool.name.lower() for tool in used_tools), (
|
||||
"Delegation tool should be present"
|
||||
)
|
||||
|
||||
# Finally, make sure the agent's original tools remain unchanged
|
||||
assert len(researcher_with_delegation.tools) == 1
|
||||
@@ -931,9 +931,9 @@ def test_cache_hitting_between_agents(researcher, writer, ceo):
|
||||
tool="multiplier", input={"first_number": 2, "second_number": 6}
|
||||
)
|
||||
assert cache_calls[0] == expected_call, f"First call mismatch: {cache_calls[0]}"
|
||||
assert (
|
||||
cache_calls[1] == expected_call
|
||||
), f"Second call mismatch: {cache_calls[1]}"
|
||||
assert cache_calls[1] == expected_call, (
|
||||
f"Second call mismatch: {cache_calls[1]}"
|
||||
)
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
@@ -1676,9 +1676,9 @@ def test_code_execution_flag_adds_code_tool_upon_kickoff():
|
||||
|
||||
# Verify that exactly one tool was used and it was a CodeInterpreterTool
|
||||
assert len(used_tools) == 1, "Should have exactly one tool"
|
||||
assert isinstance(
|
||||
used_tools[0], CodeInterpreterTool
|
||||
), "Tool should be CodeInterpreterTool"
|
||||
assert isinstance(used_tools[0], CodeInterpreterTool), (
|
||||
"Tool should be CodeInterpreterTool"
|
||||
)
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
@@ -1762,10 +1762,10 @@ def test_agent_usage_metrics_are_captured_for_hierarchical_process():
|
||||
assert result.raw == "Howdy!"
|
||||
|
||||
assert result.token_usage == UsageMetrics(
|
||||
total_tokens=1673,
|
||||
prompt_tokens=1562,
|
||||
completion_tokens=111,
|
||||
successful_requests=3,
|
||||
total_tokens=2390,
|
||||
prompt_tokens=2264,
|
||||
completion_tokens=126,
|
||||
successful_requests=4,
|
||||
cached_prompt_tokens=0,
|
||||
)
|
||||
|
||||
@@ -3817,9 +3817,9 @@ def test_fetch_inputs():
|
||||
expected_placeholders = {"role_detail", "topic", "field"}
|
||||
actual_placeholders = crew.fetch_inputs()
|
||||
|
||||
assert (
|
||||
actual_placeholders == expected_placeholders
|
||||
), f"Expected {expected_placeholders}, but got {actual_placeholders}"
|
||||
assert actual_placeholders == expected_placeholders, (
|
||||
f"Expected {expected_placeholders}, but got {actual_placeholders}"
|
||||
)
|
||||
|
||||
|
||||
def test_task_tools_preserve_code_execution_tools():
|
||||
@@ -3894,20 +3894,20 @@ def test_task_tools_preserve_code_execution_tools():
|
||||
used_tools = kwargs["tools"]
|
||||
|
||||
# Verify all expected tools are present
|
||||
assert any(
|
||||
isinstance(tool, TestTool) for tool in used_tools
|
||||
), "Task's TestTool should be present"
|
||||
assert any(
|
||||
isinstance(tool, CodeInterpreterTool) for tool in used_tools
|
||||
), "CodeInterpreterTool should be present"
|
||||
assert any(
|
||||
"delegate" in tool.name.lower() for tool in used_tools
|
||||
), "Delegation tool should be present"
|
||||
assert any(isinstance(tool, TestTool) for tool in used_tools), (
|
||||
"Task's TestTool should be present"
|
||||
)
|
||||
assert any(isinstance(tool, CodeInterpreterTool) for tool in used_tools), (
|
||||
"CodeInterpreterTool should be present"
|
||||
)
|
||||
assert any("delegate" in tool.name.lower() for tool in used_tools), (
|
||||
"Delegation tool should be present"
|
||||
)
|
||||
|
||||
# Verify the total number of tools (TestTool + CodeInterpreter + 2 delegation tools)
|
||||
assert (
|
||||
len(used_tools) == 4
|
||||
), "Should have TestTool, CodeInterpreter, and 2 delegation tools"
|
||||
assert len(used_tools) == 4, (
|
||||
"Should have TestTool, CodeInterpreter, and 2 delegation tools"
|
||||
)
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
@@ -3951,9 +3951,9 @@ def test_multimodal_flag_adds_multimodal_tools():
|
||||
used_tools = kwargs["tools"]
|
||||
|
||||
# Check that the multimodal tool was added
|
||||
assert any(
|
||||
isinstance(tool, AddImageTool) for tool in used_tools
|
||||
), "AddImageTool should be present when agent is multimodal"
|
||||
assert any(isinstance(tool, AddImageTool) for tool in used_tools), (
|
||||
"AddImageTool should be present when agent is multimodal"
|
||||
)
|
||||
|
||||
# Verify we have exactly one tool (just the AddImageTool)
|
||||
assert len(used_tools) == 1, "Should only have the AddImageTool"
|
||||
@@ -4217,9 +4217,9 @@ def test_crew_guardrail_feedback_in_context():
|
||||
assert len(execution_contexts) > 1, "Task should have been executed multiple times"
|
||||
|
||||
# Verify that the second execution included the guardrail feedback
|
||||
assert (
|
||||
"Output must contain the keyword 'IMPORTANT'" in execution_contexts[1]
|
||||
), "Guardrail feedback should be included in retry context"
|
||||
assert "Output must contain the keyword 'IMPORTANT'" in execution_contexts[1], (
|
||||
"Guardrail feedback should be included in retry context"
|
||||
)
|
||||
|
||||
# Verify final output meets guardrail requirements
|
||||
assert "IMPORTANT" in result.raw, "Final output should contain required keyword"
|
||||
@@ -4435,46 +4435,46 @@ def test_crew_copy_with_memory():
|
||||
try:
|
||||
crew_copy = crew.copy()
|
||||
|
||||
assert hasattr(
|
||||
crew_copy, "_short_term_memory"
|
||||
), "Copied crew should have _short_term_memory"
|
||||
assert (
|
||||
crew_copy._short_term_memory is not None
|
||||
), "Copied _short_term_memory should not be None"
|
||||
assert (
|
||||
id(crew_copy._short_term_memory) != original_short_term_id
|
||||
), "Copied _short_term_memory should be a new object"
|
||||
assert hasattr(crew_copy, "_short_term_memory"), (
|
||||
"Copied crew should have _short_term_memory"
|
||||
)
|
||||
assert crew_copy._short_term_memory is not None, (
|
||||
"Copied _short_term_memory should not be None"
|
||||
)
|
||||
assert id(crew_copy._short_term_memory) != original_short_term_id, (
|
||||
"Copied _short_term_memory should be a new object"
|
||||
)
|
||||
|
||||
assert hasattr(
|
||||
crew_copy, "_long_term_memory"
|
||||
), "Copied crew should have _long_term_memory"
|
||||
assert (
|
||||
crew_copy._long_term_memory is not None
|
||||
), "Copied _long_term_memory should not be None"
|
||||
assert (
|
||||
id(crew_copy._long_term_memory) != original_long_term_id
|
||||
), "Copied _long_term_memory should be a new object"
|
||||
assert hasattr(crew_copy, "_long_term_memory"), (
|
||||
"Copied crew should have _long_term_memory"
|
||||
)
|
||||
assert crew_copy._long_term_memory is not None, (
|
||||
"Copied _long_term_memory should not be None"
|
||||
)
|
||||
assert id(crew_copy._long_term_memory) != original_long_term_id, (
|
||||
"Copied _long_term_memory should be a new object"
|
||||
)
|
||||
|
||||
assert hasattr(
|
||||
crew_copy, "_entity_memory"
|
||||
), "Copied crew should have _entity_memory"
|
||||
assert (
|
||||
crew_copy._entity_memory is not None
|
||||
), "Copied _entity_memory should not be None"
|
||||
assert (
|
||||
id(crew_copy._entity_memory) != original_entity_id
|
||||
), "Copied _entity_memory should be a new object"
|
||||
assert hasattr(crew_copy, "_entity_memory"), (
|
||||
"Copied crew should have _entity_memory"
|
||||
)
|
||||
assert crew_copy._entity_memory is not None, (
|
||||
"Copied _entity_memory should not be None"
|
||||
)
|
||||
assert id(crew_copy._entity_memory) != original_entity_id, (
|
||||
"Copied _entity_memory should be a new object"
|
||||
)
|
||||
|
||||
if original_external_id:
|
||||
assert hasattr(
|
||||
crew_copy, "_external_memory"
|
||||
), "Copied crew should have _external_memory"
|
||||
assert (
|
||||
crew_copy._external_memory is not None
|
||||
), "Copied _external_memory should not be None"
|
||||
assert (
|
||||
id(crew_copy._external_memory) != original_external_id
|
||||
), "Copied _external_memory should be a new object"
|
||||
assert hasattr(crew_copy, "_external_memory"), (
|
||||
"Copied crew should have _external_memory"
|
||||
)
|
||||
assert crew_copy._external_memory is not None, (
|
||||
"Copied _external_memory should not be None"
|
||||
)
|
||||
assert id(crew_copy._external_memory) != original_external_id, (
|
||||
"Copied _external_memory should be a new object"
|
||||
)
|
||||
else:
|
||||
assert (
|
||||
not hasattr(crew_copy, "_external_memory")
|
||||
@@ -4754,5 +4754,4 @@ def test_ensure_exchanged_messages_are_propagated_to_external_memory():
|
||||
external_memory_save.assert_called_once_with(
|
||||
value=ANY,
|
||||
metadata={"description": ANY, "messages": expected_messages},
|
||||
agent=ANY,
|
||||
)
|
||||
|
||||
@@ -159,7 +159,7 @@ def test_task_callback_returns_task_output():
|
||||
"json_dict": None,
|
||||
"agent": researcher.role,
|
||||
"summary": "Give me a list of 5 interesting ideas to explore...",
|
||||
"name": None,
|
||||
"name": task.name or task.description,
|
||||
"expected_output": "Bullet point list of 5 interesting ideas.",
|
||||
"output_format": OutputFormat.RAW,
|
||||
}
|
||||
@@ -340,7 +340,7 @@ def test_output_pydantic_hierarchical():
|
||||
)
|
||||
result = crew.kickoff()
|
||||
assert isinstance(result.pydantic, ScoreOutput)
|
||||
assert result.to_dict() == {"score": 4}
|
||||
assert result.to_dict() == {"score": 5}
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
@@ -401,8 +401,8 @@ def test_output_json_hierarchical():
|
||||
manager_llm="gpt-4o",
|
||||
)
|
||||
result = crew.kickoff()
|
||||
assert result.json == '{"score": 4}'
|
||||
assert result.to_dict() == {"score": 4}
|
||||
assert result.json == '{"score": 5}'
|
||||
assert result.to_dict() == {"score": 5}
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
@@ -560,8 +560,8 @@ def test_output_json_dict_hierarchical():
|
||||
manager_llm="gpt-4o",
|
||||
)
|
||||
result = crew.kickoff()
|
||||
assert {"score": 4} == result.json_dict
|
||||
assert result.to_dict() == {"score": 4}
|
||||
assert {"score": 5} == result.json_dict
|
||||
assert result.to_dict() == {"score": 5}
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
@@ -596,9 +596,9 @@ def test_output_pydantic_to_another_task():
|
||||
crew = Crew(agents=[scorer], tasks=[task1, task2], verbose=True)
|
||||
result = crew.kickoff()
|
||||
pydantic_result = result.pydantic
|
||||
assert isinstance(
|
||||
pydantic_result, ScoreOutput
|
||||
), "Expected pydantic result to be of type ScoreOutput"
|
||||
assert isinstance(pydantic_result, ScoreOutput), (
|
||||
"Expected pydantic result to be of type ScoreOutput"
|
||||
)
|
||||
assert pydantic_result.score == 5
|
||||
|
||||
|
||||
@@ -1081,9 +1081,9 @@ def test_key():
|
||||
assert task.key == hash, "The key should be the hash of the description."
|
||||
|
||||
task.interpolate_inputs_and_add_conversation_history(inputs={"topic": "AI"})
|
||||
assert (
|
||||
task.key == hash
|
||||
), "The key should be the hash of the non-interpolated description."
|
||||
assert task.key == hash, (
|
||||
"The key should be the hash of the non-interpolated description."
|
||||
)
|
||||
|
||||
|
||||
def test_output_file_validation():
|
||||
|
||||
@@ -476,7 +476,7 @@ def test_tool_selection_error_event_direct():
|
||||
def event_handler(source, event):
|
||||
received_events.append(event)
|
||||
|
||||
with pytest.raises(Exception) as exc_info:
|
||||
with pytest.raises(Exception):
|
||||
tool_usage._select_tool("Non Existent Tool")
|
||||
assert len(received_events) == 1
|
||||
event = received_events[0]
|
||||
@@ -490,7 +490,7 @@ def test_tool_selection_error_event_direct():
|
||||
assert "don't exist" in event.error
|
||||
|
||||
received_events.clear()
|
||||
with pytest.raises(Exception) as exc_info:
|
||||
with pytest.raises(Exception):
|
||||
tool_usage._select_tool("")
|
||||
|
||||
assert len(received_events) == 1
|
||||
@@ -563,7 +563,7 @@ def test_tool_validate_input_error_event():
|
||||
|
||||
# Test invalid input
|
||||
invalid_input = "invalid json {[}"
|
||||
with pytest.raises(Exception) as exc_info:
|
||||
with pytest.raises(Exception):
|
||||
tool_usage._validate_tool_input(invalid_input)
|
||||
|
||||
# Verify event was emitted
|
||||
@@ -589,6 +589,9 @@ def test_tool_usage_finished_event_with_result():
|
||||
# Create mock task
|
||||
mock_task = MagicMock()
|
||||
mock_task.delegations = 0
|
||||
mock_task.name = "Test Task"
|
||||
mock_task.description = "A test task for tool usage"
|
||||
mock_task.id = "test-task-id"
|
||||
|
||||
# Create mock tool
|
||||
class TestTool(BaseTool):
|
||||
@@ -665,6 +668,9 @@ def test_tool_usage_finished_event_with_cached_result():
|
||||
# Create mock task
|
||||
mock_task = MagicMock()
|
||||
mock_task.delegations = 0
|
||||
mock_task.name = "Test Task"
|
||||
mock_task.description = "A test task for tool usage"
|
||||
mock_task.id = "test-task-id"
|
||||
|
||||
# Create mock tool
|
||||
class TestTool(BaseTool):
|
||||
|
||||
@@ -891,7 +891,7 @@ def test_stream_llm_emits_event_with_task_and_agent_info():
|
||||
assert set(all_agent_roles) == {agent.role}
|
||||
assert set(all_agent_id) == {agent.id}
|
||||
assert set(all_task_id) == {task.id}
|
||||
assert set(all_task_name) == {task.name}
|
||||
assert set(all_task_name) == {task.name or task.description}
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
@@ -942,7 +942,7 @@ def test_llm_emits_event_with_task_and_agent_info(base_agent, base_task):
|
||||
assert set(all_agent_roles) == {base_agent.role}
|
||||
assert set(all_agent_id) == {base_agent.id}
|
||||
assert set(all_task_id) == {base_task.id}
|
||||
assert set(all_task_name) == {base_task.name}
|
||||
assert set(all_task_name) == {base_task.name or base_task.description}
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
@@ -978,9 +978,9 @@ def test_llm_emits_event_with_lite_agent():
|
||||
)
|
||||
agent.kickoff(messages=[{"role": "user", "content": "say hi!"}])
|
||||
|
||||
assert len(completed_event) == 2
|
||||
assert len(completed_event) == 1
|
||||
assert len(failed_event) == 0
|
||||
assert len(started_event) == 2
|
||||
assert len(started_event) == 1
|
||||
assert len(stream_event) == 15
|
||||
|
||||
all_events = completed_event + failed_event + started_event + stream_event
|
||||
@@ -990,8 +990,8 @@ def test_llm_emits_event_with_lite_agent():
|
||||
all_task_name = [event.task_name for event in all_events if event.task_name]
|
||||
|
||||
# ensure all events have the agent + task props set
|
||||
assert len(all_agent_roles) == 19
|
||||
assert len(all_agent_id) == 19
|
||||
assert len(all_agent_roles) == 17
|
||||
assert len(all_agent_id) == 17
|
||||
assert len(all_task_id) == 0
|
||||
assert len(all_task_name) == 0
|
||||
|
||||
|
||||