mirror of
https://github.com/crewAIInc/crewAI.git
synced 2026-01-07 23:28:30 +00:00
Compare commits
11 Commits
gl/feat/py
...
devin/1762
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
5a3cd627bf | ||
|
|
fbe4aa4bd1 | ||
|
|
c205d2e8de | ||
|
|
fcb5b19b2e | ||
|
|
01f0111d52 | ||
|
|
6b52587c67 | ||
|
|
629f7f34ce | ||
|
|
0f1c173d02 | ||
|
|
19c5b9a35e | ||
|
|
1ed307b58c | ||
|
|
d29867bbb6 |
11
.github/dependabot.yml
vendored
Normal file
11
.github/dependabot.yml
vendored
Normal file
@@ -0,0 +1,11 @@
|
||||
# To get started with Dependabot version updates, you'll need to specify which
|
||||
# package ecosystems to update and where the package manifests are located.
|
||||
# Please see the documentation for all configuration options:
|
||||
# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
|
||||
|
||||
version: 2
|
||||
updates:
|
||||
- package-ecosystem: uv # See documentation for possible values
|
||||
directory: "/" # Location of package manifests
|
||||
schedule:
|
||||
interval: "weekly"
|
||||
@@ -60,6 +60,7 @@ crew = Crew(
|
||||
| **Output Pydantic** _(optional)_ | `output_pydantic` | `Optional[Type[BaseModel]]` | A Pydantic model for task output. |
|
||||
| **Callback** _(optional)_ | `callback` | `Optional[Any]` | Function/object to be executed after task completion. |
|
||||
| **Guardrail** _(optional)_ | `guardrail` | `Optional[Callable]` | Function to validate task output before proceeding to next task. |
|
||||
| **Guardrails** _(optional)_ | `guardrails` | `Optional[List[Callable] | List[str]]` | List of guardrails to validate task output before proceeding to next task. |
|
||||
| **Guardrail Max Retries** _(optional)_ | `guardrail_max_retries` | `Optional[int]` | Maximum number of retries when guardrail validation fails. Defaults to 3. |
|
||||
|
||||
<Note type="warning" title="Deprecated: max_retries">
|
||||
@@ -223,6 +224,7 @@ By default, the `TaskOutput` will only include the `raw` output. A `TaskOutput`
|
||||
| **JSON Dict** | `json_dict` | `Optional[Dict[str, Any]]` | A dictionary representing the JSON output of the task. |
|
||||
| **Agent** | `agent` | `str` | The agent that executed the task. |
|
||||
| **Output Format** | `output_format` | `OutputFormat` | The format of the task output, with options including RAW, JSON, and Pydantic. The default is RAW. |
|
||||
| **Messages** | `messages` | `list[LLMMessage]` | The messages from the last task execution. |
|
||||
|
||||
### Task Methods and Properties
|
||||
|
||||
@@ -341,7 +343,11 @@ Task guardrails provide a way to validate and transform task outputs before they
|
||||
are passed to the next task. This feature helps ensure data quality and provides
|
||||
feedback to agents when their output doesn't meet specific criteria.
|
||||
|
||||
Guardrails are implemented as Python functions that contain custom validation logic, giving you complete control over the validation process and ensuring reliable, deterministic results.
|
||||
CrewAI supports two types of guardrails:
|
||||
|
||||
1. **Function-based guardrails**: Python functions with custom validation logic, giving you complete control over the validation process and ensuring reliable, deterministic results.
|
||||
|
||||
2. **LLM-based guardrails**: String descriptions that use the agent's LLM to validate outputs based on natural language criteria. These are ideal for complex or subjective validation requirements.
|
||||
|
||||
### Function-Based Guardrails
|
||||
|
||||
@@ -355,12 +361,12 @@ def validate_blog_content(result: TaskOutput) -> Tuple[bool, Any]:
|
||||
"""Validate blog content meets requirements."""
|
||||
try:
|
||||
# Check word count
|
||||
word_count = len(result.split())
|
||||
word_count = len(result.raw.split())
|
||||
if word_count > 200:
|
||||
return (False, "Blog content exceeds 200 words")
|
||||
|
||||
# Additional validation logic here
|
||||
return (True, result.strip())
|
||||
return (True, result.raw.strip())
|
||||
except Exception as e:
|
||||
return (False, "Unexpected error during validation")
|
||||
|
||||
@@ -372,6 +378,147 @@ blog_task = Task(
|
||||
)
|
||||
```
|
||||
|
||||
### LLM-Based Guardrails (String Descriptions)
|
||||
|
||||
Instead of writing custom validation functions, you can use string descriptions that leverage LLM-based validation. When you provide a string to the `guardrail` or `guardrails` parameter, CrewAI automatically creates an `LLMGuardrail` that uses the agent's LLM to validate the output based on your description.
|
||||
|
||||
**Requirements**:
|
||||
- The task must have an `agent` assigned (the guardrail uses the agent's LLM)
|
||||
- Provide a clear, descriptive string explaining the validation criteria
|
||||
|
||||
```python Code
|
||||
from crewai import Task
|
||||
|
||||
# Single LLM-based guardrail
|
||||
blog_task = Task(
|
||||
description="Write a blog post about AI",
|
||||
expected_output="A blog post under 200 words",
|
||||
agent=blog_agent,
|
||||
guardrail="The blog post must be under 200 words and contain no technical jargon"
|
||||
)
|
||||
```
|
||||
|
||||
LLM-based guardrails are particularly useful for:
|
||||
- **Complex validation logic** that's difficult to express programmatically
|
||||
- **Subjective criteria** like tone, style, or quality assessments
|
||||
- **Natural language requirements** that are easier to describe than code
|
||||
|
||||
The LLM guardrail will:
|
||||
1. Analyze the task output against your description
|
||||
2. Return `(True, output)` if the output complies with the criteria
|
||||
3. Return `(False, feedback)` with specific feedback if validation fails
|
||||
|
||||
**Example with detailed validation criteria**:
|
||||
|
||||
```python Code
|
||||
research_task = Task(
|
||||
description="Research the latest developments in quantum computing",
|
||||
expected_output="A comprehensive research report",
|
||||
agent=researcher_agent,
|
||||
guardrail="""
|
||||
The research report must:
|
||||
- Be at least 1000 words long
|
||||
- Include at least 5 credible sources
|
||||
- Cover both technical and practical applications
|
||||
- Be written in a professional, academic tone
|
||||
- Avoid speculation or unverified claims
|
||||
"""
|
||||
)
|
||||
```
|
||||
|
||||
### Multiple Guardrails
|
||||
|
||||
You can apply multiple guardrails to a task using the `guardrails` parameter. Multiple guardrails are executed sequentially, with each guardrail receiving the output from the previous one. This allows you to chain validation and transformation steps.
|
||||
|
||||
The `guardrails` parameter accepts:
|
||||
- A list of guardrail functions or string descriptions
|
||||
- A single guardrail function or string (same as `guardrail`)
|
||||
|
||||
**Note**: If `guardrails` is provided, it takes precedence over `guardrail`. The `guardrail` parameter will be ignored when `guardrails` is set.
|
||||
|
||||
```python Code
|
||||
from typing import Tuple, Any
|
||||
from crewai import TaskOutput, Task
|
||||
|
||||
def validate_word_count(result: TaskOutput) -> Tuple[bool, Any]:
|
||||
"""Validate word count is within limits."""
|
||||
word_count = len(result.raw.split())
|
||||
if word_count < 100:
|
||||
return (False, f"Content too short: {word_count} words. Need at least 100 words.")
|
||||
if word_count > 500:
|
||||
return (False, f"Content too long: {word_count} words. Maximum is 500 words.")
|
||||
return (True, result.raw)
|
||||
|
||||
def validate_no_profanity(result: TaskOutput) -> Tuple[bool, Any]:
|
||||
"""Check for inappropriate language."""
|
||||
profanity_words = ["badword1", "badword2"] # Example list
|
||||
content_lower = result.raw.lower()
|
||||
for word in profanity_words:
|
||||
if word in content_lower:
|
||||
return (False, f"Inappropriate language detected: {word}")
|
||||
return (True, result.raw)
|
||||
|
||||
def format_output(result: TaskOutput) -> Tuple[bool, Any]:
|
||||
"""Format and clean the output."""
|
||||
formatted = result.raw.strip()
|
||||
# Capitalize first letter
|
||||
formatted = formatted[0].upper() + formatted[1:] if formatted else formatted
|
||||
return (True, formatted)
|
||||
|
||||
# Apply multiple guardrails sequentially
|
||||
blog_task = Task(
|
||||
description="Write a blog post about AI",
|
||||
expected_output="A well-formatted blog post between 100-500 words",
|
||||
agent=blog_agent,
|
||||
guardrails=[
|
||||
validate_word_count, # First: validate length
|
||||
validate_no_profanity, # Second: check content
|
||||
format_output # Third: format the result
|
||||
],
|
||||
guardrail_max_retries=3
|
||||
)
|
||||
```
|
||||
|
||||
In this example, the guardrails execute in order:
|
||||
1. `validate_word_count` checks the word count
|
||||
2. `validate_no_profanity` checks for inappropriate language (using the output from step 1)
|
||||
3. `format_output` formats the final result (using the output from step 2)
|
||||
|
||||
If any guardrail fails, the error is sent back to the agent, and the task is retried up to `guardrail_max_retries` times.
|
||||
|
||||
**Mixing function-based and LLM-based guardrails**:
|
||||
|
||||
You can combine both function-based and string-based guardrails in the same list:
|
||||
|
||||
```python Code
|
||||
from typing import Tuple, Any
|
||||
from crewai import TaskOutput, Task
|
||||
|
||||
def validate_word_count(result: TaskOutput) -> Tuple[bool, Any]:
|
||||
"""Validate word count is within limits."""
|
||||
word_count = len(result.raw.split())
|
||||
if word_count < 100:
|
||||
return (False, f"Content too short: {word_count} words. Need at least 100 words.")
|
||||
if word_count > 500:
|
||||
return (False, f"Content too long: {word_count} words. Maximum is 500 words.")
|
||||
return (True, result.raw)
|
||||
|
||||
# Mix function-based and LLM-based guardrails
|
||||
blog_task = Task(
|
||||
description="Write a blog post about AI",
|
||||
expected_output="A well-formatted blog post between 100-500 words",
|
||||
agent=blog_agent,
|
||||
guardrails=[
|
||||
validate_word_count, # Function-based: precise word count check
|
||||
"The content must be engaging and suitable for a general audience", # LLM-based: subjective quality check
|
||||
"The writing style should be clear, concise, and free of technical jargon" # LLM-based: style validation
|
||||
],
|
||||
guardrail_max_retries=3
|
||||
)
|
||||
```
|
||||
|
||||
This approach combines the precision of programmatic validation with the flexibility of LLM-based assessment for subjective criteria.
|
||||
|
||||
### Guardrail Function Requirements
|
||||
|
||||
1. **Function Signature**:
|
||||
|
||||
@@ -12,7 +12,7 @@ dependencies = [
|
||||
"pytube>=15.0.0",
|
||||
"requests>=2.32.5",
|
||||
"docker>=7.1.0",
|
||||
"crewai==1.3.0",
|
||||
"crewai==1.4.1",
|
||||
"lancedb>=0.5.4",
|
||||
"tiktoken>=0.8.0",
|
||||
"beautifulsoup4>=4.13.4",
|
||||
|
||||
@@ -287,4 +287,4 @@ __all__ = [
|
||||
"ZapierActionTools",
|
||||
]
|
||||
|
||||
__version__ = "1.3.0"
|
||||
__version__ = "1.4.1"
|
||||
|
||||
@@ -12,12 +12,16 @@ from pydantic.types import ImportString
|
||||
|
||||
|
||||
class QdrantToolSchema(BaseModel):
|
||||
query: str = Field(..., description="Query to search in Qdrant DB")
|
||||
query: str = Field(
|
||||
..., description="Query to search in Qdrant DB - always required."
|
||||
)
|
||||
filter_by: str | None = Field(
|
||||
default=None, description="Parameter to filter the search by."
|
||||
default=None,
|
||||
description="Parameter to filter the search by. When filtering, needs to be used in conjunction with filter_value.",
|
||||
)
|
||||
filter_value: Any | None = Field(
|
||||
default=None, description="Value to filter the search by."
|
||||
default=None,
|
||||
description="Value to filter the search by. When filtering, needs to be used in conjunction with filter_by.",
|
||||
)
|
||||
|
||||
|
||||
|
||||
@@ -48,7 +48,7 @@ Repository = "https://github.com/crewAIInc/crewAI"
|
||||
|
||||
[project.optional-dependencies]
|
||||
tools = [
|
||||
"crewai-tools==1.3.0",
|
||||
"crewai-tools==1.4.1",
|
||||
]
|
||||
embeddings = [
|
||||
"tiktoken~=0.8.0"
|
||||
|
||||
@@ -40,7 +40,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:
|
||||
|
||||
_suppress_pydantic_deprecation_warnings()
|
||||
|
||||
__version__ = "1.3.0"
|
||||
__version__ = "1.4.1"
|
||||
_telemetry_submitted = False
|
||||
|
||||
|
||||
|
||||
@@ -497,7 +497,37 @@ def _delegate_to_a2a(
|
||||
|
||||
conversation_history = a2a_result.get("history", [])
|
||||
|
||||
if a2a_result["status"] in ["completed", "input_required"]:
|
||||
if a2a_result["status"] == "completed":
|
||||
# Do NOT call _handle_agent_response_and_continue as it may trigger another delegation
|
||||
final_message = a2a_result.get("result", "")
|
||||
|
||||
# If result is empty, try to extract from conversation history
|
||||
if not final_message and conversation_history:
|
||||
for msg in reversed(conversation_history):
|
||||
if msg.role == Role.agent:
|
||||
text_parts = [
|
||||
part.root.text for part in msg.parts if part.root.kind == "text"
|
||||
]
|
||||
final_message = (
|
||||
" ".join(text_parts) if text_parts else "Conversation completed"
|
||||
)
|
||||
break
|
||||
|
||||
if not final_message:
|
||||
final_message = "Conversation completed"
|
||||
|
||||
crewai_event_bus.emit(
|
||||
None,
|
||||
A2AConversationCompletedEvent(
|
||||
status="completed",
|
||||
final_result=final_message,
|
||||
error=None,
|
||||
total_turns=turn_num + 1,
|
||||
),
|
||||
)
|
||||
return final_message
|
||||
|
||||
if a2a_result["status"] == "input_required":
|
||||
final_result, next_request = _handle_agent_response_and_continue(
|
||||
self=self,
|
||||
a2a_result=a2a_result,
|
||||
|
||||
@@ -119,6 +119,7 @@ class Agent(BaseAgent):
|
||||
|
||||
_times_executed: int = PrivateAttr(default=0)
|
||||
_mcp_clients: list[Any] = PrivateAttr(default_factory=list)
|
||||
_last_messages: list[LLMMessage] = PrivateAttr(default_factory=list)
|
||||
max_execution_time: int | None = Field(
|
||||
default=None,
|
||||
description="Maximum execution time for an agent to execute a task",
|
||||
@@ -538,6 +539,12 @@ class Agent(BaseAgent):
|
||||
event=AgentExecutionCompletedEvent(agent=self, task=task, output=result),
|
||||
)
|
||||
|
||||
self._last_messages = (
|
||||
self.agent_executor.messages.copy()
|
||||
if self.agent_executor and hasattr(self.agent_executor, "messages")
|
||||
else []
|
||||
)
|
||||
|
||||
self._cleanup_mcp_clients()
|
||||
|
||||
return result
|
||||
@@ -618,22 +625,22 @@ class Agent(BaseAgent):
|
||||
response_template=self.response_template,
|
||||
).task_execution()
|
||||
|
||||
stop_sequences = [self.i18n.slice("observation")]
|
||||
stop_words = [self.i18n.slice("observation")]
|
||||
|
||||
if self.response_template:
|
||||
stop_sequences.append(
|
||||
stop_words.append(
|
||||
self.response_template.split("{{ .Response }}")[1].strip()
|
||||
)
|
||||
|
||||
self.agent_executor = CrewAgentExecutor(
|
||||
llm=self.llm, # type: ignore[arg-type]
|
||||
llm=self.llm,
|
||||
task=task, # type: ignore[arg-type]
|
||||
agent=self,
|
||||
crew=self.crew,
|
||||
tools=parsed_tools,
|
||||
prompt=prompt,
|
||||
original_tools=raw_tools,
|
||||
stop_sequences=stop_sequences,
|
||||
stop_words=stop_words,
|
||||
max_iter=self.max_iter,
|
||||
tools_handler=self.tools_handler,
|
||||
tools_names=get_tool_names(parsed_tools),
|
||||
@@ -974,9 +981,7 @@ class Agent(BaseAgent):
|
||||
path = parsed.path.replace("/", "_").strip("_")
|
||||
return f"{domain}_{path}" if path else domain
|
||||
|
||||
def _get_mcp_tool_schemas(
|
||||
self, server_params: dict[str, Any]
|
||||
) -> dict[str, dict[str, Any]] | Any:
|
||||
def _get_mcp_tool_schemas(self, server_params: dict) -> dict[str, dict]:
|
||||
"""Get tool schemas from MCP server for wrapper creation with caching."""
|
||||
server_url = server_params["url"]
|
||||
|
||||
@@ -1008,7 +1013,7 @@ class Agent(BaseAgent):
|
||||
|
||||
async def _get_mcp_tool_schemas_async(
|
||||
self, server_params: dict[str, Any]
|
||||
) -> dict[str, dict[str, Any]]:
|
||||
) -> dict[str, dict]:
|
||||
"""Async implementation of MCP tool schema retrieval with timeouts and retries."""
|
||||
server_url = server_params["url"]
|
||||
return await self._retry_mcp_discovery(
|
||||
@@ -1016,7 +1021,7 @@ class Agent(BaseAgent):
|
||||
)
|
||||
|
||||
async def _retry_mcp_discovery(
|
||||
self, operation_func: Any, server_url: str
|
||||
self, operation_func, server_url: str
|
||||
) -> dict[str, dict[str, Any]]:
|
||||
"""Retry MCP discovery operation with exponential backoff, avoiding try-except in loop."""
|
||||
last_error = None
|
||||
@@ -1047,7 +1052,7 @@ class Agent(BaseAgent):
|
||||
|
||||
@staticmethod
|
||||
async def _attempt_mcp_discovery(
|
||||
operation_func: Any, server_url: str
|
||||
operation_func, server_url: str
|
||||
) -> tuple[dict[str, dict[str, Any]] | None, str, bool]:
|
||||
"""Attempt single MCP discovery operation and return (result, error_message, should_retry)."""
|
||||
try:
|
||||
@@ -1151,13 +1156,13 @@ class Agent(BaseAgent):
|
||||
Field(..., description=field_description),
|
||||
)
|
||||
else:
|
||||
field_definitions[field_name] = ( # type: ignore[assignment]
|
||||
field_definitions[field_name] = (
|
||||
field_type | None,
|
||||
Field(default=None, description=field_description),
|
||||
)
|
||||
|
||||
model_name = f"{tool_name.replace('-', '_').replace(' ', '_')}Schema"
|
||||
return create_model(model_name, **field_definitions) # type: ignore[no-any-return,call-overload]
|
||||
return create_model(model_name, **field_definitions)
|
||||
|
||||
def _json_type_to_python(self, field_schema: dict[str, Any]) -> type:
|
||||
"""Convert JSON Schema type to Python type.
|
||||
@@ -1177,12 +1182,12 @@ class Agent(BaseAgent):
|
||||
if "const" in option:
|
||||
types.append(str)
|
||||
else:
|
||||
types.append(self._json_type_to_python(option)) # type: ignore[arg-type]
|
||||
types.append(self._json_type_to_python(option))
|
||||
unique_types = list(set(types))
|
||||
if len(unique_types) > 1:
|
||||
result = unique_types[0]
|
||||
for t in unique_types[1:]:
|
||||
result = result | t # type: ignore[assignment]
|
||||
result = result | t
|
||||
return result
|
||||
return unique_types[0]
|
||||
|
||||
@@ -1195,10 +1200,10 @@ class Agent(BaseAgent):
|
||||
"object": dict,
|
||||
}
|
||||
|
||||
return type_mapping.get(json_type, Any) # type: ignore[arg-type]
|
||||
return type_mapping.get(json_type, Any)
|
||||
|
||||
@staticmethod
|
||||
def _fetch_amp_mcp_servers(mcp_name: str) -> list[dict[str, Any]]:
|
||||
def _fetch_amp_mcp_servers(mcp_name: str) -> list[dict]:
|
||||
"""Fetch MCP server configurations from CrewAI AMP API."""
|
||||
# TODO: Implement AMP API call to "integrations/mcps" endpoint
|
||||
# Should return list of server configs with URLs
|
||||
@@ -1343,6 +1348,15 @@ class Agent(BaseAgent):
|
||||
def set_fingerprint(self, fingerprint: Fingerprint) -> None:
|
||||
self.security_config.fingerprint = fingerprint
|
||||
|
||||
@property
|
||||
def last_messages(self) -> list[LLMMessage]:
|
||||
"""Get messages from the last task execution.
|
||||
|
||||
Returns:
|
||||
List of LLM messages from the most recent task execution.
|
||||
"""
|
||||
return self._last_messages
|
||||
|
||||
def _get_knowledge_search_query(self, task_prompt: str, task: Task) -> str | None:
|
||||
"""Generate a search query for the knowledge base based on the task description."""
|
||||
crewai_event_bus.emit(
|
||||
@@ -1437,7 +1451,7 @@ class Agent(BaseAgent):
|
||||
goal=self.goal,
|
||||
backstory=self.backstory,
|
||||
llm=self.llm,
|
||||
tools=self.tools,
|
||||
tools=self.tools or [],
|
||||
max_iterations=self.max_iter,
|
||||
max_execution_time=self.max_execution_time,
|
||||
respect_context_window=self.respect_context_window,
|
||||
|
||||
@@ -137,7 +137,7 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
|
||||
default=False,
|
||||
description="Enable agent to delegate and ask questions among each other.",
|
||||
)
|
||||
tools: list[BaseTool] = Field(
|
||||
tools: list[BaseTool] | None = Field(
|
||||
default_factory=list, description="Tools at agents' disposal"
|
||||
)
|
||||
max_iter: int = Field(
|
||||
|
||||
@@ -38,6 +38,10 @@ from crewai.utilities.agent_utils import (
|
||||
)
|
||||
from crewai.utilities.constants import TRAINING_DATA_FILE
|
||||
from crewai.utilities.i18n import I18N, get_i18n
|
||||
from crewai.utilities.llm_call_hooks import (
|
||||
get_after_llm_call_hooks,
|
||||
get_before_llm_call_hooks,
|
||||
)
|
||||
from crewai.utilities.printer import Printer
|
||||
from crewai.utilities.tool_utils import execute_tool_and_check_finality
|
||||
from crewai.utilities.training_handler import CrewTrainingHandler
|
||||
@@ -73,7 +77,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
|
||||
max_iter: int,
|
||||
tools: list[CrewStructuredTool],
|
||||
tools_names: str,
|
||||
stop_sequences: list[str],
|
||||
stop_words: list[str],
|
||||
tools_description: str,
|
||||
tools_handler: ToolsHandler,
|
||||
step_callback: Any = None,
|
||||
@@ -95,7 +99,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
|
||||
max_iter: Maximum iterations.
|
||||
tools: Available tools.
|
||||
tools_names: Tool names string.
|
||||
stop_sequences: Stop sequences list for halting generation.
|
||||
stop_words: Stop word list.
|
||||
tools_description: Tool descriptions.
|
||||
tools_handler: Tool handler instance.
|
||||
step_callback: Optional step callback.
|
||||
@@ -114,6 +118,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
|
||||
self.prompt = prompt
|
||||
self.tools = tools
|
||||
self.tools_names = tools_names
|
||||
self.stop = stop_words
|
||||
self.max_iter = max_iter
|
||||
self.callbacks = callbacks or []
|
||||
self._printer: Printer = Printer()
|
||||
@@ -129,8 +134,20 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
|
||||
self.messages: list[LLMMessage] = []
|
||||
self.iterations = 0
|
||||
self.log_error_after = 3
|
||||
self.before_llm_call_hooks: list[Callable] = []
|
||||
self.after_llm_call_hooks: list[Callable] = []
|
||||
self.before_llm_call_hooks.extend(get_before_llm_call_hooks())
|
||||
self.after_llm_call_hooks.extend(get_after_llm_call_hooks())
|
||||
if self.llm:
|
||||
self.llm.stop_sequences.extend(stop_sequences)
|
||||
# This may be mutating the shared llm object and needs further evaluation
|
||||
existing_stop = getattr(self.llm, "stop", [])
|
||||
self.llm.stop = list(
|
||||
set(
|
||||
existing_stop + self.stop
|
||||
if isinstance(existing_stop, list)
|
||||
else self.stop
|
||||
)
|
||||
)
|
||||
|
||||
@property
|
||||
def use_stop_words(self) -> bool:
|
||||
@@ -139,7 +156,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
|
||||
Returns:
|
||||
bool: True if tool should be used or not.
|
||||
"""
|
||||
return self.llm.supports_stop_words if self.llm else False
|
||||
return self.llm.supports_stop_words() if self.llm else False
|
||||
|
||||
def invoke(self, inputs: dict[str, Any]) -> dict[str, Any]:
|
||||
"""Execute the agent with given inputs.
|
||||
@@ -205,6 +222,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
|
||||
llm=self.llm,
|
||||
callbacks=self.callbacks,
|
||||
)
|
||||
break
|
||||
|
||||
enforce_rpm_limit(self.request_within_rpm_limit)
|
||||
|
||||
@@ -216,8 +234,9 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
|
||||
from_task=self.task,
|
||||
from_agent=self.agent,
|
||||
response_model=self.response_model,
|
||||
executor_context=self,
|
||||
)
|
||||
formatted_answer = process_llm_response(answer, self.use_stop_words)
|
||||
formatted_answer = process_llm_response(answer, self.use_stop_words) # type: ignore[assignment]
|
||||
|
||||
if isinstance(formatted_answer, AgentAction):
|
||||
# Extract agent fingerprint if available
|
||||
@@ -249,11 +268,11 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
|
||||
formatted_answer, tool_result
|
||||
)
|
||||
|
||||
self._invoke_step_callback(formatted_answer)
|
||||
self._append_message(formatted_answer.text)
|
||||
self._invoke_step_callback(formatted_answer) # type: ignore[arg-type]
|
||||
self._append_message(formatted_answer.text) # type: ignore[union-attr,attr-defined]
|
||||
|
||||
except OutputParserError as e: # noqa: PERF203
|
||||
formatted_answer = handle_output_parser_exception(
|
||||
except OutputParserError as e:
|
||||
formatted_answer = handle_output_parser_exception( # type: ignore[assignment]
|
||||
e=e,
|
||||
messages=self.messages,
|
||||
iterations=self.iterations,
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
import time
|
||||
from typing import Any
|
||||
from typing import TYPE_CHECKING, Any, TypeVar, cast
|
||||
import webbrowser
|
||||
|
||||
from pydantic import BaseModel, Field
|
||||
@@ -13,6 +13,8 @@ from crewai.cli.shared.token_manager import TokenManager
|
||||
|
||||
console = Console()
|
||||
|
||||
TOauth2Settings = TypeVar("TOauth2Settings", bound="Oauth2Settings")
|
||||
|
||||
|
||||
class Oauth2Settings(BaseModel):
|
||||
provider: str = Field(
|
||||
@@ -28,9 +30,15 @@ class Oauth2Settings(BaseModel):
|
||||
description="OAuth2 audience value, typically used to identify the target API or resource.",
|
||||
default=None,
|
||||
)
|
||||
extra: dict[str, Any] = Field(
|
||||
description="Extra configuration for the OAuth2 provider.",
|
||||
default={},
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def from_settings(cls):
|
||||
def from_settings(cls: type[TOauth2Settings]) -> TOauth2Settings:
|
||||
"""Create an Oauth2Settings instance from the CLI settings."""
|
||||
|
||||
settings = Settings()
|
||||
|
||||
return cls(
|
||||
@@ -38,12 +46,20 @@ class Oauth2Settings(BaseModel):
|
||||
domain=settings.oauth2_domain,
|
||||
client_id=settings.oauth2_client_id,
|
||||
audience=settings.oauth2_audience,
|
||||
extra=settings.oauth2_extra,
|
||||
)
|
||||
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from crewai.cli.authentication.providers.base_provider import BaseProvider
|
||||
|
||||
|
||||
class ProviderFactory:
|
||||
@classmethod
|
||||
def from_settings(cls, settings: Oauth2Settings | None = None):
|
||||
def from_settings(
|
||||
cls: type["ProviderFactory"], # noqa: UP037
|
||||
settings: Oauth2Settings | None = None,
|
||||
) -> "BaseProvider": # noqa: UP037
|
||||
settings = settings or Oauth2Settings.from_settings()
|
||||
|
||||
import importlib
|
||||
@@ -53,11 +69,11 @@ class ProviderFactory:
|
||||
)
|
||||
provider = getattr(module, f"{settings.provider.capitalize()}Provider")
|
||||
|
||||
return provider(settings)
|
||||
return cast("BaseProvider", provider(settings))
|
||||
|
||||
|
||||
class AuthenticationCommand:
|
||||
def __init__(self):
|
||||
def __init__(self) -> None:
|
||||
self.token_manager = TokenManager()
|
||||
self.oauth2_provider = ProviderFactory.from_settings()
|
||||
|
||||
@@ -84,7 +100,7 @@ class AuthenticationCommand:
|
||||
timeout=20,
|
||||
)
|
||||
response.raise_for_status()
|
||||
return response.json()
|
||||
return cast(dict[str, Any], response.json())
|
||||
|
||||
def _display_auth_instructions(self, device_code_data: dict[str, str]) -> None:
|
||||
"""Display the authentication instructions to the user."""
|
||||
|
||||
@@ -24,3 +24,7 @@ class BaseProvider(ABC):
|
||||
|
||||
@abstractmethod
|
||||
def get_client_id(self) -> str: ...
|
||||
|
||||
def get_required_fields(self) -> list[str]:
|
||||
"""Returns which provider-specific fields inside the "extra" dict will be required"""
|
||||
return []
|
||||
|
||||
@@ -3,16 +3,16 @@ from crewai.cli.authentication.providers.base_provider import BaseProvider
|
||||
|
||||
class OktaProvider(BaseProvider):
|
||||
def get_authorize_url(self) -> str:
|
||||
return f"https://{self.settings.domain}/oauth2/default/v1/device/authorize"
|
||||
return f"{self._oauth2_base_url()}/v1/device/authorize"
|
||||
|
||||
def get_token_url(self) -> str:
|
||||
return f"https://{self.settings.domain}/oauth2/default/v1/token"
|
||||
return f"{self._oauth2_base_url()}/v1/token"
|
||||
|
||||
def get_jwks_url(self) -> str:
|
||||
return f"https://{self.settings.domain}/oauth2/default/v1/keys"
|
||||
return f"{self._oauth2_base_url()}/v1/keys"
|
||||
|
||||
def get_issuer(self) -> str:
|
||||
return f"https://{self.settings.domain}/oauth2/default"
|
||||
return self._oauth2_base_url().removesuffix("/oauth2")
|
||||
|
||||
def get_audience(self) -> str:
|
||||
if self.settings.audience is None:
|
||||
@@ -27,3 +27,16 @@ class OktaProvider(BaseProvider):
|
||||
"Client ID is required. Please set it in the configuration."
|
||||
)
|
||||
return self.settings.client_id
|
||||
|
||||
def get_required_fields(self) -> list[str]:
|
||||
return ["authorization_server_name", "using_org_auth_server"]
|
||||
|
||||
def _oauth2_base_url(self) -> str:
|
||||
using_org_auth_server = self.settings.extra.get("using_org_auth_server", False)
|
||||
|
||||
if using_org_auth_server:
|
||||
base_url = f"https://{self.settings.domain}/oauth2"
|
||||
else:
|
||||
base_url = f"https://{self.settings.domain}/oauth2/{self.settings.extra.get('authorization_server_name', 'default')}"
|
||||
|
||||
return f"{base_url}"
|
||||
|
||||
@@ -11,18 +11,18 @@ console = Console()
|
||||
|
||||
|
||||
class BaseCommand:
|
||||
def __init__(self):
|
||||
def __init__(self) -> None:
|
||||
self._telemetry = Telemetry()
|
||||
self._telemetry.set_tracer()
|
||||
|
||||
|
||||
class PlusAPIMixin:
|
||||
def __init__(self, telemetry):
|
||||
def __init__(self, telemetry: Telemetry) -> None:
|
||||
try:
|
||||
telemetry.set_tracer()
|
||||
self.plus_api_client = PlusAPI(api_key=get_auth_token())
|
||||
except Exception:
|
||||
self._deploy_signup_error_span = telemetry.deploy_signup_error_span()
|
||||
telemetry.deploy_signup_error_span()
|
||||
console.print(
|
||||
"Please sign up/login to CrewAI+ before using the CLI.",
|
||||
style="bold red",
|
||||
|
||||
@@ -2,6 +2,7 @@ import json
|
||||
from logging import getLogger
|
||||
from pathlib import Path
|
||||
import tempfile
|
||||
from typing import Any
|
||||
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
@@ -136,7 +137,12 @@ class Settings(BaseModel):
|
||||
default=DEFAULT_CLI_SETTINGS["oauth2_domain"],
|
||||
)
|
||||
|
||||
def __init__(self, config_path: Path | None = None, **data):
|
||||
oauth2_extra: dict[str, Any] = Field(
|
||||
description="Extra configuration for the OAuth2 provider.",
|
||||
default={},
|
||||
)
|
||||
|
||||
def __init__(self, config_path: Path | None = None, **data: dict[str, Any]) -> None:
|
||||
"""Load Settings from config path with fallback support"""
|
||||
if config_path is None:
|
||||
config_path = get_writable_config_path()
|
||||
|
||||
@@ -1,9 +1,10 @@
|
||||
from typing import Any
|
||||
from typing import Any, cast
|
||||
|
||||
import requests
|
||||
from requests.exceptions import JSONDecodeError, RequestException
|
||||
from rich.console import Console
|
||||
|
||||
from crewai.cli.authentication.main import Oauth2Settings, ProviderFactory
|
||||
from crewai.cli.command import BaseCommand
|
||||
from crewai.cli.settings.main import SettingsCommand
|
||||
from crewai.cli.version import get_crewai_version
|
||||
@@ -13,7 +14,7 @@ console = Console()
|
||||
|
||||
|
||||
class EnterpriseConfigureCommand(BaseCommand):
|
||||
def __init__(self):
|
||||
def __init__(self) -> None:
|
||||
super().__init__()
|
||||
self.settings_command = SettingsCommand()
|
||||
|
||||
@@ -54,25 +55,12 @@ class EnterpriseConfigureCommand(BaseCommand):
|
||||
except JSONDecodeError as e:
|
||||
raise ValueError(f"Invalid JSON response from {oauth_endpoint}") from e
|
||||
|
||||
required_fields = [
|
||||
"audience",
|
||||
"domain",
|
||||
"device_authorization_client_id",
|
||||
"provider",
|
||||
]
|
||||
missing_fields = [
|
||||
field for field in required_fields if field not in oauth_config
|
||||
]
|
||||
|
||||
if missing_fields:
|
||||
raise ValueError(
|
||||
f"Missing required fields in OAuth2 configuration: {', '.join(missing_fields)}"
|
||||
)
|
||||
self._validate_oauth_config(oauth_config)
|
||||
|
||||
console.print(
|
||||
"✅ Successfully retrieved OAuth2 configuration", style="green"
|
||||
)
|
||||
return oauth_config
|
||||
return cast(dict[str, Any], oauth_config)
|
||||
|
||||
except RequestException as e:
|
||||
raise ValueError(f"Failed to connect to enterprise URL: {e!s}") from e
|
||||
@@ -89,6 +77,7 @@ class EnterpriseConfigureCommand(BaseCommand):
|
||||
"oauth2_audience": oauth_config["audience"],
|
||||
"oauth2_client_id": oauth_config["device_authorization_client_id"],
|
||||
"oauth2_domain": oauth_config["domain"],
|
||||
"oauth2_extra": oauth_config["extra"],
|
||||
}
|
||||
|
||||
console.print("🔄 Updating local OAuth2 configuration...")
|
||||
@@ -99,3 +88,38 @@ class EnterpriseConfigureCommand(BaseCommand):
|
||||
|
||||
except Exception as e:
|
||||
raise ValueError(f"Failed to update OAuth2 settings: {e!s}") from e
|
||||
|
||||
def _validate_oauth_config(self, oauth_config: dict[str, Any]) -> None:
|
||||
required_fields = [
|
||||
"audience",
|
||||
"domain",
|
||||
"device_authorization_client_id",
|
||||
"provider",
|
||||
"extra",
|
||||
]
|
||||
|
||||
missing_basic_fields = [
|
||||
field for field in required_fields if field not in oauth_config
|
||||
]
|
||||
missing_provider_specific_fields = [
|
||||
field
|
||||
for field in self._get_provider_specific_fields(oauth_config["provider"])
|
||||
if field not in oauth_config.get("extra", {})
|
||||
]
|
||||
|
||||
if missing_basic_fields:
|
||||
raise ValueError(
|
||||
f"Missing required fields in OAuth2 configuration: [{', '.join(missing_basic_fields)}]"
|
||||
)
|
||||
|
||||
if missing_provider_specific_fields:
|
||||
raise ValueError(
|
||||
f"Missing authentication provider required fields in OAuth2 configuration: [{', '.join(missing_provider_specific_fields)}] (Configured provider: '{oauth_config['provider']}')"
|
||||
)
|
||||
|
||||
def _get_provider_specific_fields(self, provider_name: str) -> list[str]:
|
||||
provider = ProviderFactory.from_settings(
|
||||
Oauth2Settings(provider=provider_name, client_id="dummy", domain="dummy")
|
||||
)
|
||||
|
||||
return provider.get_required_fields()
|
||||
|
||||
@@ -3,7 +3,7 @@ import subprocess
|
||||
|
||||
|
||||
class Repository:
|
||||
def __init__(self, path="."):
|
||||
def __init__(self, path: str = ".") -> None:
|
||||
self.path = path
|
||||
|
||||
if not self.is_git_installed():
|
||||
|
||||
@@ -1,3 +1,4 @@
|
||||
from typing import Any
|
||||
from urllib.parse import urljoin
|
||||
|
||||
import requests
|
||||
@@ -36,19 +37,21 @@ class PlusAPI:
|
||||
str(settings.enterprise_base_url) or DEFAULT_CREWAI_ENTERPRISE_URL
|
||||
)
|
||||
|
||||
def _make_request(self, method: str, endpoint: str, **kwargs) -> requests.Response:
|
||||
def _make_request(
|
||||
self, method: str, endpoint: str, **kwargs: Any
|
||||
) -> requests.Response:
|
||||
url = urljoin(self.base_url, endpoint)
|
||||
session = requests.Session()
|
||||
session.trust_env = False
|
||||
return session.request(method, url, headers=self.headers, **kwargs)
|
||||
|
||||
def login_to_tool_repository(self):
|
||||
def login_to_tool_repository(self) -> requests.Response:
|
||||
return self._make_request("POST", f"{self.TOOLS_RESOURCE}/login")
|
||||
|
||||
def get_tool(self, handle: str):
|
||||
def get_tool(self, handle: str) -> requests.Response:
|
||||
return self._make_request("GET", f"{self.TOOLS_RESOURCE}/{handle}")
|
||||
|
||||
def get_agent(self, handle: str):
|
||||
def get_agent(self, handle: str) -> requests.Response:
|
||||
return self._make_request("GET", f"{self.AGENTS_RESOURCE}/{handle}")
|
||||
|
||||
def publish_tool(
|
||||
@@ -58,8 +61,8 @@ class PlusAPI:
|
||||
version: str,
|
||||
description: str | None,
|
||||
encoded_file: str,
|
||||
available_exports: list[str] | None = None,
|
||||
):
|
||||
available_exports: list[dict[str, Any]] | None = None,
|
||||
) -> requests.Response:
|
||||
params = {
|
||||
"handle": handle,
|
||||
"public": is_public,
|
||||
@@ -111,13 +114,13 @@ class PlusAPI:
|
||||
def list_crews(self) -> requests.Response:
|
||||
return self._make_request("GET", self.CREWS_RESOURCE)
|
||||
|
||||
def create_crew(self, payload) -> requests.Response:
|
||||
def create_crew(self, payload: dict[str, Any]) -> requests.Response:
|
||||
return self._make_request("POST", self.CREWS_RESOURCE, json=payload)
|
||||
|
||||
def get_organizations(self) -> requests.Response:
|
||||
return self._make_request("GET", self.ORGANIZATIONS_RESOURCE)
|
||||
|
||||
def initialize_trace_batch(self, payload) -> requests.Response:
|
||||
def initialize_trace_batch(self, payload: dict[str, Any]) -> requests.Response:
|
||||
return self._make_request(
|
||||
"POST",
|
||||
f"{self.TRACING_RESOURCE}/batches",
|
||||
@@ -125,14 +128,18 @@ class PlusAPI:
|
||||
timeout=30,
|
||||
)
|
||||
|
||||
def initialize_ephemeral_trace_batch(self, payload) -> requests.Response:
|
||||
def initialize_ephemeral_trace_batch(
|
||||
self, payload: dict[str, Any]
|
||||
) -> requests.Response:
|
||||
return self._make_request(
|
||||
"POST",
|
||||
f"{self.EPHEMERAL_TRACING_RESOURCE}/batches",
|
||||
json=payload,
|
||||
)
|
||||
|
||||
def send_trace_events(self, trace_batch_id: str, payload) -> requests.Response:
|
||||
def send_trace_events(
|
||||
self, trace_batch_id: str, payload: dict[str, Any]
|
||||
) -> requests.Response:
|
||||
return self._make_request(
|
||||
"POST",
|
||||
f"{self.TRACING_RESOURCE}/batches/{trace_batch_id}/events",
|
||||
@@ -141,7 +148,7 @@ class PlusAPI:
|
||||
)
|
||||
|
||||
def send_ephemeral_trace_events(
|
||||
self, trace_batch_id: str, payload
|
||||
self, trace_batch_id: str, payload: dict[str, Any]
|
||||
) -> requests.Response:
|
||||
return self._make_request(
|
||||
"POST",
|
||||
@@ -150,7 +157,9 @@ class PlusAPI:
|
||||
timeout=30,
|
||||
)
|
||||
|
||||
def finalize_trace_batch(self, trace_batch_id: str, payload) -> requests.Response:
|
||||
def finalize_trace_batch(
|
||||
self, trace_batch_id: str, payload: dict[str, Any]
|
||||
) -> requests.Response:
|
||||
return self._make_request(
|
||||
"PATCH",
|
||||
f"{self.TRACING_RESOURCE}/batches/{trace_batch_id}/finalize",
|
||||
@@ -159,7 +168,7 @@ class PlusAPI:
|
||||
)
|
||||
|
||||
def finalize_ephemeral_trace_batch(
|
||||
self, trace_batch_id: str, payload
|
||||
self, trace_batch_id: str, payload: dict[str, Any]
|
||||
) -> requests.Response:
|
||||
return self._make_request(
|
||||
"PATCH",
|
||||
|
||||
@@ -34,7 +34,7 @@ class SettingsCommand(BaseCommand):
|
||||
current_value = getattr(self.settings, field_name)
|
||||
description = field_info.description or "No description available"
|
||||
display_value = (
|
||||
str(current_value) if current_value is not None else "Not set"
|
||||
str(current_value) if current_value not in [None, {}] else "Not set"
|
||||
)
|
||||
|
||||
table.add_row(field_name, display_value, description)
|
||||
|
||||
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
|
||||
authors = [{ name = "Your Name", email = "you@example.com" }]
|
||||
requires-python = ">=3.10,<3.14"
|
||||
dependencies = [
|
||||
"crewai[tools]==1.3.0"
|
||||
"crewai[tools]==1.4.1"
|
||||
]
|
||||
|
||||
[project.scripts]
|
||||
|
||||
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
|
||||
authors = [{ name = "Your Name", email = "you@example.com" }]
|
||||
requires-python = ">=3.10,<3.14"
|
||||
dependencies = [
|
||||
"crewai[tools]==1.3.0"
|
||||
"crewai[tools]==1.4.1"
|
||||
]
|
||||
|
||||
[project.scripts]
|
||||
|
||||
@@ -30,11 +30,11 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
|
||||
A class to handle tool repository related operations for CrewAI projects.
|
||||
"""
|
||||
|
||||
def __init__(self):
|
||||
def __init__(self) -> None:
|
||||
BaseCommand.__init__(self)
|
||||
PlusAPIMixin.__init__(self, telemetry=self._telemetry)
|
||||
|
||||
def create(self, handle: str):
|
||||
def create(self, handle: str) -> None:
|
||||
self._ensure_not_in_project()
|
||||
|
||||
folder_name = handle.replace(" ", "_").replace("-", "_").lower()
|
||||
@@ -64,7 +64,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
|
||||
finally:
|
||||
os.chdir(old_directory)
|
||||
|
||||
def publish(self, is_public: bool, force: bool = False):
|
||||
def publish(self, is_public: bool, force: bool = False) -> None:
|
||||
if not git.Repository().is_synced() and not force:
|
||||
console.print(
|
||||
"[bold red]Failed to publish tool.[/bold red]\n"
|
||||
@@ -137,7 +137,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
|
||||
style="bold green",
|
||||
)
|
||||
|
||||
def install(self, handle: str):
|
||||
def install(self, handle: str) -> None:
|
||||
self._print_current_organization()
|
||||
get_response = self.plus_api_client.get_tool(handle)
|
||||
|
||||
@@ -180,7 +180,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
|
||||
settings.org_name = login_response_json["current_organization"]["name"]
|
||||
settings.dump()
|
||||
|
||||
def _add_package(self, tool_details: dict[str, Any]):
|
||||
def _add_package(self, tool_details: dict[str, Any]) -> None:
|
||||
is_from_pypi = tool_details.get("source", None) == "pypi"
|
||||
tool_handle = tool_details["handle"]
|
||||
repository_handle = tool_details["repository"]["handle"]
|
||||
@@ -209,7 +209,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
|
||||
click.echo(add_package_result.stderr, err=True)
|
||||
raise SystemExit
|
||||
|
||||
def _ensure_not_in_project(self):
|
||||
def _ensure_not_in_project(self) -> None:
|
||||
if os.path.isfile("./pyproject.toml"):
|
||||
console.print(
|
||||
"[bold red]Oops! It looks like you're inside a project.[/bold red]"
|
||||
|
||||
@@ -5,7 +5,7 @@ import os
|
||||
from pathlib import Path
|
||||
import shutil
|
||||
import sys
|
||||
from typing import Any, get_type_hints
|
||||
from typing import Any, cast, get_type_hints
|
||||
|
||||
import click
|
||||
from rich.console import Console
|
||||
@@ -23,7 +23,9 @@ if sys.version_info >= (3, 11):
|
||||
console = Console()
|
||||
|
||||
|
||||
def copy_template(src, dst, name, class_name, folder_name):
|
||||
def copy_template(
|
||||
src: Path, dst: Path, name: str, class_name: str, folder_name: str
|
||||
) -> None:
|
||||
"""Copy a file from src to dst."""
|
||||
with open(src, "r") as file:
|
||||
content = file.read()
|
||||
@@ -40,13 +42,13 @@ def copy_template(src, dst, name, class_name, folder_name):
|
||||
click.secho(f" - Created {dst}", fg="green")
|
||||
|
||||
|
||||
def read_toml(file_path: str = "pyproject.toml"):
|
||||
def read_toml(file_path: str = "pyproject.toml") -> dict[str, Any]:
|
||||
"""Read the content of a TOML file and return it as a dictionary."""
|
||||
with open(file_path, "rb") as f:
|
||||
return tomli.load(f)
|
||||
|
||||
|
||||
def parse_toml(content):
|
||||
def parse_toml(content: str) -> dict[str, Any]:
|
||||
if sys.version_info >= (3, 11):
|
||||
return tomllib.loads(content)
|
||||
return tomli.loads(content)
|
||||
@@ -103,7 +105,7 @@ def _get_project_attribute(
|
||||
)
|
||||
except Exception as e:
|
||||
# Handle TOML decode errors for Python 3.11+
|
||||
if sys.version_info >= (3, 11) and isinstance(e, tomllib.TOMLDecodeError): # type: ignore
|
||||
if sys.version_info >= (3, 11) and isinstance(e, tomllib.TOMLDecodeError):
|
||||
console.print(
|
||||
f"Error: {pyproject_path} is not a valid TOML file.", style="bold red"
|
||||
)
|
||||
@@ -126,7 +128,7 @@ def _get_nested_value(data: dict[str, Any], keys: list[str]) -> Any:
|
||||
return reduce(dict.__getitem__, keys, data)
|
||||
|
||||
|
||||
def fetch_and_json_env_file(env_file_path: str = ".env") -> dict:
|
||||
def fetch_and_json_env_file(env_file_path: str = ".env") -> dict[str, Any]:
|
||||
"""Fetch the environment variables from a .env file and return them as a dictionary."""
|
||||
try:
|
||||
# Read the .env file
|
||||
@@ -150,7 +152,7 @@ def fetch_and_json_env_file(env_file_path: str = ".env") -> dict:
|
||||
return {}
|
||||
|
||||
|
||||
def tree_copy(source, destination):
|
||||
def tree_copy(source: Path, destination: Path) -> None:
|
||||
"""Copies the entire directory structure from the source to the destination."""
|
||||
for item in os.listdir(source):
|
||||
source_item = os.path.join(source, item)
|
||||
@@ -161,7 +163,7 @@ def tree_copy(source, destination):
|
||||
shutil.copy2(source_item, destination_item)
|
||||
|
||||
|
||||
def tree_find_and_replace(directory, find, replace):
|
||||
def tree_find_and_replace(directory: Path, find: str, replace: str) -> None:
|
||||
"""Recursively searches through a directory, replacing a target string in
|
||||
both file contents and filenames with a specified replacement string.
|
||||
"""
|
||||
@@ -187,7 +189,7 @@ def tree_find_and_replace(directory, find, replace):
|
||||
os.rename(old_dirpath, new_dirpath)
|
||||
|
||||
|
||||
def load_env_vars(folder_path):
|
||||
def load_env_vars(folder_path: Path) -> dict[str, Any]:
|
||||
"""
|
||||
Loads environment variables from a .env file in the specified folder path.
|
||||
|
||||
@@ -208,7 +210,9 @@ def load_env_vars(folder_path):
|
||||
return env_vars
|
||||
|
||||
|
||||
def update_env_vars(env_vars, provider, model):
|
||||
def update_env_vars(
|
||||
env_vars: dict[str, Any], provider: str, model: str
|
||||
) -> dict[str, Any] | None:
|
||||
"""
|
||||
Updates environment variables with the API key for the selected provider and model.
|
||||
|
||||
@@ -220,15 +224,20 @@ def update_env_vars(env_vars, provider, model):
|
||||
Returns:
|
||||
- None
|
||||
"""
|
||||
api_key_var = ENV_VARS.get(
|
||||
provider,
|
||||
[
|
||||
click.prompt(
|
||||
f"Enter the environment variable name for your {provider.capitalize()} API key",
|
||||
type=str,
|
||||
)
|
||||
],
|
||||
)[0]
|
||||
provider_config = cast(
|
||||
list[str],
|
||||
ENV_VARS.get(
|
||||
provider,
|
||||
[
|
||||
click.prompt(
|
||||
f"Enter the environment variable name for your {provider.capitalize()} API key",
|
||||
type=str,
|
||||
)
|
||||
],
|
||||
),
|
||||
)
|
||||
|
||||
api_key_var = provider_config[0]
|
||||
|
||||
if api_key_var not in env_vars:
|
||||
try:
|
||||
@@ -246,7 +255,7 @@ def update_env_vars(env_vars, provider, model):
|
||||
return env_vars
|
||||
|
||||
|
||||
def write_env_file(folder_path, env_vars):
|
||||
def write_env_file(folder_path: Path, env_vars: dict[str, Any]) -> None:
|
||||
"""
|
||||
Writes environment variables to a .env file in the specified folder.
|
||||
|
||||
@@ -342,18 +351,18 @@ def get_crews(crew_path: str = "crew.py", require: bool = False) -> list[Crew]:
|
||||
return crew_instances
|
||||
|
||||
|
||||
def get_crew_instance(module_attr) -> Crew | None:
|
||||
def get_crew_instance(module_attr: Any) -> Crew | None:
|
||||
if (
|
||||
callable(module_attr)
|
||||
and hasattr(module_attr, "is_crew_class")
|
||||
and module_attr.is_crew_class
|
||||
):
|
||||
return module_attr().crew()
|
||||
return cast(Crew, module_attr().crew())
|
||||
try:
|
||||
if (ismethod(module_attr) or isfunction(module_attr)) and get_type_hints(
|
||||
module_attr
|
||||
).get("return") is Crew:
|
||||
return module_attr()
|
||||
return cast(Crew, module_attr())
|
||||
except Exception:
|
||||
return None
|
||||
|
||||
@@ -362,7 +371,7 @@ def get_crew_instance(module_attr) -> Crew | None:
|
||||
return None
|
||||
|
||||
|
||||
def fetch_crews(module_attr) -> list[Crew]:
|
||||
def fetch_crews(module_attr: Any) -> list[Crew]:
|
||||
crew_instances: list[Crew] = []
|
||||
|
||||
if crew_instance := get_crew_instance(module_attr):
|
||||
@@ -377,7 +386,7 @@ def fetch_crews(module_attr) -> list[Crew]:
|
||||
return crew_instances
|
||||
|
||||
|
||||
def is_valid_tool(obj):
|
||||
def is_valid_tool(obj: Any) -> bool:
|
||||
from crewai.tools.base_tool import Tool
|
||||
|
||||
if isclass(obj):
|
||||
@@ -389,7 +398,7 @@ def is_valid_tool(obj):
|
||||
return isinstance(obj, Tool)
|
||||
|
||||
|
||||
def extract_available_exports(dir_path: str = "src"):
|
||||
def extract_available_exports(dir_path: str = "src") -> list[dict[str, Any]]:
|
||||
"""
|
||||
Extract available tool classes from the project's __init__.py files.
|
||||
Only includes classes that inherit from BaseTool or functions decorated with @tool.
|
||||
@@ -419,7 +428,9 @@ def extract_available_exports(dir_path: str = "src"):
|
||||
raise SystemExit(1) from e
|
||||
|
||||
|
||||
def build_env_with_tool_repository_credentials(repository_handle: str):
|
||||
def build_env_with_tool_repository_credentials(
|
||||
repository_handle: str,
|
||||
) -> dict[str, Any]:
|
||||
repository_handle = repository_handle.upper().replace("-", "_")
|
||||
settings = Settings()
|
||||
|
||||
@@ -472,7 +483,7 @@ def _load_tools_from_init(init_file: Path) -> list[dict[str, Any]]:
|
||||
sys.modules.pop("temp_module", None)
|
||||
|
||||
|
||||
def _print_no_tools_warning():
|
||||
def _print_no_tools_warning() -> None:
|
||||
"""
|
||||
Display warning and usage instructions if no tools were found.
|
||||
"""
|
||||
|
||||
@@ -809,6 +809,7 @@ class Crew(FlowTrackable, BaseModel):
|
||||
"json_dict": output.json_dict,
|
||||
"output_format": output.output_format,
|
||||
"agent": output.agent,
|
||||
"messages": output.messages,
|
||||
},
|
||||
"task_index": task_index,
|
||||
"inputs": inputs,
|
||||
@@ -1236,6 +1237,7 @@ class Crew(FlowTrackable, BaseModel):
|
||||
pydantic=stored_output["pydantic"],
|
||||
json_dict=stored_output["json_dict"],
|
||||
output_format=stored_output["output_format"],
|
||||
messages=stored_output.get("messages", []),
|
||||
)
|
||||
self.tasks[i].output = task_output
|
||||
|
||||
|
||||
@@ -358,6 +358,7 @@ class LiteAgent(FlowTrackable, BaseModel):
|
||||
pydantic=formatted_result,
|
||||
agent_role=self.role,
|
||||
usage_metrics=usage_metrics.model_dump() if usage_metrics else None,
|
||||
messages=self._messages,
|
||||
)
|
||||
|
||||
# Process guardrail if set
|
||||
|
||||
@@ -6,6 +6,8 @@ from typing import Any
|
||||
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
from crewai.utilities.types import LLMMessage
|
||||
|
||||
|
||||
class LiteAgentOutput(BaseModel):
|
||||
"""Class that represents the result of a LiteAgent execution."""
|
||||
@@ -20,6 +22,7 @@ class LiteAgentOutput(BaseModel):
|
||||
usage_metrics: dict[str, Any] | None = Field(
|
||||
description="Token usage metrics for this execution", default=None
|
||||
)
|
||||
messages: list[LLMMessage] = Field(description="Messages of the agent", default=[])
|
||||
|
||||
def to_dict(self) -> dict[str, Any]:
|
||||
"""Convert pydantic_output to a dictionary."""
|
||||
|
||||
@@ -20,7 +20,8 @@ from typing import (
|
||||
)
|
||||
|
||||
from dotenv import load_dotenv
|
||||
from pydantic import BaseModel, Field, model_validator
|
||||
import httpx
|
||||
from pydantic import BaseModel, Field
|
||||
from typing_extensions import Self
|
||||
|
||||
from crewai.events.event_bus import crewai_event_bus
|
||||
@@ -37,6 +38,13 @@ from crewai.events.types.tool_usage_events import (
|
||||
ToolUsageStartedEvent,
|
||||
)
|
||||
from crewai.llms.base_llm import BaseLLM
|
||||
from crewai.llms.constants import (
|
||||
ANTHROPIC_MODELS,
|
||||
AZURE_MODELS,
|
||||
BEDROCK_MODELS,
|
||||
GEMINI_MODELS,
|
||||
OPENAI_MODELS,
|
||||
)
|
||||
from crewai.utilities import InternalInstructor
|
||||
from crewai.utilities.exceptions.context_window_exceeding_exception import (
|
||||
LLMContextLengthExceededError,
|
||||
@@ -53,6 +61,7 @@ if TYPE_CHECKING:
|
||||
from litellm.utils import supports_response_schema
|
||||
|
||||
from crewai.agent.core import Agent
|
||||
from crewai.llms.hooks.base import BaseInterceptor
|
||||
from crewai.task import Task
|
||||
from crewai.tools.base_tool import BaseTool
|
||||
from crewai.utilities.types import LLMMessage
|
||||
@@ -318,152 +327,67 @@ class AccumulatedToolArgs(BaseModel):
|
||||
|
||||
|
||||
class LLM(BaseLLM):
|
||||
completion_cost: float | None = Field(
|
||||
default=None, description="The completion cost of the LLM."
|
||||
)
|
||||
top_p: float | None = Field(
|
||||
default=None, description="Sampling probability threshold."
|
||||
)
|
||||
n: int | None = Field(
|
||||
default=None, description="Number of completions to generate."
|
||||
)
|
||||
max_completion_tokens: int | None = Field(
|
||||
default=None,
|
||||
description="Maximum number of tokens to generate in the completion.",
|
||||
)
|
||||
max_tokens: int | None = Field(
|
||||
default=None,
|
||||
description="Maximum number of tokens allowed in the prompt + completion.",
|
||||
)
|
||||
presence_penalty: float | None = Field(
|
||||
default=None, description="Penalty on the presence penalty."
|
||||
)
|
||||
frequency_penalty: float | None = Field(
|
||||
default=None, description="Penalty on the frequency penalty."
|
||||
)
|
||||
logit_bias: dict[int, float] | None = Field(
|
||||
default=None,
|
||||
description="Modifies the likelihood of specified tokens appearing in the completion.",
|
||||
)
|
||||
response_format: type[BaseModel] | None = Field(
|
||||
default=None,
|
||||
description="Pydantic model class for structured response parsing.",
|
||||
)
|
||||
seed: int | None = Field(
|
||||
default=None,
|
||||
description="Random seed for reproducibility.",
|
||||
)
|
||||
logprobs: int | None = Field(
|
||||
default=None,
|
||||
description="Number of top logprobs to return.",
|
||||
)
|
||||
top_logprobs: int | None = Field(
|
||||
default=None,
|
||||
description="Number of top logprobs to return.",
|
||||
)
|
||||
api_base: str | None = Field(
|
||||
default=None,
|
||||
description="Base URL for the API endpoint.",
|
||||
)
|
||||
api_version: str | None = Field(
|
||||
default=None,
|
||||
description="API version to use.",
|
||||
)
|
||||
callbacks: list[Any] = Field(
|
||||
default_factory=list,
|
||||
description="List of callback handlers for LLM events.",
|
||||
)
|
||||
reasoning_effort: Literal["none", "low", "medium", "high"] | None = Field(
|
||||
default=None,
|
||||
description="Level of reasoning effort for the LLM.",
|
||||
)
|
||||
context_window_size: int = Field(
|
||||
default=0,
|
||||
description="The context window size of the LLM.",
|
||||
)
|
||||
is_anthropic: bool = Field(
|
||||
default=False,
|
||||
description="Indicates if the model is from Anthropic provider.",
|
||||
)
|
||||
supports_function_calling: bool = Field(
|
||||
default=False,
|
||||
description="Indicates if the model supports function calling.",
|
||||
)
|
||||
supports_stop_words: bool = Field(
|
||||
default=False,
|
||||
description="Indicates if the model supports stop words.",
|
||||
)
|
||||
|
||||
@model_validator(mode="after")
|
||||
def initialize_client(self) -> Self:
|
||||
self.is_anthropic = any(
|
||||
prefix in self.model.lower() for prefix in ANTHROPIC_PREFIXES
|
||||
)
|
||||
try:
|
||||
provider = self._get_custom_llm_provider()
|
||||
self.supports_function_calling = litellm.utils.supports_function_calling(
|
||||
self.model, custom_llm_provider=provider
|
||||
)
|
||||
except Exception as e:
|
||||
logging.error(f"Failed to check function calling support: {e!s}")
|
||||
self.supports_function_calling = False
|
||||
try:
|
||||
params = get_supported_openai_params(model=self.model)
|
||||
self.supports_stop_words = params is not None and "stop" in params
|
||||
except Exception as e:
|
||||
logging.error(f"Failed to get supported params: {e!s}")
|
||||
self.supports_stop_words = False
|
||||
|
||||
with suppress_warnings():
|
||||
callback_types = [type(callback) for callback in self.callbacks]
|
||||
for callback in litellm.success_callback[:]:
|
||||
if type(callback) in callback_types:
|
||||
litellm.success_callback.remove(callback)
|
||||
|
||||
for callback in litellm._async_success_callback[:]:
|
||||
if type(callback) in callback_types:
|
||||
litellm._async_success_callback.remove(callback)
|
||||
|
||||
litellm.callbacks = self.callbacks
|
||||
|
||||
with suppress_warnings():
|
||||
success_callbacks_str = os.environ.get("LITELLM_SUCCESS_CALLBACKS", "")
|
||||
success_callbacks: list[str | Callable[..., Any] | CustomLogger] = []
|
||||
if success_callbacks_str:
|
||||
success_callbacks = [
|
||||
cb.strip() for cb in success_callbacks_str.split(",") if cb.strip()
|
||||
]
|
||||
|
||||
failure_callbacks_str = os.environ.get("LITELLM_FAILURE_CALLBACKS", "")
|
||||
if failure_callbacks_str:
|
||||
failure_callbacks: list[str | Callable[..., Any] | CustomLogger] = [
|
||||
cb.strip() for cb in failure_callbacks_str.split(",") if cb.strip()
|
||||
]
|
||||
|
||||
litellm.success_callback = success_callbacks
|
||||
litellm.failure_callback = failure_callbacks
|
||||
return self
|
||||
|
||||
    # @computed_field
    # @property
    # def is_anthropic(self) -> bool:
    #     """Determine if the model is from Anthropic provider."""
    #     anthropic_prefixes = ("anthropic/", "claude-", "claude/")
    #     return any(prefix in self.model.lower() for prefix in anthropic_prefixes)
    completion_cost: float | None = None

    def __new__(cls, model: str, is_litellm: bool = False, **kwargs: Any) -> LLM:
        """Factory method that routes to native SDK or falls back to LiteLLM.

        Routing priority:
        1. If 'provider' kwarg is present, use that provider with constants
        2. If only 'model' kwarg, use constants to infer provider
        3. If "/" in model name:
           - Check if prefix is a native provider (openai/anthropic/azure/bedrock/gemini)
           - If yes, validate model against constants
           - If valid, route to native SDK; otherwise route to LiteLLM
        """
        if not model or not isinstance(model, str):
            raise ValueError("Model must be a non-empty string")

        provider = model.partition("/")[0] if "/" in model else "openai"
        explicit_provider = kwargs.get("provider")

        native_class = cls._get_native_provider(provider)
        if explicit_provider:
            provider = explicit_provider
            use_native = True
            model_string = model
        elif "/" in model:
            prefix, _, model_part = model.partition("/")

            provider_mapping = {
                "openai": "openai",
                "anthropic": "anthropic",
                "claude": "anthropic",
                "azure": "azure",
                "azure_openai": "azure",
                "google": "gemini",
                "gemini": "gemini",
                "bedrock": "bedrock",
                "aws": "bedrock",
            }

            canonical_provider = provider_mapping.get(prefix.lower())

            if canonical_provider and cls._validate_model_in_constants(
                model_part, canonical_provider
            ):
                provider = canonical_provider
                use_native = True
                model_string = model_part
            else:
                provider = prefix
                use_native = False
                model_string = model_part
        else:
            provider = cls._infer_provider_from_model(model)
            use_native = True
            model_string = model

        native_class = cls._get_native_provider(provider) if use_native else None
        if native_class and not is_litellm and provider in SUPPORTED_NATIVE_PROVIDERS:
            try:
                model_string = model.partition("/")[2] if "/" in model else model
                # Remove 'provider' from kwargs if it exists to avoid duplicate keyword argument
                kwargs_copy = {k: v for k, v in kwargs.items() if k != 'provider'}
                return cast(
                    Self, native_class(model=model_string, provider=provider, **kwargs_copy)
                )
            except NotImplementedError:
                raise

@@ -480,6 +404,63 @@ class LLM(BaseLLM):
        instance.is_litellm = True
        return instance

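# --- Usage sketch (illustrative assumption, based on the routing rules documented above) ---
from crewai.llm import LLM

native_llm = LLM(model="gpt-4o")  # no prefix: provider inferred from constants -> native OpenAI path
prefixed_llm = LLM(model="anthropic/claude-3-5-haiku-latest")  # known prefix + known model -> native Anthropic path
fallback_llm = LLM(model="groq/llama-3.1-70b-versatile")  # unknown prefix -> LiteLLM fallback
forced_fallback = LLM(model="openai/gpt-4o", is_litellm=True)  # is_litellm=True always takes the LiteLLM path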
    @classmethod
    def _validate_model_in_constants(cls, model: str, provider: str) -> bool:
        """Validate if a model name exists in the provider's constants.

        Args:
            model: The model name to validate
            provider: The provider to check against (canonical name)

        Returns:
            True if the model exists in the provider's constants, False otherwise
        """
        if provider == "openai":
            return model in OPENAI_MODELS

        if provider == "anthropic" or provider == "claude":
            return model in ANTHROPIC_MODELS

        if provider == "gemini":
            return model in GEMINI_MODELS

        if provider == "bedrock":
            return model in BEDROCK_MODELS

        if provider == "azure":
            # Azure does not provide a list of available models; determine a better way to handle this
            return True

        return False

    @classmethod
    def _infer_provider_from_model(cls, model: str) -> str:
        """Infer the provider from the model name.

        Args:
            model: The model name without provider prefix

        Returns:
            The inferred provider name, defaults to "openai"
        """

        if model in OPENAI_MODELS:
            return "openai"

        if model in ANTHROPIC_MODELS:
            return "anthropic"

        if model in GEMINI_MODELS:
            return "gemini"

        if model in BEDROCK_MODELS:
            return "bedrock"

        if model in AZURE_MODELS:
            return "azure"

        return "openai"

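# --- Illustration of the inference helper above (expected results follow the constants lists) ---
assert LLM._infer_provider_from_model("gpt-4o-mini") == "openai"
assert LLM._infer_provider_from_model("claude-3-5-sonnet-latest") == "anthropic"
assert LLM._infer_provider_from_model("gemini-2.5-pro") == "gemini"
assert LLM._infer_provider_from_model("some-unknown-model") == "openai"  # default when nothing matches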
    @classmethod
    def _get_native_provider(cls, provider: str) -> type | None:
        """Get native provider class if available."""
@@ -512,6 +493,98 @@ class LLM(BaseLLM):

        return None

    def __init__(
        self,
        model: str,
        timeout: float | int | None = None,
        temperature: float | None = None,
        top_p: float | None = None,
        n: int | None = None,
        stop: str | list[str] | None = None,
        max_completion_tokens: int | None = None,
        max_tokens: int | float | None = None,
        presence_penalty: float | None = None,
        frequency_penalty: float | None = None,
        logit_bias: dict[int, float] | None = None,
        response_format: type[BaseModel] | None = None,
        seed: int | None = None,
        logprobs: int | None = None,
        top_logprobs: int | None = None,
        base_url: str | None = None,
        api_base: str | None = None,
        api_version: str | None = None,
        api_key: str | None = None,
        callbacks: list[Any] | None = None,
        reasoning_effort: Literal["none", "low", "medium", "high"] | None = None,
        stream: bool = False,
        interceptor: BaseInterceptor[httpx.Request, httpx.Response] | None = None,
        **kwargs: Any,
    ) -> None:
        """Initialize LLM instance.

        Note: This __init__ method is only called for fallback instances.
        Native provider instances handle their own initialization in their respective classes.
        """
        super().__init__(
            model=model,
            temperature=temperature,
            api_key=api_key,
            base_url=base_url,
            timeout=timeout,
            **kwargs,
        )
        self.model = model
        self.timeout = timeout
        self.temperature = temperature
        self.top_p = top_p
        self.n = n
        self.max_completion_tokens = max_completion_tokens
        self.max_tokens = max_tokens
        self.presence_penalty = presence_penalty
        self.frequency_penalty = frequency_penalty
        self.logit_bias = logit_bias
        self.response_format = response_format
        self.seed = seed
        self.logprobs = logprobs
        self.top_logprobs = top_logprobs
        self.base_url = base_url
        self.api_base = api_base
        self.api_version = api_version
        self.api_key = api_key
        self.callbacks = callbacks
        self.context_window_size = 0
        self.reasoning_effort = reasoning_effort
        self.additional_params = kwargs
        self.is_anthropic = self._is_anthropic_model(model)
        self.stream = stream
        self.interceptor = interceptor

        litellm.drop_params = True

        # Normalize self.stop to always be a list[str]
        if stop is None:
            self.stop: list[str] = []
        elif isinstance(stop, str):
            self.stop = [stop]
        else:
            self.stop = stop

        self.set_callbacks(callbacks or [])
        self.set_env_callbacks()

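# --- Usage sketch (assumption): constructing a LiteLLM-routed fallback instance directly ---
llm = LLM(
    model="mistral/mistral-large-latest",  # hypothetical non-native model string
    temperature=0.2,
    stop="Observation:",  # a single string is normalized to ["Observation:"] by the code above
    stream=False,
)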
    @staticmethod
    def _is_anthropic_model(model: str) -> bool:
        """Determine if the model is from Anthropic provider.

        Args:
            model: The model identifier string.

        Returns:
            bool: True if the model is from Anthropic, False otherwise.
        """
        anthropic_prefixes = ("anthropic/", "claude-", "claude/")
        return any(prefix in model.lower() for prefix in anthropic_prefixes)

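# --- Illustration: the prefix matching performed by _is_anthropic_model ---
assert LLM._is_anthropic_model("anthropic/claude-3-5-sonnet-20241022") is True
assert LLM._is_anthropic_model("claude-3-haiku-20240307") is True
assert LLM._is_anthropic_model("gpt-4o") is False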
    def _prepare_completion_params(
        self,
        messages: str | list[LLMMessage],
@@ -1225,6 +1298,8 @@ class LLM(BaseLLM):
                    message["role"] = msg_role
        # --- 5) Set up callbacks if provided
        with suppress_warnings():
            if callbacks and len(callbacks) > 0:
                self.set_callbacks(callbacks)
        try:
            # --- 6) Prepare parameters for the completion call
            params = self._prepare_completion_params(messages, tools)
@@ -1413,6 +1488,24 @@ class LLM(BaseLLM):
"Please remove response_format or use a supported model."
|
||||
)
|
||||
|
||||
def supports_function_calling(self) -> bool:
|
||||
try:
|
||||
provider = self._get_custom_llm_provider()
|
||||
return litellm.utils.supports_function_calling(
|
||||
self.model, custom_llm_provider=provider
|
||||
)
|
||||
except Exception as e:
|
||||
logging.error(f"Failed to check function calling support: {e!s}")
|
||||
return False
|
||||
|
||||
def supports_stop_words(self) -> bool:
|
||||
try:
|
||||
params = get_supported_openai_params(model=self.model)
|
||||
return params is not None and "stop" in params
|
||||
except Exception as e:
|
||||
logging.error(f"Failed to get supported params: {e!s}")
|
||||
return False
|
||||
|
||||
def get_context_window_size(self) -> int:
|
||||
"""
|
||||
Returns the context window size, using 75% of the maximum to avoid
|
||||
@@ -1442,6 +1535,60 @@ class LLM(BaseLLM):
|
||||
self.context_window_size = int(value * CONTEXT_WINDOW_USAGE_RATIO)
|
||||
return self.context_window_size
|
||||
|
||||
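# --- Usage sketch (assumption): checking model capabilities before building a request ---
llm = LLM(model="gpt-4o", is_litellm=True)
if llm.supports_function_calling():
    print("tools can be attached to this model")
if llm.supports_stop_words():
    print("stop sequences will be honored")
print(f"usable context window: {llm.get_context_window_size()} tokens")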
    @staticmethod
    def set_callbacks(callbacks: list[Any]) -> None:
        """
        Attempt to keep a single set of callbacks in litellm by removing old
        duplicates and adding new ones.
        """
        with suppress_warnings():
            callback_types = [type(callback) for callback in callbacks]
            for callback in litellm.success_callback[:]:
                if type(callback) in callback_types:
                    litellm.success_callback.remove(callback)

            for callback in litellm._async_success_callback[:]:
                if type(callback) in callback_types:
                    litellm._async_success_callback.remove(callback)

            litellm.callbacks = callbacks

    @staticmethod
    def set_env_callbacks() -> None:
        """Sets the success and failure callbacks for the LiteLLM library from environment variables.

        This method reads the `LITELLM_SUCCESS_CALLBACKS` and `LITELLM_FAILURE_CALLBACKS`
        environment variables, which should contain comma-separated lists of callback names.
        It then assigns these lists to `litellm.success_callback` and `litellm.failure_callback`,
        respectively.

        If the environment variables are not set or are empty, the corresponding callback lists
        will be set to empty lists.

        Examples:
            LITELLM_SUCCESS_CALLBACKS="langfuse,langsmith"
            LITELLM_FAILURE_CALLBACKS="langfuse"

        This will set `litellm.success_callback` to ["langfuse", "langsmith"] and
        `litellm.failure_callback` to ["langfuse"].
        """
        with suppress_warnings():
            success_callbacks_str = os.environ.get("LITELLM_SUCCESS_CALLBACKS", "")
            success_callbacks: list[str | Callable[..., Any] | CustomLogger] = []
            if success_callbacks_str:
                success_callbacks = [
                    cb.strip() for cb in success_callbacks_str.split(",") if cb.strip()
                ]

            failure_callbacks_str = os.environ.get("LITELLM_FAILURE_CALLBACKS", "")
            failure_callbacks: list[str | Callable[..., Any] | CustomLogger] = []  # defined even when the env var is unset
            if failure_callbacks_str:
                failure_callbacks = [
                    cb.strip() for cb in failure_callbacks_str.split(",") if cb.strip()
                ]

            litellm.success_callback = success_callbacks
            litellm.failure_callback = failure_callbacks

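# --- Usage sketch, following the docstring above: configuring LiteLLM callbacks via environment ---
import os

os.environ["LITELLM_SUCCESS_CALLBACKS"] = "langfuse,langsmith"
os.environ["LITELLM_FAILURE_CALLBACKS"] = "langfuse"
LLM.set_env_callbacks()
# litellm.success_callback is now ["langfuse", "langsmith"]
# litellm.failure_callback is now ["langfuse"]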
    def __copy__(self) -> LLM:
        """Create a shallow copy of the LLM instance."""
        # Filter out parameters that are already explicitly passed to avoid conflicts
@@ -1502,7 +1649,7 @@ class LLM(BaseLLM):
            **filtered_params,
        )

    def __deepcopy__(self, memo: dict[int, Any] | None) -> LLM:
        """Create a deep copy of the LLM instance."""
        import copy


@@ -13,9 +13,8 @@ import logging
|
||||
import re
|
||||
from typing import TYPE_CHECKING, Any, Final
|
||||
|
||||
from pydantic import AliasChoices, BaseModel, Field, PrivateAttr, field_validator
|
||||
from pydantic import BaseModel
|
||||
|
||||
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
|
||||
from crewai.events.event_bus import crewai_event_bus
|
||||
from crewai.events.types.llm_events import (
|
||||
LLMCallCompletedEvent,
|
||||
@@ -29,7 +28,6 @@ from crewai.events.types.tool_usage_events import (
|
||||
ToolUsageFinishedEvent,
|
||||
ToolUsageStartedEvent,
|
||||
)
|
||||
from crewai.llms.hooks import BaseInterceptor
|
||||
from crewai.types.usage_metrics import UsageMetrics
|
||||
|
||||
|
||||
@@ -45,7 +43,7 @@ DEFAULT_SUPPORTS_STOP_WORDS: Final[bool] = True
|
||||
_JSON_EXTRACTION_PATTERN: Final[re.Pattern[str]] = re.compile(r"\{.*}", re.DOTALL)
|
||||
|
||||
|
||||
class BaseLLM(BaseModel, ABC):
|
||||
class BaseLLM(ABC):
|
||||
"""Abstract base class for LLM implementations.
|
||||
|
||||
This class defines the interface that all LLM implementations must follow.
|
||||
@@ -57,105 +55,70 @@ class BaseLLM(BaseModel, ABC):
|
||||
implement proper validation for input parameters and provide clear error
|
||||
messages when things go wrong.
|
||||
|
||||
|
||||
Attributes:
|
||||
model: The model identifier/name.
|
||||
temperature: Optional temperature setting for response generation.
|
||||
stop: A list of stop sequences that the LLM should use to stop generation.
|
||||
additional_params: Additional provider-specific parameters.
|
||||
"""
|
||||
|
||||
provider: str | re.Pattern[str] = Field(
|
||||
default="openai", description="The provider of the LLM."
|
||||
)
|
||||
model: str = Field(description="The model identifier/name.")
|
||||
temperature: float | None = Field(
|
||||
default=None, ge=0, le=2, description="Temperature for response generation."
|
||||
)
|
||||
api_key: str | None = Field(default=None, description="API key for authentication.")
|
||||
base_url: str | None = Field(default=None, description="Base URL for API calls.")
|
||||
timeout: float | None = Field(default=None, description="Timeout for API calls.")
|
||||
max_retries: int = Field(
|
||||
default=2, description="Maximum number of API requests to make."
|
||||
)
|
||||
max_tokens: int | None = Field(
|
||||
default=None, description="Maximum tokens for response generation."
|
||||
)
|
||||
stream: bool | None = Field(default=False, description="Stream the API requests.")
|
||||
client: Any = Field(description="Underlying LLM client instance.")
|
||||
interceptor: BaseInterceptor[Any, Any] | None = Field(
|
||||
default=None,
|
||||
description="An optional HTTPX interceptor for modifying requests/responses.",
|
||||
)
|
||||
client_params: dict[str, Any] = Field(
|
||||
default_factory=dict,
|
||||
description="Additional parameters for the underlying LLM client.",
|
||||
)
|
||||
supports_stop_words: bool = Field(
|
||||
default=DEFAULT_SUPPORTS_STOP_WORDS,
|
||||
description="Whether or not to support stop words.",
|
||||
)
|
||||
stop_sequences: list[str] = Field(
|
||||
default_factory=list,
|
||||
validation_alias=AliasChoices("stop_sequences", "stop"),
|
||||
description="Stop sequences for generation (synchronized with stop).",
|
||||
)
|
||||
is_litellm: bool = Field(
|
||||
default=False, description="Is this LLM implementation in litellm?"
|
||||
)
|
||||
additional_params: dict[str, Any] = Field(
|
||||
default_factory=dict,
|
||||
description="Additional parameters for LLM calls.",
|
||||
)
|
||||
_token_usage: TokenProcess = PrivateAttr(default_factory=TokenProcess)
|
||||
is_litellm: bool = False
|
||||
|
||||
@field_validator("provider", mode="before")
|
||||
@classmethod
|
||||
def extract_provider_from_model(
|
||||
cls, v: str | re.Pattern[str] | None, info: Any
|
||||
) -> str | re.Pattern[str]:
|
||||
"""Extract provider from model string if not explicitly provided.
|
||||
def __init__(
|
||||
self,
|
||||
model: str,
|
||||
temperature: float | None = None,
|
||||
api_key: str | None = None,
|
||||
base_url: str | None = None,
|
||||
provider: str | None = None,
|
||||
**kwargs: Any,
|
||||
) -> None:
|
||||
"""Initialize the BaseLLM with default attributes.
|
||||
|
||||
Args:
|
||||
v: Provided provider value (can be str, Pattern, or None)
|
||||
info: Validation info containing other field values
|
||||
|
||||
Returns:
|
||||
Provider name (str) or Pattern
|
||||
model: The model identifier/name.
|
||||
temperature: Optional temperature setting for response generation.
|
||||
stop: Optional list of stop sequences for generation.
|
||||
**kwargs: Additional provider-specific parameters.
|
||||
"""
|
||||
# If provider explicitly provided, validate and return it
|
||||
if v is not None:
|
||||
if not isinstance(v, (str, re.Pattern)):
|
||||
raise ValueError(f"Provider must be str or Pattern, got {type(v)}")
|
||||
return v
|
||||
if not model:
|
||||
raise ValueError("Model name is required and cannot be empty")
|
||||
|
||||
model: str = info.data.get("model", "")
|
||||
if "/" in model:
|
||||
return model.partition("/")[0]
|
||||
return "openai"
|
||||
self.model = model
|
||||
self.temperature = temperature
|
||||
self.api_key = api_key
|
||||
self.base_url = base_url
|
||||
# Store additional parameters for provider-specific use
|
||||
self.additional_params = kwargs
|
||||
self._provider = provider or "openai"
|
||||
|
||||
@field_validator("stop_sequences", mode="before")
|
||||
@classmethod
|
||||
def normalize_stop_sequences(
|
||||
cls, v: str | list[str] | set[str] | None
|
||||
) -> list[str]:
|
||||
"""Validate and normalize stop sequences.
|
||||
stop = kwargs.pop("stop", None)
|
||||
if stop is None:
|
||||
self.stop: list[str] = []
|
||||
elif isinstance(stop, str):
|
||||
self.stop = [stop]
|
||||
elif isinstance(stop, list):
|
||||
self.stop = stop
|
||||
else:
|
||||
self.stop = []
|
||||
|
||||
Converts string to list and handles None values.
|
||||
AliasChoices handles accepting both 'stop' and 'stop_sequences' parameter names.
|
||||
"""
|
||||
if v is None:
|
||||
return []
|
||||
if isinstance(v, str):
|
||||
return [v]
|
||||
if isinstance(v, set):
|
||||
return list(v)
|
||||
if isinstance(v, list):
|
||||
return v
|
||||
return []
|
||||
self._token_usage = {
|
||||
"total_tokens": 0,
|
||||
"prompt_tokens": 0,
|
||||
"completion_tokens": 0,
|
||||
"successful_requests": 0,
|
||||
"cached_prompt_tokens": 0,
|
||||
}
|
||||
|
||||
@property
|
||||
def stop(self) -> list[str]:
|
||||
"""Alias for stop_sequences to maintain backward compatibility."""
|
||||
return self.stop_sequences
|
||||
def provider(self) -> str:
|
||||
"""Get the provider of the LLM."""
|
||||
return self._provider
|
||||
|
||||
@provider.setter
|
||||
def provider(self, value: str) -> None:
|
||||
"""Set the provider of the LLM."""
|
||||
self._provider = value
|
||||
|
||||
@abstractmethod
|
||||
def call(
|
||||
@@ -208,6 +171,14 @@ class BaseLLM(BaseModel, ABC):
|
||||
"""
|
||||
return tools
|
||||
|
||||
def supports_stop_words(self) -> bool:
|
||||
"""Check if the LLM supports stop words.
|
||||
|
||||
Returns:
|
||||
True if the LLM supports stop words, False otherwise.
|
||||
"""
|
||||
return DEFAULT_SUPPORTS_STOP_WORDS
|
||||
|
||||
def _supports_stop_words_implementation(self) -> bool:
|
||||
"""Check if stop words are configured for this LLM instance.
|
||||
|
||||
@@ -535,7 +506,7 @@ class BaseLLM(BaseModel, ABC):
|
||||
"""
|
||||
if "/" in model:
|
||||
return model.partition("/")[0]
|
||||
return "openai"
|
||||
return "openai" # Default provider
|
||||
|
||||
    def _track_token_usage_internal(self, usage_data: dict[str, Any]) -> None:
        """Track token usage internally in the LLM instance.
@@ -564,11 +535,11 @@ class BaseLLM(BaseModel, ABC):
            or 0
        )

        self._token_usage["prompt_tokens"] += prompt_tokens
        self._token_usage["completion_tokens"] += completion_tokens
        self._token_usage["total_tokens"] += prompt_tokens + completion_tokens
        self._token_usage["successful_requests"] += 1
        self._token_usage["cached_prompt_tokens"] += cached_tokens

    def get_token_usage_summary(self) -> UsageMetrics:
        """Get summary of token usage for this LLM instance.
@@ -576,10 +547,4 @@ class BaseLLM(BaseModel, ABC):
        Returns:
            Dictionary with token usage totals
        """
        return UsageMetrics(**self._token_usage)

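# --- Illustration (assumption): reading the accumulated counters back out ---
metrics = llm.get_token_usage_summary()
print(metrics.prompt_tokens, metrics.completion_tokens, metrics.total_tokens)
print(metrics.successful_requests, metrics.cached_prompt_tokens)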
558
lib/crewai/src/crewai/llms/constants.py
Normal file
@@ -0,0 +1,558 @@
from typing import Literal, TypeAlias


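# A possible non-duplicating alternative (sketch): each provider list below repeats the
# names already spelled out in its Literal alias, so the runtime list could instead be
# derived from the alias via typing.get_args (abbreviated stand-in names used here).
from typing import Literal, TypeAlias, get_args

ExampleModels: TypeAlias = Literal["gpt-4o", "gpt-4o-mini", "o3-mini"]
EXAMPLE_MODELS: list[ExampleModels] = list(get_args(ExampleModels))  # stays in sync automatically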
OpenAIModels: TypeAlias = Literal[
|
||||
"gpt-3.5-turbo",
|
||||
"gpt-3.5-turbo-0125",
|
||||
"gpt-3.5-turbo-0301",
|
||||
"gpt-3.5-turbo-0613",
|
||||
"gpt-3.5-turbo-1106",
|
||||
"gpt-3.5-turbo-16k",
|
||||
"gpt-3.5-turbo-16k-0613",
|
||||
"gpt-3.5-turbo-instruct",
|
||||
"gpt-3.5-turbo-instruct-0914",
|
||||
"gpt-4",
|
||||
"gpt-4-0125-preview",
|
||||
"gpt-4-0314",
|
||||
"gpt-4-0613",
|
||||
"gpt-4-1106-preview",
|
||||
"gpt-4-32k",
|
||||
"gpt-4-32k-0314",
|
||||
"gpt-4-32k-0613",
|
||||
"gpt-4-turbo",
|
||||
"gpt-4-turbo-2024-04-09",
|
||||
"gpt-4-turbo-preview",
|
||||
"gpt-4-vision-preview",
|
||||
"gpt-4.1",
|
||||
"gpt-4.1-2025-04-14",
|
||||
"gpt-4.1-mini",
|
||||
"gpt-4.1-mini-2025-04-14",
|
||||
"gpt-4.1-nano",
|
||||
"gpt-4.1-nano-2025-04-14",
|
||||
"gpt-4o",
|
||||
"gpt-4o-2024-05-13",
|
||||
"gpt-4o-2024-08-06",
|
||||
"gpt-4o-2024-11-20",
|
||||
"gpt-4o-audio-preview",
|
||||
"gpt-4o-audio-preview-2024-10-01",
|
||||
"gpt-4o-audio-preview-2024-12-17",
|
||||
"gpt-4o-audio-preview-2025-06-03",
|
||||
"gpt-4o-mini",
|
||||
"gpt-4o-mini-2024-07-18",
|
||||
"gpt-4o-mini-audio-preview",
|
||||
"gpt-4o-mini-audio-preview-2024-12-17",
|
||||
"gpt-4o-mini-realtime-preview",
|
||||
"gpt-4o-mini-realtime-preview-2024-12-17",
|
||||
"gpt-4o-mini-search-preview",
|
||||
"gpt-4o-mini-search-preview-2025-03-11",
|
||||
"gpt-4o-mini-transcribe",
|
||||
"gpt-4o-mini-tts",
|
||||
"gpt-4o-realtime-preview",
|
||||
"gpt-4o-realtime-preview-2024-10-01",
|
||||
"gpt-4o-realtime-preview-2024-12-17",
|
||||
"gpt-4o-realtime-preview-2025-06-03",
|
||||
"gpt-4o-search-preview",
|
||||
"gpt-4o-search-preview-2025-03-11",
|
||||
"gpt-4o-transcribe",
|
||||
"gpt-4o-transcribe-diarize",
|
||||
"gpt-5",
|
||||
"gpt-5-2025-08-07",
|
||||
"gpt-5-chat",
|
||||
"gpt-5-chat-latest",
|
||||
"gpt-5-codex",
|
||||
"gpt-5-mini",
|
||||
"gpt-5-mini-2025-08-07",
|
||||
"gpt-5-nano",
|
||||
"gpt-5-nano-2025-08-07",
|
||||
"gpt-5-pro",
|
||||
"gpt-5-pro-2025-10-06",
|
||||
"gpt-5-search-api",
|
||||
"gpt-5-search-api-2025-10-14",
|
||||
"gpt-audio",
|
||||
"gpt-audio-2025-08-28",
|
||||
"gpt-audio-mini",
|
||||
"gpt-audio-mini-2025-10-06",
|
||||
"gpt-image-1",
|
||||
"gpt-image-1-mini",
|
||||
"gpt-realtime",
|
||||
"gpt-realtime-2025-08-28",
|
||||
"gpt-realtime-mini",
|
||||
"gpt-realtime-mini-2025-10-06",
|
||||
"o1",
|
||||
"o1-preview",
|
||||
"o1-2024-12-17",
|
||||
"o1-mini",
|
||||
"o1-mini-2024-09-12",
|
||||
"o1-pro",
|
||||
"o1-pro-2025-03-19",
|
||||
"o3-mini",
|
||||
"o3",
|
||||
"o4-mini",
|
||||
"whisper-1",
|
||||
]
|
||||
OPENAI_MODELS: list[OpenAIModels] = [
|
||||
"gpt-3.5-turbo",
|
||||
"gpt-3.5-turbo-0125",
|
||||
"gpt-3.5-turbo-0301",
|
||||
"gpt-3.5-turbo-0613",
|
||||
"gpt-3.5-turbo-1106",
|
||||
"gpt-3.5-turbo-16k",
|
||||
"gpt-3.5-turbo-16k-0613",
|
||||
"gpt-3.5-turbo-instruct",
|
||||
"gpt-3.5-turbo-instruct-0914",
|
||||
"gpt-4",
|
||||
"gpt-4-0125-preview",
|
||||
"gpt-4-0314",
|
||||
"gpt-4-0613",
|
||||
"gpt-4-1106-preview",
|
||||
"gpt-4-32k",
|
||||
"gpt-4-32k-0314",
|
||||
"gpt-4-32k-0613",
|
||||
"gpt-4-turbo",
|
||||
"gpt-4-turbo-2024-04-09",
|
||||
"gpt-4-turbo-preview",
|
||||
"gpt-4-vision-preview",
|
||||
"gpt-4.1",
|
||||
"gpt-4.1-2025-04-14",
|
||||
"gpt-4.1-mini",
|
||||
"gpt-4.1-mini-2025-04-14",
|
||||
"gpt-4.1-nano",
|
||||
"gpt-4.1-nano-2025-04-14",
|
||||
"gpt-4o",
|
||||
"gpt-4o-2024-05-13",
|
||||
"gpt-4o-2024-08-06",
|
||||
"gpt-4o-2024-11-20",
|
||||
"gpt-4o-audio-preview",
|
||||
"gpt-4o-audio-preview-2024-10-01",
|
||||
"gpt-4o-audio-preview-2024-12-17",
|
||||
"gpt-4o-audio-preview-2025-06-03",
|
||||
"gpt-4o-mini",
|
||||
"gpt-4o-mini-2024-07-18",
|
||||
"gpt-4o-mini-audio-preview",
|
||||
"gpt-4o-mini-audio-preview-2024-12-17",
|
||||
"gpt-4o-mini-realtime-preview",
|
||||
"gpt-4o-mini-realtime-preview-2024-12-17",
|
||||
"gpt-4o-mini-search-preview",
|
||||
"gpt-4o-mini-search-preview-2025-03-11",
|
||||
"gpt-4o-mini-transcribe",
|
||||
"gpt-4o-mini-tts",
|
||||
"gpt-4o-realtime-preview",
|
||||
"gpt-4o-realtime-preview-2024-10-01",
|
||||
"gpt-4o-realtime-preview-2024-12-17",
|
||||
"gpt-4o-realtime-preview-2025-06-03",
|
||||
"gpt-4o-search-preview",
|
||||
"gpt-4o-search-preview-2025-03-11",
|
||||
"gpt-4o-transcribe",
|
||||
"gpt-4o-transcribe-diarize",
|
||||
"gpt-5",
|
||||
"gpt-5-2025-08-07",
|
||||
"gpt-5-chat",
|
||||
"gpt-5-chat-latest",
|
||||
"gpt-5-codex",
|
||||
"gpt-5-mini",
|
||||
"gpt-5-mini-2025-08-07",
|
||||
"gpt-5-nano",
|
||||
"gpt-5-nano-2025-08-07",
|
||||
"gpt-5-pro",
|
||||
"gpt-5-pro-2025-10-06",
|
||||
"gpt-5-search-api",
|
||||
"gpt-5-search-api-2025-10-14",
|
||||
"gpt-audio",
|
||||
"gpt-audio-2025-08-28",
|
||||
"gpt-audio-mini",
|
||||
"gpt-audio-mini-2025-10-06",
|
||||
"gpt-image-1",
|
||||
"gpt-image-1-mini",
|
||||
"gpt-realtime",
|
||||
"gpt-realtime-2025-08-28",
|
||||
"gpt-realtime-mini",
|
||||
"gpt-realtime-mini-2025-10-06",
|
||||
"o1",
|
||||
"o1-preview",
|
||||
"o1-2024-12-17",
|
||||
"o1-mini",
|
||||
"o1-mini-2024-09-12",
|
||||
"o1-pro",
|
||||
"o1-pro-2025-03-19",
|
||||
"o3-mini",
|
||||
"o3",
|
||||
"o4-mini",
|
||||
"whisper-1",
|
||||
]
|
||||
|
||||
|
||||
AnthropicModels: TypeAlias = Literal[
|
||||
"claude-3-7-sonnet-latest",
|
||||
"claude-3-7-sonnet-20250219",
|
||||
"claude-3-5-haiku-latest",
|
||||
"claude-3-5-haiku-20241022",
|
||||
"claude-haiku-4-5",
|
||||
"claude-haiku-4-5-20251001",
|
||||
"claude-sonnet-4-20250514",
|
||||
"claude-sonnet-4-0",
|
||||
"claude-4-sonnet-20250514",
|
||||
"claude-sonnet-4-5",
|
||||
"claude-sonnet-4-5-20250929",
|
||||
"claude-3-5-sonnet-latest",
|
||||
"claude-3-5-sonnet-20241022",
|
||||
"claude-3-5-sonnet-20240620",
|
||||
"claude-opus-4-0",
|
||||
"claude-opus-4-20250514",
|
||||
"claude-4-opus-20250514",
|
||||
"claude-opus-4-1",
|
||||
"claude-opus-4-1-20250805",
|
||||
"claude-3-opus-latest",
|
||||
"claude-3-opus-20240229",
|
||||
"claude-3-sonnet-20240229",
|
||||
"claude-3-haiku-latest",
|
||||
"claude-3-haiku-20240307",
|
||||
]
|
||||
ANTHROPIC_MODELS: list[AnthropicModels] = [
|
||||
"claude-3-7-sonnet-latest",
|
||||
"claude-3-7-sonnet-20250219",
|
||||
"claude-3-5-haiku-latest",
|
||||
"claude-3-5-haiku-20241022",
|
||||
"claude-haiku-4-5",
|
||||
"claude-haiku-4-5-20251001",
|
||||
"claude-sonnet-4-20250514",
|
||||
"claude-sonnet-4-0",
|
||||
"claude-4-sonnet-20250514",
|
||||
"claude-sonnet-4-5",
|
||||
"claude-sonnet-4-5-20250929",
|
||||
"claude-3-5-sonnet-latest",
|
||||
"claude-3-5-sonnet-20241022",
|
||||
"claude-3-5-sonnet-20240620",
|
||||
"claude-opus-4-0",
|
||||
"claude-opus-4-20250514",
|
||||
"claude-4-opus-20250514",
|
||||
"claude-opus-4-1",
|
||||
"claude-opus-4-1-20250805",
|
||||
"claude-3-opus-latest",
|
||||
"claude-3-opus-20240229",
|
||||
"claude-3-sonnet-20240229",
|
||||
"claude-3-haiku-latest",
|
||||
"claude-3-haiku-20240307",
|
||||
]
|
||||
|
||||
GeminiModels: TypeAlias = Literal[
|
||||
"gemini-2.5-pro",
|
||||
"gemini-2.5-pro-preview-03-25",
|
||||
"gemini-2.5-pro-preview-05-06",
|
||||
"gemini-2.5-pro-preview-06-05",
|
||||
"gemini-2.5-flash",
|
||||
"gemini-2.5-flash-preview-05-20",
|
||||
"gemini-2.5-flash-preview-04-17",
|
||||
"gemini-2.5-flash-image",
|
||||
"gemini-2.5-flash-image-preview",
|
||||
"gemini-2.5-flash-lite",
|
||||
"gemini-2.5-flash-lite-preview-06-17",
|
||||
"gemini-2.5-flash-preview-09-2025",
|
||||
"gemini-2.5-flash-lite-preview-09-2025",
|
||||
"gemini-2.5-flash-preview-tts",
|
||||
"gemini-2.5-pro-preview-tts",
|
||||
"gemini-2.5-computer-use-preview-10-2025",
|
||||
"gemini-2.0-flash",
|
||||
"gemini-2.0-flash-001",
|
||||
"gemini-2.0-flash-exp",
|
||||
"gemini-2.0-flash-exp-image-generation",
|
||||
"gemini-2.0-flash-lite",
|
||||
"gemini-2.0-flash-lite-001",
|
||||
"gemini-2.0-flash-lite-preview",
|
||||
"gemini-2.0-flash-lite-preview-02-05",
|
||||
"gemini-2.0-flash-preview-image-generation",
|
||||
"gemini-2.0-flash-thinking-exp",
|
||||
"gemini-2.0-flash-thinking-exp-01-21",
|
||||
"gemini-2.0-flash-thinking-exp-1219",
|
||||
"gemini-2.0-pro-exp",
|
||||
"gemini-2.0-pro-exp-02-05",
|
||||
"gemini-exp-1206",
|
||||
"gemini-1.5-pro",
|
||||
"gemini-1.5-flash",
|
||||
"gemini-1.5-flash-8b",
|
||||
"gemini-flash-latest",
|
||||
"gemini-flash-lite-latest",
|
||||
"gemini-pro-latest",
|
||||
"gemini-2.0-flash-live-001",
|
||||
"gemini-live-2.5-flash-preview",
|
||||
"gemini-2.5-flash-live-preview",
|
||||
"gemini-robotics-er-1.5-preview",
|
||||
"gemini-gemma-2-27b-it",
|
||||
"gemini-gemma-2-9b-it",
|
||||
"gemma-3-1b-it",
|
||||
"gemma-3-4b-it",
|
||||
"gemma-3-12b-it",
|
||||
"gemma-3-27b-it",
|
||||
"gemma-3n-e2b-it",
|
||||
"gemma-3n-e4b-it",
|
||||
"learnlm-2.0-flash-experimental",
|
||||
]
|
||||
GEMINI_MODELS: list[GeminiModels] = [
|
||||
"gemini-2.5-pro",
|
||||
"gemini-2.5-pro-preview-03-25",
|
||||
"gemini-2.5-pro-preview-05-06",
|
||||
"gemini-2.5-pro-preview-06-05",
|
||||
"gemini-2.5-flash",
|
||||
"gemini-2.5-flash-preview-05-20",
|
||||
"gemini-2.5-flash-preview-04-17",
|
||||
"gemini-2.5-flash-image",
|
||||
"gemini-2.5-flash-image-preview",
|
||||
"gemini-2.5-flash-lite",
|
||||
"gemini-2.5-flash-lite-preview-06-17",
|
||||
"gemini-2.5-flash-preview-09-2025",
|
||||
"gemini-2.5-flash-lite-preview-09-2025",
|
||||
"gemini-2.5-flash-preview-tts",
|
||||
"gemini-2.5-pro-preview-tts",
|
||||
"gemini-2.5-computer-use-preview-10-2025",
|
||||
"gemini-2.0-flash",
|
||||
"gemini-2.0-flash-001",
|
||||
"gemini-2.0-flash-exp",
|
||||
"gemini-2.0-flash-exp-image-generation",
|
||||
"gemini-2.0-flash-lite",
|
||||
"gemini-2.0-flash-lite-001",
|
||||
"gemini-2.0-flash-lite-preview",
|
||||
"gemini-2.0-flash-lite-preview-02-05",
|
||||
"gemini-2.0-flash-preview-image-generation",
|
||||
"gemini-2.0-flash-thinking-exp",
|
||||
"gemini-2.0-flash-thinking-exp-01-21",
|
||||
"gemini-2.0-flash-thinking-exp-1219",
|
||||
"gemini-2.0-pro-exp",
|
||||
"gemini-2.0-pro-exp-02-05",
|
||||
"gemini-exp-1206",
|
||||
"gemini-1.5-pro",
|
||||
"gemini-1.5-flash",
|
||||
"gemini-1.5-flash-8b",
|
||||
"gemini-flash-latest",
|
||||
"gemini-flash-lite-latest",
|
||||
"gemini-pro-latest",
|
||||
"gemini-2.0-flash-live-001",
|
||||
"gemini-live-2.5-flash-preview",
|
||||
"gemini-2.5-flash-live-preview",
|
||||
"gemini-robotics-er-1.5-preview",
|
||||
"gemini-gemma-2-27b-it",
|
||||
"gemini-gemma-2-9b-it",
|
||||
"gemma-3-1b-it",
|
||||
"gemma-3-4b-it",
|
||||
"gemma-3-12b-it",
|
||||
"gemma-3-27b-it",
|
||||
"gemma-3n-e2b-it",
|
||||
"gemma-3n-e4b-it",
|
||||
"learnlm-2.0-flash-experimental",
|
||||
]
|
||||
|
||||
|
||||
AzureModels: TypeAlias = Literal[
|
||||
"gpt-3.5-turbo",
|
||||
"gpt-3.5-turbo-0301",
|
||||
"gpt-3.5-turbo-0613",
|
||||
"gpt-3.5-turbo-16k",
|
||||
"gpt-3.5-turbo-16k-0613",
|
||||
"gpt-35-turbo",
|
||||
"gpt-35-turbo-0125",
|
||||
"gpt-35-turbo-1106",
|
||||
"gpt-35-turbo-16k-0613",
|
||||
"gpt-35-turbo-instruct-0914",
|
||||
"gpt-4",
|
||||
"gpt-4-0314",
|
||||
"gpt-4-0613",
|
||||
"gpt-4-1106-preview",
|
||||
"gpt-4-0125-preview",
|
||||
"gpt-4-32k",
|
||||
"gpt-4-32k-0314",
|
||||
"gpt-4-32k-0613",
|
||||
"gpt-4-turbo",
|
||||
"gpt-4-turbo-2024-04-09",
|
||||
"gpt-4-vision",
|
||||
"gpt-4o",
|
||||
"gpt-4o-2024-05-13",
|
||||
"gpt-4o-2024-08-06",
|
||||
"gpt-4o-2024-11-20",
|
||||
"gpt-4o-mini",
|
||||
"gpt-5",
|
||||
"o1",
|
||||
"o1-mini",
|
||||
"o1-preview",
|
||||
"o3-mini",
|
||||
"o3",
|
||||
"o4-mini",
|
||||
]
|
||||
AZURE_MODELS: list[AzureModels] = [
|
||||
"gpt-3.5-turbo",
|
||||
"gpt-3.5-turbo-0301",
|
||||
"gpt-3.5-turbo-0613",
|
||||
"gpt-3.5-turbo-16k",
|
||||
"gpt-3.5-turbo-16k-0613",
|
||||
"gpt-35-turbo",
|
||||
"gpt-35-turbo-0125",
|
||||
"gpt-35-turbo-1106",
|
||||
"gpt-35-turbo-16k-0613",
|
||||
"gpt-35-turbo-instruct-0914",
|
||||
"gpt-4",
|
||||
"gpt-4-0314",
|
||||
"gpt-4-0613",
|
||||
"gpt-4-1106-preview",
|
||||
"gpt-4-0125-preview",
|
||||
"gpt-4-32k",
|
||||
"gpt-4-32k-0314",
|
||||
"gpt-4-32k-0613",
|
||||
"gpt-4-turbo",
|
||||
"gpt-4-turbo-2024-04-09",
|
||||
"gpt-4-vision",
|
||||
"gpt-4o",
|
||||
"gpt-4o-2024-05-13",
|
||||
"gpt-4o-2024-08-06",
|
||||
"gpt-4o-2024-11-20",
|
||||
"gpt-4o-mini",
|
||||
"gpt-5",
|
||||
"o1",
|
||||
"o1-mini",
|
||||
"o1-preview",
|
||||
"o3-mini",
|
||||
"o3",
|
||||
"o4-mini",
|
||||
]
|
||||
|
||||
|
||||
BedrockModels: TypeAlias = Literal[
|
||||
"ai21.jamba-1-5-large-v1:0",
|
||||
"ai21.jamba-1-5-mini-v1:0",
|
||||
"amazon.nova-lite-v1:0",
|
||||
"amazon.nova-lite-v1:0:24k",
|
||||
"amazon.nova-lite-v1:0:300k",
|
||||
"amazon.nova-micro-v1:0",
|
||||
"amazon.nova-micro-v1:0:128k",
|
||||
"amazon.nova-micro-v1:0:24k",
|
||||
"amazon.nova-premier-v1:0",
|
||||
"amazon.nova-premier-v1:0:1000k",
|
||||
"amazon.nova-premier-v1:0:20k",
|
||||
"amazon.nova-premier-v1:0:8k",
|
||||
"amazon.nova-premier-v1:0:mm",
|
||||
"amazon.nova-pro-v1:0",
|
||||
"amazon.nova-pro-v1:0:24k",
|
||||
"amazon.nova-pro-v1:0:300k",
|
||||
"amazon.titan-text-express-v1",
|
||||
"amazon.titan-text-express-v1:0:8k",
|
||||
"amazon.titan-text-lite-v1",
|
||||
"amazon.titan-text-lite-v1:0:4k",
|
||||
"amazon.titan-tg1-large",
|
||||
"anthropic.claude-3-5-haiku-20241022-v1:0",
|
||||
"anthropic.claude-3-5-sonnet-20240620-v1:0",
|
||||
"anthropic.claude-3-5-sonnet-20241022-v2:0",
|
||||
"anthropic.claude-3-7-sonnet-20250219-v1:0",
|
||||
"anthropic.claude-3-haiku-20240307-v1:0",
|
||||
"anthropic.claude-3-haiku-20240307-v1:0:200k",
|
||||
"anthropic.claude-3-haiku-20240307-v1:0:48k",
|
||||
"anthropic.claude-3-opus-20240229-v1:0",
|
||||
"anthropic.claude-3-opus-20240229-v1:0:12k",
|
||||
"anthropic.claude-3-opus-20240229-v1:0:200k",
|
||||
"anthropic.claude-3-opus-20240229-v1:0:28k",
|
||||
"anthropic.claude-3-sonnet-20240229-v1:0",
|
||||
"anthropic.claude-3-sonnet-20240229-v1:0:200k",
|
||||
"anthropic.claude-3-sonnet-20240229-v1:0:28k",
|
||||
"anthropic.claude-haiku-4-5-20251001-v1:0",
|
||||
"anthropic.claude-instant-v1:2:100k",
|
||||
"anthropic.claude-opus-4-1-20250805-v1:0",
|
||||
"anthropic.claude-opus-4-20250514-v1:0",
|
||||
"anthropic.claude-sonnet-4-20250514-v1:0",
|
||||
"anthropic.claude-sonnet-4-5-20250929-v1:0",
|
||||
"anthropic.claude-v2:0:100k",
|
||||
"anthropic.claude-v2:0:18k",
|
||||
"anthropic.claude-v2:1:18k",
|
||||
"anthropic.claude-v2:1:200k",
|
||||
"cohere.command-r-plus-v1:0",
|
||||
"cohere.command-r-v1:0",
|
||||
"cohere.rerank-v3-5:0",
|
||||
"deepseek.r1-v1:0",
|
||||
"meta.llama3-1-70b-instruct-v1:0",
|
||||
"meta.llama3-1-8b-instruct-v1:0",
|
||||
"meta.llama3-2-11b-instruct-v1:0",
|
||||
"meta.llama3-2-1b-instruct-v1:0",
|
||||
"meta.llama3-2-3b-instruct-v1:0",
|
||||
"meta.llama3-2-90b-instruct-v1:0",
|
||||
"meta.llama3-3-70b-instruct-v1:0",
|
||||
"meta.llama3-70b-instruct-v1:0",
|
||||
"meta.llama3-8b-instruct-v1:0",
|
||||
"meta.llama4-maverick-17b-instruct-v1:0",
|
||||
"meta.llama4-scout-17b-instruct-v1:0",
|
||||
"mistral.mistral-7b-instruct-v0:2",
|
||||
"mistral.mistral-large-2402-v1:0",
|
||||
"mistral.mistral-small-2402-v1:0",
|
||||
"mistral.mixtral-8x7b-instruct-v0:1",
|
||||
"mistral.pixtral-large-2502-v1:0",
|
||||
"openai.gpt-oss-120b-1:0",
|
||||
"openai.gpt-oss-20b-1:0",
|
||||
"qwen.qwen3-32b-v1:0",
|
||||
"qwen.qwen3-coder-30b-a3b-v1:0",
|
||||
"twelvelabs.pegasus-1-2-v1:0",
|
||||
]
|
||||
BEDROCK_MODELS: list[BedrockModels] = [
|
||||
"ai21.jamba-1-5-large-v1:0",
|
||||
"ai21.jamba-1-5-mini-v1:0",
|
||||
"amazon.nova-lite-v1:0",
|
||||
"amazon.nova-lite-v1:0:24k",
|
||||
"amazon.nova-lite-v1:0:300k",
|
||||
"amazon.nova-micro-v1:0",
|
||||
"amazon.nova-micro-v1:0:128k",
|
||||
"amazon.nova-micro-v1:0:24k",
|
||||
"amazon.nova-premier-v1:0",
|
||||
"amazon.nova-premier-v1:0:1000k",
|
||||
"amazon.nova-premier-v1:0:20k",
|
||||
"amazon.nova-premier-v1:0:8k",
|
||||
"amazon.nova-premier-v1:0:mm",
|
||||
"amazon.nova-pro-v1:0",
|
||||
"amazon.nova-pro-v1:0:24k",
|
||||
"amazon.nova-pro-v1:0:300k",
|
||||
"amazon.titan-text-express-v1",
|
||||
"amazon.titan-text-express-v1:0:8k",
|
||||
"amazon.titan-text-lite-v1",
|
||||
"amazon.titan-text-lite-v1:0:4k",
|
||||
"amazon.titan-tg1-large",
|
||||
"anthropic.claude-3-5-haiku-20241022-v1:0",
|
||||
"anthropic.claude-3-5-sonnet-20240620-v1:0",
|
||||
"anthropic.claude-3-5-sonnet-20241022-v2:0",
|
||||
"anthropic.claude-3-7-sonnet-20250219-v1:0",
|
||||
"anthropic.claude-3-haiku-20240307-v1:0",
|
||||
"anthropic.claude-3-haiku-20240307-v1:0:200k",
|
||||
"anthropic.claude-3-haiku-20240307-v1:0:48k",
|
||||
"anthropic.claude-3-opus-20240229-v1:0",
|
||||
"anthropic.claude-3-opus-20240229-v1:0:12k",
|
||||
"anthropic.claude-3-opus-20240229-v1:0:200k",
|
||||
"anthropic.claude-3-opus-20240229-v1:0:28k",
|
||||
"anthropic.claude-3-sonnet-20240229-v1:0",
|
||||
"anthropic.claude-3-sonnet-20240229-v1:0:200k",
|
||||
"anthropic.claude-3-sonnet-20240229-v1:0:28k",
|
||||
"anthropic.claude-haiku-4-5-20251001-v1:0",
|
||||
"anthropic.claude-instant-v1:2:100k",
|
||||
"anthropic.claude-opus-4-1-20250805-v1:0",
|
||||
"anthropic.claude-opus-4-20250514-v1:0",
|
||||
"anthropic.claude-sonnet-4-20250514-v1:0",
|
||||
"anthropic.claude-sonnet-4-5-20250929-v1:0",
|
||||
"anthropic.claude-v2:0:100k",
|
||||
"anthropic.claude-v2:0:18k",
|
||||
"anthropic.claude-v2:1:18k",
|
||||
"anthropic.claude-v2:1:200k",
|
||||
"cohere.command-r-plus-v1:0",
|
||||
"cohere.command-r-v1:0",
|
||||
"cohere.rerank-v3-5:0",
|
||||
"deepseek.r1-v1:0",
|
||||
"meta.llama3-1-70b-instruct-v1:0",
|
||||
"meta.llama3-1-8b-instruct-v1:0",
|
||||
"meta.llama3-2-11b-instruct-v1:0",
|
||||
"meta.llama3-2-1b-instruct-v1:0",
|
||||
"meta.llama3-2-3b-instruct-v1:0",
|
||||
"meta.llama3-2-90b-instruct-v1:0",
|
||||
"meta.llama3-3-70b-instruct-v1:0",
|
||||
"meta.llama3-70b-instruct-v1:0",
|
||||
"meta.llama3-8b-instruct-v1:0",
|
||||
"meta.llama4-maverick-17b-instruct-v1:0",
|
||||
"meta.llama4-scout-17b-instruct-v1:0",
|
||||
"mistral.mistral-7b-instruct-v0:2",
|
||||
"mistral.mistral-large-2402-v1:0",
|
||||
"mistral.mistral-small-2402-v1:0",
|
||||
"mistral.mixtral-8x7b-instruct-v0:1",
|
||||
"mistral.pixtral-large-2502-v1:0",
|
||||
"openai.gpt-oss-120b-1:0",
|
||||
"openai.gpt-oss-20b-1:0",
|
||||
"qwen.qwen3-32b-v1:0",
|
||||
"qwen.qwen3-coder-30b-a3b-v1:0",
|
||||
"twelvelabs.pegasus-1-2-v1:0",
|
||||
]
|
||||
@@ -5,14 +5,11 @@ import logging
|
||||
import os
|
||||
from typing import TYPE_CHECKING, Any, cast
|
||||
|
||||
from pydantic import BaseModel, Field, PrivateAttr, computed_field, model_validator
|
||||
from typing_extensions import Self
|
||||
from pydantic import BaseModel
|
||||
|
||||
from crewai.events.types.llm_events import LLMCallType
|
||||
from crewai.llm import CONTEXT_WINDOW_USAGE_RATIO
|
||||
from crewai.llms.base_llm import BaseLLM
|
||||
from crewai.llms.hooks.transport import HTTPTransport
|
||||
from crewai.llms.providers.utils.common import safe_tool_conversion
|
||||
from crewai.utilities.agent_utils import is_context_length_exceeded
|
||||
from crewai.utilities.exceptions.context_window_exceeding_exception import (
|
||||
LLMContextLengthExceededError,
|
||||
@@ -21,8 +18,7 @@ from crewai.utilities.types import LLMMessage
|
||||
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from crewai.agent import Agent
|
||||
from crewai.task import Task
|
||||
from crewai.llms.hooks.base import BaseInterceptor
|
||||
|
||||
try:
|
||||
from anthropic import Anthropic
|
||||
@@ -35,19 +31,6 @@ except ImportError:
|
||||
) from None
|
||||
|
||||
|
||||
ANTHROPIC_CONTEXT_WINDOWS: dict[str, int] = {
|
||||
"claude-3-5-sonnet": 200000,
|
||||
"claude-3-5-haiku": 200000,
|
||||
"claude-3-opus": 200000,
|
||||
"claude-3-sonnet": 200000,
|
||||
"claude-3-haiku": 200000,
|
||||
"claude-3-7-sonnet": 200000,
|
||||
"claude-2.1": 200000,
|
||||
"claude-2": 100000,
|
||||
"claude-instant": 100000,
|
||||
}
|
||||
|
||||
|
||||
class AnthropicCompletion(BaseLLM):
|
||||
"""Anthropic native completion implementation.
|
||||
|
||||
@@ -55,69 +38,110 @@ class AnthropicCompletion(BaseLLM):
|
||||
offering native tool use, streaming support, and proper message formatting.
|
||||
"""
|
||||
|
||||
model: str = Field(
|
||||
default="claude-3-5-sonnet-20241022",
|
||||
description="Anthropic model name (e.g., 'claude-3-5-sonnet-20241022')",
|
||||
)
|
||||
max_tokens: int = Field(
|
||||
default=4096,
|
||||
description="Maximum number of allowed tokens in response.",
|
||||
)
|
||||
top_p: float | None = Field(
|
||||
default=None,
|
||||
description="Nucleus sampling parameter.",
|
||||
)
|
||||
_client: Anthropic = PrivateAttr(
|
||||
default_factory=Anthropic,
|
||||
)
|
||||
def __init__(
|
||||
self,
|
||||
model: str = "claude-3-5-sonnet-20241022",
|
||||
api_key: str | None = None,
|
||||
base_url: str | None = None,
|
||||
timeout: float | None = None,
|
||||
max_retries: int = 2,
|
||||
temperature: float | None = None,
|
||||
max_tokens: int = 4096, # Required for Anthropic
|
||||
top_p: float | None = None,
|
||||
stop_sequences: list[str] | None = None,
|
||||
stream: bool = False,
|
||||
client_params: dict[str, Any] | None = None,
|
||||
interceptor: BaseInterceptor[httpx.Request, httpx.Response] | None = None,
|
||||
**kwargs: Any,
|
||||
):
|
||||
"""Initialize Anthropic chat completion client.
|
||||
|
||||
@model_validator(mode="after")
|
||||
def initialize_client(self) -> Self:
|
||||
"""Initialize the Anthropic client after Pydantic validation.
|
||||
|
||||
This runs after all field validation is complete, ensuring that:
|
||||
- All BaseLLM fields are set (model, temperature, stop_sequences, etc.)
|
||||
- Field validators have run (stop_sequences is normalized to set[str])
|
||||
- API key and other configuration is ready
|
||||
Args:
|
||||
model: Anthropic model name (e.g., 'claude-3-5-sonnet-20241022')
|
||||
api_key: Anthropic API key (defaults to ANTHROPIC_API_KEY env var)
|
||||
base_url: Custom base URL for Anthropic API
|
||||
timeout: Request timeout in seconds
|
||||
max_retries: Maximum number of retries
|
||||
temperature: Sampling temperature (0-1)
|
||||
max_tokens: Maximum tokens in response (required for Anthropic)
|
||||
top_p: Nucleus sampling parameter
|
||||
stop_sequences: Stop sequences (Anthropic uses stop_sequences, not stop)
|
||||
stream: Enable streaming responses
|
||||
client_params: Additional parameters for the Anthropic client
|
||||
interceptor: HTTP interceptor for modifying requests/responses at transport level.
|
||||
**kwargs: Additional parameters
|
||||
"""
|
||||
super().__init__(
|
||||
model=model, temperature=temperature, stop=stop_sequences or [], **kwargs
|
||||
)
|
||||
|
||||
# Client params
|
||||
self.interceptor = interceptor
|
||||
self.client_params = client_params
|
||||
self.base_url = base_url
|
||||
self.timeout = timeout
|
||||
self.max_retries = max_retries
|
||||
|
||||
self.client = Anthropic(**self._get_client_params())
|
||||
|
||||
# Store completion parameters
|
||||
self.max_tokens = max_tokens
|
||||
self.top_p = top_p
|
||||
self.stream = stream
|
||||
self.stop_sequences = stop_sequences or []
|
||||
|
||||
# Model-specific settings
|
||||
self.is_claude_3 = "claude-3" in model.lower()
|
||||
self.supports_tools = self.is_claude_3 # Claude 3+ supports tool use
|
||||
|
||||
@property
|
||||
def stop(self) -> list[str]:
|
||||
"""Get stop sequences sent to the API."""
|
||||
return self.stop_sequences
|
||||
|
||||
@stop.setter
|
||||
def stop(self, value: list[str] | str | None) -> None:
|
||||
"""Set stop sequences.
|
||||
|
||||
Synchronizes stop_sequences to ensure values set by CrewAgentExecutor
|
||||
are properly sent to the Anthropic API.
|
||||
|
||||
Args:
|
||||
value: Stop sequences as a list, single string, or None
|
||||
"""
|
||||
if value is None:
|
||||
self.stop_sequences = []
|
||||
elif isinstance(value, str):
|
||||
self.stop_sequences = [value]
|
||||
elif isinstance(value, list):
|
||||
self.stop_sequences = value
|
||||
else:
|
||||
self.stop_sequences = []
|
||||
|
||||
def _get_client_params(self) -> dict[str, Any]:
|
||||
"""Get client parameters."""
|
||||
|
||||
if self.api_key is None:
|
||||
self.api_key = os.getenv("ANTHROPIC_API_KEY")
|
||||
if self.api_key is None:
|
||||
raise ValueError("ANTHROPIC_API_KEY is required")
|
||||
|
||||
params = self.model_dump(
|
||||
include={"api_key", "base_url", "timeout", "max_retries"},
|
||||
exclude_none=True,
|
||||
)
|
||||
client_params = {
|
||||
"api_key": self.api_key,
|
||||
"base_url": self.base_url,
|
||||
"timeout": self.timeout,
|
||||
"max_retries": self.max_retries,
|
||||
}
|
||||
|
||||
if self.interceptor:
|
||||
transport = HTTPTransport(interceptor=self.interceptor)
|
||||
http_client = httpx.Client(transport=transport)
|
||||
params["http_client"] = http_client
|
||||
client_params["http_client"] = http_client # type: ignore[assignment]
|
||||
|
||||
if self.client_params:
|
||||
params.update(self.client_params)
|
||||
client_params.update(self.client_params)
|
||||
|
||||
self._client = Anthropic(**params)
|
||||
return self
|
||||
|
||||
@computed_field # type: ignore[prop-decorator]
|
||||
@property
|
||||
def is_claude_3(self) -> bool:
|
||||
"""Check if the model is Claude 3 or higher."""
|
||||
return "claude-3" in self.model.lower()
|
||||
|
||||
@computed_field # type: ignore[prop-decorator]
|
||||
@property
|
||||
def supports_tools(self) -> bool:
|
||||
"""Check if the model supports tool use."""
|
||||
return self.is_claude_3
|
||||
|
||||
@computed_field # type: ignore[prop-decorator]
|
||||
@property
|
||||
def supports_function_calling(self) -> bool:
|
||||
"""Check if the model supports function calling."""
|
||||
return self.supports_tools
|
||||
return client_params
|
||||
|
||||
def call(
|
||||
self,
|
||||
@@ -125,8 +149,8 @@ class AnthropicCompletion(BaseLLM):
|
||||
tools: list[dict[str, Any]] | None = None,
|
||||
callbacks: list[Any] | None = None,
|
||||
available_functions: dict[str, Any] | None = None,
|
||||
from_task: Task | None = None,
|
||||
from_agent: Agent | None = None,
|
||||
from_task: Any | None = None,
|
||||
from_agent: Any | None = None,
|
||||
response_model: type[BaseModel] | None = None,
|
||||
) -> str | Any:
|
||||
"""Call Anthropic messages API.
|
||||
@@ -205,21 +229,25 @@ class AnthropicCompletion(BaseLLM):
|
||||
Returns:
|
||||
Parameters dictionary for Anthropic API
|
||||
"""
|
||||
params = self.model_dump(
|
||||
include={
|
||||
"model",
|
||||
"max_tokens",
|
||||
"stream",
|
||||
"temperature",
|
||||
"top_p",
|
||||
"stop_sequences",
|
||||
},
|
||||
)
|
||||
params["messages"] = messages
|
||||
params = {
|
||||
"model": self.model,
|
||||
"messages": messages,
|
||||
"max_tokens": self.max_tokens,
|
||||
"stream": self.stream,
|
||||
}
|
||||
|
||||
# Add system message if present
|
||||
if system_message:
|
||||
params["system"] = system_message
|
||||
|
||||
# Add optional parameters if set
|
||||
if self.temperature is not None:
|
||||
params["temperature"] = self.temperature
|
||||
if self.top_p is not None:
|
||||
params["top_p"] = self.top_p
|
||||
if self.stop_sequences:
|
||||
params["stop_sequences"] = self.stop_sequences
|
||||
|
||||
# Handle tools for Claude 3+
|
||||
if tools and self.supports_tools:
|
||||
params["tools"] = self._convert_tools_for_interference(tools)
|
||||
@@ -238,6 +266,8 @@ class AnthropicCompletion(BaseLLM):
|
||||
continue
|
||||
|
||||
try:
|
||||
from crewai.llms.providers.utils.common import safe_tool_conversion
|
||||
|
||||
name, description, parameters = safe_tool_conversion(tool, "Anthropic")
|
||||
except (ImportError, KeyError, ValueError) as e:
|
||||
logging.error(f"Error converting tool to Anthropic format: {e}")
|
||||
@@ -311,8 +341,8 @@ class AnthropicCompletion(BaseLLM):
|
||||
self,
|
||||
params: dict[str, Any],
|
||||
available_functions: dict[str, Any] | None = None,
|
||||
from_task: Task | None = None,
|
||||
from_agent: Agent | None = None,
|
||||
from_task: Any | None = None,
|
||||
from_agent: Any | None = None,
|
||||
response_model: type[BaseModel] | None = None,
|
||||
) -> str | Any:
|
||||
"""Handle non-streaming message completion."""
|
||||
@@ -327,7 +357,7 @@ class AnthropicCompletion(BaseLLM):
|
||||
params["tool_choice"] = {"type": "tool", "name": "structured_output"}
|
||||
|
||||
try:
|
||||
response: Message = self._client.messages.create(**params)
|
||||
response: Message = self.client.messages.create(**params)
|
||||
|
||||
except Exception as e:
|
||||
if is_context_length_exceeded(e):
|
||||
@@ -399,8 +429,8 @@ class AnthropicCompletion(BaseLLM):
|
||||
self,
|
||||
params: dict[str, Any],
|
||||
available_functions: dict[str, Any] | None = None,
|
||||
from_task: Task | None = None,
|
||||
from_agent: Agent | None = None,
|
||||
from_task: Any | None = None,
|
||||
from_agent: Any | None = None,
|
||||
response_model: type[BaseModel] | None = None,
|
||||
) -> str:
|
||||
"""Handle streaming message completion."""
|
||||
@@ -421,7 +451,7 @@ class AnthropicCompletion(BaseLLM):
|
||||
stream_params = {k: v for k, v in params.items() if k != "stream"}
|
||||
|
||||
# Make streaming API call
|
||||
with self._client.messages.stream(**stream_params) as stream:
|
||||
with self.client.messages.stream(**stream_params) as stream:
|
||||
for event in stream:
|
||||
if hasattr(event, "delta") and hasattr(event.delta, "text"):
|
||||
text_delta = event.delta.text
|
||||
@@ -495,8 +525,8 @@ class AnthropicCompletion(BaseLLM):
|
||||
tool_uses: list[ToolUseBlock],
|
||||
params: dict[str, Any],
|
||||
available_functions: dict[str, Any],
|
||||
from_task: Task | None = None,
|
||||
from_agent: Agent | None = None,
|
||||
from_task: Any | None = None,
|
||||
from_agent: Any | None = None,
|
||||
) -> str:
|
||||
"""Handle the complete tool use conversation flow.
|
||||
|
||||
@@ -549,7 +579,7 @@ class AnthropicCompletion(BaseLLM):
|
||||
|
||||
try:
|
||||
# Send tool results back to Claude for final response
|
||||
final_response: Message = self._client.messages.create(**follow_up_params)
|
||||
final_response: Message = self.client.messages.create(**follow_up_params)
|
||||
|
||||
# Track token usage for follow-up call
|
||||
follow_up_usage = self._extract_anthropic_token_usage(final_response)
|
||||
@@ -596,24 +626,48 @@ class AnthropicCompletion(BaseLLM):
|
||||
return tool_results[0]["content"]
|
||||
raise e
|
||||
|
||||
def supports_function_calling(self) -> bool:
|
||||
"""Check if the model supports function calling."""
|
||||
return self.supports_tools
|
||||
|
||||
def supports_stop_words(self) -> bool:
|
||||
"""Check if the model supports stop words."""
|
||||
return True # All Claude models support stop sequences
|
||||
|
||||
def get_context_window_size(self) -> int:
|
||||
"""Get the context window size for the model."""
|
||||
from crewai.llm import CONTEXT_WINDOW_USAGE_RATIO
|
||||
|
||||
# Context window sizes for Anthropic models
|
||||
context_windows = {
|
||||
"claude-3-5-sonnet": 200000,
|
||||
"claude-3-5-haiku": 200000,
|
||||
"claude-3-opus": 200000,
|
||||
"claude-3-sonnet": 200000,
|
||||
"claude-3-haiku": 200000,
|
||||
"claude-3-7-sonnet": 200000,
|
||||
"claude-2.1": 200000,
|
||||
"claude-2": 100000,
|
||||
"claude-instant": 100000,
|
||||
}
|
||||
|
||||
# Find the best match for the model name
|
||||
for model_prefix, size in ANTHROPIC_CONTEXT_WINDOWS.items():
|
||||
for model_prefix, size in context_windows.items():
|
||||
if self.model.startswith(model_prefix):
|
||||
return int(size * CONTEXT_WINDOW_USAGE_RATIO)
|
||||
|
||||
# Default context window size for Claude models
|
||||
return int(200000 * CONTEXT_WINDOW_USAGE_RATIO)
|
||||
|
||||
    def _extract_anthropic_token_usage(self, response: Message) -> dict[str, Any]:
        """Extract token usage from Anthropic response."""
        if hasattr(response, "usage") and response.usage:
            usage = response.usage
            input_tokens = getattr(usage, "input_tokens", 0)
            output_tokens = getattr(usage, "output_tokens", 0)
            return {
                "input_tokens": input_tokens,
                "output_tokens": output_tokens,
                "total_tokens": input_tokens + output_tokens,
            }
        return {"total_tokens": 0}
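# --- Illustration: shape of the dict returned by the extractor above ---
usage = llm._extract_anthropic_token_usage(response)
# e.g. {"input_tokens": 812, "output_tokens": 153, "total_tokens": 965}
# or {"total_tokens": 0} when the response carries no usage block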
|
||||
@@ -1,14 +1,12 @@
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
from typing import TYPE_CHECKING, Any, cast
|
||||
import os
|
||||
from typing import Any, cast
|
||||
|
||||
from pydantic import BaseModel, Field, PrivateAttr, computed_field, model_validator
|
||||
from typing_extensions import Self
|
||||
from pydantic import BaseModel
|
||||
|
||||
from crewai.events.types.llm_events import LLMCallType
|
||||
from crewai.llm import CONTEXT_WINDOW_USAGE_RATIO, LLM_CONTEXT_WINDOW_SIZES
|
||||
from crewai.llms.base_llm import BaseLLM
|
||||
from crewai.llms.hooks.base import BaseInterceptor
|
||||
from crewai.utilities.agent_utils import is_context_length_exceeded
|
||||
from crewai.utilities.exceptions.context_window_exceeding_exception import (
|
||||
LLMContextLengthExceededError,
|
||||
@@ -16,11 +14,6 @@ from crewai.utilities.exceptions.context_window_exceeding_exception import (
|
||||
from crewai.utilities.types import LLMMessage
|
||||
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from crewai.agent import Agent
|
||||
from crewai.task import Task
|
||||
|
||||
|
||||
try:
|
||||
from google import genai # type: ignore[import-untyped]
|
||||
from google.genai import types # type: ignore[import-untyped]
|
||||
@@ -31,27 +24,6 @@ except ImportError:
|
||||
) from None
|
||||
|
||||
|
||||
GEMINI_CONTEXT_WINDOWS: dict[str, int] = {
|
||||
"gemini-2.0-flash": 1048576, # 1M tokens
|
||||
"gemini-2.0-flash-thinking": 32768,
|
||||
"gemini-2.0-flash-lite": 1048576,
|
||||
"gemini-2.5-flash": 1048576,
|
||||
"gemini-2.5-pro": 1048576,
|
||||
"gemini-1.5-pro": 2097152, # 2M tokens
|
||||
"gemini-1.5-flash": 1048576,
|
||||
"gemini-1.5-flash-8b": 1048576,
|
||||
"gemini-1.0-pro": 32768,
|
||||
"gemma-3-1b": 32000,
|
||||
"gemma-3-4b": 128000,
|
||||
"gemma-3-12b": 128000,
|
||||
"gemma-3-27b": 128000,
|
||||
}
|
||||
|
||||
# Context window validation constraints
|
||||
MIN_CONTEXT_WINDOW: int = 1024
|
||||
MAX_CONTEXT_WINDOW: int = 2097152
|
||||
|
||||
|
||||
class GeminiCompletion(BaseLLM):
|
||||
"""Google Gemini native completion implementation.
|
||||
|
||||
@@ -59,140 +31,78 @@ class GeminiCompletion(BaseLLM):
|
||||
offering native function calling, streaming support, and proper Gemini formatting.
|
||||
"""
|
||||
|
||||
model: str = Field(
|
||||
default="gemini-2.0-flash-001",
|
||||
description="Gemini model name (e.g., 'gemini-2.0-flash-001', 'gemini-1.5-pro')",
|
||||
)
|
||||
project: str | None = Field(
|
||||
default=None,
|
||||
description="Google Cloud project ID (for Vertex AI)",
|
||||
)
|
||||
location: str = Field(
|
||||
default="us-central1",
|
||||
description="Google Cloud location (for Vertex AI)",
|
||||
)
|
||||
top_p: float | None = Field(
|
||||
default=None,
|
||||
description="Nucleus sampling parameter",
|
||||
)
|
||||
top_k: int | None = Field(
|
||||
default=None,
|
||||
description="Top-k sampling parameter",
|
||||
)
|
||||
max_output_tokens: int | None = Field(
|
||||
default=None,
|
||||
description="Maximum tokens in response",
|
||||
)
|
||||
safety_settings: dict[str, Any] | None = Field(
|
||||
default=None,
|
||||
description="Safety filter settings",
|
||||
)
|
||||
_client: genai.Client = PrivateAttr( # type: ignore[no-any-unimported]
|
||||
default_factory=genai.Client,
|
||||
)
|
||||
def __init__(
|
||||
self,
|
||||
model: str = "gemini-2.0-flash-001",
|
||||
api_key: str | None = None,
|
||||
project: str | None = None,
|
||||
location: str | None = None,
|
||||
temperature: float | None = None,
|
||||
top_p: float | None = None,
|
||||
top_k: int | None = None,
|
||||
max_output_tokens: int | None = None,
|
||||
stop_sequences: list[str] | None = None,
|
||||
stream: bool = False,
|
||||
safety_settings: dict[str, Any] | None = None,
|
||||
client_params: dict[str, Any] | None = None,
|
||||
interceptor: BaseInterceptor[Any, Any] | None = None,
|
||||
**kwargs: Any,
|
||||
):
|
||||
"""Initialize Google Gemini chat completion client.
|
||||
|
||||
@model_validator(mode="after")
|
||||
def initialize_client(self) -> Self:
|
||||
"""Initialize the Anthropic client after Pydantic validation.
|
||||
|
||||
This runs after all field validation is complete, ensuring that:
|
||||
- All BaseLLM fields are set (model, temperature, stop_sequences, etc.)
|
||||
- Field validators have run (stop_sequences is normalized to set[str])
|
||||
- API key and other configuration is ready
|
||||
Args:
|
||||
model: Gemini model name (e.g., 'gemini-2.0-flash-001', 'gemini-1.5-pro')
|
||||
api_key: Google API key (defaults to GOOGLE_API_KEY or GEMINI_API_KEY env var)
|
||||
project: Google Cloud project ID (for Vertex AI)
|
||||
location: Google Cloud location (for Vertex AI, defaults to 'us-central1')
|
||||
temperature: Sampling temperature (0-2)
|
||||
top_p: Nucleus sampling parameter
|
||||
top_k: Top-k sampling parameter
|
||||
max_output_tokens: Maximum tokens in response
|
||||
stop_sequences: Stop sequences
|
||||
stream: Enable streaming responses
|
||||
safety_settings: Safety filter settings
|
||||
client_params: Additional parameters to pass to the Google Gen AI Client constructor.
|
||||
Supports parameters like http_options, credentials, debug_config, etc.
|
||||
interceptor: HTTP interceptor (not yet supported for Gemini).
|
||||
**kwargs: Additional parameters
|
||||
"""
|
||||
self._client = genai.Client(**self._get_client_params())
|
||||
return self
|
||||
if interceptor is not None:
|
||||
raise NotImplementedError(
|
||||
"HTTP interceptors are not yet supported for Google Gemini provider. "
|
||||
"Interceptors are currently supported for OpenAI and Anthropic providers only."
|
||||
)
|
||||
|
||||
# def __init__(
|
||||
# self,
|
||||
# model: str = "gemini-2.0-flash-001",
|
||||
# api_key: str | None = None,
|
||||
# project: str | None = None,
|
||||
# location: str | None = None,
|
||||
# temperature: float | None = None,
|
||||
# top_p: float | None = None,
|
||||
# top_k: int | None = None,
|
||||
# max_output_tokens: int | None = None,
|
||||
# stop_sequences: list[str] | None = None,
|
||||
# stream: bool = False,
|
||||
# safety_settings: dict[str, Any] | None = None,
|
||||
# client_params: dict[str, Any] | None = None,
|
||||
# interceptor: BaseInterceptor[Any, Any] | None = None,
|
||||
# **kwargs: Any,
|
||||
# # ):
|
||||
# """Initialize Google Gemini chat completion client.
|
||||
#
|
||||
# Args:
|
||||
# model: Gemini model name (e.g., 'gemini-2.0-flash-001', 'gemini-1.5-pro')
|
||||
# api_key: Google API key (defaults to GOOGLE_API_KEY or GEMINI_API_KEY env var)
|
||||
# project: Google Cloud project ID (for Vertex AI)
|
||||
# location: Google Cloud location (for Vertex AI, defaults to 'us-central1')
|
||||
# temperature: Sampling temperature (0-2)
|
||||
# top_p: Nucleus sampling parameter
|
||||
# top_k: Top-k sampling parameter
|
||||
# max_output_tokens: Maximum tokens in response
|
||||
# stop_sequences: Stop sequences
|
||||
# stream: Enable streaming responses
|
||||
# safety_settings: Safety filter settings
|
||||
# client_params: Additional parameters to pass to the Google Gen AI Client constructor.
|
||||
# Supports parameters like http_options, credentials, debug_config, etc.
|
||||
# interceptor: HTTP interceptor (not yet supported for Gemini).
|
||||
# **kwargs: Additional parameters
|
||||
# """
|
||||
# if interceptor is not None:
|
||||
# raise NotImplementedError(
|
||||
# "HTTP interceptors are not yet supported for Google Gemini provider. "
|
||||
# "Interceptors are currently supported for OpenAI and Anthropic providers only."
|
||||
# )
|
||||
#
|
||||
# super().__init__(
|
||||
# model=model, temperature=temperature, stop=stop_sequences or [], **kwargs
|
||||
# )
|
||||
#
|
||||
# # Store client params for later use
|
||||
# self.client_params = client_params or {}
|
||||
#
|
||||
# # Get API configuration with environment variable fallbacks
|
||||
# self.api_key = (
|
||||
# api_key or os.getenv("GOOGLE_API_KEY") or os.getenv("GEMINI_API_KEY")
|
||||
# )
|
||||
# self.project = project or os.getenv("GOOGLE_CLOUD_PROJECT")
|
||||
# self.location = location or os.getenv("GOOGLE_CLOUD_LOCATION") or "us-central1"
|
||||
#
|
||||
# use_vertexai = os.getenv("GOOGLE_GENAI_USE_VERTEXAI", "").lower() == "true"
|
||||
#
|
||||
# self.client = self._initialize_client(use_vertexai)
|
||||
#
|
||||
# # Store completion parameters
|
||||
# self.top_p = top_p
|
||||
# self.top_k = top_k
|
||||
# self.max_output_tokens = max_output_tokens
|
||||
# self.stream = stream
|
||||
# self.safety_settings = safety_settings or {}
|
||||
# self.stop_sequences = stop_sequences or []
|
||||
#
|
||||
# # Model-specific settings
|
||||
# self.is_gemini_2 = "gemini-2" in model.lower()
|
||||
# self.is_gemini_1_5 = "gemini-1.5" in model.lower()
|
||||
# self.supports_tools = self.is_gemini_1_5 or self.is_gemini_2
|
||||
super().__init__(
|
||||
model=model, temperature=temperature, stop=stop_sequences or [], **kwargs
|
||||
)
|
||||
|
||||
@computed_field # type: ignore[prop-decorator]
|
||||
@property
|
||||
def is_gemini_2(self) -> bool:
|
||||
"""Check if the model is Gemini 2.x."""
|
||||
return "gemini-2" in self.model.lower()
|
||||
# Store client params for later use
|
||||
self.client_params = client_params or {}
|
||||
|
||||
@computed_field # type: ignore[prop-decorator]
|
||||
@property
|
||||
def is_gemini_1_5(self) -> bool:
|
||||
"""Check if the model is Gemini 1.5.x."""
|
||||
return "gemini-1.5" in self.model.lower()
|
||||
# Get API configuration with environment variable fallbacks
|
||||
self.api_key = (
|
||||
api_key or os.getenv("GOOGLE_API_KEY") or os.getenv("GEMINI_API_KEY")
|
||||
)
|
||||
self.project = project or os.getenv("GOOGLE_CLOUD_PROJECT")
|
||||
self.location = location or os.getenv("GOOGLE_CLOUD_LOCATION") or "us-central1"
|
||||
|
||||
@computed_field # type: ignore[prop-decorator]
|
||||
@property
|
||||
def supports_tools(self) -> bool:
|
||||
"""Check if the model supports tool/function calling."""
|
||||
return self.is_gemini_1_5 or self.is_gemini_2
|
||||
use_vertexai = os.getenv("GOOGLE_GENAI_USE_VERTEXAI", "").lower() == "true"
|
||||
|
||||
self.client = self._initialize_client(use_vertexai)
|
||||
|
||||
# Store completion parameters
|
||||
self.top_p = top_p
|
||||
self.top_k = top_k
|
||||
self.max_output_tokens = max_output_tokens
|
||||
self.stream = stream
|
||||
self.safety_settings = safety_settings or {}
|
||||
self.stop_sequences = stop_sequences or []
|
||||
|
||||
# Model-specific settings
|
||||
self.is_gemini_2 = "gemini-2" in model.lower()
|
||||
self.is_gemini_1_5 = "gemini-1.5" in model.lower()
|
||||
self.supports_tools = self.is_gemini_1_5 or self.is_gemini_2
|
||||
|
||||
@property
|
||||
def stop(self) -> list[str]:
|
||||
@@ -232,12 +142,6 @@ class GeminiCompletion(BaseLLM):
|
||||
if self.client_params:
|
||||
client_params.update(self.client_params)
|
||||
|
||||
if self.interceptor:
|
||||
raise NotImplementedError(
|
||||
"HTTP interceptors are not yet supported for Google Gemini provider. "
|
||||
"Interceptors are currently supported for OpenAI and Anthropic providers only."
|
||||
)
|
||||
|
||||
if use_vertexai or self.project:
|
||||
client_params.update(
|
||||
{
|
||||
@@ -277,7 +181,7 @@ class GeminiCompletion(BaseLLM):
|
||||
|
||||
if (
|
||||
hasattr(self, "client")
|
||||
and hasattr(self._client, "vertexai")
|
||||
and hasattr(self.client, "vertexai")
|
||||
and self.client.vertexai
|
||||
):
|
||||
# Vertex AI configuration
|
||||
@@ -302,8 +206,8 @@ class GeminiCompletion(BaseLLM):
|
||||
tools: list[dict[str, Any]] | None = None,
|
||||
callbacks: list[Any] | None = None,
|
||||
available_functions: dict[str, Any] | None = None,
|
||||
from_task: Task | None = None,
|
||||
from_agent: Agent | None = None,
|
||||
from_task: Any | None = None,
|
||||
from_agent: Any | None = None,
|
||||
response_model: type[BaseModel] | None = None,
|
||||
) -> str | Any:
|
||||
"""Call Google Gemini generate content API.
|
||||
@@ -390,16 +294,7 @@ class GeminiCompletion(BaseLLM):
|
||||
GenerateContentConfig object for Gemini API
|
||||
"""
|
||||
self.tools = tools
|
||||
config_params = self.model_dump(
|
||||
include={
|
||||
"temperature",
|
||||
"top_p",
|
||||
"top_k",
|
||||
"max_output_tokens",
|
||||
"stop_sequences",
|
||||
"safety_settings",
|
||||
}
|
||||
)
|
||||
config_params = {}
|
||||
|
||||
# Add system instruction if present
|
||||
if system_instruction:
|
||||
@@ -409,6 +304,18 @@ class GeminiCompletion(BaseLLM):
|
||||
)
|
||||
config_params["system_instruction"] = system_content
|
||||
|
||||
# Add generation config parameters
|
||||
if self.temperature is not None:
|
||||
config_params["temperature"] = self.temperature
|
||||
if self.top_p is not None:
|
||||
config_params["top_p"] = self.top_p
|
||||
if self.top_k is not None:
|
||||
config_params["top_k"] = self.top_k
|
||||
if self.max_output_tokens is not None:
|
||||
config_params["max_output_tokens"] = self.max_output_tokens
|
||||
if self.stop_sequences:
|
||||
config_params["stop_sequences"] = self.stop_sequences
|
||||
|
||||
if response_model:
|
||||
config_params["response_mime_type"] = "application/json"
|
||||
config_params["response_schema"] = response_model.model_json_schema()
|
||||
@@ -417,6 +324,9 @@ class GeminiCompletion(BaseLLM):
|
||||
if tools and self.supports_tools:
|
||||
config_params["tools"] = self._convert_tools_for_interference(tools)
|
||||
|
||||
if self.safety_settings:
|
||||
config_params["safety_settings"] = self.safety_settings
|
||||
|
||||
return types.GenerateContentConfig(**config_params)
|
||||
|
||||
def _convert_tools_for_interference( # type: ignore[no-any-unimported]
|
||||
@@ -494,8 +404,8 @@ class GeminiCompletion(BaseLLM):
|
||||
system_instruction: str | None,
|
||||
config: types.GenerateContentConfig,
|
||||
available_functions: dict[str, Any] | None = None,
|
||||
from_task: Task | None = None,
|
||||
from_agent: Agent | None = None,
|
||||
from_task: Any | None = None,
|
||||
from_agent: Any | None = None,
|
||||
response_model: type[BaseModel] | None = None,
|
||||
) -> str | Any:
|
||||
"""Handle non-streaming content generation."""
|
||||
@@ -506,7 +416,7 @@ class GeminiCompletion(BaseLLM):
|
||||
}
|
||||
|
||||
try:
|
||||
response = self._client.models.generate_content(**api_params)
|
||||
response = self.client.models.generate_content(**api_params)
|
||||
|
||||
usage = self._extract_token_usage(response)
|
||||
except Exception as e:
|
||||
@@ -560,8 +470,8 @@ class GeminiCompletion(BaseLLM):
|
||||
contents: list[types.Content],
|
||||
config: types.GenerateContentConfig,
|
||||
available_functions: dict[str, Any] | None = None,
|
||||
from_task: Task | None = None,
|
||||
from_agent: Agent | None = None,
|
||||
from_task: Any | None = None,
|
||||
from_agent: Any | None = None,
|
||||
response_model: type[BaseModel] | None = None,
|
||||
) -> str:
|
||||
"""Handle streaming content generation."""
|
||||
@@ -574,7 +484,7 @@ class GeminiCompletion(BaseLLM):
|
||||
"config": config,
|
||||
}
|
||||
|
||||
for chunk in self._client.models.generate_content_stream(**api_params):
|
||||
for chunk in self.client.models.generate_content_stream(**api_params):
|
||||
if hasattr(chunk, "text") and chunk.text:
|
||||
full_response += chunk.text
|
||||
self._emit_stream_chunk_event(
|
||||
@@ -627,30 +537,52 @@ class GeminiCompletion(BaseLLM):
|
||||
|
||||
return full_response
|
||||
|
||||
@computed_field # type: ignore[prop-decorator]
|
||||
@property
|
||||
def supports_function_calling(self) -> bool:
|
||||
"""Check if the model supports function calling."""
|
||||
return self.supports_tools
|
||||
|
||||
def supports_stop_words(self) -> bool:
|
||||
"""Check if the model supports stop words."""
|
||||
return True
|
||||
|
||||
def get_context_window_size(self) -> int:
|
||||
"""Get the context window size for the model."""
|
||||
from crewai.llm import CONTEXT_WINDOW_USAGE_RATIO, LLM_CONTEXT_WINDOW_SIZES
|
||||
|
||||
min_context = 1024
|
||||
max_context = 2097152
|
||||
|
||||
for key, value in LLM_CONTEXT_WINDOW_SIZES.items():
|
||||
if value < MIN_CONTEXT_WINDOW or value > MAX_CONTEXT_WINDOW:
|
||||
if value < min_context or value > max_context:
|
||||
raise ValueError(
|
||||
f"Context window for {key} must be between {MIN_CONTEXT_WINDOW} and {MAX_CONTEXT_WINDOW}"
|
||||
f"Context window for {key} must be between {min_context} and {max_context}"
|
||||
)
|
||||
|
||||
context_windows = {
|
||||
"gemini-2.0-flash": 1048576, # 1M tokens
|
||||
"gemini-2.0-flash-thinking": 32768,
|
||||
"gemini-2.0-flash-lite": 1048576,
|
||||
"gemini-2.5-flash": 1048576,
|
||||
"gemini-2.5-pro": 1048576,
|
||||
"gemini-1.5-pro": 2097152, # 2M tokens
|
||||
"gemini-1.5-flash": 1048576,
|
||||
"gemini-1.5-flash-8b": 1048576,
|
||||
"gemini-1.0-pro": 32768,
|
||||
"gemma-3-1b": 32000,
|
||||
"gemma-3-4b": 128000,
|
||||
"gemma-3-12b": 128000,
|
||||
"gemma-3-27b": 128000,
|
||||
}
|
||||
|
||||
# Find the best match for the model name
|
||||
for model_prefix, size in GEMINI_CONTEXT_WINDOWS.items():
|
||||
for model_prefix, size in context_windows.items():
|
||||
if self.model.startswith(model_prefix):
|
||||
return int(size * CONTEXT_WINDOW_USAGE_RATIO)
|
||||
|
||||
# Default context window size for Gemini models
|
||||
return int(1048576 * CONTEXT_WINDOW_USAGE_RATIO) # 1M tokens
|
||||
|
||||
@staticmethod
|
||||
def _extract_token_usage(response: dict[str, Any]) -> dict[str, Any]:
|
||||
def _extract_token_usage(self, response: dict[str, Any]) -> dict[str, Any]:
|
||||
"""Extract token usage from Gemini response."""
|
||||
if hasattr(response, "usage_metadata"):
|
||||
usage = response.usage_metadata
|
||||
@@ -662,8 +594,8 @@ class GeminiCompletion(BaseLLM):
|
||||
}
|
||||
return {"total_tokens": 0}
|
||||
|
||||
@staticmethod
|
||||
def _convert_contents_to_dict( # type: ignore[no-any-unimported]
|
||||
self,
|
||||
contents: list[types.Content],
|
||||
) -> list[dict[str, str]]:
|
||||
"""Convert contents to dict format."""
|
||||
|
||||
@@ -4,23 +4,16 @@ from collections.abc import Iterator
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
from typing import TYPE_CHECKING, Any, Final
|
||||
from typing import TYPE_CHECKING, Any
|
||||
|
||||
import httpx
|
||||
from openai import APIConnectionError, NotFoundError, OpenAI
|
||||
from openai.types.chat import ChatCompletion, ChatCompletionChunk
|
||||
from openai.types.chat.chat_completion import Choice
|
||||
from openai.types.chat.chat_completion_chunk import ChoiceDelta
|
||||
from pydantic import (
|
||||
BaseModel,
|
||||
Field,
|
||||
PrivateAttr,
|
||||
model_validator,
|
||||
)
|
||||
from typing_extensions import Self
|
||||
from pydantic import BaseModel
|
||||
|
||||
from crewai.events.types.llm_events import LLMCallType
|
||||
from crewai.llm import CONTEXT_WINDOW_USAGE_RATIO, LLM_CONTEXT_WINDOW_SIZES
|
||||
from crewai.llms.base_llm import BaseLLM
|
||||
from crewai.llms.hooks.transport import HTTPTransport
|
||||
from crewai.utilities.agent_utils import is_context_length_exceeded
|
||||
@@ -32,28 +25,11 @@ from crewai.utilities.types import LLMMessage
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from crewai.agent.core import Agent
|
||||
from crewai.llms.hooks.base import BaseInterceptor
|
||||
from crewai.task import Task
|
||||
from crewai.tools.base_tool import BaseTool
|
||||
|
||||
|
||||
OPENAI_CONTEXT_WINDOWS: dict[str, int] = {
|
||||
"gpt-4": 8192,
|
||||
"gpt-4o": 128000,
|
||||
"gpt-4o-mini": 200000,
|
||||
"gpt-4-turbo": 128000,
|
||||
"gpt-4.1": 1047576,
|
||||
"gpt-4.1-mini-2025-04-14": 1047576,
|
||||
"gpt-4.1-nano-2025-04-14": 1047576,
|
||||
"o1-preview": 128000,
|
||||
"o1-mini": 128000,
|
||||
"o3-mini": 200000,
|
||||
"o4-mini": 200000,
|
||||
}
|
||||
|
||||
MIN_CONTEXT_WINDOW: Final[int] = 1024
|
||||
MAX_CONTEXT_WINDOW: Final[int] = 2097152
|
||||
|
||||
|
||||
class OpenAICompletion(BaseLLM):
|
||||
"""OpenAI native completion implementation.
|
||||
|
||||
@@ -61,125 +37,112 @@ class OpenAICompletion(BaseLLM):
|
||||
offering native structured outputs, function calling, and streaming support.
|
||||
"""
|
||||
|
||||
model: str = Field(
|
||||
default="gpt-4o",
|
||||
description="OpenAI model name (e.g., 'gpt-4o')",
|
||||
)
|
||||
organization: str | None = Field(
|
||||
default=None,
|
||||
description="Name of the OpenAI organization",
|
||||
)
|
||||
project: str | None = Field(
|
||||
default=None,
|
||||
description="Name of the OpenAI project",
|
||||
)
|
||||
api_base: str | None = Field(
|
||||
default=os.getenv("OPENAI_BASE_URL"),
|
||||
description="Base URL for OpenAI API",
|
||||
)
|
||||
default_headers: dict[str, str] | None = Field(
|
||||
default=None,
|
||||
description="Default headers for OpenAI API requests",
|
||||
)
|
||||
default_query: dict[str, Any] | None = Field(
|
||||
default=None,
|
||||
description="Default query parameters for OpenAI API requests",
|
||||
)
|
||||
top_p: float | None = Field(
|
||||
default=None,
|
||||
description="Top-p sampling parameter",
|
||||
)
|
||||
frequency_penalty: float | None = Field(
|
||||
default=None,
|
||||
description="Frequency penalty parameter",
|
||||
)
|
||||
presence_penalty: float | None = Field(
|
||||
default=None,
|
||||
description="Presence penalty parameter",
|
||||
)
|
||||
max_completion_tokens: int | None = Field(
|
||||
default=None,
|
||||
description="Maximum tokens for completion",
|
||||
)
|
||||
seed: int | None = Field(
|
||||
default=None,
|
||||
description="Random seed for reproducibility",
|
||||
)
|
||||
response_format: dict[str, Any] | type[BaseModel] | None = Field(
|
||||
default=None,
|
||||
description="Response format for structured output",
|
||||
)
|
||||
logprobs: bool | None = Field(
|
||||
default=None,
|
||||
description="Whether to include log probabilities",
|
||||
)
|
||||
top_logprobs: int | None = Field(
|
||||
default=None,
|
||||
description="Number of top log probabilities to return",
|
||||
)
|
||||
reasoning_effort: str | None = Field(
|
||||
default=None,
|
||||
description="Reasoning effort level for o1 models",
|
||||
)
|
||||
supports_function_calling: bool = Field(
|
||||
default=True,
|
||||
description="Whether the model supports function calling",
|
||||
)
|
||||
is_o1_model: bool = Field(
|
||||
default=False,
|
||||
description="Whether the model is an o1 model",
|
||||
)
|
||||
is_gpt4_model: bool = Field(
|
||||
default=False,
|
||||
description="Whether the model is a GPT-4 model",
|
||||
)
|
||||
_client: OpenAI = PrivateAttr(
|
||||
default_factory=OpenAI,
|
||||
)
|
||||
def __init__(
|
||||
self,
|
||||
model: str = "gpt-4o",
|
||||
api_key: str | None = None,
|
||||
base_url: str | None = None,
|
||||
organization: str | None = None,
|
||||
project: str | None = None,
|
||||
timeout: float | None = None,
|
||||
max_retries: int = 2,
|
||||
default_headers: dict[str, str] | None = None,
|
||||
default_query: dict[str, Any] | None = None,
|
||||
client_params: dict[str, Any] | None = None,
|
||||
temperature: float | None = None,
|
||||
top_p: float | None = None,
|
||||
frequency_penalty: float | None = None,
|
||||
presence_penalty: float | None = None,
|
||||
max_tokens: int | None = None,
|
||||
max_completion_tokens: int | None = None,
|
||||
seed: int | None = None,
|
||||
stream: bool = False,
|
||||
response_format: dict[str, Any] | type[BaseModel] | None = None,
|
||||
logprobs: bool | None = None,
|
||||
top_logprobs: int | None = None,
|
||||
reasoning_effort: str | None = None,
|
||||
provider: str | None = None,
|
||||
interceptor: BaseInterceptor[httpx.Request, httpx.Response] | None = None,
|
||||
**kwargs: Any,
|
||||
) -> None:
|
||||
"""Initialize OpenAI chat completion client."""
|
||||
|
||||
@model_validator(mode="after")
|
||||
def initialize_client(self) -> Self:
|
||||
"""Initialize the Anthropic client after Pydantic validation.
|
||||
if provider is None:
|
||||
provider = kwargs.pop("provider", "openai")
|
||||
|
||||
self.interceptor = interceptor
|
||||
# Client configuration attributes
|
||||
self.organization = organization
|
||||
self.project = project
|
||||
self.max_retries = max_retries
|
||||
self.default_headers = default_headers
|
||||
self.default_query = default_query
|
||||
self.client_params = client_params
|
||||
self.timeout = timeout
|
||||
self.base_url = base_url
|
||||
self.api_base = kwargs.pop("api_base", None)
|
||||
|
||||
super().__init__(
|
||||
model=model,
|
||||
temperature=temperature,
|
||||
api_key=api_key or os.getenv("OPENAI_API_KEY"),
|
||||
base_url=base_url,
|
||||
timeout=timeout,
|
||||
provider=provider,
|
||||
**kwargs,
|
||||
)
|
||||
|
||||
client_config = self._get_client_params()
|
||||
if self.interceptor:
|
||||
transport = HTTPTransport(interceptor=self.interceptor)
|
||||
http_client = httpx.Client(transport=transport)
|
||||
client_config["http_client"] = http_client
|
||||
|
||||
self.client = OpenAI(**client_config)
|
||||
|
||||
# Completion parameters
|
||||
self.top_p = top_p
|
||||
self.frequency_penalty = frequency_penalty
|
||||
self.presence_penalty = presence_penalty
|
||||
self.max_tokens = max_tokens
|
||||
self.max_completion_tokens = max_completion_tokens
|
||||
self.seed = seed
|
||||
self.stream = stream
|
||||
self.response_format = response_format
|
||||
self.logprobs = logprobs
|
||||
self.top_logprobs = top_logprobs
|
||||
self.reasoning_effort = reasoning_effort
|
||||
self.is_o1_model = "o1" in model.lower()
|
||||
self.is_gpt4_model = "gpt-4" in model.lower()
|
||||
|
||||
def _get_client_params(self) -> dict[str, Any]:
|
||||
"""Get OpenAI client parameters."""
|
||||
|
||||
This runs after all field validation is complete, ensuring that:
|
||||
- All BaseLLM fields are set (model, temperature, stop_sequences, etc.)
|
||||
- Field validators have run (stop_sequences is normalized to set[str])
|
||||
- API key and other configuration is ready
|
||||
"""
|
||||
if self.api_key is None:
|
||||
self.api_key = os.getenv("OPENAI_API_KEY")
|
||||
if self.api_key is None:
|
||||
raise ValueError("OPENAI_API_KEY is required")
|
||||
|
||||
self.is_o1_model = "o1" in self.model.lower()
|
||||
self.supports_function_calling = not self.is_o1_model
|
||||
self.is_gpt4_model = "gpt-4" in self.model.lower()
|
||||
self.supports_stop_words = not self.is_o1_model
|
||||
base_params = {
|
||||
"api_key": self.api_key,
|
||||
"organization": self.organization,
|
||||
"project": self.project,
|
||||
"base_url": self.base_url
|
||||
or self.api_base
|
||||
or os.getenv("OPENAI_BASE_URL")
|
||||
or None,
|
||||
"timeout": self.timeout,
|
||||
"max_retries": self.max_retries,
|
||||
"default_headers": self.default_headers,
|
||||
"default_query": self.default_query,
|
||||
}
|
||||
|
||||
params = self.model_dump(
|
||||
include={
|
||||
"api_key",
|
||||
"organization",
|
||||
"project",
|
||||
"base_url",
|
||||
"timeout",
|
||||
"max_retries",
|
||||
"default_headers",
|
||||
"default_query",
|
||||
},
|
||||
exclude_none=True,
|
||||
)
|
||||
|
||||
if self.interceptor:
|
||||
transport = HTTPTransport(interceptor=self.interceptor)
|
||||
http_client = httpx.Client(transport=transport)
|
||||
params["http_client"] = http_client
|
||||
client_params = {k: v for k, v in base_params.items() if v is not None}
|
||||
|
||||
if self.client_params:
|
||||
params.update(self.client_params)
|
||||
client_params.update(self.client_params)
|
||||
|
||||
self._client = OpenAI(**params)
|
||||
return self
|
||||
return client_params
|
||||
|
||||
def call(
|
||||
self,
|
||||
@@ -250,26 +213,38 @@ class OpenAICompletion(BaseLLM):
|
||||
self, messages: list[LLMMessage], tools: list[dict[str, BaseTool]] | None = None
|
||||
) -> dict[str, Any]:
|
||||
"""Prepare parameters for OpenAI chat completion."""
|
||||
params = self.model_dump(
|
||||
include={
|
||||
"model",
|
||||
"stream",
|
||||
"temperature",
|
||||
"top_p",
|
||||
"frequency_penalty",
|
||||
"presence_penalty",
|
||||
"max_completion_tokens",
|
||||
"max_tokens",
|
||||
"seed",
|
||||
"logprobs",
|
||||
"top_logprobs",
|
||||
"reasoning_effort",
|
||||
},
|
||||
exclude_none=True,
|
||||
)
|
||||
params["messages"] = messages
|
||||
params: dict[str, Any] = {
|
||||
"model": self.model,
|
||||
"messages": messages,
|
||||
}
|
||||
if self.stream:
|
||||
params["stream"] = self.stream
|
||||
|
||||
params.update(self.additional_params)
|
||||
|
||||
if self.temperature is not None:
|
||||
params["temperature"] = self.temperature
|
||||
if self.top_p is not None:
|
||||
params["top_p"] = self.top_p
|
||||
if self.frequency_penalty is not None:
|
||||
params["frequency_penalty"] = self.frequency_penalty
|
||||
if self.presence_penalty is not None:
|
||||
params["presence_penalty"] = self.presence_penalty
|
||||
if self.max_completion_tokens is not None:
|
||||
params["max_completion_tokens"] = self.max_completion_tokens
|
||||
elif self.max_tokens is not None:
|
||||
params["max_tokens"] = self.max_tokens
|
||||
if self.seed is not None:
|
||||
params["seed"] = self.seed
|
||||
if self.logprobs is not None:
|
||||
params["logprobs"] = self.logprobs
|
||||
if self.top_logprobs is not None:
|
||||
params["top_logprobs"] = self.top_logprobs
|
||||
|
||||
# Handle o1 model specific parameters
|
||||
if self.is_o1_model and self.reasoning_effort:
|
||||
params["reasoning_effort"] = self.reasoning_effort
|
||||
|
||||
if tools:
|
||||
params["tools"] = self._convert_tools_for_interference(tools)
|
||||
params["tool_choice"] = "auto"
|
||||
@@ -321,14 +296,14 @@ class OpenAICompletion(BaseLLM):
|
||||
self,
|
||||
params: dict[str, Any],
|
||||
available_functions: dict[str, Any] | None = None,
|
||||
from_task: Task | None = None,
|
||||
from_agent: Agent | None = None,
|
||||
from_task: Any | None = None,
|
||||
from_agent: Any | None = None,
|
||||
response_model: type[BaseModel] | None = None,
|
||||
) -> str | Any:
|
||||
"""Handle non-streaming chat completion."""
|
||||
try:
|
||||
if response_model:
|
||||
parsed_response = self._client.beta.chat.completions.parse(
|
||||
parsed_response = self.client.beta.chat.completions.parse(
|
||||
**params,
|
||||
response_format=response_model,
|
||||
)
|
||||
@@ -352,7 +327,7 @@ class OpenAICompletion(BaseLLM):
|
||||
)
|
||||
return structured_json
|
||||
|
||||
response: ChatCompletion = self._client.chat.completions.create(**params)
|
||||
response: ChatCompletion = self.client.chat.completions.create(**params)
|
||||
|
||||
usage = self._extract_openai_token_usage(response)
|
||||
|
||||
@@ -444,8 +419,8 @@ class OpenAICompletion(BaseLLM):
|
||||
self,
|
||||
params: dict[str, Any],
|
||||
available_functions: dict[str, Any] | None = None,
|
||||
from_task: Task | None = None,
|
||||
from_agent: Agent | None = None,
|
||||
from_task: Any | None = None,
|
||||
from_agent: Any | None = None,
|
||||
response_model: type[BaseModel] | None = None,
|
||||
) -> str:
|
||||
"""Handle streaming chat completion."""
|
||||
@@ -454,7 +429,7 @@ class OpenAICompletion(BaseLLM):
|
||||
|
||||
if response_model:
|
||||
completion_stream: Iterator[ChatCompletionChunk] = (
|
||||
self._client.chat.completions.create(**params)
|
||||
self.client.chat.completions.create(**params)
|
||||
)
|
||||
|
||||
accumulated_content = ""
|
||||
@@ -497,7 +472,7 @@ class OpenAICompletion(BaseLLM):
|
||||
)
|
||||
return accumulated_content
|
||||
|
||||
stream: Iterator[ChatCompletionChunk] = self._client.chat.completions.create(
|
||||
stream: Iterator[ChatCompletionChunk] = self.client.chat.completions.create(
|
||||
**params
|
||||
)
|
||||
|
||||
@@ -575,31 +550,58 @@ class OpenAICompletion(BaseLLM):
|
||||
|
||||
return full_response
|
||||
|
||||
def supports_function_calling(self) -> bool:
|
||||
"""Check if the model supports function calling."""
|
||||
return not self.is_o1_model
|
||||
|
||||
def supports_stop_words(self) -> bool:
|
||||
"""Check if the model supports stop words."""
|
||||
return not self.is_o1_model
|
||||
|
||||
def get_context_window_size(self) -> int:
|
||||
"""Get the context window size for the model."""
|
||||
from crewai.llm import CONTEXT_WINDOW_USAGE_RATIO, LLM_CONTEXT_WINDOW_SIZES
|
||||
|
||||
min_context = 1024
|
||||
max_context = 2097152
|
||||
|
||||
for key, value in LLM_CONTEXT_WINDOW_SIZES.items():
|
||||
if value < MIN_CONTEXT_WINDOW or value > MAX_CONTEXT_WINDOW:
|
||||
if value < min_context or value > max_context:
|
||||
raise ValueError(
|
||||
f"Context window for {key} must be between {MIN_CONTEXT_WINDOW} and {MAX_CONTEXT_WINDOW}"
|
||||
f"Context window for {key} must be between {min_context} and {max_context}"
|
||||
)
|
||||
|
||||
# Context window sizes for OpenAI models
|
||||
context_windows = {
|
||||
"gpt-4": 8192,
|
||||
"gpt-4o": 128000,
|
||||
"gpt-4o-mini": 200000,
|
||||
"gpt-4-turbo": 128000,
|
||||
"gpt-4.1": 1047576,
|
||||
"gpt-4.1-mini-2025-04-14": 1047576,
|
||||
"gpt-4.1-nano-2025-04-14": 1047576,
|
||||
"o1-preview": 128000,
|
||||
"o1-mini": 128000,
|
||||
"o3-mini": 200000,
|
||||
"o4-mini": 200000,
|
||||
}
|
||||
|
||||
# Find the best match for the model name
|
||||
for model_prefix, size in OPENAI_CONTEXT_WINDOWS.items():
|
||||
for model_prefix, size in context_windows.items():
|
||||
if self.model.startswith(model_prefix):
|
||||
return int(size * CONTEXT_WINDOW_USAGE_RATIO)
|
||||
|
||||
# Default context window size
|
||||
return int(8192 * CONTEXT_WINDOW_USAGE_RATIO)
|
||||
|
||||
@staticmethod
|
||||
def _extract_openai_token_usage(response: ChatCompletion) -> dict[str, Any]:
|
||||
def _extract_openai_token_usage(self, response: ChatCompletion) -> dict[str, Any]:
|
||||
"""Extract token usage from OpenAI ChatCompletion response."""
|
||||
if response.usage:
|
||||
if hasattr(response, "usage") and response.usage:
|
||||
usage = response.usage
|
||||
return {
|
||||
"prompt_tokens": usage.prompt_tokens,
|
||||
"completion_tokens": usage.completion_tokens,
|
||||
"total_tokens": usage.total_tokens,
|
||||
"prompt_tokens": getattr(usage, "prompt_tokens", 0),
|
||||
"completion_tokens": getattr(usage, "completion_tokens", 0),
|
||||
"total_tokens": getattr(usage, "total_tokens", 0),
|
||||
}
|
||||
return {"total_tokens": 0}
|
||||
|
||||
|
||||
@@ -539,6 +539,7 @@ class Task(BaseModel):
|
||||
json_dict=json_output,
|
||||
agent=agent.role,
|
||||
output_format=self._get_output_format(),
|
||||
messages=agent.last_messages,
|
||||
)
|
||||
|
||||
if self._guardrails:
|
||||
@@ -949,6 +950,7 @@ Follow these guidelines:
|
||||
json_dict=json_output,
|
||||
agent=agent.role,
|
||||
output_format=self._get_output_format(),
|
||||
messages=agent.last_messages,
|
||||
)
|
||||
|
||||
return task_output
|
||||
|
||||
@@ -6,6 +6,7 @@ from typing import Any
|
||||
from pydantic import BaseModel, Field, model_validator
|
||||
|
||||
from crewai.tasks.output_format import OutputFormat
|
||||
from crewai.utilities.types import LLMMessage
|
||||
|
||||
|
||||
class TaskOutput(BaseModel):
|
||||
@@ -40,6 +41,7 @@ class TaskOutput(BaseModel):
|
||||
output_format: OutputFormat = Field(
|
||||
description="Output format of the task", default=OutputFormat.RAW
|
||||
)
|
||||
messages: list[LLMMessage] = Field(description="Messages of the task", default=[])
|
||||
|
||||
@model_validator(mode="after")
|
||||
def set_summary(self):
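A minimal usage sketch for the new messages field on TaskOutput shown above. The crew/agent setup is illustrative only, and the role/content keys on LLMMessage are an assumption (the TypedDict body is not shown in this change set):

from crewai import Agent, Crew, Task

agent = Agent(role="Writer", goal="Write a haiku", backstory="A concise poet")
task = Task(
    description="Write a haiku about the sea",
    expected_output="A three-line haiku",
    agent=agent,
)
crew = Crew(agents=[agent], tasks=[task])
crew.kickoff()

# messages holds the LLM conversation from the last execution of this task
for message in task.output.messages:
    print(message.get("role"), "->", str(message.get("content"))[:80])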
|
||||
|
||||
@@ -33,6 +33,7 @@ from crewai.utilities.types import LLMMessage
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from crewai.agent import Agent
|
||||
from crewai.agents.crew_agent_executor import CrewAgentExecutor
|
||||
from crewai.lite_agent import LiteAgent
|
||||
from crewai.llm import LLM
|
||||
from crewai.task import Task
|
||||
@@ -127,7 +128,7 @@ def handle_max_iterations_exceeded(
|
||||
messages: list[LLMMessage],
|
||||
llm: LLM | BaseLLM,
|
||||
callbacks: list[TokenCalcHandler],
|
||||
) -> AgentAction | AgentFinish:
|
||||
) -> AgentFinish:
|
||||
"""Handles the case when the maximum number of iterations is exceeded. Performs one more LLM call to get the final answer.
|
||||
|
||||
Args:
|
||||
@@ -139,7 +140,7 @@ def handle_max_iterations_exceeded(
|
||||
callbacks: List of callbacks for the LLM call.
|
||||
|
||||
Returns:
|
||||
The final formatted answer after exceeding max iterations.
|
||||
AgentFinish with the final answer after exceeding max iterations.
|
||||
"""
|
||||
printer.print(
|
||||
content="Maximum iterations reached. Requesting final answer.",
|
||||
@@ -157,7 +158,7 @@ def handle_max_iterations_exceeded(
|
||||
|
||||
# Perform one more LLM call to get the final answer
|
||||
answer = llm.call(
|
||||
messages, # type: ignore[arg-type]
|
||||
messages,
|
||||
callbacks=callbacks,
|
||||
)
|
||||
|
||||
@@ -168,8 +169,16 @@ def handle_max_iterations_exceeded(
|
||||
)
|
||||
raise ValueError("Invalid response from LLM call - None or empty.")
|
||||
|
||||
# Return the formatted answer, regardless of its type
|
||||
return format_answer(answer=answer)
|
||||
formatted = format_answer(answer=answer)
|
||||
|
||||
# If format_answer returned an AgentAction, convert it to AgentFinish
|
||||
if isinstance(formatted, AgentFinish):
|
||||
return formatted
|
||||
return AgentFinish(
|
||||
thought=formatted.thought,
|
||||
output=formatted.text,
|
||||
text=formatted.text,
|
||||
)
|
||||
|
||||
|
||||
def format_message_for_llm(
|
||||
@@ -228,6 +237,7 @@ def get_llm_response(
|
||||
from_task: Task | None = None,
|
||||
from_agent: Agent | LiteAgent | None = None,
|
||||
response_model: type[BaseModel] | None = None,
|
||||
executor_context: CrewAgentExecutor | None = None,
|
||||
) -> str:
|
||||
"""Call the LLM and return the response, handling any invalid responses.
|
||||
|
||||
@@ -239,6 +249,7 @@ def get_llm_response(
|
||||
from_task: Optional task context for the LLM call
|
||||
from_agent: Optional agent context for the LLM call
|
||||
response_model: Optional Pydantic model for structured outputs
|
||||
executor_context: Optional executor context for hook invocation
|
||||
|
||||
Returns:
|
||||
The response from the LLM as a string
|
||||
@@ -247,12 +258,17 @@ def get_llm_response(
|
||||
Exception: If an error occurs.
|
||||
ValueError: If the response is None or empty.
|
||||
"""
|
||||
|
||||
if executor_context is not None:
|
||||
_setup_before_llm_call_hooks(executor_context, printer)
|
||||
messages = executor_context.messages
|
||||
|
||||
try:
|
||||
answer = llm.call(
|
||||
messages, # type: ignore[arg-type]
|
||||
messages,
|
||||
callbacks=callbacks,
|
||||
from_task=from_task,
|
||||
from_agent=from_agent,
|
||||
from_agent=from_agent, # type: ignore[arg-type]
|
||||
response_model=response_model,
|
||||
)
|
||||
except Exception as e:
|
||||
@@ -264,7 +280,7 @@ def get_llm_response(
|
||||
)
|
||||
raise ValueError("Invalid response from LLM call - None or empty.")
|
||||
|
||||
return answer
|
||||
return _setup_after_llm_call_hooks(executor_context, answer, printer)
|
||||
|
||||
|
||||
def process_llm_response(
|
||||
@@ -294,8 +310,8 @@ def handle_agent_action_core(
|
||||
formatted_answer: AgentAction,
|
||||
tool_result: ToolResult,
|
||||
messages: list[LLMMessage] | None = None,
|
||||
step_callback: Callable | None = None,
|
||||
show_logs: Callable | None = None,
|
||||
step_callback: Callable | None = None, # type: ignore[type-arg]
|
||||
show_logs: Callable | None = None, # type: ignore[type-arg]
|
||||
) -> AgentAction | AgentFinish:
|
||||
"""Core logic for handling agent actions and tool results.
|
||||
|
||||
@@ -481,7 +497,7 @@ def summarize_messages(
|
||||
),
|
||||
]
|
||||
summary = llm.call(
|
||||
messages, # type: ignore[arg-type]
|
||||
messages,
|
||||
callbacks=callbacks,
|
||||
)
|
||||
summarized_contents.append({"content": str(summary)})
|
||||
@@ -653,3 +669,92 @@ def load_agent_from_repository(from_repository: str) -> dict[str, Any]:
|
||||
else:
|
||||
attributes[key] = value
|
||||
return attributes
|
||||
|
||||
|
||||
def _setup_before_llm_call_hooks(
|
||||
executor_context: CrewAgentExecutor | None, printer: Printer
|
||||
) -> None:
|
||||
"""Setup and invoke before_llm_call hooks for the executor context.
|
||||
|
||||
Args:
|
||||
executor_context: The executor context to setup the hooks for.
|
||||
printer: Printer instance for error logging.
|
||||
"""
|
||||
if executor_context and executor_context.before_llm_call_hooks:
|
||||
from crewai.utilities.llm_call_hooks import LLMCallHookContext
|
||||
|
||||
original_messages = executor_context.messages
|
||||
|
||||
hook_context = LLMCallHookContext(executor_context)
|
||||
try:
|
||||
for hook in executor_context.before_llm_call_hooks:
|
||||
hook(hook_context)
|
||||
except Exception as e:
|
||||
printer.print(
|
||||
content=f"Error in before_llm_call hook: {e}",
|
||||
color="yellow",
|
||||
)
|
||||
|
||||
if not isinstance(executor_context.messages, list):
|
||||
printer.print(
|
||||
content=(
|
||||
"Warning: before_llm_call hook replaced messages with non-list. "
|
||||
"Restoring original messages list. Hooks should modify messages in-place, "
|
||||
"not replace the list (e.g., use context.messages.append() not context.messages = [])."
|
||||
),
|
||||
color="yellow",
|
||||
)
|
||||
if isinstance(original_messages, list):
|
||||
executor_context.messages = original_messages
|
||||
else:
|
||||
executor_context.messages = []
|
||||
|
||||
|
||||
def _setup_after_llm_call_hooks(
|
||||
executor_context: CrewAgentExecutor | None,
|
||||
answer: str,
|
||||
printer: Printer,
|
||||
) -> str:
|
||||
"""Setup and invoke after_llm_call hooks for the executor context.
|
||||
|
||||
Args:
|
||||
executor_context: The executor context to setup the hooks for.
|
||||
answer: The LLM response string.
|
||||
printer: Printer instance for error logging.
|
||||
|
||||
Returns:
|
||||
The potentially modified response string.
|
||||
"""
|
||||
if executor_context and executor_context.after_llm_call_hooks:
|
||||
from crewai.utilities.llm_call_hooks import LLMCallHookContext
|
||||
|
||||
original_messages = executor_context.messages
|
||||
|
||||
hook_context = LLMCallHookContext(executor_context, response=answer)
|
||||
try:
|
||||
for hook in executor_context.after_llm_call_hooks:
|
||||
modified_response = hook(hook_context)
|
||||
if modified_response is not None and isinstance(modified_response, str):
|
||||
answer = modified_response
|
||||
|
||||
except Exception as e:
|
||||
printer.print(
|
||||
content=f"Error in after_llm_call hook: {e}",
|
||||
color="yellow",
|
||||
)
|
||||
|
||||
if not isinstance(executor_context.messages, list):
|
||||
printer.print(
|
||||
content=(
|
||||
"Warning: after_llm_call hook replaced messages with non-list. "
|
||||
"Restoring original messages list. Hooks should modify messages in-place, "
|
||||
"not replace the list (e.g., use context.messages.append() not context.messages = [])."
|
||||
),
|
||||
color="yellow",
|
||||
)
|
||||
if isinstance(original_messages, list):
|
||||
executor_context.messages = original_messages
|
||||
else:
|
||||
executor_context.messages = []
|
||||
|
||||
return answer
|
||||
|
||||
115
lib/crewai/src/crewai/utilities/llm_call_hooks.py
Normal file
@@ -0,0 +1,115 @@
|
||||
from __future__ import annotations
|
||||
|
||||
from collections.abc import Callable
|
||||
from typing import TYPE_CHECKING
|
||||
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from crewai.agents.crew_agent_executor import CrewAgentExecutor
|
||||
|
||||
|
||||
class LLMCallHookContext:
|
||||
"""Context object passed to LLM call hooks with full executor access.
|
||||
|
||||
Provides hooks with complete access to the executor state, allowing
|
||||
modification of messages, responses, and executor attributes.
|
||||
|
||||
Attributes:
|
||||
executor: Full reference to the CrewAgentExecutor instance
|
||||
messages: Direct reference to executor.messages (mutable list).
|
||||
Can be modified in both before_llm_call and after_llm_call hooks.
|
||||
Modifications in after_llm_call hooks persist to the next iteration,
|
||||
allowing hooks to modify conversation history for subsequent LLM calls.
|
||||
IMPORTANT: Modify messages in-place (e.g., append, extend, remove items).
|
||||
Do NOT replace the list (e.g., context.messages = []), as this will break
|
||||
the executor. Use context.messages.append() or context.messages.extend()
|
||||
instead of assignment.
|
||||
agent: Reference to the agent executing the task
|
||||
task: Reference to the task being executed
|
||||
crew: Reference to the crew instance
|
||||
llm: Reference to the LLM instance
|
||||
iterations: Current iteration count
|
||||
response: LLM response string (only set for after_llm_call hooks).
|
||||
Can be modified by returning a new string from after_llm_call hook.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
executor: CrewAgentExecutor,
|
||||
response: str | None = None,
|
||||
) -> None:
|
||||
"""Initialize hook context with executor reference.
|
||||
|
||||
Args:
|
||||
executor: The CrewAgentExecutor instance
|
||||
response: Optional response string (for after_llm_call hooks)
|
||||
"""
|
||||
self.executor = executor
|
||||
self.messages = executor.messages
|
||||
self.agent = executor.agent
|
||||
self.task = executor.task
|
||||
self.crew = executor.crew
|
||||
self.llm = executor.llm
|
||||
self.iterations = executor.iterations
|
||||
self.response = response
|
||||
|
||||
|
||||
# Global hook registries (optional convenience feature)
|
||||
_before_llm_call_hooks: list[Callable[[LLMCallHookContext], None]] = []
|
||||
_after_llm_call_hooks: list[Callable[[LLMCallHookContext], str | None]] = []
|
||||
|
||||
|
||||
def register_before_llm_call_hook(
|
||||
hook: Callable[[LLMCallHookContext], None],
|
||||
) -> None:
|
||||
"""Register a global before_llm_call hook.
|
||||
|
||||
Global hooks are added to all executors automatically.
|
||||
This is a convenience function for registering hooks that should
|
||||
apply to all LLM calls across all executors.
|
||||
|
||||
Args:
|
||||
hook: Function that receives LLMCallHookContext and can modify
|
||||
context.messages directly. Should return None.
|
||||
IMPORTANT: Modify messages in-place (append, extend, remove items).
|
||||
Do NOT replace the list (context.messages = []), as this will break execution.
|
||||
"""
|
||||
_before_llm_call_hooks.append(hook)
|
||||
|
||||
|
||||
def register_after_llm_call_hook(
|
||||
hook: Callable[[LLMCallHookContext], str | None],
|
||||
) -> None:
|
||||
"""Register a global after_llm_call hook.
|
||||
|
||||
Global hooks are added to all executors automatically.
|
||||
This is a convenience function for registering hooks that should
|
||||
apply to all LLM calls across all executors.
|
||||
|
||||
Args:
|
||||
hook: Function that receives LLMCallHookContext and can modify:
|
||||
- The response: Return modified response string or None to keep original
|
||||
- The messages: Modify context.messages directly (mutable reference)
|
||||
Both modifications are supported and can be used together.
|
||||
IMPORTANT: Modify messages in-place (append, extend, remove items).
|
||||
Do NOT replace the list (context.messages = []), as this will break execution.
|
||||
"""
|
||||
_after_llm_call_hooks.append(hook)
|
||||
|
||||
|
||||
def get_before_llm_call_hooks() -> list[Callable[[LLMCallHookContext], None]]:
|
||||
"""Get all registered global before_llm_call hooks.
|
||||
|
||||
Returns:
|
||||
List of registered before hooks
|
||||
"""
|
||||
return _before_llm_call_hooks.copy()
|
||||
|
||||
|
||||
def get_after_llm_call_hooks() -> list[Callable[[LLMCallHookContext], str | None]]:
|
||||
"""Get all registered global after_llm_call hooks.
|
||||
|
||||
Returns:
|
||||
List of registered after hooks
|
||||
"""
|
||||
return _after_llm_call_hooks.copy()
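A minimal usage sketch of the global hook registration API defined in this file, assuming the module path lib/crewai/src/crewai/utilities/llm_call_hooks.py maps to crewai.utilities.llm_call_hooks and the usual role/content message shape:

from crewai.utilities.llm_call_hooks import (
    LLMCallHookContext,
    register_after_llm_call_hook,
    register_before_llm_call_hook,
)

def add_style_reminder(context: LLMCallHookContext) -> None:
    # Modify the message list in-place; never reassign context.messages.
    context.messages.append(
        {"role": "user", "content": "Keep the final answer under 100 words."}
    )

def redact_internal_notes(context: LLMCallHookContext) -> str | None:
    # Returning a string replaces the LLM response; returning None keeps it.
    if context.response and "INTERNAL" in context.response:
        return context.response.replace("INTERNAL", "[redacted]")
    return None

register_before_llm_call_hook(add_style_reminder)
register_after_llm_call_hook(redact_internal_notes)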
|
||||
@@ -1,6 +1,8 @@
|
||||
"""Types for CrewAI utilities."""
|
||||
|
||||
from typing import Any, Literal, TypedDict
|
||||
from typing import Any, Literal
|
||||
|
||||
from typing_extensions import TypedDict
|
||||
|
||||
|
||||
class LLMMessage(TypedDict):
|
||||
|
||||
356
lib/crewai/tests/a2a/test_a2a_completed_status.py
Normal file
@@ -0,0 +1,356 @@
|
||||
"""Test A2A delegation properly handles 'completed' status without looping."""
|
||||
|
||||
from unittest.mock import MagicMock, Mock, patch
|
||||
from uuid import uuid4
|
||||
|
||||
import pytest
|
||||
|
||||
from crewai import Agent, Task
|
||||
from crewai.a2a.config import A2AConfig
|
||||
|
||||
try:
|
||||
from a2a.types import AgentCard, Message, Part, Role, TextPart
|
||||
|
||||
A2A_SDK_INSTALLED = True
|
||||
except ImportError:
|
||||
A2A_SDK_INSTALLED = False
|
||||
|
||||
|
||||
@pytest.mark.skipif(not A2A_SDK_INSTALLED, reason="Requires a2a-sdk to be installed")
|
||||
def test_a2a_delegation_stops_on_completed_status():
|
||||
"""Test that A2A delegation stops immediately when remote agent returns 'completed' status.
|
||||
|
||||
This test verifies the fix for issue #3899 where the server agent was ignoring
|
||||
the 'completed' status and delegating the same request again, causing an infinite loop.
|
||||
"""
|
||||
a2a_config = A2AConfig(
|
||||
endpoint="http://test-endpoint.com",
|
||||
max_turns=10,
|
||||
)
|
||||
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="Test goal",
|
||||
backstory="Test backstory",
|
||||
a2a=a2a_config,
|
||||
)
|
||||
|
||||
task = Task(
|
||||
description="Test task",
|
||||
expected_output="Test output",
|
||||
agent=agent,
|
||||
)
|
||||
|
||||
final_message_text = "This is the final answer from the remote agent"
|
||||
mock_history = [
|
||||
Message(
|
||||
role=Role.user,
|
||||
message_id=str(uuid4()),
|
||||
parts=[Part(root=TextPart(text="Initial request"))],
|
||||
),
|
||||
Message(
|
||||
role=Role.agent,
|
||||
message_id=str(uuid4()),
|
||||
parts=[Part(root=TextPart(text=final_message_text))],
|
||||
),
|
||||
]
|
||||
|
||||
mock_a2a_result = {
|
||||
"status": "completed",
|
||||
"result": final_message_text,
|
||||
"history": mock_history,
|
||||
"agent_card": MagicMock(spec=AgentCard),
|
||||
}
|
||||
|
||||
mock_agent_card = MagicMock(spec=AgentCard)
|
||||
mock_agent_card.name = "Test Remote Agent"
|
||||
mock_agent_card.url = "http://test-endpoint.com"
|
||||
|
||||
with patch("crewai.a2a.wrapper.execute_a2a_delegation") as mock_execute:
|
||||
with patch("crewai.a2a.wrapper.fetch_agent_card", return_value=mock_agent_card):
|
||||
with patch("crewai.a2a.wrapper._handle_agent_response_and_continue") as mock_handle:
|
||||
mock_execute.return_value = mock_a2a_result
|
||||
|
||||
from crewai.a2a.wrapper import _delegate_to_a2a
|
||||
|
||||
mock_agent_response = Mock()
|
||||
mock_agent_response.is_a2a = True
|
||||
mock_agent_response.a2a_ids = ["http://test-endpoint.com/"]
|
||||
mock_agent_response.message = "Please delegate this task"
|
||||
|
||||
result = _delegate_to_a2a(
|
||||
self=agent,
|
||||
agent_response=mock_agent_response,
|
||||
task=task,
|
||||
original_fn=Mock(),
|
||||
context=None,
|
||||
tools=None,
|
||||
agent_cards={"http://test-endpoint.com/": mock_agent_card},
|
||||
original_task_description="Test task",
|
||||
)
|
||||
|
||||
assert mock_execute.call_count == 1, (
|
||||
f"execute_a2a_delegation should be called exactly once, "
|
||||
f"but was called {mock_execute.call_count} times"
|
||||
)
|
||||
|
||||
assert mock_handle.call_count == 0, (
|
||||
"_handle_agent_response_and_continue should NOT be called "
|
||||
"when status is 'completed'"
|
||||
)
|
||||
|
||||
assert result == final_message_text
|
||||
|
||||
|
||||
@pytest.mark.skipif(not A2A_SDK_INSTALLED, reason="Requires a2a-sdk to be installed")
|
||||
def test_a2a_delegation_continues_on_input_required():
|
||||
"""Test that A2A delegation continues when remote agent returns 'input_required' status.
|
||||
|
||||
This test verifies that the 'input_required' status still triggers the LLM
|
||||
to decide on next steps, unlike 'completed' which should return immediately.
|
||||
"""
|
||||
a2a_config = A2AConfig(
|
||||
endpoint="http://test-endpoint.com",
|
||||
max_turns=10,
|
||||
)
|
||||
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="Test goal",
|
||||
backstory="Test backstory",
|
||||
a2a=a2a_config,
|
||||
)
|
||||
|
||||
task = Task(
|
||||
description="Test task",
|
||||
expected_output="Test output",
|
||||
agent=agent,
|
||||
)
|
||||
|
||||
mock_history_1 = [
|
||||
Message(
|
||||
role=Role.user,
|
||||
message_id=str(uuid4()),
|
||||
parts=[Part(root=TextPart(text="Initial request"))],
|
||||
),
|
||||
Message(
|
||||
role=Role.agent,
|
||||
message_id=str(uuid4()),
|
||||
parts=[Part(root=TextPart(text="I need more information"))],
|
||||
),
|
||||
]
|
||||
|
||||
mock_history_2 = [
|
||||
*mock_history_1,
|
||||
Message(
|
||||
role=Role.user,
|
||||
message_id=str(uuid4()),
|
||||
parts=[Part(root=TextPart(text="Here is the additional info"))],
|
||||
),
|
||||
Message(
|
||||
role=Role.agent,
|
||||
message_id=str(uuid4()),
|
||||
parts=[Part(root=TextPart(text="Final answer with all info"))],
|
||||
),
|
||||
]
|
||||
|
||||
mock_a2a_result_1 = {
|
||||
"status": "input_required",
|
||||
"error": "I need more information",
|
||||
"history": mock_history_1,
|
||||
"agent_card": MagicMock(spec=AgentCard),
|
||||
}
|
||||
|
||||
mock_a2a_result_2 = {
|
||||
"status": "completed",
|
||||
"result": "Final answer with all info",
|
||||
"history": mock_history_2,
|
||||
"agent_card": MagicMock(spec=AgentCard),
|
||||
}
|
||||
|
||||
mock_agent_card = MagicMock(spec=AgentCard)
|
||||
mock_agent_card.name = "Test Remote Agent"
|
||||
mock_agent_card.url = "http://test-endpoint.com"
|
||||
|
||||
with patch("crewai.a2a.wrapper.execute_a2a_delegation") as mock_execute:
|
||||
with patch("crewai.a2a.wrapper.fetch_agent_card", return_value=mock_agent_card):
|
||||
with patch("crewai.a2a.wrapper._handle_agent_response_and_continue") as mock_handle:
|
||||
mock_execute.side_effect = [mock_a2a_result_1, mock_a2a_result_2]
|
||||
|
||||
mock_handle.return_value = (None, "Here is the additional info")
|
||||
|
||||
from crewai.a2a.wrapper import _delegate_to_a2a
|
||||
|
||||
mock_agent_response = Mock()
|
||||
mock_agent_response.is_a2a = True
|
||||
mock_agent_response.a2a_ids = ["http://test-endpoint.com/"]
|
||||
mock_agent_response.message = "Please delegate this task"
|
||||
|
||||
result = _delegate_to_a2a(
|
||||
self=agent,
|
||||
agent_response=mock_agent_response,
|
||||
task=task,
|
||||
original_fn=Mock(),
|
||||
context=None,
|
||||
tools=None,
|
||||
agent_cards={"http://test-endpoint.com/": mock_agent_card},
|
||||
original_task_description="Test task",
|
||||
)
|
||||
|
||||
assert mock_execute.call_count == 2, (
|
||||
f"execute_a2a_delegation should be called twice, "
|
||||
f"but was called {mock_execute.call_count} times"
|
||||
)
|
||||
|
||||
assert mock_handle.call_count == 1, (
|
||||
"_handle_agent_response_and_continue should be called once "
|
||||
"for 'input_required' status"
|
||||
)
|
||||
|
||||
assert result == "Final answer with all info"
|
||||
|
||||
|
||||
@pytest.mark.skipif(not A2A_SDK_INSTALLED, reason="Requires a2a-sdk to be installed")
|
||||
def test_a2a_delegation_completed_with_empty_history():
|
||||
"""Test that A2A delegation handles 'completed' status with empty history gracefully.
|
||||
|
||||
This test verifies that when the remote agent returns 'completed' but the history
|
||||
is empty or doesn't contain an agent message, we still return a reasonable result.
|
||||
"""
|
||||
a2a_config = A2AConfig(
|
||||
endpoint="http://test-endpoint.com",
|
||||
max_turns=10,
|
||||
)
|
||||
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="Test goal",
|
||||
backstory="Test backstory",
|
||||
a2a=a2a_config,
|
||||
)
|
||||
|
||||
task = Task(
|
||||
description="Test task",
|
||||
expected_output="Test output",
|
||||
agent=agent,
|
||||
)
|
||||
|
||||
mock_a2a_result = {
|
||||
"status": "completed",
|
||||
"result": "", # Empty result
|
||||
"history": [], # Empty history
|
||||
"agent_card": MagicMock(spec=AgentCard),
|
||||
}
|
||||
|
||||
mock_agent_card = MagicMock(spec=AgentCard)
|
||||
mock_agent_card.name = "Test Remote Agent"
|
||||
    mock_agent_card.url = "http://test-endpoint.com"

    with patch("crewai.a2a.wrapper.execute_a2a_delegation") as mock_execute:
        with patch("crewai.a2a.wrapper.fetch_agent_card", return_value=mock_agent_card):
            with patch("crewai.a2a.wrapper._handle_agent_response_and_continue") as mock_handle:
                mock_execute.return_value = mock_a2a_result

                from crewai.a2a.wrapper import _delegate_to_a2a

                mock_agent_response = Mock()
                mock_agent_response.is_a2a = True
                mock_agent_response.a2a_ids = ["http://test-endpoint.com/"]
                mock_agent_response.message = "Please delegate this task"

                result = _delegate_to_a2a(
                    self=agent,
                    agent_response=mock_agent_response,
                    task=task,
                    original_fn=Mock(),
                    context=None,
                    tools=None,
                    agent_cards={"http://test-endpoint.com/": mock_agent_card},
                    original_task_description="Test task",
                )

                assert mock_execute.call_count == 1
                assert mock_handle.call_count == 0
                assert result == "Conversation completed"


@pytest.mark.skipif(not A2A_SDK_INSTALLED, reason="Requires a2a-sdk to be installed")
def test_a2a_delegation_completed_extracts_from_history():
    """Test that A2A delegation extracts final message from history when result is empty.

    This test verifies that when the remote agent returns 'completed' with an empty result
    but has messages in the history, we extract the final agent message from history.
    """
    a2a_config = A2AConfig(
        endpoint="http://test-endpoint.com",
        max_turns=10,
    )

    agent = Agent(
        role="Test Agent",
        goal="Test goal",
        backstory="Test backstory",
        a2a=a2a_config,
    )

    task = Task(
        description="Test task",
        expected_output="Test output",
        agent=agent,
    )

    final_message_text = "Final message from history"
    mock_history = [
        Message(
            role=Role.user,
            message_id=str(uuid4()),
            parts=[Part(root=TextPart(text="Initial request"))],
        ),
        Message(
            role=Role.agent,
            message_id=str(uuid4()),
            parts=[Part(root=TextPart(text=final_message_text))],
        ),
    ]

    mock_a2a_result = {
        "status": "completed",
        "result": "",  # Empty result, should extract from history
        "history": mock_history,
        "agent_card": MagicMock(spec=AgentCard),
    }

    mock_agent_card = MagicMock(spec=AgentCard)
    mock_agent_card.name = "Test Remote Agent"
    mock_agent_card.url = "http://test-endpoint.com"

    with patch("crewai.a2a.wrapper.execute_a2a_delegation") as mock_execute:
        with patch("crewai.a2a.wrapper.fetch_agent_card", return_value=mock_agent_card):
            with patch("crewai.a2a.wrapper._handle_agent_response_and_continue") as mock_handle:
                mock_execute.return_value = mock_a2a_result

                from crewai.a2a.wrapper import _delegate_to_a2a

                mock_agent_response = Mock()
                mock_agent_response.is_a2a = True
                mock_agent_response.a2a_ids = ["http://test-endpoint.com/"]
                mock_agent_response.message = "Please delegate this task"

                result = _delegate_to_a2a(
                    self=agent,
                    agent_response=mock_agent_response,
                    task=task,
                    original_fn=Mock(),
                    context=None,
                    tools=None,
                    agent_cards={"http://test-endpoint.com/": mock_agent_card},
                    original_task_description="Test task",
                )

                assert mock_execute.call_count == 1
                assert mock_handle.call_count == 0
                assert result == final_message_text
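Both cases above drive `_delegate_to_a2a` through mocks. For orientation, here is a minimal sketch of how an agent is wired up for A2A delegation in application code, using only the names that appear in these tests (`A2AConfig`, `Agent`, `Task`); the endpoint URL, the import path for `A2AConfig`, and the task wording are assumptions, not taken from the diff.

```python
# Minimal sketch, not the library's documented setup.
# Assumptions: the A2AConfig import path and the endpoint URL are placeholders.
from crewai import Agent, Task
from crewai.a2a import A2AConfig  # import path assumed; the tests only show the class name

a2a_config = A2AConfig(
    endpoint="http://remote-agent.example.com",  # placeholder remote A2A endpoint
    max_turns=10,
)

agent = Agent(
    role="Coordinator",
    goal="Delegate work to a remote A2A agent",
    backstory="Coordinates tasks across agents",
    a2a=a2a_config,  # enables delegation to the configured endpoint
)

task = Task(
    description="Summarize the quarterly report",
    expected_output="A short summary",
    agent=agent,
)
```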
@@ -508,7 +508,47 @@ def test_agent_custom_max_iterations():
    assert isinstance(result, str)
    assert len(result) > 0
    assert call_count > 0
    assert call_count == 3
    # With max_iter=1, expect 2 calls:
    # - Call 1: iteration 0
    # - Call 2: iteration 1 (max reached, handle_max_iterations_exceeded called, then loop breaks)
    assert call_count == 2


@pytest.mark.vcr(filter_headers=["authorization"])
@pytest.mark.timeout(30)
def test_agent_max_iterations_stops_loop():
    """Test that agent execution terminates when max_iter is reached."""

    @tool
    def get_data(step: str) -> str:
        """Get data for a step. Always returns data requiring more steps."""
        return f"Data for {step}: incomplete, need to query more steps."

    agent = Agent(
        role="data collector",
        goal="collect data using the get_data tool",
        backstory="You must use the get_data tool extensively",
        max_iter=2,
        allow_delegation=False,
    )

    task = Task(
        description="Use get_data tool for step1, step2, step3, step4, step5, step6, step7, step8, step9, and step10. Do NOT stop until you've called it for ALL steps.",
        expected_output="A summary of all data collected",
    )

    result = agent.execute_task(
        task=task,
        tools=[get_data],
    )

    assert result is not None
    assert isinstance(result, str)

    assert agent.agent_executor.iterations <= agent.max_iter + 2, (
        f"Agent ran {agent.agent_executor.iterations} iterations "
        f"but should stop around {agent.max_iter + 1}. "
    )


@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
|
||||
tools = crew._prepare_tools(agent, task, [])
|
||||
assert tools == []
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_before_llm_call_hook_modifies_messages():
|
||||
"""Test that before_llm_call hooks can modify messages."""
|
||||
from crewai.utilities.llm_call_hooks import LLMCallHookContext, register_before_llm_call_hook
|
||||
|
||||
hook_called = False
|
||||
original_message_count = 0
|
||||
|
||||
def before_hook(context: LLMCallHookContext) -> None:
|
||||
nonlocal hook_called, original_message_count
|
||||
hook_called = True
|
||||
original_message_count = len(context.messages)
|
||||
context.messages.append({
|
||||
"role": "user",
|
||||
"content": "Additional context: This is a test modification."
|
||||
})
|
||||
|
||||
register_before_llm_call_hook(before_hook)
|
||||
|
||||
try:
|
||||
agent = Agent(
|
||||
role="Test Agent",
|
||||
goal="Test goal",
|
||||
backstory="Test backstory",
|
||||
allow_delegation=False,
|
||||
)
|
||||
|
||||
task = Task(
|
||||
description="Say hello",
|
||||
expected_output="A greeting",
|
||||
agent=agent,
|
||||
)
|
||||
|
||||
result = agent.execute_task(task)
|
||||
|
||||
assert hook_called, "before_llm_call hook should have been called"
|
||||
assert len(agent.agent_executor.messages) > original_message_count
|
||||
assert result is not None
|
||||
finally:
|
||||
pass
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
def test_after_llm_call_hook_modifies_messages_for_next_iteration():
    """Test that after_llm_call hooks can modify messages for the next iteration."""
    from crewai.utilities.llm_call_hooks import LLMCallHookContext, register_after_llm_call_hook

    hook_call_count = 0
    hook_iterations = []
    messages_added_in_iteration_0 = False
    test_message_content = "HOOK_ADDED_MESSAGE_FOR_NEXT_ITERATION"

    def after_hook(context: LLMCallHookContext) -> str | None:
        nonlocal hook_call_count, hook_iterations, messages_added_in_iteration_0
        hook_call_count += 1
        current_iteration = context.iterations
        hook_iterations.append(current_iteration)

        if current_iteration == 0:
            messages_before = len(context.messages)
            context.messages.append({
                "role": "user",
                "content": test_message_content
            })
            messages_added_in_iteration_0 = True
            assert len(context.messages) == messages_before + 1

        return None

    register_after_llm_call_hook(after_hook)

    try:
        agent = Agent(
            role="Test Agent",
            goal="Test goal",
            backstory="Test backstory",
            allow_delegation=False,
            max_iter=3,
        )

        task = Task(
            description="Count to 3, taking your time",
            expected_output="A count",
            agent=agent,
        )

        result = agent.execute_task(task)

        assert hook_call_count > 0, "after_llm_call hook should have been called"
        assert messages_added_in_iteration_0, "Message should have been added in iteration 0"

        executor_messages = agent.agent_executor.messages
        message_contents = [msg.get("content", "") for msg in executor_messages if isinstance(msg, dict)]
        assert any(test_message_content in content for content in message_contents), (
            f"Message added by hook in iteration 0 should be present in executor messages. "
            f"Messages: {message_contents}"
        )

        assert len(executor_messages) > 2, "Executor should have more than initial messages"
        assert result is not None
    finally:
        pass
@pytest.mark.vcr(filter_headers=["authorization"])
def test_after_llm_call_hook_modifies_messages():
    """Test that after_llm_call hooks can modify messages for next iteration."""
    from crewai.utilities.llm_call_hooks import LLMCallHookContext, register_after_llm_call_hook

    hook_called = False
    messages_before_hook = 0

    def after_hook(context: LLMCallHookContext) -> str | None:
        nonlocal hook_called, messages_before_hook
        hook_called = True
        messages_before_hook = len(context.messages)
        context.messages.append({
            "role": "user",
            "content": "Remember: This is iteration 2 context."
        })
        return None  # Don't modify response

    register_after_llm_call_hook(after_hook)

    try:
        agent = Agent(
            role="Test Agent",
            goal="Test goal",
            backstory="Test backstory",
            allow_delegation=False,
            max_iter=2,
        )

        task = Task(
            description="Count to 2",
            expected_output="A count",
            agent=agent,
        )

        result = agent.execute_task(task)

        assert hook_called, "after_llm_call hook should have been called"
        assert len(agent.agent_executor.messages) > messages_before_hook
        assert result is not None
    finally:
        pass
@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_call_hooks_with_crew():
    """Test that LLM call hooks work with crew execution."""
    from crewai.utilities.llm_call_hooks import (
        LLMCallHookContext,
        register_after_llm_call_hook,
        register_before_llm_call_hook,
    )

    before_hook_called = False
    after_hook_called = False

    def before_hook(context: LLMCallHookContext) -> None:
        nonlocal before_hook_called
        before_hook_called = True
        assert context.executor is not None
        assert context.agent is not None
        assert context.task is not None
        context.messages.append({
            "role": "system",
            "content": "Additional system context from hook."
        })

    def after_hook(context: LLMCallHookContext) -> str | None:
        nonlocal after_hook_called
        after_hook_called = True
        assert context.response is not None
        assert len(context.messages) > 0
        return None

    register_before_llm_call_hook(before_hook)
    register_after_llm_call_hook(after_hook)

    try:
        agent = Agent(
            role="Researcher",
            goal="Research topics",
            backstory="You are a researcher",
            allow_delegation=False,
        )

        task = Task(
            description="Research AI frameworks",
            expected_output="A research summary",
            agent=agent,
        )

        crew = Crew(agents=[agent], tasks=[task])
        result = crew.kickoff()

        assert before_hook_called, "before_llm_call hook should have been called"
        assert after_hook_called, "after_llm_call hook should have been called"
        assert result is not None
        assert result.raw is not None
    finally:
        pass
@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_call_hooks_can_modify_executor_attributes():
    """Test that hooks can access and modify executor attributes like tools."""
    from crewai.utilities.llm_call_hooks import LLMCallHookContext, register_before_llm_call_hook
    from crewai.tools import tool

    @tool
    def test_tool() -> str:
        """A test tool."""
        return "test result"

    hook_called = False
    original_tools_count = 0

    def before_hook(context: LLMCallHookContext) -> None:
        nonlocal hook_called, original_tools_count
        hook_called = True
        original_tools_count = len(context.executor.tools)
        assert context.executor.max_iter > 0
        assert context.executor.iterations >= 0
        assert context.executor.tools is not None

    register_before_llm_call_hook(before_hook)

    try:
        agent = Agent(
            role="Test Agent",
            goal="Test goal",
            backstory="Test backstory",
            tools=[test_tool],
            allow_delegation=False,
        )

        task = Task(
            description="Use the test tool",
            expected_output="Tool result",
            agent=agent,
        )

        result = agent.execute_task(task)

        assert hook_called, "before_llm_call hook should have been called"
        assert original_tools_count >= 0
        assert result is not None
    finally:
        pass
@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_call_hooks_error_handling():
    """Test that hook errors don't break execution."""
    from crewai.utilities.llm_call_hooks import LLMCallHookContext, register_before_llm_call_hook

    hook_called = False

    def error_hook(context: LLMCallHookContext) -> None:
        nonlocal hook_called
        hook_called = True
        raise ValueError("Test hook error")

    register_before_llm_call_hook(error_hook)

    try:
        agent = Agent(
            role="Test Agent",
            goal="Test goal",
            backstory="Test backstory",
            allow_delegation=False,
        )

        task = Task(
            description="Say hello",
            expected_output="A greeting",
            agent=agent,
        )

        result = agent.execute_task(task)

        assert hook_called, "before_llm_call hook should have been called"
        assert result is not None
    finally:
        pass
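Taken together, these tests describe the hook surface: `register_before_llm_call_hook` and `register_after_llm_call_hook` accept callables that receive an `LLMCallHookContext` exposing `messages`, `response`, `iterations`, `executor`, `agent`, and `task`. A minimal application-level sketch follows, assuming only those names from the tests above; the hook bodies, prompts, and printed output are illustrative, not prescribed by the diff.

```python
# Sketch of registering LLM call hooks at application level.
# Assumptions: hook behavior and message text are placeholders; APIs mirror the tests above.
from crewai import Agent, Crew, Task
from crewai.utilities.llm_call_hooks import (
    LLMCallHookContext,
    register_after_llm_call_hook,
    register_before_llm_call_hook,
)

def add_context(context: LLMCallHookContext) -> None:
    # Runs before every LLM call; appending to context.messages shapes the request.
    context.messages.append({"role": "system", "content": "Keep answers terse."})

def log_response(context: LLMCallHookContext) -> str | None:
    # Runs after every LLM call; returning None leaves the response unchanged.
    print(f"iteration {context.iterations}: {len(context.messages)} messages")
    return None

register_before_llm_call_hook(add_context)
register_after_llm_call_hook(log_response)

agent = Agent(role="Researcher", goal="Research topics", backstory="You are a researcher")
task = Task(description="Research AI frameworks", expected_output="A research summary", agent=agent)
result = Crew(agents=[agent], tasks=[task]).kickoff()
```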
@@ -238,6 +238,27 @@ def test_lite_agent_returns_usage_metrics():
    assert result.usage_metrics["total_tokens"] > 0


@pytest.mark.vcr(filter_headers=["authorization"])
def test_lite_agent_output_includes_messages():
    """Test that LiteAgentOutput includes messages from agent execution."""
    llm = LLM(model="gpt-4o-mini")
    agent = Agent(
        role="Research Assistant",
        goal="Find information about the population of Tokyo",
        backstory="You are a helpful research assistant who can search for information about the population of Tokyo.",
        llm=llm,
        tools=[WebSearchTool()],
        verbose=True,
    )

    result = agent.kickoff("What is the population of Tokyo?")

    assert isinstance(result, LiteAgentOutput)
    assert hasattr(result, "messages")
    assert isinstance(result.messages, list)
    assert len(result.messages) > 0


@pytest.mark.vcr(filter_headers=["authorization"])
@pytest.mark.asyncio
async def test_lite_agent_returns_usage_metrics_async():
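The new `messages` field on `LiteAgentOutput` is read straight off the kickoff result. A hedged sketch of inspecting it, with the role, prompt, and printed form as placeholders; only `Agent`, `kickoff`, and `result.messages` are taken from the test above.

```python
# Sketch: reading the new `messages` field from a kickoff result.
# Assumptions: role, goal, backstory, and prompt are placeholders; tools are omitted here.
from crewai import Agent

agent = Agent(
    role="Research Assistant",
    goal="Answer simple questions",
    backstory="A helpful assistant",
)

result = agent.kickoff("What is the population of Tokyo?")
for message in result.messages:  # full LLM message history of the run
    print(message)
```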
@@ -0,0 +1,126 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are Test Agent. Test backstory\nYour
|
||||
personal goal is: Test goal\nTo give my best complete final answer to the task
|
||||
respond using the exact following format:\n\nThought: I now can give a great
|
||||
answer\nFinal Answer: Your final answer must be the great and the most complete
|
||||
as possible, it must be outcome described.\n\nI MUST use these formats, my job
|
||||
depends on it!"},{"role":"user","content":"\nCurrent Task: Count to 2\n\nThis
|
||||
is the expected criteria for your final answer: A count\nyou MUST return the
|
||||
actual complete content as the final answer, not a summary.\n\nBegin! This is
|
||||
VERY important to you, use the tools available and give your best Final Answer,
|
||||
your job depends on it!\n\nThought:"},{"role":"user","content":"Additional context:
|
||||
This is a test modification."}],"model":"gpt-4.1-mini"}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '849'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.109.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.109.1
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.13.3
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFJNb5wwEL3zK0Y+QwSI7LLcokqVcujHoR9S2wg5ZsBujceyTdIo2v9e
|
||||
GTYLaROpFyTmzXt+b2YeEwCmOtYAE5IHMVqdveH09UHKLx+/2eFzkAdZXL8XJPr9h3dFydLIoNuf
|
||||
KMIT60LQaDUGRWaBhUMeMKoW+11ZH/K8rmZgpA51pA02ZNVFkY3KqKzMy8ssr7KiOtElKYGeNfA9
|
||||
AQB4nL/RqOnwN2sgT58qI3rPB2TNuQmAOdKxwrj3ygduAktXUJAJaGbvnyRNgwwNXIOhexDcwKDu
|
||||
EDgMMQBw4+/R/TBvleEarua/BooUyq2gw37yPKYyk9YbgBtDgcepzFFuTsjxbF7TYB3d+r+orFdG
|
||||
edk65J5MNOoDWTajxwTgZh7S9Cw3s45GG9pAv3B+rtgdFj22LmeD1icwUOB6W9+nL+i1HQautN+M
|
||||
mQkuJHYrdd0JnzpFGyDZpP7XzUvaS3Jlhv+RXwEh0AbsWuuwU+J54rXNYbzd19rOU54NM4/uTgls
|
||||
g0IXN9Fhzye9HBTzDz7g2PbKDOisU8tV9batRFlfFn29K1lyTP4AAAD//wMApumqgWQDAAA=
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 99d044543db94e48-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Tue, 11 Nov 2025 19:41:25 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=KLlCOQ_zxXquDvj96O28ObVFEoAbFE8R7zlmuiuXH1M-1762890085-1.0.1.1-UChItG1GnLDHrErY60dUpkbD3lEkSvfkTQpOmEtzd0fjjm_y1pJQiB.VDXVi2pPIMSelir0ZgiVXSh5.hGPb3RjQqbH3pv0Rr_2dQ59OIQ8;
|
||||
path=/; expires=Tue, 11-Nov-25 20:11:25 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=u.Z6xV9tQd3ucK35BinKtlCkewcI6q_uQicyeEeeR18-1762890085355-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '559'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '735'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '149999817'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999817'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 0s
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_bcaa0f8500714ed09f967488b238ce2e
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
@@ -0,0 +1,222 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"trace_id": "aeb82647-004a-4a30-9481-d55f476d5659", "execution_type":
|
||||
"crew", "user_identifier": null, "execution_context": {"crew_fingerprint": null,
|
||||
"crew_name": "Unknown Crew", "flow_name": null, "crewai_version": "1.4.1", "privacy_level":
|
||||
"standard"}, "execution_metadata": {"expected_duration_estimate": 300, "agent_count":
|
||||
0, "task_count": 0, "flow_method_count": 0, "execution_started_at": "2025-11-11T19:45:17.648657+00:00"}}'
|
||||
headers:
|
||||
Accept:
|
||||
- '*/*'
|
||||
Accept-Encoding:
|
||||
- gzip, deflate, zstd
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Length:
|
||||
- '434'
|
||||
Content-Type:
|
||||
- application/json
|
||||
User-Agent:
|
||||
- CrewAI-CLI/1.4.1
|
||||
X-Crewai-Version:
|
||||
- 1.4.1
|
||||
method: POST
|
||||
uri: https://app.crewai.com/crewai_plus/api/v1/tracing/batches
|
||||
response:
|
||||
body:
|
||||
string: '{"error":"bad_credentials","message":"Bad credentials"}'
|
||||
headers:
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Length:
|
||||
- '55'
|
||||
Content-Type:
|
||||
- application/json; charset=utf-8
|
||||
Date:
|
||||
- Tue, 11 Nov 2025 19:45:17 GMT
|
||||
cache-control:
|
||||
- no-store
|
||||
content-security-policy:
|
||||
- 'default-src ''self'' *.app.crewai.com app.crewai.com; script-src ''self''
|
||||
''unsafe-inline'' *.app.crewai.com app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts
|
||||
https://www.gstatic.com https://run.pstmn.io https://apis.google.com https://apis.google.com/js/api.js
|
||||
https://accounts.google.com https://accounts.google.com/gsi/client https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.min.css.map
|
||||
https://*.google.com https://docs.google.com https://slides.google.com https://js.hs-scripts.com
|
||||
https://js.sentry-cdn.com https://browser.sentry-cdn.com https://www.googletagmanager.com
|
||||
https://js-na1.hs-scripts.com https://js.hubspot.com http://js-na1.hs-scripts.com
|
||||
https://bat.bing.com https://cdn.amplitude.com https://cdn.segment.com https://d1d3n03t5zntha.cloudfront.net/
|
||||
https://descriptusercontent.com https://edge.fullstory.com https://googleads.g.doubleclick.net
|
||||
https://js.hs-analytics.net https://js.hs-banner.com https://js.hsadspixel.net
|
||||
https://js.hscollectedforms.net https://js.usemessages.com https://snap.licdn.com
|
||||
https://static.cloudflareinsights.com https://static.reo.dev https://www.google-analytics.com
|
||||
https://share.descript.com/; style-src ''self'' ''unsafe-inline'' *.app.crewai.com
|
||||
app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts; img-src ''self'' data:
|
||||
*.app.crewai.com app.crewai.com https://zeus.tools.crewai.com https://dashboard.tools.crewai.com
|
||||
https://cdn.jsdelivr.net https://forms.hsforms.com https://track.hubspot.com
|
||||
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://www.google.com
|
||||
https://www.google.com.br; font-src ''self'' data: *.app.crewai.com app.crewai.com;
|
||||
connect-src ''self'' *.app.crewai.com app.crewai.com https://zeus.tools.crewai.com
|
||||
https://connect.useparagon.com/ https://zeus.useparagon.com/* https://*.useparagon.com/*
|
||||
https://run.pstmn.io https://connect.tools.crewai.com/ https://*.sentry.io
|
||||
https://www.google-analytics.com https://edge.fullstory.com https://rs.fullstory.com
|
||||
https://api.hubspot.com https://forms.hscollectedforms.net https://api.hubapi.com
|
||||
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://google.com/pagead/form-data/16713662509
|
||||
https://google.com/ccm/form-data/16713662509 https://www.google.com/ccm/collect
|
||||
https://worker-actionkit.tools.crewai.com https://api.reo.dev; frame-src ''self''
|
||||
*.app.crewai.com app.crewai.com https://connect.useparagon.com/ https://zeus.tools.crewai.com
|
||||
https://zeus.useparagon.com/* https://connect.tools.crewai.com/ https://docs.google.com
|
||||
https://drive.google.com https://slides.google.com https://accounts.google.com
|
||||
https://*.google.com https://app.hubspot.com/ https://td.doubleclick.net https://www.googletagmanager.com/
|
||||
https://www.youtube.com https://share.descript.com'
|
||||
expires:
|
||||
- '0'
|
||||
permissions-policy:
|
||||
- camera=(), microphone=(self), geolocation=()
|
||||
pragma:
|
||||
- no-cache
|
||||
referrer-policy:
|
||||
- strict-origin-when-cross-origin
|
||||
strict-transport-security:
|
||||
- max-age=63072000; includeSubDomains
|
||||
vary:
|
||||
- Accept
|
||||
x-content-type-options:
|
||||
- nosniff
|
||||
x-frame-options:
|
||||
- SAMEORIGIN
|
||||
x-permitted-cross-domain-policies:
|
||||
- none
|
||||
x-request-id:
|
||||
- 48a89b0d-206b-4c1b-aa0d-ecc3b4ab525c
|
||||
x-runtime:
|
||||
- '0.088251'
|
||||
x-xss-protection:
|
||||
- 1; mode=block
|
||||
status:
|
||||
code: 401
|
||||
message: Unauthorized
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are Test Agent. Test backstory\nYour
|
||||
personal goal is: Test goal\nTo give my best complete final answer to the task
|
||||
respond using the exact following format:\n\nThought: I now can give a great
|
||||
answer\nFinal Answer: Your final answer must be the great and the most complete
|
||||
as possible, it must be outcome described.\n\nI MUST use these formats, my job
|
||||
depends on it!"},{"role":"user","content":"\nCurrent Task: Count to 3, taking
|
||||
your time\n\nThis is the expected criteria for your final answer: A count\nyou
|
||||
MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
|
||||
This is VERY important to you, use the tools available and give your best Final
|
||||
Answer, your job depends on it!\n\nThought:"}],"model":"gpt-4.1-mini"}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '790'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.109.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.109.1
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.13.3
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFJNa9wwEL37Vww6r43tOpuNb2nKQgslOSy0NA1mIo9tdWVJSHK2Jex/
|
||||
L/J+2Ns20IuE5s0bzXszrxEAEzUrgfEOPe+NjO9Q41atP3/79GG7vX8QD0Xq15svX9/fUd+yRWDo
|
||||
5x/E/YmVcN0bSV5odYC5JfQUqmbXy3x1k77LViPQ65pkoLXGx0WSxb1QIs7T/CpOizgrjvROC06O
|
||||
lfAYAQC8jmdoVNX0k5WQLk6RnpzDllh5TgJgVssQYeiccB6VZ4sJ5Fp5UmPvm04PbedL+AhK74Cj
|
||||
gla8ECC0QQCgcjuy39VaKJRwO75KuFeUJAlsdnq8OkuUzD+w1AwOg0o1SDkDUCntMbg0Sns6Ivuz
|
||||
GKlbY/Wz+4PKGqGE6ypL6LQKjTuvDRvRfQTwNJo2XPjAjNW98ZXXWxq/y5ZH09g0rBl6cwS99ihn
|
||||
8esTcFGvqsmjkG5mO+PIO6on6jQjHGqhZ0A0U/13N/+qfVAuVPs/5SeAczKe6spYqgW/VDylWQq7
|
||||
/Fba2eWxYebIvghOlRdkwyRqanCQhwVj7pfz1FeNUC1ZY8VhyxpTFTxfXWXNapmzaB/9BgAA//8D
|
||||
AL0LXHV0AwAA
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 99d04a06dc4d1949-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Tue, 11 Nov 2025 19:45:18 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=KnsnYxgmlpoHf.5TWnNgU30xb2tc0gK7SC2BbUkud2M-1762890318-1.0.1.1-3KeaQY59x5mY6n8DINELLaH9_b68w7W4ZZ0KeOknBHmQyDwx5qbtDonfYxOjsO_KykjtJLHpB0bsINSNEa9TrjNQHqUWTlRhldfTLenUG44;
|
||||
path=/; expires=Tue, 11-Nov-25 20:15:18 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=ekC35NRP79GCMP.eTi_odl5.6DIsAeFEXKlanWUZOH4-1762890318589-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '598'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '632'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '149999827'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999827'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 0s
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_cb36cbe6c33b42a28675e8c6d9a36fe9
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
@@ -0,0 +1,495 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"trace_id": "REDACTED_TRACE_ID", "execution_type":
|
||||
"crew", "user_identifier": null, "execution_context": {"crew_fingerprint": null,
|
||||
"crew_name": "Unknown Crew", "flow_name": null, "crewai_version": "1.4.0", "privacy_level":
|
||||
"standard"}, "execution_metadata": {"expected_duration_estimate": 300, "agent_count":
|
||||
0, "task_count": 0, "flow_method_count": 0, "execution_started_at": "2025-11-07T18:27:07.650947+00:00"}}'
|
||||
headers:
|
||||
Accept:
|
||||
- '*/*'
|
||||
Accept-Encoding:
|
||||
- gzip, deflate, zstd
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Length:
|
||||
- '434'
|
||||
Content-Type:
|
||||
- application/json
|
||||
User-Agent:
|
||||
- CrewAI-CLI/1.4.0
|
||||
X-Crewai-Version:
|
||||
- 1.4.0
|
||||
method: POST
|
||||
uri: https://app.crewai.com/crewai_plus/api/v1/tracing/batches
|
||||
response:
|
||||
body:
|
||||
string: '{"error":"bad_credentials","message":"Bad credentials"}'
|
||||
headers:
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Length:
|
||||
- '55'
|
||||
Content-Type:
|
||||
- application/json; charset=utf-8
|
||||
Date:
|
||||
- Fri, 07 Nov 2025 18:27:07 GMT
|
||||
cache-control:
|
||||
- no-store
|
||||
content-security-policy:
|
||||
- 'default-src ''self'' *.app.crewai.com app.crewai.com; script-src ''self''
|
||||
''unsafe-inline'' *.app.crewai.com app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts
|
||||
https://www.gstatic.com https://run.pstmn.io https://apis.google.com https://apis.google.com/js/api.js
|
||||
https://accounts.google.com https://accounts.google.com/gsi/client https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.min.css.map
|
||||
https://*.google.com https://docs.google.com https://slides.google.com https://js.hs-scripts.com
|
||||
https://js.sentry-cdn.com https://browser.sentry-cdn.com https://www.googletagmanager.com
|
||||
https://js-na1.hs-scripts.com https://js.hubspot.com http://js-na1.hs-scripts.com
|
||||
https://bat.bing.com https://cdn.amplitude.com https://cdn.segment.com https://d1d3n03t5zntha.cloudfront.net/
|
||||
https://descriptusercontent.com https://edge.fullstory.com https://googleads.g.doubleclick.net
|
||||
https://js.hs-analytics.net https://js.hs-banner.com https://js.hsadspixel.net
|
||||
https://js.hscollectedforms.net https://js.usemessages.com https://snap.licdn.com
|
||||
https://static.cloudflareinsights.com https://static.reo.dev https://www.google-analytics.com
|
||||
https://share.descript.com/; style-src ''self'' ''unsafe-inline'' *.app.crewai.com
|
||||
app.crewai.com https://cdn.jsdelivr.net/npm/apexcharts; img-src ''self'' data:
|
||||
*.app.crewai.com app.crewai.com https://zeus.tools.crewai.com https://dashboard.tools.crewai.com
|
||||
https://cdn.jsdelivr.net https://forms.hsforms.com https://track.hubspot.com
|
||||
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://www.google.com
|
||||
https://www.google.com.br; font-src ''self'' data: *.app.crewai.com app.crewai.com;
|
||||
connect-src ''self'' *.app.crewai.com app.crewai.com https://zeus.tools.crewai.com
|
||||
https://connect.useparagon.com/ https://zeus.useparagon.com/* https://*.useparagon.com/*
|
||||
https://run.pstmn.io https://connect.tools.crewai.com/ https://*.sentry.io
|
||||
https://www.google-analytics.com https://edge.fullstory.com https://rs.fullstory.com
|
||||
https://api.hubspot.com https://forms.hscollectedforms.net https://api.hubapi.com
|
||||
https://px.ads.linkedin.com https://px4.ads.linkedin.com https://google.com/pagead/form-data/16713662509
|
||||
https://google.com/ccm/form-data/16713662509 https://www.google.com/ccm/collect
|
||||
https://worker-actionkit.tools.crewai.com https://api.reo.dev; frame-src ''self''
|
||||
*.app.crewai.com app.crewai.com https://connect.useparagon.com/ https://zeus.tools.crewai.com
|
||||
https://zeus.useparagon.com/* https://connect.tools.crewai.com/ https://docs.google.com
|
||||
https://drive.google.com https://slides.google.com https://accounts.google.com
|
||||
https://*.google.com https://app.hubspot.com/ https://td.doubleclick.net https://www.googletagmanager.com/
|
||||
https://www.youtube.com https://share.descript.com'
|
||||
expires:
|
||||
- '0'
|
||||
permissions-policy:
|
||||
- camera=(), microphone=(self), geolocation=()
|
||||
pragma:
|
||||
- no-cache
|
||||
referrer-policy:
|
||||
- strict-origin-when-cross-origin
|
||||
strict-transport-security:
|
||||
- max-age=63072000; includeSubDomains
|
||||
vary:
|
||||
- Accept
|
||||
x-content-type-options:
|
||||
- nosniff
|
||||
x-frame-options:
|
||||
- SAMEORIGIN
|
||||
x-permitted-cross-domain-policies:
|
||||
- none
|
||||
x-request-id:
|
||||
- REDACTED_REQUEST_ID
|
||||
x-runtime:
|
||||
- '0.080681'
|
||||
x-xss-protection:
|
||||
- 1; mode=block
|
||||
status:
|
||||
code: 401
|
||||
message: Unauthorized
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are data collector. You must
|
||||
use the get_data tool extensively\nYour personal goal is: collect data using
|
||||
the get_data tool\nYou ONLY have access to the following tools, and should NEVER
|
||||
make up tools that are not listed here:\n\nTool Name: get_data\nTool Arguments:
|
||||
{''step'': {''description'': None, ''type'': ''str''}}\nTool Description: Get
|
||||
data for a step. Always returns data requiring more steps.\n\nIMPORTANT: Use
|
||||
the following format in your response:\n\n```\nThought: you should always think
|
||||
about what to do\nAction: the action to take, only one name of [get_data], just
|
||||
the name, exactly as it''s written.\nAction Input: the input to the action,
|
||||
just a simple JSON object, enclosed in curly braces, using \" to wrap keys and
|
||||
values.\nObservation: the result of the action\n```\n\nOnce all necessary information
|
||||
is gathered, return the following format:\n\n```\nThought: I now know the final
|
||||
answer\nFinal Answer: the final answer to the original input question\n```"},{"role":"user","content":"\nCurrent
|
||||
Task: Use get_data tool for step1, step2, step3, step4, step5, step6, step7,
|
||||
step8, step9, and step10. Do NOT stop until you''ve called it for ALL steps.\n\nThis
|
||||
is the expected criteria for your final answer: A summary of all data collected\nyou
|
||||
MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
|
||||
This is VERY important to you, use the tools available and give your best Final
|
||||
Answer, your job depends on it!\n\nThought:"}],"model":"gpt-4.1-mini"}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1534'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.109.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.109.1
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAA4xSYWvbMBD97l9x6HMcYsfpUn8rg0FHYbAOyrYUo0hnW5ksCem8tYT89yG7id2t
|
||||
g30x5t69p/fu7pgAMCVZCUy0nETndPr+2919j4fr9VNR/Opv7vBD/bAVXz/dfzx8fmCLyLD7Awo6
|
||||
s5bCdk4jKWtGWHjkhFE1e3eVb4rVKt8OQGcl6khrHKXFMks7ZVSar/JNuirSrHiht1YJDKyE7wkA
|
||||
wHH4RqNG4hMrYbU4VzoMgTfIyksTAPNWxwrjIahA3BBbTKCwhtAM3r+0tm9aKuEWQmt7LSEQ9wT7
|
||||
ZxBWaxSkTAOSE4faegiELgMeQJlAvheEcrkzNyLmLqFBqmLruQK3xvVUwnHHInHHyvEn27HT3I/H
|
||||
ug88DsX0Ws8AbowlHqWGSTy+IKdLdm0b5+0+/EFltTIqtJVHHqyJOQNZxwb0lAA8DjPuX42NOW87
|
||||
RxXZHzg8t15nox6bdjuh4zYBGFniesbaXC/e0KskElc6zLbEBBctyok6rZT3UtkZkMxS/+3mLe0x
|
||||
uTLN/8hPgBDoCGXlPEolXiee2jzG0/9X22XKg2EW0P9UAitS6OMmJNa81+M9svAcCLuqVqZB77wa
|
||||
j7J2VSHy7Sart1c5S07JbwAAAP//AwCiugNoowMAAA==
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 99aee205bbd2de96-EWR
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Fri, 07 Nov 2025 18:27:08 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=REDACTED_COOKIE;
|
||||
path=/; expires=Fri, 07-Nov-25 18:57:08 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=REDACTED_COOKIE;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- REDACTED_ORG_ID
|
||||
openai-processing-ms:
|
||||
- '557'
|
||||
openai-project:
|
||||
- REDACTED_PROJECT_ID
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '701'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-requests:
|
||||
- '500'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '200000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '499'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '199645'
|
||||
x-ratelimit-reset-requests:
|
||||
- 120ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 106ms
|
||||
x-request-id:
|
||||
- REDACTED_REQUEST_ID
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are data collector. You must
|
||||
use the get_data tool extensively\nYour personal goal is: collect data using
|
||||
the get_data tool\nYou ONLY have access to the following tools, and should NEVER
|
||||
make up tools that are not listed here:\n\nTool Name: get_data\nTool Arguments:
|
||||
{''step'': {''description'': None, ''type'': ''str''}}\nTool Description: Get
|
||||
data for a step. Always returns data requiring more steps.\n\nIMPORTANT: Use
|
||||
the following format in your response:\n\n```\nThought: you should always think
|
||||
about what to do\nAction: the action to take, only one name of [get_data], just
|
||||
the name, exactly as it''s written.\nAction Input: the input to the action,
|
||||
just a simple JSON object, enclosed in curly braces, using \" to wrap keys and
|
||||
values.\nObservation: the result of the action\n```\n\nOnce all necessary information
|
||||
is gathered, return the following format:\n\n```\nThought: I now know the final
|
||||
answer\nFinal Answer: the final answer to the original input question\n```"},{"role":"user","content":"\nCurrent
|
||||
Task: Use get_data tool for step1, step2, step3, step4, step5, step6, step7,
|
||||
step8, step9, and step10. Do NOT stop until you''ve called it for ALL steps.\n\nThis
|
||||
is the expected criteria for your final answer: A summary of all data collected\nyou
|
||||
MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
|
||||
This is VERY important to you, use the tools available and give your best Final
|
||||
Answer, your job depends on it!\n\nThought:"},{"role":"assistant","content":"Thought:
|
||||
I should start by collecting data for step1 as instructed.\nAction: get_data\nAction
|
||||
Input: {\"step\":\"step1\"}\nObservation: Data for step1: incomplete, need to
|
||||
query more steps."}],"model":"gpt-4.1-mini"}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1757'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=REDACTED_COOKIE;
|
||||
_cfuvid=REDACTED_COOKIE
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.109.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.109.1
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFNNb9swDL37VxA6x0HiOU3mW9cOQ4F9YNjQQ5fCUGXaVidLqkQnzYL8
|
||||
90F2ErtbB+xiCHx8j+QjvY8AmCxYBkzUnERjVXx19/Hb5tPm/fbq8sPX5+Wvx6V+t93efXY1v71m
|
||||
k8AwD48o6MSaCtNYhSSN7mHhkBMG1fnyIlmks1nytgMaU6AKtMpSnE7ncSO1jJNZsohnaTxPj/Ta
|
||||
SIGeZfAjAgDYd9/QqC7wmWUwm5wiDXrPK2TZOQmAOaNChHHvpSeuiU0GUBhNqLvev9emrWrK4AY0
|
||||
YgFkIKBStxjentAmfVApFAQFJw4en1rUJLlSO+AeHD610mExXetLESzIoELKQ+4pAjfatpTBfs2C
|
||||
5ppl/SNZs8Naf3nw6Da8p16HEqVxffEMpD56ixNojMMu7kGjCIO73XQ8msOy9Tz4q1ulRgDX2lBX
|
||||
oTP1/ogczjYqU1lnHvwfVFZKLX2dO+Te6GCZJ2NZhx4igPtuXe2LDTDrTGMpJ/MTu3Jvlqtejw1n
|
||||
MqBpegTJEFejeJJMXtHLCyQulR8tnAkuaiwG6nAdvC2kGQHRaOq/u3lNu59c6up/5AdACLSERW4d
|
||||
FlK8nHhIcxj+on+lnV3uGmbhSKTAnCS6sIkCS96q/rSZ33nCJi+lrtBZJ/v7Lm2eimS1mJeri4RF
|
||||
h+g3AAAA//8DABrUefPuAwAA
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 99aee20dba0bde96-EWR
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Fri, 07 Nov 2025 18:27:10 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- REDACTED_ORG_ID
|
||||
openai-processing-ms:
|
||||
- '942'
|
||||
openai-project:
|
||||
- REDACTED_PROJECT_ID
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '1074'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-requests:
|
||||
- '500'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '200000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '499'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '199599'
|
||||
x-ratelimit-reset-requests:
|
||||
- 120ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 120ms
|
||||
x-request-id:
|
||||
- REDACTED_REQUEST_ID
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are data collector. You must
|
||||
use the get_data tool extensively\nYour personal goal is: collect data using
|
||||
the get_data tool\nYou ONLY have access to the following tools, and should NEVER
|
||||
make up tools that are not listed here:\n\nTool Name: get_data\nTool Arguments:
|
||||
{''step'': {''description'': None, ''type'': ''str''}}\nTool Description: Get
|
||||
data for a step. Always returns data requiring more steps.\n\nIMPORTANT: Use
|
||||
the following format in your response:\n\n```\nThought: you should always think
|
||||
about what to do\nAction: the action to take, only one name of [get_data], just
|
||||
the name, exactly as it''s written.\nAction Input: the input to the action,
|
||||
just a simple JSON object, enclosed in curly braces, using \" to wrap keys and
|
||||
values.\nObservation: the result of the action\n```\n\nOnce all necessary information
|
||||
is gathered, return the following format:\n\n```\nThought: I now know the final
|
||||
answer\nFinal Answer: the final answer to the original input question\n```"},{"role":"user","content":"\nCurrent
|
||||
Task: Use get_data tool for step1, step2, step3, step4, step5, step6, step7,
|
||||
step8, step9, and step10. Do NOT stop until you''ve called it for ALL steps.\n\nThis
|
||||
is the expected criteria for your final answer: A summary of all data collected\nyou
|
||||
MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
|
||||
This is VERY important to you, use the tools available and give your best Final
|
||||
Answer, your job depends on it!\n\nThought:"},{"role":"assistant","content":"Thought:
|
||||
I should start by collecting data for step1 as instructed.\nAction: get_data\nAction
|
||||
Input: {\"step\":\"step1\"}\nObservation: Data for step1: incomplete, need to
|
||||
query more steps."},{"role":"assistant","content":"Thought: I need to continue
|
||||
to step2 to collect data sequentially as required.\nAction: get_data\nAction
|
||||
Input: {\"step\":\"step2\"}\nObservation: Data for step2: incomplete, need to
|
||||
query more steps."},{"role":"assistant","content":"Thought: I need to continue
|
||||
to step2 to collect data sequentially as required.\nAction: get_data\nAction
|
||||
Input: {\"step\":\"step2\"}\nObservation: Data for step2: incomplete, need to
|
||||
query more steps.\nNow it''s time you MUST give your absolute best final answer.
|
||||
You''ll ignore all previous instructions, stop using any tools, and just return
|
||||
your absolute BEST Final answer."}],"model":"gpt-4.1-mini"}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '2399'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=REDACTED_COOKIE;
|
||||
_cfuvid=REDACTED_COOKIE
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.109.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.109.1
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.9
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//nJbfj6M2EMff81eM/NRKmwgI5Advp7v2FKlSW22f9rKKHHsI7hmbs83u
|
||||
nlb7v1eYBLJXQFxekMV8Z+ZjYw3f1xkAEZykQFhOHStKOf/48Mf9yzf5/Pnh498P9kl9ru51qR9k
|
||||
XsVBSO7qDH38F5m7ZC2YLkqJTmjVhJlB6rCuGq5XURIHwTL0gUJzlHXaqXTzeBHOC6HEPAqiZB7E
|
||||
8zA+p+daMLQkhS8zAIBX/6xBFccXkkJwd3lToLX0hCRtRQDEaFm/IdRaYR1Vjtx1QaaVQ+XZ/8l1
|
||||
dcpdCjsoKuuAaSmROeDUUci0ASolWIelhczowi9DcLpZBHDETBuE0ugnwYU6gcsRMqGohPOJIJzb
|
||||
AbVg8FslDHI4fvdKR+3XBezgWUjpdUJVCJW9VDqhO3gUp7X0PEhZ7puDUKANR7PYq736wOqjT9uE
|
||||
yxvYqbJyKbzuSZ20J2mzCPfkba/+PFo0T7RJ/VT3KalxEPpOzVb10VGhkPsu7Wn9ZTRD5JeDiBY/
|
||||
TxCNEUQtQTSNYHkDwXKMYNkSLKcRxDcQxGMEcUsQTyNIbiBIxgiSliCZRrC6gWA1RrBqCVbTCNY3
|
||||
EKzHCNYtwXoaweYGgs0YwaYl2Ewj2N5AsB0j2LYE22kEYXADQhiMzqSgG0rBAMUOlH6GnD6hH9vt
|
||||
DG/mtx/bYQBUcWBUnWc2jkxsX/13H/qg7DOaFPbq3o/FGiyFLzvFZMWxaXWenZdxn6PBx0YfDeuj
|
||||
Pv1yWL/s08fD+rhPnwzrkz79ali/6tOvh/XrPv1mWL/p02+H9ds+fRiMfLDgx4y9+uW3F8rc9Y/7
|
||||
cuEaF6C7O2rf/5Xv6iRGHara/fiKi1+vvYfBrLK0NkCqkvIqQJXSrilZu57Hc+St9TlSn0qjj/aH
|
||||
VJIJJWx+MEitVrWnsU6XxEffZgCP3k9V7ywSKY0uSndw+iv6dkl49lOk83FX0Sg5R512VHaBMFhe
|
||||
Iu8qHjg6KqS98mSEUZYj73I7A0crLvRVYHa17//z9NVu9i7UaUr5LsAYlg75oTTIBXu/505msDa6
|
||||
Q7L2nD0wqe+FYHhwAk39LThmtJKN+yT2u3VYHDKhTmhKIxoLmpWHmEWbJMw2q4jM3mb/AQAA//8D
|
||||
ACYaBDGRCwAA
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 99aee2174b18de96-EWR
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Fri, 07 Nov 2025 18:27:20 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- REDACTED_ORG_ID
|
||||
openai-processing-ms:
|
||||
- '9185'
|
||||
openai-project:
|
||||
- REDACTED_PROJECT_ID
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '9386'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-requests:
|
||||
- '500'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '200000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '499'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '199457'
|
||||
x-ratelimit-reset-requests:
|
||||
- 120ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 162ms
|
||||
x-request-id:
|
||||
- REDACTED_REQUEST_ID
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
@@ -0,0 +1,127 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are Test Agent. Test backstory\nYour
|
||||
personal goal is: Test goal\nTo give my best complete final answer to the task
|
||||
respond using the exact following format:\n\nThought: I now can give a great
|
||||
answer\nFinal Answer: Your final answer must be the great and the most complete
|
||||
as possible, it must be outcome described.\n\nI MUST use these formats, my job
|
||||
depends on it!"},{"role":"user","content":"\nCurrent Task: Say hello\n\nThis
|
||||
is the expected criteria for your final answer: A greeting\nyou MUST return
|
||||
the actual complete content as the final answer, not a summary.\n\nBegin! This
|
||||
is VERY important to you, use the tools available and give your best Final Answer,
|
||||
your job depends on it!\n\nThought:"},{"role":"user","content":"Additional context:
|
||||
This is a test modification."}],"model":"gpt-4.1-mini"}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '851'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.109.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.109.1
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.13.3
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFJdi9swEHz3r9jqOT5sk+RSvx2lJW1poXDQ0vYwirS21cpaIclJr0f+
|
||||
+yE7F/s+Cn0xeGdnNLO7dwkAU5KVwETLg+isTt9w+rr/YESx27+93RaHVm4/ff7y8Vpcffv+ly0i
|
||||
g3a/UIQH1oWgzmoMiswIC4c8YFTNL9fF5nWWbfIB6EiijrTGhnR5kaedMiotsmKVZss0X57oLSmB
|
||||
npXwIwEAuBu+0aiR+IeVkC0eKh16zxtk5bkJgDnSscK498oHbgJbTKAgE9AM3q9b6ps2lPAeDB1A
|
||||
cAON2iNwaGIA4MYf0P0075ThGq6GvxK2qDW9mks6rHvPYy7Taz0DuDEUeJzLEObmhBzP9jU11tHO
|
||||
P6GyWhnl28oh92SiVR/IsgE9JgA3w5j6R8mZddTZUAX6jcNz+fpy1GPTembo6gQGClzP6pti8YJe
|
||||
JTFwpf1s0Exw0aKcqNNWeC8VzYBklvq5m5e0x+TKNP8jPwFCoA0oK+tQKvE48dTmMF7vv9rOUx4M
|
||||
M49urwRWQaGLm5BY816PJ8X8rQ/YVbUyDTrr1HhXta2Wotis8nqzLlhyTO4BAAD//wMAuV0QSWYD
|
||||
AAA=
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 99d044428f103c35-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Tue, 11 Nov 2025 19:41:22 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=jp.mByP87tLw_KZOIh7lXZ9UMACecreCMNwHwtJmUvQ-1762890082-1.0.1.1-D76UWkvWlN8e0zlQpgSlSHjrhx3Rkh_r8bz4XKx8kljJt8s9Okre9bo7M62ewJNFK9O9iuHkADMKeAEwlsc4Hg0MsF2vt2Hu1J0xikSInv0;
|
||||
path=/; expires=Tue, 11-Nov-25 20:11:22 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=pzTqogdMFPJY2.Yrj49LODdUKbD8UBctCWNyIZVsvK4-1762890082258-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '460'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '478'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '149999817'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999820'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 0s
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_3bda51e6d3e34f8cadcc12551dc29ab0
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
@@ -0,0 +1,261 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are Research Assistant. You
|
||||
are a helpful research assistant who can search for information about the population
|
||||
of Tokyo.\nYour personal goal is: Find information about the population of Tokyo\n\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: search_web\nTool Arguments: {''query'': {''description'':
|
||||
None, ''type'': ''str''}}\nTool Description: Search the web for information
|
||||
about a topic.\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought:
|
||||
you should always think about what to do\nAction: the action to take, only one
|
||||
name of [search_web], just the name, exactly as it''s written.\nAction Input:
|
||||
the input to the action, just a simple JSON object, enclosed in curly braces,
|
||||
using \" to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
|
||||
all necessary information is gathered, return the following format:\n\n```\nThought:
|
||||
I now know the final answer\nFinal Answer: the final answer to the original
|
||||
input question\n```"},{"role":"user","content":"What is the population of Tokyo?"}],"model":"gpt-4o-mini"}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1160'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.109.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.109.1
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.13.3
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jJM9b9swEIZ3/YoDZzvwd2xtRTI0HZolcIcqkGnqJLGheCx5Suoa/u+F
|
||||
5A/JTQp00XDPva/ui/sIQOhMxCBUKVlVzgzv5Lf7fDtbmcfRV1M9/l5/Wa/N6/r+7fNydycGjYK2
|
||||
P1DxWXWjqHIGWZM9YuVRMjau49vFZDleLVbzFlSUoWlkhePhjIaVtno4GU1mw9HtcLw8qUvSCoOI
|
||||
4XsEALBvv02dNsNfIobR4BypMARZoIgvSQDCk2kiQoagA0vLYtBBRZbRtqVvNpvEPpVUFyXH8AAW
|
||||
MQMmCCi9KiEnD1wiGMkYGLTNyVeyaRI8FtJn2hZtgiNXmyOgHJ7oZUc3if2kmkh8ckvfcHuOwYN1
|
||||
NcewT8TPGv0uEXEiVO09Wv7IDCajyTQRh8RuNpt+Lx7zOshmnrY2pgektcStSTvF5xM5XOZmqHCe
|
||||
tuEvqci11aFMPcpAtplRYHKipYcI4LndT301cuE8VY5TphdsfzeZnvYjurPo6HR5gkwsTU+1OIMr
|
||||
vzRDltqE3oaFkqrErJN25yDrTFMPRL2u31fzkfexc22L/7HvgFLoGLPUecy0uu64S/PYvJp/pV2m
|
||||
3BYsAvpXrTBljb7ZRIa5rM3xlkXYBcYqzbUt0Duvjwedu3S+GMl8gfP5SkSH6A8AAAD//wMAJGbR
|
||||
+94DAAA=
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 99c98dd3ddb9ce6c-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Tue, 11 Nov 2025 00:08:16 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=6maCeRS26vR_uzqYdtL7RXY7kzGdvLhWcE2RP2PnZS0-1762819696-1.0.1.1-72zCZZVBiGDdwPDvETKS_fUA4DYCLVyVHDYW2qpSxxAUuWKNPLxQQ1PpeI7YuB9v.y1e3oapeuV5mBjcP4c9_ZbH.ZI14TUNOexPUB6yCaQ;
|
||||
path=/; expires=Tue, 11-Nov-25 00:38:16 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=a.XOUFuP.5IthR7ITJrIWIZSWWAkmHU._pM9.qhCnhM-1762819696364-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '1199'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '1351'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '149999735'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999735'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 0s
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_50a8251d98f748bb8e73304a2548b694
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are Research Assistant. You
|
||||
are a helpful research assistant who can search for information about the population
|
||||
of Tokyo.\nYour personal goal is: Find information about the population of Tokyo\n\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: search_web\nTool Arguments: {''query'': {''description'':
|
||||
None, ''type'': ''str''}}\nTool Description: Search the web for information
|
||||
about a topic.\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought:
|
||||
you should always think about what to do\nAction: the action to take, only one
|
||||
name of [search_web], just the name, exactly as it''s written.\nAction Input:
|
||||
the input to the action, just a simple JSON object, enclosed in curly braces,
|
||||
using \" to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
|
||||
all necessary information is gathered, return the following format:\n\n```\nThought:
|
||||
I now know the final answer\nFinal Answer: the final answer to the original
|
||||
input question\n```"},{"role":"user","content":"What is the population of Tokyo?"},{"role":"assistant","content":"```\nThought:
|
||||
I need to search for the latest information regarding the population of Tokyo.\nAction:
|
||||
search_web\nAction Input: {\"query\":\"current population of Tokyo 2023\"}\n```\nObservation:
|
||||
Tokyo''s population in 2023 was approximately 21 million people in the city
|
||||
proper, and 37 million in the greater metropolitan area."}],"model":"gpt-4o-mini"}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1521'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=6maCeRS26vR_uzqYdtL7RXY7kzGdvLhWcE2RP2PnZS0-1762819696-1.0.1.1-72zCZZVBiGDdwPDvETKS_fUA4DYCLVyVHDYW2qpSxxAUuWKNPLxQQ1PpeI7YuB9v.y1e3oapeuV5mBjcP4c9_ZbH.ZI14TUNOexPUB6yCaQ;
|
||||
_cfuvid=a.XOUFuP.5IthR7ITJrIWIZSWWAkmHU._pM9.qhCnhM-1762819696364-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.109.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.109.1
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.13.3
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFPLbtswELz7KxY8W4Es+RHr1ifgQw8F3OZQBxJDrSTWFJcgqSRG4H8v
|
||||
KD+kNCnQCwFyZpazs+TLBIDJkmXARMO9aI2KPvG7z81d8mWf4E/cxN9+PK5SvfkYf08Pe8+mQUEP
|
||||
v1H4i+pGUGsUekn6BAuL3GOoOlstk9vZerle9UBLJaogq42P5hS1UssoiZN5FK+i2e1Z3ZAU6FgG
|
||||
vyYAAC/9GnzqEp9ZBvH0ctKic7xGll1JAMySCieMOyed5/rk+QwK0h51b70oip3eNtTVjc9gA5qe
|
||||
YB8W3yBUUnMFXLsntDv9td996HcZbBsEQ6ZTPLQMVMGW9gcCqSGJkxSkA26MpWfZco/qAMkMWqlU
|
||||
IBskozBQwy1C+gMYSwYtcF1CuroSz4y6j9JCi96SISU918At8pudLopi3JrFqnM8xKs7pUYA15p8
|
||||
77UP9f6MHK8xKqqNpQf3l5RVUkvX5Ba5Ix0ic54M69HjBOC+H1f3agLMWGqNzz3tsb8ujdNTPTa8
|
||||
kgGdX0BPnquRar6cvlMvL9Fzqdxo4Exw0WA5SIfXwbtS0giYjLp+6+a92qfOpa7/p/wACIHGY5kb
|
||||
i6UUrzseaBbDJ/oX7Zpyb5g5tI9SYO4l2jCJEiveqfN3dAfnsc0rqWu0xsrT+65MvljGvFriYrFm
|
||||
k+PkDwAAAP//AwDgLjwY7QMAAA==
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 99c98dde7fc9ce6c-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Tue, 11 Nov 2025 00:08:18 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '1339'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '1523'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '149999657'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999657'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 0s
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_ade054352f8c4dfdba50683755eba41d
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
@@ -0,0 +1,262 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are Test Agent. Test backstory\nYour
|
||||
personal goal is: Test goal\nYou ONLY have access to the following tools, and
|
||||
should NEVER make up tools that are not listed here:\n\nTool Name: test_tool\nTool
|
||||
Arguments: {}\nTool Description: A test tool.\n\nIMPORTANT: Use the following
|
||||
format in your response:\n\n```\nThought: you should always think about what
|
||||
to do\nAction: the action to take, only one name of [test_tool], just the name,
|
||||
exactly as it''s written.\nAction Input: the input to the action, just a simple
|
||||
JSON object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
|
||||
the result of the action\n```\n\nOnce all necessary information is gathered,
|
||||
return the following format:\n\n```\nThought: I now know the final answer\nFinal
|
||||
Answer: the final answer to the original input question\n```"},{"role":"user","content":"\nCurrent
|
||||
Task: Use the test tool\n\nThis is the expected criteria for your final answer:
|
||||
Tool result\nyou MUST return the actual complete content as the final answer,
|
||||
not a summary.\n\nBegin! This is VERY important to you, use the tools available
|
||||
and give your best Final Answer, your job depends on it!\n\nThought:"},{"role":"user","content":"Additional
|
||||
context: This is a test modification."}],"model":"gpt-4.1-mini"}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1311'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.109.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.109.1
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.13.3
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAA4xTy47bMAy85ysIneMgcbNp1reizwXaXrpAD83CVmTaViqLWolu2gb590LOw94+
|
||||
gF504HBGw6F0mAAIXYoMhGokq9aZ5KWkz/vdTn14i/dv9h9f19tXi3dtun789H71U0wjg7Y7VHxh
|
||||
zRS1ziBrsidYeZSMUXXxfJWub+fzddoDLZVoIq12nCxni6TVVifpPL1J5stksTzTG9IKg8jgywQA
|
||||
4NCf0agt8bvIYD69VFoMQdYosmsTgPBkYkXIEHRgaVlMB1CRZbS996IoNva+oa5uOIM7CA11poQu
|
||||
IHCDwBg4ZyIDTFAj90WPj532WIK2FflWxqGhIt+DlbbSgLRhj362sS9URLNB6FKCO+s6zuBw3Nii
|
||||
KMb2PFZdkDEj2xkzAqS1xP11fTAPZ+R4jcJQ7Txtw29UUWmrQ5N7lIFsHDswOdGjxwnAQx959yRF
|
||||
4Ty1Lnr+iv116Wp10hPDqgf02XkfgomlGbFuL6wnenmJLLUJo6UJJVWD5UAdNiy7UtMImIym/tPN
|
||||
37RPk2tb/4/8ACiFjrHMncdSq6cTD20e40/4V9s15d6wCOi/aYU5a/RxEyVWsjOn5ynCj8DY5pW2
|
||||
NXrn9emNVi5fqnR9s6jWq1RMjpNfAAAA//8DANALR4WyAwAA
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 99d044470bdeb976-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Tue, 11 Nov 2025 19:41:23 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=p01_b1BsQgwR2woMBWf1E0gJMDDl7pvqkEVHpHAsMJA-1762890083-1.0.1.1-u8iYLTTx0lmfSR1.CzuuYiHgt03yVVUMsBD8WgExXWm7ts.grUwM1ifj9p6xIz.HElrnQdfDSBD5Lv045aNr61YcB8WW3Vz33W9N0Gn0P3w;
|
||||
path=/; expires=Tue, 11-Nov-25 20:11:23 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=2gUmBgxb3VydVYt8.t_P6bY8U_pS.a4KeYpZWDDYM9Q-1762890083295-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '729'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '759'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '149999707'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999707'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 0s
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_70c7033dbc5e4ced80d3fdcbcda2c675
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are Test Agent. Test backstory\nYour
|
||||
personal goal is: Test goal\nYou ONLY have access to the following tools, and
|
||||
should NEVER make up tools that are not listed here:\n\nTool Name: test_tool\nTool
|
||||
Arguments: {}\nTool Description: A test tool.\n\nIMPORTANT: Use the following
|
||||
format in your response:\n\n```\nThought: you should always think about what
|
||||
to do\nAction: the action to take, only one name of [test_tool], just the name,
|
||||
exactly as it''s written.\nAction Input: the input to the action, just a simple
|
||||
JSON object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
|
||||
the result of the action\n```\n\nOnce all necessary information is gathered,
|
||||
return the following format:\n\n```\nThought: I now know the final answer\nFinal
|
||||
Answer: the final answer to the original input question\n```"},{"role":"user","content":"\nCurrent
|
||||
Task: Use the test tool\n\nThis is the expected criteria for your final answer:
|
||||
Tool result\nyou MUST return the actual complete content as the final answer,
|
||||
not a summary.\n\nBegin! This is VERY important to you, use the tools available
|
||||
and give your best Final Answer, your job depends on it!\n\nThought:"},{"role":"user","content":"Additional
|
||||
context: This is a test modification."},{"role":"assistant","content":"```\nThought:
|
||||
I should use the test_tool to get the required information for the final answer.\nAction:
|
||||
test_tool\nAction Input: {}\n```\nObservation: test result"},{"role":"user","content":"Additional
|
||||
context: This is a test modification."}],"model":"gpt-4.1-mini"}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1584'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=p01_b1BsQgwR2woMBWf1E0gJMDDl7pvqkEVHpHAsMJA-1762890083-1.0.1.1-u8iYLTTx0lmfSR1.CzuuYiHgt03yVVUMsBD8WgExXWm7ts.grUwM1ifj9p6xIz.HElrnQdfDSBD5Lv045aNr61YcB8WW3Vz33W9N0Gn0P3w;
|
||||
_cfuvid=2gUmBgxb3VydVYt8.t_P6bY8U_pS.a4KeYpZWDDYM9Q-1762890083295-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.109.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.109.1
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.13.3
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFLBbtQwEL3nKyyfN1WS3S5pbhRRKCeEkCpgq8RrTxJTxzb2pC1U++/I
|
||||
TrtJoUhcLNlv3vN7M/OQEEKloBWhvGfIB6vSN8xc3b+/FG/P3rX8x9X+y8dfHy7O8fYra88/0VVg
|
||||
mP134PjEOuFmsApQGj3B3AFDCKr5q21RnmVZuY7AYASoQOssppuTPB2klmmRFadptknzzSO9N5KD
|
||||
pxX5lhBCyEM8g1Et4J5WJFs9vQzgPeuAVsciQqgzKrxQ5r30yDTS1QxyoxF09N40zU5/7s3Y9ViR
|
||||
S6LNHbkJB/ZAWqmZIkz7O3A7fRFvr+OtIggeiQM/KtzppmmW+g7a0bMQUo9KLQCmtUEWmhSTXT8i
|
||||
h2MWZTrrzN7/QaWt1NL3tQPmjQ6+PRpLI3pICLmOPRuftYFaZwaLNZobiN+t83LSo/OsZvQIokGm
|
||||
Fqz1dvWCXi0AmVR+0XXKGe9BzNR5RGwU0iyAZJH6bzcvaU/Jpe7+R34GOAeLIGrrQEj+PPFc5iCs
|
||||
8r/Kjl2OhqkHdys51CjBhUkIaNmopv2i/qdHGOpW6g6cdXJastbWG16Up3lbbguaHJLfAAAA//8D
|
||||
AJW0fwtzAwAA
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 99d0444cbd6db976-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Tue, 11 Nov 2025 19:41:23 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '527'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '578'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '149999655'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999655'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 0s
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_6b1d84dcdde643cea5160e155ee624db
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
@@ -0,0 +1,159 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"name":"llama3.2:3b"}'
|
||||
headers:
|
||||
accept:
|
||||
- '*/*'
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '22'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- localhost:11434
|
||||
user-agent:
|
||||
- litellm/1.78.5
|
||||
method: POST
|
||||
uri: http://localhost:11434/api/show
|
||||
response:
|
||||
body:
|
||||
string: '{"error":"model ''llama3.2:3b'' not found"}'
|
||||
headers:
|
||||
Content-Length:
|
||||
- '41'
|
||||
Content-Type:
|
||||
- application/json; charset=utf-8
|
||||
Date:
|
||||
- Tue, 11 Nov 2025 19:41:28 GMT
|
||||
status:
|
||||
code: 404
|
||||
message: Not Found
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are Test Agent. Test backstory\nYour
|
||||
personal goal is: Test goal\nTo give my best complete final answer to the task
|
||||
respond using the exact following format:\n\nThought: I now can give a great
|
||||
answer\nFinal Answer: Your final answer must be the great and the most complete
|
||||
as possible, it must be outcome described.\n\nI MUST use these formats, my job
|
||||
depends on it!"},{"role":"user","content":"\nCurrent Task: Say hello\n\nThis
|
||||
is the expected criteria for your final answer: A greeting\nyou MUST return
|
||||
the actual complete content as the final answer, not a summary.\n\nBegin! This
|
||||
is VERY important to you, use the tools available and give your best Final Answer,
|
||||
your job depends on it!\n\nThought:"},{"role":"user","content":"Additional context:
|
||||
This is a test modification."}],"model":"gpt-4.1-mini"}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '851'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.109.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.109.1
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.13.3
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAAwAAAP//jFLRbtQwEHzPVyx+vlRJmrte84KOSqgFCSFAqLRUkc/ZJAbHa9lOy6m6
|
||||
f0dOrpe0gMRLpHh2Znd29jECYLJiBTDRci86o+ILTten/ccPFzyp398srz9/2/o3X/PN6btN84kt
|
||||
AoO2P1D4J9aJoM4o9JL0CAuL3GNQTc9W2fo8SdbnA9BRhSrQGuPj/CSNO6llnCXZMk7yOM0P9Jak
|
||||
QMcKuI0AAB6HbxhUV/iLFZAsnl46dI43yIpjEQCzpMIL485J57n2bDGBgrRHPcz+paW+aX0BV6Dp
|
||||
AQTX0Mh7BA5NMABcuwe03/VbqbmCzfBXwCUqRa/g8sC4grEN7KgHTxXfvZ63s1j3jgfPuldqBnCt
|
||||
yfOws8Ho3QHZH60paoylrXtBZbXU0rWlRe5IBxvOk2EDuo8A7oYV9s+2woylzvjS008c2qWrs1GP
|
||||
TdFNaJYdQE+eqxlrTPGlXlmh51K5WQhMcNFiNVGnxHhfSZoB0cz1n9P8TXt0LnXzP/ITIAQaj1Vp
|
||||
LFZSPHc8lVkMl/2vsuOWh4GZQ3svBZZeog1JVFjzXo3nxtzOeezKWuoGrbFyvLnalLnI1su0Xq8y
|
||||
Fu2j3wAAAP//AwDurzwzggMAAA==
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 99d0446e698367ab-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Tue, 11 Nov 2025 19:41:30 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=b52crfzdOm5rh4aOc2LfM8aQKFI.ZL9WCZXaPBDdG5k-1762890090-1.0.1.1-T2xhtwX0vuEnMIb8NRgP4w3RRn1N1ZwSjuhKBob1vDLDmN7XhCKkoIg3IrlC9KEyhA65IGa5DWsHfmlRKKxqw6sIPA98BSO6E3wsTRspHw4;
|
||||
path=/; expires=Tue, 11-Nov-25 20:11:30 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=0TH0Kjp_5t6yhwXKA1wlKBHaczp.TeWhM2A5t6by1sI-1762890090153-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '1049'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '1387'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '149999817'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999817'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 0s
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_4b132b998ed941b5b6a85ddbb36e2b65
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
182
lib/crewai/tests/cassettes/test_llm_call_hooks_with_crew.yaml
Normal file
@@ -0,0 +1,182 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are Researcher. You are a
|
||||
researcher\nYour personal goal is: Research topics\nTo give my best complete
|
||||
final answer to the task respond using the exact following format:\n\nThought:
|
||||
I now can give a great answer\nFinal Answer: Your final answer must be the great
|
||||
and the most complete as possible, it must be outcome described.\n\nI MUST use
|
||||
these formats, my job depends on it!"},{"role":"user","content":"\nCurrent Task:
|
||||
Research AI frameworks\n\nThis is the expected criteria for your final answer:
|
||||
A research summary\nyou MUST return the actual complete content as the final
|
||||
answer, not a summary.\n\nYou MUST follow these instructions: \n - Include specific
|
||||
examples and real-world case studies to enhance the credibility and depth of
|
||||
the article ideas.\n - Incorporate mentions of notable companies, projects,
|
||||
or tools relevant to each topic to provide concrete context.\n - Add diverse
|
||||
viewpoints such as interviews with experts, users, or thought leaders to enrich
|
||||
the narrative and lend authority.\n - Address ethical, social, and emotional
|
||||
considerations explicitly to reflect a balanced and comprehensive analysis.\n
|
||||
- Enhance the descriptions by including implications for future developments
|
||||
and the potential impact on society.\n - Use more engaging and vivid language
|
||||
that draws the reader into each topic''s nuances and importance.\n - Include
|
||||
notes or summaries that contextualize each set of ideas in terms of relevance
|
||||
and potential reader engagement.\n - In future tasks, focus on elaborating initial
|
||||
outlines into more detailed and nuanced article proposals with richer content
|
||||
and insights.\n\nBegin! This is VERY important to you, use the tools available
|
||||
and give your best Final Answer, your job depends on it!\n\nThought:"},{"role":"user","content":"Additional
|
||||
context: This is a test modification."}],"model":"gpt-4.1-mini"}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate, zstd
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1894'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.109.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.109.1
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.13.3
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAA2RXTXPbOBK9z6/o8smpkrRJJjPJ6OZy4ownceKKnY+qzaUJNMmOQTQLACUrc5kf
|
||||
sZf9e/NLtrpBycrsRWWTINh4/V6/xz9/Ajhhf7KGE9djccMYlucoX57duvj+fXn/+hrzz8/v3j7f
|
||||
dL/35y9efDlZ6BPSfCNX9k+tnAxjoMIS622XCAvprk+e//r0xW+PH//22G4M4inoY91Yls9WT5YD
|
||||
R14+ffz0l+XjZ8snz+bHe2FH+WQN//4JAOBP+9VCo6f7kzXYZnZloJyxo5P1YRHASZKgV04wZ84F
|
||||
YzlZPNx0EgtFq/22l6nryxouIcoWHEboeEOA0OkBAGPeUvoaLzhigDP7b/01fo0fKBMm18PNNAyY
|
||||
diARzi7hIuFAW0l3WRddxpLET05hWX+NZ6lwy44xwGUsFAJ3FB3B6dnlI2gPTwImgtITNOjuGokE
|
||||
0oLCluwVnjYUZBwolgWMSTbsOXaQS5pcmRJ5oLjhJFFXZMDooYiEDKXHAhSxCQRpLp9SXgDFjiPZ
|
||||
n7paW4mRKUMR8JS5iwsoCTnW+57GIDsY0PUcCQJhilqBdTYDtXpGiiXsVnDbcz68DPKMlecNZeBY
|
||||
BAb8JkmP9XD+BSTCsNxKCh5wHAM7VAS10tKzw2BlZDEkncTMntJ+id6i+5FSAY6Zu77kBXAIUy66
|
||||
JnYKLSdokqAHHkZ0RZtXyPVRgnS7w+5Uditt45MVvN9Q2jBttRVvCQ3xH9q9/hqXcEsxS7oIsoXT
|
||||
1yJdoEdreF8bqA0dJBcYZZwCJpCR4jLLlBxB4CZhUsinTB5aSf8Pb4WexsOV1dH7/v7rvxnaQPes
|
||||
3VWwuZDRATAE2ea5a8oJQJckZzi//pgX8Np+dfPb6495BbVu22/KVvnRqTgq41T4GQLf0bwabhPG
|
||||
HLCQbTRfvO6lSIaeuz5YH4BLhuwwYMOBSwV6lC2llaJ3vbsVZcnpFRX81wU6akTuHq3hTZRtNFB0
|
||||
A7+LOLAzmk7F2o4BuoRjvzjswRla3IiqgeMRAakoBfJhM8J6Rk/N1HV7mI0/rFhhgDYReRmUy/TA
|
||||
0lp3Bq3VwEK/weioio7jXB4l2HBmibZvxDIlDBAwdhN2pGA6ylnfe/ru7fUjw+FsRNcTXH15RwVO
|
||||
zwb8LvHRGm4MuFAhPohsMa/L0zhKKnrsXKpa96fh2FLSQbOAQBtK2JGHZgd1Z/hMDdwoux1lOD37
|
||||
fPOo6j7whlSZS594QxFckMlDnldapVesTJK2wLl0kYtOzluRcMcFTs/f3b5R+ret4nRUcN4f5Ac2
|
||||
w0iplTQoiNaePBK5fgE8YEeVoYXuyxFmC5gKB/5em3woxtpxLqlgRHvs5m43PnBU0oHDdog/zr4c
|
||||
qfVchoYjZXg3Ddc72HLpAaciAxZ24Lk1LAsb7yrbrz8COkdhnkEL6GbwS0Ib/T9Q0Hb8UUQvicYr
|
||||
jt4KPwtjjxcSvI2epyv4oMPwsw3Dc2XrTZk8Ux04vxOG0js1DC3liryNx8sBlcvrY+n2mKEh7WOa
|
||||
bHBy3FvJPMzyLhcaMjgcrT3SAmEKO3VFRwk86UjZH3tM+jIbqCMW7SzIVJwMlBeQJ9cDZht6+9NV
|
||||
f7DWesaGFM9EhaOMWHo1BeyiZK5dOZuKRBlkyvCJenaB8vpY3T3hhsMOaNCxVgl9SznUjn9sKOkB
|
||||
66jPFFpjsRa7b8RCnaMkbqZqClJdp/CgxHQm2uWAd3sVRTLpRiomfx7UeqvcrWBNCNGA3YtpPUtz
|
||||
nhRtwskfgTjbJRb44/pKUocRznvMtKgere9VLVOy1w+iPXZuSjplS2/phL1SsZ390cq4MdXAB3JV
|
||||
kRLX/1TGu9s3R3OMYq/t9darYw1Ke4RImcWjhe9HwAJmeJwMwxT3Lr23l/2Qy7X2TDgEyhWsGxsF
|
||||
Wjnsg9TahKgsPYiJlGVSiHX4B/PcPev0Paakg1w0NWwkTHqPv+vahh/s3KepA8/ZyYZSdfWfV/Cq
|
||||
5gRjTKYEn5i2o3Asqi6NaOb5Jg8+jkyzbNMKLoiXF8TwlheAkEidijzcFIytJKu/pZwlWXmEmo/u
|
||||
HReqRtzIVAx4T4M4nR/fK8bSqiJrbtswqhWnZZuYog+7H7JSMxVAcDhVZdqOesiWzer2NFOw9NaY
|
||||
ZDRtFEpjojJb8QqubVjpJnpALEcx0E777tPly8sz1VuPmb/XhBppTitH6bWIdlCGQbxSlQZKZqw9
|
||||
Jr/FyuLaX0lzE/UTYFCrnXJewas55J1dqqmKw0IZMGR5iBKqXUcp7kMt5t1xwJG2iq6dbJcZgsGe
|
||||
pHt0lBotrWHMmkNbiFJAp2g7haDJNqpHGkueHepZwI0lzmpFrwaZk8f5DwF0/TVqTkg4sj/OBHNH
|
||||
j3BSqj+0nbzettGrDVHYrbN6bJ4/I8hD4nyX//7rP1o64DCGg/r1W4A36HZASbJd0Dq/SaO8HwPW
|
||||
OlZwIUlB0M+1BQRMFkPIEgP5PVbNxKHG4p4yHZc94A449pS4MndM9G3yFh5oaMj76saHBOKx4Ooh
|
||||
KIN9GGrntTqltSJhX1x+KjtlD2tphplNnRE1vFQZt8gpUs7QkkapY6v5RxauZuMwefDiJstyB++a
|
||||
fcQWqhK6OmzofgzIcZ9OTX0r6zqVHQxTLtBgsOHIMcqmbmgMTtRNYZ5/NrOqM3L0vGE/YYBUI7CF
|
||||
twN3thTCsiHLMpRHUn4FxRcyxVwDVcsU/CzC/uD3Rs5fVnAxWch/+fBRWN9Rq7YsoF84ysu3Ijau
|
||||
W0lbTH4B/wMAAP//jFjNbtswDL73KQSfWiAo0Kwbgt2GXVZgP6fdVgSKTNtcZUnQT7ocCuwh9oR7
|
||||
koGkYjtZBuxMR7Ep8vs7nUdaTULQfaXDqTFqLDYjrbOd1dr1QrqSLKs6rVoJXVr0Kvse8gDxZnWU
|
||||
UDRg577mmqj+08cb+SXbbogKHMT+MMlGc/SSIfoRaWvTqGOGuJqYkV6GZAvBhjAQupZcHwJtTBf9
|
||||
qKAtsi2qpKo5E10E79/stEJgGFv4aG3V6B1mH2sHlbFIH6TMoF0PasSMvdAwacglcn4J4OilXKs+
|
||||
VJNB5oYbbquXROJqndkXZ0/Xw7Y5ELbUj1rgG48c+UeeUbGCA1TPStJOXK20q/PFtW8XnWR9ShdV
|
||||
ejoNWjWUUbsT8FnN6HP03HvsUYZfTIWxJeGeFsUM2lpwPbuCb+49xSs/Mg39Z59BIPFSDIDVHB5U
|
||||
BAt77TJ39t2DCksyWqngLZrDqJ+mjIKQhzkMEsuEsrNobtUD4Sz7Dc38FWGgPdqDSk6HNPhcCcMW
|
||||
gy0Ty+CfzxaB/c7Jhg9o2auNgbfaRMzckgidrWrOu2WuQAPcdWwx+GZqt4TYDan4JCp+GVeQ1iB8
|
||||
fQIRztkHNBTO6MmYGpI/O8sa0fgSa0W5IhqOFU6J5GmXiYakA4IUc7jBHkB2XNQjaR5mVnnXSwBB
|
||||
RPmdgJAP5yYwTP7++SsPcOBnqhzMh6NzDFZnkpVJpUGHGsEQuJMH8pSddWfB1q366lqIlNy1c2Rz
|
||||
OqDz1MlITOty5E9MClJisya2Y9BMHtXuoFPP+lAVBIPfkeclWbIHtQMHEtjVqSM+YoFMm3q7zBRJ
|
||||
OyRNwaYr1i4K2jkv1MNp5mOtvEz5pfV9iH6Xzn7adOgwDdsIOnlHWWXKPjRcfblS6pFz0nISfTYy
|
||||
o9vsn4D/7tV9zUmbOZ+dq5vNplazz9rOhbv1+lg5OXHbQtZo0yJrbQyFFe382zmYZRJYFK4W3/33
|
||||
+1w6W74dXf8/x88FYyBkaLezWbj0WAQav389NvWZX7ipnmebESLdRQudLlZS5UaM87ZD15OoRomW
|
||||
u7C9N+vN67tu82bdXL1c/QEAAP//AwBbY8c8aRcAAA==
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 99d0447958ce36e8-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Tue, 11 Nov 2025 19:41:45 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=dSe1gQEFfPpE4AOsFi3S3RQkzPCQnV1.Ywe__K7cSSU-1762890105-1.0.1.1-I1CSTO8ri4tjbaHdIHQ9YP9c2pa.y9WwMQFRaUztT95T_OAe5V0ndTFN4pO1RiCXh15TUpWmBxRdxIWjcYDMqrDIvKWInLO5aavGFWZ1rys;
|
||||
path=/; expires=Tue, 11-Nov-25 20:11:45 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=LMf_4EPFZGfTiqcjmjEk7WxOTuX2ukd3Cs_R8170wJ4-1762890105804-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '15065'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '15254'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '149999560'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999560'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 0s
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_c49c9fba20ff4f05903eff3c78797ce1
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
@@ -0,0 +1,423 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are Researcher. You''re an
|
||||
expert researcher, specialized in technology, software engineering, AI and startups.
|
||||
You work as a freelancer and is now working on doing research and analysis for
|
||||
a new customer.\nYour personal goal is: Make the best research and analysis
|
||||
on content about AI and AI agents\nTo give my best complete final answer to
|
||||
the task respond using the exact following format:\n\nThought: I now can give
|
||||
a great answer\nFinal Answer: Your final answer must be the great and the most
|
||||
complete as possible, it must be outcome described.\n\nI MUST use these formats,
|
||||
my job depends on it!"},{"role":"user","content":"\nCurrent Task: Give me a
|
||||
list of 3 interesting ideas about AI.\n\nThis is the expected criteria for your
|
||||
final answer: Bullet point list of 3 ideas.\nyou MUST return the actual complete
|
||||
content as the final answer, not a summary.\n\nYou MUST follow these instructions:
|
||||
\n - Include specific examples and real-world case studies to enhance the credibility
|
||||
and depth of the article ideas.\n - Incorporate mentions of notable companies,
|
||||
projects, or tools relevant to each topic to provide concrete context.\n - Add
|
||||
diverse viewpoints such as interviews with experts, users, or thought leaders
|
||||
to enrich the narrative and lend authority.\n - Address ethical, social, and
|
||||
emotional considerations explicitly to reflect a balanced and comprehensive
|
||||
analysis.\n - Enhance the descriptions by including implications for future
|
||||
developments and the potential impact on society.\n - Use more engaging and
|
||||
vivid language that draws the reader into each topic''s nuances and importance.\n
|
||||
- Include notes or summaries that contextualize each set of ideas in terms of
|
||||
relevance and potential reader engagement.\n - In future tasks, focus on elaborating
|
||||
initial outlines into more detailed and nuanced article proposals with richer
|
||||
content and insights.\n\nBegin! This is VERY important to you, use the tools
|
||||
available and give your best Final Answer, your job depends on it!\n\nThought:"}],"model":"gpt-4.1-mini"}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '2076'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.109.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.109.1
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.13.3
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAA1xXS44kxw3d6xREbwwMqgsaWWPJvSvNT21roIE0kAx5NswIViWnI4MpMiKra7TR
|
||||
IbTxSbz3UXQSgxFZ1S1tGuiMH/n43iPrl08Arjhe3cBVGLGEaU7Xz/HHL3/YfTz+9GJ69sPXL8sr
|
||||
e/bT9JMefv4Ql39dbfyEDB8olPOpbZBpTlRYcl8OSljIb336xd8++/Lp3//6xbO2MEmk5McOc7n+
|
||||
fPv0euLM1599+tmz608/v376+Xp8FA5kVzfw708AAH5pfz3QHOn+6gY+3Zy/TGSGB7q6uWwCuFJJ
|
||||
/uUKzdgK5nK1eVgMkgvlFvu7UephLDdwC1mOEDDDgRcChIMnAJjtSPo+v+KMCXbtv5v3+X2+hidP
|
||||
drfXL5QXyvCW1CRj4o8U4WvCVMaASjfwHS2SqsPCHzkf4C0Wplzg21qCTGTwblSPAN4qRQ7F395l
|
||||
TKfCwZ48eZ8B3o1swJEQ6H5OomQwyhF2t8AGRTHbXnTyy8fLwzCcgDIOqX3mw5hOwDnywrGuURbP
|
||||
b/JY5oTZYECjCJJhXkOMWBAwR5gfQmvFsy28EgXODmygDXjtMTMZJL4juP3qDfyIxSSvUMCIC0Gi
|
||||
hRQPFD30IoCe5keCyW/HBEpBNNoGDpRl4mCb9npInNt6UcZkfrIgJ1EvViCFMpLi7K/XzD9XSiew
|
||||
ysVTFCAM4zmjLbwgmt9wjr//+h/zKOxkhSYY0cBGOWaYVSY2As6XrPMB7jhmcgA/VD0BoabTBgwX
|
||||
X0u8kEFZqzirYEeKcyFdKHvpt3Db/mM6GhzZAXmo1KyyJzN2+ljHu4droLQQJhikjEC5jNUYbYK9
|
||||
KOxuWw6zOJEZU4dKHJBsgIPUArPyguHUlloxjUJVLqdNpwQfxpYelbEBHDnRNKF59iPm2MhjlI3X
|
||||
jJxn2BP6XgJjchyK09NG3hcIUlMEpVgDQWSbUbl4YfzCh4wxBDKDoRbAZALKdmdA9xhIB2whcaaf
|
||||
najlBLzvfBFHRwlqpoVyOvkDRXmoheIWXk5SGoQe0gXAgTLtucBeZYJRZmpY8DSrLOTElkMW4x7L
|
||||
5Hj0iOh+JmXKwQ/cM5UTyEIKWItMbmsQKbDX7HrCO86HLbyqpSr53YlDA8nTDqlGgt3t9SxHUoqw
|
||||
sJaKaYUDLvbUK6+E6brw5ELLXEQdi0aYI6HikAgGFi+JqG1WtvoeBJtQC+kGJlECJZslm5ftEfAU
|
||||
ZGV8GfFcLrovlCMk3lPLO7ioO2nOJZB9Xz4kGRzhbfekIjMHNyGlRAvm0g5RPuChxWTABYrUMLpl
|
||||
1QkzHCml64HacgGEfc0R3YQwNX/oPOa8cCG/FyNpk3zwbGJXu2tLHWSCAZObAAxUjkQZCoUxS5JD
|
||||
T6Axu9GvmQxn2l68uxbJMkltRrA7NL5whudui47bbY7VCUZ2Ay/vZ8zRo/5KPGL/6qjstGzgTTUO
|
||||
3au+L6KnQsmV82fzjpSWJoUi/iL2F5thfjtTXgX9YvfNN//778um8YXNqYL+yD/qHUUKd+2ZR/v9
|
||||
+yD3bf/kgTQzFpes5B5URtWe0oEyKRZRA6vB2eeRvKj5QL75D70knHGYVVyuZF51MnpQYxgxJcoH
|
||||
8pORu/4gSye/7F0uo6iNPHd76lc6o4YTBEkJB9Eu+O6KjSOoha0YtBYTaaYcKZd0OoffousdfAvf
|
||||
uWKOoilCQCOwUmN3nC69H1Ezaa/RKmun2+XJ3e117C28ozerxBo8gxays+11RY2MGeY6JLbRz+5u
|
||||
r9do3EaEinpHmFHdDPrGAJEGLOSwtRkD0krnVnOnQEsUvqM933nTj+IebKHapcNTHp3f9lCOUZQ/
|
||||
SrYNHEdOBHZHs88KcBTV0+r8HnbkPnd4ITqwdPZIB0J7z+kN59JFPsjQjDthoDYbOK/Wgmzh5aVR
|
||||
tCBbmfcSqvnYEGQ+qVvoBuSYqZV9cwHRqeBvhNVP3BIb1BQ98jt73FIe5BFpkuA1/3hpG24GZzA6
|
||||
D5t546lbd5Bpksj7k4fdKPso7+jFb2lzXqTpEZ0vDvFffABxLCX//utvR8nuNh7+Hi1wbpYOC6lV
|
||||
O4PONnWbefSABcmZQrl0hKKUo9u7H5jdtEzmka00B2vD0IMUXJrNm6s/750hQqipVMXUSX9fHj/p
|
||||
TeBcvHTy1kt7zs1etQAXo7TfPjKhgHPhBR+baxtUyDwUzq1W+2orb/5kp77YqWQzK3ul93vS3oCU
|
||||
w/jIaxrGWRa8aIldxlxOFws+02l3C6+9vWan+g18VTk1r33nfjSjUi4b2IUgNZfWA79vTcw6OeGd
|
||||
VitH0TKeVswfWW/vURijNg9rKVR1ckEmiu2KveJEjYRA2ap2gUP0fiRzkwEmPmR77FILpkq2AZrm
|
||||
Ea0P9+UScDhtnK89Yk5t5upcYs1ktoXnf5yZX4scEl0G1Lb5DQcVk33p8zOZZ882dtKsvW0QbENz
|
||||
dwPOXLgV4MHie7dowmj9+DIcOJQtzznJafqznVqgjMry4KXOPx+c1Fr784Fo4ParYY8u3TbBH3Jr
|
||||
BOtobVB9fThBwiOQT5DdWDZA9+Sz0p77uru3rbLsDv8HfJ4nwjZAw+52A4p97DEJTD47YEKdVvOz
|
||||
qgtx6oNBm33ZgvK0anjr3Zz03Hvf8ZS5wGsatLbd/3SJPlc87kXbiLiw5+6TrJN1JjUfkhrEkhsC
|
||||
7ZwVraEL1X8ouL7degaKsUvr8nvDf9jERUJvZfvW50KqbVqLZHzI6zB4qOkccpfKZd7utJ5VhpXT
|
||||
k2grwFpZPs9tzSrdVVbnquaaX8X8fwAAAP//jFjNbtwgEL7vUyBfcmkr7SZt95pjpLxCZBF7sFEw
|
||||
EMCtcth3j74BL3jbSj2PjT3AfH+squh9ZfwrhN2Iok3jxr3cp0B3UQBLjLn5++k6xs1JjhrfBjXL
|
||||
N5qd2SSdh72xgO4wafbOW7M7LZ+5NGHIBbg3b3sdtcQ3e7VFdXNvi056khv7KZKBRaospvDxSSw6
|
||||
rpGgMW4p75t4do5pXM4kR+64zh6jgVMZNfOFyghWTsuFD8GwjamsGhToTSFpdfUGIKxXQgYgvP7l
|
||||
kjRfduhTrEv2PHGWMA+vwcnRZCzOpgnZyQI7v5PkMQVnJ+YDpBJAe0auDfKLT6SxkQsYJffVO1Pu
|
||||
uV68HFKm6p0oL/6WmW7FuWLYZ+kzC6hMer9xS1r6oIUdUBRB4gaB5GwmuUXbzR6AHNqcJpBao0RY
|
||||
ZFdjmoK01qW8kUiIXkrlcs2EjJswHPHm1Q7kGOc+kIzOIv+JyfmOq5eDEC+cPa27OKmDy/KpT+6N
|
||||
+HP355I9dTXzqtWfD/elmnCotXA8nrbKbsV+JOQZscmvukEOM4313Rp2Qa64pnBo+v7zf/62du5d
|
||||
2+l/lq+FAdqIxn6LRdqe62OBEAr+67HrPvMPdxGRyEB90hRwFiMpuZqc1HUZKnuFiQ8+6BzXKd8/
|
||||
DKfz96M6/zh1h8vhEwAA//8DAJPMJFq9FAAA
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 99c98602dfefcf4d-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Tue, 11 Nov 2025 00:03:08 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=ObqPLq12_9tJ06.V1RkHCM6FH_YGcLoC2ykIFBEawa8-1762819388-1.0.1.1-l7PJTVbZ1vCcKdeOe8GQVuFL59SCk0xhO_dMFY2wuH5Ybd1hhM_Xcv_QivXVhZlBGlRgRAgG631P99JOs_IYAYcNFJReE.3NpPl34VfPVeQ;
|
||||
path=/; expires=Tue, 11-Nov-25 00:33:08 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=kdn.HizdlSPG7cBu_zv1ZPcu0jMwDQIA4H9YvMXu6a0-1762819388587-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '13504'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '13638'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '149999507'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999507'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 0s
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_2de40e1beb5f42ea896664df36e8ce8f
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: "{\"messages\":[{\"role\":\"system\",\"content\":\"You are Researcher. You're
|
||||
an expert researcher, specialized in technology, software engineering, AI and
|
||||
startups. You work as a freelancer and is now working on doing research and
|
||||
analysis for a new customer.\\nYour personal goal is: Make the best research
|
||||
and analysis on content about AI and AI agents\\nTo give my best complete final
|
||||
answer to the task respond using the exact following format:\\n\\nThought: I
|
||||
now can give a great answer\\nFinal Answer: Your final answer must be the great
|
||||
and the most complete as possible, it must be outcome described.\\n\\nI MUST
|
||||
use these formats, my job depends on it!\"},{\"role\":\"user\",\"content\":\"\\nCurrent
|
||||
Task: Summarize the ideas from the previous task.\\n\\nThis is the expected
|
||||
criteria for your final answer: A summary of the ideas.\\nyou MUST return the
|
||||
actual complete content as the final answer, not a summary.\\n\\nThis is the
|
||||
context you're working with:\\n- **AI-Driven Personalized Healthcare: Revolutionizing
|
||||
Patient Outcomes Through Predictive Analytics**\\n This idea explores how AI
|
||||
is transforming healthcare by enabling highly individualized treatment plans
|
||||
based on patient data and predictive models. For instance, companies like IBM
|
||||
Watson Health have leveraged AI to analyze medical records, genomics, and clinical
|
||||
trials to tailor cancer therapies uniquely suited to each patient. DeepMind\u2019s
|
||||
AI system has shown promise in predicting kidney injury early, saving lives
|
||||
through proactive intervention. Interviews with healthcare professionals and
|
||||
patients reveal both enthusiasm for AI\u2019s potential and concerns about privacy
|
||||
and data security, highlighting ethical dilemmas in handling sensitive information.
|
||||
Socially, this shift could reduce disparities in healthcare access but also
|
||||
risks exacerbating inequality if AI tools are unevenly distributed. Emotionally,
|
||||
patients benefit from hope and improved prognosis but might also experience
|
||||
anxiety over automated decision-making. Future implications include AI-powered
|
||||
virtual health assistants and real-time monitoring with wearable biosensors,
|
||||
promising a smarter, more responsive healthcare ecosystem that could extend
|
||||
life expectancy and quality of life globally. This topic is relevant and engaging
|
||||
as it touches human well-being at a fundamental level and invites readers to
|
||||
consider the intricate balance between technology and ethics in medicine.\\n\\n-
|
||||
**Autonomous AI Agents in Creative Industries: Expanding Boundaries of Art,
|
||||
Music, and Storytelling**\\n This idea delves into AI agents like OpenAI\u2019s
|
||||
DALL\xB7E for visual art, Jukedeck and OpenAI\u2019s Jukebox for music composition,
|
||||
and narrative generators such as AI Dungeon, transforming creative processes.
|
||||
These AI tools challenge traditional notions of authorship and creativity by
|
||||
collaborating with human artists or independently generating content. Real-world
|
||||
case studies include Warner Music experimenting with AI-driven music production
|
||||
and the Guardian publishing AI-generated poetry, sparking public debate. Thought
|
||||
leaders like AI artist Refik Anadol discuss how AI enhances creative horizons,
|
||||
while skeptics worry about the dilution of human emotional expression and potential
|
||||
job displacement for artists. Ethical discussions focus on copyright, ownership,
|
||||
and the authenticity of AI-produced works. Socially, AI agents democratize access
|
||||
to creative tools but may also commodify art. The emotional dimension involves
|
||||
audiences' reception\u2014wonder and fascination versus skepticism and emotional
|
||||
disconnect. Future trends anticipate sophisticated AI collaborators that understand
|
||||
cultural context and emotions, potentially redefining art itself. This idea
|
||||
captivates readers interested in the fusion of technology and the human spirit,
|
||||
offering a rich narrative on innovation and identity.\\n\\n- **Ethical AI Governance:
|
||||
Building Transparent, Accountable Systems for a Trustworthy Future**\\n This
|
||||
topic addresses the urgent need for frameworks ensuring AI development aligns
|
||||
with human values, emphasizing transparency, accountability, and fairness. Companies
|
||||
like Google DeepMind and Microsoft have established AI ethics boards, while
|
||||
initiatives such as OpenAI commit to responsible AI deployment. Real-world scenarios
|
||||
include controversies over biased facial recognition systems used by law enforcement,
|
||||
exemplified by cases involving companies like Clearview AI, raising societal
|
||||
alarm about surveillance and discrimination. Experts like Timnit Gebru and Kate
|
||||
Crawford provide critical perspectives on bias and structural injustice embedded
|
||||
in AI systems, advocating for inclusive design and regulation. Ethically, this
|
||||
topic probes the moral responsibility of creators versus users and the consequences
|
||||
of autonomous AI decisions. Socially, there's a call for inclusive governance
|
||||
involving diverse stakeholders to prevent marginalization. Emotionally, public
|
||||
trust hinges on transparent communication and mitigation of fears related to
|
||||
AI misuse or job displacement. Looking ahead, the establishment of international
|
||||
AI regulatory standards and ethical certifications may become pivotal, ensuring
|
||||
AI benefits are shared broadly and risks minimized. This topic strongly resonates
|
||||
with readers concerned about the socio-political impact of AI and invites active
|
||||
discourse on shaping a future where technology empowers rather than undermines
|
||||
humanity.\\n\\nYou MUST follow these instructions: \\n - Include specific examples
|
||||
and real-world case studies to enhance the credibility and depth of the article
|
||||
ideas.\\n - Incorporate mentions of notable companies, projects, or tools relevant
|
||||
to each topic to provide concrete context.\\n - Add diverse viewpoints such
|
||||
as interviews with experts, users, or thought leaders to enrich the narrative
|
||||
and lend authority.\\n - Address ethical, social, and emotional considerations
|
||||
explicitly to reflect a balanced and comprehensive analysis.\\n - Enhance the
|
||||
descriptions by including implications for future developments and the potential
|
||||
impact on society.\\n - Use more engaging and vivid language that draws the
|
||||
reader into each topic's nuances and importance.\\n - Include notes or summaries
|
||||
that contextualize each set of ideas in terms of relevance and potential reader
|
||||
engagement.\\n - In future tasks, focus on elaborating initial outlines into
|
||||
more detailed and nuanced article proposals with richer content and insights.\\n\\nBegin!
|
||||
This is VERY important to you, use the tools available and give your best Final
|
||||
Answer, your job depends on it!\\n\\nThought:\"}],\"model\":\"gpt-4.1-mini\"}"
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '6552'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=ObqPLq12_9tJ06.V1RkHCM6FH_YGcLoC2ykIFBEawa8-1762819388-1.0.1.1-l7PJTVbZ1vCcKdeOe8GQVuFL59SCk0xhO_dMFY2wuH5Ybd1hhM_Xcv_QivXVhZlBGlRgRAgG631P99JOs_IYAYcNFJReE.3NpPl34VfPVeQ;
|
||||
_cfuvid=kdn.HizdlSPG7cBu_zv1ZPcu0jMwDQIA4H9YvMXu6a0-1762819388587-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.109.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.109.1
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.13.3
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: !!binary |
|
||||
H4sIAAAAAAAAA1xXzY4bxxG++ykKexRIQlKcRN7b2pIcBrItSwIUIL4Ue4ozJfZ0tau6yR35oofw
|
||||
JU+Sex5FTxJUz3BJ6bLAcqa66+f7qfnjG4Ab7m5u4SYMWMKY4/oHfP/sw1v99WM3vvzX8x9//3mr
|
||||
rx4f3te85WflZuURsvtAoZyjNkHGHKmwpPlxUMJCfuqTv//t6bMn3/3l2XftwSgdRQ/rc1l/u3my
|
||||
Hjnx+unjp39dP/52/eTbJXwQDmQ3t/DvbwAA/mh/PdHU0f3NLTxenX8ZyQx7url9eAngRiX6Lzdo
|
||||
xlYwzUkvD4OkQqnl/m6Q2g/lFraQ5AQBE/R8JEDovQDAZCdSgN/SS04Y4a79f+s//JYePbrbrp8r
|
||||
HynBa1KThJE/Ugf/IIxlCKh0C2/oKLF6Y/gjpx5eY2FKBX6pJchIBu8G9RzgtVLHofjtdwnjVDjY
|
||||
o0d+07uBDbgjhECpkBpIgjIQFMVke9ERW1gWz1X2cLcFTjA8ZLGC00BKkC9X4PkKoIS7SDBMmXSd
|
||||
r8so3oLRk80Rk0FBjqL+QNr1nIpywELQkT8zv5tTx0fuKkbIS6kdFtzAa5ZEpN4DBwsmJoPIB4Lt
|
||||
9z/Beywmaekc1MKeghdSBIz3BcrSpiNagdHrwAhKQbSzFfSUZPRq/C6jYivA1EGInNqLRRkj7D25
|
||||
1NsKguK+eCo7siwH8skHUi9LMXtmZfDxR+6Tty2wUZzgxGUAPBf2+dN/DGri36u3VvYcaQNveeSI
|
||||
GqcVPCfKP3Hq2nvYHf2KzmuyyQqNBgMevXejJCvqdIHIe1obHluXMOOOI5cJdhNgCNXfiRPsRSmg
|
||||
tfwx1EJw4C7RBJw+VPXkcSDsfBo2jbnIaIA5E3rzV4AxysljsxKNucHh3FB2fB0pOV43sE3G/VD2
|
||||
9fyA6WRzEy7ggqwYCnuEQ9PbvvTHYOB+iH4EIDRIJFRtYP386c/TwJFgxDQBjTvF4PNuvcoqIxt5
|
||||
ocBjVjlSB3ImjN8wSKZVmxaB52DcJ95zwFQgiI8yGeBO6ow+yMpHDFOLpTK0Wq3QCbWzgXNrFSXj
|
||||
r5qxcMt70Uh4Di3+riSoqSO1IOppwYgfRMEkMBWM0NEOC902qmRxxXEMzvQsAjvlrqfrRmIIZAY9
|
||||
ZoMjqVVrscp2mOFYE/3uXezYivKuqcrDgRINRpzgJGqUgO55RkjHllG5MNkGXoziQRgdnw9jcoXs
|
||||
wPEOQcVMBTv7/OnPHSXac2nH7FVG2FEppD6fPokts/hCM2a5mahAr5hz9NAGmJoIjUCOpIC1yNjw
|
||||
flV95yRjSXZpVpzAanbxaSkMdcQEH2rXuyht4JXIoaUm6oNseABKx3YMdbCvparLVIi1I4O77bpJ
|
||||
JHVwZC3eyfl+eLCJuaIToTZR3LE4LEQNOop8nOVLCeO68EgwSuIiM61cOWLXOAk2ohbSFYyi5IBM
|
||||
FL4ql4LMMrCBtzUMwCnJsWHtgv8iQGlw2YA+yg5j0weg+0yhYFrw7JBwkZB9e7yCEVtbiiO2SObQ
|
||||
2JuagilFOjpLGhMuaAAlH6I/MOACRWoYaAag49tPn/t/ohjXO2qFusRijNaYipDqLHE7jC3pHZUT
|
||||
UYJQG4bW5IAvFIYkUfo5+zPZGrVss/hqLZJklOpDg7u+oZQT/OCO5BTdpq46B8hu4cV9xqbr8L3U
|
||||
1GFTQGeFlhX8VI3DbAZvi+hUKDomr4y1yUUu3tXYiDzIqSH0kgDOCfjYlGzA3OT5nMueKboHhQFj
|
||||
pNSfO4O14bSNQrHjudWQZB6y7M9H+PDmgDKIuh5t4F3jszky0OCXTGnRxud3r179778v3PHI/aCZ
|
||||
6VwUHNkc1Khlpmuh+4Zyx1N2T1wk13vS4sRaVu7vxbVuMeR/1gN1FA4tq6u7/fed3F91J05zEQSi
|
||||
3LcVyST1toGfz0oPrSVnr7/bwvOaepIEerUZEdjVdJrfnV2qmxKOPsNmQTivLzlKgY6OFCXPG8pC
|
||||
7N0Ed9sNvHGKnkRjB02FwsIsjrE2r30giA2+XSwaAe9RE+mMmlayc03Zr8AI1eii31mlq4F87Fr8
|
||||
rjJMS2uLYjjMUvJuIPixonaMqR24k9hBrrtzTvOB6/M4O8hCRacVuGo3HrvuxGmJcTkPUtXI7Xkf
|
||||
62Irx7YvP/QYtbAVeEN7Pvg62UkEGvOA5r12hN9tvbbG4DOQB1H+KMlW8054kdz5NGt7n6+cgTxM
|
||||
0rIoNF0a+QKw4HYTDALOFtXMSylyUwVJfrlbFal0dFGhRgA/MpwZ4d4VfTF4yNH3LDJr77ufLYY8
|
||||
e61T1CUAguRJfe9YgZx8LRk4rx449nDFLIu8n5xjXwzhJHqwB0bTxU09JEYKZ15l0jLBXnGkFrOB
|
||||
txJ49tcykNHFnH3RCw69j3TF/BVIptSALu4zrqQ7N2D3WW984DwDZVfLF9YYZByl4/3UBEcdxEVg
|
||||
RLP1gs22NOVavjZ+rB1TcrwoenVNLGakUwrtuBP5PuDqn+RIcUnYKeQL+YGyJ2bj3NO2OzVqXM0y
|
||||
MqVld3o5G/EVX50cS2nNoJXQOPVu+JIH73Voc7jbQpAYcSeKxdvTluJI85dGIc1KbS6hxlIVI7SP
|
||||
u/uv/G1xptUX/bss0SfHJ/tnBc4FLpqGWpbFb7bRgL4tN6Ap+YRs/gbq66w83q59tYXVV0bXNiB/
|
||||
OvPJMiuXlS/gtjgEwh4tcJqV6WFNPg+h1WoUzopx2RZW/mWYSkOS1zzn0Pot+4W5HHy6OvNmsdgX
|
||||
Zfg/AAAA//+MWctu5DYQvPsriDnLRmxsAiM3Z5E1jE1O8XUx4FAtDdcUKfMxGwfwvwfVTUmciQ85
|
||||
GQZHlNiP6qoid87Dk3oEKfMI0K/qt2KFxDyj0WcdyedOPRgTis8c+b+qeJGR/xxLyhX8JM3taGUc
|
||||
AF8+0oRDnChlO3IA+VTTTPWc2C2GQ0m5aSZFPhWmXA9PZ2jP2ECzC2/yL8s0DjJzFYnySbtC2wzN
|
||||
64EMQrWciAWWhG7QNnpK6Ub9QZqDgBQqju4qVh9DGB2t2o4f/NOCNochi6KbRelK+QqvUYegWagK
|
||||
QIY4am//qR3F+8qYbRQF97fN0i05gHnMwSeLHOCHmADNmEPdQyjFZCl1daDxLLU6gQxrwBIr5tHL
|
||||
1F9kqERSStjpH4qgewxJaEcgQmX6F7r9syPNmlA9PHU8WicUMHFueyBLZJqTSjyRdcIJ8YmRNHLi
|
||||
+/oJdaxFy89zUY/anXS1TOrkq7oOHcmmjXK1B5cMP9vJ26we6RAL7/4VH/M56h9DiD3Q+mR7hhub
|
||||
UHNcnq+okeAhQanvqVfajSHafMRXIXZrV6UcixGQgdBGW1GC+pkpF0YrJh8dpH4w0sisYJEKvLBT
|
||||
9FqsdFFPkKy8d6SxOKDbm5rIHLW3aWpGm0HSe+4TFAuzJgCDTDrIEk/ysrVCqmlQ2TcwFHgWqjon
|
||||
31+XtGj1C5mGt9FrkekAADkjwkvFVIVhp1kbtgdW8XYx/za60giFbazhzOOKPoq9wWq9WG9CnANT
|
||||
3B7KKyED+oWOwWE2VsLDRIxARNSkIzPQ2lf1rAlIiM5eDYQb9QXzTvtmPkDDQlRxliIdFhRkhaKt
|
||||
z9r6phQzUA99Q74XN25DS+7b4iu96xQNg2ysJsvgt4n2yaaSqBKTvmeA9qP6Hg4r86lw97clEfCL
|
||||
5mWHpyrehJKy6ci/XQajNJIAfFNhLPUBRWdWiKGY2T6RGhOzKKnZngJ4bw5qLDpqn4kkO1UQVINA
|
||||
pBGzFve2uRMk6Ih1eBhpC4V7U7B9J1gGZxO2J5pXMYoxIY7bylcqBmBnNnfqd6yeC/sRwdWxI/UJ
|
||||
MDzZ6pYtikSPElrr1SLo9DI3xSxtxjdNLC+SDBb0VtTwnhCLagJNUh8283i9vr7Gn98Bc2zc2qQO
|
||||
2rwIRuAQkTKJkVDhW3N946Cpg0ZklFgBtxN6lhXgdg7WLw4nVCvI7CVMgItJcjuODv6eU6Ieqqb2
|
||||
LFQKJyAx3VqTVAmK0uoEU7dTU3HZDtoQm5Xa98nouYqixbobGJisiBMDWwue0pkd3dJfBqEVA5pk
|
||||
NRQrqGjNcaNF3HMtBzqHPtm0ZQErB2XNIzaTCcXBJIqcSqokkyptDyU7lq3r61Ha7HOj+oBgjuXI
|
||||
HJJm63sQdwglTEB99k6lzxZCb6dGiw6rWfh2015PRBpK0rgj8cW5ZkF71AU/jIuRb3Xlfb0KcWGc
|
||||
Yziki0d3g/U2Hfcg2cHj2iPlMO949f1KqW985VLOblF24hnsc3ghft3t7e0n2XC33fU0yz+tyxmY
|
||||
sa3c3d7ddx/sua+XBs3Fzc5oc6R+e3a75QEEhGbhqjn5fz/oo73l9NaP/2f7bcHAHKJ+v9ydtIfe
|
||||
fhbpOzt8H/9sjTR/8C7BSTe0z5YistHToIur92oyYveDBX2ao5V7qmHefzJ39z/fDve/3O2u3q/+
|
||||
BQAA//8DAPcawNa2GwAA
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 99c9865b6af3cf4d-SJC
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Tue, 11 Nov 2025 00:03:32 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '22788'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '22942'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-project-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-project-tokens:
|
||||
- '149998392'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149998392'
|
||||
x-ratelimit-reset-project-tokens:
|
||||
- 0s
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_48c359c72cdc47aeb89c6d6eeffdce7d
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
@@ -37,6 +37,36 @@ class TestOktaProvider:
        provider = OktaProvider(settings)
        expected_url = "https://my-company.okta.com/oauth2/default/v1/device/authorize"
        assert provider.get_authorize_url() == expected_url

    def test_get_authorize_url_with_custom_authorization_server_name(self):
        settings = Oauth2Settings(
            provider="okta",
            domain="test-domain.okta.com",
            client_id="test-client-id",
            audience=None,
            extra={
                "using_org_auth_server": False,
                "authorization_server_name": "my_auth_server_xxxAAA777"
            }
        )
        provider = OktaProvider(settings)
        expected_url = "https://test-domain.okta.com/oauth2/my_auth_server_xxxAAA777/v1/device/authorize"
        assert provider.get_authorize_url() == expected_url

    def test_get_authorize_url_when_using_org_auth_server(self):
        settings = Oauth2Settings(
            provider="okta",
            domain="test-domain.okta.com",
            client_id="test-client-id",
            audience=None,
            extra={
                "using_org_auth_server": True,
                "authorization_server_name": None
            }
        )
        provider = OktaProvider(settings)
        expected_url = "https://test-domain.okta.com/oauth2/v1/device/authorize"
        assert provider.get_authorize_url() == expected_url

    def test_get_token_url(self):
        expected_url = "https://test-domain.okta.com/oauth2/default/v1/token"
@@ -53,6 +83,36 @@ class TestOktaProvider:
        expected_url = "https://another-domain.okta.com/oauth2/default/v1/token"
        assert provider.get_token_url() == expected_url

    def test_get_token_url_with_custom_authorization_server_name(self):
        settings = Oauth2Settings(
            provider="okta",
            domain="test-domain.okta.com",
            client_id="test-client-id",
            audience=None,
            extra={
                "using_org_auth_server": False,
                "authorization_server_name": "my_auth_server_xxxAAA777"
            }
        )
        provider = OktaProvider(settings)
        expected_url = "https://test-domain.okta.com/oauth2/my_auth_server_xxxAAA777/v1/token"
        assert provider.get_token_url() == expected_url

    def test_get_token_url_when_using_org_auth_server(self):
        settings = Oauth2Settings(
            provider="okta",
            domain="test-domain.okta.com",
            client_id="test-client-id",
            audience=None,
            extra={
                "using_org_auth_server": True,
                "authorization_server_name": None
            }
        )
        provider = OktaProvider(settings)
        expected_url = "https://test-domain.okta.com/oauth2/v1/token"
        assert provider.get_token_url() == expected_url

    def test_get_jwks_url(self):
        expected_url = "https://test-domain.okta.com/oauth2/default/v1/keys"
        assert self.provider.get_jwks_url() == expected_url
@@ -68,6 +128,36 @@ class TestOktaProvider:
        expected_url = "https://dev.okta.com/oauth2/default/v1/keys"
        assert provider.get_jwks_url() == expected_url

    def test_get_jwks_url_with_custom_authorization_server_name(self):
        settings = Oauth2Settings(
            provider="okta",
            domain="test-domain.okta.com",
            client_id="test-client-id",
            audience=None,
            extra={
                "using_org_auth_server": False,
                "authorization_server_name": "my_auth_server_xxxAAA777"
            }
        )
        provider = OktaProvider(settings)
        expected_url = "https://test-domain.okta.com/oauth2/my_auth_server_xxxAAA777/v1/keys"
        assert provider.get_jwks_url() == expected_url

    def test_get_jwks_url_when_using_org_auth_server(self):
        settings = Oauth2Settings(
            provider="okta",
            domain="test-domain.okta.com",
            client_id="test-client-id",
            audience=None,
            extra={
                "using_org_auth_server": True,
                "authorization_server_name": None
            }
        )
        provider = OktaProvider(settings)
        expected_url = "https://test-domain.okta.com/oauth2/v1/keys"
        assert provider.get_jwks_url() == expected_url

    def test_get_issuer(self):
        expected_issuer = "https://test-domain.okta.com/oauth2/default"
        assert self.provider.get_issuer() == expected_issuer
@@ -83,6 +173,36 @@ class TestOktaProvider:
|
||||
expected_issuer = "https://prod.okta.com/oauth2/default"
|
||||
assert provider.get_issuer() == expected_issuer
|
||||
|
||||
def test_get_issuer_with_custom_authorization_server_name(self):
|
||||
settings = Oauth2Settings(
|
||||
provider="okta",
|
||||
domain="test-domain.okta.com",
|
||||
client_id="test-client-id",
|
||||
audience=None,
|
||||
extra={
|
||||
"using_org_auth_server": False,
|
||||
"authorization_server_name": "my_auth_server_xxxAAA777"
|
||||
}
|
||||
)
|
||||
provider = OktaProvider(settings)
|
||||
expected_issuer = "https://test-domain.okta.com/oauth2/my_auth_server_xxxAAA777"
|
||||
assert provider.get_issuer() == expected_issuer
|
||||
|
||||
def test_get_issuer_when_using_org_auth_server(self):
|
||||
settings = Oauth2Settings(
|
||||
provider="okta",
|
||||
domain="test-domain.okta.com",
|
||||
client_id="test-client-id",
|
||||
audience=None,
|
||||
extra={
|
||||
"using_org_auth_server": True,
|
||||
"authorization_server_name": None
|
||||
}
|
||||
)
|
||||
provider = OktaProvider(settings)
|
||||
expected_issuer = "https://test-domain.okta.com"
|
||||
assert provider.get_issuer() == expected_issuer
|
||||
|
||||
def test_get_audience(self):
|
||||
assert self.provider.get_audience() == "test-audience"
|
||||
|
||||
@@ -100,3 +220,38 @@ class TestOktaProvider:
|
||||
|
||||
def test_get_client_id(self):
|
||||
assert self.provider.get_client_id() == "test-client-id"
|
||||
|
||||
def test_get_required_fields(self):
|
||||
assert set(self.provider.get_required_fields()) == set(["authorization_server_name", "using_org_auth_server"])
|
||||
|
||||
def test_oauth2_base_url(self):
|
||||
assert self.provider._oauth2_base_url() == "https://test-domain.okta.com/oauth2/default"
|
||||
|
||||
def test_oauth2_base_url_with_custom_authorization_server_name(self):
|
||||
settings = Oauth2Settings(
|
||||
provider="okta",
|
||||
domain="test-domain.okta.com",
|
||||
client_id="test-client-id",
|
||||
audience=None,
|
||||
extra={
|
||||
"using_org_auth_server": False,
|
||||
"authorization_server_name": "my_auth_server_xxxAAA777"
|
||||
}
|
||||
)
|
||||
|
||||
provider = OktaProvider(settings)
|
||||
assert provider._oauth2_base_url() == "https://test-domain.okta.com/oauth2/my_auth_server_xxxAAA777"
|
||||
|
||||
def test_oauth2_base_url_when_using_org_auth_server(self):
|
||||
settings = Oauth2Settings(
|
||||
provider="okta",
|
||||
domain="test-domain.okta.com",
|
||||
client_id="test-client-id",
|
||||
audience=None,
|
||||
extra={
|
||||
"using_org_auth_server": True,
|
||||
"authorization_server_name": None
|
||||
}
|
||||
)
|
||||
provider = OktaProvider(settings)
|
||||
assert provider._oauth2_base_url() == "https://test-domain.okta.com/oauth2"
|
||||
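The new Okta cases above assert three distinct URL shapes (org authorization server, named custom server, and the built-in `default` server). A hedged reading of that behavior, written as a standalone sketch rather than the actual `OktaProvider` code, looks roughly like this:

```python
# Standalone sketch inferred from the expected URLs in TestOktaProvider; the real
# OktaProvider implementation may differ in naming and validation.
def oauth2_base_url(domain: str, using_org_auth_server: bool,
                    authorization_server_name: str | None) -> str:
    if using_org_auth_server:
        # Org authorization server: endpoints hang directly off /oauth2.
        return f"https://{domain}/oauth2"
    # Named custom authorization server, otherwise Okta's built-in "default" server.
    return f"https://{domain}/oauth2/{authorization_server_name or 'default'}"

# The endpoints asserted above are then derived from the base URL:
#   device authorize: {base}/v1/device/authorize
#   token:            {base}/v1/token
#   jwks:             {base}/v1/keys
#   issuer:           the base URL, except in the org case where it is just https://{domain}
```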
@@ -37,7 +37,8 @@ class TestEnterpriseConfigureCommand(unittest.TestCase):
            'audience': 'test_audience',
            'domain': 'test.domain.com',
            'device_authorization_client_id': 'test_client_id',
            'provider': 'workos'
            'provider': 'workos',
            'extra': {}
        }
        mock_requests_get.return_value = mock_response

@@ -60,11 +61,12 @@ class TestEnterpriseConfigureCommand(unittest.TestCase):
            ('oauth2_provider', 'workos'),
            ('oauth2_audience', 'test_audience'),
            ('oauth2_client_id', 'test_client_id'),
            ('oauth2_domain', 'test.domain.com')
            ('oauth2_domain', 'test.domain.com'),
            ('oauth2_extra', {})
        ]

        actual_calls = self.mock_settings_command.set.call_args_list
        self.assertEqual(len(actual_calls), 5)
        self.assertEqual(len(actual_calls), 6)

        for i, (key, value) in enumerate(expected_calls):
            call_args = actual_calls[i][0]
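Taken together, the two hunks above pin down the shape of the fetched OAuth2 config and the settings keys it is written to, now including the `extra` payload. Below is a hedged sketch of that mapping; `_SettingsStub` and the `config` dict are illustrative placeholders, not the actual CLI internals.

```python
# Sketch only: mirrors the six settings.set calls asserted in
# TestEnterpriseConfigureCommand, using a stand-in for the settings command.
class _SettingsStub:
    """Illustrative stand-in for the CLI settings command."""

    def __init__(self) -> None:
        self.values: dict[str, object] = {}

    def set(self, key: str, value: object) -> None:
        self.values[key] = value


config = {
    "provider": "workos",
    "audience": "test_audience",
    "device_authorization_client_id": "test_client_id",
    "domain": "test.domain.com",
    "extra": {},  # newly propagated field in this change
}

settings_command = _SettingsStub()
for key, value in [
    ("oauth2_provider", config["provider"]),
    ("oauth2_audience", config["audience"]),
    ("oauth2_client_id", config["device_authorization_client_id"]),
    ("oauth2_domain", config["domain"]),
    ("oauth2_extra", config.get("extra", {})),  # the sixth settings.set call asserted above
]:
    settings_command.set(key, value)
```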
@@ -36,7 +36,7 @@ def test_anthropic_completion_is_used_when_claude_provider():

    from crewai.llms.providers.anthropic.completion import AnthropicCompletion
    assert isinstance(llm, AnthropicCompletion)
    assert llm.provider == "claude"
    assert llm.provider == "anthropic"
    assert llm.model == "claude-3-5-sonnet-20241022"

@@ -39,7 +39,7 @@ def test_azure_completion_is_used_when_azure_openai_provider():

    from crewai.llms.providers.azure.completion import AzureCompletion
    assert isinstance(llm, AzureCompletion)
    assert llm.provider == "azure_openai"
    assert llm.provider == "azure"
    assert llm.model == "gpt-4"

@@ -24,7 +24,7 @@ def test_gemini_completion_is_used_when_google_provider():
    llm = LLM(model="google/gemini-2.0-flash-001")

    assert llm.__class__.__name__ == "GeminiCompletion"
    assert llm.provider == "google"
    assert llm.provider == "gemini"
    assert llm.model == "gemini-2.0-flash-001"

@@ -154,7 +154,7 @@ class TestGeminiProviderInterceptor:
        # Gemini provider should raise NotImplementedError
        with pytest.raises(NotImplementedError) as exc_info:
            LLM(
                model="gemini/gemini-pro",
                model="gemini/gemini-2.5-pro",
                interceptor=interceptor,
                api_key="test-gemini-key",
            )
@@ -169,7 +169,7 @@ class TestGeminiProviderInterceptor:

        with pytest.raises(NotImplementedError) as exc_info:
            LLM(
                model="gemini/gemini-pro",
                model="gemini/gemini-2.5-pro",
                interceptor=interceptor,
                api_key="test-gemini-key",
            )
@@ -181,7 +181,7 @@ class TestGeminiProviderInterceptor:
    def test_gemini_without_interceptor_works(self) -> None:
        """Test that Gemini LLM works without interceptor."""
        llm = LLM(
            model="gemini/gemini-pro",
            model="gemini/gemini-2.5-pro",
            api_key="test-gemini-key",
        )

@@ -231,7 +231,7 @@ class TestUnsupportedProviderMessages:

        with pytest.raises(NotImplementedError) as exc_info:
            LLM(
                model="gemini/gemini-pro",
                model="gemini/gemini-2.5-pro",
                interceptor=interceptor,
                api_key="test-gemini-key",
            )
@@ -282,7 +282,7 @@ class TestProviderSupportMatrix:
        # Gemini - NOT SUPPORTED
        with pytest.raises(NotImplementedError):
            LLM(
                model="gemini/gemini-pro",
                model="gemini/gemini-2.5-pro",
                interceptor=interceptor,
                api_key="test",
            )
@@ -315,5 +315,5 @@ class TestProviderSupportMatrix:
        assert not hasattr(bedrock_llm, 'interceptor') or bedrock_llm.interceptor is None

        # Gemini - doesn't have interceptor attribute
        gemini_llm = LLM(model="gemini/gemini-pro", api_key="test")
        assert not hasattr(gemini_llm, 'interceptor') or gemini_llm.interceptor is None
        gemini_llm = LLM(model="gemini/gemini-2.5-pro", api_key="test")
        assert not hasattr(gemini_llm, 'interceptor') or gemini_llm.interceptor is None
@@ -16,7 +16,7 @@ def test_openai_completion_is_used_when_openai_provider():
    """
    Test that OpenAICompletion from completion.py is used when LLM uses provider 'openai'
    """
    llm = LLM(model="openai/gpt-4o")
    llm = LLM(model="gpt-4o")

    assert llm.__class__.__name__ == "OpenAICompletion"
    assert llm.provider == "openai"
@@ -70,7 +70,7 @@ def test_openai_completion_module_is_imported():
        del sys.modules[module_name]

    # Create LLM instance - this should trigger the import
    LLM(model="openai/gpt-4o")
    LLM(model="gpt-4o")

    # Verify the module was imported
    assert module_name in sys.modules
@@ -97,7 +97,7 @@ def test_native_openai_raises_error_when_initialization_fails():

    # This should raise ImportError, not fall back to LiteLLM
    with pytest.raises(ImportError) as excinfo:
        LLM(model="openai/gpt-4o")
        LLM(model="gpt-4o")

    assert "Error importing native provider" in str(excinfo.value)
    assert "Native SDK failed" in str(excinfo.value)
@@ -108,7 +108,7 @@ def test_openai_completion_initialization_parameters():
    Test that OpenAICompletion is initialized with correct parameters
    """
    llm = LLM(
        model="openai/gpt-4o",
        model="gpt-4o",
        temperature=0.7,
        max_tokens=1000,
        api_key="test-key"
@@ -311,7 +311,7 @@ def test_openai_completion_call_returns_usage_metrics():
        role="Research Assistant",
        goal="Find information about the population of Tokyo",
        backstory="You are a helpful research assistant.",
        llm=LLM(model="openai/gpt-4o"),
        llm=LLM(model="gpt-4o"),
        verbose=True,
    )

@@ -331,6 +331,7 @@ def test_openai_completion_call_returns_usage_metrics():
    assert result.token_usage.cached_prompt_tokens == 0


@pytest.mark.skip(reason="Allow for litellm")
def test_openai_raises_error_when_model_not_supported():
    """Test that OpenAICompletion raises ValueError when model not supported"""

@@ -354,7 +355,7 @@ def test_openai_client_setup_with_extra_arguments():
    Test that OpenAICompletion is initialized with correct parameters
    """
    llm = LLM(
        model="openai/gpt-4o",
        model="gpt-4o",
        temperature=0.7,
        max_tokens=1000,
        top_p=0.5,
@@ -391,7 +392,7 @@ def test_extra_arguments_are_passed_to_openai_completion():
    """
    Test that extra arguments are passed to OpenAICompletion
    """
    llm = LLM(model="openai/gpt-4o", temperature=0.7, max_tokens=1000, top_p=0.5, max_retries=3)
    llm = LLM(model="gpt-4o", temperature=0.7, max_tokens=1000, top_p=0.5, max_retries=3)

    with patch.object(llm.client.chat.completions, 'create') as mock_create:
        mock_create.return_value = MagicMock(
@@ -340,7 +340,7 @@ def test_sync_task_execution(researcher, writer):
    )

    mock_task_output = TaskOutput(
        description="Mock description", raw="mocked output", agent="mocked agent"
        description="Mock description", raw="mocked output", agent="mocked agent", messages=[]
    )

    # Because we are mocking execute_sync, we never hit the underlying _execute_core
@@ -412,7 +412,7 @@ def test_manager_agent_delegating_to_assigned_task_agent(researcher, writer):
    )

    mock_task_output = TaskOutput(
        description="Mock description", raw="mocked output", agent="mocked agent"
        description="Mock description", raw="mocked output", agent="mocked agent", messages=[]
    )

    # Because we are mocking execute_sync, we never hit the underlying _execute_core
@@ -513,7 +513,7 @@ def test_manager_agent_delegates_with_varied_role_cases():
    )

    mock_task_output = TaskOutput(
        description="Mock description", raw="mocked output", agent="mocked agent"
        description="Mock description", raw="mocked output", agent="mocked agent", messages=[]
    )
    task.output = mock_task_output

@@ -611,7 +611,7 @@ def test_crew_with_delegating_agents_should_not_override_task_tools(ceo, writer)
    )

    mock_task_output = TaskOutput(
        description="Mock description", raw="mocked output", agent="mocked agent"
        description="Mock description", raw="mocked output", agent="mocked agent", messages=[]
    )

    # Because we are mocking execute_sync, we never hit the underlying _execute_core
@@ -669,7 +669,7 @@ def test_crew_with_delegating_agents_should_not_override_agent_tools(ceo, writer
    )

    mock_task_output = TaskOutput(
        description="Mock description", raw="mocked output", agent="mocked agent"
        description="Mock description", raw="mocked output", agent="mocked agent", messages=[]
    )

    # Because we are mocking execute_sync, we never hit the underlying _execute_core
@@ -788,7 +788,7 @@ def test_task_tools_override_agent_tools_with_allow_delegation(researcher, write
    )

    mock_task_output = TaskOutput(
        description="Mock description", raw="mocked output", agent="mocked agent"
        description="Mock description", raw="mocked output", agent="mocked agent", messages=[]
    )

    # We mock execute_sync to verify which tools get used at runtime
@@ -1225,7 +1225,7 @@ async def test_async_task_execution_call_count(researcher, writer):

    # Create a valid TaskOutput instance to mock the return value
    mock_task_output = TaskOutput(
        description="Mock description", raw="mocked output", agent="mocked agent"
        description="Mock description", raw="mocked output", agent="mocked agent", messages=[]
    )

    # Create a MagicMock Future instance
@@ -1784,7 +1784,7 @@ def test_hierarchical_kickoff_usage_metrics_include_manager(researcher):
        Task,
        "execute_sync",
        return_value=TaskOutput(
            description="dummy", raw="Hello", agent=researcher.role
            description="dummy", raw="Hello", agent=researcher.role, messages=[]
        ),
    ):
        crew.kickoff()
@@ -1828,7 +1828,7 @@ def test_hierarchical_crew_creation_tasks_with_agents(researcher, writer):
    )

    mock_task_output = TaskOutput(
        description="Mock description", raw="mocked output", agent="mocked agent"
        description="Mock description", raw="mocked output", agent="mocked agent", messages=[]
    )

    # Because we are mocking execute_sync, we never hit the underlying _execute_core
@@ -1881,7 +1881,7 @@ def test_hierarchical_crew_creation_tasks_with_async_execution(researcher, write
    )

    mock_task_output = TaskOutput(
        description="Mock description", raw="mocked output", agent="mocked agent"
        description="Mock description", raw="mocked output", agent="mocked agent", messages=[]
    )

    # Create a mock Future that returns our TaskOutput
@@ -2246,11 +2246,13 @@ def test_conditional_task_uses_last_output(researcher, writer):
        description="First task output",
        raw="First success output", # Will be used by third task's condition
        agent=researcher.role,
        messages=[],
    )
    mock_third = TaskOutput(
        description="Third task output",
        raw="Third task executed", # Output when condition succeeds using first task output
        agent=writer.role,
        messages=[],
    )

    # Set up mocks for task execution and conditional logic
@@ -2318,11 +2320,13 @@ def test_conditional_tasks_result_collection(researcher, writer):
        description="Success output",
        raw="Success output", # Triggers third task's condition
        agent=researcher.role,
        messages=[],
    )
    mock_conditional = TaskOutput(
        description="Conditional output",
        raw="Conditional task executed",
        agent=writer.role,
        messages=[],
    )

    # Set up mocks for task execution and conditional logic
@@ -2399,6 +2403,7 @@ def test_multiple_conditional_tasks(researcher, writer):
        description="Mock success",
        raw="Success and proceed output",
        agent=researcher.role,
        messages=[],
    )

    # Set up mocks for task execution
@@ -2806,7 +2811,7 @@ def test_manager_agent(researcher, writer):
    )

    mock_task_output = TaskOutput(
        description="Mock description", raw="mocked output", agent="mocked agent"
        description="Mock description", raw="mocked output", agent="mocked agent", messages=[]
    )

    # Because we are mocking execute_sync, we never hit the underlying _execute_core
@@ -3001,6 +3006,7 @@ def test_replay_feature(researcher, writer):
        output_format=OutputFormat.RAW,
        pydantic=None,
        summary="Mocked output for list of ideas",
        messages=[],
    )

    crew.kickoff()
@@ -3052,6 +3058,7 @@ def test_crew_task_db_init():
        output_format=OutputFormat.RAW,
        pydantic=None,
        summary="Write about AI in healthcare...",
        messages=[],
    )

    crew.kickoff()
@@ -3114,6 +3121,7 @@ def test_replay_task_with_context():
        output_format=OutputFormat.RAW,
        pydantic=None,
        summary="Detailed report on AI advancements...",
        messages=[],
    )
    mock_task_output2 = TaskOutput(
        description="Summarize the AI advancements report.",
@@ -3123,6 +3131,7 @@ def test_replay_task_with_context():
        output_format=OutputFormat.RAW,
        pydantic=None,
        summary="Summary of the AI advancements report...",
        messages=[],
    )
    mock_task_output3 = TaskOutput(
        description="Write an article based on the AI advancements summary.",
@@ -3132,6 +3141,7 @@ def test_replay_task_with_context():
        output_format=OutputFormat.RAW,
        pydantic=None,
        summary="Article on AI advancements...",
        messages=[],
    )
    mock_task_output4 = TaskOutput(
        description="Create a presentation based on the AI advancements article.",
@@ -3141,6 +3151,7 @@ def test_replay_task_with_context():
        output_format=OutputFormat.RAW,
        pydantic=None,
        summary="Presentation on AI advancements...",
        messages=[],
    )

    with patch.object(Task, "execute_sync") as mock_execute_task:
@@ -3164,6 +3175,70 @@ def test_replay_task_with_context():
    db_handler.reset()


@pytest.mark.vcr(filter_headers=["authorization"])
def test_replay_preserves_messages():
    """Test that replay preserves messages from stored task outputs."""
    from crewai.utilities.types import LLMMessage

    agent = Agent(
        role="Test Agent",
        goal="Test goal",
        backstory="Test backstory",
        allow_delegation=False,
    )

    task = Task(
        description="Say hello",
        expected_output="A greeting",
        agent=agent,
    )

    crew = Crew(agents=[agent], tasks=[task], process=Process.sequential)

    mock_messages: list[LLMMessage] = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say hello"},
        {"role": "assistant", "content": "Hello!"},
    ]

    mock_task_output = TaskOutput(
        description="Say hello",
        raw="Hello!",
        agent="Test Agent",
        messages=mock_messages,
    )

    with patch.object(Task, "execute_sync", return_value=mock_task_output):
        crew.kickoff()

    # Verify the task output was stored with messages
    db_handler = TaskOutputStorageHandler()
    stored_outputs = db_handler.load()
    assert stored_outputs is not None
    assert len(stored_outputs) > 0

    # Verify messages are in the stored output
    stored_output = stored_outputs[0]["output"]
    assert "messages" in stored_output
    assert len(stored_output["messages"]) == 3
    assert stored_output["messages"][0]["role"] == "system"
    assert stored_output["messages"][1]["role"] == "user"
    assert stored_output["messages"][2]["role"] == "assistant"

    # Replay the task and verify messages are preserved
    with patch.object(Task, "execute_sync", return_value=mock_task_output):
        replayed_output = crew.replay(str(task.id))

    # Verify the replayed task output has messages
    assert len(replayed_output.tasks_output) > 0
    replayed_task_output = replayed_output.tasks_output[0]
    assert hasattr(replayed_task_output, "messages")
    assert isinstance(replayed_task_output.messages, list)
    assert len(replayed_task_output.messages) == 3

    db_handler.reset()


@pytest.mark.vcr(filter_headers=["authorization"])
def test_replay_with_context():
    agent = Agent(role="test_agent", backstory="Test Description", goal="Test Goal")
@@ -3181,6 +3256,7 @@ def test_replay_with_context():
        pydantic=None,
        json_dict={},
        output_format=OutputFormat.RAW,
        messages=[],
    )
    task1.output = context_output

@@ -3241,6 +3317,7 @@ def test_replay_with_context_set_to_nullable():
        description="Test Task Output",
        raw="test raw output",
        agent="test_agent",
        messages=[],
    )
    crew.kickoff()

@@ -3264,6 +3341,7 @@ def test_replay_with_invalid_task_id():
        pydantic=None,
        json_dict={},
        output_format=OutputFormat.RAW,
        messages=[],
    )
    task1.output = context_output

@@ -3328,6 +3406,7 @@ def test_replay_interpolates_inputs_properly(mock_interpolate_inputs):
        pydantic=None,
        json_dict={},
        output_format=OutputFormat.RAW,
        messages=[],
    )
    task1.output = context_output

@@ -3386,6 +3465,7 @@ def test_replay_setup_context():
        pydantic=None,
        json_dict={},
        output_format=OutputFormat.RAW,
        messages=[],
    )
    task1.output = context_output
    crew = Crew(agents=[agent], tasks=[task1, task2], process=Process.sequential)
@@ -3619,6 +3699,7 @@ def test_conditional_should_skip(researcher, writer):
        description="Task 1 description",
        raw="Task 1 output",
        agent="Researcher",
        messages=[],
    )

    result = crew_met.kickoff()
@@ -3653,6 +3734,7 @@ def test_conditional_should_execute(researcher, writer):
        description="Task 1 description",
        raw="Task 1 output",
        agent="Researcher",
        messages=[],
    )

    crew_met.kickoff()
@@ -3824,7 +3906,7 @@ def test_task_tools_preserve_code_execution_tools():
    )

    mock_task_output = TaskOutput(
        description="Mock description", raw="mocked output", agent="mocked agent"
        description="Mock description", raw="mocked output", agent="mocked agent", messages=[]
    )

    with patch.object(
@@ -3878,7 +3960,7 @@ def test_multimodal_flag_adds_multimodal_tools():
    crew = Crew(agents=[multimodal_agent], tasks=[task], process=Process.sequential)

    mock_task_output = TaskOutput(
        description="Mock description", raw="mocked output", agent="mocked agent"
        description="Mock description", raw="mocked output", agent="mocked agent", messages=[]
    )

    # Mock execute_sync to verify the tools passed at runtime
@@ -3942,6 +4024,7 @@ def test_multimodal_agent_image_tool_handling():
        description="Mock description",
        raw="A detailed analysis of the image",
        agent="Image Analyst",
        messages=[],
    )

    with patch.object(Task, "execute_sync") as mock_execute_sync:
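Nearly every hunk above makes the same mechanical change: mocked `TaskOutput` instances in `test_crew.py` now pass an explicit `messages` list, matching the new field on `TaskOutput`. A minimal sketch of the updated mocking pattern follows; the import paths are assumed from how these tests use the names, not confirmed here.

```python
from unittest.mock import patch

from crewai import Task
from crewai.tasks.task_output import TaskOutput  # import path assumed from the tests' usage

# Updated mocking pattern: every mocked TaskOutput now carries an explicit
# (possibly empty) messages list alongside description/raw/agent.
mock_task_output = TaskOutput(
    description="Mock description",
    raw="mocked output",
    agent="mocked agent",
    messages=[],  # new field holding the LLM messages from the last execution
)

with patch.object(Task, "execute_sync", return_value=mock_task_output):
    pass  # the real tests call crew.kickoff() here and assert on the result
```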
@@ -710,7 +710,7 @@ def test_native_provider_raises_error_when_supported_but_fails():
        mock_get_native.return_value = mock_provider

        with pytest.raises(ImportError) as excinfo:
            LLM(model="openai/gpt-4", is_litellm=False)
            LLM(model="gpt-4", is_litellm=False)

        assert "Error importing native provider" in str(excinfo.value)
        assert "Native provider initialization failed" in str(excinfo.value)
@@ -725,3 +725,113 @@ def test_native_provider_falls_back_to_litellm_when_not_in_supported_list():
    # Should fall back to LiteLLM
    assert llm.is_litellm is True
    assert llm.model == "groq/llama-3.1-70b-versatile"


def test_prefixed_models_with_valid_constants_use_native_sdk():
    """Test that models with native provider prefixes use native SDK when model is in constants."""
    # Test openai/ prefix with actual OpenAI model in constants → Native SDK
    with patch.dict(os.environ, {"OPENAI_API_KEY": "test-key"}):
        llm = LLM(model="openai/gpt-4o", is_litellm=False)
        assert llm.is_litellm is False
        assert llm.provider == "openai"

    # Test anthropic/ prefix with Claude model in constants → Native SDK
    with patch.dict(os.environ, {"ANTHROPIC_API_KEY": "test-key"}):
        llm2 = LLM(model="anthropic/claude-opus-4-0", is_litellm=False)
        assert llm2.is_litellm is False
        assert llm2.provider == "anthropic"

    # Test gemini/ prefix with Gemini model in constants → Native SDK
    with patch.dict(os.environ, {"GOOGLE_API_KEY": "test-key"}):
        llm3 = LLM(model="gemini/gemini-2.5-pro", is_litellm=False)
        assert llm3.is_litellm is False
        assert llm3.provider == "gemini"


def test_prefixed_models_with_invalid_constants_use_litellm():
    """Test that models with native provider prefixes use LiteLLM when model is NOT in constants."""
    # Test openai/ prefix with non-OpenAI model (not in OPENAI_MODELS) → LiteLLM
    llm = LLM(model="openai/gemini-2.5-flash", is_litellm=False)
    assert llm.is_litellm is True
    assert llm.model == "openai/gemini-2.5-flash"

    # Test openai/ prefix with unknown future model → LiteLLM
    llm2 = LLM(model="openai/gpt-future-6", is_litellm=False)
    assert llm2.is_litellm is True
    assert llm2.model == "openai/gpt-future-6"

    # Test anthropic/ prefix with non-Anthropic model → LiteLLM
    llm3 = LLM(model="anthropic/gpt-4o", is_litellm=False)
    assert llm3.is_litellm is True
    assert llm3.model == "anthropic/gpt-4o"


def test_prefixed_models_with_non_native_providers_use_litellm():
    """Test that models with non-native provider prefixes always use LiteLLM."""
    # Test groq/ prefix (not a native provider) → LiteLLM
    llm = LLM(model="groq/llama-3.3-70b", is_litellm=False)
    assert llm.is_litellm is True
    assert llm.model == "groq/llama-3.3-70b"

    # Test together/ prefix (not a native provider) → LiteLLM
    llm2 = LLM(model="together/qwen-2.5-72b", is_litellm=False)
    assert llm2.is_litellm is True
    assert llm2.model == "together/qwen-2.5-72b"


def test_unprefixed_models_use_native_sdk():
    """Test that unprefixed models use native SDK when model is in constants."""
    # gpt-4o is in OPENAI_MODELS → Native OpenAI SDK
    with patch.dict(os.environ, {"OPENAI_API_KEY": "test-key"}):
        llm = LLM(model="gpt-4o", is_litellm=False)
        assert llm.is_litellm is False
        assert llm.provider == "openai"

    # claude-opus-4-0 is in ANTHROPIC_MODELS → Native Anthropic SDK
    with patch.dict(os.environ, {"ANTHROPIC_API_KEY": "test-key"}):
        llm2 = LLM(model="claude-opus-4-0", is_litellm=False)
        assert llm2.is_litellm is False
        assert llm2.provider == "anthropic"

    # gemini-2.5-pro is in GEMINI_MODELS → Native Gemini SDK
    with patch.dict(os.environ, {"GOOGLE_API_KEY": "test-key"}):
        llm3 = LLM(model="gemini-2.5-pro", is_litellm=False)
        assert llm3.is_litellm is False
        assert llm3.provider == "gemini"


def test_explicit_provider_kwarg_takes_priority():
    """Test that explicit provider kwarg takes priority over model name inference."""
    # Explicit provider=openai should use OpenAI even if model name suggests otherwise
    with patch.dict(os.environ, {"OPENAI_API_KEY": "test-key"}):
        llm = LLM(model="gpt-4o", provider="openai", is_litellm=False)
        assert llm.is_litellm is False
        assert llm.provider == "openai"

    # Explicit provider for a model with "/" should still use that provider
    with patch.dict(os.environ, {"OPENAI_API_KEY": "test-key"}):
        llm2 = LLM(model="gpt-4o", provider="openai", is_litellm=False)
        assert llm2.is_litellm is False
        assert llm2.provider == "openai"


def test_validate_model_in_constants():
    """Test the _validate_model_in_constants method."""
    # OpenAI models
    assert LLM._validate_model_in_constants("gpt-4o", "openai") is True
    assert LLM._validate_model_in_constants("gpt-future-6", "openai") is False

    # Anthropic models
    assert LLM._validate_model_in_constants("claude-opus-4-0", "claude") is True
    assert LLM._validate_model_in_constants("claude-future-5", "claude") is False

    # Gemini models
    assert LLM._validate_model_in_constants("gemini-2.5-pro", "gemini") is True
    assert LLM._validate_model_in_constants("gemini-future", "gemini") is False

    # Azure models
    assert LLM._validate_model_in_constants("gpt-4o", "azure") is True
    assert LLM._validate_model_in_constants("gpt-35-turbo", "azure") is True

    # Bedrock models
    assert LLM._validate_model_in_constants("anthropic.claude-opus-4-1-20250805-v1:0", "bedrock") is True
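The new routing tests above spell out when a model string resolves to a native SDK versus LiteLLM. As a hedged summary of the rule they encode, independent of the actual `LLM` internals (the `NATIVE_PROVIDERS` set and `known_models` mapping below are illustrative stand-ins for the provider constants the tests reference):

```python
# Hedged summary of the routing behaviour asserted above, not the real LLM code.
NATIVE_PROVIDERS = {"openai", "anthropic", "gemini", "azure", "bedrock"}  # assumed set

def resolves_to_native_sdk(model: str, known_models: dict[str, set[str]]) -> bool:
    provider, slash, name = model.partition("/")
    if not slash:
        # Unprefixed model: native only if some provider lists it in its constants.
        return any(model in models for models in known_models.values())
    # Prefixed model: native only for a native provider AND a model found in its constants;
    # anything else (unknown model, non-native prefix) falls back to LiteLLM.
    return provider in NATIVE_PROVIDERS and name in known_models.get(provider, set())

# Examples mirroring the tests:
# resolves_to_native_sdk("openai/gpt-4o", {"openai": {"gpt-4o"}})       -> True  (native SDK)
# resolves_to_native_sdk("openai/gpt-future-6", {"openai": {"gpt-4o"}}) -> False (LiteLLM)
# resolves_to_native_sdk("groq/llama-3.3-70b", {"openai": {"gpt-4o"}})  -> False (LiteLLM)
```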
@@ -162,6 +162,7 @@ def test_task_callback_returns_task_output():
        "name": task.name or task.description,
        "expected_output": "Bullet point list of 5 interesting ideas.",
        "output_format": OutputFormat.RAW,
        "messages": [],
    }
    assert output_dict == expected_output

@@ -1680,3 +1681,44 @@ def test_task_copy_with_list_context():
    assert isinstance(copied_task2.context, list)
    assert len(copied_task2.context) == 1
    assert copied_task2.context[0] is task1


@pytest.mark.vcr(filter_headers=["authorization"])
def test_task_output_includes_messages():
    """Test that TaskOutput includes messages from agent execution."""
    researcher = Agent(
        role="Researcher",
        goal="Make the best research and analysis on content about AI and AI agents",
        backstory="You're an expert researcher, specialized in technology, software engineering, AI and startups. You work as a freelancer and is now working on doing research and analysis for a new customer.",
        allow_delegation=False,
    )

    task1 = Task(
        description="Give me a list of 3 interesting ideas about AI.",
        expected_output="Bullet point list of 3 ideas.",
        agent=researcher,
    )

    task2 = Task(
        description="Summarize the ideas from the previous task.",
        expected_output="A summary of the ideas.",
        agent=researcher,
    )

    crew = Crew(agents=[researcher], tasks=[task1, task2], process=Process.sequential)
    result = crew.kickoff()

    # Verify both tasks have messages
    assert len(result.tasks_output) == 2

    # Check first task output has messages
    task1_output = result.tasks_output[0]
    assert hasattr(task1_output, "messages")
    assert isinstance(task1_output.messages, list)
    assert len(task1_output.messages) > 0

    # Check second task output has messages
    task2_output = result.tasks_output[1]
    assert hasattr(task2_output, "messages")
    assert isinstance(task2_output.messages, list)
    assert len(task2_output.messages) > 0
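Since `TaskOutput.messages` now carries the conversation from the last execution, downstream code can inspect it directly. A small usage sketch under the same assumptions as the test above (an already-configured `crew`, and message entries shaped as `{"role": ..., "content": ...}` dicts, as seen in `test_replay_preserves_messages`):

```python
# Usage sketch only: `crew` is assumed to be a configured Crew, as in the test above.
result = crew.kickoff()

for task_output in result.tasks_output:
    # TaskOutput.messages is the list[LLMMessage] captured from the last execution.
    for message in task_output.messages:
        print(f"{message['role']}: {str(message['content'])[:80]}")
```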
@@ -38,6 +38,7 @@ def test_task_without_guardrail():
    agent.role = "test_agent"
    agent.execute_task.return_value = "test result"
    agent.crew = None
    agent.last_messages = []

    task = create_smart_task(description="Test task", expected_output="Output")

@@ -56,6 +57,7 @@ def test_task_with_successful_guardrail_func():
    agent.role = "test_agent"
    agent.execute_task.return_value = "test result"
    agent.crew = None
    agent.last_messages = []

    task = create_smart_task(
        description="Test task", expected_output="Output", guardrail=guardrail
@@ -76,6 +78,7 @@ def test_task_with_failing_guardrail():
    agent.role = "test_agent"
    agent.execute_task.side_effect = ["bad result", "good result"]
    agent.crew = None
    agent.last_messages = []

    task = create_smart_task(
        description="Test task",
@@ -103,6 +106,7 @@ def test_task_with_guardrail_retries():
    agent.role = "test_agent"
    agent.execute_task.return_value = "bad result"
    agent.crew = None
    agent.last_messages = []

    task = create_smart_task(
        description="Test task",
@@ -128,6 +132,7 @@ def test_guardrail_error_in_context():
    agent = Mock()
    agent.role = "test_agent"
    agent.crew = None
    agent.last_messages = []

    task = create_smart_task(
        description="Test task",
@@ -295,6 +300,7 @@ def test_hallucination_guardrail_integration():
    agent.role = "test_agent"
    agent.execute_task.return_value = "test result"
    agent.crew = None
    agent.last_messages = []

    mock_llm = Mock(spec=LLM)
    guardrail = HallucinationGuardrail(
@@ -342,6 +348,7 @@ def test_multiple_guardrails_sequential_processing():
    agent.role = "sequential_agent"
    agent.execute_task.return_value = "original text"
    agent.crew = None
    agent.last_messages = []

    task = create_smart_task(
        description="Test sequential guardrails",
@@ -391,6 +398,7 @@ def test_multiple_guardrails_with_validation_failure():
    agent.role = "validation_agent"
    agent.execute_task = mock_execute_task
    agent.crew = None
    agent.last_messages = []

    task = create_smart_task(
        description="Test guardrails with validation",
@@ -432,6 +440,7 @@ def test_multiple_guardrails_with_mixed_string_and_taskoutput():
    agent.role = "mixed_agent"
    agent.execute_task.return_value = "original"
    agent.crew = None
    agent.last_messages = []

    task = create_smart_task(
        description="Test mixed return types",
@@ -469,6 +478,7 @@ def test_multiple_guardrails_with_retry_on_middle_guardrail():
    agent.role = "retry_agent"
    agent.execute_task.return_value = "base"
    agent.crew = None
    agent.last_messages = []

    task = create_smart_task(
        description="Test retry in middle guardrail",
@@ -500,6 +510,7 @@ def test_multiple_guardrails_with_max_retries_exceeded():
    agent.role = "failing_agent"
    agent.execute_task.return_value = "test"
    agent.crew = None
    agent.last_messages = []

    task = create_smart_task(
        description="Test max retries with multiple guardrails",
@@ -523,6 +534,7 @@ def test_multiple_guardrails_empty_list():
    agent.role = "empty_agent"
    agent.execute_task.return_value = "no guardrails"
    agent.crew = None
    agent.last_messages = []

    task = create_smart_task(
        description="Test empty guardrails list",
@@ -582,6 +594,7 @@ def test_multiple_guardrails_processing_order():
    agent.role = "order_agent"
    agent.execute_task.return_value = "base"
    agent.crew = None
    agent.last_messages = []

    task = create_smart_task(
        description="Test processing order",
@@ -625,6 +638,7 @@ def test_multiple_guardrails_with_pydantic_output():
    agent.role = "pydantic_agent"
    agent.execute_task.return_value = "test content"
    agent.crew = None
    agent.last_messages = []

    task = create_smart_task(
        description="Test guardrails with Pydantic",
@@ -658,6 +672,7 @@ def test_guardrails_vs_single_guardrail_mutual_exclusion():
    agent.role = "exclusion_agent"
    agent.execute_task.return_value = "test"
    agent.crew = None
    agent.last_messages = []

    task = create_smart_task(
        description="Test mutual exclusion",
@@ -700,6 +715,7 @@ def test_per_guardrail_independent_retry_tracking():
    agent.role = "independent_retry_agent"
    agent.execute_task.return_value = "base"
    agent.crew = None
    agent.last_messages = []

    task = create_smart_task(
        description="Test independent retry tracking",
@@ -1,3 +1,3 @@
"""CrewAI development tools."""

__version__ = "1.3.0"
__version__ = "1.4.1"