Compare commits


4 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Lucas Gomide | d26ece7343 | Merge branch 'main' into fix-issue-2263 | 2025-04-14 17:29:48 -03:00 |
| Lucas Gomide | 739d58a3ec | Merge branch 'main' into fix-issue-2263 | 2025-04-14 16:35:46 -03:00 |
| Lorenze Jay | 7b1388b34c | Merge branch 'main' into fix-issue-2263 | 2025-04-14 08:35:46 -07:00 |
| Lucas Gomide | 24a0f66141 | fix: add type hints and ignore type checks for config access | 2025-04-14 11:46:45 -03:00 |
48 changed files with 476 additions and 2680 deletions

View File

@@ -1,33 +0,0 @@
name: Notify Downstream
on:
  push:
    branches:
      - main
permissions:
  contents: read
jobs:
  notify-downstream:
    runs-on: ubuntu-latest
    steps:
      - name: Generate GitHub App token
        id: app-token
        uses: tibdex/github-app-token@v2
        with:
          app_id: ${{ secrets.OSS_SYNC_APP_ID }}
          private_key: ${{ secrets.OSS_SYNC_APP_PRIVATE_KEY }}
      - name: Notify Repo B
        uses: peter-evans/repository-dispatch@v3
        with:
          token: ${{ steps.app-token.outputs.token }}
          repository: ${{ secrets.OSS_SYNC_DOWNSTREAM_REPO }}
          event-type: upstream-commit
          client-payload: |
            {
              "commit_sha": "${{ github.sha }}"
            }

View File

@@ -288,20 +288,26 @@ To add a guardrail to a task, provide a validation function through the `guardra
```python Code
from typing import Tuple, Union, Dict, Any
from crewai import TaskOutput

def validate_blog_content(result: TaskOutput) -> Tuple[bool, Any]:
def validate_blog_content(result: str) -> Tuple[bool, Union[Dict[str, Any], str]]:
    """Validate blog content meets requirements."""
    try:
        # Check word count
        word_count = len(result.split())
        if word_count > 200:
            return (False, "Blog content exceeds 200 words")
            return (False, {
                "error": "Blog content exceeds 200 words",
                "code": "WORD_COUNT_ERROR",
                "context": {"word_count": word_count}
            })
        # Additional validation logic here
        return (True, result.strip())
    except Exception as e:
        return (False, "Unexpected error during validation")
        return (False, {
            "error": "Unexpected error during validation",
            "code": "SYSTEM_ERROR"
        })

blog_task = Task(
    description="Write a blog post about AI",
@@ -319,24 +325,29 @@ blog_task = Task(
- Type hints are recommended but optional
2. **Return Values**:
- On success: it returns a tuple of `(bool, Any)`. For example: `(True, validated_result)`
   - On failure: it returns a tuple of `(bool, str)`. For example: `(False, "Error message explaining the failure")`
- Success: Return `(True, validated_result)`
- Failure: Return `(False, error_details)`
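For example, a minimal guardrail exercising both shapes might look like this (a sketch only; the function name and error fields are illustrative):

```python Code
from typing import Any, Dict, Tuple, Union

def require_nonempty(result: str) -> Tuple[bool, Union[Dict[str, Any], str]]:
    """Toy guardrail: return the trimmed result on success, error details on failure."""
    if result.strip():
        return (True, result.strip())  # Success: (True, validated_result)
    return (False, {"error": "Empty result", "code": "EMPTY_INPUT"})  # Failure: (False, error_details)
```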
### Error Handling Best Practices
1. **Structured Error Responses**:
```python Code
from typing import Any, Dict, Tuple, Union
from crewai import TaskOutput

def validate_with_context(result: TaskOutput) -> Tuple[bool, Any]:
def validate_with_context(result: str) -> Tuple[bool, Union[Dict[str, Any], str]]:
    try:
        # Main validation logic
        validated_data = perform_validation(result)
        return (True, validated_data)
    except ValidationError as e:
        return (False, f"VALIDATION_ERROR: {str(e)}")
        return (False, {
            "error": str(e),
            "code": "VALIDATION_ERROR",
            "context": {"input": result}
        })
    except Exception as e:
        return (False, str(e))
        return (False, {
            "error": "Unexpected error",
            "code": "SYSTEM_ERROR"
        })
```
2. **Error Categories**:
@@ -347,25 +358,28 @@ def validate_with_context(result: TaskOutput) -> Tuple[bool, Any]:
3. **Validation Chain**:
```python Code
from typing import Any, Dict, List, Tuple, Union
from crewai import TaskOutput

def complex_validation(result: TaskOutput) -> Tuple[bool, Any]:
def complex_validation(result: str) -> Tuple[bool, Union[str, Dict[str, Any]]]:
    """Chain multiple validation steps."""
    # Step 1: Basic validation
    if not result:
        return (False, "Empty result")
        return (False, {"error": "Empty result", "code": "EMPTY_INPUT"})
    # Step 2: Content validation
    try:
        validated = validate_content(result)
        if not validated:
            return (False, "Invalid content")
            return (False, {"error": "Invalid content", "code": "CONTENT_ERROR"})
        # Step 3: Format validation
        formatted = format_output(validated)
        return (True, formatted)
    except Exception as e:
        return (False, str(e))
        return (False, {
            "error": str(e),
            "code": "VALIDATION_ERROR",
            "context": {"step": "content_validation"}
        })
```
### Handling Guardrail Results
@@ -380,16 +394,19 @@ When a guardrail returns `(False, error)`:
Example with retry handling:
```python Code
import json
from typing import Any, Dict, Optional, Tuple, Union
from crewai import TaskOutput, Task

def validate_json_output(result: TaskOutput) -> Tuple[bool, Any]:
def validate_json_output(result: str) -> Tuple[bool, Union[Dict[str, Any], str]]:
    """Validate and parse JSON output."""
    try:
        # Try to parse as JSON
        data = json.loads(result)
        return (True, data)
    except json.JSONDecodeError as e:
        return (False, "Invalid JSON format")
        return (False, {
            "error": "Invalid JSON format",
            "code": "JSON_ERROR",
            "context": {"line": e.lineno, "column": e.colno}
        })

task = Task(
    description="Generate a JSON report",

View File

@@ -1,443 +0,0 @@
---
title: Bring your own agent
description: Learn how to bring your own agents that work within a Crew.
icon: robots
---
Interoperability is a core concept in CrewAI. This guide will show you how to bring your own agents that work within a Crew.
## Adapter Guide for Bringing Your Own Agents (LangGraph Agents, OpenAI Agents, etc.)
Three adapters are required to make an agent from another framework work within a Crew:
1. BaseAgentAdapter
2. BaseToolAdapter
3. BaseConverter
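Conceptually, the three adapters compose as sketched below — a minimal sketch only, with placeholder class names (`MyAgentAdapter`, `MyToolAdapter`, `MyConverterAdapter`); the built-in LangGraph adapter shown later follows the same shape:
```python
from typing import Any, List, Optional

from pydantic import PrivateAttr

from crewai.agents.agent_adapters.base_agent_adapter import BaseAgentAdapter
from crewai.agents.agent_adapters.base_converter_adapter import BaseConverterAdapter
from crewai.agents.agent_adapters.base_tool_adapter import BaseToolAdapter
from crewai.tools import BaseTool

class MyToolAdapter(BaseToolAdapter):  # 2. converts CrewAI tools for the framework
    def configure_tools(self, tools: List[BaseTool]) -> None:
        self.converted_tools = [self.sanitize_tool_name(t.name) for t in tools]  # placeholder

class MyConverterAdapter(BaseConverterAdapter):  # 3. handles structured output
    def configure_structured_output(self, task) -> None: ...
    def enhance_system_prompt(self, base_prompt: str) -> str:
        return base_prompt
    def post_process_result(self, result: str) -> str:
        return result

class MyAgentAdapter(BaseAgentAdapter):  # 1. wraps the external agent itself
    _tool_adapter: Any = PrivateAttr(default=None)
    _converter_adapter: Any = PrivateAttr(default=None)

    def __init__(self, agent_config: Optional[dict] = None, **kwargs: Any):
        super().__init__(agent_config=agent_config, **kwargs)
        self._tool_adapter = MyToolAdapter()
        self._converter_adapter = MyConverterAdapter(self)

    def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
        self._tool_adapter.configure_tools(tools or [])
    # The remaining BaseAgent abstract methods (execute_task, etc.) must also be
    # implemented; the sections below walk through each piece in detail.
```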
## BaseAgentAdapter
This abstract class defines the common interface and functionality that all
agent adapters must implement. It extends BaseAgent to maintain compatibility
with the CrewAI framework while adding adapter-specific requirements.
Required Methods:
1. `configure_tools`
2. `configure_structured_output`
## Creating your own Adapter
To integrate an agent from a different framework (e.g., LangGraph, Autogen, OpenAI Assistants) into CrewAI, you need to create a custom adapter by inheriting from `BaseAgentAdapter`. This adapter acts as a compatibility layer, translating between the CrewAI interfaces and the specific requirements of your external agent.
Here's how you implement your custom adapter:
1. **Inherit from `BaseAgentAdapter`**:
```python
from crewai.agents.agent_adapters.base_agent_adapter import BaseAgentAdapter
from crewai.tools import BaseTool
from typing import List, Optional, Any, Dict
class MyCustomAgentAdapter(BaseAgentAdapter):
    # ... implementation details ...
```
2. **Implement `__init__`**:
The constructor should call the parent class constructor `super().__init__(**kwargs)` and perform any initialization specific to your external agent. You can use the optional `agent_config` dictionary passed during CrewAI's `Agent` initialization to configure your adapter and the underlying agent.
```python
def __init__(self, agent_config: Optional[Dict[str, Any]] = None, **kwargs: Any):
    super().__init__(agent_config=agent_config, **kwargs)
    # Initialize your external agent here, possibly using agent_config
    # Example: self.external_agent = initialize_my_agent(agent_config)
    print(f"Initializing MyCustomAgentAdapter with config: {agent_config}")
```
3. **Implement `configure_tools`**:
This abstract method is crucial. It receives a list of CrewAI `BaseTool` instances. Your implementation must convert or adapt these tools into the format expected by your external agent framework. This might involve wrapping them, extracting specific attributes, or registering them with the external agent instance.
```python
def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
    if tools:
        adapted_tools = []
        for tool in tools:
            # Adapt CrewAI BaseTool to the format your agent expects
            # Example: adapted_tool = adapt_to_my_framework(tool)
            # adapted_tools.append(adapted_tool)
            pass  # Replace with your actual adaptation logic
        # Configure the external agent with the adapted tools
        # Example: self.external_agent.set_tools(adapted_tools)
        print(f"Configuring tools for MyCustomAgentAdapter: {adapted_tools}")  # Placeholder
    else:
        # Handle the case where no tools are provided
        # Example: self.external_agent.set_tools([])
        print("No tools provided for MyCustomAgentAdapter.")
```
4. **Implement `configure_structured_output`**:
This method is called when the CrewAI `Agent` is configured with structured output requirements (e.g., `output_json` or `output_pydantic`). Your adapter needs to ensure the external agent is set up to comply with these requirements. This might involve setting specific parameters on the external agent or ensuring its underlying model supports the requested format. If the external agent doesn't support structured output in a way compatible with CrewAI's expectations, you might need to handle the conversion or raise an appropriate error.
```python
def configure_structured_output(self, structured_output: Any) -> None:
    # Configure your external agent to produce output in the specified format
    # Example: self.external_agent.set_output_format(structured_output)
    self.adapted_structured_output = True  # Signal that structured output is handled
    print(f"Configuring structured output for MyCustomAgentAdapter: {structured_output}")
```
By implementing these methods, your `MyCustomAgentAdapter` will allow your custom agent implementation to function correctly within a CrewAI crew, interacting with tasks and tools seamlessly. Remember to replace the example comments and print statements with your actual adaptation logic specific to the external agent framework you are integrating.
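Once the adapter is complete, it is constructed like any other CrewAI agent — a usage sketch, assuming the `MyCustomAgentAdapter` above (the `agent_config` keys are hypothetical):
```python
from crewai import Crew, Task

agent = MyCustomAgentAdapter(
    role="Researcher",
    goal="Answer questions accurately.",
    backstory="An external agent wrapped for use inside a Crew.",
    agent_config={"temperature": 0.2},  # hypothetical keys, forwarded to your external agent
)

task = Task(
    description="Summarize the trade-offs of the adapter pattern.",
    expected_output="A short summary.",
    agent=agent,
)

crew = Crew(agents=[agent], tasks=[task])
# result = crew.kickoff()
```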
## BaseToolAdapter implementation
The `BaseToolAdapter` class is responsible for converting CrewAI's native `BaseTool` objects into a format that your specific external agent framework can understand and utilize. Different agent frameworks (like LangGraph, OpenAI Assistants, etc.) have their own unique ways of defining and handling tools, and the `BaseToolAdapter` acts as the translator.
Here's how you implement your custom tool adapter:
1. **Inherit from `BaseToolAdapter`**:
```python
from crewai.agents.agent_adapters.base_tool_adapter import BaseToolAdapter
from crewai.tools import BaseTool
from typing import List, Any
class MyCustomToolAdapter(BaseToolAdapter):
    # ... implementation details ...
```
2. **Implement `configure_tools`**:
This is the core abstract method you must implement. It receives a list of CrewAI `BaseTool` instances provided to the agent. Your task is to iterate through this list, adapt each `BaseTool` into the format expected by your external framework, and store the converted tools in the `self.converted_tools` list (which is initialized in the base class constructor).
```python
def configure_tools(self, tools: List[BaseTool]) -> None:
    """Configure and convert CrewAI tools for the specific implementation."""
    self.converted_tools = []  # Reset in case it's called multiple times
    for tool in tools:
        # Sanitize the tool name if required by the target framework
        sanitized_name = self.sanitize_tool_name(tool.name)

        # --- Your Conversion Logic Goes Here ---
        # Example: Convert BaseTool to a dictionary format for LangGraph
        # converted_tool = {
        #     "name": sanitized_name,
        #     "description": tool.description,
        #     "parameters": tool.args_schema.schema() if tool.args_schema else {},
        #     # Add any other framework-specific fields
        # }
        # Example: Convert BaseTool to an OpenAI function definition
        # converted_tool = {
        #     "type": "function",
        #     "function": {
        #         "name": sanitized_name,
        #         "description": tool.description,
        #         "parameters": tool.args_schema.schema() if tool.args_schema else {"type": "object", "properties": {}},
        #     }
        # }
        # --- Replace above examples with your actual adaptation ---
        converted_tool = self.adapt_tool_to_my_framework(tool, sanitized_name)  # Placeholder
        self.converted_tools.append(converted_tool)
        print(f"Adapted tool '{tool.name}' to '{sanitized_name}' for MyCustomToolAdapter")  # Placeholder
    print(f"MyCustomToolAdapter finished configuring tools: {len(self.converted_tools)} adapted.")  # Placeholder

# --- Helper method for adaptation (Example) ---
def adapt_tool_to_my_framework(self, tool: BaseTool, sanitized_name: str) -> Any:
    # Replace this with the actual logic to convert a CrewAI BaseTool
    # to the format needed by your specific external agent framework.
    # This will vary greatly depending on the target framework.
    import inspect

    # Also ensure the tool works both sync and async
    async def async_tool_wrapper(*args, **kwargs):
        output = tool.run(*args, **kwargs)
        if inspect.isawaitable(output):
            return await output
        return output

    adapted_representation = {
        "framework_specific_name": sanitized_name,
        "framework_specific_description": tool.description,
        "inputs": tool.args_schema.schema() if tool.args_schema else None,
        "implementation_reference": async_tool_wrapper,  # Or however the framework needs to call it
    }
    return adapted_representation
```
3. **Using the Adapter**:
Typically, you would instantiate your `MyCustomToolAdapter` within your `MyCustomAgentAdapter`'s `configure_tools` method and use it to process the tools before configuring your external agent.
```python
# Inside MyCustomAgentAdapter.configure_tools
def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
    if tools:
        tool_adapter = MyCustomToolAdapter()  # Instantiate your tool adapter
        tool_adapter.configure_tools(tools)   # Convert the tools
        adapted_tools = tool_adapter.tools()  # Get the converted tools
        # Now configure your external agent with the adapted_tools
        # Example: self.external_agent.set_tools(adapted_tools)
        print(f"Configuring external agent with adapted tools: {adapted_tools}")  # Placeholder
    else:
        # Handle no tools case
        print("No tools provided for MyCustomAgentAdapter.")
```
By creating a `BaseToolAdapter`, you decouple the tool conversion logic from the agent adaptation, making the integration cleaner and more modular. Remember to replace the placeholder examples with the actual conversion logic required by your specific external agent framework.
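Because the tool adapter is self-contained, it can also be exercised in isolation, which is handy for testing the conversion logic — a sketch, assuming the `MyCustomToolAdapter` above:
```python
from crewai.tools import tool

@tool("Word Counter")
def word_counter(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

adapter = MyCustomToolAdapter()
adapter.configure_tools([word_counter])
print(adapter.tools())  # inspect the framework-specific representations
```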
## BaseConverter
The `BaseConverterAdapter` plays a crucial role when a CrewAI `Task` requires an agent to return its final output in a specific structured format, such as JSON or a Pydantic model. It bridges the gap between CrewAI's structured output requirements and the capabilities of your external agent.
Its primary responsibilities are:
1. **Configuring the Agent for Structured Output:** Based on the `Task`'s requirements (`output_json` or `output_pydantic`), it instructs the associated `BaseAgentAdapter` (and indirectly, the external agent) on what format is expected.
2. **Enhancing the System Prompt:** It modifies the agent's system prompt to include clear instructions on *how* to generate the output in the required structure.
3. **Post-processing the Result:** It takes the raw output from the agent and attempts to parse, validate, and format it according to the required structure, ultimately returning a string representation (e.g., a JSON string).
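In practice, the agent adapter typically drives these three responsibilities in sequence during task execution — a simplified sketch (the `_run_external_agent` call is a placeholder for your framework's invocation):
```python
def execute_task(self, task, context=None, tools=None):  # inside your agent adapter
    # 1. Tell the converter which structure the task expects
    self._converter_adapter.configure_structured_output(task)
    # 2. Bake format instructions into the system prompt
    prompt = self._converter_adapter.enhance_system_prompt(self._build_system_prompt())
    # Run the external agent (placeholder)
    raw = self._run_external_agent(prompt, task)
    # 3. Parse/validate the raw output and always return a string
    return self._converter_adapter.post_process_result(raw)
```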
Here's how you implement your custom converter adapter:
1. **Inherit from `BaseConverterAdapter`**:
```python
from crewai.agents.agent_adapters.base_converter_adapter import BaseConverterAdapter
# Assuming you have your MyCustomAgentAdapter defined
# from .my_custom_agent_adapter import MyCustomAgentAdapter
from crewai.task import Task
from typing import Any
class MyCustomConverterAdapter(BaseConverterAdapter):
    # Store the expected output type (e.g., 'json', 'pydantic', 'text')
    _output_type: str = 'text'
    _output_schema: Any = None  # Store JSON schema or Pydantic model
    # ... implementation details ...
```
2. **Implement `__init__`**:
The constructor must accept the corresponding `agent_adapter` instance it will work with.
```python
def __init__(self, agent_adapter: Any):  # Use your specific AgentAdapter type hint
    self.agent_adapter = agent_adapter
    print(f"Initializing MyCustomConverterAdapter for agent adapter: {type(agent_adapter).__name__}")
```
3. **Implement `configure_structured_output`**:
This method receives the CrewAI `Task` object. You need to check the task's `output_json` and `output_pydantic` attributes to determine the required output structure. Store this information (e.g., in `_output_type` and `_output_schema`) and potentially call configuration methods on your `self.agent_adapter` if the external agent needs specific setup for structured output (which might have been partially handled in the agent adapter's `configure_structured_output` already).
```python
def configure_structured_output(self, task: Task) -> None:
    """Configure the expected structured output based on the task."""
    if task.output_pydantic:
        self._output_type = 'pydantic'
        self._output_schema = task.output_pydantic
        print(f"Converter: Configured for Pydantic output: {self._output_schema.__name__}")
    elif task.output_json:
        self._output_type = 'json'
        self._output_schema = task.output_json
        print(f"Converter: Configured for JSON output with schema: {self._output_schema}")
    else:
        self._output_type = 'text'
        self._output_schema = None
        print("Converter: Configured for standard text output.")
    # Optionally, inform the agent adapter if needed
    # self.agent_adapter.set_output_mode(self._output_type, self._output_schema)
```
4. **Implement `enhance_system_prompt`**:
This method takes the agent's base system prompt string and should append instructions tailored to the currently configured `_output_type` and `_output_schema`. The goal is to guide the LLM powering the agent to produce output in the correct format.
```python
import json

def enhance_system_prompt(self, base_prompt: str) -> str:
    """Enhance the system prompt with structured output instructions."""
    if self._output_type == 'text':
        return base_prompt  # No enhancement needed for plain text

    instructions = "\n\nYour final answer MUST be formatted as "
    if self._output_type == 'json':
        schema_str = json.dumps(self._output_schema, indent=2)
        instructions += f"a JSON object conforming to the following schema:\n```json\n{schema_str}\n```"
    elif self._output_type == 'pydantic':
        schema_str = json.dumps(self._output_schema.model_json_schema(), indent=2)
        instructions += f"a JSON object conforming to the Pydantic model '{self._output_schema.__name__}' with the following schema:\n```json\n{schema_str}\n```"
    instructions += "\nEnsure your entire response is ONLY the valid JSON object, without any introductory text, explanations, or concluding remarks."
    print(f"Converter: Enhancing prompt for {self._output_type} output.")
    return base_prompt + instructions
```
*Note: The exact prompt engineering might need tuning based on the agent/LLM being used.*
5. **Implement `post_process_result`**:
This method receives the raw string output from the agent. If structured output was requested (`json` or `pydantic`), you should attempt to parse the string into the expected format. Handle potential parsing errors (e.g., log them, attempt simple fixes, or raise an exception). Crucially, the method must **always return a string**, even if the intermediate format was a dictionary or Pydantic object (e.g., by serializing it back to a JSON string).
```python
import json
from pydantic import ValidationError

def post_process_result(self, result: str) -> str:
    """Post-process the agent's result to ensure it matches the expected format."""
    print(f"Converter: Post-processing result for {self._output_type} output.")
    if self._output_type == 'json':
        try:
            # Attempt to parse and re-serialize to ensure validity and consistent format
            parsed_json = json.loads(result)
            # Optional: Validate against self._output_schema if it's a JSON schema dictionary
            # from jsonschema import validate
            # validate(instance=parsed_json, schema=self._output_schema)
            return json.dumps(parsed_json)
        except json.JSONDecodeError as e:
            print(f"Error: Failed to parse JSON output: {e}\nRaw output:\n{result}")
            # Handle error: return raw, raise exception, or try to fix
            return result  # Example: return raw output on failure
        # except Exception as e:  # Catch validation errors if using jsonschema
        #     print(f"Error: JSON output failed schema validation: {e}\nRaw output:\n{result}")
        #     return result
    elif self._output_type == 'pydantic':
        try:
            # Attempt to parse into the Pydantic model
            model_instance = self._output_schema.model_validate_json(result)
            # Return the model serialized back to JSON
            return model_instance.model_dump_json()
        except ValidationError as e:
            print(f"Error: Failed to validate Pydantic output: {e}\nRaw output:\n{result}")
            # Handle error
            return result  # Example: return raw output on failure
        except json.JSONDecodeError as e:
            print(f"Error: Failed to parse JSON for Pydantic model: {e}\nRaw output:\n{result}")
            return result
    else:  # 'text'
        return result  # No processing needed for plain text
```
By implementing these methods, your `MyCustomConverterAdapter` ensures that structured output requests from CrewAI tasks are correctly handled by your integrated external agent, improving the reliability and usability of your custom agent within the CrewAI framework.
## Out of the Box Adapters
We provide out-of-the-box adapters for the following frameworks:
1. LangGraph
2. OpenAI Agents
## Kicking off a crew with adapted agents
```python
import json
import os
from typing import List

from crewai_tools import SerperDevTool
from crewai import Agent, Crew, Task
from langchain_openai import ChatOpenAI
from pydantic import BaseModel

from crewai.agents.agent_adapters.langgraph.langgraph_adapter import (
    LangGraphAgentAdapter,
)
from crewai.agents.agent_adapters.openai_agents.openai_adapter import OpenAIAgentAdapter

# CrewAI Agent
code_helper_agent = Agent(
    role="Code Helper",
    goal="Help users solve coding problems effectively and provide clear explanations.",
    backstory="You are an experienced programmer with deep knowledge across multiple programming languages and frameworks. You specialize in solving complex coding challenges and explaining solutions clearly.",
    allow_delegation=False,
    verbose=True,
)

# OpenAI Agent Adapter
link_finder_agent = OpenAIAgentAdapter(
    role="Link Finder",
    goal="Find the most relevant and high-quality resources for coding tasks.",
    backstory="You are a research specialist with a talent for finding the most helpful resources. You're skilled at using search tools to discover documentation, tutorials, and examples that directly address the user's coding needs.",
    tools=[SerperDevTool()],
    allow_delegation=False,
    verbose=True,
)

# LangGraph Agent Adapter
reporter_agent = LangGraphAgentAdapter(
    role="Reporter",
    goal="Report the results of the tasks.",
    backstory="You are a reporter who reports the results of the other tasks.",
    llm=ChatOpenAI(model="gpt-4o"),
    allow_delegation=True,
    verbose=True,
)

class Code(BaseModel):
    code: str

task = Task(
    description="Give an answer to the coding question: {task}",
    expected_output="A thorough answer to the coding question: {task}",
    agent=code_helper_agent,
    output_json=Code,
)

task2 = Task(
    description="Find links to resources that can help with coding tasks. Use the serper tool to find resources that can help.",
    expected_output="A list of links to resources that can help with coding tasks",
    agent=link_finder_agent,
)

class Report(BaseModel):
    code: str
    links: List[str]

task3 = Task(
    description="Report the results of the tasks.",
    expected_output="A report of the results of the tasks. This is the code produced and then the links to the resources that can help with the coding task.",
    agent=reporter_agent,
    output_json=Report,
)

# Use in CrewAI
crew = Crew(
    agents=[code_helper_agent, link_finder_agent, reporter_agent],
    tasks=[task, task2, task3],
    verbose=True,
)

result = crew.kickoff(
    inputs={"task": "How do you implement an abstract class in python?"}
)

# Print raw result first
print("Raw result:", result)

# Handle result based on its type
if hasattr(result, "json_dict") and result.json_dict:
    json_result = result.json_dict
    print("\nStructured JSON result:")
    print(f"{json.dumps(json_result, indent=2)}")

    # Access fields safely
    if isinstance(json_result, dict):
        if "code" in json_result:
            print("\nCode:")
            print(
                json_result["code"][:200] + "..."
                if len(json_result["code"]) > 200
                else json_result["code"]
            )
        if "links" in json_result:
            print("\nLinks:")
            for link in json_result["links"][:5]:  # Print first 5 links
                print(f"- {link}")
            if len(json_result["links"]) > 5:
                print(f"...and {len(json_result['links']) - 5} more links")
elif hasattr(result, "pydantic") and result.pydantic:
    print("\nPydantic model result:")
    print(result.pydantic.model_dump_json(indent=2))
else:
    # Fallback to raw output
    print("\nNo structured result available, using raw output:")
    print(result.raw[:500] + "..." if len(result.raw) > 500 else result.raw)
```

View File

@@ -30,7 +30,7 @@ pip install 'crewai[tools]'
Here are updated examples on how to utilize the JSONSearchTool effectively for searching within JSON files. These examples take into account the current implementation and usage patterns identified in the codebase.
```python Code
from crewai_tools import JSONSearchTool
from crewai.json_tools import JSONSearchTool # Updated import path
# General JSON content search
# This approach is suitable when the JSON path is either known beforehand or can be dynamically identified.

View File

@@ -81,10 +81,10 @@ dev-dependencies = [
"pillow>=10.2.0",
"cairosvg>=2.7.1",
"pytest>=8.0.0",
"pytest-vcr>=1.0.2",
"python-dotenv>=1.0.0",
"pytest-asyncio>=0.23.7",
"pytest-subprocess>=1.5.2",
"pytest-recording>=0.13.2",
]
[project.scripts]

View File

@@ -535,7 +535,6 @@ class Agent(BaseAgent):
verbose=self.verbose,
response_format=response_format,
i18n=self.i18n,
original_agent=self,
)
return await lite_agent.kickoff_async(messages)

View File

@@ -1,42 +0,0 @@
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Optional

from pydantic import PrivateAttr

from crewai.agent import BaseAgent
from crewai.tools import BaseTool

class BaseAgentAdapter(BaseAgent, ABC):
    """Base class for all agent adapters in CrewAI.

    This abstract class defines the common interface and functionality that all
    agent adapters must implement. It extends BaseAgent to maintain compatibility
    with the CrewAI framework while adding adapter-specific requirements.
    """

    adapted_structured_output: bool = False
    _agent_config: Optional[Dict[str, Any]] = PrivateAttr(default=None)
    model_config = {"arbitrary_types_allowed": True}

    def __init__(self, agent_config: Optional[Dict[str, Any]] = None, **kwargs: Any):
        super().__init__(adapted_agent=True, **kwargs)
        self._agent_config = agent_config

    @abstractmethod
    def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
        """Configure and adapt tools for the specific agent implementation.

        Args:
            tools: Optional list of BaseTool instances to be configured
        """
        pass

    def configure_structured_output(self, structured_output: Any) -> None:
        """Configure the structured output for the specific agent implementation.

        Args:
            structured_output: The structured output to be configured
        """
        pass

View File

@@ -1,29 +0,0 @@
from abc import ABC, abstractmethod

class BaseConverterAdapter(ABC):
    """Base class for all converter adapters in CrewAI.

    This abstract class defines the common interface and functionality that all
    converter adapters must implement for converting structured output.
    """

    def __init__(self, agent_adapter):
        self.agent_adapter = agent_adapter

    @abstractmethod
    def configure_structured_output(self, task) -> None:
        """Configure agents to return structured output.

        Must support json and pydantic output.
        """
        pass

    @abstractmethod
    def enhance_system_prompt(self, base_prompt: str) -> str:
        """Enhance the system prompt with structured output instructions."""
        pass

    @abstractmethod
    def post_process_result(self, result: str) -> str:
        """Post-process the result to ensure it matches the expected format: string."""
        pass

View File

@@ -1,37 +0,0 @@
from abc import ABC, abstractmethod
from typing import Any, List, Optional

from crewai.tools.base_tool import BaseTool

class BaseToolAdapter(ABC):
    """Base class for all tool adapters in CrewAI.

    This abstract class defines the common interface that all tool adapters
    must implement. It provides the structure for adapting CrewAI tools to
    different frameworks and platforms.
    """

    original_tools: List[BaseTool]
    converted_tools: List[Any]

    def __init__(self, tools: Optional[List[BaseTool]] = None):
        self.original_tools = tools or []
        self.converted_tools = []

    @abstractmethod
    def configure_tools(self, tools: List[BaseTool]) -> None:
        """Configure and convert tools for the specific implementation.

        Args:
            tools: List of BaseTool instances to be configured and converted
        """
        pass

    def tools(self) -> List[Any]:
        """Return all converted tools."""
        return self.converted_tools

    def sanitize_tool_name(self, tool_name: str) -> str:
        """Sanitize tool name for API compatibility."""
        return tool_name.replace(" ", "_")

View File

@@ -1,226 +0,0 @@
from typing import Any, AsyncIterable, Dict, List, Optional
from pydantic import Field, PrivateAttr
from crewai.agents.agent_adapters.base_agent_adapter import BaseAgentAdapter
from crewai.agents.agent_adapters.langgraph.langgraph_tool_adapter import (
LangGraphToolAdapter,
)
from crewai.agents.agent_adapters.langgraph.structured_output_converter import (
LangGraphConverterAdapter,
)
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.tools.base_tool import BaseTool
from crewai.utilities import Logger
from crewai.utilities.converter import Converter
from crewai.utilities.events import crewai_event_bus
from crewai.utilities.events.agent_events import (
AgentExecutionCompletedEvent,
AgentExecutionErrorEvent,
AgentExecutionStartedEvent,
)
try:
from langchain_core.messages import ToolMessage
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent
LANGGRAPH_AVAILABLE = True
except ImportError:
LANGGRAPH_AVAILABLE = False
class LangGraphAgentAdapter(BaseAgentAdapter):
"""Adapter for LangGraph agents to work with CrewAI."""
model_config = {"arbitrary_types_allowed": True}
_logger: Logger = PrivateAttr(default_factory=lambda: Logger())
_tool_adapter: LangGraphToolAdapter = PrivateAttr()
_graph: Any = PrivateAttr(default=None)
_memory: Any = PrivateAttr(default=None)
_max_iterations: int = PrivateAttr(default=10)
function_calling_llm: Any = Field(default=None)
step_callback: Any = Field(default=None)
model: str = Field(default="gpt-4o")
verbose: bool = Field(default=False)
def __init__(
self,
role: str,
goal: str,
backstory: str,
tools: Optional[List[BaseTool]] = None,
llm: Any = None,
max_iterations: int = 10,
agent_config: Optional[Dict[str, Any]] = None,
**kwargs,
):
"""Initialize the LangGraph agent adapter."""
if not LANGGRAPH_AVAILABLE:
raise ImportError(
"LangGraph Agent Dependencies are not installed. Please install it using `uv add langchain-core langgraph`"
)
super().__init__(
role=role,
goal=goal,
backstory=backstory,
tools=tools,
llm=llm or self.model,
agent_config=agent_config,
**kwargs,
)
self._tool_adapter = LangGraphToolAdapter(tools=tools)
self._converter_adapter = LangGraphConverterAdapter(self)
self._max_iterations = max_iterations
self._setup_graph()
def _setup_graph(self) -> None:
"""Set up the LangGraph workflow graph."""
try:
self._memory = MemorySaver()
converted_tools: List[Any] = self._tool_adapter.tools()
if self._agent_config:
self._graph = create_react_agent(
model=self.llm,
tools=converted_tools,
checkpointer=self._memory,
debug=self.verbose,
**self._agent_config,
)
else:
self._graph = create_react_agent(
model=self.llm,
tools=converted_tools or [],
checkpointer=self._memory,
debug=self.verbose,
)
except ImportError as e:
self._logger.log(
"error", f"Failed to import LangGraph dependencies: {str(e)}"
)
raise
except Exception as e:
self._logger.log("error", f"Error setting up LangGraph agent: {str(e)}")
raise
def _build_system_prompt(self) -> str:
"""Build a system prompt for the LangGraph agent."""
base_prompt = f"""
You are {self.role}.
Your goal is: {self.goal}
Your backstory: {self.backstory}
When working on tasks, think step-by-step and use the available tools when necessary.
"""
return self._converter_adapter.enhance_system_prompt(base_prompt)
def execute_task(
self,
task: Any,
context: Optional[str] = None,
tools: Optional[List[BaseTool]] = None,
) -> str:
"""Execute a task using the LangGraph workflow."""
self.create_agent_executor(tools)
self.configure_structured_output(task)
try:
task_prompt = task.prompt() if hasattr(task, "prompt") else str(task)
if context:
task_prompt = self.i18n.slice("task_with_context").format(
task=task_prompt, context=context
)
crewai_event_bus.emit(
self,
event=AgentExecutionStartedEvent(
agent=self,
tools=self.tools,
task_prompt=task_prompt,
task=task,
),
)
session_id = f"task_{id(task)}"
config = {"configurable": {"thread_id": session_id}}
result = self._graph.invoke(
{
"messages": [
("system", self._build_system_prompt()),
("user", task_prompt),
]
},
config,
)
messages = result.get("messages", [])
last_message = messages[-1] if messages else None
final_answer = ""
if isinstance(last_message, dict):
final_answer = last_message.get("content", "")
elif hasattr(last_message, "content"):
final_answer = getattr(last_message, "content", "")
final_answer = (
self._converter_adapter.post_process_result(final_answer)
or "Task execution completed but no clear answer was provided."
)
crewai_event_bus.emit(
self,
event=AgentExecutionCompletedEvent(
agent=self, task=task, output=final_answer
),
)
return final_answer
except Exception as e:
self._logger.log("error", f"Error executing LangGraph task: {str(e)}")
crewai_event_bus.emit(
self,
event=AgentExecutionErrorEvent(
agent=self,
task=task,
error=str(e),
),
)
raise
def create_agent_executor(self, tools: Optional[List[BaseTool]] = None) -> None:
"""Configure the LangGraph agent for execution."""
self.configure_tools(tools)
def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
"""Configure tools for the LangGraph agent."""
if tools:
all_tools = list(self.tools or []) + list(tools or [])
self._tool_adapter.configure_tools(all_tools)
available_tools = self._tool_adapter.tools()
self._graph.tools = available_tools
def get_delegation_tools(self, agents: List[BaseAgent]) -> List[BaseTool]:
"""Implement delegation tools support for LangGraph."""
agent_tools = AgentTools(agents=agents)
return agent_tools.tools()
def get_output_converter(
self, llm: Any, text: str, model: Any, instructions: str
) -> Any:
"""Convert output format if needed."""
return Converter(llm=llm, text=text, model=model, instructions=instructions)
def configure_structured_output(self, task) -> None:
"""Configure the structured output for LangGraph."""
self._converter_adapter.configure_structured_output(task)

View File

@@ -1,61 +0,0 @@
import inspect
from typing import Any, List, Optional
from crewai.agents.agent_adapters.base_tool_adapter import BaseToolAdapter
from crewai.tools.base_tool import BaseTool
class LangGraphToolAdapter(BaseToolAdapter):
"""Adapts CrewAI tools to LangGraph agent tool compatible format"""
def __init__(self, tools: Optional[List[BaseTool]] = None):
self.original_tools = tools or []
self.converted_tools = []
def configure_tools(self, tools: List[BaseTool]) -> None:
"""
Configure and convert CrewAI tools to LangGraph-compatible format.
LangGraph expects tools in langchain_core.tools format.
"""
from langchain_core.tools import BaseTool, StructuredTool
converted_tools = []
if self.original_tools:
all_tools = tools + self.original_tools
else:
all_tools = tools
for tool in all_tools:
if isinstance(tool, BaseTool):
converted_tools.append(tool)
continue
sanitized_name = self.sanitize_tool_name(tool.name)
async def tool_wrapper(*args, tool=tool, **kwargs):
output = None
if len(args) > 0 and isinstance(args[0], str):
output = tool.run(args[0])
elif "input" in kwargs:
output = tool.run(kwargs["input"])
else:
output = tool.run(**kwargs)
if inspect.isawaitable(output):
result = await output
else:
result = output
return result
converted_tool = StructuredTool(
name=sanitized_name,
description=tool.description,
func=tool_wrapper,
args_schema=tool.args_schema,
)
converted_tools.append(converted_tool)
self.converted_tools = converted_tools
def tools(self) -> List[Any]:
return self.converted_tools or []

View File

@@ -1,80 +0,0 @@
import json
from crewai.agents.agent_adapters.base_converter_adapter import BaseConverterAdapter
from crewai.utilities.converter import generate_model_description
class LangGraphConverterAdapter(BaseConverterAdapter):
"""Adapter for handling structured output conversion in LangGraph agents"""
def __init__(self, agent_adapter):
"""Initialize the converter adapter with a reference to the agent adapter"""
self.agent_adapter = agent_adapter
self._output_format = None
self._schema = None
self._system_prompt_appendix = None
def configure_structured_output(self, task) -> None:
"""Configure the structured output for LangGraph."""
if not (task.output_json or task.output_pydantic):
self._output_format = None
self._schema = None
self._system_prompt_appendix = None
return
if task.output_json:
self._output_format = "json"
self._schema = generate_model_description(task.output_json)
elif task.output_pydantic:
self._output_format = "pydantic"
self._schema = generate_model_description(task.output_pydantic)
self._system_prompt_appendix = self._generate_system_prompt_appendix()
def _generate_system_prompt_appendix(self) -> str:
"""Generate an appendix for the system prompt to enforce structured output"""
if not self._output_format or not self._schema:
return ""
return f"""
Important: Your final answer MUST be provided in the following structured format:
{self._schema}
DO NOT include any markdown code blocks, backticks, or other formatting around your response.
The output should be raw JSON that exactly matches the specified schema.
"""
def enhance_system_prompt(self, original_prompt: str) -> str:
"""Add structured output instructions to the system prompt if needed"""
if not self._system_prompt_appendix:
return original_prompt
return f"{original_prompt}\n{self._system_prompt_appendix}"
def post_process_result(self, result: str) -> str:
"""Post-process the result to ensure it matches the expected format"""
if not self._output_format:
return result
# Try to extract valid JSON if it's wrapped in code blocks or other text
if self._output_format in ["json", "pydantic"]:
try:
# First, try to parse as is
json.loads(result)
return result
except json.JSONDecodeError:
# Try to extract JSON from the text
import re
json_match = re.search(r"(\{.*\})", result, re.DOTALL)
if json_match:
try:
extracted = json_match.group(1)
# Validate it's proper JSON
json.loads(extracted)
return extracted
except:
pass
return result

View File

@@ -1,178 +0,0 @@
from typing import Any, List, Optional
from pydantic import Field, PrivateAttr
from crewai.agents.agent_adapters.base_agent_adapter import BaseAgentAdapter
from crewai.agents.agent_adapters.openai_agents.structured_output_converter import (
OpenAIConverterAdapter,
)
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.tools import BaseTool
from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.utilities import Logger
from crewai.utilities.events import crewai_event_bus
from crewai.utilities.events.agent_events import (
AgentExecutionCompletedEvent,
AgentExecutionErrorEvent,
AgentExecutionStartedEvent,
)
try:
from agents import Agent as OpenAIAgent # type: ignore
from agents import Runner, enable_verbose_stdout_logging # type: ignore
from .openai_agent_tool_adapter import OpenAIAgentToolAdapter
OPENAI_AVAILABLE = True
except ImportError:
OPENAI_AVAILABLE = False
class OpenAIAgentAdapter(BaseAgentAdapter):
"""Adapter for OpenAI Assistants"""
model_config = {"arbitrary_types_allowed": True}
_openai_agent: "OpenAIAgent" = PrivateAttr()
_logger: Logger = PrivateAttr(default_factory=lambda: Logger())
_active_thread: Optional[str] = PrivateAttr(default=None)
function_calling_llm: Any = Field(default=None)
step_callback: Any = Field(default=None)
_tool_adapter: "OpenAIAgentToolAdapter" = PrivateAttr()
_converter_adapter: OpenAIConverterAdapter = PrivateAttr()
def __init__(
self,
model: str = "gpt-4o-mini",
tools: Optional[List[BaseTool]] = None,
agent_config: Optional[dict] = None,
**kwargs,
):
if not OPENAI_AVAILABLE:
raise ImportError(
"OpenAI Agent Dependencies are not installed. Please install it using `uv add openai-agents`"
)
else:
role = kwargs.pop("role", None)
goal = kwargs.pop("goal", None)
backstory = kwargs.pop("backstory", None)
super().__init__(
role=role,
goal=goal,
backstory=backstory,
tools=tools,
agent_config=agent_config,
**kwargs,
)
self._tool_adapter = OpenAIAgentToolAdapter(tools=tools)
self.llm = model
self._converter_adapter = OpenAIConverterAdapter(self)
def _build_system_prompt(self) -> str:
"""Build a system prompt for the OpenAI agent."""
base_prompt = f"""
You are {self.role}.
Your goal is: {self.goal}
Your backstory: {self.backstory}
When working on tasks, think step-by-step and use the available tools when necessary.
"""
return self._converter_adapter.enhance_system_prompt(base_prompt)
def execute_task(
self,
task: Any,
context: Optional[str] = None,
tools: Optional[List[BaseTool]] = None,
) -> str:
"""Execute a task using the OpenAI Assistant"""
self._converter_adapter.configure_structured_output(task)
self.create_agent_executor(tools)
if self.verbose:
enable_verbose_stdout_logging()
try:
task_prompt = task.prompt()
if context:
task_prompt = self.i18n.slice("task_with_context").format(
task=task_prompt, context=context
)
crewai_event_bus.emit(
self,
event=AgentExecutionStartedEvent(
agent=self,
tools=self.tools,
task_prompt=task_prompt,
task=task,
),
)
result = self.agent_executor.run_sync(self._openai_agent, task_prompt)
final_answer = self.handle_execution_result(result)
crewai_event_bus.emit(
self,
event=AgentExecutionCompletedEvent(
agent=self, task=task, output=final_answer
),
)
return final_answer
except Exception as e:
self._logger.log("error", f"Error executing OpenAI task: {str(e)}")
crewai_event_bus.emit(
self,
event=AgentExecutionErrorEvent(
agent=self,
task=task,
error=str(e),
),
)
raise
def create_agent_executor(self, tools: Optional[List[BaseTool]] = None) -> None:
"""
Configure the OpenAI agent for execution.
While OpenAI handles execution differently through Runner,
we can use this method to set up tools and configurations.
"""
all_tools = list(self.tools or []) + list(tools or [])
instructions = self._build_system_prompt()
self._openai_agent = OpenAIAgent(
name=self.role,
instructions=instructions,
model=self.llm,
**self._agent_config or {},
)
if all_tools:
self.configure_tools(all_tools)
self.agent_executor = Runner
def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
"""Configure tools for the OpenAI Assistant"""
if tools:
self._tool_adapter.configure_tools(tools)
if self._tool_adapter.converted_tools:
self._openai_agent.tools = self._tool_adapter.converted_tools
def handle_execution_result(self, result: Any) -> str:
"""Process OpenAI Assistant execution result converting any structured output to a string"""
return self._converter_adapter.post_process_result(result.final_output)
def get_delegation_tools(self, agents: List[BaseAgent]) -> List[BaseTool]:
"""Implement delegation tools support"""
agent_tools = AgentTools(agents=agents)
tools = agent_tools.tools()
return tools
def configure_structured_output(self, task) -> None:
"""Configure the structured output for the specific agent implementation.
Args:
structured_output: The structured output to be configured
"""
self._converter_adapter.configure_structured_output(task)

View File

@@ -1,91 +0,0 @@
import inspect
from typing import Any, List, Optional
from agents import FunctionTool, Tool
from crewai.agents.agent_adapters.base_tool_adapter import BaseToolAdapter
from crewai.tools import BaseTool
class OpenAIAgentToolAdapter(BaseToolAdapter):
"""Adapter for OpenAI Assistant tools"""
def __init__(self, tools: Optional[List[BaseTool]] = None):
self.original_tools = tools or []
def configure_tools(self, tools: List[BaseTool]) -> None:
"""Configure tools for the OpenAI Assistant"""
if self.original_tools:
all_tools = tools + self.original_tools
else:
all_tools = tools
if all_tools:
self.converted_tools = self._convert_tools_to_openai_format(all_tools)
def _convert_tools_to_openai_format(
self, tools: Optional[List[BaseTool]]
) -> List[Tool]:
"""Convert CrewAI tools to OpenAI Assistant tool format"""
if not tools:
return []
def sanitize_tool_name(name: str) -> str:
"""Convert tool name to match OpenAI's required pattern"""
import re
sanitized = re.sub(r"[^a-zA-Z0-9_-]", "_", name).lower()
return sanitized
def create_tool_wrapper(tool: BaseTool):
"""Create a wrapper function that handles the OpenAI function tool interface"""
async def wrapper(context_wrapper: Any, arguments: Any) -> Any:
# Get the parameter name from the schema
param_name = list(
tool.args_schema.model_json_schema()["properties"].keys()
)[0]
# Handle different argument types
if isinstance(arguments, dict):
args_dict = arguments
elif isinstance(arguments, str):
try:
import json
args_dict = json.loads(arguments)
except json.JSONDecodeError:
args_dict = {param_name: arguments}
else:
args_dict = {param_name: str(arguments)}
# Run the tool with the processed arguments
output = tool._run(**args_dict)
# Await if the tool returned a coroutine
if inspect.isawaitable(output):
result = await output
else:
result = output
# Ensure the result is JSON serializable
if isinstance(result, (dict, list, str, int, float, bool, type(None))):
return result
return str(result)
return wrapper
openai_tools = []
for tool in tools:
schema = tool.args_schema.model_json_schema()
schema.update({"additionalProperties": False, "type": "object"})
openai_tool = FunctionTool(
name=sanitize_tool_name(tool.name),
description=tool.description,
params_json_schema=schema,
on_invoke_tool=create_tool_wrapper(tool),
)
openai_tools.append(openai_tool)
return openai_tools

View File

@@ -1,122 +0,0 @@
import json
import re
from crewai.agents.agent_adapters.base_converter_adapter import BaseConverterAdapter
from crewai.utilities.converter import generate_model_description
from crewai.utilities.i18n import I18N
class OpenAIConverterAdapter(BaseConverterAdapter):
"""
Adapter for handling structured output conversion in OpenAI agents.
This adapter enhances the OpenAI agent to handle structured output formats
and post-processes the results when needed.
Attributes:
_output_format: The expected output format (json, pydantic, or None)
_schema: The schema description for the expected output
_output_model: The Pydantic model for the output
"""
def __init__(self, agent_adapter):
"""Initialize the converter adapter with a reference to the agent adapter"""
self.agent_adapter = agent_adapter
self._output_format = None
self._schema = None
self._output_model = None
def configure_structured_output(self, task) -> None:
"""
Configure the structured output for OpenAI agent based on task requirements.
Args:
task: The task containing output format requirements
"""
# Reset configuration
self._output_format = None
self._schema = None
self._output_model = None
# If no structured output is required, return early
if not (task.output_json or task.output_pydantic):
return
# Configure based on task output format
if task.output_json:
self._output_format = "json"
self._schema = generate_model_description(task.output_json)
self.agent_adapter._openai_agent.output_type = task.output_json
self._output_model = task.output_json
elif task.output_pydantic:
self._output_format = "pydantic"
self._schema = generate_model_description(task.output_pydantic)
self.agent_adapter._openai_agent.output_type = task.output_pydantic
self._output_model = task.output_pydantic
def enhance_system_prompt(self, base_prompt: str) -> str:
"""
Enhance the base system prompt with structured output requirements if needed.
Args:
base_prompt: The original system prompt
Returns:
Enhanced system prompt with output format instructions if needed
"""
if not self._output_format:
return base_prompt
output_schema = (
I18N()
.slice("formatted_task_instructions")
.format(output_format=self._schema)
)
return f"{base_prompt}\n\n{output_schema}"
def post_process_result(self, result: str) -> str:
"""
Post-process the result to ensure it matches the expected format.
This method attempts to extract valid JSON from the result if necessary.
Args:
result: The raw result from the agent
Returns:
Processed result conforming to the expected output format
"""
if not self._output_format:
return result
# Try to extract valid JSON if it's wrapped in code blocks or other text
if isinstance(result, str) and self._output_format in ["json", "pydantic"]:
# First, try to parse as is
try:
json.loads(result)
return result
except json.JSONDecodeError:
# Try to extract JSON from markdown code blocks
code_block_pattern = r"```(?:json)?\s*([\s\S]*?)```"
code_blocks = re.findall(code_block_pattern, result)
for block in code_blocks:
try:
json.loads(block.strip())
return block.strip()
except json.JSONDecodeError:
continue
# Try to extract any JSON-like structure
json_pattern = r"(\{[\s\S]*\})"
json_matches = re.findall(json_pattern, result, re.DOTALL)
for match in json_matches:
try:
json.loads(match)
return match
except json.JSONDecodeError:
continue
# If all extraction attempts fail, return the original
return str(result)

View File

@@ -62,6 +62,8 @@ class BaseAgent(ABC, BaseModel):
Abstract method to execute a task.
create_agent_executor(tools=None) -> None:
Abstract method to create an agent executor.
_parse_tools(tools: List[BaseTool]) -> List[Any]:
Abstract method to parse tools.
get_delegation_tools(agents: List["BaseAgent"]):
Abstract method to set the agents task tools for handling delegation and question asking to other agents in crew.
get_output_converter(llm, model, instructions):
@@ -152,9 +154,6 @@ class BaseAgent(ABC, BaseModel):
callbacks: List[Callable] = Field(
default=[], description="Callbacks to be used for the agent"
)
adapted_agent: bool = Field(
default=False, description="Whether the agent is adapted"
)
@model_validator(mode="before")
@classmethod
@@ -171,15 +170,15 @@ class BaseAgent(ABC, BaseModel):
tool meets these criteria, it is processed and added to the list of
tools. Otherwise, a ValueError is raised.
"""
if not tools:
return []
processed_tools = []
required_attrs = ["name", "func", "description"]
for tool in tools:
if isinstance(tool, BaseTool):
processed_tools.append(tool)
elif all(hasattr(tool, attr) for attr in required_attrs):
elif (
hasattr(tool, "name")
and hasattr(tool, "func")
and hasattr(tool, "description")
):
# Tool has the required attributes, create a Tool instance
processed_tools.append(Tool.from_langchain(tool))
else:
@@ -261,6 +260,13 @@ class BaseAgent(ABC, BaseModel):
"""Set the task tools that init BaseAgenTools class."""
pass
@abstractmethod
def get_output_converter(
self, llm: Any, text: str, model: type[BaseModel] | None, instructions: str
) -> Converter:
"""Get the converter class for the agent to create json/pydantic outputs."""
pass
def copy(self: T) -> T: # type: ignore # Signature of "copy" incompatible with supertype "BaseModel"
"""Create a deep copy of the Agent."""
exclude = {

View File

@@ -273,9 +273,11 @@ def get_crew(crew_path: str = "crew.py", require: bool = False) -> Crew | None:
for attr_name in dir(module):
attr = getattr(module, attr_name)
try:
if callable(attr) and hasattr(attr, "crew"):
crew_instance = attr().crew()
return crew_instance
if isinstance(attr, Crew) and hasattr(attr, "kickoff"):
print(
f"Found valid crew object in attribute '{attr_name}' at {crew_os_path}."
)
return attr
except Exception as e:
print(f"Error processing attribute {attr_name}: {e}")

View File

@@ -1399,12 +1399,12 @@ class Crew(BaseModel):
RuntimeError: If the specified memory system fails to reset
"""
reset_functions = {
"long": (getattr(self, "_long_term_memory", None), "long term"),
"short": (getattr(self, "_short_term_memory", None), "short term"),
"entity": (getattr(self, "_entity_memory", None), "entity"),
"knowledge": (getattr(self, "knowledge", None), "knowledge"),
"kickoff_outputs": (getattr(self, "_task_output_handler", None), "task output"),
"external": (getattr(self, "_external_memory", None), "external"),
"long": (self._long_term_memory, "long term"),
"short": (self._short_term_memory, "short term"),
"entity": (self._entity_memory, "entity"),
"knowledge": (self.knowledge, "knowledge"),
"kickoff_outputs": (self._task_output_handler, "task output"),
"external": (self._external_memory, "external"),
}
memory_system, name = reset_functions[memory_type]

View File

@@ -21,7 +21,7 @@ class SQLiteFlowPersistence(FlowPersistence):
moderate performance requirements.
"""
db_path: str
db_path: str # Type annotation for instance variable
def __init__(self, db_path: Optional[str] = None):
"""Initialize SQLite persistence.

View File

@@ -4,12 +4,9 @@ import os
import sys
import threading
import warnings
from collections import defaultdict
from contextlib import contextmanager
from types import SimpleNamespace
from typing import (
Any,
DefaultDict,
Dict,
List,
Literal,
@@ -21,8 +18,7 @@ from typing import (
)
from dotenv import load_dotenv
from litellm.types.utils import ChatCompletionDeltaToolCall
from pydantic import BaseModel, Field
from pydantic import BaseModel
from crewai.utilities.events.llm_events import (
LLMCallCompletedEvent,
@@ -223,15 +219,6 @@ class StreamingChoices(TypedDict):
finish_reason: Optional[str]
class FunctionArgs(BaseModel):
name: str = ""
arguments: str = ""
class AccumulatedToolArgs(BaseModel):
function: FunctionArgs = Field(default_factory=FunctionArgs)
class LLM(BaseLLM):
def __init__(
self,
@@ -384,11 +371,6 @@ class LLM(BaseLLM):
last_chunk = None
chunk_count = 0
usage_info = None
tool_calls = None
accumulated_tool_args: DefaultDict[int, AccumulatedToolArgs] = defaultdict(
AccumulatedToolArgs
)
# --- 2) Make sure stream is set to True and include usage metrics
params["stream"] = True
@@ -446,20 +428,6 @@ class LLM(BaseLLM):
if chunk_content is None and isinstance(delta, dict):
# Some models might send empty content chunks
chunk_content = ""
# Enable tool calls using streaming
if "tool_calls" in delta:
tool_calls = delta["tool_calls"]
if tool_calls:
result = self._handle_streaming_tool_calls(
tool_calls=tool_calls,
accumulated_tool_args=accumulated_tool_args,
available_functions=available_functions,
)
if result is not None:
chunk_content = result
except Exception as e:
logging.debug(f"Error extracting content from chunk: {e}")
logging.debug(f"Chunk format: {type(chunk)}, content: {chunk}")
@@ -474,6 +442,7 @@ class LLM(BaseLLM):
self,
event=LLMStreamChunkEvent(chunk=chunk_content),
)
# --- 4) Fallback to non-streaming if no content received
if not full_response.strip() and chunk_count == 0:
logging.warning(
@@ -532,7 +501,7 @@ class LLM(BaseLLM):
)
# --- 6) If still empty, raise an error instead of using a default response
if not full_response.strip() and len(accumulated_tool_args) == 0:
if not full_response.strip():
raise Exception(
"No content received from streaming response. Received empty chunks or failed to extract content."
)
@@ -564,8 +533,8 @@ class LLM(BaseLLM):
tool_calls = getattr(message, "tool_calls")
except Exception as e:
logging.debug(f"Error checking for tool calls: {e}")
# --- 8) If no tool calls or no available functions, return the text response directly
if not tool_calls or not available_functions:
# Log token usage if available in streaming mode
self._handle_streaming_callbacks(callbacks, usage_info, last_chunk)
@@ -599,47 +568,6 @@ class LLM(BaseLLM):
)
raise Exception(f"Failed to get streaming response: {str(e)}")
def _handle_streaming_tool_calls(
self,
tool_calls: List[ChatCompletionDeltaToolCall],
accumulated_tool_args: DefaultDict[int, AccumulatedToolArgs],
available_functions: Optional[Dict[str, Any]] = None,
) -> None | str:
for tool_call in tool_calls:
current_tool_accumulator = accumulated_tool_args[tool_call.index]
if tool_call.function.name:
current_tool_accumulator.function.name = tool_call.function.name
if tool_call.function.arguments:
current_tool_accumulator.function.arguments += (
tool_call.function.arguments
)
crewai_event_bus.emit(
self,
event=LLMStreamChunkEvent(
tool_call=tool_call.to_dict(),
chunk=tool_call.function.arguments,
),
)
if (
current_tool_accumulator.function.name
and current_tool_accumulator.function.arguments
and available_functions
):
try:
json.loads(current_tool_accumulator.function.arguments)
return self._handle_tool_call(
[current_tool_accumulator],
available_functions,
)
except json.JSONDecodeError:
continue
return None
def _handle_streaming_callbacks(
self,
callbacks: Optional[List[Any]],

View File

@@ -1,2 +0,0 @@
CREWAI_TELEMETRY_BASE_URL: str = "https://telemetry.crewai.com:4319"
CREWAI_TELEMETRY_SERVICE_NAME: str = "crewAI-telemetry"

View File

@@ -9,11 +9,6 @@ from contextlib import contextmanager
from importlib.metadata import version
from typing import TYPE_CHECKING, Any, Optional
from crewai.telemetry.constants import (
CREWAI_TELEMETRY_BASE_URL,
CREWAI_TELEMETRY_SERVICE_NAME,
)
@contextmanager
def suppress_warnings():
@@ -57,15 +52,16 @@ class Telemetry:
return
try:
telemetry_endpoint = "https://telemetry.crewai.com:4319"
self.resource = Resource(
attributes={SERVICE_NAME: CREWAI_TELEMETRY_SERVICE_NAME},
attributes={SERVICE_NAME: "crewAI-telemetry"},
)
with suppress_warnings():
self.provider = TracerProvider(resource=self.resource)
processor = BatchSpanProcessor(
OTLPSpanExporter(
endpoint=f"{CREWAI_TELEMETRY_BASE_URL}/v1/traces",
endpoint=f"{telemetry_endpoint}/v1/traces",
timeout=30,
)
)
@@ -79,12 +75,12 @@ class Telemetry:
):
raise # Re-raise the exception to not interfere with system signals
self.ready = False
def _is_telemetry_disabled(self) -> bool:
"""Check if telemetry should be disabled based on environment variables."""
return (
os.getenv("OTEL_SDK_DISABLED", "false").lower() == "true"
or os.getenv("CREWAI_DISABLE_TELEMETRY", "false").lower() == "true"
os.getenv("OTEL_SDK_DISABLED", "false").lower() == "true" or
os.getenv("CREWAI_DISABLE_TELEMETRY", "false").lower() == "true"
)
def set_tracer(self):

View File
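This file reverts the telemetry endpoint and service name from imported constants back to inline literals, while `_is_telemetry_disabled` keeps honoring two environment variables. For reference, a sketch of disabling telemetry under that check (variable names taken verbatim from the hunk):

```python Code
import os

# Either switch makes _is_telemetry_disabled() return True:
os.environ["CREWAI_DISABLE_TELEMETRY"] = "true"  # crewAI-specific kill switch
os.environ["OTEL_SDK_DISABLED"] = "true"         # generic OpenTelemetry switch
```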

@@ -216,7 +216,7 @@ def convert_with_instructions(
def get_conversion_instructions(model: Type[BaseModel], llm: Any) -> str:
instructions = "Please convert the following text into valid JSON."
if llm and not isinstance(llm, str) and llm.supports_function_calling():
if llm.supports_function_calling():
model_schema = PydanticSchemaParser(model=model).get_schema()
instructions += (
f"\n\nOutput ONLY the valid JSON and nothing else.\n\n"

View File
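The two `if` lines above are the crux of this hunk: one side calls `llm.supports_function_calling()` unguarded, the other first checks that `llm` is truthy and not a bare model-name string. A sketch of why the guard matters (this standalone function is a simplification of `get_conversion_instructions`):

```python Code
def get_instructions(llm) -> str:
    instructions = "Please convert the following text into valid JSON."
    # A plain string like "gpt-4o" has no supports_function_calling() method,
    # so calling it unguarded would raise AttributeError.
    if llm and not isinstance(llm, str) and llm.supports_function_calling():
        instructions += "\n\nOutput ONLY the valid JSON and nothing else."
    return instructions
```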

@@ -1,8 +1,6 @@
from enum import Enum
from typing import Any, Dict, List, Optional, Union
from pydantic import BaseModel
from crewai.utilities.events.base_events import BaseEvent
@@ -43,21 +41,8 @@ class LLMCallFailedEvent(BaseEvent):
type: str = "llm_call_failed"
class FunctionCall(BaseModel):
arguments: str
name: Optional[str] = None
class ToolCall(BaseModel):
id: Optional[str] = None
function: FunctionCall
type: Optional[str] = None
index: int
class LLMStreamChunkEvent(BaseEvent):
"""Event emitted when a streaming chunk is received"""
type: str = "llm_stream_chunk"
chunk: str
tool_call: Optional[ToolCall] = None

View File
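With the optional `tool_call` field on `LLMStreamChunkEvent`, a listener can distinguish text tokens from tool-argument fragments. A hedged sketch of such a handler (the event-bus subscription wiring is omitted; only the field shapes come from the models above):

```python Code
def on_stream_chunk(event) -> None:
    if event.tool_call is not None:
        # Tool-argument fragment: chunk holds a slice of the JSON arguments.
        fn = event.tool_call.function
        print(f"tool[{event.tool_call.index}] {fn.name}: {event.chunk!r}")
    else:
        # Ordinary text token.
        print(event.chunk, end="")
```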

@@ -29,14 +29,7 @@ class InternalInstructor:
import instructor
from litellm import completion
is_custom_openai = getattr(self.llm, 'model', '').startswith('custom_openai/')
mode = instructor.Mode.PARALLEL_TOOLS if is_custom_openai else instructor.Mode.TOOLS
self._client = instructor.from_litellm(
completion,
mode=mode,
)
self._client = instructor.from_litellm(completion)
def to_json(self):
model = self.to_pydantic()
@@ -47,8 +40,4 @@ class InternalInstructor:
model = self._client.chat.completions.create(
model=self.llm.model, response_model=self.model, messages=messages
)
if isinstance(model, list) and len(model) > 0:
return model[0] # Return the first model from the list
return model

View File
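One side of this hunk picks instructor's `PARALLEL_TOOLS` mode for models whose id starts with `custom_openai/` and unwraps list results, since parallel-tools responses can arrive as a list of models. Condensed, the selection logic looks like this (mirroring the calls shown in the hunk):

```python Code
import instructor
from litellm import completion

def build_client(model_id: str):
    # custom_openai/ models are routed through parallel tool calling.
    is_custom_openai = model_id.startswith("custom_openai/")
    mode = instructor.Mode.PARALLEL_TOOLS if is_custom_openai else instructor.Mode.TOOLS
    return instructor.from_litellm(completion, mode=mode)
```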

@@ -1,113 +0,0 @@
from typing import Any, Dict, List, Optional
import pytest
from pydantic import BaseModel
from crewai.agent import BaseAgent
from crewai.agents.agent_adapters.base_agent_adapter import BaseAgentAdapter
from crewai.tools import BaseTool
from crewai.utilities.token_counter_callback import TokenProcess
# Concrete implementation for testing
class ConcreteAgentAdapter(BaseAgentAdapter):
def configure_tools(
self, tools: Optional[List[BaseTool]] = None, **kwargs: Any
) -> None:
# Simple implementation for testing
self.tools = tools or []
def execute_task(
self,
task: Any,
context: Optional[str] = None,
tools: Optional[List[Any]] = None,
) -> str:
# Dummy implementation needed due to BaseAgent inheritance
return "Task executed"
def create_agent_executor(self, tools: Optional[List[BaseTool]] = None) -> Any:
# Dummy implementation
return None
def get_delegation_tools(
self, tools: List[BaseTool], tool_map: Optional[Dict[str, BaseTool]]
) -> List[BaseTool]:
# Dummy implementation
return []
def _parse_output(self, agent_output: Any, token_process: TokenProcess):
# Dummy implementation
pass
def get_output_converter(self, tools: Optional[List[BaseTool]] = None) -> Any:
# Dummy implementation
return None
def test_base_agent_adapter_initialization():
"""Test initialization of the concrete agent adapter."""
adapter = ConcreteAgentAdapter(
role="test role", goal="test goal", backstory="test backstory"
)
assert isinstance(adapter, BaseAgent)
assert isinstance(adapter, BaseAgentAdapter)
assert adapter.role == "test role"
assert adapter._agent_config is None
assert adapter.adapted_structured_output is False
def test_base_agent_adapter_initialization_with_config():
"""Test initialization with agent_config."""
config = {"model": "gpt-4"}
adapter = ConcreteAgentAdapter(
agent_config=config,
role="test role",
goal="test goal",
backstory="test backstory",
)
assert adapter._agent_config == config
def test_configure_tools_method_exists():
"""Test that configure_tools method exists and can be called."""
adapter = ConcreteAgentAdapter(
role="test role", goal="test goal", backstory="test backstory"
)
# Create dummy tools if needed, or pass None
tools = []
adapter.configure_tools(tools)
assert hasattr(adapter, "tools")
assert adapter.tools == tools
def test_configure_structured_output_method_exists():
"""Test that configure_structured_output method exists and can be called."""
adapter = ConcreteAgentAdapter(
role="test role", goal="test goal", backstory="test backstory"
)
# Define a dummy structure or pass None/Any
class DummyOutput(BaseModel):
data: str
structured_output = DummyOutput
adapter.configure_structured_output(structured_output)
# Add assertions here if configure_structured_output modifies state
# For now, just ensuring it runs without error is sufficient
pass
def test_base_agent_adapter_inherits_base_agent():
"""Test that BaseAgentAdapter inherits from BaseAgent."""
assert issubclass(BaseAgentAdapter, BaseAgent)
class ConcreteAgentAdapterWithoutRequiredMethods(BaseAgentAdapter):
pass
def test_base_agent_adapter_fails_without_required_methods():
"""Test that BaseAgentAdapter fails without required methods."""
with pytest.raises(TypeError):
ConcreteAgentAdapterWithoutRequiredMethods() # type: ignore

View File

@@ -1,94 +0,0 @@
from typing import Any, List
from unittest.mock import Mock
import pytest
from crewai.agents.agent_adapters.base_tool_adapter import BaseToolAdapter
from crewai.tools.base_tool import BaseTool
class ConcreteToolAdapter(BaseToolAdapter):
def configure_tools(self, tools: List[BaseTool]) -> None:
self.converted_tools = [f"converted_{tool.name}" for tool in tools]
@pytest.fixture
def mock_tool_1():
tool = Mock(spec=BaseTool)
tool.name = "Mock Tool 1"
return tool
@pytest.fixture
def mock_tool_2():
tool = Mock(spec=BaseTool)
tool.name = "MockTool2"
return tool
@pytest.fixture
def tools_list(mock_tool_1, mock_tool_2):
return [mock_tool_1, mock_tool_2]
def test_initialization_with_tools(tools_list):
adapter = ConcreteToolAdapter(tools=tools_list)
assert adapter.original_tools == tools_list
assert adapter.converted_tools == [] # Conversion happens in configure_tools
def test_initialization_without_tools():
adapter = ConcreteToolAdapter()
assert adapter.original_tools == []
assert adapter.converted_tools == []
def test_configure_tools(tools_list):
adapter = ConcreteToolAdapter()
adapter.configure_tools(tools_list)
assert adapter.converted_tools == ["converted_Mock Tool 1", "converted_MockTool2"]
assert adapter.original_tools == [] # original_tools is only set in init
adapter_with_init_tools = ConcreteToolAdapter(tools=tools_list)
adapter_with_init_tools.configure_tools(tools_list)
assert adapter_with_init_tools.converted_tools == [
"converted_Mock Tool 1",
"converted_MockTool2",
]
assert adapter_with_init_tools.original_tools == tools_list
def test_tools_method(tools_list):
adapter = ConcreteToolAdapter()
adapter.configure_tools(tools_list)
assert adapter.tools() == ["converted_Mock Tool 1", "converted_MockTool2"]
def test_tools_method_empty():
adapter = ConcreteToolAdapter()
assert adapter.tools() == []
def test_sanitize_tool_name_with_spaces():
adapter = ConcreteToolAdapter()
assert adapter.sanitize_tool_name("Tool With Spaces") == "Tool_With_Spaces"
def test_sanitize_tool_name_without_spaces():
adapter = ConcreteToolAdapter()
assert adapter.sanitize_tool_name("ToolWithoutSpaces") == "ToolWithoutSpaces"
def test_sanitize_tool_name_empty():
adapter = ConcreteToolAdapter()
assert adapter.sanitize_tool_name("") == ""
class ConcreteToolAdapterWithoutRequiredMethods(BaseToolAdapter):
pass
def test_tool_adapted_fails_without_required_methods():
"""Test that BaseToolAdapter fails without required methods."""
with pytest.raises(TypeError):
ConcreteToolAdapterWithoutRequiredMethods() # type: ignore

View File

@@ -18,6 +18,9 @@ class MockAgent(BaseAgent):
def create_agent_executor(self, tools=None) -> None: ...
def _parse_tools(self, tools: List[BaseTool]) -> List[BaseTool]:
return []
def get_delegation_tools(self, agents: List["BaseAgent"]): ...
def get_output_converter(

View File

@@ -1,133 +0,0 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "What is the weather in New York?"}],
"model": "gpt-4o", "stop": [], "stream": true, "stream_options": {"include_usage":
true}, "tools": [{"type": "function", "function": {"name": "get_weather", "description":
"Get the current weather in a given location", "parameters": {"type": "object",
"properties": {"location": {"type": "string", "description": "The city and state,
e.g. San Francisco, CA"}}, "required": ["location"]}}}]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '470'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.74.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.74.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.12
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-BMyImYKF6jsJsQzWvhNfNb4gAIRVc","object":"chat.completion.chunk","created":1744814716,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"role":"assistant","content":null,"tool_calls":[{"index":0,"id":"call_bCEixqN8Y40SUyius8ZfVErH","type":"function","function":{"name":"get_weather","arguments":""}}],"refusal":null},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMyImYKF6jsJsQzWvhNfNb4gAIRVc","object":"chat.completion.chunk","created":1744814716,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":"{\""}}]},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMyImYKF6jsJsQzWvhNfNb4gAIRVc","object":"chat.completion.chunk","created":1744814716,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":"location"}}]},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMyImYKF6jsJsQzWvhNfNb4gAIRVc","object":"chat.completion.chunk","created":1744814716,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":"\":\""}}]},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMyImYKF6jsJsQzWvhNfNb4gAIRVc","object":"chat.completion.chunk","created":1744814716,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":"New"}}]},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMyImYKF6jsJsQzWvhNfNb4gAIRVc","object":"chat.completion.chunk","created":1744814716,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":"
York"}}]},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMyImYKF6jsJsQzWvhNfNb4gAIRVc","object":"chat.completion.chunk","created":1744814716,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":","}}]},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMyImYKF6jsJsQzWvhNfNb4gAIRVc","object":"chat.completion.chunk","created":1744814716,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":"
NY"}}]},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMyImYKF6jsJsQzWvhNfNb4gAIRVc","object":"chat.completion.chunk","created":1744814716,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":"\"}"}}]},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMyImYKF6jsJsQzWvhNfNb4gAIRVc","object":"chat.completion.chunk","created":1744814716,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"tool_calls"}],"usage":null}
data: {"id":"chatcmpl-BMyImYKF6jsJsQzWvhNfNb4gAIRVc","object":"chat.completion.chunk","created":1744814716,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[],"usage":{"prompt_tokens":68,"completion_tokens":18,"total_tokens":86,"prompt_tokens_details":{"cached_tokens":0,"audio_tokens":0},"completion_tokens_details":{"reasoning_tokens":0,"audio_tokens":0,"accepted_prediction_tokens":0,"rejected_prediction_tokens":0}}}
data: [DONE]
'
headers:
CF-RAY:
- 93147723ecc1f237-GRU
Connection:
- keep-alive
Content-Type:
- text/event-stream; charset=utf-8
Date:
- Wed, 16 Apr 2025 14:45:16 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '620'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999989'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_0a08d5f042ef769aeb2c941e398f65f4
status:
code: 200
message: OK
version: 1

View File

@@ -1,133 +0,0 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "What is the weather in New York?"}],
"model": "gpt-4o", "stop": [], "stream": true, "stream_options": {"include_usage":
true}, "tools": [{"type": "function", "function": {"name": "get_weather", "description":
"Get the current weather in a given location", "parameters": {"type": "object",
"properties": {"location": {"type": "string", "description": "The city and state,
e.g. San Francisco, CA"}}, "required": ["location"]}}}]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '470'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.74.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.74.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.12
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-BMy4cOLWqB5n5X1Zog3uV9Y7Lrkwp","object":"chat.completion.chunk","created":1744813838,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"role":"assistant","content":null,"tool_calls":[{"index":0,"id":"call_ccog5nyDCLYpoWzksuCkGEqY","type":"function","function":{"name":"get_weather","arguments":""}}],"refusal":null},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4cOLWqB5n5X1Zog3uV9Y7Lrkwp","object":"chat.completion.chunk","created":1744813838,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":"{\""}}]},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4cOLWqB5n5X1Zog3uV9Y7Lrkwp","object":"chat.completion.chunk","created":1744813838,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":"location"}}]},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4cOLWqB5n5X1Zog3uV9Y7Lrkwp","object":"chat.completion.chunk","created":1744813838,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":"\":\""}}]},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4cOLWqB5n5X1Zog3uV9Y7Lrkwp","object":"chat.completion.chunk","created":1744813838,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":"New"}}]},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4cOLWqB5n5X1Zog3uV9Y7Lrkwp","object":"chat.completion.chunk","created":1744813838,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":"
York"}}]},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4cOLWqB5n5X1Zog3uV9Y7Lrkwp","object":"chat.completion.chunk","created":1744813838,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":","}}]},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4cOLWqB5n5X1Zog3uV9Y7Lrkwp","object":"chat.completion.chunk","created":1744813838,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":"
NY"}}]},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4cOLWqB5n5X1Zog3uV9Y7Lrkwp","object":"chat.completion.chunk","created":1744813838,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":"\"}"}}]},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4cOLWqB5n5X1Zog3uV9Y7Lrkwp","object":"chat.completion.chunk","created":1744813838,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"tool_calls"}],"usage":null}
data: {"id":"chatcmpl-BMy4cOLWqB5n5X1Zog3uV9Y7Lrkwp","object":"chat.completion.chunk","created":1744813838,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[],"usage":{"prompt_tokens":68,"completion_tokens":18,"total_tokens":86,"prompt_tokens_details":{"cached_tokens":0,"audio_tokens":0},"completion_tokens_details":{"reasoning_tokens":0,"audio_tokens":0,"accepted_prediction_tokens":0,"rejected_prediction_tokens":0}}}
data: [DONE]
'
headers:
CF-RAY:
- 931461bbddc27df9-GRU
Connection:
- keep-alive
Content-Type:
- text/event-stream; charset=utf-8
Date:
- Wed, 16 Apr 2025 14:30:39 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '445'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999989'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_1bb5862de2891623d44c012aba597c5e
status:
code: 200
message: OK
version: 1

View File

@@ -1,279 +0,0 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "What is the weather in New York?"}],
"model": "gpt-4o", "stop": [], "stream": true, "stream_options": {"include_usage":
true}}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '169'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.74.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.74.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.12
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"role":"assistant","content":"","refusal":null},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"I''m"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
unable"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
provide"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
real"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"-time"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
information"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
or"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
current"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
weather"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
updates"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
For"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
the"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
latest"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
weather"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
information"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
in"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
New"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
York"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
recommend"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
checking"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
reliable"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
weather"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
website"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
or"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
app"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
such"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
as"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
the"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
National"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
Weather"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
Service"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
Weather"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":".com"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
or"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
similar"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"
service"},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}],"usage":null}
data: {"id":"chatcmpl-BMy4dVQFM7KUrmflCVk4i454PXCga","object":"chat.completion.chunk","created":1744813839,"model":"gpt-4o-2024-08-06","service_tier":"default","system_fingerprint":"fp_22890b9c0a","choices":[],"usage":{"prompt_tokens":15,"completion_tokens":47,"total_tokens":62,"prompt_tokens_details":{"cached_tokens":0,"audio_tokens":0},"completion_tokens_details":{"reasoning_tokens":0,"audio_tokens":0,"accepted_prediction_tokens":0,"rejected_prediction_tokens":0}}}
data: [DONE]
'
headers:
CF-RAY:
- 931461c25bb47df9-GRU
Connection:
- keep-alive
Content-Type:
- text/event-stream; charset=utf-8
Date:
- Wed, 16 Apr 2025 14:30:40 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '298'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999989'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_89971fd68e5a59c9fa50e04106228b0a
status:
code: 200
message: OK
version: 1

View File

@@ -8,7 +8,6 @@ from dotenv import load_dotenv
load_result = load_dotenv(override=True)
@pytest.fixture(autouse=True)
def setup_test_environment():
"""Set up test environment with a temporary directory for SQLite storage."""
@@ -16,13 +15,11 @@ def setup_test_environment():
# Create the directory with proper permissions
storage_dir = Path(temp_dir) / "crewai_test_storage"
storage_dir.mkdir(parents=True, exist_ok=True)
# Validate that the directory was created successfully
if not storage_dir.exists() or not storage_dir.is_dir():
raise RuntimeError(
f"Failed to create test storage directory: {storage_dir}"
)
raise RuntimeError(f"Failed to create test storage directory: {storage_dir}")
# Verify directory permissions
try:
# Try to create a test file to verify write permissions
@@ -30,20 +27,11 @@ def setup_test_environment():
test_file.touch()
test_file.unlink()
except (OSError, IOError) as e:
raise RuntimeError(
f"Test storage directory {storage_dir} is not writable: {e}"
)
raise RuntimeError(f"Test storage directory {storage_dir} is not writable: {e}")
# Set environment variable to point to the test storage directory
os.environ["CREWAI_STORAGE_DIR"] = str(storage_dir)
yield
# Cleanup is handled automatically when tempfile context exits
@pytest.fixture(scope="module")
def vcr_config(request) -> dict:
return {
"cassette_library_dir": "tests/cassettes",
}

View File
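These conftest hunks reformat the storage-directory error messages and relocate the pytest-vcr `vcr_config` fixture; together with the per-module fixtures further down, one side of the diff keeps cassette configuration in a single place. The fixture shape itself is the same either way:

```python Code
import pytest

@pytest.fixture(scope="module")
def vcr_config(request) -> dict:
    # pytest-recording/pytest-vcr resolve the cassette directory from here.
    return {"cassette_library_dir": "tests/cassettes"}
```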

@@ -42,6 +42,11 @@ from crewai.utilities.events.event_listener import EventListener
from crewai.utilities.rpm_controller import RPMController
from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler
# Skip streaming tests when running in CI/CD environments
skip_streaming_in_ci = pytest.mark.skipif(
os.getenv("CI") is not None, reason="Skipping streaming tests in CI/CD environments"
)
ceo = Agent(
role="CEO",
goal="Make sure the writers in your company produce amazing content.",
@@ -958,6 +963,7 @@ def test_api_calls_throttling(capsys):
moveon.assert_called()
@skip_streaming_in_ci
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_kickoff_usage_metrics():
inputs = [
@@ -993,6 +999,7 @@ def test_crew_kickoff_usage_metrics():
assert result.token_usage.cached_prompt_tokens == 0
@skip_streaming_in_ci
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_kickoff_streaming_usage_metrics():
inputs = [
@@ -4249,93 +4256,53 @@ def test_crew_kickoff_for_each_works_with_manager_agent_copy():
assert crew_copy.manager_agent.role == crew.manager_agent.role
assert crew_copy.manager_agent.goal == crew.manager_agent.goal
def test_crew_copy_with_memory():
"""Test that copying a crew with memory enabled does not raise validation errors and copies memory correctly."""
agent = Agent(role="Test Agent", goal="Test Goal", backstory="Test Backstory")
task = Task(description="Test Task", expected_output="Test Output", agent=agent)
crew = Crew(agents=[agent], tasks=[task], memory=True)
original_short_term_id = (
id(crew._short_term_memory) if crew._short_term_memory else None
)
original_long_term_id = (
id(crew._long_term_memory) if crew._long_term_memory else None
)
original_short_term_id = id(crew._short_term_memory) if crew._short_term_memory else None
original_long_term_id = id(crew._long_term_memory) if crew._long_term_memory else None
original_entity_id = id(crew._entity_memory) if crew._entity_memory else None
original_external_id = id(crew._external_memory) if crew._external_memory else None
original_user_id = id(crew._user_memory) if crew._user_memory else None
try:
crew_copy = crew.copy()
assert hasattr(
crew_copy, "_short_term_memory"
), "Copied crew should have _short_term_memory"
assert (
crew_copy._short_term_memory is not None
), "Copied _short_term_memory should not be None"
assert (
id(crew_copy._short_term_memory) != original_short_term_id
), "Copied _short_term_memory should be a new object"
assert hasattr(crew_copy, "_short_term_memory"), "Copied crew should have _short_term_memory"
assert crew_copy._short_term_memory is not None, "Copied _short_term_memory should not be None"
assert id(crew_copy._short_term_memory) != original_short_term_id, "Copied _short_term_memory should be a new object"
assert hasattr(
crew_copy, "_long_term_memory"
), "Copied crew should have _long_term_memory"
assert (
crew_copy._long_term_memory is not None
), "Copied _long_term_memory should not be None"
assert (
id(crew_copy._long_term_memory) != original_long_term_id
), "Copied _long_term_memory should be a new object"
assert hasattr(crew_copy, "_long_term_memory"), "Copied crew should have _long_term_memory"
assert crew_copy._long_term_memory is not None, "Copied _long_term_memory should not be None"
assert id(crew_copy._long_term_memory) != original_long_term_id, "Copied _long_term_memory should be a new object"
assert hasattr(
crew_copy, "_entity_memory"
), "Copied crew should have _entity_memory"
assert (
crew_copy._entity_memory is not None
), "Copied _entity_memory should not be None"
assert (
id(crew_copy._entity_memory) != original_entity_id
), "Copied _entity_memory should be a new object"
assert hasattr(crew_copy, "_entity_memory"), "Copied crew should have _entity_memory"
assert crew_copy._entity_memory is not None, "Copied _entity_memory should not be None"
assert id(crew_copy._entity_memory) != original_entity_id, "Copied _entity_memory should be a new object"
if original_external_id:
assert hasattr(
crew_copy, "_external_memory"
), "Copied crew should have _external_memory"
assert (
crew_copy._external_memory is not None
), "Copied _external_memory should not be None"
assert (
id(crew_copy._external_memory) != original_external_id
), "Copied _external_memory should be a new object"
assert hasattr(crew_copy, "_external_memory"), "Copied crew should have _external_memory"
assert crew_copy._external_memory is not None, "Copied _external_memory should not be None"
assert id(crew_copy._external_memory) != original_external_id, "Copied _external_memory should be a new object"
else:
assert (
not hasattr(crew_copy, "_external_memory")
or crew_copy._external_memory is None
), "Copied _external_memory should be None if not originally present"
assert not hasattr(crew_copy, "_external_memory") or crew_copy._external_memory is None, "Copied _external_memory should be None if not originally present"
if original_user_id:
assert hasattr(
crew_copy, "_user_memory"
), "Copied crew should have _user_memory"
assert (
crew_copy._user_memory is not None
), "Copied _user_memory should not be None"
assert (
id(crew_copy._user_memory) != original_user_id
), "Copied _user_memory should be a new object"
assert hasattr(crew_copy, "_user_memory"), "Copied crew should have _user_memory"
assert crew_copy._user_memory is not None, "Copied _user_memory should not be None"
assert id(crew_copy._user_memory) != original_user_id, "Copied _user_memory should be a new object"
else:
assert (
not hasattr(crew_copy, "_user_memory") or crew_copy._user_memory is None
), "Copied _user_memory should be None if not originally present"
assert not hasattr(crew_copy, "_user_memory") or crew_copy._user_memory is None, "Copied _user_memory should be None if not originally present"
except pydantic_core.ValidationError as e:
if "Input should be an instance of" in str(e) and ("Memory" in str(e)):
pytest.fail(
f"Copying with memory raised Pydantic ValidationError, likely due to incorrect memory copy: {e}"
)
else:
raise e # Re-raise other validation errors
if "Input should be an instance of" in str(e) and ("Memory" in str(e)):
pytest.fail(f"Copying with memory raised Pydantic ValidationError, likely due to incorrect memory copy: {e}")
else:
raise e # Re-raise other validation errors
except Exception as e:
pytest.fail(f"Copying crew raised an unexpected exception: {e}")

View File
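The `skip_streaming_in_ci` marker added above gates streaming tests on the `CI` environment variable, which most CI providers set automatically. The same pattern in isolation:

```python Code
import os
import pytest

skip_streaming_in_ci = pytest.mark.skipif(
    os.getenv("CI") is not None,
    reason="Skipping streaming tests in CI/CD environments",
)

@skip_streaming_in_ci
def test_streaming_smoke():
    assert True  # replaced by real streaming assertions in the suite
```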

@@ -2,16 +2,13 @@ import os
from time import sleep
from unittest.mock import MagicMock, patch
import litellm
import pytest
from pydantic import BaseModel
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
from crewai.llm import CONTEXT_WINDOW_USAGE_RATIO, LLM
from crewai.utilities.events import (
LLMCallCompletedEvent,
LLMStreamChunkEvent,
)
from crewai.utilities.events import crewai_event_bus
from crewai.utilities.events.tool_usage_events import ToolExecutionErrorEvent
from crewai.utilities.token_counter_callback import TokenCalcHandler
@@ -307,27 +304,6 @@ def test_context_window_validation():
assert "must be between 1024 and 2097152" in str(excinfo.value)
@pytest.fixture
def get_weather_tool_schema():
return {
"type": "function",
"function": {
"name": "get_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
}
},
"required": ["location"],
},
},
}
@pytest.mark.vcr(filter_headers=["authorization"])
@pytest.fixture
def anthropic_llm():
@@ -419,117 +395,3 @@ def test_deepseek_r1_with_open_router():
result = llm.call("What is the capital of France?")
assert isinstance(result, str)
assert "Paris" in result
def assert_event_count(
mock_emit,
expected_completed_tool_call: int = 0,
expected_stream_chunk: int = 0,
expected_completed_llm_call: int = 0,
expected_final_chunk_result: str = "",
):
event_count = {
"completed_tool_call": 0,
"stream_chunk": 0,
"completed_llm_call": 0,
}
final_chunk_result = ""
for _call in mock_emit.call_args_list:
event = _call[1]["event"]
if (
isinstance(event, LLMCallCompletedEvent)
and event.call_type.value == "tool_call"
):
event_count["completed_tool_call"] += 1
elif isinstance(event, LLMStreamChunkEvent):
event_count["stream_chunk"] += 1
final_chunk_result += event.chunk
elif (
isinstance(event, LLMCallCompletedEvent)
and event.call_type.value == "llm_call"
):
event_count["completed_llm_call"] += 1
else:
continue
assert event_count["completed_tool_call"] == expected_completed_tool_call
assert event_count["stream_chunk"] == expected_stream_chunk
assert event_count["completed_llm_call"] == expected_completed_llm_call
assert final_chunk_result == expected_final_chunk_result
@pytest.fixture
def mock_emit() -> MagicMock:
from crewai.utilities.events.crewai_event_bus import CrewAIEventsBus
with patch.object(CrewAIEventsBus, "emit") as mock_emit:
yield mock_emit
@pytest.mark.vcr(filter_headers=["authorization"])
def test_handle_streaming_tool_calls(get_weather_tool_schema, mock_emit):
llm = LLM(model="openai/gpt-4o", stream=True)
response = llm.call(
messages=[
{"role": "user", "content": "What is the weather in New York?"},
],
tools=[get_weather_tool_schema],
available_functions={
"get_weather": lambda location: f"The weather in {location} is sunny"
},
)
assert response == "The weather in New York, NY is sunny"
expected_final_chunk_result = (
'{"location":"New York, NY"}The weather in New York, NY is sunny'
)
assert_event_count(
mock_emit=mock_emit,
expected_completed_tool_call=1,
expected_stream_chunk=10,
expected_completed_llm_call=1,
expected_final_chunk_result=expected_final_chunk_result,
)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_handle_streaming_tool_calls_no_available_functions(
get_weather_tool_schema, mock_emit
):
llm = LLM(model="openai/gpt-4o", stream=True)
response = llm.call(
messages=[
{"role": "user", "content": "What is the weather in New York?"},
],
tools=[get_weather_tool_schema],
)
assert response == ""
assert_event_count(
mock_emit=mock_emit,
expected_stream_chunk=9,
expected_completed_llm_call=1,
expected_final_chunk_result='{"location":"New York, NY"}',
)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_handle_streaming_tool_calls_no_tools(mock_emit):
llm = LLM(model="openai/gpt-4o", stream=True)
response = llm.call(
messages=[
{"role": "user", "content": "What is the weather in New York?"},
],
)
assert (
response
== "I'm unable to provide real-time information or current weather updates. For the latest weather information in New York, I recommend checking a reliable weather website or app, such as the National Weather Service, Weather.com, or a similar service."
)
assert_event_count(
mock_emit=mock_emit,
expected_stream_chunk=46,
expected_completed_llm_call=1,
expected_final_chunk_result=response,
)

View File
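The chunk counts asserted in these tests line up with the deleted weather cassettes earlier in this diff: the request streams one empty-argument delta plus eight argument fragments (nine `LLMStreamChunkEvent`s), and executing the tool emits one more chunk with the result, giving ten when `available_functions` is supplied. The concatenation is easy to verify:

```python Code
fragments = ['{"', "location", '":"', "New", " York", ",", " NY", '"}']
assert "".join(fragments) == '{"location":"New York, NY"}'
```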

@@ -0,0 +1,270 @@
interactions:
- request:
body: ''
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
host:
- api.mem0.ai
user-agent:
- python-httpx/0.27.0
method: GET
uri: https://api.mem0.ai/v1/memories/?user_id=test
response:
body:
string: '[]'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8b477138bad847b9-BOM
Connection:
- keep-alive
Content-Length:
- '2'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:11 GMT
NEL:
- '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'
Report-To:
- '{"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=uuyH2foMJVDpV%2FH52g1q%2FnvXKe3dBKVzvsK0mqmSNezkiszNR9OgrEJfVqmkX%2FlPFRP2sH4zrOuzGo6k%2FjzsjYJczqSWJUZHN2pPujiwnr1E9W%2BdLGKmG6%2FqPrGYAy2SBRWkkJVWsTO3OQ%3D%3D"}],"group":"cf-nel","max_age":604800}'
Server:
- cloudflare
allow:
- GET, POST, DELETE, OPTIONS
alt-svc:
- h3=":443"; ma=86400
cross-origin-opener-policy:
- same-origin
referrer-policy:
- same-origin
vary:
- Accept, origin, Cookie
x-content-type-options:
- nosniff
x-frame-options:
- DENY
status:
code: 200
message: OK
- request:
body: '{"batch": [{"properties": {"python_version": "3.12.4 (v3.12.4:8e8a4baf65,
Jun 6 2024, 17:33:18) [Clang 13.0.0 (clang-1300.0.29.30)]", "os": "darwin",
"os_version": "Darwin Kernel Version 23.4.0: Wed Feb 21 21:44:54 PST 2024; root:xnu-10063.101.15~2/RELEASE_ARM64_T6030",
"os_release": "23.4.0", "processor": "arm", "machine": "arm64", "function":
"mem0.client.main.MemoryClient", "$lib": "posthog-python", "$lib_version": "3.5.0",
"$geoip_disable": true}, "timestamp": "2024-08-17T06:00:11.526640+00:00", "context":
{}, "distinct_id": "fd411bd3-99a2-42d6-acd7-9fca8ad09580", "event": "client.init"}],
"historical_migration": false, "sentAt": "2024-08-17T06:00:11.701621+00:00",
"api_key": "phc_hgJkUVJFYtmaJqrvf6CYN67TIQ8yhXAkWzUn9AMU4yX"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '740'
Content-Type:
- application/json
User-Agent:
- posthog-python/3.5.0
method: POST
uri: https://us.i.posthog.com/batch/
response:
body:
string: '{"status":"Ok"}'
headers:
Connection:
- keep-alive
Content-Length:
- '15'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:12 GMT
access-control-allow-credentials:
- 'true'
server:
- envoy
vary:
- origin, access-control-request-method, access-control-request-headers
x-envoy-upstream-service-time:
- '69'
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "Remember the following insights
from Agent run: test value with provider"}], "metadata": {"task": "test_task_provider",
"agent": "test_agent_provider"}, "app_id": "Researcher"}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '219'
content-type:
- application/json
host:
- api.mem0.ai
user-agent:
- python-httpx/0.27.0
method: POST
uri: https://api.mem0.ai/v1/memories/
response:
body:
string: '{"message":"ok"}'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8b477140282547b9-BOM
Connection:
- keep-alive
Content-Length:
- '16'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:13 GMT
NEL:
- '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'
Report-To:
- '{"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=FRjJKSk3YxVj03wA7S05H8ts35KnWfqS3wb6Rfy4kVZ4BgXfw7nJbm92wI6vEv5fWcAcHVnOlkJDggs11B01BMuB2k3a9RqlBi0dJNiMuk%2Bgm5xE%2BODMPWJctYNRwQMjNVbteUpS%2Fad8YA%3D%3D"}],"group":"cf-nel","max_age":604800}'
Server:
- cloudflare
allow:
- GET, POST, DELETE, OPTIONS
alt-svc:
- h3=":443"; ma=86400
cross-origin-opener-policy:
- same-origin
referrer-policy:
- same-origin
vary:
- Accept, origin, Cookie
x-content-type-options:
- nosniff
x-frame-options:
- DENY
status:
code: 200
message: OK
- request:
body: '{"query": "test value with provider", "limit": 3, "app_id": "Researcher"}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '73'
content-type:
- application/json
host:
- api.mem0.ai
user-agent:
- python-httpx/0.27.0
method: POST
uri: https://api.mem0.ai/v1/memories/search/
response:
body:
string: '[]'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8b47714d083b47b9-BOM
Connection:
- keep-alive
Content-Length:
- '2'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:14 GMT
NEL:
- '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'
Report-To:
- '{"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=2DRWL1cdKdMvnE8vx1fPUGeTITOgSGl3N5g84PS6w30GRqpfz79BtSx6REhpnOiFV8kM6KGqln0iCZ5yoHc2jBVVJXhPJhQ5t0uerD9JFnkphjISrJOU1MJjZWneT9PlNABddxvVNCmluA%3D%3D"}],"group":"cf-nel","max_age":604800}'
Server:
- cloudflare
allow:
- POST, OPTIONS
alt-svc:
- h3=":443"; ma=86400
cross-origin-opener-policy:
- same-origin
referrer-policy:
- same-origin
vary:
- Accept, origin, Cookie
x-content-type-options:
- nosniff
x-frame-options:
- DENY
status:
code: 200
message: OK
- request:
body: '{"batch": [{"properties": {"python_version": "3.12.4 (v3.12.4:8e8a4baf65,
Jun 6 2024, 17:33:18) [Clang 13.0.0 (clang-1300.0.29.30)]", "os": "darwin",
"os_version": "Darwin Kernel Version 23.4.0: Wed Feb 21 21:44:54 PST 2024; root:xnu-10063.101.15~2/RELEASE_ARM64_T6030",
"os_release": "23.4.0", "processor": "arm", "machine": "arm64", "function":
"mem0.client.main.MemoryClient", "$lib": "posthog-python", "$lib_version": "3.5.0",
"$geoip_disable": true}, "timestamp": "2024-08-17T06:00:13.593952+00:00", "context":
{}, "distinct_id": "fd411bd3-99a2-42d6-acd7-9fca8ad09580", "event": "client.add"}],
"historical_migration": false, "sentAt": "2024-08-17T06:00:13.858277+00:00",
"api_key": "phc_hgJkUVJFYtmaJqrvf6CYN67TIQ8yhXAkWzUn9AMU4yX"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '739'
Content-Type:
- application/json
User-Agent:
- posthog-python/3.5.0
method: POST
uri: https://us.i.posthog.com/batch/
response:
body:
string: '{"status":"Ok"}'
headers:
Connection:
- keep-alive
Content-Length:
- '15'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:13 GMT
access-control-allow-credentials:
- 'true'
server:
- envoy
vary:
- origin, access-control-request-method, access-control-request-headers
x-envoy-upstream-service-time:
- '33'
status:
code: 200
message: OK
version: 1

View File

@@ -16,13 +16,6 @@ delegate_tool = tools[0]
ask_tool = tools[1]
@pytest.fixture(scope="module")
def vcr_config(request) -> dict:
return {
"cassette_library_dir": "tests/tools/agent_tools/cassettes",
}
@pytest.mark.vcr(filter_headers=["authorization"])
def test_delegate_work():
result = delegate_tool.run(

View File

@@ -21,13 +21,6 @@ from crewai.utilities.converter import (
from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser
@pytest.fixture(scope="module")
def vcr_config(request) -> dict:
return {
"cassette_library_dir": "tests/utilities/cassettes",
}
# Sample Pydantic models for testing
class EmailResponse(BaseModel):
previous_message_content: str

View File

@@ -50,13 +50,10 @@ from crewai.utilities.events.tool_usage_events import (
ToolUsageErrorEvent,
)
@pytest.fixture(scope="module")
def vcr_config(request) -> dict:
return {
"cassette_library_dir": "tests/utilities/cassettes",
}
# Skip streaming tests when running in CI/CD environments
skip_streaming_in_ci = pytest.mark.skipif(
os.getenv("CI") is not None, reason="Skipping streaming tests in CI/CD environments"
)
base_agent = Agent(
role="base_agent",
@@ -628,6 +625,7 @@ def test_llm_emits_call_failed_event():
assert received_events[0].error == error_message
@skip_streaming_in_ci
@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_emits_stream_chunk_events():
"""Test that LLM emits stream chunk events when streaming is enabled."""
@@ -652,6 +650,7 @@ def test_llm_emits_stream_chunk_events():
assert "".join(received_chunks) == response
@skip_streaming_in_ci
@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_no_stream_chunks_when_streaming_disabled():
"""Test that LLM doesn't emit stream chunk events when streaming is disabled."""

View File

@@ -1,72 +0,0 @@
import unittest
from unittest.mock import MagicMock, patch
import pytest
from pydantic import BaseModel
from crewai.utilities.internal_instructor import InternalInstructor
class TestOutput(BaseModel):
value: str
class TestInternalInstructor(unittest.TestCase):
@patch("instructor.from_litellm")
def test_tools_mode_for_regular_models(self, mock_from_litellm):
mock_llm = MagicMock()
mock_llm.model = "gpt-4o"
mock_instructor = MagicMock()
mock_from_litellm.return_value = mock_instructor
instructor = InternalInstructor(
content="Test content",
model=TestOutput,
llm=mock_llm
)
import instructor
mock_from_litellm.assert_called_once_with(
unittest.mock.ANY,
mode=instructor.Mode.TOOLS
)
@patch("instructor.from_litellm")
def test_parallel_tools_mode_for_custom_openai(self, mock_from_litellm):
mock_llm = MagicMock()
mock_llm.model = "custom_openai/some-model"
mock_instructor = MagicMock()
mock_from_litellm.return_value = mock_instructor
instructor = InternalInstructor(
content="Test content",
model=TestOutput,
llm=mock_llm
)
import instructor
mock_from_litellm.assert_called_once_with(
unittest.mock.ANY,
mode=instructor.Mode.PARALLEL_TOOLS
)
@patch("instructor.from_litellm")
def test_handling_list_response_in_to_pydantic(self, mock_from_litellm):
mock_llm = MagicMock()
mock_llm.model = "custom_openai/some-model"
mock_instructor = MagicMock()
mock_chat = MagicMock()
mock_instructor.chat.completions.create.return_value = [
TestOutput(value="test value")
]
mock_from_litellm.return_value = mock_instructor
instructor = InternalInstructor(
content="Test content",
model=TestOutput,
llm=mock_llm
)
result = instructor.to_pydantic()
assert isinstance(result, TestOutput)
assert result.value == "test value"
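The deleted file above asserted a simple mode-selection rule for `InternalInstructor`: models under the `custom_openai/` prefix use `instructor.Mode.PARALLEL_TOOLS`, everything else uses `instructor.Mode.TOOLS`, and a single-element list response from `to_pydantic` is unwrapped to its element. A sketch of the prefix rule alone (the helper name is invented for illustration):

```python Code
# Illustrative only: pick_instructor_mode is not a crewAI API; it restates
# the behavior the removed tests asserted.
import instructor

def pick_instructor_mode(model_name: str) -> instructor.Mode:
    if model_name.startswith("custom_openai/"):
        return instructor.Mode.PARALLEL_TOOLS
    return instructor.Mode.TOOLS

assert pick_instructor_mode("custom_openai/some-model") is instructor.Mode.PARALLEL_TOOLS
assert pick_instructor_mode("gpt-4o") is instructor.Mode.TOOLS
```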

uv.lock generated (193 lines changed)
View File

@@ -1,32 +1,42 @@
version = 1
revision = 1
requires-python = ">=3.10, <3.13"
resolution-markers = [
"python_full_version < '3.11' and platform_python_implementation == 'PyPy' and sys_platform == 'darwin'",
"python_full_version < '3.11' and platform_python_implementation != 'PyPy' and sys_platform == 'darwin'",
"python_version < '0'",
"python_full_version < '3.11' and platform_machine == 'aarch64' and platform_python_implementation == 'PyPy' and sys_platform == 'linux'",
"python_full_version < '3.11' and platform_machine == 'aarch64' and platform_python_implementation != 'PyPy' and sys_platform == 'linux'",
"(python_full_version < '3.11' and platform_machine != 'aarch64' and platform_python_implementation == 'PyPy' and sys_platform == 'linux') or (python_full_version < '3.11' and platform_python_implementation == 'PyPy' and sys_platform != 'darwin' and sys_platform != 'linux')",
"(python_full_version < '3.11' and platform_machine != 'aarch64' and platform_python_implementation != 'PyPy' and sys_platform == 'linux') or (python_full_version < '3.11' and platform_python_implementation != 'PyPy' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version == '3.11.*' and platform_python_implementation == 'PyPy' and sys_platform == 'darwin'",
"python_full_version == '3.11.*' and platform_python_implementation != 'PyPy' and sys_platform == 'darwin'",
"python_full_version == '3.11.*' and platform_machine == 'aarch64' and platform_python_implementation == 'PyPy' and sys_platform == 'linux'",
"python_full_version == '3.11.*' and platform_machine == 'aarch64' and platform_python_implementation != 'PyPy' and sys_platform == 'linux'",
"(python_full_version == '3.11.*' and platform_machine != 'aarch64' and platform_python_implementation == 'PyPy' and sys_platform == 'linux') or (python_full_version == '3.11.*' and platform_python_implementation == 'PyPy' and sys_platform != 'darwin' and sys_platform != 'linux')",
"(python_full_version == '3.11.*' and platform_machine != 'aarch64' and platform_python_implementation != 'PyPy' and sys_platform == 'linux') or (python_full_version == '3.11.*' and platform_python_implementation != 'PyPy' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_python_implementation == 'PyPy' and sys_platform == 'darwin'",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_python_implementation != 'PyPy' and sys_platform == 'darwin'",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine == 'aarch64' and platform_python_implementation == 'PyPy' and sys_platform == 'linux'",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine == 'aarch64' and platform_python_implementation != 'PyPy' and sys_platform == 'linux'",
"(python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine != 'aarch64' and platform_python_implementation == 'PyPy' and sys_platform == 'linux') or (python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_python_implementation == 'PyPy' and sys_platform != 'darwin' and sys_platform != 'linux')",
"(python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine != 'aarch64' and platform_python_implementation != 'PyPy' and sys_platform == 'linux') or (python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_python_implementation != 'PyPy' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12.4' and platform_python_implementation == 'PyPy' and sys_platform == 'darwin'",
"python_full_version >= '3.12.4' and platform_python_implementation != 'PyPy' and sys_platform == 'darwin'",
"python_full_version >= '3.12.4' and platform_machine == 'aarch64' and platform_python_implementation == 'PyPy' and sys_platform == 'linux'",
"python_full_version >= '3.12.4' and platform_machine == 'aarch64' and platform_python_implementation != 'PyPy' and sys_platform == 'linux'",
"(python_full_version >= '3.12.4' and platform_machine != 'aarch64' and platform_python_implementation == 'PyPy' and sys_platform == 'linux') or (python_full_version >= '3.12.4' and platform_python_implementation == 'PyPy' and sys_platform != 'darwin' and sys_platform != 'linux')",
"(python_full_version >= '3.12.4' and platform_machine != 'aarch64' and platform_python_implementation != 'PyPy' and sys_platform == 'linux') or (python_full_version >= '3.12.4' and platform_python_implementation != 'PyPy' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version < '3.11' and platform_system == 'Darwin' and sys_platform == 'darwin'",
"python_full_version < '3.11' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform == 'darwin'",
"(python_full_version < '3.11' and platform_machine != 'aarch64' and platform_system != 'Darwin' and sys_platform == 'darwin') or (python_full_version < '3.11' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform == 'darwin')",
"python_full_version < '3.11' and platform_machine == 'aarch64' and platform_system == 'Darwin' and sys_platform == 'linux'",
"python_full_version < '3.11' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform == 'linux'",
"python_full_version < '3.11' and platform_machine == 'aarch64' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform == 'linux'",
"(python_full_version < '3.11' and platform_machine != 'aarch64' and platform_system == 'Darwin' and sys_platform != 'darwin') or (python_full_version < '3.11' and platform_system == 'Darwin' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version < '3.11' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform != 'darwin' and sys_platform != 'linux'",
"(python_full_version < '3.11' and platform_machine != 'aarch64' and platform_system != 'Darwin' and sys_platform != 'darwin') or (python_full_version < '3.11' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version == '3.11.*' and platform_system == 'Darwin' and sys_platform == 'darwin'",
"python_full_version == '3.11.*' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform == 'darwin'",
"(python_full_version == '3.11.*' and platform_machine != 'aarch64' and platform_system != 'Darwin' and sys_platform == 'darwin') or (python_full_version == '3.11.*' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform == 'darwin')",
"python_full_version == '3.11.*' and platform_machine == 'aarch64' and platform_system == 'Darwin' and sys_platform == 'linux'",
"python_full_version == '3.11.*' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform == 'linux'",
"python_full_version == '3.11.*' and platform_machine == 'aarch64' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform == 'linux'",
"(python_full_version == '3.11.*' and platform_machine != 'aarch64' and platform_system == 'Darwin' and sys_platform != 'darwin') or (python_full_version == '3.11.*' and platform_system == 'Darwin' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version == '3.11.*' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform != 'darwin' and sys_platform != 'linux'",
"(python_full_version == '3.11.*' and platform_machine != 'aarch64' and platform_system != 'Darwin' and sys_platform != 'darwin') or (python_full_version == '3.11.*' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_system == 'Darwin' and sys_platform == 'darwin'",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform == 'darwin'",
"(python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine != 'aarch64' and platform_system != 'Darwin' and sys_platform == 'darwin') or (python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform == 'darwin')",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine == 'aarch64' and platform_system == 'Darwin' and sys_platform == 'linux'",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform == 'linux'",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine == 'aarch64' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform == 'linux'",
"(python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine != 'aarch64' and platform_system == 'Darwin' and sys_platform != 'darwin') or (python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_system == 'Darwin' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform != 'darwin' and sys_platform != 'linux'",
"(python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine != 'aarch64' and platform_system != 'Darwin' and sys_platform != 'darwin') or (python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12.4' and platform_system == 'Darwin' and sys_platform == 'darwin'",
"python_full_version >= '3.12.4' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform == 'darwin'",
"(python_full_version >= '3.12.4' and platform_machine != 'aarch64' and platform_system != 'Darwin' and sys_platform == 'darwin') or (python_full_version >= '3.12.4' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform == 'darwin')",
"python_full_version >= '3.12.4' and platform_machine == 'aarch64' and platform_system == 'Darwin' and sys_platform == 'linux'",
"python_full_version >= '3.12.4' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform == 'linux'",
"python_full_version >= '3.12.4' and platform_machine == 'aarch64' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform == 'linux'",
"(python_full_version >= '3.12.4' and platform_machine != 'aarch64' and platform_system == 'Darwin' and sys_platform != 'darwin') or (python_full_version >= '3.12.4' and platform_system == 'Darwin' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12.4' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform != 'darwin' and sys_platform != 'linux'",
"(python_full_version >= '3.12.4' and platform_machine != 'aarch64' and platform_system != 'Darwin' and sys_platform != 'darwin') or (python_full_version >= '3.12.4' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'darwin' and sys_platform != 'linux')",
]
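The resolution-markers block above, and the substitutions running through the rest of this lock file (e.g. `sys_platform == 'win32'` in place of `platform_system == 'Windows'`), are PEP 508 environment markers. A small sketch, using the `packaging` library, of how either spelling evaluates against the running interpreter:

```python Code
# Sketch using packaging's Marker API; on any given machine the two
# spellings normally agree, which is why the lock file can swap them.
from packaging.markers import Marker

for expr in ("sys_platform == 'win32'", "platform_system == 'Windows'"):
    print(expr, "->", Marker(expr).evaluate())
```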
[[package]]
@@ -334,7 +344,7 @@ name = "build"
version = "1.2.2.post1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "(os_name == 'nt' and platform_machine != 'aarch64' and sys_platform == 'linux') or (os_name == 'nt' and sys_platform != 'darwin' and sys_platform != 'linux')" },
{ name = "colorama", marker = "os_name == 'nt'" },
{ name = "importlib-metadata", marker = "python_full_version < '3.10.2'" },
{ name = "packaging" },
{ name = "pyproject-hooks" },
@@ -569,7 +579,7 @@ name = "click"
version = "8.1.8"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
{ name = "colorama", marker = "platform_system == 'Windows'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/b9/2e/0090cbf739cee7d23781ad4b89a9894a41538e4fcf4c31dcdd705b78eb8b/click-8.1.8.tar.gz", hash = "sha256:ed53c9d8990d83c2a27deae68e4ee337473f6330c040a31d4225c9574d16096a", size = 226593 }
wheels = [
@@ -693,8 +703,8 @@ dev = [
{ name = "pre-commit" },
{ name = "pytest" },
{ name = "pytest-asyncio" },
{ name = "pytest-recording" },
{ name = "pytest-subprocess" },
{ name = "pytest-vcr" },
{ name = "python-dotenv" },
{ name = "ruff" },
]
@@ -735,7 +745,6 @@ requires-dist = [
{ name = "tomli-w", specifier = ">=1.1.0" },
{ name = "uv", specifier = ">=0.4.25" },
]
provides-extras = ["tools", "embeddings", "agentops", "fastembed", "pdfplumber", "pandas", "openpyxl", "mem0", "docling", "aisuite"]
[package.metadata.requires-dev]
dev = [
@@ -750,8 +759,8 @@ dev = [
{ name = "pre-commit", specifier = ">=3.6.0" },
{ name = "pytest", specifier = ">=8.0.0" },
{ name = "pytest-asyncio", specifier = ">=0.23.7" },
{ name = "pytest-recording", specifier = ">=0.13.2" },
{ name = "pytest-subprocess", specifier = ">=1.5.2" },
{ name = "pytest-vcr", specifier = ">=1.0.2" },
{ name = "python-dotenv", specifier = ">=1.0.0" },
{ name = "ruff", specifier = ">=0.8.2" },
]
@@ -2509,7 +2518,7 @@ version = "1.6.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "click" },
{ name = "colorama", marker = "sys_platform == 'win32'" },
{ name = "colorama", marker = "platform_system == 'Windows'" },
{ name = "ghp-import" },
{ name = "jinja2" },
{ name = "markdown" },
@@ -2690,7 +2699,7 @@ version = "2.10.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pygments" },
{ name = "pywin32", marker = "sys_platform == 'win32'" },
{ name = "pywin32", marker = "platform_system == 'Windows'" },
{ name = "tqdm" },
]
sdist = { url = "https://files.pythonhosted.org/packages/3a/93/80ac75c20ce54c785648b4ed363c88f148bf22637e10c9863db4fbe73e74/mpire-2.10.2.tar.gz", hash = "sha256:f66a321e93fadff34585a4bfa05e95bd946cf714b442f51c529038eb45773d97", size = 271270 }
@@ -2937,7 +2946,7 @@ name = "nvidia-cudnn-cu12"
version = "9.1.0.70"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "nvidia-cublas-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
{ name = "nvidia-cublas-cu12", marker = "(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'linux')" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/9f/fd/713452cd72343f682b1c7b9321e23829f00b842ceaedcda96e742ea0b0b3/nvidia_cudnn_cu12-9.1.0.70-py3-none-manylinux2014_x86_64.whl", hash = "sha256:165764f44ef8c61fcdfdfdbe769d687e06374059fbb388b6c89ecb0e28793a6f", size = 664752741 },
@@ -2964,9 +2973,9 @@ name = "nvidia-cusolver-cu12"
version = "11.4.5.107"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "nvidia-cublas-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
{ name = "nvidia-cusparse-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
{ name = "nvidia-nvjitlink-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
{ name = "nvidia-cublas-cu12", marker = "(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'linux')" },
{ name = "nvidia-cusparse-cu12", marker = "(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'linux')" },
{ name = "nvidia-nvjitlink-cu12", marker = "(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'linux')" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/bc/1d/8de1e5c67099015c834315e333911273a8c6aaba78923dd1d1e25fc5f217/nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl", hash = "sha256:8a7ec542f0412294b15072fa7dab71d31334014a69f953004ea7a118206fe0dd", size = 124161928 },
@@ -2977,7 +2986,7 @@ name = "nvidia-cusparse-cu12"
version = "12.1.0.106"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "nvidia-nvjitlink-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
{ name = "nvidia-nvjitlink-cu12", marker = "(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'linux')" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/65/5b/cfaeebf25cd9fdec14338ccb16f6b2c4c7fa9163aefcf057d86b9cc248bb/nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl", hash = "sha256:f3b50f42cf363f86ab21f720998517a659a48131e8d538dc02f8768237bd884c", size = 195958278 },
@@ -2988,6 +2997,7 @@ name = "nvidia-nccl-cu12"
version = "2.20.5"
source = { registry = "https://pypi.org/simple" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c1/bb/d09dda47c881f9ff504afd6f9ca4f502ded6d8fc2f572cacc5e39da91c28/nvidia_nccl_cu12-2.20.5-py3-none-manylinux2014_aarch64.whl", hash = "sha256:1fc150d5c3250b170b29410ba682384b14581db722b2531b0d8d33c595f33d01", size = 176238458 },
{ url = "https://files.pythonhosted.org/packages/4b/2a/0a131f572aa09f741c30ccd45a8e56316e8be8dfc7bc19bf0ab7cfef7b19/nvidia_nccl_cu12-2.20.5-py3-none-manylinux2014_x86_64.whl", hash = "sha256:057f6bf9685f75215d0c53bf3ac4a10b3e6578351de307abad9e18a99182af56", size = 176249402 },
]
@@ -2997,6 +3007,7 @@ version = "12.6.85"
source = { registry = "https://pypi.org/simple" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/9d/d7/c5383e47c7e9bf1c99d5bd2a8c935af2b6d705ad831a7ec5c97db4d82f4f/nvidia_nvjitlink_cu12-12.6.85-py3-none-manylinux2010_x86_64.manylinux_2_12_x86_64.whl", hash = "sha256:eedc36df9e88b682efe4309aa16b5b4e78c2407eac59e8c10a6a47535164369a", size = 19744971 },
{ url = "https://files.pythonhosted.org/packages/31/db/dc71113d441f208cdfe7ae10d4983884e13f464a6252450693365e166dcf/nvidia_nvjitlink_cu12-12.6.85-py3-none-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:cf4eaa7d4b6b543ffd69d6abfb11efdeb2db48270d94dfd3a452c24150829e41", size = 19270338 },
]
[[package]]
@@ -3514,7 +3525,7 @@ name = "portalocker"
version = "2.10.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pywin32", marker = "sys_platform == 'win32'" },
{ name = "pywin32", marker = "platform_system == 'Windows'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ed/d3/c6c64067759e87af98cc668c1cc75171347d0f1577fab7ca3749134e3cd4/portalocker-2.10.1.tar.gz", hash = "sha256:ef1bf844e878ab08aee7e40184156e1151f228f103aa5c6bd0724cc330960f8f", size = 40891 }
wheels = [
@@ -4064,20 +4075,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/96/31/6607dab48616902f76885dfcf62c08d929796fc3b2d2318faf9fd54dbed9/pytest_asyncio-0.24.0-py3-none-any.whl", hash = "sha256:a811296ed596b69bf0b6f3dc40f83bcaf341b155a269052d82efa2b25ac7037b", size = 18024 },
]
[[package]]
name = "pytest-recording"
version = "0.13.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pytest" },
{ name = "vcrpy", version = "5.1.0", source = { registry = "https://pypi.org/simple" }, marker = "platform_python_implementation == 'PyPy'" },
{ name = "vcrpy", version = "7.0.0", source = { registry = "https://pypi.org/simple" }, marker = "platform_python_implementation != 'PyPy'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/fe/2a/ea6b8036ae01979eae02d8ad5a7da14dec90d9176b613e49fb8d134c78fc/pytest_recording-0.13.2.tar.gz", hash = "sha256:000c3babbb466681457fd65b723427c1779a0c6c17d9e381c3142a701e124877", size = 25270 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/72/52/8e67a969e9fad3fa5ec4eab9f2a7348ff04692065c7deda21d76e9112703/pytest_recording-0.13.2-py3-none-any.whl", hash = "sha256:3820fe5743d1ac46e807989e11d073cb776a60bdc544cf43ebca454051b22d13", size = 12783 },
]
[[package]]
name = "pytest-subprocess"
version = "1.5.2"
@@ -4090,6 +4087,19 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/10/77/a80e8f9126b95ffd5ad4d04bd14005c68dcbf0d88f53b2b14893f6cc7232/pytest_subprocess-1.5.2-py3-none-any.whl", hash = "sha256:23ac7732aa8bd45f1757265b1316eb72a7f55b41fb21e2ca22e149ba3629fa46", size = 20886 },
]
[[package]]
name = "pytest-vcr"
version = "1.0.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pytest" },
{ name = "vcrpy" },
]
sdist = { url = "https://files.pythonhosted.org/packages/1a/60/104c619483c1a42775d3f8b27293f1ecfc0728014874d065e68cb9702d49/pytest-vcr-1.0.2.tar.gz", hash = "sha256:23ee51b75abbcc43d926272773aae4f39f93aceb75ed56852d0bf618f92e1896", size = 3810 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/9d/d3/ff520d11e6ee400602711d1ece8168dcfc5b6d8146fb7db4244a6ad6a9c3/pytest_vcr-1.0.2-py2.py3-none-any.whl", hash = "sha256:2f316e0539399bea0296e8b8401145c62b6f85e9066af7e57b6151481b0d6d9c", size = 4137 },
]
[[package]]
name = "python-bidi"
version = "0.6.3"
@@ -5022,19 +5032,19 @@ dependencies = [
{ name = "fsspec" },
{ name = "jinja2" },
{ name = "networkx" },
{ name = "nvidia-cublas-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cuda-cupti-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cuda-nvrtc-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cuda-runtime-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cudnn-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cufft-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-curand-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cusolver-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cusparse-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-nccl-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-nvtx-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cublas-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-cuda-cupti-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-cuda-nvrtc-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-cuda-runtime-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-cudnn-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-cufft-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-curand-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-cusolver-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-cusparse-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-nccl-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-nvtx-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "sympy" },
{ name = "triton", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "triton", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "typing-extensions" },
]
wheels = [
@@ -5081,7 +5091,7 @@ name = "tqdm"
version = "4.66.5"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
{ name = "colorama", marker = "platform_system == 'Windows'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/58/83/6ba9844a41128c62e810fddddd72473201f3eacde02046066142a2d96cc5/tqdm-4.66.5.tar.gz", hash = "sha256:e1020aef2e5096702d8a025ac7d16b1577279c9d63f8375b63083e9a5f0fcbad", size = 169504 }
wheels = [
@@ -5123,7 +5133,7 @@ name = "triton"
version = "3.0.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "filelock", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
{ name = "filelock", marker = "(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'linux')" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/45/27/14cc3101409b9b4b9241d2ba7deaa93535a217a211c86c4cc7151fb12181/triton-3.0.0-1-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:e1efef76935b2febc365bfadf74bcb65a6f959a9872e5bddf44cc9e0adce1e1a", size = 209376304 },
@@ -5278,59 +5288,16 @@ wheels = [
name = "vcrpy"
version = "5.1.0"
source = { registry = "https://pypi.org/simple" }
resolution-markers = [
"python_full_version < '3.11' and platform_python_implementation == 'PyPy' and sys_platform == 'darwin'",
"python_full_version < '3.11' and platform_machine == 'aarch64' and platform_python_implementation == 'PyPy' and sys_platform == 'linux'",
"(python_full_version < '3.11' and platform_machine != 'aarch64' and platform_python_implementation == 'PyPy' and sys_platform == 'linux') or (python_full_version < '3.11' and platform_python_implementation == 'PyPy' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version == '3.11.*' and platform_python_implementation == 'PyPy' and sys_platform == 'darwin'",
"python_full_version == '3.11.*' and platform_machine == 'aarch64' and platform_python_implementation == 'PyPy' and sys_platform == 'linux'",
"(python_full_version == '3.11.*' and platform_machine != 'aarch64' and platform_python_implementation == 'PyPy' and sys_platform == 'linux') or (python_full_version == '3.11.*' and platform_python_implementation == 'PyPy' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_python_implementation == 'PyPy' and sys_platform == 'darwin'",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine == 'aarch64' and platform_python_implementation == 'PyPy' and sys_platform == 'linux'",
"(python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine != 'aarch64' and platform_python_implementation == 'PyPy' and sys_platform == 'linux') or (python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_python_implementation == 'PyPy' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12.4' and platform_python_implementation == 'PyPy' and sys_platform == 'darwin'",
"python_full_version >= '3.12.4' and platform_machine == 'aarch64' and platform_python_implementation == 'PyPy' and sys_platform == 'linux'",
"(python_full_version >= '3.12.4' and platform_machine != 'aarch64' and platform_python_implementation == 'PyPy' and sys_platform == 'linux') or (python_full_version >= '3.12.4' and platform_python_implementation == 'PyPy' and sys_platform != 'darwin' and sys_platform != 'linux')",
]
dependencies = [
{ name = "pyyaml", marker = "platform_python_implementation == 'PyPy'" },
{ name = "wrapt", marker = "platform_python_implementation == 'PyPy'" },
{ name = "yarl", marker = "platform_python_implementation == 'PyPy'" },
{ name = "pyyaml" },
{ name = "wrapt" },
{ name = "yarl" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a5/ea/a166a3cce4ac5958ba9bbd9768acdb1ba38ae17ff7986da09fa5b9dbc633/vcrpy-5.1.0.tar.gz", hash = "sha256:bbf1532f2618a04f11bce2a99af3a9647a32c880957293ff91e0a5f187b6b3d2", size = 84576 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/2a/5b/3f70bcb279ad30026cc4f1df0a0491a0205a24dddd88301f396c485de9e7/vcrpy-5.1.0-py2.py3-none-any.whl", hash = "sha256:605e7b7a63dcd940db1df3ab2697ca7faf0e835c0852882142bafb19649d599e", size = 41969 },
]
[[package]]
name = "vcrpy"
version = "7.0.0"
source = { registry = "https://pypi.org/simple" }
resolution-markers = [
"python_full_version < '3.11' and platform_python_implementation != 'PyPy' and sys_platform == 'darwin'",
"python_full_version < '3.11' and platform_machine == 'aarch64' and platform_python_implementation != 'PyPy' and sys_platform == 'linux'",
"(python_full_version < '3.11' and platform_machine != 'aarch64' and platform_python_implementation != 'PyPy' and sys_platform == 'linux') or (python_full_version < '3.11' and platform_python_implementation != 'PyPy' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version == '3.11.*' and platform_python_implementation != 'PyPy' and sys_platform == 'darwin'",
"python_full_version == '3.11.*' and platform_machine == 'aarch64' and platform_python_implementation != 'PyPy' and sys_platform == 'linux'",
"(python_full_version == '3.11.*' and platform_machine != 'aarch64' and platform_python_implementation != 'PyPy' and sys_platform == 'linux') or (python_full_version == '3.11.*' and platform_python_implementation != 'PyPy' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_python_implementation != 'PyPy' and sys_platform == 'darwin'",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine == 'aarch64' and platform_python_implementation != 'PyPy' and sys_platform == 'linux'",
"(python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine != 'aarch64' and platform_python_implementation != 'PyPy' and sys_platform == 'linux') or (python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_python_implementation != 'PyPy' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12.4' and platform_python_implementation != 'PyPy' and sys_platform == 'darwin'",
"python_full_version >= '3.12.4' and platform_machine == 'aarch64' and platform_python_implementation != 'PyPy' and sys_platform == 'linux'",
"(python_full_version >= '3.12.4' and platform_machine != 'aarch64' and platform_python_implementation != 'PyPy' and sys_platform == 'linux') or (python_full_version >= '3.12.4' and platform_python_implementation != 'PyPy' and sys_platform != 'darwin' and sys_platform != 'linux')",
]
dependencies = [
{ name = "pyyaml", marker = "platform_python_implementation != 'PyPy'" },
{ name = "urllib3", marker = "platform_python_implementation != 'PyPy'" },
{ name = "wrapt", marker = "platform_python_implementation != 'PyPy'" },
{ name = "yarl", marker = "platform_python_implementation != 'PyPy'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/25/d3/856e06184d4572aada1dd559ddec3bedc46df1f2edc5ab2c91121a2cccdb/vcrpy-7.0.0.tar.gz", hash = "sha256:176391ad0425edde1680c5b20738ea3dc7fb942520a48d2993448050986b3a50", size = 85502 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/13/5d/1f15b252890c968d42b348d1e9b0aa12d5bf3e776704178ec37cceccdb63/vcrpy-7.0.0-py2.py3-none-any.whl", hash = "sha256:55791e26c18daa363435054d8b35bd41a4ac441b6676167635d1b37a71dbe124", size = 42321 },
]
[[package]]
name = "virtualenv"
version = "20.27.0"