Compare commits

..

5 Commits

| Author | SHA1 | Message | Date |
| :--- | :--- | :--- | :--- |
| Devin AI | 1eb09f7e96 | Add type ignore comment to fix type checking issue (Co-Authored-By: Joe Moura <joao@crewai.com>) | 2025-04-09 03:11:21 +00:00 |
| Devin AI | 474a584312 | Fix type checking issue in LLM class (Co-Authored-By: Joe Moura <joao@crewai.com>) | 2025-04-09 03:09:51 +00:00 |
| Devin AI | 1a53894cd9 | Fix import order with ruff auto-fix (Co-Authored-By: Joe Moura <joao@crewai.com>) | 2025-04-09 03:08:37 +00:00 |
| Devin AI | 3a85b442fb | Fix import order in test file (Co-Authored-By: Joe Moura <joao@crewai.com>) | 2025-04-09 03:07:36 +00:00 |
| Devin AI | cdd5ebfb1a | Fix #2541: Add support for multimodal content format in qwen2.5-vl model (Co-Authored-By: Joe Moura <joao@crewai.com>) | 2025-04-09 03:06:24 +00:00 |
75 changed files with 2863 additions and 4000 deletions

View File

@@ -4,6 +4,4 @@ repos:
hooks:
- id: ruff
args: ["--fix"]
exclude: ^src/crewai/cli/templates/
- id: ruff-format
exclude: ^src/crewai/cli/templates/

View File

@@ -23,7 +23,8 @@ The `Crew` class has been enriched with several attributes to support advanced f
| **Process Flow** (`process`) | Defines execution logic (e.g., sequential, hierarchical) for task distribution. |
| **Verbose Logging** (`verbose`) | Provides detailed logging for monitoring and debugging. Accepts integer and boolean values to control verbosity level. |
| **Rate Limiting** (`max_rpm`) | Limits requests per minute to optimize resource usage. Setting guidelines depend on task complexity and load. |
| **Internationalization / Customization** (`prompt_file`) | Supports prompt customization for global usability. [Example of file](https://github.com/joaomdmoura/crewAI/blob/main/src/crewai/translations/en.json) |
| **Internationalization / Customization** (`language`, `prompt_file`) | Supports prompt customization for global usability. [Example of file](https://github.com/joaomdmoura/crewAI/blob/main/src/crewai/translations/en.json) |
| **Execution and Output Handling** (`full_output`) | Controls output granularity, distinguishing between full and final outputs. |
| **Callback and Telemetry** (`step_callback`, `task_callback`) | Enables step-wise and task-level execution monitoring and telemetry for performance analytics. |
| **Crew Sharing** (`share_crew`) | Allows sharing crew data with CrewAI for model improvement. Privacy implications and benefits should be considered. |
| **Usage Metrics** (`usage_metrics`) | Logs all LLM usage metrics during task execution for performance insights. |
@@ -48,4 +49,4 @@ Consider a crew with a researcher agent tasked with data gathering and a writer
## Conclusion
The integration of advanced attributes and functionalities into the CrewAI framework significantly enriches the agent collaboration ecosystem. These enhancements not only simplify interactions but also offer unprecedented flexibility and control, paving the way for sophisticated AI-driven solutions capable of tackling complex tasks through intelligent collaboration and delegation.
The integration of advanced attributes and functionalities into the CrewAI framework significantly enriches the agent collaboration ecosystem. These enhancements not only simplify interactions but also offer unprecedented flexibility and control, paving the way for sophisticated AI-driven solutions capable of tackling complex tasks through intelligent collaboration and delegation.
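To ground the table above, here is a minimal sketch of a crew wired with several of these attributes; the agent, task, and prompt-file path are illustrative assumptions, not part of this diff:

```python
from crewai import Agent, Crew, Process, Task

# Placeholder agent and task purely for illustration.
researcher = Agent(
    role="Researcher",
    goal="Gather market data",
    backstory="A seasoned analyst.",
)
gather = Task(
    description="Collect recent market data",
    expected_output="A short report",
    agent=researcher,
)

crew = Crew(
    agents=[researcher],
    tasks=[gather],
    process=Process.sequential,     # process flow
    verbose=True,                   # verbose logging
    max_rpm=10,                     # rate limiting
    prompt_file="prompts/en.json",  # prompt customization (assumed path)
    full_output=True,               # return all task outputs, not just the final one
)
result = crew.kickoff()
```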

View File

@@ -20,10 +20,13 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Function Calling LLM** _(optional)_ | `function_calling_llm` | If passed, the crew will use this LLM to do function calling for tools for all agents in the crew. Each agent can have its own LLM, which overrides the crew's LLM for function calling. |
| **Config** _(optional)_ | `config` | Optional configuration settings for the crew, in `Json` or `Dict[str, Any]` format. |
| **Max RPM** _(optional)_ | `max_rpm` | Maximum requests per minute the crew adheres to during execution. Defaults to `None`. |
| **Language** _(optional)_ | `language` | Language used for the crew, defaults to English. |
| **Language File** _(optional)_ | `language_file` | Path to the language file to be used for the crew. |
| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). |
| **Memory Config** _(optional)_ | `memory_config` | Configuration for the memory provider to be used by the crew. |
| **Cache** _(optional)_ | `cache` | Specifies whether to use a cache for storing the results of tools' execution. Defaults to `True`. |
| **Embedder** _(optional)_ | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. Default is `{"provider": "openai"}`. |
| **Full Output** _(optional)_ | `full_output` | Whether the crew should return the full output with all tasks outputs or just the final output. Defaults to `False`. |
| **Step Callback** _(optional)_ | `step_callback` | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback`. |
| **Task Callback** _(optional)_ | `task_callback` | A function that is called after the completion of each task. Useful for monitoring or additional operations post-task execution. |
| **Share Crew** _(optional)_ | `share_crew` | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
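As a companion to this table, a hedged sketch of passing the memory, cache, embedder, and callback parameters together; the callback bodies and the agent/task definitions are illustrative assumptions:

```python
from crewai import Agent, Crew, Task

def log_step(step):
    # Illustrative step-level hook, invoked after each agent step.
    print(f"step: {step}")

def log_task(output):
    # Illustrative task-level hook, invoked after each task completes.
    print(f"task done: {output.raw[:80]}")

writer = Agent(role="Writer", goal="Draft summaries", backstory="A concise writer.")
draft = Task(description="Draft a summary", expected_output="One paragraph", agent=writer)

crew = Crew(
    agents=[writer],
    tasks=[draft],
    memory=True,                      # short-term, long-term, and entity memory
    cache=True,                       # cache tool execution results (the default)
    embedder={"provider": "openai"},  # default embedder configuration from the table
    step_callback=log_step,
    task_callback=log_task,
)
```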

View File

@@ -545,20 +545,16 @@ The `third_method` and `fourth_method` listen to the output of the `second_metho
When you run this Flow, the output will change based on the random boolean value generated by the `start_method`.
## Adding Agents to Flows
## Adding LiteAgent to Flows
Agents can be seamlessly integrated into your flows, providing a lightweight alternative to full Crews when you need simpler, focused task execution. Here's an example of how to use an Agent within a flow to perform market research:
LiteAgents can be seamlessly integrated into your flows, providing a lightweight alternative to full Crews when you need simpler, focused task execution. Here's an example of how to use a LiteAgent within a flow to perform market research:
```python
import asyncio
from typing import Any, Dict, List
from crewai_tools import SerperDevTool
from typing import List, cast
from crewai_tools.tools.website_search.website_search_tool import WebsiteSearchTool
from pydantic import BaseModel, Field
from crewai.agent import Agent
from crewai.flow.flow import Flow, listen, start
from crewai.lite_agent import LiteAgent
# Define a structured output format
class MarketAnalysis(BaseModel):
@@ -566,30 +562,28 @@ class MarketAnalysis(BaseModel):
market_size: str = Field(description="Estimated market size")
competitors: List[str] = Field(description="Major competitors in the space")
# Define flow state
class MarketResearchState(BaseModel):
product: str = ""
analysis: MarketAnalysis | None = None
# Create a flow class
class MarketResearchFlow(Flow[MarketResearchState]):
@start()
def initialize_research(self) -> Dict[str, Any]:
def initialize_research(self):
print(f"Starting market research for {self.state.product}")
return {"product": self.state.product}
@listen(initialize_research)
async def analyze_market(self) -> Dict[str, Any]:
# Create an Agent for market research
analyst = Agent(
def analyze_market(self):
# Create a LiteAgent for market research
analyst = LiteAgent(
role="Market Research Analyst",
goal=f"Analyze the market for {self.state.product}",
backstory="You are an experienced market analyst with expertise in "
"identifying market trends and opportunities.",
tools=[SerperDevTool()],
llm="gpt-4o",
tools=[WebsiteSearchTool()],
verbose=True,
response_format=MarketAnalysis,
)
# Define the research query
@@ -598,65 +592,49 @@ class MarketResearchFlow(Flow[MarketResearchState]):
1. Key market trends
2. Market size
3. Major competitors
Format your response according to the specified structure.
"""
# Execute the analysis with structured output format
result = await analyst.kickoff_async(query, response_format=MarketAnalysis)
if result.pydantic:
print("result", result.pydantic)
else:
print("result", result)
# Return the analysis to update the state
return {"analysis": result.pydantic}
# Execute the analysis
result = analyst.kickoff(query)
self.state.analysis = cast(MarketAnalysis, result.pydantic)
return result.pydantic
@listen(analyze_market)
def present_results(self, analysis) -> None:
def present_results(self):
analysis = self.state.analysis
if analysis is None:
print("No analysis results available")
return
print("\nMarket Analysis Results")
print("=====================")
if isinstance(analysis, dict):
# If we got a dict with 'analysis' key, extract the actual analysis object
market_analysis = analysis.get("analysis")
else:
market_analysis = analysis
print("\nKey Market Trends:")
for trend in analysis.key_trends:
print(f"- {trend}")
if market_analysis and isinstance(market_analysis, MarketAnalysis):
print("\nKey Market Trends:")
for trend in market_analysis.key_trends:
print(f"- {trend}")
print(f"\nMarket Size: {market_analysis.market_size}")
print("\nMajor Competitors:")
for competitor in market_analysis.competitors:
print(f"- {competitor}")
else:
print("No structured analysis data available.")
print("Raw analysis:", analysis)
print(f"\nMarket Size: {analysis.market_size}")
print("\nMajor Competitors:")
for competitor in analysis.competitors:
print(f"- {competitor}")
# Usage example
async def run_flow():
flow = MarketResearchFlow()
result = await flow.kickoff_async(inputs={"product": "AI-powered chatbots"})
return result
# Run the flow
if __name__ == "__main__":
asyncio.run(run_flow())
flow = MarketResearchFlow()
result = flow.kickoff(inputs={"product": "AI-powered chatbots"})
```
This example demonstrates several key features of using Agents in flows:
This example demonstrates several key features of using LiteAgents in flows:
1. **Structured Output**: Using Pydantic models to define the expected output format (`MarketAnalysis`) ensures type safety and structured data throughout the flow.
2. **State Management**: The flow state (`MarketResearchState`) maintains context between steps and stores both inputs and outputs.
3. **Tool Integration**: Agents can use tools (like `WebsiteSearchTool`) to enhance their capabilities.
3. **Tool Integration**: LiteAgents can use tools (like `WebsiteSearchTool`) to enhance their capabilities.
If you want to learn more about LiteAgents, check out the [LiteAgent](/concepts/lite-agent) page.
## Adding Crews to Flows

View File

@@ -0,0 +1,242 @@
---
title: LiteAgent
description: A lightweight, single-purpose agent for simple autonomous tasks within the CrewAI framework.
icon: feather
---
## Overview
A `LiteAgent` is a streamlined version of CrewAI's Agent, designed for simpler, standalone tasks that don't require the full complexity of a crew-based workflow. It's perfect for quick automations, single-purpose tasks, or when you need a lightweight solution.
<Tip>
Think of a LiteAgent as a specialized worker that excels at individual tasks.
While regular Agents are team players in a crew, LiteAgents are solo
performers optimized for specific operations.
</Tip>
## LiteAgent Attributes
| Attribute | Parameter | Type | Description |
| :------------------------------- | :---------------- | :--------------------- | :-------------------------------------------------------------- |
| **Role** | `role` | `str` | Defines the agent's function and expertise. |
| **Goal** | `goal` | `str` | The specific objective that guides the agent's actions. |
| **Backstory** | `backstory` | `str` | Provides context and personality to the agent. |
| **LLM** _(optional)_ | `llm` | `Union[str, LLM, Any]` | Language model powering the agent. Defaults to "gpt-4". |
| **Tools** _(optional)_ | `tools` | `List[BaseTool]` | Capabilities available to the agent. Defaults to an empty list. |
| **Verbose** _(optional)_ | `verbose` | `bool` | Enable detailed execution logs. Default is False. |
| **Response Format** _(optional)_ | `response_format` | `Type[BaseModel]` | Pydantic model for structured output. Optional. |
## Creating a LiteAgent
Here's a simple example of creating and using a standalone LiteAgent:
```python
from typing import List, cast
from crewai_tools import SerperDevTool
from pydantic import BaseModel, Field
from crewai.lite_agent import LiteAgent
# Define a structured output format
class MovieReview(BaseModel):
title: str = Field(description="The title of the movie")
rating: float = Field(description="Rating out of 10")
pros: List[str] = Field(description="List of positive aspects")
cons: List[str] = Field(description="List of negative aspects")
# Create a LiteAgent
critic = LiteAgent(
role="Movie Critic",
goal="Provide insightful movie reviews",
backstory="You are an experienced film critic known for balanced, thoughtful reviews.",
tools=[SerperDevTool()],
verbose=True,
response_format=MovieReview,
)
# Use the agent
query = """
Review the movie 'Inception'. Include:
1. Your rating out of 10
2. Key positive aspects
3. Areas that could be improved
"""
result = critic.kickoff(query)
# Access the structured output
review = cast(MovieReview, result.pydantic)
print(f"\nMovie Review: {review.title}")
print(f"Rating: {review.rating}/10")
print("\nPros:")
for pro in review.pros:
print(f"- {pro}")
print("\nCons:")
for con in review.cons:
print(f"- {con}")
```
This example demonstrates the core features of a LiteAgent:
- Structured output using Pydantic models
- Tool integration with SerperDevTool
- Simple execution with `kickoff()`
- Easy access to both raw and structured results
## Using LiteAgent in a Flow
For more complex scenarios, you can integrate LiteAgents into a Flow. Here's an example of a market research flow:
```python
from typing import List
from pydantic import BaseModel, Field
from crewai.flow.flow import Flow, start, listen
from crewai.lite_agent import LiteAgent
from crewai_tools import WebsiteSearchTool
# Define a structured output format
class MarketAnalysis(BaseModel):
key_trends: List[str] = Field(description="List of identified market trends")
market_size: str = Field(description="Estimated market size")
competitors: List[str] = Field(description="Major competitors in the space")
# Define flow state
class MarketResearchState(BaseModel):
product: str = ""
analysis: MarketAnalysis | None = None
# Create a flow class
class MarketResearchFlow(Flow[MarketResearchState]):
@start()
def initialize_research(self):
print(f"Starting market research for {self.state.product}")
@listen(initialize_research)
async def analyze_market(self):
# Create a LiteAgent for market research
analyst = LiteAgent(
role="Market Research Analyst",
goal=f"Analyze the market for {self.state.product}",
backstory="You are an experienced market analyst with expertise in "
"identifying market trends and opportunities.",
tools=[WebsiteSearchTool()],
verbose=True,
response_format=MarketAnalysis
)
# Define the research query
query = f"""
Research the market for {self.state.product}. Include:
1. Key market trends
2. Market size
3. Major competitors
Format your response according to the specified structure.
"""
# Execute the analysis
result = await analyst.kickoff_async(query)
self.state.analysis = result.pydantic
return result.pydantic
@listen(analyze_market)
def present_results(self):
analysis = self.state.analysis
print("\nMarket Analysis Results")
print("=====================")
print("\nKey Market Trends:")
for trend in analysis.key_trends:
print(f"- {trend}")
print(f"\nMarket Size: {analysis.market_size}")
print("\nMajor Competitors:")
for competitor in analysis.competitors:
print(f"- {competitor}")
# Usage example
import asyncio
async def run_flow():
flow = MarketResearchFlow()
result = await flow.kickoff_async(inputs={"product": "AI-powered chatbots"})
return result
# Run the flow
if __name__ == "__main__":
asyncio.run(run_flow())
```
## Key Features
### 1. Simplified Setup
Unlike regular Agents, LiteAgents are designed for quick setup and standalone operation. They don't require crew configuration or task management.
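As a minimal sketch of that difference (the role, goal, and prompt are illustrative), a LiteAgent runs directly from a prompt with no crew or task objects:

```python
from crewai.lite_agent import LiteAgent

summarizer = LiteAgent(
    role="Summarizer",
    goal="Summarize text concisely",
    backstory="You write crisp one-paragraph summaries.",
)

# kickoff() takes the prompt directly; no Crew or Task setup is involved.
result = summarizer.kickoff("Summarize the benefits of unit testing in two sentences.")
print(result.raw)
```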
### 2. Structured Output
LiteAgents support Pydantic models for response formatting, making it easy to get structured, type-safe data from your agent's operations.
### 3. Tool Integration
Just like regular Agents, LiteAgents can use tools to enhance their capabilities:
```python
from crewai_tools import SerperDevTool
from crewai.lite_agent import LiteAgent
agent = LiteAgent(
role="Research Assistant",
goal="Find and analyze information",
backstory="You research topics and analyze what you find.",
tools=[SerperDevTool()],
verbose=True
)
```
### 4. Async Support
LiteAgents support asynchronous execution through the `kickoff_async` method, making them suitable for non-blocking operations in your application.
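A minimal sketch of async usage, reusing the `critic` agent defined earlier on this page:

```python
import asyncio

async def main():
    # kickoff_async mirrors kickoff() but does not block the event loop.
    result = await critic.kickoff_async("Give a one-paragraph review of 'Inception'.")
    print(result.raw)

asyncio.run(main())
```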
## Response Formatting
LiteAgents support structured output through Pydantic models using the `response_format` parameter. This feature ensures type safety and consistent output structure, making it easier to work with agent responses in your application.
### Basic Usage
```python
from pydantic import BaseModel, Field
from crewai.lite_agent import LiteAgent
class SearchResult(BaseModel):
title: str = Field(description="The title of the found content")
summary: str = Field(description="A brief summary of the content")
relevance_score: float = Field(description="Relevance score from 0 to 1")
agent = LiteAgent(
role="Search Specialist",
goal="Find and summarize relevant information",
response_format=SearchResult
)
result = await agent.kickoff_async("Find information about quantum computing")
print(f"Title: {result.pydantic.title}")
print(f"Summary: {result.pydantic.summary}")
print(f"Relevance: {result.pydantic.relevance_score}")
```
### Handling Responses
When using `response_format`, the agent's response will be available in two forms:
1. **Raw Response**: Access the unstructured string response
```python
result = await agent.kickoff_async("Analyze the market")
print(result.raw) # Original LLM response
```
2. **Structured Response**: Access the parsed Pydantic model
```python
print(result.pydantic) # Parsed response as Pydantic model
print(result.pydantic.model_dump())  # Convert to dictionary
```

View File

@@ -66,6 +66,7 @@
"concepts/tasks",
"concepts/crews",
"concepts/flows",
"concepts/lite-agent",
"concepts/knowledge",
"concepts/llms",
"concepts/processes",
@@ -76,7 +77,9 @@
"concepts/testing",
"concepts/cli",
"concepts/tools",
"concepts/event-listener"
"concepts/event-listener",
"concepts/langchain-tools",
"concepts/llamaindex-tools"
]
},
{
@@ -95,9 +98,7 @@
"how-to/kickoff-async",
"how-to/kickoff-for-each",
"how-to/replay-tasks-from-latest-crew-kickoff",
"how-to/conditional-tasks",
"how-to/langchain-tools",
"how-to/llamaindex-tools"
"how-to/conditional-tasks"
]
},
{
@@ -196,11 +197,6 @@
"anchor": "Community",
"href": "https://community.crewai.com",
"icon": "discourse"
},
{
"anchor": "Tutorials",
"href": "https://www.youtube.com/@crewAIInc",
"icon": "youtube"
}
]
}
@@ -235,4 +231,4 @@
"reddit": "https://www.reddit.com/r/crewAIInc/"
}
}
}
}

View File

@@ -4,21 +4,6 @@ description: Get started with CrewAI - Install, configure, and build your first
icon: wrench
---
## Video Tutorial
Watch this video tutorial for a step-by-step demonstration of the installation process:
<iframe
width="100%"
height="400"
src="https://www.youtube.com/embed/-kSOTtYzgEw"
title="CrewAI Installation Guide"
frameborder="0"
style={{ borderRadius: '10px' }}
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen
></iframe>
## Text Tutorial
<Note>
**Python Version Requirements**

View File

@@ -22,16 +22,7 @@ usage of tools, API calls, responses, any data processed by the agents, or secre
When the `share_crew` feature is enabled, detailed data including task descriptions, agents' backstories or goals, and other specific attributes are collected
to provide deeper insights. This expanded data collection may include personal information if users have incorporated it into their crews or tasks.
Users should carefully consider the content of their crews and tasks before enabling `share_crew`.
Users can disable telemetry by setting the environment variable `CREWAI_DISABLE_TELEMETRY` to `true` or by setting `OTEL_SDK_DISABLED` to `true` (note that the latter disables all OpenTelemetry instrumentation globally).
### Examples:
```python
# Disable CrewAI telemetry only
os.environ['CREWAI_DISABLE_TELEMETRY'] = 'true'
# Disable all OpenTelemetry (including CrewAI)
os.environ['OTEL_SDK_DISABLED'] = 'true'
```
Users can disable telemetry by setting the environment variable `OTEL_SDK_DISABLED` to `true`.
### Data Explanation:
| Defaulted | Data | Reason and Specifics |
@@ -64,4 +55,4 @@ This enables a deeper insight into usage patterns.
<Warning>
If you enable `share_crew`, the collected data may include personal information if it has been incorporated into crew configurations, task descriptions, or outputs.
Users should carefully review their data and ensure compliance with GDPR and other applicable privacy regulations before enabling this feature.
</Warning>
</Warning>

View File

@@ -1,6 +1,6 @@
[project]
name = "crewai"
version = "0.114.0"
version = "0.108.0"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md"
requires-python = ">=3.10,<3.13"
@@ -45,7 +45,7 @@ Documentation = "https://docs.crewai.com"
Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = ["crewai-tools~=0.40.1"]
tools = ["crewai-tools~=0.38.0"]
embeddings = [
"tiktoken~=0.7.0"
]

View File

@@ -2,14 +2,12 @@ import warnings
from crewai.agent import Agent
from crewai.crew import Crew
from crewai.crews.crew_output import CrewOutput
from crewai.flow.flow import Flow
from crewai.knowledge.knowledge import Knowledge
from crewai.llm import LLM
from crewai.llms.base_llm import BaseLLM
from crewai.process import Process
from crewai.task import Task
from crewai.tasks.task_output import TaskOutput
warnings.filterwarnings(
"ignore",
@@ -17,16 +15,14 @@ warnings.filterwarnings(
category=UserWarning,
module="pydantic.main",
)
__version__ = "0.114.0"
__version__ = "0.108.0"
__all__ = [
"Agent",
"Crew",
"CrewOutput",
"Process",
"Task",
"LLM",
"BaseLLM",
"Flow",
"Knowledge",
"TaskOutput",
]

View File

@@ -1,6 +1,7 @@
import re
import shutil
import subprocess
from typing import Any, Dict, List, Literal, Optional, Sequence, Type, Union
from typing import Any, Dict, List, Literal, Optional, Sequence, Union
from pydantic import Field, InstanceOf, PrivateAttr, model_validator
@@ -10,7 +11,6 @@ from crewai.agents.crew_agent_executor import CrewAgentExecutor
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.knowledge.utils.knowledge_utils import extract_knowledge_context
from crewai.lite_agent import LiteAgent, LiteAgentOutput
from crewai.llm import BaseLLM
from crewai.memory.contextual.contextual_memory import ContextualMemory
from crewai.security import Fingerprint
@@ -367,12 +367,8 @@ class Agent(BaseAgent):
"info", "Coding tools not available. Install crewai_tools. "
)
def get_output_converter(
self, agent, llm, text, model, instructions
): # Add agent parameter
return Converter(
agent=agent, llm=llm, text=text, model=model, instructions=instructions
)
def get_output_converter(self, llm, text, model, instructions):
return Converter(llm=llm, text=text, model=model, instructions=instructions)
def _training_handler(self, task_prompt: str) -> str:
"""Handle training data for the agent task prompt to improve output on Training."""
@@ -453,74 +449,3 @@ class Agent(BaseAgent):
def set_fingerprint(self, fingerprint: Fingerprint):
self.security_config.fingerprint = fingerprint
def kickoff(
self,
messages: Union[str, List[Dict[str, str]]],
response_format: Optional[Type[Any]] = None,
) -> LiteAgentOutput:
"""
Execute the agent with the given messages using a LiteAgent instance.
This method is useful when you want to use the Agent configuration but
with the simpler and more direct execution flow of LiteAgent.
Args:
messages: Either a string query or a list of message dictionaries.
If a string is provided, it will be converted to a user message.
If a list is provided, each dict should have 'role' and 'content' keys.
response_format: Optional Pydantic model for structured output.
Returns:
LiteAgentOutput: The result of the agent execution.
"""
lite_agent = LiteAgent(
role=self.role,
goal=self.goal,
backstory=self.backstory,
llm=self.llm,
tools=self.tools or [],
max_iterations=self.max_iter,
max_execution_time=self.max_execution_time,
respect_context_window=self.respect_context_window,
verbose=self.verbose,
response_format=response_format,
i18n=self.i18n,
)
return lite_agent.kickoff(messages)
async def kickoff_async(
self,
messages: Union[str, List[Dict[str, str]]],
response_format: Optional[Type[Any]] = None,
) -> LiteAgentOutput:
"""
Execute the agent asynchronously with the given messages using a LiteAgent instance.
This is the async version of the kickoff method.
Args:
messages: Either a string query or a list of message dictionaries.
If a string is provided, it will be converted to a user message.
If a list is provided, each dict should have 'role' and 'content' keys.
response_format: Optional Pydantic model for structured output.
Returns:
LiteAgentOutput: The result of the agent execution.
"""
lite_agent = LiteAgent(
role=self.role,
goal=self.goal,
backstory=self.backstory,
llm=self.llm,
tools=self.tools or [],
max_iterations=self.max_iter,
max_execution_time=self.max_execution_time,
respect_context_window=self.respect_context_window,
verbose=self.verbose,
response_format=response_format,
i18n=self.i18n,
)
return await lite_agent.kickoff_async(messages)

View File

@@ -60,7 +60,7 @@ def test():
"current_year": str(datetime.now().year)
}
try:
{{crew_name}}().crew().test(n_iterations=int(sys.argv[1]), eval_llm=sys.argv[2], inputs=inputs)
{{crew_name}}().crew().test(n_iterations=int(sys.argv[1]), openai_model_name=sys.argv[2], inputs=inputs)
except Exception as e:
raise Exception(f"An error occurred while testing the crew: {e}")

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.114.0,<1.0.0"
"crewai[tools]>=0.108.0,<1.0.0"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.114.0,<1.0.0",
"crewai[tools]>=0.108.0,<1.0.0",
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.13"
dependencies = [
"crewai[tools]>=0.114.0"
"crewai[tools]>=0.108.0"
]
[tool.crewai]

View File

@@ -153,12 +153,8 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
login_response_json = login_response.json()
settings = Settings()
settings.tool_repository_username = login_response_json["credential"][
"username"
]
settings.tool_repository_password = login_response_json["credential"][
"password"
]
settings.tool_repository_username = login_response_json["credential"]["username"]
settings.tool_repository_password = login_response_json["credential"]["password"]
settings.dump()
console.print(
@@ -183,7 +179,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
capture_output=False,
env=self._build_env_with_credentials(repository_handle),
text=True,
check=True,
check=True
)
if add_package_result.stderr:
@@ -208,11 +204,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
settings = Settings()
env = os.environ.copy()
env[f"UV_INDEX_{repository_handle}_USERNAME"] = str(
settings.tool_repository_username or ""
)
env[f"UV_INDEX_{repository_handle}_PASSWORD"] = str(
settings.tool_repository_password or ""
)
env[f"UV_INDEX_{repository_handle}_USERNAME"] = str(settings.tool_repository_username or "")
env[f"UV_INDEX_{repository_handle}_PASSWORD"] = str(settings.tool_repository_password or "")
return env

View File

@@ -297,7 +297,9 @@ class Crew(BaseModel):
)
self._external_memory = (
# External memory doesn't support a default value since it was designed to be managed entirely externally
self.external_memory.set_crew(self) if self.external_memory else None
self.external_memory.set_crew(self)
if self.external_memory
else None
)
if (
self.memory_config

View File

@@ -34,13 +34,13 @@ class FlowPlot:
ValueError
If flow object is invalid or missing required attributes.
"""
if not hasattr(flow, "_methods"):
if not hasattr(flow, '_methods'):
raise ValueError("Invalid flow object: missing '_methods' attribute")
if not hasattr(flow, "_listeners"):
if not hasattr(flow, '_listeners'):
raise ValueError("Invalid flow object: missing '_listeners' attribute")
if not hasattr(flow, "_start_methods"):
if not hasattr(flow, '_start_methods'):
raise ValueError("Invalid flow object: missing '_start_methods' attribute")
self.flow = flow
self.colors = COLORS
self.node_styles = NODE_STYLES
@@ -65,7 +65,7 @@ class FlowPlot:
"""
if not filename or not isinstance(filename, str):
raise ValueError("Filename must be a non-empty string")
try:
# Initialize network
net = Network(
@@ -121,9 +121,7 @@ class FlowPlot:
network_html = net.generate_html()
final_html_content = self._generate_final_html(network_html)
except Exception as e:
raise RuntimeError(
f"Failed to generate network visualization: {str(e)}"
)
raise RuntimeError(f"Failed to generate network visualization: {str(e)}")
# Save the final HTML content to the file
try:
@@ -131,9 +129,7 @@ class FlowPlot:
f.write(final_html_content)
print(f"Plot saved as {filename}.html")
except IOError as e:
raise IOError(
f"Failed to save flow visualization to {filename}.html: {str(e)}"
)
raise IOError(f"Failed to save flow visualization to {filename}.html: {str(e)}")
except (ValueError, RuntimeError, IOError) as e:
raise e
@@ -169,9 +165,7 @@ class FlowPlot:
try:
# Extract just the body content from the generated HTML
current_dir = os.path.dirname(__file__)
template_path = safe_path_join(
"assets", "crewai_flow_visual_template.html", root=current_dir
)
template_path = safe_path_join("assets", "crewai_flow_visual_template.html", root=current_dir)
logo_path = safe_path_join("assets", "crewai_logo.svg", root=current_dir)
if not os.path.exists(template_path):
@@ -203,7 +197,6 @@ class FlowPlot:
lib_folder = safe_path_join("lib", root=os.getcwd())
if os.path.exists(lib_folder) and os.path.isdir(lib_folder):
import shutil
shutil.rmtree(lib_folder)
except ValueError as e:
print(f"Error validating lib folder path: {e}")

View File

@@ -1,3 +1,4 @@
def get_legend_items(colors):
return [
{"label": "Start Method", "color": colors["start"]},

View File

@@ -43,18 +43,18 @@ def safe_path_join(*parts: str, root: Union[str, Path, None] = None) -> str:
# Establish root directory
root_path = Path(root).resolve() if root else Path.cwd()
# Join and resolve the full path
full_path = Path(root_path, *clean_parts).resolve()
# Check if the resolved path is within root
if not str(full_path).startswith(str(root_path)):
raise ValueError(
f"Invalid path: Potential directory traversal. Path must be within {root_path}"
)
return str(full_path)
except Exception as e:
if isinstance(e, ValueError):
raise
@@ -84,17 +84,17 @@ def validate_path_exists(path: Union[str, Path], file_type: str = "file") -> str
"""
try:
path_obj = Path(path).resolve()
if not path_obj.exists():
raise ValueError(f"Path does not exist: {path}")
if file_type == "file" and not path_obj.is_file():
raise ValueError(f"Path is not a file: {path}")
elif file_type == "directory" and not path_obj.is_dir():
raise ValueError(f"Path is not a directory: {path}")
return str(path_obj)
except Exception as e:
if isinstance(e, ValueError):
raise
@@ -126,9 +126,9 @@ def list_files(directory: Union[str, Path], pattern: str = "*") -> List[str]:
dir_path = Path(directory).resolve()
if not dir_path.is_dir():
raise ValueError(f"Not a directory: {directory}")
return [str(p) for p in dir_path.glob(pattern) if p.is_file()]
except Exception as e:
if isinstance(e, ValueError):
raise

View File

@@ -8,45 +8,45 @@ from pydantic import BaseModel
class FlowPersistence(abc.ABC):
"""Abstract base class for flow state persistence.
This class defines the interface that all persistence implementations must follow.
It supports both structured (Pydantic BaseModel) and unstructured (dict) states.
"""
@abc.abstractmethod
def init_db(self) -> None:
"""Initialize the persistence backend.
This method should handle any necessary setup, such as:
- Creating tables
- Establishing connections
- Setting up indexes
"""
pass
@abc.abstractmethod
def save_state(
self,
flow_uuid: str,
method_name: str,
state_data: Union[Dict[str, Any], BaseModel],
state_data: Union[Dict[str, Any], BaseModel]
) -> None:
"""Persist the flow state after method completion.
Args:
flow_uuid: Unique identifier for the flow instance
method_name: Name of the method that just completed
state_data: Current state data (either dict or Pydantic model)
"""
pass
@abc.abstractmethod
def load_state(self, flow_uuid: str) -> Optional[Dict[str, Any]]:
"""Load the most recent state for a given flow UUID.
Args:
flow_uuid: Unique identifier for the flow instance
Returns:
The most recent state as a dictionary, or None if no state exists
"""

View File

@@ -48,7 +48,7 @@ LOG_MESSAGES = {
"save_state": "Saving flow state to memory for ID: {}",
"save_error": "Failed to persist state for method {}: {}",
"state_missing": "Flow instance has no state",
"id_missing": "Flow state must have an 'id' field for persistence",
"id_missing": "Flow state must have an 'id' field for persistence"
}
@@ -58,13 +58,7 @@ class PersistenceDecorator:
_printer = Printer() # Class-level printer instance
@classmethod
def persist_state(
cls,
flow_instance: Any,
method_name: str,
persistence_instance: FlowPersistence,
verbose: bool = False,
) -> None:
def persist_state(cls, flow_instance: Any, method_name: str, persistence_instance: FlowPersistence, verbose: bool = False) -> None:
"""Persist flow state with proper error handling and logging.
This method handles the persistence of flow state data, including proper
@@ -82,24 +76,22 @@ class PersistenceDecorator:
AttributeError: If flow instance lacks required state attributes
"""
try:
state = getattr(flow_instance, "state", None)
state = getattr(flow_instance, 'state', None)
if state is None:
raise ValueError("Flow instance has no state")
flow_uuid: Optional[str] = None
if isinstance(state, dict):
flow_uuid = state.get("id")
flow_uuid = state.get('id')
elif isinstance(state, BaseModel):
flow_uuid = getattr(state, "id", None)
flow_uuid = getattr(state, 'id', None)
if not flow_uuid:
raise ValueError("Flow state must have an 'id' field for persistence")
# Log state saving only if verbose is True
if verbose:
cls._printer.print(
LOG_MESSAGES["save_state"].format(flow_uuid), color="cyan"
)
cls._printer.print(LOG_MESSAGES["save_state"].format(flow_uuid), color="cyan")
logger.info(LOG_MESSAGES["save_state"].format(flow_uuid))
try:
@@ -152,10 +144,7 @@ def persist(persistence: Optional[FlowPersistence] = None, verbose: bool = False
def begin(self):
pass
"""
def decorator(
target: Union[Type, Callable[..., T]],
) -> Union[Type, Callable[..., T]]:
def decorator(target: Union[Type, Callable[..., T]]) -> Union[Type, Callable[..., T]]:
"""Decorator that handles both class and method decoration."""
actual_persistence = persistence or SQLiteFlowPersistence()
@@ -165,8 +154,8 @@ def persist(persistence: Optional[FlowPersistence] = None, verbose: bool = False
@functools.wraps(original_init)
def new_init(self: Any, *args: Any, **kwargs: Any) -> None:
if "persistence" not in kwargs:
kwargs["persistence"] = actual_persistence
if 'persistence' not in kwargs:
kwargs['persistence'] = actual_persistence
original_init(self, *args, **kwargs)
setattr(target, "__init__", new_init)
@@ -176,11 +165,11 @@ def persist(persistence: Optional[FlowPersistence] = None, verbose: bool = False
for name, method in target.__dict__.items():
if callable(method) and (
hasattr(method, "__is_start_method__")
or hasattr(method, "__trigger_methods__")
or hasattr(method, "__condition_type__")
or hasattr(method, "__is_flow_method__")
or hasattr(method, "__is_router__")
hasattr(method, "__is_start_method__") or
hasattr(method, "__trigger_methods__") or
hasattr(method, "__condition_type__") or
hasattr(method, "__is_flow_method__") or
hasattr(method, "__is_router__")
):
original_methods[name] = method
@@ -188,30 +177,18 @@ def persist(persistence: Optional[FlowPersistence] = None, verbose: bool = False
for name, method in original_methods.items():
if asyncio.iscoroutinefunction(method):
# Create a closure to capture the current name and method
def create_async_wrapper(
method_name: str, original_method: Callable
):
def create_async_wrapper(method_name: str, original_method: Callable):
@functools.wraps(original_method)
async def method_wrapper(
self: Any, *args: Any, **kwargs: Any
) -> Any:
async def method_wrapper(self: Any, *args: Any, **kwargs: Any) -> Any:
result = await original_method(self, *args, **kwargs)
PersistenceDecorator.persist_state(
self, method_name, actual_persistence, verbose
)
PersistenceDecorator.persist_state(self, method_name, actual_persistence, verbose)
return result
return method_wrapper
wrapped = create_async_wrapper(name, method)
# Preserve all original decorators and attributes
for attr in [
"__is_start_method__",
"__trigger_methods__",
"__condition_type__",
"__is_router__",
]:
for attr in ["__is_start_method__", "__trigger_methods__", "__condition_type__", "__is_router__"]:
if hasattr(method, attr):
setattr(wrapped, attr, getattr(method, attr))
setattr(wrapped, "__is_flow_method__", True)
@@ -220,28 +197,18 @@ def persist(persistence: Optional[FlowPersistence] = None, verbose: bool = False
setattr(target, name, wrapped)
else:
# Create a closure to capture the current name and method
def create_sync_wrapper(
method_name: str, original_method: Callable
):
def create_sync_wrapper(method_name: str, original_method: Callable):
@functools.wraps(original_method)
def method_wrapper(self: Any, *args: Any, **kwargs: Any) -> Any:
result = original_method(self, *args, **kwargs)
PersistenceDecorator.persist_state(
self, method_name, actual_persistence, verbose
)
PersistenceDecorator.persist_state(self, method_name, actual_persistence, verbose)
return result
return method_wrapper
wrapped = create_sync_wrapper(name, method)
# Preserve all original decorators and attributes
for attr in [
"__is_start_method__",
"__trigger_methods__",
"__condition_type__",
"__is_router__",
]:
for attr in ["__is_start_method__", "__trigger_methods__", "__condition_type__", "__is_router__"]:
if hasattr(method, attr):
setattr(wrapped, attr, getattr(method, attr))
setattr(wrapped, "__is_flow_method__", True)
@@ -256,49 +223,29 @@ def persist(persistence: Optional[FlowPersistence] = None, verbose: bool = False
setattr(method, "__is_flow_method__", True)
if asyncio.iscoroutinefunction(method):
@functools.wraps(method)
async def method_async_wrapper(
flow_instance: Any, *args: Any, **kwargs: Any
) -> T:
async def method_async_wrapper(flow_instance: Any, *args: Any, **kwargs: Any) -> T:
method_coro = method(flow_instance, *args, **kwargs)
if asyncio.iscoroutine(method_coro):
result = await method_coro
else:
result = method_coro
PersistenceDecorator.persist_state(
flow_instance, method.__name__, actual_persistence, verbose
)
PersistenceDecorator.persist_state(flow_instance, method.__name__, actual_persistence, verbose)
return result
for attr in [
"__is_start_method__",
"__trigger_methods__",
"__condition_type__",
"__is_router__",
]:
for attr in ["__is_start_method__", "__trigger_methods__", "__condition_type__", "__is_router__"]:
if hasattr(method, attr):
setattr(method_async_wrapper, attr, getattr(method, attr))
setattr(method_async_wrapper, "__is_flow_method__", True)
return cast(Callable[..., T], method_async_wrapper)
else:
@functools.wraps(method)
def method_sync_wrapper(
flow_instance: Any, *args: Any, **kwargs: Any
) -> T:
def method_sync_wrapper(flow_instance: Any, *args: Any, **kwargs: Any) -> T:
result = method(flow_instance, *args, **kwargs)
PersistenceDecorator.persist_state(
flow_instance, method.__name__, actual_persistence, verbose
)
PersistenceDecorator.persist_state(flow_instance, method.__name__, actual_persistence, verbose)
return result
for attr in [
"__is_start_method__",
"__trigger_methods__",
"__condition_type__",
"__is_router__",
]:
for attr in ["__is_start_method__", "__trigger_methods__", "__condition_type__", "__is_router__"]:
if hasattr(method, attr):
setattr(method_sync_wrapper, attr, getattr(method, attr))
setattr(method_sync_wrapper, "__is_flow_method__", True)

View File

@@ -56,7 +56,6 @@ def method_calls_crew(method: Any) -> bool:
class CrewCallVisitor(ast.NodeVisitor):
"""AST visitor to detect .crew() method calls."""
def __init__(self):
self.found = False
@@ -75,7 +74,7 @@ def add_nodes_to_network(
net: Any,
flow: Any,
node_positions: Dict[str, Tuple[float, float]],
node_styles: Dict[str, Dict[str, Any]],
node_styles: Dict[str, Dict[str, Any]]
) -> None:
"""
Add nodes to the network visualization with appropriate styling.
@@ -99,7 +98,6 @@ def add_nodes_to_network(
- Crew methods
- Regular methods
"""
def human_friendly_label(method_name):
return method_name.replace("_", " ").title()
@@ -142,7 +140,7 @@ def compute_positions(
flow: Any,
node_levels: Dict[str, int],
y_spacing: float = 150,
x_spacing: float = 150,
x_spacing: float = 150
) -> Dict[str, Tuple[float, float]]:
"""
Compute the (x, y) positions for each node in the flow graph.
@@ -183,7 +181,7 @@ def add_edges(
net: Any,
flow: Any,
node_positions: Dict[str, Tuple[float, float]],
colors: Dict[str, str],
colors: Dict[str, str]
) -> None:
edge_smooth: Dict[str, Union[str, float]] = {"type": "continuous"} # Default value
"""

View File

@@ -1,4 +1,6 @@
import asyncio
import json
import re
import uuid
from datetime import datetime
from typing import Any, Callable, Dict, List, Optional, Type, Union, cast

View File

@@ -839,9 +839,13 @@ class LLM(BaseLLM):
# Validate message format first
for msg in messages:
if not isinstance(msg, dict) or "role" not in msg or "content" not in msg:
if not isinstance(msg, dict) or "role" not in msg:
raise TypeError(
"Invalid message format. Each message must be a dict with 'role' and 'content' keys"
"Invalid message format. Each message must be a dict with 'role' key"
)
if "content" not in msg and msg["role"] != "system":
raise TypeError(
"Invalid message format. Each non-system message must have a 'content' key"
)
# Handle O1 models specially
@@ -868,6 +872,19 @@ class LLM(BaseLLM):
messages.append({"role": "user", "content": "Please continue."})
return messages
if "qwen" in self.model.lower():
formatted_messages = []
for msg in messages:
if not isinstance(msg.get("content"), str):
formatted_messages.append(msg)
continue
formatted_messages.append({
"role": msg["role"],
"content": [{"type": "text", "text": msg["content"]}] # type: ignore
})
return formatted_messages
# Handle Anthropic models
if not self.is_anthropic:
return messages
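Taken on its own, the transformation this hunk introduces looks roughly like the following sketch (the helper name and sample message are illustrative, not part of the diff):

```python
def to_qwen_multimodal(messages):
    # Wrap plain-string content in the list-of-parts format qwen2.5-vl expects,
    # leaving messages whose content is already structured untouched.
    formatted = []
    for msg in messages:
        if not isinstance(msg.get("content"), str):
            formatted.append(msg)
            continue
        formatted.append({
            "role": msg["role"],
            "content": [{"type": "text", "text": msg["content"]}],
        })
    return formatted

print(to_qwen_multimodal([{"role": "user", "content": "Describe this image."}]))
# [{'role': 'user', 'content': [{'type': 'text', 'text': 'Describe this image.'}]}]
```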

View File

@@ -18,7 +18,9 @@ class KickoffTaskOutputsSQLiteStorage:
An updated SQLite storage class for kickoff task outputs storage.
"""
def __init__(self, db_path: Optional[str] = None) -> None:
def __init__(
self, db_path: Optional[str] = None
) -> None:
if db_path is None:
# Get the parent directory of the default db path and create our db file there
db_path = str(Path(db_storage_path()) / "latest_kickoff_task_outputs.db")
@@ -144,9 +146,7 @@ class KickoffTaskOutputsSQLiteStorage:
conn.commit()
if cursor.rowcount == 0:
logger.warning(
f"No row found with task_index {task_index}. No update performed."
)
logger.warning(f"No row found with task_index {task_index}. No update performed.")
except sqlite3.Error as e:
error_msg = DatabaseError.format_error(DatabaseError.UPDATE_ERROR, e)
logger.error(error_msg)

View File

@@ -12,7 +12,9 @@ class LTMSQLiteStorage:
An updated SQLite storage class for LTM data storage.
"""
def __init__(self, db_path: Optional[str] = None) -> None:
def __init__(
self, db_path: Optional[str] = None
) -> None:
if db_path is None:
# Get the parent directory of the default db path and create our db file there
db_path = str(Path(db_storage_path()) / "long_term_memory_storage.db")

View File

@@ -137,11 +137,13 @@ def CrewBase(cls: T) -> T:
all_functions, "is_cache_handler"
)
callbacks = self._filter_functions(all_functions, "is_callback")
agents = self._filter_functions(all_functions, "is_agent")
for agent_name, agent_info in self.agents_config.items():
self._map_agent_variables(
agent_name,
agent_info,
agents,
llms,
tool_functions,
cache_handler_functions,
@@ -152,6 +154,7 @@ def CrewBase(cls: T) -> T:
self,
agent_name: str,
agent_info: Dict[str, Any],
agents: Dict[str, Callable],
llms: Dict[str, Callable],
tool_functions: Dict[str, Callable],
cache_handler_functions: Dict[str, Callable],
@@ -169,14 +172,9 @@ def CrewBase(cls: T) -> T:
]
if function_calling_llm := agent_info.get("function_calling_llm"):
try:
self.agents_config[agent_name]["function_calling_llm"] = llms[
function_calling_llm
]()
except KeyError:
self.agents_config[agent_name]["function_calling_llm"] = (
function_calling_llm
)
self.agents_config[agent_name]["function_calling_llm"] = agents[
function_calling_llm
]()
if step_callback := agent_info.get("step_callback"):
self.agents_config[agent_name]["step_callback"] = callbacks[

View File

@@ -26,55 +26,46 @@ class Fingerprint(BaseModel):
metadata (Dict[str, Any]): Additional metadata associated with this fingerprint
"""
uuid_str: str = Field(
default_factory=lambda: str(uuid.uuid4()),
description="String representation of the UUID",
)
created_at: datetime = Field(
default_factory=datetime.now, description="When this fingerprint was created"
)
metadata: Dict[str, Any] = Field(
default_factory=dict, description="Additional metadata for this fingerprint"
)
uuid_str: str = Field(default_factory=lambda: str(uuid.uuid4()), description="String representation of the UUID")
created_at: datetime = Field(default_factory=datetime.now, description="When this fingerprint was created")
metadata: Dict[str, Any] = Field(default_factory=dict, description="Additional metadata for this fingerprint")
model_config = ConfigDict(arbitrary_types_allowed=True)
@field_validator("metadata")
@field_validator('metadata')
@classmethod
def validate_metadata(cls, v):
"""Validate that metadata is a dictionary with string keys and valid values."""
if not isinstance(v, dict):
raise ValueError("Metadata must be a dictionary")
# Validate that all keys are strings
for key, value in v.items():
if not isinstance(key, str):
raise ValueError(f"Metadata keys must be strings, got {type(key)}")
# Validate nested dictionaries (prevent deeply nested structures)
if isinstance(value, dict):
# Check for nested dictionaries (limit depth to 1)
for nested_key, nested_value in value.items():
if not isinstance(nested_key, str):
raise ValueError(
f"Nested metadata keys must be strings, got {type(nested_key)}"
)
raise ValueError(f"Nested metadata keys must be strings, got {type(nested_key)}")
if isinstance(nested_value, dict):
raise ValueError("Metadata can only be nested one level deep")
# Check for maximum metadata size (prevent DoS)
if len(str(v)) > 10000: # Limit metadata size to 10KB
raise ValueError("Metadata size exceeds maximum allowed (10KB)")
return v
def __init__(self, **data):
"""Initialize a Fingerprint with auto-generated uuid_str and created_at."""
# Remove uuid_str and created_at from data to ensure they're auto-generated
if "uuid_str" in data:
data.pop("uuid_str")
if "created_at" in data:
data.pop("created_at")
if 'uuid_str' in data:
data.pop('uuid_str')
if 'created_at' in data:
data.pop('created_at')
# Call the parent constructor with the modified data
super().__init__(**data)
@@ -97,21 +88,19 @@ class Fingerprint(BaseModel):
"""
if not isinstance(seed, str):
raise ValueError("Seed must be a string")
if not seed.strip():
raise ValueError("Seed cannot be empty or whitespace")
# Create a deterministic UUID using v5 (SHA-1)
# Custom namespace for CrewAI to enhance security
# Using a unique namespace specific to CrewAI to reduce collision risks
CREW_AI_NAMESPACE = uuid.UUID("f47ac10b-58cc-4372-a567-0e02b2c3d479")
CREW_AI_NAMESPACE = uuid.UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479')
return str(uuid.uuid5(CREW_AI_NAMESPACE, seed))
@classmethod
def generate(
cls, seed: Optional[str] = None, metadata: Optional[Dict[str, Any]] = None
) -> "Fingerprint":
def generate(cls, seed: Optional[str] = None, metadata: Optional[Dict[str, Any]] = None) -> 'Fingerprint':
"""
Static factory method to create a new Fingerprint.
@@ -126,7 +115,7 @@ class Fingerprint(BaseModel):
fingerprint = cls(metadata=metadata or {})
if seed:
# For seed-based generation, we need to manually set the uuid_str after creation
object.__setattr__(fingerprint, "uuid_str", cls._generate_uuid(seed))
object.__setattr__(fingerprint, 'uuid_str', cls._generate_uuid(seed))
return fingerprint
def __str__(self) -> str:
@@ -153,11 +142,11 @@ class Fingerprint(BaseModel):
return {
"uuid_str": self.uuid_str,
"created_at": self.created_at.isoformat(),
"metadata": self.metadata,
"metadata": self.metadata
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "Fingerprint":
def from_dict(cls, data: Dict[str, Any]) -> 'Fingerprint':
"""
Create a Fingerprint from a dictionary representation.
@@ -174,10 +163,8 @@ class Fingerprint(BaseModel):
# For consistency with existing stored fingerprints, we need to manually set these
if "uuid_str" in data:
object.__setattr__(fingerprint, "uuid_str", data["uuid_str"])
object.__setattr__(fingerprint, 'uuid_str', data["uuid_str"])
if "created_at" in data and isinstance(data["created_at"], str):
object.__setattr__(
fingerprint, "created_at", datetime.fromisoformat(data["created_at"])
)
object.__setattr__(fingerprint, 'created_at', datetime.fromisoformat(data["created_at"]))
return fingerprint

View File

@@ -38,27 +38,29 @@ class SecurityConfig(BaseModel):
)
version: str = Field(
default="1.0.0", description="Version of the security configuration"
default="1.0.0",
description="Version of the security configuration"
)
fingerprint: Fingerprint = Field(
default_factory=Fingerprint, description="Unique identifier for the component"
default_factory=Fingerprint,
description="Unique identifier for the component"
)
def is_compatible(self, min_version: str) -> bool:
"""
Check if this security configuration is compatible with the minimum required version.
Args:
min_version (str): Minimum required version in semver format (e.g., "1.0.0")
Returns:
bool: True if this configuration is compatible, False otherwise
"""
# Simple version comparison (can be enhanced with packaging.version if needed)
current = [int(x) for x in self.version.split(".")]
minimum = [int(x) for x in min_version.split(".")]
# Compare major, minor, patch versions
for c, m in zip(current, minimum):
if c > m:
@@ -67,19 +69,19 @@ class SecurityConfig(BaseModel):
return False
return True
@model_validator(mode="before")
@model_validator(mode='before')
@classmethod
def validate_fingerprint(cls, values):
"""Ensure fingerprint is properly initialized."""
if isinstance(values, dict):
# Handle case where fingerprint is not provided or is None
if "fingerprint" not in values or values["fingerprint"] is None:
values["fingerprint"] = Fingerprint()
if 'fingerprint' not in values or values['fingerprint'] is None:
values['fingerprint'] = Fingerprint()
# Handle case where fingerprint is a string (seed)
elif isinstance(values["fingerprint"], str):
if not values["fingerprint"].strip():
elif isinstance(values['fingerprint'], str):
if not values['fingerprint'].strip():
raise ValueError("Fingerprint seed cannot be empty")
values["fingerprint"] = Fingerprint.generate(seed=values["fingerprint"])
values['fingerprint'] = Fingerprint.generate(seed=values['fingerprint'])
return values
def to_dict(self) -> Dict[str, Any]:
@@ -89,11 +91,13 @@ class SecurityConfig(BaseModel):
Returns:
Dict[str, Any]: Dictionary representation of the security config
"""
result = {"fingerprint": self.fingerprint.to_dict()}
result = {
"fingerprint": self.fingerprint.to_dict()
}
return result
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "SecurityConfig":
def from_dict(cls, data: Dict[str, Any]) -> 'SecurityConfig':
"""
Create a SecurityConfig from a dictionary.
@@ -107,10 +111,6 @@ class SecurityConfig(BaseModel):
data_copy = data.copy()
fingerprint_data = data_copy.pop("fingerprint", None)
fingerprint = (
Fingerprint.from_dict(fingerprint_data)
if fingerprint_data
else Fingerprint()
)
fingerprint = Fingerprint.from_dict(fingerprint_data) if fingerprint_data else Fingerprint()
return cls(fingerprint=fingerprint)

View File

@@ -193,6 +193,7 @@ class Task(BaseModel):
# Check return annotation if present, but don't require it
return_annotation = sig.return_annotation
if return_annotation != inspect.Signature.empty:
return_annotation_args = get_args(return_annotation)
if not (
get_origin(return_annotation) is tuple
@@ -463,9 +464,7 @@ class Task(BaseModel):
)
)
self._save_file(content)
crewai_event_bus.emit(
self, TaskCompletedEvent(output=task_output, task=self)
)
crewai_event_bus.emit(self, TaskCompletedEvent(output=task_output, task=self))
return task_output
except Exception as e:
self.end_time = datetime.datetime.now()

View File

@@ -22,7 +22,6 @@ class GuardrailResult(BaseModel):
result (Any, optional): The validated/transformed result if successful
error (str, optional): Error message if validation failed
"""
success: bool
result: Optional[Any] = None
error: Optional[str] = None
@@ -33,13 +32,9 @@ class GuardrailResult(BaseModel):
values = info.data
if "success" in values:
if values["success"] and v and "error" in values and values["error"]:
raise ValueError(
"Cannot have both result and error when success is True"
)
raise ValueError("Cannot have both result and error when success is True")
if not values["success"] and v and "result" in values and values["result"]:
raise ValueError(
"Cannot have both result and error when success is False"
)
raise ValueError("Cannot have both result and error when success is False")
return v
@classmethod
@@ -57,5 +52,5 @@ class GuardrailResult(BaseModel):
return cls(
success=success,
result=data if success else None,
error=data if not success else None,
error=data if not success else None
)

View File

@@ -45,10 +45,10 @@ class Telemetry:
"""
def __init__(self):
self.ready: bool = False
self.trace_set: bool = False
self.ready = False
self.trace_set = False
if self._is_telemetry_disabled():
if os.getenv("OTEL_SDK_DISABLED", "false").lower() == "true":
return
try:
@@ -76,13 +76,6 @@ class Telemetry:
raise # Re-raise the exception to not interfere with system signals
self.ready = False
def _is_telemetry_disabled(self) -> bool:
"""Check if telemetry should be disabled based on environment variables."""
return (
os.getenv("OTEL_SDK_DISABLED", "false").lower() == "true"
or os.getenv("CREWAI_DISABLE_TELEMETRY", "false").lower() == "true"
)
def set_tracer(self):
if self.ready and not self.trace_set:
try:

View File

@@ -7,19 +7,6 @@ from crewai.utilities import I18N
i18n = I18N()
def _get_add_image_tool_name() -> str:
"""Safely get the tool name from i18n."""
tool_info = i18n.tools("add_image")
if isinstance(tool_info, dict):
return tool_info.get("name", "Add Image")
return "Add Image" # Default name if not a dict
def _get_add_image_tool_description() -> str:
"""Safely get the tool description from i18n."""
tool_info = i18n.tools("add_image")
if isinstance(tool_info, dict):
return tool_info.get("description", "Tool for adding images to the content")
return "Tool for adding images to the content" # Default description if not a dict
class AddImageToolSchema(BaseModel):
image_url: str = Field(..., description="The URL or path of the image to add")
@@ -31,8 +18,8 @@ class AddImageToolSchema(BaseModel):
class AddImageTool(BaseTool):
"""Tool for adding images to the content"""
name: str = Field(default_factory=_get_add_image_tool_name)
description: str = Field(default_factory=_get_add_image_tool_description)
name: str = Field(default_factory=lambda: i18n.tools("add_image")["name"]) # type: ignore
description: str = Field(default_factory=lambda: i18n.tools("add_image")["description"]) # type: ignore
args_schema: type[BaseModel] = AddImageToolSchema
def _run(

View File

@@ -47,7 +47,10 @@ class BaseAgentTool(BaseTool):
return coworker
def _execute(
self, agent_name: Optional[str], task: str, context: Optional[str] = None
self,
agent_name: Optional[str],
task: str,
context: Optional[str] = None
) -> str:
"""
Execute delegation to an agent with case-insensitive and whitespace-tolerant matching.
@@ -74,46 +77,36 @@ class BaseAgentTool(BaseTool):
# when it should look like this:
# {"task": "....", "coworker": "...."}
sanitized_name = self.sanitize_agent_name(agent_name)
logger.debug(
f"Sanitized agent name from '{agent_name}' to '{sanitized_name}'"
)
logger.debug(f"Sanitized agent name from '{agent_name}' to '{sanitized_name}'")
available_agents = [agent.role for agent in self.agents]
logger.debug(f"Available agents: {available_agents}")
matching_agents = [
agent = [ # type: ignore # Incompatible types in assignment (expression has type "list[BaseAgent]", variable has type "str | None")
available_agent
for available_agent in self.agents
if self.sanitize_agent_name(available_agent.role) == sanitized_name
]
logger.debug(
f"Found {len(matching_agents)} matching agents for role '{sanitized_name}'"
)
logger.debug(f"Found {len(agent)} matching agents for role '{sanitized_name}'")
except (AttributeError, ValueError) as e:
# Handle specific exceptions that might occur during role name processing
return self.i18n.errors("agent_tool_unexisting_coworker").format(
coworkers="\n".join(
[
f"- {self.sanitize_agent_name(agent.role)}"
for agent in self.agents
]
[f"- {self.sanitize_agent_name(agent.role)}" for agent in self.agents]
),
error=str(e),
error=str(e)
)
if not matching_agents:
if not agent:
# No matching agent found after sanitization
return self.i18n.errors("agent_tool_unexisting_coworker").format(
coworkers="\n".join(
[
f"- {self.sanitize_agent_name(agent.role)}"
for agent in self.agents
]
[f"- {self.sanitize_agent_name(agent.role)}" for agent in self.agents]
),
error=f"No agent found with role '{sanitized_name}'",
error=f"No agent found with role '{sanitized_name}'"
)
agent: BaseAgent = matching_agents[0]
agent = agent[0]
try:
task_with_assigned_agent = Task(
description=task,
@@ -121,12 +114,11 @@ class BaseAgentTool(BaseTool):
expected_output=agent.i18n.slice("manager_request"),
i18n=agent.i18n,
)
logger.debug(
f"Created task for agent '{self.sanitize_agent_name(agent.role)}': {task}"
)
logger.debug(f"Created task for agent '{self.sanitize_agent_name(agent.role)}': {task}")
return agent.execute_task(task_with_assigned_agent, context)
except Exception as e:
# Handle task creation or execution errors
return self.i18n.errors("agent_tool_execution_error").format(
agent_role=self.sanitize_agent_name(agent.role), error=str(e)
agent_role=self.sanitize_agent_name(agent.role),
error=str(e)
)
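
The delegation flow above matches coworker roles case-insensitively and whitespace-tolerantly before creating the task. A minimal sketch of that matching rule; the `sanitize` helper here is an assumed stand-in for `sanitize_agent_name`:

```python
def sanitize(role: str) -> str:
    # Assumed stand-in for sanitize_agent_name: lowercase and trim.
    return role.casefold().strip()


agents = ["Senior Researcher", "Writer"]
requested = "  senior researcher "
matching = [role for role in agents if sanitize(role) == sanitize(requested)]
assert matching == ["Senior Researcher"]
```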

View File

@@ -244,13 +244,9 @@ def to_langchain(
return [t.to_structured_tool() if isinstance(t, BaseTool) else t for t in tools]
def tool(*args, result_as_answer=False):
def tool(*args):
"""
Decorator to create a tool from a function.
Args:
*args: Positional arguments, either the function to decorate or the tool name.
result_as_answer: Flag to indicate if the tool result should be used as the final agent answer.
"""
def _make_with_name(tool_name: str) -> Callable:
@@ -276,7 +272,6 @@ def tool(*args, result_as_answer=False):
description=f.__doc__,
func=f,
args_schema=args_schema,
result_as_answer=result_as_answer,
)
return _make_tool
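
The hunk above drops the `result_as_answer` keyword from the `tool` decorator. A usage sketch under the pre-revert signature, where the flag marks a tool's output as the agent's final answer; with the reverted signature, omit the keyword. The import path is assumed from crewai's public API:

```python
from crewai.tools import tool  # import path assumed


@tool("Adder", result_as_answer=True)  # keyword exists only on the pre-revert signature
def add(a: int, b: int) -> int:
    """Add two numbers; the output becomes the agent's final answer."""
    return a + b
```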

View File

@@ -380,7 +380,6 @@ class ToolUsage:
else ToolCalling
)
converter = Converter(
agent=None, # Agent not needed here as function calling is supported/used
text=f"Only tools available:\n###\n{self._render()}\n\nReturn a valid schema for the tool, the tool name must be exactly equal one of the options, use this text to inform the valid output schema:\n\n### TEXT \n{tool_string}",
llm=self.function_calling_llm,
model=model,

View File

@@ -2,7 +2,7 @@ import json
import re
from typing import Any, Optional, Type, Union, get_args, get_origin
from pydantic import BaseModel, Field, ValidationError
from pydantic import BaseModel, ValidationError
from crewai.agents.agent_builder.utilities.base_output_converter import OutputConverter
from crewai.utilities.printer import Printer
@@ -20,27 +20,18 @@ class ConverterError(Exception):
class Converter(OutputConverter):
"""Class that converts text into either pydantic or json."""
agent: Any = Field(description="The agent instance associated with this converter.")
def to_pydantic(self, current_attempt=1) -> BaseModel:
"""Convert text to pydantic."""
try:
if self.llm.supports_function_calling():
result = self._create_instructor().to_pydantic()
else:
messages = []
if self.agent and getattr(self.agent, "use_system_prompt", True):
messages.append({"role": "system", "content": self.instructions})
messages.append({"role": "user", "content": self.text})
else:
messages.append(
{
"role": "user",
"content": f"{self.instructions}\n\n{self.text}",
}
)
response = self.llm.call(messages) # Assign the result to 'response'
response = self.llm.call(
[
{"role": "system", "content": self.instructions},
{"role": "user", "content": self.text},
]
)
try:
# Try to directly validate the response JSON
result = self.model.model_validate_json(response)
@@ -83,20 +74,14 @@ class Converter(OutputConverter):
if self.llm.supports_function_calling():
return self._create_instructor().to_json()
else:
messages = []
if self.agent and getattr(self.agent, "use_system_prompt", True):
messages.append({"role": "system", "content": self.instructions})
messages.append({"role": "user", "content": self.text})
else:
messages.append(
{
"role": "user",
"content": f"{self.instructions}\n\n{self.text}",
}
                )
            llm_result = self.llm.call(messages)
            return json.dumps(llm_result)
            return json.dumps(
                self.llm.call(
                    [
                        {"role": "system", "content": self.instructions},
                        {"role": "user", "content": self.text},
                    ]
                )
            )
except Exception as e:
if current_attempt < self.max_attempts:
return self.to_json(current_attempt + 1)
@@ -254,11 +239,11 @@ def create_converter(
) -> Converter:
if agent and not converter_cls:
if hasattr(agent, "get_output_converter"):
converter = agent.get_output_converter(agent=agent, *args, **kwargs)
converter = agent.get_output_converter(*args, **kwargs)
else:
raise AttributeError("Agent does not have a 'get_output_converter' method")
elif converter_cls:
converter = converter_cls(agent=agent, *args, **kwargs)
converter = converter_cls(*args, **kwargs)
else:
raise ValueError("Either agent or converter_cls must be provided")

View File

@@ -11,7 +11,6 @@ from pydantic import BaseModel
class CrewJSONEncoder(json.JSONEncoder):
"""Custom JSON encoder for CrewAI objects and special types."""
def default(self, obj):
if isinstance(obj, BaseModel):
return self._handle_pydantic_model(obj)

View File

@@ -8,7 +8,6 @@ from crewai.agents.parser import OutputParserException
"""Parser for converting text outputs into Pydantic models."""
class CrewPydanticOutputParser:
"""Parses text outputs into specified Pydantic models."""

View File

@@ -1,5 +1,4 @@
"""Error message definitions for CrewAI database operations."""
from typing import Optional

View File

@@ -65,18 +65,13 @@ class TaskEvaluator:
instructions = f"{instructions}\n\nReturn only valid JSON with the following schema:\n```json\n{model_schema}\n```"
converter = Converter(
agent=self.original_agent, # Pass agent
llm=self.llm,
text=evaluation_query,
model=TaskEvaluation,
instructions=instructions,
)
result = converter.to_pydantic()
if isinstance(result, TaskEvaluation):
return result
else:
raise TypeError(f"Expected TaskEvaluation, got {type(result)}")
return converter.to_pydantic()
def evaluate_training_data(
self, training_data: dict, agent_id: str
@@ -139,7 +134,6 @@ class TaskEvaluator:
instructions = f"{instructions}\n\nThe json should have the following structure, with the following keys:\n{model_schema}"
converter = Converter(
agent=self.original_agent, # Pass agent
llm=self.llm,
text=evaluation_query,
model=TrainingTaskEvaluation,
@@ -147,7 +141,4 @@ class TaskEvaluator:
)
pydantic_result = converter.to_pydantic()
if isinstance(pydantic_result, TrainingTaskEvaluation):
return pydantic_result
else:
raise TypeError(f"Expected TrainingTaskEvaluation, got {type(pydantic_result)}")
return pydantic_result

View File

@@ -7,33 +7,27 @@ from typing import Union
class FileHandler:
"""Handler for file operations supporting both JSON and text-based logging.
Args:
file_path (Union[bool, str]): Path to the log file or boolean flag
"""
def __init__(self, file_path: Union[bool, str]):
self._initialize_path(file_path)
def _initialize_path(self, file_path: Union[bool, str]):
if file_path is True: # File path is boolean True
self._path = os.path.join(os.curdir, "logs.txt")
elif isinstance(file_path, str): # File path is a string
if file_path.endswith((".json", ".txt")):
self._path = (
file_path # No modification if the file ends with .json or .txt
)
self._path = file_path # No modification if the file ends with .json or .txt
else:
self._path = (
file_path + ".txt"
) # Append .txt if the file doesn't end with .json or .txt
self._path = file_path + ".txt" # Append .txt if the file doesn't end with .json or .txt
else:
raise ValueError(
"file_path must be a string or boolean."
) # Handle the case where file_path isn't valid
raise ValueError("file_path must be a string or boolean.") # Handle the case where file_path isn't valid
def log(self, **kwargs):
try:
now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
@@ -51,25 +45,20 @@ class FileHandler:
except (json.JSONDecodeError, FileNotFoundError):
# If no valid JSON or file doesn't exist, start with an empty list
existing_data = [log_entry]
with open(self._path, "w", encoding="utf-8") as write_file:
json.dump(existing_data, write_file, indent=4)
write_file.write("\n")
else:
# Append log in plain text format
message = (
f"{now}: "
+ ", ".join([f'{key}="{value}"' for key, value in kwargs.items()])
+ "\n"
)
message = f"{now}: " + ", ".join([f"{key}=\"{value}\"" for key, value in kwargs.items()]) + "\n"
with open(self._path, "a", encoding="utf-8") as file:
file.write(message)
except Exception as e:
raise ValueError(f"Failed to log message: {str(e)}")
class PickleHandler:
def __init__(self, file_name: str) -> None:
"""

View File

@@ -6,10 +6,8 @@ from pydantic import BaseModel, Field, PrivateAttr, model_validator
"""Internationalization support for CrewAI prompts and messages."""
class I18N(BaseModel):
"""Handles loading and retrieving internationalized prompts."""
_prompts: Dict[str, Dict[str, str]] = PrivateAttr()
prompt_file: Optional[str] = Field(
default=None,

View File

@@ -5,7 +5,6 @@ import appdirs
"""Path management utilities for CrewAI storage and configuration."""
def db_storage_path() -> str:
"""Returns the path for SQLite database storage.
@@ -29,4 +28,4 @@ def get_project_directory_name():
else:
cwd = Path.cwd()
project_directory_name = cwd.name
return project_directory_name
return project_directory_name

View File

@@ -9,10 +9,8 @@ from crewai.task import Task
"""Handles planning and coordination of crew tasks."""
logger = logging.getLogger(__name__)
class PlanPerTask(BaseModel):
"""Represents a plan for a specific task."""
task: str = Field(..., description="The task for which the plan is created")
plan: str = Field(
...,
@@ -22,7 +20,6 @@ class PlanPerTask(BaseModel):
class PlannerTaskPydanticOutput(BaseModel):
"""Output format for task planning results."""
list_of_plans_per_task: List[PlanPerTask] = Field(
...,
description="Step by step plan on how the agents can execute their tasks using the available tools with mastery",
@@ -31,7 +28,6 @@ class PlannerTaskPydanticOutput(BaseModel):
class CrewPlanner:
"""Plans and coordinates the execution of crew tasks."""
def __init__(self, tasks: List[Task], planning_agent_llm: Optional[Any] = None):
self.tasks = tasks
@@ -101,12 +97,8 @@ class CrewPlanner:
for idx, task in enumerate(self.tasks):
knowledge_list = self._get_agent_knowledge(task)
agent_tools = (
f"[{', '.join(str(tool) for tool in task.agent.tools)}]"
if task.agent and task.agent.tools
else '"agent has no tools"',
f',\n "agent_knowledge": "[\\"{knowledge_list[0]}\\"]"'
if knowledge_list and str(knowledge_list) != "None"
else "",
f"[{', '.join(str(tool) for tool in task.agent.tools)}]" if task.agent and task.agent.tools else '"agent has no tools"',
f',\n "agent_knowledge": "[\\"{knowledge_list[0]}\\"]"' if knowledge_list and str(knowledge_list) != "None" else ""
)
task_summary = f"""
Task Number {idx + 1} - {task.description}

View File

@@ -10,10 +10,8 @@ from crewai.task import Task
"""Handles storage and retrieval of task execution outputs."""
class ExecutionLog(BaseModel):
"""Represents a log entry for task execution."""
task_id: str
expected_output: Optional[str] = None
output: Dict[str, Any]
@@ -28,7 +26,6 @@ class ExecutionLog(BaseModel):
"""Manages storage and retrieval of task outputs."""
class TaskOutputStorageHandler:
def __init__(self) -> None:
self.storage = KickoffTaskOutputsSQLiteStorage()

View File

@@ -259,9 +259,7 @@ def test_cache_hitting():
def handle_tool_end(source, event):
received_events.append(event)
with (
patch.object(CacheHandler, "read") as read,
):
with (patch.object(CacheHandler, "read") as read,):
read.return_value = "0"
task = Task(
description="What is 2 times 6? Ignore correctness and just return the result of the multiplication tool, you must use the tool.",

View File

@@ -1,486 +0,0 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Test Agent. Test Backstory\nYour
personal goal is: Test Goal\n\nYou ONLY have access to the following tools,
and should NEVER make up tools that are not listed here:\n\nTool Name: search_web\nTool
Arguments: {''query'': {''description'': None, ''type'': ''str''}}\nTool Description:
Search the web for information about a topic.\nTool Name: calculate\nTool Arguments:
{''expression'': {''description'': None, ''type'': ''str''}}\nTool Description:
Calculate the result of a mathematical expression.\n\nIMPORTANT: Use the following
format in your response:\n\n```\nThought: you should always think about what
to do\nAction: the action to take, only one name of [search_web, calculate],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple JSON object, enclosed in curly braces, using \" to wrap keys and
values.\nObservation: the result of the action\n```\n\nOnce all necessary information
is gathered, return the following format:\n\n```\nThought: I now know the final
answer\nFinal Answer: the final answer to the original input question\n```"},
{"role": "user", "content": "Test query"}], "model": "gpt-4o-mini", "stop":
["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '1220'
content-type:
- application/json
cookie:
- __cf_bm=skEg5DBE_nQ5gLAsfGzhjTetiNUJ_Y2bXWLMsvjIi7s-1744222695-1.0.1.1-qyjwnTgJKwF54pRhf0YHxW_BUw6p7SC60kwFsF9XTq4i2u2mnFKVq4WbsgvQDeuDEIxyaNb.ngWUVOU1GIX1O2Hcxcdn6TSaJ8NXTQw28F8;
_cfuvid=Hwvd7n4RVfOZLGiOKPaHmYJC7h8rCQmlmnBgBsKqy4Y-1744222695443-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-BKUIMCbxAr4MO0Ku8tDYBgJ30LGXi\",\n \"object\":
\"chat.completion\",\n \"created\": 1744222714,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"```\\nThought: I need more information
to understand what specific query to search for.\\nAction: search_web\\nAction
Input: {\\\"query\\\":\\\"Test query\\\"}\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 242,\n \"completion_tokens\":
31,\n \"total_tokens\": 273,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_b376dfbbd5\"\n}\n"
headers:
CF-RAY:
- 92dc01f9bd96cf41-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 09 Apr 2025 18:18:34 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '749'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999732'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_99e3ad4ee98371cc1c55a2f5c6ae3962
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are Test Agent. Test Backstory\nYour
personal goal is: Test Goal\n\nYou ONLY have access to the following tools,
and should NEVER make up tools that are not listed here:\n\nTool Name: search_web\nTool
Arguments: {''query'': {''description'': None, ''type'': ''str''}}\nTool Description:
Search the web for information about a topic.\nTool Name: calculate\nTool Arguments:
{''expression'': {''description'': None, ''type'': ''str''}}\nTool Description:
Calculate the result of a mathematical expression.\n\nIMPORTANT: Use the following
format in your response:\n\n```\nThought: you should always think about what
to do\nAction: the action to take, only one name of [search_web, calculate],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple JSON object, enclosed in curly braces, using \" to wrap keys and
values.\nObservation: the result of the action\n```\n\nOnce all necessary information
is gathered, return the following format:\n\n```\nThought: I now know the final
answer\nFinal Answer: the final answer to the original input question\n```"},
{"role": "user", "content": "Test query"}, {"role": "assistant", "content":
"```\nThought: I need more information to understand what specific query to
search for.\nAction: search_web\nAction Input: {\"query\":\"Test query\"}\nObservation:
Found information about Test query: This is a simulated search result for demonstration
purposes."}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '1518'
content-type:
- application/json
cookie:
- __cf_bm=skEg5DBE_nQ5gLAsfGzhjTetiNUJ_Y2bXWLMsvjIi7s-1744222695-1.0.1.1-qyjwnTgJKwF54pRhf0YHxW_BUw6p7SC60kwFsF9XTq4i2u2mnFKVq4WbsgvQDeuDEIxyaNb.ngWUVOU1GIX1O2Hcxcdn6TSaJ8NXTQw28F8;
_cfuvid=Hwvd7n4RVfOZLGiOKPaHmYJC7h8rCQmlmnBgBsKqy4Y-1744222695443-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-BKUINDYiGwrVyJU7wUoXCw3hft7yF\",\n \"object\":
\"chat.completion\",\n \"created\": 1744222715,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"```\\nThought: I now know the final answer\\nFinal
Answer: This is a simulated search result for demonstration purposes.\\n```\",\n
\ \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
295,\n \"completion_tokens\": 26,\n \"total_tokens\": 321,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_b376dfbbd5\"\n}\n"
headers:
CF-RAY:
- 92dc02003c9ecf41-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 09 Apr 2025 18:18:35 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '531'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999667'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_dd9052c40d5d61ecc5eb141f49df3abe
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are Test Agent. Test Backstory\nYour
personal goal is: Test Goal\n\nYou ONLY have access to the following tools,
and should NEVER make up tools that are not listed here:\n\nTool Name: search_web\nTool
Arguments: {''query'': {''description'': None, ''type'': ''str''}}\nTool Description:
Search the web for information about a topic.\nTool Name: calculate\nTool Arguments:
{''expression'': {''description'': None, ''type'': ''str''}}\nTool Description:
Calculate the result of a mathematical expression.\n\nIMPORTANT: Use the following
format in your response:\n\n```\nThought: you should always think about what
to do\nAction: the action to take, only one name of [search_web, calculate],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple JSON object, enclosed in curly braces, using \" to wrap keys and
values.\nObservation: the result of the action\n```\n\nOnce all necessary information
is gathered, return the following format:\n\n```\nThought: I now know the final
answer\nFinal Answer: the final answer to the original input question\n```\nIMPORTANT:
Your final answer MUST contain all the information requested in the following
format: {\n \"test_field\": str\n}\n\nIMPORTANT: Ensure the final output does
not include any code block markers like ```json or ```python."}, {"role": "user",
"content": "Test query"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '1451'
content-type:
- application/json
cookie:
- __cf_bm=skEg5DBE_nQ5gLAsfGzhjTetiNUJ_Y2bXWLMsvjIi7s-1744222695-1.0.1.1-qyjwnTgJKwF54pRhf0YHxW_BUw6p7SC60kwFsF9XTq4i2u2mnFKVq4WbsgvQDeuDEIxyaNb.ngWUVOU1GIX1O2Hcxcdn6TSaJ8NXTQw28F8;
_cfuvid=Hwvd7n4RVfOZLGiOKPaHmYJC7h8rCQmlmnBgBsKqy4Y-1744222695443-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-BKUIN3xeM6JBgLjV5HQA8MTI2Uuem\",\n \"object\":
\"chat.completion\",\n \"created\": 1744222715,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"```\\nThought: I need to clarify what
specific information or topic the test query is targeting.\\nAction: search_web\\nAction
Input: {\\\"query\\\":\\\"What is the purpose of a test query in data retrieval?\\\"}\",\n
\ \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
288,\n \"completion_tokens\": 43,\n \"total_tokens\": 331,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_b376dfbbd5\"\n}\n"
headers:
CF-RAY:
- 92dc0204d91ccf41-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 09 Apr 2025 18:18:36 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '728'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999675'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_e792e993009ddfe84cfbb503560d88cf
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are Test Agent. Test Backstory\nYour
personal goal is: Test Goal\n\nYou ONLY have access to the following tools,
and should NEVER make up tools that are not listed here:\n\nTool Name: search_web\nTool
Arguments: {''query'': {''description'': None, ''type'': ''str''}}\nTool Description:
Search the web for information about a topic.\nTool Name: calculate\nTool Arguments:
{''expression'': {''description'': None, ''type'': ''str''}}\nTool Description:
Calculate the result of a mathematical expression.\n\nIMPORTANT: Use the following
format in your response:\n\n```\nThought: you should always think about what
to do\nAction: the action to take, only one name of [search_web, calculate],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple JSON object, enclosed in curly braces, using \" to wrap keys and
values.\nObservation: the result of the action\n```\n\nOnce all necessary information
is gathered, return the following format:\n\n```\nThought: I now know the final
answer\nFinal Answer: the final answer to the original input question\n```\nIMPORTANT:
Your final answer MUST contain all the information requested in the following
format: {\n \"test_field\": str\n}\n\nIMPORTANT: Ensure the final output does
not include any code block markers like ```json or ```python."}, {"role": "user",
"content": "Test query"}, {"role": "assistant", "content": "```\nThought: I
need to clarify what specific information or topic the test query is targeting.\nAction:
search_web\nAction Input: {\"query\":\"What is the purpose of a test query in
data retrieval?\"}\nObservation: Found information about What is the purpose
of a test query in data retrieval?: This is a simulated search result for demonstration
purposes."}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '1846'
content-type:
- application/json
cookie:
- __cf_bm=skEg5DBE_nQ5gLAsfGzhjTetiNUJ_Y2bXWLMsvjIi7s-1744222695-1.0.1.1-qyjwnTgJKwF54pRhf0YHxW_BUw6p7SC60kwFsF9XTq4i2u2mnFKVq4WbsgvQDeuDEIxyaNb.ngWUVOU1GIX1O2Hcxcdn6TSaJ8NXTQw28F8;
_cfuvid=Hwvd7n4RVfOZLGiOKPaHmYJC7h8rCQmlmnBgBsKqy4Y-1744222695443-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-BKUIOqyLDCIZv6YIz1hlaW479SIzg\",\n \"object\":
\"chat.completion\",\n \"created\": 1744222716,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"```\\nThought: I now know the final answer\\nFinal
Answer: {\\n \\\"test_field\\\": \\\"A test query is utilized to evaluate the
functionality, performance, and accuracy of data retrieval systems, ensuring
they return expected results.\\\"\\n}\\n```\",\n \"refusal\": null,\n
\ \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 362,\n \"completion_tokens\":
49,\n \"total_tokens\": 411,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_b376dfbbd5\"\n}\n"
headers:
CF-RAY:
- 92dc020a3defcf41-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 09 Apr 2025 18:18:37 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '805'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999588'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_3b6c80fd3066b9e0054d0d2280bc4c98
http_version: HTTP/1.1
status_code: 200
version: 1

View File

@@ -1,249 +0,0 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Research Assistant.
You are a helpful research assistant who can search for information about the
population of Tokyo.\nYour personal goal is: Find information about the population
of Tokyo\n\nYou ONLY have access to the following tools, and should NEVER make
up tools that are not listed here:\n\nTool Name: search_web\nTool Arguments:
{''query'': {''description'': None, ''type'': ''str''}}\nTool Description: Search
the web for information about a topic.\n\nIMPORTANT: Use the following format
in your response:\n\n```\nThought: you should always think about what to do\nAction:
the action to take, only one name of [search_web], just the name, exactly as
it''s written.\nAction Input: the input to the action, just a simple JSON object,
enclosed in curly braces, using \" to wrap keys and values.\nObservation: the
result of the action\n```\n\nOnce all necessary information is gathered, return
the following format:\n\n```\nThought: I now know the final answer\nFinal Answer:
the final answer to the original input question\n```"}, {"role": "user", "content":
"What is the population of Tokyo? Return your strucutred output in JSON format
with the following fields: summary, confidence"}], "model": "gpt-4o-mini", "stop":
["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '1290'
content-type:
- application/json
cookie:
- _cfuvid=u769MG.poap6iEjFpbByMFUC0FygMEqYSurr5DfLbas-1743447969501-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-BKUM5MZbz4TG6qmUtTrgKo8gI48FO\",\n \"object\":
\"chat.completion\",\n \"created\": 1744222945,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"```\\nThought: I need to find the current
population of Tokyo.\\nAction: search_web\\nAction Input: {\\\"query\\\":\\\"current
population of Tokyo 2023\\\"}\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 248,\n \"completion_tokens\":
33,\n \"total_tokens\": 281,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_b376dfbbd5\"\n}\n"
headers:
CF-RAY:
- 92dc079f8e5a7ab0-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 09 Apr 2025 18:22:26 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=1F.UUVSjZyp8QMRT0dTQXUJc5WlGpC3xAx4FY7KCQbs-1744222946-1.0.1.1-vcXIZcokSjfxyFeoTTUAWmBGmJpv0ss9iFqt5EJVZGE1PvSV2ov0erCS.KIo0xItBMuX_MtCgDSaYMPI3L9QDsLatWqfUFieHiFh0CrX4h8;
path=/; expires=Wed, 09-Apr-25 18:52:26 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=RbJuVW8hReYElyyghEbAFletdnJZ2mk5rn9D8EGuyNk-1744222946580-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '1282'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999713'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_845ed875afd48dee3d88f33cbab88cc2
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are Research Assistant.
You are a helpful research assistant who can search for information about the
population of Tokyo.\nYour personal goal is: Find information about the population
of Tokyo\n\nYou ONLY have access to the following tools, and should NEVER make
up tools that are not listed here:\n\nTool Name: search_web\nTool Arguments:
{''query'': {''description'': None, ''type'': ''str''}}\nTool Description: Search
the web for information about a topic.\n\nIMPORTANT: Use the following format
in your response:\n\n```\nThought: you should always think about what to do\nAction:
the action to take, only one name of [search_web], just the name, exactly as
it''s written.\nAction Input: the input to the action, just a simple JSON object,
enclosed in curly braces, using \" to wrap keys and values.\nObservation: the
result of the action\n```\n\nOnce all necessary information is gathered, return
the following format:\n\n```\nThought: I now know the final answer\nFinal Answer:
the final answer to the original input question\n```"}, {"role": "user", "content":
"What is the population of Tokyo? Return your strucutred output in JSON format
with the following fields: summary, confidence"}, {"role": "assistant", "content":
"```\nThought: I need to find the current population of Tokyo.\nAction: search_web\nAction
Input: {\"query\":\"current population of Tokyo 2023\"}\nObservation: Tokyo''s
population in 2023 was approximately 21 million people in the city proper, and
37 million in the greater metropolitan area."}], "model": "gpt-4o-mini", "stop":
["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '1619'
content-type:
- application/json
cookie:
- _cfuvid=RbJuVW8hReYElyyghEbAFletdnJZ2mk5rn9D8EGuyNk-1744222946580-0.0.1.1-604800000;
__cf_bm=1F.UUVSjZyp8QMRT0dTQXUJc5WlGpC3xAx4FY7KCQbs-1744222946-1.0.1.1-vcXIZcokSjfxyFeoTTUAWmBGmJpv0ss9iFqt5EJVZGE1PvSV2ov0erCS.KIo0xItBMuX_MtCgDSaYMPI3L9QDsLatWqfUFieHiFh0CrX4h8
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-BKUM69pnk6VLn5rpDjGdg21mOxFke\",\n \"object\":
\"chat.completion\",\n \"created\": 1744222946,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"```\\nThought: I now know the final answer\\nFinal
Answer: {\\\"summary\\\":\\\"The population of Tokyo is approximately 21 million
in the city proper and 37 million in the greater metropolitan area as of 2023.\\\",\\\"confidence\\\":\\\"high\\\"}\\n```\",\n
\ \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
315,\n \"completion_tokens\": 51,\n \"total_tokens\": 366,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_b376dfbbd5\"\n}\n"
headers:
CF-RAY:
- 92dc07a8ac9f7ab0-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Wed, 09 Apr 2025 18:22:27 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '1024'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999642'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_d72860d8629025988b1170e939bc1f20
http_version: HTTP/1.1
status_code: 200
version: 1

View File

@@ -8,7 +8,6 @@ researcher:
developments in {topic}. Known for your ability to find the most relevant
information and present it in a clear and concise manner.
verbose: true
function_calling_llm: "local_llm"
reporting_analyst:
role: >
@@ -19,5 +18,4 @@ reporting_analyst:
You're a meticulous analyst with a keen eye for detail. You're known for
your ability to turn complex data into clear and concise reports, making
it easy for others to understand and act on the information you provide.
verbose: true
function_calling_llm: "online_llm"
verbose: true

View File

@@ -8,7 +8,6 @@ from dotenv import load_dotenv
load_result = load_dotenv(override=True)
@pytest.fixture(autouse=True)
def setup_test_environment():
"""Set up test environment with a temporary directory for SQLite storage."""
@@ -16,13 +15,11 @@ def setup_test_environment():
# Create the directory with proper permissions
storage_dir = Path(temp_dir) / "crewai_test_storage"
storage_dir.mkdir(parents=True, exist_ok=True)
# Validate that the directory was created successfully
if not storage_dir.exists() or not storage_dir.is_dir():
raise RuntimeError(
f"Failed to create test storage directory: {storage_dir}"
)
raise RuntimeError(f"Failed to create test storage directory: {storage_dir}")
# Verify directory permissions
try:
# Try to create a test file to verify write permissions
@@ -30,13 +27,11 @@ def setup_test_environment():
test_file.touch()
test_file.unlink()
except (OSError, IOError) as e:
raise RuntimeError(
f"Test storage directory {storage_dir} is not writable: {e}"
)
raise RuntimeError(f"Test storage directory {storage_dir} is not writable: {e}")
# Set environment variable to point to the test storage directory
os.environ["CREWAI_STORAGE_DIR"] = str(storage_dir)
yield
# Cleanup is handled automatically when tempfile context exits

View File

@@ -2157,6 +2157,7 @@ def test_tools_with_custom_caching():
with patch.object(
CacheHandler, "add", wraps=crew._cache_handler.add
) as add_to_cache:
result = crew.kickoff()
# Check that add_to_cache was called exactly twice

View File

@@ -1,15 +0,0 @@
"""Test that all public API classes are properly importable."""
def test_task_output_import():
"""Test that TaskOutput can be imported from crewai."""
from crewai import TaskOutput
assert TaskOutput is not None
def test_crew_output_import():
"""Test that CrewOutput can be imported from crewai."""
from crewai import CrewOutput
assert CrewOutput is not None

View File

@@ -1,3 +1,4 @@
from unittest.mock import MagicMock, patch
import pytest
@@ -11,7 +12,6 @@ class MockCrew:
def __init__(self, memory_config):
self.memory_config = memory_config
@pytest.fixture
def user_memory():
"""Fixture to create a UserMemory instance"""
@@ -19,18 +19,17 @@ def user_memory():
memory_config={
"provider": "mem0",
"config": {"user_id": "john"},
"user_memory": {},
"user_memory" : {}
}
)
user_memory = MagicMock(spec=UserMemory)
with patch.object(Memory, "__new__", return_value=user_memory):
with patch.object(Memory,'__new__',return_value=user_memory):
user_memory_instance = UserMemory(crew=crew)
return user_memory_instance
def test_save_and_search(user_memory):
memory = UserMemoryItem(
data="""test value test value test value test value test value test value
@@ -41,10 +40,16 @@ def test_save_and_search(user_memory):
)
with patch.object(UserMemory, "save") as mock_save:
user_memory.save(value=memory.data, metadata=memory.metadata, user=memory.user)
user_memory.save(
value=memory.data,
metadata=memory.metadata,
user=memory.user
)
mock_save.assert_called_once_with(
value=memory.data, metadata=memory.metadata, user=memory.user
value=memory.data,
metadata=memory.metadata,
user=memory.user
)
expected_result = [
@@ -57,9 +62,7 @@ def test_save_and_search(user_memory):
expected_result = ["mocked_result"]
# Use patch.object to mock UserMemory's search method
with patch.object(
UserMemory, "search", return_value=expected_result
) as mock_search:
with patch.object(UserMemory, 'search', return_value=expected_result) as mock_search:
find = UserMemory.search("test value", score_threshold=0.01)[0]
mock_search.assert_called_once_with("test value", score_threshold=0.01)
assert find == expected_result[0]
assert find == expected_result[0]

View File

@@ -2,16 +2,7 @@ import pytest
from crewai.agent import Agent
from crewai.crew import Crew
from crewai.llm import LLM
from crewai.project import (
CrewBase,
after_kickoff,
agent,
before_kickoff,
crew,
llm,
task,
)
from crewai.project import CrewBase, after_kickoff, agent, before_kickoff, crew, task
from crewai.task import Task
@@ -40,14 +31,6 @@ class InternalCrew:
agents_config = "config/agents.yaml"
tasks_config = "config/tasks.yaml"
@llm
def local_llm(self):
return LLM(
model="openai/model_name",
api_key="None",
base_url="http://xxx.xxx.xxx.xxx:8000/v1",
)
@agent
def researcher(self):
return Agent(config=self.agents_config["researcher"])
@@ -122,20 +105,6 @@ def test_task_name():
), "Custom task name is not being set as expected"
def test_agent_function_calling_llm():
crew = InternalCrew()
llm = crew.local_llm()
obj_llm_agent = crew.researcher()
assert (
obj_llm_agent.function_calling_llm is llm
), "agent's function_calling_llm is incorrect"
str_llm_agent = crew.reporting_analyst()
assert (
str_llm_agent.function_calling_llm.model == "online_llm"
), "agent's function_calling_llm is incorrect"
@pytest.mark.vcr(filter_headers=["authorization"])
def test_before_kickoff_modification():
crew = InternalCrew()

View File

@@ -54,7 +54,7 @@ def test_agent_with_deterministic_fingerprint():
role="Researcher",
goal="Research quantum computing",
backstory="Expert in quantum physics",
security_config=security_config,
security_config=security_config
)
# Create another agent with the same security config
@@ -62,7 +62,7 @@ def test_agent_with_deterministic_fingerprint():
role="Completely different role",
goal="Different goal",
backstory="Different backstory",
security_config=security_config,
security_config=security_config
)
# Both agents should have the same fingerprint UUID
@@ -84,7 +84,9 @@ def test_task_with_deterministic_fingerprint():
# Create an agent first (required for tasks)
agent = Agent(
role="Assistant", goal="Help with tasks", backstory="Helpful AI assistant"
role="Assistant",
goal="Help with tasks",
backstory="Helpful AI assistant"
)
# Create a task with the deterministic fingerprint
@@ -92,7 +94,7 @@ def test_task_with_deterministic_fingerprint():
description="Analyze data",
expected_output="Data analysis report",
agent=agent,
security_config=security_config,
security_config=security_config
)
# Create another task with the same security config
@@ -100,7 +102,7 @@ def test_task_with_deterministic_fingerprint():
description="Different task description",
expected_output="Different expected output",
agent=agent,
security_config=security_config,
security_config=security_config
)
# Both tasks should have the same fingerprint UUID
@@ -117,18 +119,36 @@ def test_crew_with_deterministic_fingerprint():
# Create agents for the crew
agent1 = Agent(
role="Researcher", goal="Research information", backstory="Expert researcher"
role="Researcher",
goal="Research information",
backstory="Expert researcher"
)
agent2 = Agent(role="Writer", goal="Write reports", backstory="Expert writer")
agent2 = Agent(
role="Writer",
goal="Write reports",
backstory="Expert writer"
)
# Create a crew with the deterministic fingerprint
crew1 = Crew(agents=[agent1, agent2], tasks=[], security_config=security_config)
crew1 = Crew(
agents=[agent1, agent2],
tasks=[],
security_config=security_config
)
# Create another crew with the same security config but different agents
agent3 = Agent(role="Analyst", goal="Analyze data", backstory="Expert analyst")
agent3 = Agent(
role="Analyst",
goal="Analyze data",
backstory="Expert analyst"
)
crew2 = Crew(agents=[agent3], tasks=[], security_config=security_config)
crew2 = Crew(
agents=[agent3],
tasks=[],
security_config=security_config
)
# Both crews should have the same fingerprint UUID
assert crew1.fingerprint.uuid_str == crew2.fingerprint.uuid_str
@@ -148,7 +168,7 @@ def test_recreating_components_with_same_seed():
role="Researcher",
goal="Research topic",
backstory="Expert researcher",
security_config=security_config1,
security_config=security_config1
)
uuid_from_first_session = agent1.fingerprint.uuid_str
@@ -161,7 +181,7 @@ def test_recreating_components_with_same_seed():
role="Researcher",
goal="Research topic",
backstory="Expert researcher",
security_config=security_config2,
security_config=security_config2
)
# Should have same UUID across sessions
@@ -189,7 +209,7 @@ def test_security_config_with_seed_string():
role="Tester",
goal="Test fingerprints",
backstory="Expert tester",
security_config=security_config,
security_config=security_config
)
# Agent should have the same fingerprint UUID
@@ -216,7 +236,7 @@ def test_complex_component_hierarchy_with_deterministic_fingerprints():
role="Complex Test Agent",
goal="Test complex fingerprint scenarios",
backstory="Expert in testing",
security_config=agent_config,
security_config=agent_config
)
# Create a task
@@ -224,11 +244,15 @@ def test_complex_component_hierarchy_with_deterministic_fingerprints():
description="Test complex fingerprinting",
expected_output="Verification of fingerprint stability",
agent=agent,
security_config=task_config,
security_config=task_config
)
# Create a crew
crew = Crew(agents=[agent], tasks=[task], security_config=crew_config)
crew = Crew(
agents=[agent],
tasks=[task],
security_config=crew_config
)
# Each component should have its own deterministic fingerprint
assert agent.fingerprint.uuid_str == agent_fingerprint.uuid_str
@@ -247,4 +271,4 @@ def test_complex_component_hierarchy_with_deterministic_fingerprints():
assert agent_fingerprint.uuid_str == agent_fingerprint2.uuid_str
assert task_fingerprint.uuid_str == task_fingerprint2.uuid_str
assert crew_fingerprint.uuid_str == crew_fingerprint2.uuid_str
assert crew_fingerprint.uuid_str == crew_fingerprint2.uuid_str
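
These tests assert that components sharing a `SecurityConfig`, or recreated from the same seed, get identical fingerprint UUIDs. An illustrative sketch, not crewai's implementation, of why a fixed seed yields a stable UUID across runs:

```python
import uuid

# uuid5 is deterministic: the same namespace and name always hash to the
# same UUID, which is the property the tests above rely on. The namespace
# and seed here are illustrative only.
seed = "my-deterministic-seed"
assert uuid.uuid5(uuid.NAMESPACE_DNS, seed) == uuid.uuid5(uuid.NAMESPACE_DNS, seed)
```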

View File

@@ -170,7 +170,7 @@ def test_fingerprint_from_dict():
fingerprint_dict = {
"uuid_str": uuid_str,
"created_at": created_at_iso,
"metadata": metadata,
"metadata": metadata
}
fingerprint = Fingerprint.from_dict(fingerprint_dict)
@@ -207,7 +207,11 @@ def test_invalid_uuid_str():
uuid_str = "not-a-valid-uuid"
created_at = datetime.now().isoformat()
fingerprint_dict = {"uuid_str": uuid_str, "created_at": created_at, "metadata": {}}
fingerprint_dict = {
"uuid_str": uuid_str,
"created_at": created_at,
"metadata": {}
}
# The Fingerprint.from_dict method accepts even invalid UUIDs
# This seems to be the current behavior
@@ -239,7 +243,7 @@ def test_fingerprint_metadata_mutation():
expected_metadata = {
"version": "1.0",
"status": "published",
"author": "Test Author",
"author": "Test Author"
}
assert fingerprint.metadata == expected_metadata
@@ -256,4 +260,4 @@ def test_fingerprint_metadata_mutation():
# Ensure immutable fields remain unchanged
assert fingerprint.uuid_str == uuid_str
assert fingerprint.created_at == created_at
assert fingerprint.created_at == created_at

View File

@@ -15,7 +15,7 @@ def test_agent_with_security_config():
role="Tester",
goal="Test fingerprinting",
backstory="Testing fingerprinting",
security_config=security_config,
security_config=security_config
)
assert agent.security_config is not None
@@ -28,7 +28,9 @@ def test_agent_fingerprint_property():
"""Test the fingerprint property on Agent."""
# Create agent without security_config
agent = Agent(
role="Tester", goal="Test fingerprinting", backstory="Testing fingerprinting"
role="Tester",
goal="Test fingerprinting",
backstory="Testing fingerprinting"
)
# Fingerprint should be automatically generated
@@ -43,14 +45,21 @@ def test_crew_with_security_config():
security_config = SecurityConfig()
agent1 = Agent(
role="Tester1", goal="Test fingerprinting", backstory="Testing fingerprinting"
role="Tester1",
goal="Test fingerprinting",
backstory="Testing fingerprinting"
)
agent2 = Agent(
role="Tester2", goal="Test fingerprinting", backstory="Testing fingerprinting"
role="Tester2",
goal="Test fingerprinting",
backstory="Testing fingerprinting"
)
crew = Crew(agents=[agent1, agent2], security_config=security_config)
crew = Crew(
agents=[agent1, agent2],
security_config=security_config
)
assert crew.security_config is not None
assert crew.security_config == security_config
@@ -62,11 +71,15 @@ def test_crew_fingerprint_property():
"""Test the fingerprint property on Crew."""
# Create crew without security_config
agent1 = Agent(
role="Tester1", goal="Test fingerprinting", backstory="Testing fingerprinting"
role="Tester1",
goal="Test fingerprinting",
backstory="Testing fingerprinting"
)
agent2 = Agent(
role="Tester2", goal="Test fingerprinting", backstory="Testing fingerprinting"
role="Tester2",
goal="Test fingerprinting",
backstory="Testing fingerprinting"
)
crew = Crew(agents=[agent1, agent2])
@@ -83,14 +96,16 @@ def test_task_with_security_config():
security_config = SecurityConfig()
agent = Agent(
role="Tester", goal="Test fingerprinting", backstory="Testing fingerprinting"
role="Tester",
goal="Test fingerprinting",
backstory="Testing fingerprinting"
)
task = Task(
description="Test task",
expected_output="Testing output",
agent=agent,
security_config=security_config,
security_config=security_config
)
assert task.security_config is not None
@@ -103,10 +118,16 @@ def test_task_fingerprint_property():
"""Test the fingerprint property on Task."""
# Create task without security_config
agent = Agent(
role="Tester", goal="Test fingerprinting", backstory="Testing fingerprinting"
role="Tester",
goal="Test fingerprinting",
backstory="Testing fingerprinting"
)
task = Task(description="Test task", expected_output="Testing output", agent=agent)
task = Task(
description="Test task",
expected_output="Testing output",
agent=agent
)
# Fingerprint should be automatically generated
assert task.fingerprint is not None
@@ -118,20 +139,33 @@ def test_end_to_end_fingerprinting():
"""Test end-to-end fingerprinting across Agent, Crew, and Task."""
# Create components with auto-generated fingerprints
agent1 = Agent(
role="Researcher", goal="Research information", backstory="Expert researcher"
role="Researcher",
goal="Research information",
backstory="Expert researcher"
)
agent2 = Agent(role="Writer", goal="Write content", backstory="Expert writer")
agent2 = Agent(
role="Writer",
goal="Write content",
backstory="Expert writer"
)
task1 = Task(
description="Research topic", expected_output="Research findings", agent=agent1
description="Research topic",
expected_output="Research findings",
agent=agent1
)
task2 = Task(
description="Write article", expected_output="Written article", agent=agent2
description="Write article",
expected_output="Written article",
agent=agent2
)
crew = Crew(agents=[agent1, agent2], tasks=[task1, task2])
crew = Crew(
agents=[agent1, agent2],
tasks=[task1, task2]
)
# Verify all fingerprints were automatically generated
assert agent1.fingerprint is not None
@@ -146,18 +180,18 @@ def test_end_to_end_fingerprinting():
agent2.fingerprint.uuid_str,
task1.fingerprint.uuid_str,
task2.fingerprint.uuid_str,
crew.fingerprint.uuid_str,
crew.fingerprint.uuid_str
]
assert len(fingerprints) == len(
set(fingerprints)
), "All fingerprints should be unique"
assert len(fingerprints) == len(set(fingerprints)), "All fingerprints should be unique"
def test_fingerprint_persistence():
"""Test that fingerprints persist and don't change."""
# Create an agent and check its fingerprint
agent = Agent(
role="Tester", goal="Test fingerprinting", backstory="Testing fingerprinting"
role="Tester",
goal="Test fingerprinting",
backstory="Testing fingerprinting"
)
# Get initial fingerprint
@@ -167,7 +201,11 @@ def test_fingerprint_persistence():
assert agent.fingerprint.uuid_str == initial_fingerprint
# Create a task with the agent
task = Task(description="Test task", expected_output="Testing output", agent=agent)
task = Task(
description="Test task",
expected_output="Testing output",
agent=agent
)
# Check that task has its own unique fingerprint
assert task.fingerprint is not None
@@ -185,25 +223,27 @@ def test_shared_security_config_fingerprints():
role="Researcher",
goal="Research information",
backstory="Expert researcher",
security_config=shared_security_config,
security_config=shared_security_config
)
agent2 = Agent(
role="Writer",
goal="Write content",
backstory="Expert writer",
security_config=shared_security_config,
security_config=shared_security_config
)
task = Task(
description="Write article",
expected_output="Written article",
agent=agent1,
security_config=shared_security_config,
security_config=shared_security_config
)
crew = Crew(
agents=[agent1, agent2], tasks=[task], security_config=shared_security_config
agents=[agent1, agent2],
tasks=[task],
security_config=shared_security_config
)
# Verify all components have the same fingerprint UUID
@@ -216,4 +256,4 @@ def test_shared_security_config_fingerprints():
assert agent1.fingerprint is shared_security_config.fingerprint
assert agent2.fingerprint is shared_security_config.fingerprint
assert task.fingerprint is shared_security_config.fingerprint
assert crew.fingerprint is shared_security_config.fingerprint
assert crew.fingerprint is shared_security_config.fingerprint

View File

@@ -63,11 +63,13 @@ def test_security_config_from_dict():
fingerprint_dict = {
"uuid_str": "b723c6ff-95de-5e87-860b-467b72282bd8",
"created_at": datetime.now().isoformat(),
"metadata": {"version": "1.0"},
"metadata": {"version": "1.0"}
}
# Create a config dict with just the fingerprint
config_dict = {"fingerprint": fingerprint_dict}
config_dict = {
"fingerprint": fingerprint_dict
}
# Create config manually since from_dict has a specific implementation
config = SecurityConfig()
@@ -113,4 +115,4 @@ def test_security_config_json_serialization():
new_config.fingerprint = new_fingerprint
# Check the new config has the same fingerprint metadata
assert new_config.fingerprint.metadata == {"version": "1.0"}
assert new_config.fingerprint.metadata == {"version": "1.0"}
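The serialization tests above pass fingerprint state around as a plain dict with `uuid_str`, an ISO-8601 `created_at`, and a `metadata` mapping. A minimal sketch of the round trip under those assumptions (field handling is illustrative, not the SecurityConfig implementation):

```python
import json
from datetime import datetime

# The plain-dict shape the tests construct by hand.
fingerprint_dict = {
    "uuid_str": "b723c6ff-95de-5e87-860b-467b72282bd8",
    "created_at": datetime.now().isoformat(),
    "metadata": {"version": "1.0"},
}
config_dict = {"fingerprint": fingerprint_dict}

# Round-trip through JSON; metadata must survive, as the last assertion checks.
restored = json.loads(json.dumps(config_dict))
assert restored["fingerprint"]["metadata"] == {"version": "1.0"}
```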

View File

@@ -1,33 +0,0 @@
import os
from unittest.mock import patch
import pytest
from crewai.telemetry import Telemetry
@pytest.mark.parametrize(
"env_var,value,expected_ready",
[
("OTEL_SDK_DISABLED", "true", False),
("OTEL_SDK_DISABLED", "TRUE", False),
("CREWAI_DISABLE_TELEMETRY", "true", False),
("CREWAI_DISABLE_TELEMETRY", "TRUE", False),
("OTEL_SDK_DISABLED", "false", True),
("CREWAI_DISABLE_TELEMETRY", "false", True),
],
)
def test_telemetry_environment_variables(env_var, value, expected_ready):
"""Test telemetry state with different environment variable configurations."""
with patch.dict(os.environ, {env_var: value}):
with patch("crewai.telemetry.telemetry.TracerProvider"):
telemetry = Telemetry()
assert telemetry.ready is expected_ready
def test_telemetry_enabled_by_default():
"""Test that telemetry is enabled by default."""
with patch.dict(os.environ, {}, clear=True):
with patch("crewai.telemetry.telemetry.TracerProvider"):
telemetry = Telemetry()
assert telemetry.ready is True
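The deleted test encodes the expected gating logic: telemetry is disabled when either `OTEL_SDK_DISABLED` or `CREWAI_DISABLE_TELEMETRY` is set to "true" (case-insensitively), and enabled otherwise, including when neither variable is set. A minimal sketch of that check, assuming the real `Telemetry` class consults the environment in roughly this way:

```python
import os

def telemetry_enabled() -> bool:
    """Sketch of the gating the deleted test exercised: either opt-out
    variable set to "true" (any case) disables telemetry; anything else,
    including both variables being unset, leaves it enabled."""
    for var in ("OTEL_SDK_DISABLED", "CREWAI_DISABLE_TELEMETRY"):
        if os.environ.get(var, "false").lower() == "true":
            return False
    return True

os.environ["CREWAI_DISABLE_TELEMETRY"] = "TRUE"
assert telemetry_enabled() is False
```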

View File

@@ -6,7 +6,6 @@ from crewai.flow.persistence import persist
class PoemState(FlowState):
"""Test state model with default values that should be overridden."""
sentence_count: int = 1000 # Default that should be overridden
has_set_count: bool = False # Track whether we've set the count
poem_type: str = ""
@@ -47,13 +46,11 @@ def test_default_value_override():
# Fourth run - explicit override should work
flow3 = PoemFlow()
flow3.kickoff(
inputs={
"id": original_uuid,
"has_set_count": True,
"sentence_count": 5, # Override persisted value
}
)
flow3.kickoff(inputs={
"id": original_uuid,
"has_set_count": True,
"sentence_count": 5, # Override persisted value
})
assert flow3.state.sentence_count == 5 # Should use override value
# Third run - should now load sentence_count=2 instead of default 1000
@@ -99,12 +96,17 @@ def test_multi_step_default_override():
# Second run - should load persisted state and update poem type
flow2 = MultiStepPoemFlow()
flow2.kickoff(inputs={"id": original_uuid, "sentence_count": 5})
flow2.kickoff(inputs={
"id": original_uuid,
"sentence_count": 5
})
assert flow2.state.sentence_count == 5
assert flow2.state.poem_type == "limerick"
# Third run - new flow without persisted state should use defaults
flow3 = MultiStepPoemFlow()
flow3.kickoff(inputs={"id": original_uuid})
flow3.kickoff(inputs={
"id": original_uuid
})
assert flow3.state.sentence_count == 5
assert flow3.state.poem_type == "limerick"
assert flow3.state.poem_type == "limerick"
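These persistence tests pin down a merge order for flow state: model defaults are overridden by values persisted under the `id` input, which are in turn overridden by any other explicit `kickoff` inputs. A toy sketch of that precedence, assuming an in-memory store in place of the real persistence backend:

```python
from typing import Any, Dict

# In-memory stand-in for the real flow-persistence backend.
_store: Dict[str, Dict[str, Any]] = {}

def resolve_state(defaults: Dict[str, Any], inputs: Dict[str, Any]) -> Dict[str, Any]:
    """defaults < persisted-by-id < explicit inputs, as the tests expect."""
    state = dict(defaults)
    flow_id = inputs.get("id")
    if flow_id in _store:
        state.update(_store[flow_id])  # persisted values win over defaults
    state.update({k: v for k, v in inputs.items() if k != "id"})  # inputs win overall
    return state

_store["abc"] = {"sentence_count": 2}
assert resolve_state({"sentence_count": 1000}, {"id": "abc"})["sentence_count"] == 2
assert resolve_state({"sentence_count": 1000}, {"id": "abc", "sentence_count": 5})["sentence_count"] == 5
```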

View File

@@ -4,8 +4,8 @@ from typing import cast
import pytest
from pydantic import BaseModel, Field
from crewai import LLM, Agent
from crewai.lite_agent import LiteAgent, LiteAgentOutput
from crewai import LLM
from crewai.lite_agent import LiteAgent
from crewai.tools import BaseTool
from crewai.utilities.events import crewai_event_bus
from crewai.utilities.events.tool_usage_events import ToolUsageStartedEvent
@@ -63,74 +63,12 @@ class ResearchResult(BaseModel):
sources: list[str] = Field(description="List of sources used")
@pytest.mark.vcr(filter_headers=["authorization"])
@pytest.mark.parametrize("verbose", [True, False])
def test_lite_agent_created_with_correct_parameters(monkeypatch, verbose):
"""Test that LiteAgent is created with the correct parameters when Agent.kickoff() is called."""
# Create a test agent with specific parameters
llm = LLM(model="gpt-4o-mini")
custom_tools = [WebSearchTool(), CalculatorTool()]
max_iter = 10
max_execution_time = 300
agent = Agent(
role="Test Agent",
goal="Test Goal",
backstory="Test Backstory",
llm=llm,
tools=custom_tools,
max_iter=max_iter,
max_execution_time=max_execution_time,
verbose=verbose,
)
# Create a mock to capture the created LiteAgent
created_lite_agent = None
original_lite_agent = LiteAgent
# Define a mock LiteAgent class that captures its arguments
class MockLiteAgent(original_lite_agent):
def __init__(self, **kwargs):
nonlocal created_lite_agent
created_lite_agent = kwargs
super().__init__(**kwargs)
# Patch the LiteAgent class
monkeypatch.setattr("crewai.agent.LiteAgent", MockLiteAgent)
# Call kickoff to create the LiteAgent
agent.kickoff("Test query")
# Verify all parameters were passed correctly
assert created_lite_agent is not None
assert created_lite_agent["role"] == "Test Agent"
assert created_lite_agent["goal"] == "Test Goal"
assert created_lite_agent["backstory"] == "Test Backstory"
assert created_lite_agent["llm"] == llm
assert len(created_lite_agent["tools"]) == 2
assert isinstance(created_lite_agent["tools"][0], WebSearchTool)
assert isinstance(created_lite_agent["tools"][1], CalculatorTool)
assert created_lite_agent["max_iterations"] == max_iter
assert created_lite_agent["max_execution_time"] == max_execution_time
assert created_lite_agent["verbose"] == verbose
assert created_lite_agent["response_format"] is None
# Test with a response_format
monkeypatch.setattr("crewai.agent.LiteAgent", MockLiteAgent)
class TestResponse(BaseModel):
test_field: str
agent.kickoff("Test query", response_format=TestResponse)
assert created_lite_agent["response_format"] == TestResponse
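The deleted parameter-passing test uses a pattern worth keeping in mind: subclass the real class, capture `**kwargs` in `__init__`, then monkeypatch the import site so the code under test instantiates the spy. A generic, self-contained sketch of that pattern (names here are placeholders, not crewai classes):

```python
class Widget:
    """Stand-in for the class whose constructor arguments the test asserts on."""
    def __init__(self, **kwargs):
        self.kwargs = kwargs

captured: dict = {}

class SpyWidget(Widget):
    """Records every keyword argument, then defers to the real class."""
    def __init__(self, **kwargs):
        captured.update(kwargs)
        super().__init__(**kwargs)

# The deleted test swaps the spy in at the import site with
# monkeypatch.setattr("crewai.agent.LiteAgent", MockLiteAgent);
# here we simply instantiate it directly.
SpyWidget(role="Test Agent", verbose=True)
assert captured == {"role": "Test Agent", "verbose": True}
```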
@pytest.mark.vcr(filter_headers=["authorization"])
def test_lite_agent_with_tools():
"""Test that Agent can use tools."""
"""Test that LiteAgent can use tools."""
# Create a LiteAgent with tools
llm = LLM(model="gpt-4o-mini")
agent = Agent(
agent = LiteAgent(
role="Research Assistant",
goal="Find information about the population of Tokyo",
backstory="You are a helpful research assistant who can search for information about the population of Tokyo.",
@@ -168,7 +106,7 @@ def test_lite_agent_with_tools():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_lite_agent_structured_output():
"""Test that Agent can return a simple structured output."""
"""Test that LiteAgent can return a simple structured output."""
class SimpleOutput(BaseModel):
"""Simple structure for agent outputs."""
@@ -179,18 +117,18 @@ def test_lite_agent_structured_output():
web_search_tool = WebSearchTool()
llm = LLM(model="gpt-4o-mini")
agent = Agent(
agent = LiteAgent(
role="Info Gatherer",
goal="Provide brief information",
backstory="You gather and summarize information quickly.",
llm=llm,
tools=[web_search_tool],
verbose=True,
response_format=SimpleOutput,
)
result = agent.kickoff(
"What is the population of Tokyo? Return your strucutred output in JSON format with the following fields: summary, confidence",
response_format=SimpleOutput,
"What is the population of Tokyo? Return your strucutred output in JSON format with the following fields: summary, confidence"
)
print(f"\n=== Agent Result Type: {type(result)}")
@@ -217,7 +155,7 @@ def test_lite_agent_structured_output():
def test_lite_agent_returns_usage_metrics():
"""Test that LiteAgent returns usage metrics."""
llm = LLM(model="gpt-4o-mini")
agent = Agent(
agent = LiteAgent(
role="Research Assistant",
goal="Find information about the population of Tokyo",
backstory="You are a helpful research assistant who can search for information about the population of Tokyo.",
@@ -232,26 +170,3 @@ def test_lite_agent_returns_usage_metrics():
assert result.usage_metrics is not None
assert result.usage_metrics["total_tokens"] > 0
@pytest.mark.vcr(filter_headers=["authorization"])
@pytest.mark.asyncio
async def test_lite_agent_returns_usage_metrics_async():
"""Test that LiteAgent returns usage metrics when run asynchronously."""
llm = LLM(model="gpt-4o-mini")
agent = Agent(
role="Research Assistant",
goal="Find information about the population of Tokyo",
backstory="You are a helpful research assistant who can search for information about the population of Tokyo.",
llm=llm,
tools=[WebSearchTool()],
verbose=True,
)
result = await agent.kickoff_async(
"What is the population of Tokyo? Return your strucutred output in JSON format with the following fields: summary, confidence"
)
assert isinstance(result, LiteAgentOutput)
assert "21 million" in result.raw or "37 million" in result.raw
assert result.usage_metrics is not None
assert result.usage_metrics["total_tokens"] > 0
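After this change the tests construct `LiteAgent` directly and pass `response_format` at construction time rather than to `kickoff()`. The call shape, as exercised above (requires crewai and an OpenAI key at runtime; the `SimpleOutput` field definitions are assumptions, since the model body is elided in this diff):

```python
from pydantic import BaseModel, Field

from crewai import LLM
from crewai.lite_agent import LiteAgent

class SimpleOutput(BaseModel):
    # Field names follow the prompt in the test; the definitions are assumed.
    summary: str = Field(description="Short answer")
    confidence: float = Field(description="Confidence in the answer")

agent = LiteAgent(
    role="Info Gatherer",
    goal="Provide brief information",
    backstory="You gather and summarize information quickly.",
    llm=LLM(model="gpt-4o-mini"),
    verbose=True,
    response_format=SimpleOutput,  # set on the agent, not passed to kickoff()
)

result = agent.kickoff(
    "What is the population of Tokyo? Return your structured output in JSON "
    "format with the following fields: summary, confidence"
)
print(result.raw)
print(result.usage_metrics["total_tokens"])
```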

View File

@@ -18,7 +18,7 @@ def test_multimodal_agent_with_image_url():
llm = LLM(
model="openai/gpt-4o", # model with vision capabilities
api_key=OPENAI_API_KEY,
temperature=0.7,
temperature=0.7
)
expert_analyst = Agent(
@@ -28,7 +28,7 @@ def test_multimodal_agent_with_image_url():
llm=llm,
verbose=True,
allow_delegation=False,
multimodal=True,
multimodal=True
)
inspection_task = Task(
@@ -40,7 +40,7 @@ def test_multimodal_agent_with_image_url():
Provide a detailed report highlighting any issues found.
""",
expected_output="A detailed report highlighting any issues found",
agent=expert_analyst,
agent=expert_analyst
)
crew = Crew(agents=[expert_analyst], tasks=[inspection_task])
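For reference, the multimodal content shape these vision tests ultimately rely on is a list of text and image parts in a single user message. A minimal example, using the string-valued `image_url` form that the new Qwen test later in this diff asserts on (other providers may instead expect a nested `{"url": ...}` object):

```python
# One user message carrying both text and an image reference; the URL is a
# placeholder, not an asset from the test suite.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Inspect this product image for defects."},
        {"type": "image_url", "image_url": "https://example.com/image.jpg"},
    ],
}
```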

View File

@@ -0,0 +1,32 @@
import pytest
from crewai.llm import LLM
@pytest.mark.vcr(filter_headers=["authorization"])
def test_qwen_multimodal_content_formatting():
"""Test that multimodal content is properly formatted for Qwen models."""
llm = LLM(model="sambanova/Qwen2.5-72B-Instruct", temperature=0.7)
message = {"role": "user", "content": "Describe this image"}
formatted = llm._format_messages_for_provider([message])
assert isinstance(formatted[0]["content"], list)
assert formatted[0]["content"][0]["type"] == "text"
assert formatted[0]["content"][0]["text"] == "Describe this image"
multimodal_content = [
{"type": "text", "text": "What's in this image?"},
{"type": "image_url", "image_url": "https://example.com/image.jpg"}
]
message = {"role": "user", "content": multimodal_content}
formatted = llm._format_messages_for_provider([message])
assert formatted[0]["content"] == multimodal_content
messages = [
{"role": "system", "content": "You are a visual analysis assistant."},
{"role": "user", "content": multimodal_content}
]
formatted = llm._format_messages_for_provider(messages)
assert isinstance(formatted[0]["content"], list)
assert formatted[1]["content"] == multimodal_content
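The assertions above pin down the fix for #2541: `_format_messages_for_provider` wraps plain string content in a one-element multimodal list and passes content that is already a part list through unchanged. A minimal standalone sketch of that transformation (the helper name is hypothetical; the real method lives on `LLM` and also handles provider detection):

```python
from typing import Any

def wrap_text_content(messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
    """Hypothetical standalone version of the behavior asserted above."""
    formatted = []
    for message in messages:
        content = message.get("content")
        if isinstance(content, str):
            # Plain text becomes a single-element multimodal content list.
            formatted.append({**message, "content": [{"type": "text", "text": content}]})
        else:
            # A list of parts (text + image_url) is kept exactly as-is.
            formatted.append(message)
    return formatted

msgs = wrap_text_content([{"role": "user", "content": "Describe this image"}])
assert msgs[0]["content"] == [{"type": "text", "text": "Describe this image"}]
```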

View File

@@ -100,25 +100,3 @@ def test_default_cache_function_is_true():
my_tool = MyCustomTool()
# Assert all the right attributes were defined
assert my_tool.cache_function()
def test_result_as_answer_in_tool_decorator():
@tool("Tool with result as answer", result_as_answer=True)
def my_tool_with_result_as_answer(question: str) -> str:
"""This tool will return its result as the final answer."""
return question
assert my_tool_with_result_as_answer.result_as_answer is True
converted_tool = my_tool_with_result_as_answer.to_structured_tool()
assert converted_tool.result_as_answer is True
@tool("Tool with default result_as_answer")
def my_tool_with_default(question: str) -> str:
"""This tool uses the default result_as_answer value."""
return question
assert my_tool_with_default.result_as_answer is False
converted_tool = my_tool_with_default.to_structured_tool()
assert converted_tool.result_as_answer is False
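The deleted test documents the `result_as_answer` flag on the `@tool` decorator: when True, the tool's raw output is surfaced as the agent's final answer, and the flag survives conversion via `to_structured_tool()`. Usage as the test exercised it (the import path is assumed; the diff shows only the decorator calls):

```python
from crewai.tools import tool

@tool("Echo question", result_as_answer=True)
def echo(question: str) -> str:
    """Return the question verbatim; with result_as_answer=True the raw
    tool output becomes the agent's final answer."""
    return question

assert echo.result_as_answer is True
assert echo.to_structured_tool().result_as_answer is True
```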

File diff suppressed because it is too large

View File

@@ -0,0 +1,864 @@
interactions:
- request:
body: '{"model": "llama3.2:3b", "prompt": "### System:\nPlease convert the following
text into valid JSON.\n\nOutput ONLY the valid JSON and nothing else.\n\nThe
JSON must follow this format exactly:\n{\n \"name\": str,\n \"age\": int\n}\n\n###
User:\nName: Alice Llama, Age: 30\n\n", "options": {"stop": []}, "stream": false}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '321'
host:
- localhost:11434
user-agent:
- litellm/1.60.2
method: POST
uri: http://localhost:11434/api/generate
response:
content: '{"model":"llama3.2:3b","created_at":"2025-02-21T02:57:55.059392Z","response":"{\"name\":
\"Alice Llama\", \"age\": 30}","done":true,"done_reason":"stop","context":[128006,9125,128007,271,38766,1303,33025,2696,25,6790,220,2366,18,271,128009,128006,882,128007,271,14711,744,512,5618,5625,279,2768,1495,1139,2764,4823,382,5207,27785,279,2764,4823,323,4400,775,382,791,4823,2011,1833,420,3645,7041,512,517,220,330,609,794,610,345,220,330,425,794,528,198,633,14711,2724,512,678,25,30505,445,81101,11,13381,25,220,966,271,128009,128006,78191,128007,271,5018,609,794,330,62786,445,81101,498,330,425,794,220,966,92],"total_duration":4675906000,"load_duration":836091458,"prompt_eval_count":82,"prompt_eval_duration":3561000000,"eval_count":15,"eval_duration":275000000}'
headers:
Content-Length:
- '761'
Content-Type:
- application/json; charset=utf-8
Date:
- Fri, 21 Feb 2025 02:57:55 GMT
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"name": "llama3.2:3b"}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '23'
content-type:
- application/json
host:
- localhost:11434
user-agent:
- litellm/1.60.2
method: POST
uri: http://localhost:11434/api/show
response:
content: "{\"license\":\"LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\\nLlama 3.2 Version
Release Date: September 25, 2024\\n\\n\u201CAgreement\u201D means the terms
and conditions for use, reproduction, distribution \\nand modification of the
Llama Materials set forth herein.\\n\\n\u201CDocumentation\u201D means the specifications,
manuals and documentation accompanying Llama 3.2\\ndistributed by Meta at https://llama.meta.com/doc/overview.\\n\\n\u201CLicensee\u201D
or \u201Cyou\u201D means you, or your employer or any other person or entity
(if you are \\nentering into this Agreement on such person or entity\u2019s
behalf), of the age required under\\napplicable laws, rules or regulations to
provide legal consent and that has legal authority\\nto bind your employer or
such other person or entity if you are entering in this Agreement\\non their
behalf.\\n\\n\u201CLlama 3.2\u201D means the foundational large language models
and software and algorithms, including\\nmachine-learning model code, trained
model weights, inference-enabling code, training-enabling code,\\nfine-tuning
enabling code and other elements of the foregoing distributed by Meta at \\nhttps://www.llama.com/llama-downloads.\\n\\n\u201CLlama
Materials\u201D means, collectively, Meta\u2019s proprietary Llama 3.2 and Documentation
(and \\nany portion thereof) made available under this Agreement.\\n\\n\u201CMeta\u201D
or \u201Cwe\u201D means Meta Platforms Ireland Limited (if you are located in
or, \\nif you are an entity, your principal place of business is in the EEA
or Switzerland) \\nand Meta Platforms, Inc. (if you are located outside of the
EEA or Switzerland). \\n\\n\\nBy clicking \u201CI Accept\u201D below or by using
or distributing any portion or element of the Llama Materials,\\nyou agree to
be bound by this Agreement.\\n\\n\\n1. License Rights and Redistribution.\\n\\n
\ a. Grant of Rights. You are granted a non-exclusive, worldwide, \\nnon-transferable
and royalty-free limited license under Meta\u2019s intellectual property or
other rights \\nowned by Meta embodied in the Llama Materials to use, reproduce,
distribute, copy, create derivative works \\nof, and make modifications to the
Llama Materials. \\n\\n b. Redistribution and Use. \\n\\n i. If
you distribute or make available the Llama Materials (or any derivative works
thereof), \\nor a product or service (including another AI model) that contains
any of them, you shall (A) provide\\na copy of this Agreement with any such
Llama Materials; and (B) prominently display \u201CBuilt with Llama\u201D\\non
a related website, user interface, blogpost, about page, or product documentation.
If you use the\\nLlama Materials or any outputs or results of the Llama Materials
to create, train, fine tune, or\\notherwise improve an AI model, which is distributed
or made available, you shall also include \u201CLlama\u201D\\nat the beginning
of any such AI model name.\\n\\n ii. If you receive Llama Materials,
or any derivative works thereof, from a Licensee as part\\nof an integrated
end user product, then Section 2 of this Agreement will not apply to you. \\n\\n
\ iii. You must retain in all copies of the Llama Materials that you distribute
the \\nfollowing attribution notice within a \u201CNotice\u201D text file distributed
as a part of such copies: \\n\u201CLlama 3.2 is licensed under the Llama 3.2
Community License, Copyright \xA9 Meta Platforms,\\nInc. All Rights Reserved.\u201D\\n\\n
\ iv. Your use of the Llama Materials must comply with applicable laws
and regulations\\n(including trade compliance laws and regulations) and adhere
to the Acceptable Use Policy for\\nthe Llama Materials (available at https://www.llama.com/llama3_2/use-policy),
which is hereby \\nincorporated by reference into this Agreement.\\n \\n2.
Additional Commercial Terms. If, on the Llama 3.2 version release date, the
monthly active users\\nof the products or services made available by or for
Licensee, or Licensee\u2019s affiliates, \\nis greater than 700 million monthly
active users in the preceding calendar month, you must request \\na license
from Meta, which Meta may grant to you in its sole discretion, and you are not
authorized to\\nexercise any of the rights under this Agreement unless or until
Meta otherwise expressly grants you such rights.\\n\\n3. Disclaimer of Warranty.
UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND \\nRESULTS
THEREFROM ARE PROVIDED ON AN \u201CAS IS\u201D BASIS, WITHOUT WARRANTIES OF
ANY KIND, AND META DISCLAIMS\\nALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND
IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\\nOF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE\\nFOR
DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS
AND ASSUME ANY RISKS ASSOCIATED\\nWITH YOUR USE OF THE LLAMA MATERIALS AND ANY
OUTPUT AND RESULTS.\\n\\n4. Limitation of Liability. IN NO EVENT WILL META OR
ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, \\nWHETHER IN CONTRACT,
TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT,
\\nFOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,
EXEMPLARY OR PUNITIVE DAMAGES, EVEN \\nIF META OR ITS AFFILIATES HAVE BEEN ADVISED
OF THE POSSIBILITY OF ANY OF THE FOREGOING.\\n\\n5. Intellectual Property.\\n\\n
\ a. No trademark licenses are granted under this Agreement, and in connection
with the Llama Materials, \\nneither Meta nor Licensee may use any name or mark
owned by or associated with the other or any of its affiliates, \\nexcept as
required for reasonable and customary use in describing and redistributing the
Llama Materials or as \\nset forth in this Section 5(a). Meta hereby grants
you a license to use \u201CLlama\u201D (the \u201CMark\u201D) solely as required
\\nto comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s
brand guidelines (currently accessible \\nat https://about.meta.com/brand/resources/meta/company-brand/).
All goodwill arising out of your use of the Mark \\nwill inure to the benefit
of Meta.\\n\\n b. Subject to Meta\u2019s ownership of Llama Materials and
derivatives made by or for Meta, with respect to any\\n derivative works
and modifications of the Llama Materials that are made by you, as between you
and Meta,\\n you are and will be the owner of such derivative works and modifications.\\n\\n
\ c. If you institute litigation or other proceedings against Meta or any
entity (including a cross-claim or\\n counterclaim in a lawsuit) alleging
that the Llama Materials or Llama 3.2 outputs or results, or any portion\\n
\ of any of the foregoing, constitutes infringement of intellectual property
or other rights owned or licensable\\n by you, then any licenses granted
to you under this Agreement shall terminate as of the date such litigation or\\n
\ claim is filed or instituted. You will indemnify and hold harmless Meta
from and against any claim by any third\\n party arising out of or related
to your use or distribution of the Llama Materials.\\n\\n6. Term and Termination.
The term of this Agreement will commence upon your acceptance of this Agreement
or access\\nto the Llama Materials and will continue in full force and effect
until terminated in accordance with the terms\\nand conditions herein. Meta
may terminate this Agreement if you are in breach of any term or condition of
this\\nAgreement. Upon termination of this Agreement, you shall delete and cease
use of the Llama Materials. Sections 3,\\n4 and 7 shall survive the termination
of this Agreement. \\n\\n7. Governing Law and Jurisdiction. This Agreement will
be governed and construed under the laws of the State of \\nCalifornia without
regard to choice of law principles, and the UN Convention on Contracts for the
International\\nSale of Goods does not apply to this Agreement. The courts of
California shall have exclusive jurisdiction of\\nany dispute arising out of
this Agreement.\\n**Llama 3.2** **Acceptable Use Policy**\\n\\nMeta is committed
to promoting safe and fair use of its tools and features, including Llama 3.2.
If you access or use Llama 3.2, you agree to this Acceptable Use Policy (\u201C**Policy**\u201D).
The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\\n\\n**Prohibited
Uses**\\n\\nWe want everyone to use Llama 3.2 safely and responsibly. You agree
you will not use, or allow others to use, Llama 3.2 to:\\n\\n\\n\\n1. Violate
the law or others\u2019 rights, including to:\\n 1. Engage in, promote, generate,
contribute to, encourage, plan, incite, or further illegal or unlawful activity
or content, such as:\\n 1. Violence or terrorism\\n 2. Exploitation
or harm to children, including the solicitation, creation, acquisition, or dissemination
of child exploitative content or failure to report Child Sexual Abuse Material\\n
\ 3. Human trafficking, exploitation, and sexual violence\\n 4.
The illegal distribution of information or materials to minors, including obscene
materials, or failure to employ legally required age-gating in connection with
such information or materials.\\n 5. Sexual solicitation\\n 6.
Any other criminal activity\\n 1. Engage in, promote, incite, or facilitate
the harassment, abuse, threatening, or bullying of individuals or groups of
individuals\\n 2. Engage in, promote, incite, or facilitate discrimination
or other unlawful or harmful conduct in the provision of employment, employment
benefits, credit, housing, other economic benefits, or other essential goods
and services\\n 3. Engage in the unauthorized or unlicensed practice of any
profession including, but not limited to, financial, legal, medical/health,
or related professional practices\\n 4. Collect, process, disclose, generate,
or infer private or sensitive information about individuals, including information
about individuals\u2019 identity, health, or demographic information, unless
you have obtained the right to do so in accordance with applicable law\\n 5.
Engage in or facilitate any action or generate any content that infringes, misappropriates,
or otherwise violates any third-party rights, including the outputs or results
of any products or services using the Llama Materials\\n 6. Create, generate,
or facilitate the creation of malicious code, malware, computer viruses or do
anything else that could disable, overburden, interfere with or impair the proper
working, integrity, operation or appearance of a website or computer system\\n
\ 7. Engage in any action, or facilitate any action, to intentionally circumvent
or remove usage restrictions or other safety measures, or to enable functionality
disabled by Meta\\n2. Engage in, promote, incite, facilitate, or assist in the
planning or development of activities that present a risk of death or bodily
harm to individuals, including use of Llama 3.2 related to the following:\\n
\ 8. Military, warfare, nuclear industries or applications, espionage, use
for materials or activities that are subject to the International Traffic Arms
Regulations (ITAR) maintained by the United States Department of State or to
the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons
Convention Implementation Act of 1997\\n 9. Guns and illegal weapons (including
weapon development)\\n 10. Illegal drugs and regulated/controlled substances\\n
\ 11. Operation of critical infrastructure, transportation technologies, or
heavy machinery\\n 12. Self-harm or harm to others, including suicide, cutting,
and eating disorders\\n 13. Any content intended to incite or promote violence,
abuse, or any infliction of bodily harm to an individual\\n3. Intentionally
deceive or mislead others, including use of Llama 3.2 related to the following:\\n
\ 14. Generating, promoting, or furthering fraud or the creation or promotion
of disinformation\\n 15. Generating, promoting, or furthering defamatory
content, including the creation of defamatory statements, images, or other content\\n
\ 16. Generating, promoting, or further distributing spam\\n 17. Impersonating
another individual without consent, authorization, or legal right\\n 18.
Representing that the use of Llama 3.2 or outputs are human-generated\\n 19.
Generating or facilitating false online engagement, including fake reviews and
other means of fake online engagement\\n4. Fail to appropriately disclose to
end users any known dangers of your AI system\\n5. Interact with third party
tools, models, or software designed to generate unlawful content or engage in
unlawful or harmful conduct and/or represent that the outputs of such tools,
models, or software are associated with Meta or Llama 3.2\\n\\nWith respect
to any multimodal models included in Llama 3.2, the rights granted under Section
1(a) of the Llama 3.2 Community License Agreement are not being granted to you
if you are an individual domiciled in, or a company with a principal place of
business in, the European Union. This restriction does not apply to end users
of a product or service that incorporates any such multimodal models.\\n\\nPlease
report any violation of this Policy, software \u201Cbug,\u201D or other problems
that could lead to a violation of this Policy through one of the following means:\\n\\n\\n\\n*
Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues\\u0026h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\\n*
Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\\n*
Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\\n*
Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama
3.2: LlamaUseReport@meta.com\",\"modelfile\":\"# Modelfile generated by \\\"ollama
show\\\"\\n# To build a new Modelfile based on this, replace FROM with:\\n#
FROM llama3.2:3b\\n\\nFROM /Users/joaomoura/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff\\nTEMPLATE
\\\"\\\"\\\"\\u003c|start_header_id|\\u003esystem\\u003c|end_header_id|\\u003e\\n\\nCutting
Knowledge Date: December 2023\\n\\n{{ if .System }}{{ .System }}\\n{{- end }}\\n{{-
if .Tools }}When you receive a tool call response, use the output to format
an answer to the orginal user question.\\n\\nYou are a helpful assistant with
tool calling capabilities.\\n{{- end }}\\u003c|eot_id|\\u003e\\n{{- range $i,
$_ := .Messages }}\\n{{- $last := eq (len (slice $.Messages $i)) 1 }}\\n{{-
if eq .Role \\\"user\\\" }}\\u003c|start_header_id|\\u003euser\\u003c|end_header_id|\\u003e\\n{{-
if and $.Tools $last }}\\n\\nGiven the following functions, please respond with
a JSON for a function call with its proper arguments that best answers the given
prompt.\\n\\nRespond in the format {\\\"name\\\": function name, \\\"parameters\\\":
dictionary of argument name and its value}. Do not use variables.\\n\\n{{ range
$.Tools }}\\n{{- . }}\\n{{ end }}\\n{{ .Content }}\\u003c|eot_id|\\u003e\\n{{-
else }}\\n\\n{{ .Content }}\\u003c|eot_id|\\u003e\\n{{- end }}{{ if $last }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n\\n{{
end }}\\n{{- else if eq .Role \\\"assistant\\\" }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n{{-
if .ToolCalls }}\\n{{ range .ToolCalls }}\\n{\\\"name\\\": \\\"{{ .Function.Name
}}\\\", \\\"parameters\\\": {{ .Function.Arguments }}}{{ end }}\\n{{- else }}\\n\\n{{
.Content }}\\n{{- end }}{{ if not $last }}\\u003c|eot_id|\\u003e{{ end }}\\n{{-
else if eq .Role \\\"tool\\\" }}\\u003c|start_header_id|\\u003eipython\\u003c|end_header_id|\\u003e\\n\\n{{
.Content }}\\u003c|eot_id|\\u003e{{ if $last }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n\\n{{
end }}\\n{{- end }}\\n{{- end }}\\\"\\\"\\\"\\nPARAMETER stop \\u003c|start_header_id|\\u003e\\nPARAMETER
stop \\u003c|end_header_id|\\u003e\\nPARAMETER stop \\u003c|eot_id|\\u003e\\nLICENSE
\\\"LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\\nLlama 3.2 Version Release Date:
September 25, 2024\\n\\n\u201CAgreement\u201D means the terms and conditions
for use, reproduction, distribution \\nand modification of the Llama Materials
set forth herein.\\n\\n\u201CDocumentation\u201D means the specifications, manuals
and documentation accompanying Llama 3.2\\ndistributed by Meta at https://llama.meta.com/doc/overview.\\n\\n\u201CLicensee\u201D
or \u201Cyou\u201D means you, or your employer or any other person or entity
(if you are \\nentering into this Agreement on such person or entity\u2019s
behalf), of the age required under\\napplicable laws, rules or regulations to
provide legal consent and that has legal authority\\nto bind your employer or
such other person or entity if you are entering in this Agreement\\non their
behalf.\\n\\n\u201CLlama 3.2\u201D means the foundational large language models
and software and algorithms, including\\nmachine-learning model code, trained
model weights, inference-enabling code, training-enabling code,\\nfine-tuning
enabling code and other elements of the foregoing distributed by Meta at \\nhttps://www.llama.com/llama-downloads.\\n\\n\u201CLlama
Materials\u201D means, collectively, Meta\u2019s proprietary Llama 3.2 and Documentation
(and \\nany portion thereof) made available under this Agreement.\\n\\n\u201CMeta\u201D
or \u201Cwe\u201D means Meta Platforms Ireland Limited (if you are located in
or, \\nif you are an entity, your principal place of business is in the EEA
or Switzerland) \\nand Meta Platforms, Inc. (if you are located outside of the
EEA or Switzerland). \\n\\n\\nBy clicking \u201CI Accept\u201D below or by using
or distributing any portion or element of the Llama Materials,\\nyou agree to
be bound by this Agreement.\\n\\n\\n1. License Rights and Redistribution.\\n\\n
\ a. Grant of Rights. You are granted a non-exclusive, worldwide, \\nnon-transferable
and royalty-free limited license under Meta\u2019s intellectual property or
other rights \\nowned by Meta embodied in the Llama Materials to use, reproduce,
distribute, copy, create derivative works \\nof, and make modifications to the
Llama Materials. \\n\\n b. Redistribution and Use. \\n\\n i. If
you distribute or make available the Llama Materials (or any derivative works
thereof), \\nor a product or service (including another AI model) that contains
any of them, you shall (A) provide\\na copy of this Agreement with any such
Llama Materials; and (B) prominently display \u201CBuilt with Llama\u201D\\non
a related website, user interface, blogpost, about page, or product documentation.
If you use the\\nLlama Materials or any outputs or results of the Llama Materials
to create, train, fine tune, or\\notherwise improve an AI model, which is distributed
or made available, you shall also include \u201CLlama\u201D\\nat the beginning
of any such AI model name.\\n\\n ii. If you receive Llama Materials,
or any derivative works thereof, from a Licensee as part\\nof an integrated
end user product, then Section 2 of this Agreement will not apply to you. \\n\\n
\ iii. You must retain in all copies of the Llama Materials that you distribute
the \\nfollowing attribution notice within a \u201CNotice\u201D text file distributed
as a part of such copies: \\n\u201CLlama 3.2 is licensed under the Llama 3.2
Community License, Copyright \xA9 Meta Platforms,\\nInc. All Rights Reserved.\u201D\\n\\n
\ iv. Your use of the Llama Materials must comply with applicable laws
and regulations\\n(including trade compliance laws and regulations) and adhere
to the Acceptable Use Policy for\\nthe Llama Materials (available at https://www.llama.com/llama3_2/use-policy),
which is hereby \\nincorporated by reference into this Agreement.\\n \\n2.
Additional Commercial Terms. If, on the Llama 3.2 version release date, the
monthly active users\\nof the products or services made available by or for
Licensee, or Licensee\u2019s affiliates, \\nis greater than 700 million monthly
active users in the preceding calendar month, you must request \\na license
from Meta, which Meta may grant to you in its sole discretion, and you are not
authorized to\\nexercise any of the rights under this Agreement unless or until
Meta otherwise expressly grants you such rights.\\n\\n3. Disclaimer of Warranty.
UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND \\nRESULTS
THEREFROM ARE PROVIDED ON AN \u201CAS IS\u201D BASIS, WITHOUT WARRANTIES OF
ANY KIND, AND META DISCLAIMS\\nALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND
IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\\nOF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE\\nFOR
DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS
AND ASSUME ANY RISKS ASSOCIATED\\nWITH YOUR USE OF THE LLAMA MATERIALS AND ANY
OUTPUT AND RESULTS.\\n\\n4. Limitation of Liability. IN NO EVENT WILL META OR
ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, \\nWHETHER IN CONTRACT,
TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT,
\\nFOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,
EXEMPLARY OR PUNITIVE DAMAGES, EVEN \\nIF META OR ITS AFFILIATES HAVE BEEN ADVISED
OF THE POSSIBILITY OF ANY OF THE FOREGOING.\\n\\n5. Intellectual Property.\\n\\n
\ a. No trademark licenses are granted under this Agreement, and in connection
with the Llama Materials, \\nneither Meta nor Licensee may use any name or mark
owned by or associated with the other or any of its affiliates, \\nexcept as
required for reasonable and customary use in describing and redistributing the
Llama Materials or as \\nset forth in this Section 5(a). Meta hereby grants
you a license to use \u201CLlama\u201D (the \u201CMark\u201D) solely as required
\\nto comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s
brand guidelines (currently accessible \\nat https://about.meta.com/brand/resources/meta/company-brand/).
All goodwill arising out of your use of the Mark \\nwill inure to the benefit
of Meta.\\n\\n b. Subject to Meta\u2019s ownership of Llama Materials and
derivatives made by or for Meta, with respect to any\\n derivative works
and modifications of the Llama Materials that are made by you, as between you
and Meta,\\n you are and will be the owner of such derivative works and modifications.\\n\\n
\ c. If you institute litigation or other proceedings against Meta or any
entity (including a cross-claim or\\n counterclaim in a lawsuit) alleging
that the Llama Materials or Llama 3.2 outputs or results, or any portion\\n
\ of any of the foregoing, constitutes infringement of intellectual property
or other rights owned or licensable\\n by you, then any licenses granted
to you under this Agreement shall terminate as of the date such litigation or\\n
\ claim is filed or instituted. You will indemnify and hold harmless Meta
from and against any claim by any third\\n party arising out of or related
to your use or distribution of the Llama Materials.\\n\\n6. Term and Termination.
The term of this Agreement will commence upon your acceptance of this Agreement
or access\\nto the Llama Materials and will continue in full force and effect
until terminated in accordance with the terms\\nand conditions herein. Meta
may terminate this Agreement if you are in breach of any term or condition of
this\\nAgreement. Upon termination of this Agreement, you shall delete and cease
use of the Llama Materials. Sections 3,\\n4 and 7 shall survive the termination
of this Agreement. \\n\\n7. Governing Law and Jurisdiction. This Agreement will
be governed and construed under the laws of the State of \\nCalifornia without
regard to choice of law principles, and the UN Convention on Contracts for the
International\\nSale of Goods does not apply to this Agreement. The courts of
California shall have exclusive jurisdiction of\\nany dispute arising out of
this Agreement.\\\"\\nLICENSE \\\"**Llama 3.2** **Acceptable Use Policy**\\n\\nMeta
is committed to promoting safe and fair use of its tools and features, including
Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use
Policy (\u201C**Policy**\u201D). The most recent copy of this policy can be
found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\\n\\n**Prohibited
Uses**\\n\\nWe want everyone to use Llama 3.2 safely and responsibly. You agree
you will not use, or allow others to use, Llama 3.2 to:\\n\\n\\n\\n1. Violate
the law or others\u2019 rights, including to:\\n 1. Engage in, promote, generate,
contribute to, encourage, plan, incite, or further illegal or unlawful activity
or content, such as:\\n 1. Violence or terrorism\\n 2. Exploitation
or harm to children, including the solicitation, creation, acquisition, or dissemination
of child exploitative content or failure to report Child Sexual Abuse Material\\n
\ 3. Human trafficking, exploitation, and sexual violence\\n 4.
The illegal distribution of information or materials to minors, including obscene
materials, or failure to employ legally required age-gating in connection with
such information or materials.\\n 5. Sexual solicitation\\n 6.
Any other criminal activity\\n 1. Engage in, promote, incite, or facilitate
the harassment, abuse, threatening, or bullying of individuals or groups of
individuals\\n 2. Engage in, promote, incite, or facilitate discrimination
or other unlawful or harmful conduct in the provision of employment, employment
benefits, credit, housing, other economic benefits, or other essential goods
and services\\n 3. Engage in the unauthorized or unlicensed practice of any
profession including, but not limited to, financial, legal, medical/health,
or related professional practices\\n 4. Collect, process, disclose, generate,
or infer private or sensitive information about individuals, including information
about individuals\u2019 identity, health, or demographic information, unless
you have obtained the right to do so in accordance with applicable law\\n 5.
Engage in or facilitate any action or generate any content that infringes, misappropriates,
or otherwise violates any third-party rights, including the outputs or results
of any products or services using the Llama Materials\\n 6. Create, generate,
or facilitate the creation of malicious code, malware, computer viruses or do
anything else that could disable, overburden, interfere with or impair the proper
working, integrity, operation or appearance of a website or computer system\\n
\ 7. Engage in any action, or facilitate any action, to intentionally circumvent
or remove usage restrictions or other safety measures, or to enable functionality
disabled by Meta\\n2. Engage in, promote, incite, facilitate, or assist in the
planning or development of activities that present a risk of death or bodily
harm to individuals, including use of Llama 3.2 related to the following:\\n
\ 8. Military, warfare, nuclear industries or applications, espionage, use
for materials or activities that are subject to the International Traffic Arms
Regulations (ITAR) maintained by the United States Department of State or to
the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons
Convention Implementation Act of 1997\\n 9. Guns and illegal weapons (including
weapon development)\\n 10. Illegal drugs and regulated/controlled substances\\n
\ 11. Operation of critical infrastructure, transportation technologies, or
heavy machinery\\n 12. Self-harm or harm to others, including suicide, cutting,
and eating disorders\\n 13. Any content intended to incite or promote violence,
abuse, or any infliction of bodily harm to an individual\\n3. Intentionally
deceive or mislead others, including use of Llama 3.2 related to the following:\\n
\ 14. Generating, promoting, or furthering fraud or the creation or promotion
of disinformation\\n 15. Generating, promoting, or furthering defamatory
content, including the creation of defamatory statements, images, or other content\\n
\ 16. Generating, promoting, or further distributing spam\\n 17. Impersonating
another individual without consent, authorization, or legal right\\n 18.
Representing that the use of Llama 3.2 or outputs are human-generated\\n 19.
Generating or facilitating false online engagement, including fake reviews and
other means of fake online engagement\\n4. Fail to appropriately disclose to
end users any known dangers of your AI system\\n5. Interact with third party
tools, models, or software designed to generate unlawful content or engage in
unlawful or harmful conduct and/or represent that the outputs of such tools,
models, or software are associated with Meta or Llama 3.2\\n\\nWith respect
to any multimodal models included in Llama 3.2, the rights granted under Section
1(a) of the Llama 3.2 Community License Agreement are not being granted to you
if you are an individual domiciled in, or a company with a principal place of
business in, the European Union. This restriction does not apply to end users
of a product or service that incorporates any such multimodal models.\\n\\nPlease
report any violation of this Policy, software \u201Cbug,\u201D or other problems
that could lead to a violation of this Policy through one of the following means:\\n\\n\\n\\n*
Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues\\u0026h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\\n*
Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\\n*
Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\\n*
Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama
3.2: LlamaUseReport@meta.com\\\"\\n\",\"parameters\":\"stop \\\"\\u003c|start_header_id|\\u003e\\\"\\nstop
\ \\\"\\u003c|end_header_id|\\u003e\\\"\\nstop \\\"\\u003c|eot_id|\\u003e\\\"\",\"template\":\"\\u003c|start_header_id|\\u003esystem\\u003c|end_header_id|\\u003e\\n\\nCutting
Knowledge Date: December 2023\\n\\n{{ if .System }}{{ .System }}\\n{{- end }}\\n{{-
if .Tools }}When you receive a tool call response, use the output to format
an answer to the orginal user question.\\n\\nYou are a helpful assistant with
tool calling capabilities.\\n{{- end }}\\u003c|eot_id|\\u003e\\n{{- range $i,
$_ := .Messages }}\\n{{- $last := eq (len (slice $.Messages $i)) 1 }}\\n{{-
if eq .Role \\\"user\\\" }}\\u003c|start_header_id|\\u003euser\\u003c|end_header_id|\\u003e\\n{{-
if and $.Tools $last }}\\n\\nGiven the following functions, please respond with
a JSON for a function call with its proper arguments that best answers the given
prompt.\\n\\nRespond in the format {\\\"name\\\": function name, \\\"parameters\\\":
dictionary of argument name and its value}. Do not use variables.\\n\\n{{ range
$.Tools }}\\n{{- . }}\\n{{ end }}\\n{{ .Content }}\\u003c|eot_id|\\u003e\\n{{-
else }}\\n\\n{{ .Content }}\\u003c|eot_id|\\u003e\\n{{- end }}{{ if $last }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n\\n{{
end }}\\n{{- else if eq .Role \\\"assistant\\\" }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n{{-
if .ToolCalls }}\\n{{ range .ToolCalls }}\\n{\\\"name\\\": \\\"{{ .Function.Name
}}\\\", \\\"parameters\\\": {{ .Function.Arguments }}}{{ end }}\\n{{- else }}\\n\\n{{
.Content }}\\n{{- end }}{{ if not $last }}\\u003c|eot_id|\\u003e{{ end }}\\n{{-
else if eq .Role \\\"tool\\\" }}\\u003c|start_header_id|\\u003eipython\\u003c|end_header_id|\\u003e\\n\\n{{
.Content }}\\u003c|eot_id|\\u003e{{ if $last }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n\\n{{
end }}\\n{{- end }}\\n{{- end }}\",\"details\":{\"parent_model\":\"\",\"format\":\"gguf\",\"family\":\"llama\",\"families\":[\"llama\"],\"parameter_size\":\"3.2B\",\"quantization_level\":\"Q4_K_M\"},\"model_info\":{\"general.architecture\":\"llama\",\"general.basename\":\"Llama-3.2\",\"general.file_type\":15,\"general.finetune\":\"Instruct\",\"general.languages\":[\"en\",\"de\",\"fr\",\"it\",\"pt\",\"hi\",\"es\",\"th\"],\"general.parameter_count\":3212749888,\"general.quantization_version\":2,\"general.size_label\":\"3B\",\"general.tags\":[\"facebook\",\"meta\",\"pytorch\",\"llama\",\"llama-3\",\"text-generation\"],\"general.type\":\"model\",\"llama.attention.head_count\":24,\"llama.attention.head_count_kv\":8,\"llama.attention.key_length\":128,\"llama.attention.layer_norm_rms_epsilon\":0.00001,\"llama.attention.value_length\":128,\"llama.block_count\":28,\"llama.context_length\":131072,\"llama.embedding_length\":3072,\"llama.feed_forward_length\":8192,\"llama.rope.dimension_count\":128,\"llama.rope.freq_base\":500000,\"llama.vocab_size\":128256,\"tokenizer.ggml.bos_token_id\":128000,\"tokenizer.ggml.eos_token_id\":128009,\"tokenizer.ggml.merges\":null,\"tokenizer.ggml.model\":\"gpt2\",\"tokenizer.ggml.pre\":\"llama-bpe\",\"tokenizer.ggml.token_type\":null,\"tokenizer.ggml.tokens\":null},\"modified_at\":\"2025-02-20T18:55:09.150577031-08:00\"}"
headers:
Content-Type:
- application/json; charset=utf-8
Date:
- Fri, 21 Feb 2025 02:57:55 GMT
Transfer-Encoding:
- chunked
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"name": "llama3.2:3b"}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '23'
content-type:
- application/json
host:
- localhost:11434
user-agent:
- litellm/1.60.2
method: POST
uri: http://localhost:11434/api/show
response:
content: "{\"license\":\"LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\\nLlama 3.2 Version
Release Date: September 25, 2024\\n\\n\u201CAgreement\u201D means the terms
and conditions for use, reproduction, distribution \\nand modification of the
Llama Materials set forth herein.\\n\\n\u201CDocumentation\u201D means the specifications,
manuals and documentation accompanying Llama 3.2\\ndistributed by Meta at https://llama.meta.com/doc/overview.\\n\\n\u201CLicensee\u201D
or \u201Cyou\u201D means you, or your employer or any other person or entity
(if you are \\nentering into this Agreement on such person or entity\u2019s
behalf), of the age required under\\napplicable laws, rules or regulations to
provide legal consent and that has legal authority\\nto bind your employer or
such other person or entity if you are entering in this Agreement\\non their
behalf.\\n\\n\u201CLlama 3.2\u201D means the foundational large language models
and software and algorithms, including\\nmachine-learning model code, trained
model weights, inference-enabling code, training-enabling code,\\nfine-tuning
enabling code and other elements of the foregoing distributed by Meta at \\nhttps://www.llama.com/llama-downloads.\\n\\n\u201CLlama
Materials\u201D means, collectively, Meta\u2019s proprietary Llama 3.2 and Documentation
(and \\nany portion thereof) made available under this Agreement.\\n\\n\u201CMeta\u201D
or \u201Cwe\u201D means Meta Platforms Ireland Limited (if you are located in
or, \\nif you are an entity, your principal place of business is in the EEA
or Switzerland) \\nand Meta Platforms, Inc. (if you are located outside of the
EEA or Switzerland). \\n\\n\\nBy clicking \u201CI Accept\u201D below or by using
or distributing any portion or element of the Llama Materials,\\nyou agree to
be bound by this Agreement.\\n\\n\\n1. License Rights and Redistribution.\\n\\n
\ a. Grant of Rights. You are granted a non-exclusive, worldwide, \\nnon-transferable
and royalty-free limited license under Meta\u2019s intellectual property or
other rights \\nowned by Meta embodied in the Llama Materials to use, reproduce,
distribute, copy, create derivative works \\nof, and make modifications to the
Llama Materials. \\n\\n b. Redistribution and Use. \\n\\n i. If
you distribute or make available the Llama Materials (or any derivative works
thereof), \\nor a product or service (including another AI model) that contains
any of them, you shall (A) provide\\na copy of this Agreement with any such
Llama Materials; and (B) prominently display \u201CBuilt with Llama\u201D\\non
a related website, user interface, blogpost, about page, or product documentation.
If you use the\\nLlama Materials or any outputs or results of the Llama Materials
to create, train, fine tune, or\\notherwise improve an AI model, which is distributed
or made available, you shall also include \u201CLlama\u201D\\nat the beginning
of any such AI model name.\\n\\n ii. If you receive Llama Materials,
or any derivative works thereof, from a Licensee as part\\nof an integrated
end user product, then Section 2 of this Agreement will not apply to you. \\n\\n
\ iii. You must retain in all copies of the Llama Materials that you distribute
the \\nfollowing attribution notice within a \u201CNotice\u201D text file distributed
as a part of such copies: \\n\u201CLlama 3.2 is licensed under the Llama 3.2
Community License, Copyright \xA9 Meta Platforms,\\nInc. All Rights Reserved.\u201D\\n\\n
\ iv. Your use of the Llama Materials must comply with applicable laws
and regulations\\n(including trade compliance laws and regulations) and adhere
to the Acceptable Use Policy for\\nthe Llama Materials (available at https://www.llama.com/llama3_2/use-policy),
which is hereby \\nincorporated by reference into this Agreement.\\n \\n2.
Additional Commercial Terms. If, on the Llama 3.2 version release date, the
monthly active users\\nof the products or services made available by or for
Licensee, or Licensee\u2019s affiliates, \\nis greater than 700 million monthly
active users in the preceding calendar month, you must request \\na license
from Meta, which Meta may grant to you in its sole discretion, and you are not
authorized to\\nexercise any of the rights under this Agreement unless or until
Meta otherwise expressly grants you such rights.\\n\\n3. Disclaimer of Warranty.
UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND \\nRESULTS
THEREFROM ARE PROVIDED ON AN \u201CAS IS\u201D BASIS, WITHOUT WARRANTIES OF
ANY KIND, AND META DISCLAIMS\\nALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND
IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\\nOF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE\\nFOR
DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS
AND ASSUME ANY RISKS ASSOCIATED\\nWITH YOUR USE OF THE LLAMA MATERIALS AND ANY
OUTPUT AND RESULTS.\\n\\n4. Limitation of Liability. IN NO EVENT WILL META OR
ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, \\nWHETHER IN CONTRACT,
TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT,
\\nFOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,
EXEMPLARY OR PUNITIVE DAMAGES, EVEN \\nIF META OR ITS AFFILIATES HAVE BEEN ADVISED
OF THE POSSIBILITY OF ANY OF THE FOREGOING.\\n\\n5. Intellectual Property.\\n\\n
\ a. No trademark licenses are granted under this Agreement, and in connection
with the Llama Materials, \\nneither Meta nor Licensee may use any name or mark
owned by or associated with the other or any of its affiliates, \\nexcept as
required for reasonable and customary use in describing and redistributing the
Llama Materials or as \\nset forth in this Section 5(a). Meta hereby grants
you a license to use \u201CLlama\u201D (the \u201CMark\u201D) solely as required
\\nto comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s
brand guidelines (currently accessible \\nat https://about.meta.com/brand/resources/meta/company-brand/).
All goodwill arising out of your use of the Mark \\nwill inure to the benefit
of Meta.\\n\\n b. Subject to Meta\u2019s ownership of Llama Materials and
derivatives made by or for Meta, with respect to any\\n derivative works
and modifications of the Llama Materials that are made by you, as between you
and Meta,\\n you are and will be the owner of such derivative works and modifications.\\n\\n
\ c. If you institute litigation or other proceedings against Meta or any
entity (including a cross-claim or\\n counterclaim in a lawsuit) alleging
that the Llama Materials or Llama 3.2 outputs or results, or any portion\\n
\ of any of the foregoing, constitutes infringement of intellectual property
or other rights owned or licensable\\n by you, then any licenses granted
to you under this Agreement shall terminate as of the date such litigation or\\n
\ claim is filed or instituted. You will indemnify and hold harmless Meta
from and against any claim by any third\\n party arising out of or related
to your use or distribution of the Llama Materials.\\n\\n6. Term and Termination.
The term of this Agreement will commence upon your acceptance of this Agreement
or access\\nto the Llama Materials and will continue in full force and effect
until terminated in accordance with the terms\\nand conditions herein. Meta
may terminate this Agreement if you are in breach of any term or condition of
this\\nAgreement. Upon termination of this Agreement, you shall delete and cease
use of the Llama Materials. Sections 3,\\n4 and 7 shall survive the termination
of this Agreement. \\n\\n7. Governing Law and Jurisdiction. This Agreement will
be governed and construed under the laws of the State of \\nCalifornia without
regard to choice of law principles, and the UN Convention on Contracts for the
International\\nSale of Goods does not apply to this Agreement. The courts of
California shall have exclusive jurisdiction of\\nany dispute arising out of
this Agreement.\\n**Llama 3.2** **Acceptable Use Policy**\\n\\nMeta is committed
to promoting safe and fair use of its tools and features, including Llama 3.2.
If you access or use Llama 3.2, you agree to this Acceptable Use Policy (\u201C**Policy**\u201D).
The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\\n\\n**Prohibited
Uses**\\n\\nWe want everyone to use Llama 3.2 safely and responsibly. You agree
you will not use, or allow others to use, Llama 3.2 to:\\n\\n\\n\\n1. Violate
the law or others\u2019 rights, including to:\\n 1. Engage in, promote, generate,
contribute to, encourage, plan, incite, or further illegal or unlawful activity
or content, such as:\\n 1. Violence or terrorism\\n 2. Exploitation
or harm to children, including the solicitation, creation, acquisition, or dissemination
of child exploitative content or failure to report Child Sexual Abuse Material\\n
\ 3. Human trafficking, exploitation, and sexual violence\\n 4.
The illegal distribution of information or materials to minors, including obscene
materials, or failure to employ legally required age-gating in connection with
such information or materials.\\n 5. Sexual solicitation\\n 6.
Any other criminal activity\\n 1. Engage in, promote, incite, or facilitate
the harassment, abuse, threatening, or bullying of individuals or groups of
individuals\\n 2. Engage in, promote, incite, or facilitate discrimination
or other unlawful or harmful conduct in the provision of employment, employment
benefits, credit, housing, other economic benefits, or other essential goods
and services\\n 3. Engage in the unauthorized or unlicensed practice of any
profession including, but not limited to, financial, legal, medical/health,
or related professional practices\\n 4. Collect, process, disclose, generate,
or infer private or sensitive information about individuals, including information
about individuals\u2019 identity, health, or demographic information, unless
you have obtained the right to do so in accordance with applicable law\\n 5.
Engage in or facilitate any action or generate any content that infringes, misappropriates,
or otherwise violates any third-party rights, including the outputs or results
of any products or services using the Llama Materials\\n 6. Create, generate,
or facilitate the creation of malicious code, malware, computer viruses or do
anything else that could disable, overburden, interfere with or impair the proper
working, integrity, operation or appearance of a website or computer system\\n
\ 7. Engage in any action, or facilitate any action, to intentionally circumvent
or remove usage restrictions or other safety measures, or to enable functionality
disabled by Meta\\n2. Engage in, promote, incite, facilitate, or assist in the
planning or development of activities that present a risk of death or bodily
harm to individuals, including use of Llama 3.2 related to the following:\\n
\ 8. Military, warfare, nuclear industries or applications, espionage, use
for materials or activities that are subject to the International Traffic Arms
Regulations (ITAR) maintained by the United States Department of State or to
the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons
Convention Implementation Act of 1997\\n 9. Guns and illegal weapons (including
weapon development)\\n 10. Illegal drugs and regulated/controlled substances\\n
\ 11. Operation of critical infrastructure, transportation technologies, or
heavy machinery\\n 12. Self-harm or harm to others, including suicide, cutting,
and eating disorders\\n 13. Any content intended to incite or promote violence,
abuse, or any infliction of bodily harm to an individual\\n3. Intentionally
deceive or mislead others, including use of Llama 3.2 related to the following:\\n
\ 14. Generating, promoting, or furthering fraud or the creation or promotion
of disinformation\\n 15. Generating, promoting, or furthering defamatory
content, including the creation of defamatory statements, images, or other content\\n
\ 16. Generating, promoting, or further distributing spam\\n 17. Impersonating
another individual without consent, authorization, or legal right\\n 18.
Representing that the use of Llama 3.2 or outputs are human-generated\\n 19.
Generating or facilitating false online engagement, including fake reviews and
other means of fake online engagement\\n4. Fail to appropriately disclose to
end users any known dangers of your AI system\\n5. Interact with third party
tools, models, or software designed to generate unlawful content or engage in
unlawful or harmful conduct and/or represent that the outputs of such tools,
models, or software are associated with Meta or Llama 3.2\\n\\nWith respect
to any multimodal models included in Llama 3.2, the rights granted under Section
1(a) of the Llama 3.2 Community License Agreement are not being granted to you
if you are an individual domiciled in, or a company with a principal place of
business in, the European Union. This restriction does not apply to end users
of a product or service that incorporates any such multimodal models.\\n\\nPlease
report any violation of this Policy, software \u201Cbug,\u201D or other problems
that could lead to a violation of this Policy through one of the following means:\\n\\n\\n\\n*
Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues\\u0026h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\\n*
Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\\n*
Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\\n*
Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama
3.2: LlamaUseReport@meta.com\",\"modelfile\":\"# Modelfile generated by \\\"ollama
show\\\"\\n# To build a new Modelfile based on this, replace FROM with:\\n#
FROM llama3.2:3b\\n\\nFROM /Users/joaomoura/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff\\nTEMPLATE
\\\"\\\"\\\"\\u003c|start_header_id|\\u003esystem\\u003c|end_header_id|\\u003e\\n\\nCutting
Knowledge Date: December 2023\\n\\n{{ if .System }}{{ .System }}\\n{{- end }}\\n{{-
if .Tools }}When you receive a tool call response, use the output to format
an answer to the original user question.\\n\\nYou are a helpful assistant with
tool calling capabilities.\\n{{- end }}\\u003c|eot_id|\\u003e\\n{{- range $i,
$_ := .Messages }}\\n{{- $last := eq (len (slice $.Messages $i)) 1 }}\\n{{-
if eq .Role \\\"user\\\" }}\\u003c|start_header_id|\\u003euser\\u003c|end_header_id|\\u003e\\n{{-
if and $.Tools $last }}\\n\\nGiven the following functions, please respond with
a JSON for a function call with its proper arguments that best answers the given
prompt.\\n\\nRespond in the format {\\\"name\\\": function name, \\\"parameters\\\":
dictionary of argument name and its value}. Do not use variables.\\n\\n{{ range
$.Tools }}\\n{{- . }}\\n{{ end }}\\n{{ .Content }}\\u003c|eot_id|\\u003e\\n{{-
else }}\\n\\n{{ .Content }}\\u003c|eot_id|\\u003e\\n{{- end }}{{ if $last }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n\\n{{
end }}\\n{{- else if eq .Role \\\"assistant\\\" }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n{{-
if .ToolCalls }}\\n{{ range .ToolCalls }}\\n{\\\"name\\\": \\\"{{ .Function.Name
}}\\\", \\\"parameters\\\": {{ .Function.Arguments }}}{{ end }}\\n{{- else }}\\n\\n{{
.Content }}\\n{{- end }}{{ if not $last }}\\u003c|eot_id|\\u003e{{ end }}\\n{{-
else if eq .Role \\\"tool\\\" }}\\u003c|start_header_id|\\u003eipython\\u003c|end_header_id|\\u003e\\n\\n{{
.Content }}\\u003c|eot_id|\\u003e{{ if $last }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n\\n{{
end }}\\n{{- end }}\\n{{- end }}\\\"\\\"\\\"\\nPARAMETER stop \\u003c|start_header_id|\\u003e\\nPARAMETER
stop \\u003c|end_header_id|\\u003e\\nPARAMETER stop \\u003c|eot_id|\\u003e\\nLICENSE
\\\"LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\\nLlama 3.2 Version Release Date:
September 25, 2024\\n\\n\u201CAgreement\u201D means the terms and conditions
for use, reproduction, distribution \\nand modification of the Llama Materials
set forth herein.\\n\\n\u201CDocumentation\u201D means the specifications, manuals
and documentation accompanying Llama 3.2\\ndistributed by Meta at https://llama.meta.com/doc/overview.\\n\\n\u201CLicensee\u201D
or \u201Cyou\u201D means you, or your employer or any other person or entity
(if you are \\nentering into this Agreement on such person or entity\u2019s
behalf), of the age required under\\napplicable laws, rules or regulations to
provide legal consent and that has legal authority\\nto bind your employer or
such other person or entity if you are entering in this Agreement\\non their
behalf.\\n\\n\u201CLlama 3.2\u201D means the foundational large language models
and software and algorithms, including\\nmachine-learning model code, trained
model weights, inference-enabling code, training-enabling code,\\nfine-tuning
enabling code and other elements of the foregoing distributed by Meta at \\nhttps://www.llama.com/llama-downloads.\\n\\n\u201CLlama
Materials\u201D means, collectively, Meta\u2019s proprietary Llama 3.2 and Documentation
(and \\nany portion thereof) made available under this Agreement.\\n\\n\u201CMeta\u201D
or \u201Cwe\u201D means Meta Platforms Ireland Limited (if you are located in
or, \\nif you are an entity, your principal place of business is in the EEA
or Switzerland) \\nand Meta Platforms, Inc. (if you are located outside of the
EEA or Switzerland). \\n\\n\\nBy clicking \u201CI Accept\u201D below or by using
or distributing any portion or element of the Llama Materials,\\nyou agree to
be bound by this Agreement.\\n\\n\\n1. License Rights and Redistribution.\\n\\n
\ a. Grant of Rights. You are granted a non-exclusive, worldwide, \\nnon-transferable
and royalty-free limited license under Meta\u2019s intellectual property or
other rights \\nowned by Meta embodied in the Llama Materials to use, reproduce,
distribute, copy, create derivative works \\nof, and make modifications to the
Llama Materials. \\n\\n b. Redistribution and Use. \\n\\n i. If
you distribute or make available the Llama Materials (or any derivative works
thereof), \\nor a product or service (including another AI model) that contains
any of them, you shall (A) provide\\na copy of this Agreement with any such
Llama Materials; and (B) prominently display \u201CBuilt with Llama\u201D\\non
a related website, user interface, blogpost, about page, or product documentation.
If you use the\\nLlama Materials or any outputs or results of the Llama Materials
to create, train, fine tune, or\\notherwise improve an AI model, which is distributed
or made available, you shall also include \u201CLlama\u201D\\nat the beginning
of any such AI model name.\\n\\n ii. If you receive Llama Materials,
or any derivative works thereof, from a Licensee as part\\nof an integrated
end user product, then Section 2 of this Agreement will not apply to you. \\n\\n
\ iii. You must retain in all copies of the Llama Materials that you distribute
the \\nfollowing attribution notice within a \u201CNotice\u201D text file distributed
as a part of such copies: \\n\u201CLlama 3.2 is licensed under the Llama 3.2
Community License, Copyright \xA9 Meta Platforms,\\nInc. All Rights Reserved.\u201D\\n\\n
\ iv. Your use of the Llama Materials must comply with applicable laws
and regulations\\n(including trade compliance laws and regulations) and adhere
to the Acceptable Use Policy for\\nthe Llama Materials (available at https://www.llama.com/llama3_2/use-policy),
which is hereby \\nincorporated by reference into this Agreement.\\n \\n2.
Additional Commercial Terms. If, on the Llama 3.2 version release date, the
monthly active users\\nof the products or services made available by or for
Licensee, or Licensee\u2019s affiliates, \\nis greater than 700 million monthly
active users in the preceding calendar month, you must request \\na license
from Meta, which Meta may grant to you in its sole discretion, and you are not
authorized to\\nexercise any of the rights under this Agreement unless or until
Meta otherwise expressly grants you such rights.\\n\\n3. Disclaimer of Warranty.
UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND \\nRESULTS
THEREFROM ARE PROVIDED ON AN \u201CAS IS\u201D BASIS, WITHOUT WARRANTIES OF
ANY KIND, AND META DISCLAIMS\\nALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND
IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\\nOF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE\\nFOR
DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS
AND ASSUME ANY RISKS ASSOCIATED\\nWITH YOUR USE OF THE LLAMA MATERIALS AND ANY
OUTPUT AND RESULTS.\\n\\n4. Limitation of Liability. IN NO EVENT WILL META OR
ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, \\nWHETHER IN CONTRACT,
TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT,
\\nFOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,
EXEMPLARY OR PUNITIVE DAMAGES, EVEN \\nIF META OR ITS AFFILIATES HAVE BEEN ADVISED
OF THE POSSIBILITY OF ANY OF THE FOREGOING.\\n\\n5. Intellectual Property.\\n\\n
\ a. No trademark licenses are granted under this Agreement, and in connection
with the Llama Materials, \\nneither Meta nor Licensee may use any name or mark
owned by or associated with the other or any of its affiliates, \\nexcept as
required for reasonable and customary use in describing and redistributing the
Llama Materials or as \\nset forth in this Section 5(a). Meta hereby grants
you a license to use \u201CLlama\u201D (the \u201CMark\u201D) solely as required
\\nto comply with the last sentence of Section 1.b.i. You will comply with Meta\u2019s
brand guidelines (currently accessible \\nat https://about.meta.com/brand/resources/meta/company-brand/).
All goodwill arising out of your use of the Mark \\nwill inure to the benefit
of Meta.\\n\\n b. Subject to Meta\u2019s ownership of Llama Materials and
derivatives made by or for Meta, with respect to any\\n derivative works
and modifications of the Llama Materials that are made by you, as between you
and Meta,\\n you are and will be the owner of such derivative works and modifications.\\n\\n
\ c. If you institute litigation or other proceedings against Meta or any
entity (including a cross-claim or\\n counterclaim in a lawsuit) alleging
that the Llama Materials or Llama 3.2 outputs or results, or any portion\\n
\ of any of the foregoing, constitutes infringement of intellectual property
or other rights owned or licensable\\n by you, then any licenses granted
to you under this Agreement shall terminate as of the date such litigation or\\n
\ claim is filed or instituted. You will indemnify and hold harmless Meta
from and against any claim by any third\\n party arising out of or related
to your use or distribution of the Llama Materials.\\n\\n6. Term and Termination.
The term of this Agreement will commence upon your acceptance of this Agreement
or access\\nto the Llama Materials and will continue in full force and effect
until terminated in accordance with the terms\\nand conditions herein. Meta
may terminate this Agreement if you are in breach of any term or condition of
this\\nAgreement. Upon termination of this Agreement, you shall delete and cease
use of the Llama Materials. Sections 3,\\n4 and 7 shall survive the termination
of this Agreement. \\n\\n7. Governing Law and Jurisdiction. This Agreement will
be governed and construed under the laws of the State of \\nCalifornia without
regard to choice of law principles, and the UN Convention on Contracts for the
International\\nSale of Goods does not apply to this Agreement. The courts of
California shall have exclusive jurisdiction of\\nany dispute arising out of
this Agreement.\\\"\\nLICENSE \\\"**Llama 3.2** **Acceptable Use Policy**\\n\\nMeta
is committed to promoting safe and fair use of its tools and features, including
Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use
Policy (\u201C**Policy**\u201D). The most recent copy of this policy can be
found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\\n\\n**Prohibited
Uses**\\n\\nWe want everyone to use Llama 3.2 safely and responsibly. You agree
you will not use, or allow others to use, Llama 3.2 to:\\n\\n\\n\\n1. Violate
the law or others\u2019 rights, including to:\\n 1. Engage in, promote, generate,
contribute to, encourage, plan, incite, or further illegal or unlawful activity
or content, such as:\\n 1. Violence or terrorism\\n 2. Exploitation
or harm to children, including the solicitation, creation, acquisition, or dissemination
of child exploitative content or failure to report Child Sexual Abuse Material\\n
\ 3. Human trafficking, exploitation, and sexual violence\\n 4.
The illegal distribution of information or materials to minors, including obscene
materials, or failure to employ legally required age-gating in connection with
such information or materials.\\n 5. Sexual solicitation\\n 6.
Any other criminal activity\\n 1. Engage in, promote, incite, or facilitate
the harassment, abuse, threatening, or bullying of individuals or groups of
individuals\\n 2. Engage in, promote, incite, or facilitate discrimination
or other unlawful or harmful conduct in the provision of employment, employment
benefits, credit, housing, other economic benefits, or other essential goods
and services\\n 3. Engage in the unauthorized or unlicensed practice of any
profession including, but not limited to, financial, legal, medical/health,
or related professional practices\\n 4. Collect, process, disclose, generate,
or infer private or sensitive information about individuals, including information
about individuals\u2019 identity, health, or demographic information, unless
you have obtained the right to do so in accordance with applicable law\\n 5.
Engage in or facilitate any action or generate any content that infringes, misappropriates,
or otherwise violates any third-party rights, including the outputs or results
of any products or services using the Llama Materials\\n 6. Create, generate,
or facilitate the creation of malicious code, malware, computer viruses or do
anything else that could disable, overburden, interfere with or impair the proper
working, integrity, operation or appearance of a website or computer system\\n
\ 7. Engage in any action, or facilitate any action, to intentionally circumvent
or remove usage restrictions or other safety measures, or to enable functionality
disabled by Meta\\n2. Engage in, promote, incite, facilitate, or assist in the
planning or development of activities that present a risk of death or bodily
harm to individuals, including use of Llama 3.2 related to the following:\\n
\ 8. Military, warfare, nuclear industries or applications, espionage, use
for materials or activities that are subject to the International Traffic Arms
Regulations (ITAR) maintained by the United States Department of State or to
the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons
Convention Implementation Act of 1997\\n 9. Guns and illegal weapons (including
weapon development)\\n 10. Illegal drugs and regulated/controlled substances\\n
\ 11. Operation of critical infrastructure, transportation technologies, or
heavy machinery\\n 12. Self-harm or harm to others, including suicide, cutting,
and eating disorders\\n 13. Any content intended to incite or promote violence,
abuse, or any infliction of bodily harm to an individual\\n3. Intentionally
deceive or mislead others, including use of Llama 3.2 related to the following:\\n
\ 14. Generating, promoting, or furthering fraud or the creation or promotion
of disinformation\\n 15. Generating, promoting, or furthering defamatory
content, including the creation of defamatory statements, images, or other content\\n
\ 16. Generating, promoting, or further distributing spam\\n 17. Impersonating
another individual without consent, authorization, or legal right\\n 18.
Representing that the use of Llama 3.2 or outputs are human-generated\\n 19.
Generating or facilitating false online engagement, including fake reviews and
other means of fake online engagement\\n4. Fail to appropriately disclose to
end users any known dangers of your AI system\\n5. Interact with third party
tools, models, or software designed to generate unlawful content or engage in
unlawful or harmful conduct and/or represent that the outputs of such tools,
models, or software are associated with Meta or Llama 3.2\\n\\nWith respect
to any multimodal models included in Llama 3.2, the rights granted under Section
1(a) of the Llama 3.2 Community License Agreement are not being granted to you
if you are an individual domiciled in, or a company with a principal place of
business in, the European Union. This restriction does not apply to end users
of a product or service that incorporates any such multimodal models.\\n\\nPlease
report any violation of this Policy, software \u201Cbug,\u201D or other problems
that could lead to a violation of this Policy through one of the following means:\\n\\n\\n\\n*
Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues\\u0026h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\\n*
Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\\n*
Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\\n*
Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama
3.2: LlamaUseReport@meta.com\\\"\\n\",\"parameters\":\"stop \\\"\\u003c|start_header_id|\\u003e\\\"\\nstop
\ \\\"\\u003c|end_header_id|\\u003e\\\"\\nstop \\\"\\u003c|eot_id|\\u003e\\\"\",\"template\":\"\\u003c|start_header_id|\\u003esystem\\u003c|end_header_id|\\u003e\\n\\nCutting
Knowledge Date: December 2023\\n\\n{{ if .System }}{{ .System }}\\n{{- end }}\\n{{-
if .Tools }}When you receive a tool call response, use the output to format
an answer to the original user question.\\n\\nYou are a helpful assistant with
tool calling capabilities.\\n{{- end }}\\u003c|eot_id|\\u003e\\n{{- range $i,
$_ := .Messages }}\\n{{- $last := eq (len (slice $.Messages $i)) 1 }}\\n{{-
if eq .Role \\\"user\\\" }}\\u003c|start_header_id|\\u003euser\\u003c|end_header_id|\\u003e\\n{{-
if and $.Tools $last }}\\n\\nGiven the following functions, please respond with
a JSON for a function call with its proper arguments that best answers the given
prompt.\\n\\nRespond in the format {\\\"name\\\": function name, \\\"parameters\\\":
dictionary of argument name and its value}. Do not use variables.\\n\\n{{ range
$.Tools }}\\n{{- . }}\\n{{ end }}\\n{{ .Content }}\\u003c|eot_id|\\u003e\\n{{-
else }}\\n\\n{{ .Content }}\\u003c|eot_id|\\u003e\\n{{- end }}{{ if $last }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n\\n{{
end }}\\n{{- else if eq .Role \\\"assistant\\\" }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n{{-
if .ToolCalls }}\\n{{ range .ToolCalls }}\\n{\\\"name\\\": \\\"{{ .Function.Name
}}\\\", \\\"parameters\\\": {{ .Function.Arguments }}}{{ end }}\\n{{- else }}\\n\\n{{
.Content }}\\n{{- end }}{{ if not $last }}\\u003c|eot_id|\\u003e{{ end }}\\n{{-
else if eq .Role \\\"tool\\\" }}\\u003c|start_header_id|\\u003eipython\\u003c|end_header_id|\\u003e\\n\\n{{
.Content }}\\u003c|eot_id|\\u003e{{ if $last }}\\u003c|start_header_id|\\u003eassistant\\u003c|end_header_id|\\u003e\\n\\n{{
end }}\\n{{- end }}\\n{{- end }}\",\"details\":{\"parent_model\":\"\",\"format\":\"gguf\",\"family\":\"llama\",\"families\":[\"llama\"],\"parameter_size\":\"3.2B\",\"quantization_level\":\"Q4_K_M\"},\"model_info\":{\"general.architecture\":\"llama\",\"general.basename\":\"Llama-3.2\",\"general.file_type\":15,\"general.finetune\":\"Instruct\",\"general.languages\":[\"en\",\"de\",\"fr\",\"it\",\"pt\",\"hi\",\"es\",\"th\"],\"general.parameter_count\":3212749888,\"general.quantization_version\":2,\"general.size_label\":\"3B\",\"general.tags\":[\"facebook\",\"meta\",\"pytorch\",\"llama\",\"llama-3\",\"text-generation\"],\"general.type\":\"model\",\"llama.attention.head_count\":24,\"llama.attention.head_count_kv\":8,\"llama.attention.key_length\":128,\"llama.attention.layer_norm_rms_epsilon\":0.00001,\"llama.attention.value_length\":128,\"llama.block_count\":28,\"llama.context_length\":131072,\"llama.embedding_length\":3072,\"llama.feed_forward_length\":8192,\"llama.rope.dimension_count\":128,\"llama.rope.freq_base\":500000,\"llama.vocab_size\":128256,\"tokenizer.ggml.bos_token_id\":128000,\"tokenizer.ggml.eos_token_id\":128009,\"tokenizer.ggml.merges\":null,\"tokenizer.ggml.model\":\"gpt2\",\"tokenizer.ggml.pre\":\"llama-bpe\",\"tokenizer.ggml.token_type\":null,\"tokenizer.ggml.tokens\":null},\"modified_at\":\"2025-02-20T18:55:09.150577031-08:00\"}"
headers:
Content-Type:
- application/json; charset=utf-8
Date:
- Fri, 21 Feb 2025 02:57:55 GMT
Transfer-Encoding:
- chunked
http_version: HTTP/1.1
status_code: 200
version: 1

View File

@@ -46,7 +46,6 @@ def test_evaluate_training_data(converter_mock):
converter_mock.assert_has_calls(
[
mock.call(
agent=original_agent, # Add agent argument
llm=original_agent.llm,
text="Assess the quality of the training data based on the llm output, human feedback , and llm "
"output improved result.\n\nIteration: data1\nInitial Output:\nInitial output 1\n\nHuman Feedback:\nHuman feedback "

View File

@@ -197,101 +197,6 @@ def test_convert_with_instructions_success(
assert output.age == 50
@patch("crewai.utilities.converter.get_conversion_instructions")
@patch("crewai.utilities.converter.create_converter")
def test_convert_with_instructions_respects_use_system_prompt_false(
mock_create_converter, mock_get_instructions, mock_agent
):
"""
Test that convert_with_instructions does not use a system prompt
when agent.use_system_prompt is False and the LLM doesn't support function calling.
"""
mock_agent.use_system_prompt = False
mock_llm = MagicMock()
mock_llm.supports_function_calling.return_value = False
mock_agent.llm = mock_llm
mock_agent.function_calling_llm = None # Ensure fallback to agent.llm
mock_get_instructions.return_value = "Test Instructions"
mock_converter_instance = MagicMock(spec=Converter)
mock_converter_instance.agent = mock_agent # Set the agent on the mock converter
mock_converter_instance.llm = mock_llm
mock_converter_instance.instructions = "Test Instructions"
mock_converter_instance.text = "Some text"
mock_converter_instance.model = SimpleModel
converter = Converter(
agent=mock_agent,
llm=mock_llm,
text="Some text",
model=SimpleModel,
instructions="Test Instructions",
)
mock_create_converter.return_value = (
converter # This instance will be used by convert_with_instructions
)
converter.llm.call = MagicMock(return_value='{"name": "Mock Name", "age": 99}')
convert_with_instructions("Some text", SimpleModel, False, mock_agent)
converter.llm.call.assert_called_once()
call_args = converter.llm.call.call_args[0][0] # Get the 'messages' list argument
assert not any(msg.get("role") == "system" for msg in call_args)
user_message = next((msg for msg in call_args if msg.get("role") == "user"), None)
assert user_message is not None
assert "Test Instructions" in user_message["content"]
assert "Some text" in user_message["content"]
assert user_message["content"].startswith("Test Instructions\n\n")
@patch("crewai.utilities.converter.get_conversion_instructions")
@patch("crewai.utilities.converter.create_converter")
def test_convert_with_instructions_respects_use_system_prompt_true(
mock_create_converter, mock_get_instructions, mock_agent
):
"""
Test that convert_with_instructions uses a system prompt
when agent.use_system_prompt is True and the LLM doesn't support function calling.
"""
mock_agent.use_system_prompt = True # Explicitly True
mock_llm = MagicMock()
mock_llm.supports_function_calling.return_value = False
mock_agent.llm = mock_llm
mock_agent.function_calling_llm = None
mock_get_instructions.return_value = "Test Instructions"
converter = Converter(
agent=mock_agent,
llm=mock_llm,
text="Some text",
model=SimpleModel,
instructions="Test Instructions",
)
mock_create_converter.return_value = (
converter # This instance will be used by convert_with_instructions
)
converter.llm.call = MagicMock(return_value='{"name": "Mock Name", "age": 99}')
convert_with_instructions("Some text", SimpleModel, False, mock_agent)
converter.llm.call.assert_called_once()
call_args = converter.llm.call.call_args[0][0]
system_message = next(
(msg for msg in call_args if msg.get("role") == "system"), None
)
assert system_message is not None
assert system_message["content"] == "Test Instructions"
user_message = next((msg for msg in call_args if msg.get("role") == "user"), None)
assert user_message is not None
assert user_message["content"] == "Some text"
@patch("crewai.utilities.converter.create_converter")
@patch("crewai.utilities.converter.get_conversion_instructions")
def test_convert_with_instructions_failure(
@@ -429,10 +334,7 @@ def test_convert_with_instructions():
sample_text = "Name: Alice, Age: 30"
instructions = get_conversion_instructions(SimpleModel, llm)
mock_agent = Mock() # Add mock agent if not available
mock_agent.use_system_prompt = True # Default or set as needed
converter = Converter(
agent=mock_agent, # Add agent argument
llm=llm,
text=sample_text,
model=SimpleModel,
@@ -460,10 +362,7 @@ def test_converter_with_llama3_2_model():
llm = LLM(model="ollama/llama3.2:3b", base_url="http://localhost:11434")
sample_text = "Name: Alice Llama, Age: 30"
instructions = get_conversion_instructions(SimpleModel, llm)
mock_agent = Mock() # Add mock agent if not available
mock_agent.use_system_prompt = True # Default or set as needed
converter = Converter(
agent=mock_agent, # Add agent argument
llm=llm,
text=sample_text,
model=SimpleModel,
@@ -481,10 +380,7 @@ def test_converter_with_llama3_1_model():
llm = LLM(model="ollama/llama3.1", base_url="http://localhost:11434")
sample_text = "Name: Alice Llama, Age: 30"
instructions = get_conversion_instructions(SimpleModel, llm)
mock_agent = Mock() # Add mock agent if not available
mock_agent.use_system_prompt = True # Default or set as needed
converter = Converter(
agent=mock_agent, # Add agent argument
llm=llm,
text=sample_text,
model=SimpleModel,
@@ -509,11 +405,7 @@ def test_converter_with_nested_model():
sample_text = "Name: John Doe\nAge: 30\nAddress: 123 Main St, Anytown, 12345"
instructions = get_conversion_instructions(Person, llm)
mock_agent = Mock()
mock_agent.use_system_prompt = True
converter = Converter(
agent=mock_agent,
llm=llm,
text=sample_text,
model=Person,
@@ -539,10 +431,7 @@ def test_converter_error_handling():
sample_text = "Name: Alice, Age: 30"
instructions = get_conversion_instructions(SimpleModel, llm)
mock_agent = Mock() # Add mock agent if not available
mock_agent.use_system_prompt = True # Default or set as needed
converter = Converter(
agent=mock_agent, # Add agent argument
llm=llm,
text=sample_text,
model=SimpleModel,
@@ -567,11 +456,7 @@ def test_converter_retry_logic():
sample_text = "Name: Retry Alice, Age: 30"
instructions = get_conversion_instructions(SimpleModel, llm)
mock_agent = Mock()
mock_agent.use_system_prompt = True
converter = Converter(
agent=mock_agent,
llm=llm,
text=sample_text,
model=SimpleModel,
@@ -600,11 +485,7 @@ def test_converter_with_optional_fields():
sample_text = "Name: Bob, age: None"
instructions = get_conversion_instructions(OptionalModel, llm)
mock_agent = Mock()
mock_agent.use_system_prompt = True
converter = Converter(
agent=mock_agent,
llm=llm,
text=sample_text,
model=OptionalModel,
@@ -629,11 +510,7 @@ def test_converter_with_list_field():
sample_text = "Items: 1, 2, 3"
instructions = get_conversion_instructions(ListModel, llm)
mock_agent = Mock()
mock_agent.use_system_prompt = True
converter = Converter(
agent=mock_agent,
llm=llm,
text=sample_text,
model=ListModel,
@@ -666,11 +543,7 @@ def test_converter_with_enum():
sample_text = "Name: Alice, Color: Red"
instructions = get_conversion_instructions(EnumModel, llm)
mock_agent = Mock()
mock_agent.use_system_prompt = True
converter = Converter(
agent=mock_agent,
llm=llm,
text=sample_text,
model=EnumModel,
@@ -692,11 +565,7 @@ def test_converter_with_ambiguous_input():
sample_text = "Charlie is thirty years old"
instructions = get_conversion_instructions(SimpleModel, llm)
mock_agent = Mock()
mock_agent.use_system_prompt = True
converter = Converter(
agent=mock_agent,
llm=llm,
text=sample_text,
model=SimpleModel,
@@ -717,11 +586,7 @@ def test_converter_with_function_calling():
instructor = Mock()
instructor.to_pydantic.return_value = SimpleModel(name="Eve", age=35)
mock_agent = Mock()
mock_agent.use_system_prompt = True
converter = Converter(
agent=mock_agent,
llm=llm,
text="Name: Eve, Age: 35",
model=SimpleModel,

View File

@@ -29,14 +29,13 @@ def mock_knowledge_source():
"""
return StringKnowledgeSource(content=content)
@patch("crewai.knowledge.storage.knowledge_storage.chromadb")
@patch('crewai.knowledge.storage.knowledge_storage.chromadb')
def test_knowledge_included_in_planning(mock_chroma):
"""Test that verifies knowledge sources are properly included in planning."""
# Mock ChromaDB collection
mock_collection = mock_chroma.return_value.get_or_create_collection.return_value
mock_collection.add.return_value = None
# Create an agent with knowledge
agent = Agent(
role="AI Researcher",
@@ -46,14 +45,14 @@ def test_knowledge_included_in_planning(mock_chroma):
StringKnowledgeSource(
content="AI systems require careful training and validation."
)
],
]
)
# Create a task for the agent
task = Task(
description="Explain the basics of AI systems",
expected_output="A clear explanation of AI fundamentals",
agent=agent,
agent=agent
)
# Create a crew planner
@@ -63,29 +62,23 @@ def test_knowledge_included_in_planning(mock_chroma):
task_summary = planner._create_tasks_summary()
# Verify that knowledge is included in planning when present
assert (
"AI systems require careful training" in task_summary
), "Knowledge content should be present in task summary when knowledge exists"
assert (
'"agent_knowledge"' in task_summary
), "agent_knowledge field should be present in task summary when knowledge exists"
assert "AI systems require careful training" in task_summary, \
"Knowledge content should be present in task summary when knowledge exists"
assert '"agent_knowledge"' in task_summary, \
"agent_knowledge field should be present in task summary when knowledge exists"
# Verify that knowledge is properly formatted
assert isinstance(
task.agent.knowledge_sources, list
), "Knowledge sources should be stored in a list"
assert (
len(task.agent.knowledge_sources) > 0
), "At least one knowledge source should be present"
assert (
task.agent.knowledge_sources[0].content in task_summary
), "Knowledge source content should be included in task summary"
assert isinstance(task.agent.knowledge_sources, list), \
"Knowledge sources should be stored in a list"
assert len(task.agent.knowledge_sources) > 0, \
"At least one knowledge source should be present"
assert task.agent.knowledge_sources[0].content in task_summary, \
"Knowledge source content should be included in task summary"
# Verify that other expected components are still present
assert (
task.description in task_summary
), "Task description should be present in task summary"
assert (
task.expected_output in task_summary
), "Expected output should be present in task summary"
assert agent.role in task_summary, "Agent role should be present in task summary"
assert task.description in task_summary, \
"Task description should be present in task summary"
assert task.expected_output in task_summary, \
"Expected output should be present in task summary"
assert agent.role in task_summary, \
"Agent role should be present in task summary"

View File

@@ -100,7 +100,7 @@ class InternalCrewPlanner:
# Knowledge field should not be present when empty
assert '"agent_knowledge"' not in tasks_summary
@patch("crewai.knowledge.storage.knowledge_storage.chromadb")
@patch('crewai.knowledge.storage.knowledge_storage.chromadb')
def test_create_tasks_summary_with_knowledge_and_tools(self, mock_chroma):
"""Test task summary generation with both knowledge and tools present."""
# Mock ChromaDB collection
@@ -146,8 +146,8 @@ class InternalCrewPlanner:
tools=[tool1, tool2],
knowledge_sources=[
StringKnowledgeSource(content="Test knowledge content")
],
),
]
)
)
# Create planner with the new task

110 uv.lock generated
View File

@@ -1,42 +1,19 @@
version = 1
revision = 1
requires-python = ">=3.10, <3.13"
resolution-markers = [
"python_full_version < '3.11' and platform_system == 'Darwin' and sys_platform == 'darwin'",
"python_full_version < '3.11' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform == 'darwin'",
"(python_full_version < '3.11' and platform_machine != 'aarch64' and platform_system != 'Darwin' and sys_platform == 'darwin') or (python_full_version < '3.11' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform == 'darwin')",
"python_full_version < '3.11' and platform_machine == 'aarch64' and platform_system == 'Darwin' and sys_platform == 'linux'",
"python_full_version < '3.11' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform == 'linux'",
"python_full_version < '3.11' and platform_machine == 'aarch64' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform == 'linux'",
"(python_full_version < '3.11' and platform_machine != 'aarch64' and platform_system == 'Darwin' and sys_platform != 'darwin') or (python_full_version < '3.11' and platform_system == 'Darwin' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version < '3.11' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform != 'darwin' and sys_platform != 'linux'",
"(python_full_version < '3.11' and platform_machine != 'aarch64' and platform_system != 'Darwin' and sys_platform != 'darwin') or (python_full_version < '3.11' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version == '3.11.*' and platform_system == 'Darwin' and sys_platform == 'darwin'",
"python_full_version == '3.11.*' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform == 'darwin'",
"(python_full_version == '3.11.*' and platform_machine != 'aarch64' and platform_system != 'Darwin' and sys_platform == 'darwin') or (python_full_version == '3.11.*' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform == 'darwin')",
"python_full_version == '3.11.*' and platform_machine == 'aarch64' and platform_system == 'Darwin' and sys_platform == 'linux'",
"python_full_version == '3.11.*' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform == 'linux'",
"python_full_version == '3.11.*' and platform_machine == 'aarch64' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform == 'linux'",
"(python_full_version == '3.11.*' and platform_machine != 'aarch64' and platform_system == 'Darwin' and sys_platform != 'darwin') or (python_full_version == '3.11.*' and platform_system == 'Darwin' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version == '3.11.*' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform != 'darwin' and sys_platform != 'linux'",
"(python_full_version == '3.11.*' and platform_machine != 'aarch64' and platform_system != 'Darwin' and sys_platform != 'darwin') or (python_full_version == '3.11.*' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_system == 'Darwin' and sys_platform == 'darwin'",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform == 'darwin'",
"(python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine != 'aarch64' and platform_system != 'Darwin' and sys_platform == 'darwin') or (python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform == 'darwin')",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine == 'aarch64' and platform_system == 'Darwin' and sys_platform == 'linux'",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform == 'linux'",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine == 'aarch64' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform == 'linux'",
"(python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine != 'aarch64' and platform_system == 'Darwin' and sys_platform != 'darwin') or (python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_system == 'Darwin' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform != 'darwin' and sys_platform != 'linux'",
"(python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine != 'aarch64' and platform_system != 'Darwin' and sys_platform != 'darwin') or (python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12.4' and platform_system == 'Darwin' and sys_platform == 'darwin'",
"python_full_version >= '3.12.4' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform == 'darwin'",
"(python_full_version >= '3.12.4' and platform_machine != 'aarch64' and platform_system != 'Darwin' and sys_platform == 'darwin') or (python_full_version >= '3.12.4' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform == 'darwin')",
"python_full_version >= '3.12.4' and platform_machine == 'aarch64' and platform_system == 'Darwin' and sys_platform == 'linux'",
"python_full_version >= '3.12.4' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform == 'linux'",
"python_full_version >= '3.12.4' and platform_machine == 'aarch64' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform == 'linux'",
"(python_full_version >= '3.12.4' and platform_machine != 'aarch64' and platform_system == 'Darwin' and sys_platform != 'darwin') or (python_full_version >= '3.12.4' and platform_system == 'Darwin' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12.4' and platform_machine == 'aarch64' and platform_system == 'Linux' and sys_platform != 'darwin' and sys_platform != 'linux'",
"(python_full_version >= '3.12.4' and platform_machine != 'aarch64' and platform_system != 'Darwin' and sys_platform != 'darwin') or (python_full_version >= '3.12.4' and platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version < '3.11' and sys_platform == 'darwin'",
"python_full_version < '3.11' and platform_machine == 'aarch64' and sys_platform == 'linux'",
"(python_full_version < '3.11' and platform_machine != 'aarch64' and sys_platform == 'linux') or (python_full_version < '3.11' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version == '3.11.*' and sys_platform == 'darwin'",
"python_full_version == '3.11.*' and platform_machine == 'aarch64' and sys_platform == 'linux'",
"(python_full_version == '3.11.*' and platform_machine != 'aarch64' and sys_platform == 'linux') or (python_full_version == '3.11.*' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and sys_platform == 'darwin'",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine == 'aarch64' and sys_platform == 'linux'",
"(python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine != 'aarch64' and sys_platform == 'linux') or (python_full_version >= '3.12' and python_full_version < '3.12.4' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12.4' and sys_platform == 'darwin'",
"python_full_version >= '3.12.4' and platform_machine == 'aarch64' and sys_platform == 'linux'",
"(python_full_version >= '3.12.4' and platform_machine != 'aarch64' and sys_platform == 'linux') or (python_full_version >= '3.12.4' and sys_platform != 'darwin' and sys_platform != 'linux')",
]
[[package]]
@@ -344,7 +321,7 @@ name = "build"
version = "1.2.2.post1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "os_name == 'nt'" },
{ name = "colorama", marker = "(os_name == 'nt' and platform_machine != 'aarch64' and sys_platform == 'linux') or (os_name == 'nt' and sys_platform != 'darwin' and sys_platform != 'linux')" },
{ name = "importlib-metadata", marker = "python_full_version < '3.10.2'" },
{ name = "packaging" },
{ name = "pyproject-hooks" },
@@ -579,7 +556,7 @@ name = "click"
version = "8.1.8"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "platform_system == 'Windows'" },
{ name = "colorama", marker = "sys_platform == 'win32'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/b9/2e/0090cbf739cee7d23781ad4b89a9894a41538e4fcf4c31dcdd705b78eb8b/click-8.1.8.tar.gz", hash = "sha256:ed53c9d8990d83c2a27deae68e4ee337473f6330c040a31d4225c9574d16096a", size = 226593 }
wheels = [
@@ -630,7 +607,7 @@ wheels = [
[[package]]
name = "crewai"
version = "0.114.0"
version = "0.108.0"
source = { editable = "." }
dependencies = [
{ name = "appdirs" },
@@ -718,7 +695,7 @@ requires-dist = [
{ name = "blinker", specifier = ">=1.9.0" },
{ name = "chromadb", specifier = ">=0.5.23" },
{ name = "click", specifier = ">=8.1.7" },
{ name = "crewai-tools", marker = "extra == 'tools'", specifier = "~=0.40.1" },
{ name = "crewai-tools", marker = "extra == 'tools'", specifier = "~=0.38.0" },
{ name = "docling", marker = "extra == 'docling'", specifier = ">=2.12.0" },
{ name = "fastembed", marker = "extra == 'fastembed'", specifier = ">=0.4.1" },
{ name = "instructor", specifier = ">=1.3.3" },
@@ -745,6 +722,7 @@ requires-dist = [
{ name = "tomli-w", specifier = ">=1.1.0" },
{ name = "uv", specifier = ">=0.4.25" },
]
provides-extras = ["tools", "embeddings", "agentops", "fastembed", "pdfplumber", "pandas", "openpyxl", "mem0", "docling", "aisuite"]
[package.metadata.requires-dev]
dev = [
@@ -767,7 +745,7 @@ dev = [
[[package]]
name = "crewai-tools"
version = "0.40.1"
version = "0.38.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "chromadb" },
@@ -782,9 +760,9 @@ dependencies = [
{ name = "pytube" },
{ name = "requests" },
]
sdist = { url = "https://files.pythonhosted.org/packages/16/ff/0c16c9943ec1501b12fc72aca7815f191ffe94d5f1fe4e9c353ee8c4ad1d/crewai_tools-0.40.1.tar.gz", hash = "sha256:6af5040b2277df8fd592238a17bf584f95dcc9ef7766236534999c8a9e9d0b52", size = 744094 }
sdist = { url = "https://files.pythonhosted.org/packages/85/3f/d3b5697b4c6756cec65316c9ea9ccd9054f7b73670d1580befd3632ba031/crewai_tools-0.38.1.tar.gz", hash = "sha256:6abe75b3b339d53a9cf4e2d80124d863ff62a82b36753c30bec64318881876b2", size = 737620 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/35/05/619c00bae2dda038f0d218dd5197120c938e9c9ccef1b9e50cfb037486f6/crewai_tools-0.40.1-py3-none-any.whl", hash = "sha256:8f459f74dee64364bfdbc524c815c4afcfb9ed532b51e6b8b4f616398d46cf1e", size = 573286 },
{ url = "https://files.pythonhosted.org/packages/2b/2b/a6c9007647ffbb6a3c204b3ef26806030d6b041e3e012d4cec43c21335d6/crewai_tools-0.38.1-py3-none-any.whl", hash = "sha256:d9d3a88060f1f30c8f4ea044f6dd564a50d0a22b8a018a6fcec202b36246b9d8", size = 561414 },
]
[[package]]
@@ -2518,7 +2496,7 @@ version = "1.6.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "click" },
{ name = "colorama", marker = "platform_system == 'Windows'" },
{ name = "colorama", marker = "sys_platform == 'win32'" },
{ name = "ghp-import" },
{ name = "jinja2" },
{ name = "markdown" },
@@ -2699,7 +2677,7 @@ version = "2.10.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pygments" },
{ name = "pywin32", marker = "platform_system == 'Windows'" },
{ name = "pywin32", marker = "sys_platform == 'win32'" },
{ name = "tqdm" },
]
sdist = { url = "https://files.pythonhosted.org/packages/3a/93/80ac75c20ce54c785648b4ed363c88f148bf22637e10c9863db4fbe73e74/mpire-2.10.2.tar.gz", hash = "sha256:f66a321e93fadff34585a4bfa05e95bd946cf714b442f51c529038eb45773d97", size = 271270 }
@@ -2946,7 +2924,7 @@ name = "nvidia-cudnn-cu12"
version = "9.1.0.70"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "nvidia-cublas-cu12", marker = "(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'linux')" },
{ name = "nvidia-cublas-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/9f/fd/713452cd72343f682b1c7b9321e23829f00b842ceaedcda96e742ea0b0b3/nvidia_cudnn_cu12-9.1.0.70-py3-none-manylinux2014_x86_64.whl", hash = "sha256:165764f44ef8c61fcdfdfdbe769d687e06374059fbb388b6c89ecb0e28793a6f", size = 664752741 },
@@ -2973,9 +2951,9 @@ name = "nvidia-cusolver-cu12"
version = "11.4.5.107"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "nvidia-cublas-cu12", marker = "(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'linux')" },
{ name = "nvidia-cusparse-cu12", marker = "(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'linux')" },
{ name = "nvidia-nvjitlink-cu12", marker = "(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'linux')" },
{ name = "nvidia-cublas-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
{ name = "nvidia-cusparse-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
{ name = "nvidia-nvjitlink-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/bc/1d/8de1e5c67099015c834315e333911273a8c6aaba78923dd1d1e25fc5f217/nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl", hash = "sha256:8a7ec542f0412294b15072fa7dab71d31334014a69f953004ea7a118206fe0dd", size = 124161928 },
@@ -2986,7 +2964,7 @@ name = "nvidia-cusparse-cu12"
version = "12.1.0.106"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "nvidia-nvjitlink-cu12", marker = "(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'linux')" },
{ name = "nvidia-nvjitlink-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/65/5b/cfaeebf25cd9fdec14338ccb16f6b2c4c7fa9163aefcf057d86b9cc248bb/nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl", hash = "sha256:f3b50f42cf363f86ab21f720998517a659a48131e8d538dc02f8768237bd884c", size = 195958278 },
@@ -2997,7 +2975,6 @@ name = "nvidia-nccl-cu12"
version = "2.20.5"
source = { registry = "https://pypi.org/simple" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c1/bb/d09dda47c881f9ff504afd6f9ca4f502ded6d8fc2f572cacc5e39da91c28/nvidia_nccl_cu12-2.20.5-py3-none-manylinux2014_aarch64.whl", hash = "sha256:1fc150d5c3250b170b29410ba682384b14581db722b2531b0d8d33c595f33d01", size = 176238458 },
{ url = "https://files.pythonhosted.org/packages/4b/2a/0a131f572aa09f741c30ccd45a8e56316e8be8dfc7bc19bf0ab7cfef7b19/nvidia_nccl_cu12-2.20.5-py3-none-manylinux2014_x86_64.whl", hash = "sha256:057f6bf9685f75215d0c53bf3ac4a10b3e6578351de307abad9e18a99182af56", size = 176249402 },
]
@@ -3007,7 +2984,6 @@ version = "12.6.85"
source = { registry = "https://pypi.org/simple" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/9d/d7/c5383e47c7e9bf1c99d5bd2a8c935af2b6d705ad831a7ec5c97db4d82f4f/nvidia_nvjitlink_cu12-12.6.85-py3-none-manylinux2010_x86_64.manylinux_2_12_x86_64.whl", hash = "sha256:eedc36df9e88b682efe4309aa16b5b4e78c2407eac59e8c10a6a47535164369a", size = 19744971 },
{ url = "https://files.pythonhosted.org/packages/31/db/dc71113d441f208cdfe7ae10d4983884e13f464a6252450693365e166dcf/nvidia_nvjitlink_cu12-12.6.85-py3-none-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:cf4eaa7d4b6b543ffd69d6abfb11efdeb2db48270d94dfd3a452c24150829e41", size = 19270338 },
]
[[package]]
@@ -3525,7 +3501,7 @@ name = "portalocker"
version = "2.10.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pywin32", marker = "platform_system == 'Windows'" },
{ name = "pywin32", marker = "sys_platform == 'win32'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ed/d3/c6c64067759e87af98cc668c1cc75171347d0f1577fab7ca3749134e3cd4/portalocker-2.10.1.tar.gz", hash = "sha256:ef1bf844e878ab08aee7e40184156e1151f228f103aa5c6bd0724cc330960f8f", size = 40891 }
wheels = [
@@ -5032,19 +5008,19 @@ dependencies = [
{ name = "fsspec" },
{ name = "jinja2" },
{ name = "networkx" },
{ name = "nvidia-cublas-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-cuda-cupti-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-cuda-nvrtc-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-cuda-runtime-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-cudnn-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-cufft-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-curand-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-cusolver-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-cusparse-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-nccl-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-nvtx-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-cublas-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cuda-cupti-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cuda-nvrtc-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cuda-runtime-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cudnn-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cufft-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-curand-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cusolver-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cusparse-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-nccl-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-nvtx-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "sympy" },
{ name = "triton", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "triton", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "typing-extensions" },
]
wheels = [
@@ -5091,7 +5067,7 @@ name = "tqdm"
version = "4.66.5"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "platform_system == 'Windows'" },
{ name = "colorama", marker = "sys_platform == 'win32'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/58/83/6ba9844a41128c62e810fddddd72473201f3eacde02046066142a2d96cc5/tqdm-4.66.5.tar.gz", hash = "sha256:e1020aef2e5096702d8a025ac7d16b1577279c9d63f8375b63083e9a5f0fcbad", size = 169504 }
wheels = [
@@ -5133,7 +5109,7 @@ name = "triton"
version = "3.0.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "filelock", marker = "(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'linux')" },
{ name = "filelock", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/45/27/14cc3101409b9b4b9241d2ba7deaa93535a217a211c86c4cc7151fb12181/triton-3.0.0-1-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:e1efef76935b2febc365bfadf74bcb65a6f959a9872e5bddf44cc9e0adce1e1a", size = 209376304 },