mirror of https://github.com/crewAIInc/crewAI.git (synced 2026-01-07 15:18:29 +00:00)

Compare commits: devin/1746...devin/1735 (13 commits)

Commits in this comparison: edaa966e26, c38354c189, bdb32dd3e6, 83b4e0b4c0, 7c2b307217, ce4a730f76, 8871d9a6cd, a0c322a535, 86f58c95de, 99fe91586d, 0c2d23dfe0, 2433819c4f, 97fc44c930

README.md (158 lines changed)
@@ -4,7 +4,7 @@

# **CrewAI**

🤖 **CrewAI**: Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
🤖 **CrewAI**: Production-grade framework for orchestrating sophisticated AI agent systems. From simple automations to complex real-world applications, CrewAI provides precise control and deep customization. By fostering collaborative intelligence through flexible, production-ready architecture, CrewAI empowers agents to work together seamlessly, tackling complex business challenges with predictable, consistent results.

<h3>

@@ -22,13 +22,17 @@
- [Why CrewAI?](#why-crewai)
- [Getting Started](#getting-started)
- [Key Features](#key-features)
- [Understanding Flows and Crews](#understanding-flows-and-crews)
- [CrewAI vs LangGraph](#how-crewai-compares)
- [Examples](#examples)
- [Quick Tutorial](#quick-tutorial)
- [Write Job Descriptions](#write-job-descriptions)
- [Trip Planner](#trip-planner)
- [Stock Analysis](#stock-analysis)
- [Using Crews and Flows Together](#using-crews-and-flows-together)
- [Connecting Your Crew to a Model](#connecting-your-crew-to-a-model)
- [How CrewAI Compares](#how-crewai-compares)
- [Frequently Asked Questions (FAQ)](#frequently-asked-questions-faq)
- [Contribution](#contribution)
- [Telemetry](#telemetry)
- [License](#license)
@@ -36,10 +40,40 @@

## Why CrewAI?

The power of AI collaboration has too much to offer.
CrewAI is designed to enable AI agents to assume roles, share goals, and operate in a cohesive unit - much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions.
CrewAI is a standalone framework, built from the ground up without dependencies on Langchain or other agent frameworks. It's designed to enable AI agents to assume roles, share goals, and operate in a cohesive unit - much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions.

## Getting Started

### Learning Resources

Learn CrewAI through our comprehensive courses:
- [Multi AI Agent Systems with CrewAI](https://www.deeplearning.ai/short-courses/multi-ai-agent-systems-with-crewai/) - Master the fundamentals of multi-agent systems
- [Practical Multi AI Agents and Advanced Use Cases](https://www.deeplearning.ai/short-courses/practical-multi-ai-agents-and-advanced-use-cases-with-crewai/) - Deep dive into advanced implementations

### Understanding Flows and Crews

CrewAI offers two powerful, complementary approaches that work seamlessly together to build sophisticated AI applications:

1. **Crews**: Teams of AI agents with true autonomy and agency, working together to accomplish complex tasks through role-based collaboration. Crews enable:
   - Natural, autonomous decision-making between agents
   - Dynamic task delegation and collaboration
   - Specialized roles with defined goals and expertise
   - Flexible problem-solving approaches

2. **Flows**: Production-ready, event-driven workflows that deliver precise control over complex automations. Flows provide:
   - Fine-grained control over execution paths for real-world scenarios
   - Secure, consistent state management between tasks
   - Clean integration of AI agents with production Python code
   - Conditional branching for complex business logic

The true power of CrewAI emerges when combining Crews and Flows. This synergy allows you to:
- Build complex, production-grade applications
- Balance autonomy with precise control
- Handle sophisticated real-world scenarios
- Maintain clean, maintainable code structure

### Getting Started with Installation

To get started with CrewAI, follow these simple steps:

### 1. Installation
@@ -264,13 +298,16 @@ In addition to the sequential process, you can use the hierarchical process, whi

## Key Features

- **Role-Based Agent Design**: Customize agents with specific roles, goals, and tools.
- **Autonomous Inter-Agent Delegation**: Agents can autonomously delegate tasks and inquire amongst themselves, enhancing problem-solving efficiency.
- **Flexible Task Management**: Define tasks with customizable tools and assign them to agents dynamically.
- **Processes Driven**: Currently only supports `sequential` task execution and `hierarchical` processes, but more complex processes like consensual and autonomous are being worked on.
- **Save output as file**: Save the output of individual tasks as a file, so you can use it later.
- **Parse output as Pydantic or Json**: Parse the output of individual tasks as a Pydantic model or as a Json if you want to.
- **Works with Open Source Models**: Run your crew using Open AI or open source models refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models, even ones running locally!
**Note**: CrewAI is a standalone framework built from the ground up, without dependencies on Langchain or other agent frameworks.

- **Deep Customization**: Build sophisticated agents with full control over the system - from overriding inner prompts to accessing low-level APIs. Customize roles, goals, tools, and behaviors while maintaining clean abstractions.
- **Autonomous Inter-Agent Delegation**: Agents can autonomously delegate tasks and inquire amongst themselves, enabling complex problem-solving in real-world scenarios.
- **Flexible Task Management**: Define and customize tasks with granular control, from simple operations to complex multi-step processes.
- **Production-Grade Architecture**: Support for both high-level abstractions and low-level customization, with robust error handling and state management.
- **Predictable Results**: Ensure consistent, accurate outputs through programmatic guardrails, agent training capabilities, and flow-based execution control. See our [documentation on guardrails](https://docs.crewai.com/how-to/guardrails/) for implementation details.
- **Model Flexibility**: Run your crew using OpenAI or open source models with production-ready integrations. See [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) for detailed configuration options.
- **Event-Driven Flows**: Build complex, real-world workflows with precise control over execution paths, state management, and conditional logic.
- **Process Orchestration**: Achieve any workflow pattern through flows - from simple sequential and hierarchical processes to complex, custom orchestration patterns with conditional branching and parallel execution.


@@ -305,6 +342,98 @@ You can test different real life examples of AI crews in the [CrewAI-examples re

[![Stock Analysis](https://img.youtube.com/vi/e0Uj4yWdaAg/maxresdefault.jpg)](https://www.youtube.com/watch?v=e0Uj4yWdaAg "Stock Analysis")

### Using Crews and Flows Together

CrewAI's power truly shines when combining Crews with Flows to create sophisticated automation pipelines. Here's how you can orchestrate multiple Crews within a Flow:

```python
from crewai.flow.flow import Flow, listen, or_, router, start
from crewai import Agent, Crew, Process, Task
from pydantic import BaseModel

# Define structured state for precise control
class MarketState(BaseModel):
    sentiment: str = "neutral"
    confidence: float = 0.0
    recommendations: list = []

class AdvancedAnalysisFlow(Flow[MarketState]):
    @start()
    def fetch_market_data(self):
        # Demonstrate low-level control with structured state
        self.state.sentiment = "analyzing"
        return {"sector": "tech", "timeframe": "1W"}  # These parameters match the task description template

    @listen(fetch_market_data)
    def analyze_with_crew(self, market_data):
        # Show crew agency through specialized roles
        analyst = Agent(
            role="Senior Market Analyst",
            goal="Conduct deep market analysis with expert insight",
            backstory="You're a veteran analyst known for identifying subtle market patterns"
        )
        researcher = Agent(
            role="Data Researcher",
            goal="Gather and validate supporting market data",
            backstory="You excel at finding and correlating multiple data sources"
        )

        analysis_task = Task(
            description="Analyze {sector} sector data for the past {timeframe}",
            expected_output="Detailed market analysis with confidence score",
            agent=analyst
        )
        research_task = Task(
            description="Find supporting data to validate the analysis",
            expected_output="Corroborating evidence and potential contradictions",
            agent=researcher
        )

        # Demonstrate crew autonomy
        analysis_crew = Crew(
            agents=[analyst, researcher],
            tasks=[analysis_task, research_task],
            process=Process.sequential,
            verbose=True
        )
        return analysis_crew.kickoff(inputs=market_data)  # Pass market_data as named inputs

    @router(analyze_with_crew)
    def determine_next_steps(self):
        # Show flow control with conditional routing
        if self.state.confidence > 0.8:
            return "high_confidence"
        elif self.state.confidence > 0.5:
            return "medium_confidence"
        return "low_confidence"

    @listen("high_confidence")
    def execute_strategy(self):
        # Demonstrate complex decision making
        strategist = Agent(
            role="Strategy Expert",
            goal="Develop optimal market strategy",
            backstory="You turn market analysis into actionable plans"
        )
        strategy_crew = Crew(
            agents=[strategist],
            tasks=[
                Task(description="Create detailed strategy based on analysis",
                     expected_output="Step-by-step action plan",
                     agent=strategist)
            ]
        )
        return strategy_crew.kickoff()

    @listen(or_("medium_confidence", "low_confidence"))
    def request_additional_analysis(self):
        self.state.recommendations.append("Gather more data")
        return "Additional analysis required"
```

This example demonstrates how to:
1. Use Python code for basic data operations
2. Create and execute Crews as steps in your workflow
3. Use Flow decorators to manage the sequence of operations
4. Implement conditional branching based on Crew results
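
To run the pipeline end to end, instantiate the flow and call `kickoff()`. A minimal usage sketch, assuming only the flow defined above:

```python
# Instantiate and run the flow; kickoff() begins at the @start() method
# and returns the output of the last step that executed.
flow = AdvancedAnalysisFlow()
result = flow.kickoff()

print(result)
print(flow.state.sentiment, flow.state.recommendations)  # the typed state stays inspectable

```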
|
||||
|
||||
## Connecting Your Crew to a Model
|
||||
|
||||
CrewAI supports using various LLMs through a variety of connection options. By default your agents will use the OpenAI API when querying the model. However, there are several other ways to allow your agents to connect to models. For example, you can configure your agents to use a local model via the Ollama tool.
|
||||
@@ -313,9 +442,13 @@ Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-
|
||||
|
||||
## How CrewAI Compares
|
||||
|
||||
**CrewAI's Advantage**: CrewAI is built with production in mind. It offers the flexibility of Autogen's conversational agents and the structured process approach of ChatDev, but without the rigidity. CrewAI's processes are designed to be dynamic and adaptable, fitting seamlessly into both development and production workflows.
|
||||
**CrewAI's Advantage**: CrewAI combines autonomous agent intelligence with precise workflow control through its unique Crews and Flows architecture. The framework excels at both high-level orchestration and low-level customization, enabling complex, production-grade systems with granular control.
|
||||
|
||||
- **Autogen**: While Autogen does good in creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
|
||||
- **LangGraph**: While LangGraph provides a foundation for building agent workflows, its approach requires significant boilerplate code and complex state management patterns. The framework's tight coupling with LangChain can limit flexibility when implementing custom agent behaviors or integrating with external systems.
|
||||
|
||||
*P.S. CrewAI demonstrates significant performance advantages over LangGraph, executing 5.76x faster in certain cases like this QA task example ([see comparison](https://github.com/crewAIInc/crewAI-examples/tree/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/QA%20Agent)) while achieving higher evaluation scores with faster completion times in certain coding tasks, like in this example ([detailed analysis](https://github.com/crewAIInc/crewAI-examples/blob/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/Coding%20Assistant/coding_assistant_eval.ipynb)).*
|
||||
|
||||
- **Autogen**: While Autogen excels at creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
|
||||
|
||||
- **ChatDev**: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications.
|
||||
|
||||
@@ -440,5 +573,8 @@ A: CrewAI uses anonymous telemetry to collect usage data for improvement purpose
|
||||
### Q: Where can I find examples of CrewAI in action?
|
||||
A: You can find various real-life examples in the [CrewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), including trip planners, stock analysis tools, and more.
|
||||
|
||||
### Q: What is the difference between Crews and Flows?
|
||||
A: Crews and Flows serve different but complementary purposes in CrewAI. Crews are teams of AI agents working together to accomplish specific tasks through role-based collaboration, delivering accurate and predictable results. Flows, on the other hand, are event-driven workflows that can orchestrate both Crews and regular Python code, allowing you to build complex automation pipelines with secure state management and conditional execution paths.
|
||||
|
||||
### Q: How can I contribute to CrewAI?
|
||||
A: Contributions are welcome! You can fork the repository, create a new branch for your feature, add your improvement, and send a pull request. Check the Contribution section in the README for more details.
|
||||
|
||||
@@ -171,6 +171,58 @@ crewai reset-memories --knowledge
|
||||
|
||||
This is useful when you've updated your knowledge sources and want to ensure that the agents are using the most recent information.
|
||||
|
||||
## Agent-Specific Knowledge
|
||||
|
||||
While knowledge can be provided at the crew level using `crew.knowledge_sources`, individual agents can also have their own knowledge sources using the `knowledge_sources` parameter:
|
||||
|
||||
```python Code
|
||||
from crewai import Agent, Task, Crew
|
||||
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
|
||||
|
||||
# Create agent-specific knowledge about a product
|
||||
product_specs = StringKnowledgeSource(
|
||||
content="""The XPS 13 laptop features:
|
||||
- 13.4-inch 4K display
|
||||
- Intel Core i7 processor
|
||||
- 16GB RAM
|
||||
- 512GB SSD storage
|
||||
- 12-hour battery life""",
|
||||
metadata={"category": "product_specs"}
|
||||
)
|
||||
|
||||
# Create a support agent with product knowledge
|
||||
support_agent = Agent(
|
||||
role="Technical Support Specialist",
|
||||
goal="Provide accurate product information and support.",
|
||||
backstory="You are an expert on our laptop products and specifications.",
|
||||
knowledge_sources=[product_specs] # Agent-specific knowledge
|
||||
)
|
||||
|
||||
# Create a task that requires product knowledge
|
||||
support_task = Task(
|
||||
description="Answer this customer question: {question}",
|
||||
agent=support_agent
|
||||
)
|
||||
|
||||
# Create and run the crew
|
||||
crew = Crew(
|
||||
agents=[support_agent],
|
||||
tasks=[support_task]
|
||||
)
|
||||
|
||||
# Get answer about the laptop's specifications
|
||||
result = crew.kickoff(
|
||||
inputs={"question": "What is the storage capacity of the XPS 13?"}
|
||||
)
|
||||
```
|
||||
|
||||
<Info>
|
||||
Benefits of agent-specific knowledge:
|
||||
- Give agents specialized information for their roles
|
||||
- Maintain separation of concerns between agents
|
||||
- Combine with crew-level knowledge for layered information access
|
||||
</Info>
|
||||
|
||||
## Custom Knowledge Sources
|
||||
|
||||
CrewAI allows you to create custom knowledge sources for any type of data by extending the `BaseKnowledgeSource` class. Let's create a practical example that fetches and processes space news articles.
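
The article example itself is not part of this excerpt, so here is only a rough, hypothetical sketch of the pattern. The class name, API endpoint, and `_fetch_text` helper are illustrative assumptions; only `chunks`, `storage`, and `_save_documents()` are confirmed by the source changes shown later in this comparison, and `_chunk_text` is assumed to be a base-class helper:

```python Code
import requests
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource

class SpaceNewsKnowledgeSource(BaseKnowledgeSource):
    """Hypothetical source that pulls recent articles from a public news API."""

    api_endpoint: str = "https://api.spaceflightnewsapi.net/v4/articles"  # illustrative endpoint
    limit: int = 10

    def _fetch_text(self) -> str:
        # Helper of our own: fetch articles and flatten them into one text blob
        response = requests.get(self.api_endpoint, params={"limit": self.limit}, timeout=30)
        response.raise_for_status()
        articles = response.json().get("results", [])
        return "\n\n".join(f"{a.get('title', '')}\n{a.get('summary', '')}" for a in articles)

    def add(self) -> None:
        # Chunk the fetched text and persist it through the configured storage,
        # mirroring how the built-in sources populate self.chunks before saving.
        text = self._fetch_text()
        self.chunks.extend(self._chunk_text(text))  # _chunk_text: assumed base-class helper
        self._save_documents()
```

Depending on your version, `BaseKnowledgeSource` may declare additional abstract hooks (for example content validation), so check the base class before subclassing.
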
@@ -1,213 +0,0 @@
# Multiple Model Configuration in CrewAI

CrewAI now supports configuring multiple language models with different API keys and configurations. This feature allows you to:

1. Load-balance across multiple model deployments
2. Set up fallback models in case of rate limits or errors
3. Configure different routing strategies for model selection
4. Maintain fine-grained control over model selection and usage

## Basic Usage

You can configure multiple models at the agent level:

```python
from crewai import Agent

# Define model configurations
model_list = [
    {
        "model_name": "gpt-4o-mini",
        "litellm_params": {
            "model": "gpt-4o-mini",  # Required: model name must be specified here
            "api_key": "your-openai-api-key-1"
        }
    },
    {
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {
            "model": "gpt-3.5-turbo",  # Required: model name must be specified here
            "api_key": "your-openai-api-key-2"
        }
    },
    {
        "model_name": "claude-3-sonnet-20240229",
        "litellm_params": {
            "model": "claude-3-sonnet-20240229",  # Required: model name must be specified here
            "api_key": "your-anthropic-api-key"
        }
    }
]

# Create an agent with multiple model configurations
agent = Agent(
    role="Data Analyst",
    goal="Analyze the data and provide insights",
    backstory="You are an expert data analyst with years of experience.",
    model_list=model_list,
    routing_strategy="simple-shuffle"  # Optional routing strategy
)
```

## Routing Strategies

CrewAI supports the following routing strategies for precise control over model selection:

- `simple-shuffle`: Randomly selects a model from the list
- `least-busy`: Routes to the model with the least number of ongoing requests
- `usage-based`: Routes based on token usage across models
- `latency-based`: Routes to the model with the lowest latency
- `cost-based`: Routes to the model with the lowest cost

Example with latency-based routing:

```python
agent = Agent(
    role="Data Analyst",
    goal="Analyze the data and provide insights",
    backstory="You are an expert data analyst with years of experience.",
    model_list=model_list,
    routing_strategy="latency-based"
)
```

## Direct LLM Configuration

You can also configure multiple models directly with the LLM class for more flexibility:

```python
from crewai import LLM

llm = LLM(
    model="gpt-4o-mini",
    model_list=model_list,
    routing_strategy="simple-shuffle"
)
```

## Advanced Configuration

For more advanced configurations, you can specify additional parameters for each model to handle complex use cases:

```python
model_list = [
    {
        "model_name": "gpt-4o-mini",
        "litellm_params": {
            "model": "gpt-4o-mini",  # Required: model name must be specified here
            "api_key": "your-openai-api-key-1",
            "temperature": 0.7
        },
        "tpm": 100000,  # Tokens per minute limit
        "rpm": 1000  # Requests per minute limit
    },
    {
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {
            "model": "gpt-3.5-turbo",  # Required: model name must be specified here
            "api_key": "your-openai-api-key-2",
            "temperature": 0.5
        }
    }
]
```

## Error Handling and Troubleshooting

When working with multiple model configurations, you may encounter various issues. Here are some common problems and their solutions:

### Missing Required Parameters

**Problem**: Router initialization fails with an error about missing parameters.

**Solution**: Ensure each model configuration in `model_list` includes both `model_name` and `litellm_params` with the required `model` parameter:

```python
# Correct configuration
model_config = {
    "model_name": "gpt-4o-mini",  # Required
    "litellm_params": {
        "model": "gpt-4o-mini",  # Required
        "api_key": "your-api-key"
    }
}
```

### Invalid Routing Strategy

**Problem**: Error when specifying an unsupported routing strategy.

**Solution**: Use only the supported routing strategies:

```python
# Valid routing strategies
valid_strategies = [
    "simple-shuffle",
    "least-busy",
    "usage-based",
    "latency-based",
    "cost-based"
]
```

### API Key Authentication Errors

**Problem**: Authentication errors when making API calls.

**Solution**: Verify that all API keys are valid and have the necessary permissions:

```python
# Check environment variables first
import os
os.environ.get("OPENAI_API_KEY")  # Should be set if using OpenAI models

# Or explicitly provide in the configuration
model_list = [{
    "model_name": "gpt-4o-mini",
    "litellm_params": {
        "model": "gpt-4o-mini",
        "api_key": "valid-api-key-here"  # Ensure this is correct
    }
}]
```

### Rate Limit Handling

**Problem**: Encountering rate limits with multiple models.

**Solution**: Configure rate limits and implement fallback mechanisms:

```python
model_list = [
    {
        "model_name": "primary-model",
        "litellm_params": {"model": "primary-model", "api_key": "key1"},
        "rpm": 100  # Requests per minute
    },
    {
        "model_name": "fallback-model",
        "litellm_params": {"model": "fallback-model", "api_key": "key2"}
    }
]

# Configure with fallback
llm = LLM(
    model="primary-model",
    model_list=model_list,
    routing_strategy="least-busy"  # Will route to fallback when primary is busy
)
```

### Debugging Router Issues

If you're experiencing issues with the router, you can enable verbose logging to get more information:

```python
import litellm
litellm.set_verbose = True

# Then initialize your LLM
llm = LLM(model="gpt-4o-mini", model_list=model_list)
```

This feature leverages litellm's Router functionality under the hood, providing robust load balancing and fallback capabilities for your CrewAI agents. The implementation ensures predictability and consistency in model selection while maintaining security through proper API key management.
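
For reference, here is a rough sketch of that underlying litellm usage, assuming only the `Router(...)` and `router.completion(...)` calls that appear in the source changes later in this comparison; the model names and keys are placeholders:

```python
from litellm import Router

# The same model_list format shown above is passed straight through to litellm
router = Router(
    model_list=[
        {
            "model_name": "gpt-4o-mini",
            "litellm_params": {"model": "gpt-4o-mini", "api_key": "your-openai-api-key-1"},
        },
        {
            "model_name": "gpt-4o-mini",
            "litellm_params": {"model": "gpt-4o-mini", "api_key": "your-openai-api-key-2"},
        },
    ],
    routing_strategy="simple-shuffle",
)

# Router.completion mirrors litellm.completion, but selects a deployment first
response = router.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response["choices"][0]["message"]["content"])
```
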
@@ -1,10 +1,9 @@
import os
import shutil
import subprocess
from enum import Enum
from typing import Any, Dict, List, Literal, Optional, Union

from pydantic import Field, InstanceOf, PrivateAttr, model_validator, field_validator
from pydantic import Field, InstanceOf, PrivateAttr, model_validator

from crewai.agents import CacheHandler
from crewai.agents.agent_builder.base_agent import BaseAgent
@@ -87,20 +86,7 @@ class Agent(BaseAgent):
        description="Language model that will run the agent.", default=None
    )
    function_calling_llm: Optional[Any] = Field(
        description="Language model that will handle function calling for the agent.", default=None
    )
    class RoutingStrategy(str, Enum):
        SIMPLE_SHUFFLE = "simple-shuffle"
        LEAST_BUSY = "least-busy"
        USAGE_BASED = "usage-based"
        LATENCY_BASED = "latency-based"
        COST_BASED = "cost-based"

    model_list: Optional[List[Dict[str, Any]]] = Field(
        default=None, description="List of model configurations for routing between multiple models."
    )
    routing_strategy: Optional[RoutingStrategy] = Field(
        default=None, description="Strategy for routing between multiple models (e.g., 'simple-shuffle', 'least-busy', 'usage-based', 'latency-based', 'cost-based')."
        description="Language model that will run the agent.", default=None
    )
    system_template: Optional[str] = Field(
        default=None, description="System format for the agent."
@@ -162,17 +148,10 @@ class Agent(BaseAgent):
        # Handle different cases for self.llm
        if isinstance(self.llm, str):
            # If it's a string, create an LLM instance
            self.llm = LLM(
                model=self.llm,
                model_list=self.model_list,
                routing_strategy=self.routing_strategy
            )
            self.llm = LLM(model=self.llm)
        elif isinstance(self.llm, LLM):
            # If it's already an LLM instance, keep it as is
            if self.model_list and not getattr(self.llm, "model_list", None):
                self.llm.model_list = self.model_list
                self.llm.routing_strategy = self.routing_strategy
                self.llm._initialize_router()
            pass
        elif self.llm is None:
            # Determine the model name from environment variables or use default
            model_name = (
@@ -180,11 +159,7 @@ class Agent(BaseAgent):
                or os.environ.get("MODEL")
                or "gpt-4o-mini"
            )
            llm_params = {
                "model": model_name,
                "model_list": self.model_list,
                "routing_strategy": self.routing_strategy
            }
            llm_params = {"model": model_name}

            api_base = os.environ.get("OPENAI_API_BASE") or os.environ.get(
                "OPENAI_BASE_URL"
@@ -232,8 +207,6 @@ class Agent(BaseAgent):
                "api_key": getattr(self.llm, "api_key", None),
                "base_url": getattr(self.llm, "base_url", None),
                "organization": getattr(self.llm, "organization", None),
                "model_list": self.model_list,
                "routing_strategy": self.routing_strategy,
            }
            # Remove None values to avoid passing unnecessary parameters
            llm_params = {k: v for k, v in llm_params.items() if v is not None}

@@ -14,13 +14,13 @@ class Knowledge(BaseModel):
    Knowledge is a collection of sources and setup for the vector store to save and query relevant context.
    Args:
        sources: List[BaseKnowledgeSource] = Field(default_factory=list)
        storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
        storage: Optional[KnowledgeStorage] = Field(default=None)
        embedder_config: Optional[Dict[str, Any]] = None
    """

    sources: List[BaseKnowledgeSource] = Field(default_factory=list)
    model_config = ConfigDict(arbitrary_types_allowed=True)
    storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
    storage: Optional[KnowledgeStorage] = Field(default=None)
    embedder_config: Optional[Dict[str, Any]] = None
    collection_name: Optional[str] = None

@@ -49,8 +49,13 @@ class Knowledge(BaseModel):
        """
        Query across all knowledge sources to find the most relevant information.
        Returns the top_k most relevant chunks.

        Raises:
            ValueError: If storage is not initialized.
        """

        if self.storage is None:
            raise ValueError("Storage is not initialized.")

        results = self.storage.search(
            query,
            limit,

@@ -22,13 +22,14 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
        default_factory=list, description="The path to the file"
    )
    content: Dict[Path, str] = Field(init=False, default_factory=dict)
    storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
    storage: Optional[KnowledgeStorage] = Field(default=None)
    safe_file_paths: List[Path] = Field(default_factory=list)

    @field_validator("file_path", "file_paths", mode="before")
    def validate_file_path(cls, v, values):
    def validate_file_path(cls, v, info):
        """Validate that at least one of file_path or file_paths is provided."""
        if v is None and ("file_path" not in values or values.get("file_path") is None):
        # Single check if both are None, O(1) instead of nested conditions
        if v is None and info.data.get("file_path" if info.field_name == "file_paths" else "file_paths") is None:
            raise ValueError("Either file_path or file_paths must be provided")
        return v

@@ -62,7 +63,10 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):

    def _save_documents(self):
        """Save the documents to the storage."""
        self.storage.save(self.chunks)
        if self.storage:
            self.storage.save(self.chunks)
        else:
            raise ValueError("No storage found to save documents.")

    def convert_to_path(self, path: Union[Path, str]) -> Path:
        """Convert a path to a Path object."""

@@ -16,7 +16,7 @@ class BaseKnowledgeSource(BaseModel, ABC):
    chunk_embeddings: List[np.ndarray] = Field(default_factory=list)

    model_config = ConfigDict(arbitrary_types_allowed=True)
    storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
    storage: Optional[KnowledgeStorage] = Field(default=None)
    metadata: Dict[str, Any] = Field(default_factory=dict)  # Currently unused
    collection_name: Optional[str] = Field(default=None)

@@ -46,4 +46,7 @@ class BaseKnowledgeSource(BaseModel, ABC):
        Save the documents to the storage.
        This method should be called after the chunks and embeddings are generated.
        """
        self.storage.save(self.chunks)
        if self.storage:
            self.storage.save(self.chunks)
        else:
            raise ValueError("No storage found to save documents.")

@@ -7,17 +7,12 @@ from contextlib import contextmanager
from typing import Any, Dict, List, Optional, Union

import litellm
from litellm import Router as LiteLLMRouter
from litellm import get_supported_openai_params
from tenacity import retry, stop_after_attempt, wait_exponential

from crewai.utilities.logger import Logger
from crewai.utilities.exceptions.context_window_exceeding_exception import (
    LLMContextLengthExceededException,
)

logger = Logger(verbose=True)


class FilteredStream:
    def __init__(self, original_stream):
@@ -118,8 +113,6 @@ class LLM:
        api_version: Optional[str] = None,
        api_key: Optional[str] = None,
        callbacks: List[Any] = [],
        model_list: Optional[List[Dict[str, Any]]] = None,
        routing_strategy: Optional[str] = None,
        **kwargs,
    ):
        self.model = model
@@ -143,50 +136,11 @@
        self.callbacks = callbacks
        self.context_window_size = 0
        self.kwargs = kwargs
        self.model_list = model_list
        self.routing_strategy = routing_strategy
        self.router = None

        litellm.drop_params = True
        litellm.set_verbose = False
        self.set_callbacks(callbacks)
        self.set_env_callbacks()

        if self.model_list:
            self._initialize_router()

    def _initialize_router(self):
        """
        Initialize the litellm Router with the provided model_list and routing_strategy.
        """
        try:
            router_kwargs = {}
            if self.routing_strategy:
                valid_strategies = ["simple-shuffle", "least-busy", "usage-based", "latency-based", "cost-based"]
                if self.routing_strategy not in valid_strategies:
                    raise ValueError(f"Invalid routing strategy: {self.routing_strategy}. Valid options are: {', '.join(valid_strategies)}")
                router_kwargs["routing_strategy"] = self.routing_strategy

            self.router = LiteLLMRouter(
                model_list=self.model_list,
                **router_kwargs
            )
        except Exception as e:
            logger.log("error", f"Failed to initialize router: {str(e)}")
            raise RuntimeError(f"Router initialization failed: {str(e)}")

    @retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=4, max=10))
    def _execute_router_call(self, params):
        """
        Execute a call to the router with retry logic for handling transient issues.

        Args:
            params: Parameters to pass to the router completion method

        Returns:
            The response from the router
        """
        return self.router.completion(model=self.model, **params)

    def call(self, messages: List[Dict[str, str]], callbacks: List[Any] = []) -> str:
        with suppress_warnings():
@@ -195,6 +149,7 @@

            try:
                params = {
                    "model": self.model,
                    "messages": messages,
                    "timeout": self.timeout,
                    "temperature": self.temperature,
@@ -209,6 +164,9 @@
                    "seed": self.seed,
                    "logprobs": self.logprobs,
                    "top_logprobs": self.top_logprobs,
                    "api_base": self.base_url,
                    "api_version": self.api_version,
                    "api_key": self.api_key,
                    "stream": False,
                    **self.kwargs,
                }
@@ -216,17 +174,7 @@
                # Remove None values to avoid passing unnecessary parameters
                params = {k: v for k, v in params.items() if v is not None}

                if self.router:
                    response = self._execute_router_call(params)
                else:
                    params.update({
                        "model": self.model,
                        "api_base": self.base_url,
                        "api_version": self.api_version,
                        "api_key": self.api_key,
                    })
                    response = litellm.completion(**params)

                response = litellm.completion(**params)
                return response["choices"][0]["message"]["content"]
            except Exception as e:
                if not LLMContextLengthExceededException(

@@ -179,6 +179,7 @@ class Task(BaseModel):
    _execution_span: Optional[Span] = PrivateAttr(default=None)
    _original_description: Optional[str] = PrivateAttr(default=None)
    _original_expected_output: Optional[str] = PrivateAttr(default=None)
    _original_output_file: Optional[str] = PrivateAttr(default=None)
    _thread: Optional[threading.Thread] = PrivateAttr(default=None)
    _execution_time: Optional[float] = PrivateAttr(default=None)

@@ -213,8 +214,46 @@

    @field_validator("output_file")
    @classmethod
    def output_file_validation(cls, value: str) -> str:
        """Validate the output file path by removing the / from the beginning of the path."""
    def output_file_validation(cls, value: Optional[str]) -> Optional[str]:
        """Validate the output file path.

        Args:
            value: The output file path to validate. Can be None or a string.
                If the path contains template variables (e.g. {var}), leading slashes are preserved.
                For regular paths, leading slashes are stripped.

        Returns:
            The validated and potentially modified path, or None if no path was provided.

        Raises:
            ValueError: If the path contains invalid characters, path traversal attempts,
                or other security concerns.
        """
        if value is None:
            return None

        # Basic security checks
        if ".." in value:
            raise ValueError("Path traversal attempts are not allowed in output_file paths")

        # Check for shell expansion first
        if value.startswith('~') or value.startswith('$'):
            raise ValueError("Shell expansion characters are not allowed in output_file paths")

        # Then check other shell special characters
        if any(char in value for char in ['|', '>', '<', '&', ';']):
            raise ValueError("Shell special characters are not allowed in output_file paths")

        # Don't strip leading slash if it's a template path with variables
        if "{" in value or "}" in value:
            # Validate template variable format
            template_vars = [part.split("}")[0] for part in value.split("{")[1:]]
            for var in template_vars:
                if not var.isidentifier():
                    raise ValueError(f"Invalid template variable name: {var}")
            return value

        # Strip leading slash for regular paths
        if value.startswith("/"):
            return value[1:]
        return value
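
To see what this validator enables in practice, here is a hedged sketch of a crew whose task writes to a templated path. The agent, task, and file name are hypothetical; `kickoff_for_each` is the crew entry point exercised by the new cassette added below:

```python
from crewai import Agent, Crew, Task

writer = Agent(
    role="Writer",
    goal="Summarize each record",
    backstory="You write brief, accurate summaries.",
)

# Leading slashes are stripped from plain paths; template variables
# such as {id} survive validation and are interpolated at kickoff time.
task = Task(
    description="Create a brief summary for the ID: {id}",
    expected_output="A short summary file for the given ID",
    agent=writer,
    output_file="output/summary_{id}.txt",
)

crew = Crew(agents=[writer], tasks=[task])

# Each input dict produces its own interpolated output path,
# e.g. output/summary_alpha.txt and output/summary_beta.txt.
results = crew.kickoff_for_each(inputs=[{"id": "alpha"}, {"id": "beta"}])

```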
@@ -393,27 +432,89 @@
        tasks_slices = [self.description, output]
        return "\n".join(tasks_slices)

    def interpolate_inputs(self, inputs: Dict[str, Any]) -> None:
        """Interpolate inputs into the task description and expected output."""
    def interpolate_inputs(self, inputs: Dict[str, Union[str, int, float]]) -> None:
        """Interpolate inputs into the task description, expected output, and output file path.

        Args:
            inputs: Dictionary mapping template variables to their values.
                Supported value types are strings, integers, and floats.

        Raises:
            ValueError: If a required template variable is missing from inputs.
        """
        if self._original_description is None:
            self._original_description = self.description
        if self._original_expected_output is None:
            self._original_expected_output = self.expected_output
        if self.output_file is not None and self._original_output_file is None:
            self._original_output_file = self.output_file

        if inputs:
        if not inputs:
            return

        try:
            self.description = self._original_description.format(**inputs)
        except KeyError as e:
            raise ValueError(f"Missing required template variable '{e.args[0]}' in description") from e
        except ValueError as e:
            raise ValueError(f"Error interpolating description: {str(e)}") from e

        try:
            self.expected_output = self.interpolate_only(
                input_string=self._original_expected_output, inputs=inputs
            )
        except (KeyError, ValueError) as e:
            raise ValueError(f"Error interpolating expected_output: {str(e)}") from e

    def interpolate_only(self, input_string: str, inputs: Dict[str, Any]) -> str:
        """Interpolate placeholders (e.g., {key}) in a string while leaving JSON untouched."""
        escaped_string = input_string.replace("{", "{{").replace("}", "}}")
        if self.output_file is not None:
            try:
                self.output_file = self.interpolate_only(
                    input_string=self._original_output_file, inputs=inputs
                )
            except (KeyError, ValueError) as e:
                raise ValueError(f"Error interpolating output_file path: {str(e)}") from e

        for key in inputs.keys():
            escaped_string = escaped_string.replace(f"{{{{{key}}}}}", f"{{{key}}}")
    def interpolate_only(self, input_string: Optional[str], inputs: Dict[str, Union[str, int, float]]) -> str:
        """Interpolate placeholders (e.g., {key}) in a string while leaving JSON untouched.

        Args:
            input_string: The string containing template variables to interpolate.
                Can be None or empty, in which case an empty string is returned.
            inputs: Dictionary mapping template variables to their values.
                Supported value types are strings, integers, and floats.
                If input_string is empty or has no placeholders, inputs can be empty.

        Returns:
            The interpolated string with all template variables replaced with their values.
            Empty string if input_string is None or empty.

        Raises:
            ValueError: If a required template variable is missing from inputs.
            KeyError: If a template variable is not found in the inputs dictionary.
        """
        if input_string is None or not input_string:
            return ""
        if "{" not in input_string and "}" not in input_string:
            return input_string
        if not inputs:
            raise ValueError("Inputs dictionary cannot be empty when interpolating variables")

        return escaped_string.format(**inputs)
        try:
            # Validate input types
            for key, value in inputs.items():
                if not isinstance(value, (str, int, float)):
                    raise ValueError(f"Value for key '{key}' must be a string, integer, or float, got {type(value).__name__}")

            escaped_string = input_string.replace("{", "{{").replace("}", "}}")

            for key in inputs.keys():
                escaped_string = escaped_string.replace(f"{{{{{key}}}}}", f"{{{key}}}")

            return escaped_string.format(**inputs)
        except KeyError as e:
            raise KeyError(f"Template variable '{e.args[0]}' not found in inputs dictionary") from e
        except ValueError as e:
            raise ValueError(f"Error during string interpolation: {str(e)}") from e

    def increment_tools_errors(self) -> None:
        """Increment the tools errors counter."""
tests/cassettes/test_crew_output_file_end_to_end.yaml (new file, 243 lines)
@@ -0,0 +1,243 @@
interactions:
- request:
    body: !!binary |
      CuIcCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkSuRwKEgoQY3Jld2FpLnRl
      bGVtZXRyeRKjBwoQXK7w4+uvyEkrI9D5qyvcJxII5UmQ7hmczdIqDENyZXcgQ3JlYXRlZDABOfxQ
      /hs4jBUYQUi3DBw4jBUYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoaCg5weXRob25fdmVy
      c2lvbhIICgYzLjEyLjdKLgoIY3Jld19rZXkSIgogYzk3YjVmZWI1ZDFiNjZiYjU5MDA2YWFhMDFh
      MjljZDZKMQoHY3Jld19pZBImCiRkZjY3NGMwYi1hOTc0LTQ3NTAtYjlkMS0yZWQxNjM3MzFiNTZK
      HAoMY3Jld19wcm9jZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdf
      bnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgBStECCgtjcmV3
      X2FnZW50cxLBAgq+Alt7ImtleSI6ICIwN2Q5OWI2MzA0MTFkMzVmZDkwNDdhNTMyZDUzZGRhNyIs
      ICJpZCI6ICI5MDYwYTQ2Zi02MDY3LTQ1N2MtOGU3ZC04NjAyN2YzY2U5ZDUiLCAicm9sZSI6ICJS
      ZXNlYXJjaGVyIiwgInZlcmJvc2U/IjogZmFsc2UsICJtYXhfaXRlciI6IDIwLCAibWF4X3JwbSI6
      IG51bGwsICJmdW5jdGlvbl9jYWxsaW5nX2xsbSI6ICIiLCAibGxtIjogImdwdC00by1taW5pIiwg
      ImRlbGVnYXRpb25fZW5hYmxlZD8iOiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZh
      bHNlLCAibWF4X3JldHJ5X2xpbWl0IjogMiwgInRvb2xzX25hbWVzIjogW119XUr/AQoKY3Jld190
      YXNrcxLwAQrtAVt7ImtleSI6ICI2Mzk5NjUxN2YzZjNmMWM5NGQ2YmI2MTdhYTBiMWM0ZiIsICJp
      ZCI6ICJjYTA4ZjkyOS0yMmI0LTQyZmQtYjViMC05N2M3MjM0ZDk5OTEiLCAiYXN5bmNfZXhlY3V0
      aW9uPyI6IGZhbHNlLCAiaHVtYW5faW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjogIlJlc2Vh
      cmNoZXIiLCAiYWdlbnRfa2V5IjogIjA3ZDk5YjYzMDQxMWQzNWZkOTA0N2E1MzJkNTNkZGE3Iiwg
      InRvb2xzX25hbWVzIjogW119XXoCGAGFAQABAAASjgIKEOTJZh9R45IwgGVg9cinZmISCJopKRMf
      bpMJKgxUYXNrIENyZWF0ZWQwATlG+zQcOIwVGEHk0zUcOIwVGEouCghjcmV3X2tleRIiCiBjOTdi
      NWZlYjVkMWI2NmJiNTkwMDZhYWEwMWEyOWNkNkoxCgdjcmV3X2lkEiYKJGRmNjc0YzBiLWE5NzQt
      NDc1MC1iOWQxLTJlZDE2MzczMWI1NkouCgh0YXNrX2tleRIiCiA2Mzk5NjUxN2YzZjNmMWM5NGQ2
      YmI2MTdhYTBiMWM0ZkoxCgd0YXNrX2lkEiYKJGNhMDhmOTI5LTIyYjQtNDJmZC1iNWIwLTk3Yzcy
      MzRkOTk5MXoCGAGFAQABAAASowcKEEvwrN8+tNMIBwtnA+ip7jASCI78Hrh2wlsBKgxDcmV3IENy
      ZWF0ZWQwATkcRqYeOIwVGEE8erQeOIwVGEoaCg5jcmV3YWlfdmVyc2lvbhIICgYwLjg2LjBKGgoO
      cHl0aG9uX3ZlcnNpb24SCAoGMy4xMi43Si4KCGNyZXdfa2V5EiIKIDhjMjc1MmY0OWU1YjlkMmI2
      OGNiMzVjYWM4ZmNjODZkSjEKB2NyZXdfaWQSJgokZmRkYzA4ZTMtNDUyNi00N2Q2LThlNWMtNjY0
      YzIyMjc4ZDgyShwKDGNyZXdfcHJvY2VzcxIMCgpzZXF1ZW50aWFsShEKC2NyZXdfbWVtb3J5EgIQ
      AEoaChRjcmV3X251bWJlcl9vZl90YXNrcxICGAFKGwoVY3Jld19udW1iZXJfb2ZfYWdlbnRzEgIY
      AUrRAgoLY3Jld19hZ2VudHMSwQIKvgJbeyJrZXkiOiAiOGJkMjEzOWI1OTc1MTgxNTA2ZTQxZmQ5
      YzQ1NjNkNzUiLCAiaWQiOiAiY2UxNjA2YjktMjdiOS00ZDc4LWEyODctNDZiMDNlZDg3ZTA1Iiwg
      InJvbGUiOiAiUmVzZWFyY2hlciIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAyMCwg
      Im1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQt
      NG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1
      dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtdfV1K
      /wEKCmNyZXdfdGFza3MS8AEK7QFbeyJrZXkiOiAiMGQ2ODVhMjE5OTRkOTQ5MDk3YmM1YTU2ZDcz
      N2U2ZDEiLCAiaWQiOiAiNDdkMzRjZjktMGYxZS00Y2JkLTgzMzItNzRjZjY0YWRlOThlIiwgImFz
      eW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9s
      ZSI6ICJSZXNlYXJjaGVyIiwgImFnZW50X2tleSI6ICI4YmQyMTM5YjU5NzUxODE1MDZlNDFmZDlj
      NDU2M2Q3NSIsICJ0b29sc19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEo4CChAf4TXS782b0PBJ4NSB
      JXwsEgjXnd13GkMzlyoMVGFzayBDcmVhdGVkMAE5mb/cHjiMFRhBGRTiHjiMFRhKLgoIY3Jld19r
      ZXkSIgogOGMyNzUyZjQ5ZTViOWQyYjY4Y2IzNWNhYzhmY2M4NmRKMQoHY3Jld19pZBImCiRmZGRj
      MDhlMy00NTI2LTQ3ZDYtOGU1Yy02NjRjMjIyNzhkODJKLgoIdGFza19rZXkSIgogMGQ2ODVhMjE5
      OTRkOTQ5MDk3YmM1YTU2ZDczN2U2ZDFKMQoHdGFza19pZBImCiQ0N2QzNGNmOS0wZjFlLTRjYmQt
      ODMzMi03NGNmNjRhZGU5OGV6AhgBhQEAAQAAEqMHChAyBGKhzDhROB5pmAoXrikyEgj6SCwzj1dU
      LyoMQ3JldyBDcmVhdGVkMAE5vkjTHziMFRhBRDbhHziMFRhKGgoOY3Jld2FpX3ZlcnNpb24SCAoG
      MC44Ni4wShoKDnB5dGhvbl92ZXJzaW9uEggKBjMuMTIuN0ouCghjcmV3X2tleRIiCiBiNjczNjg2
      ZmM4MjJjMjAzYzdlODc5YzY3NTQyNDY5OUoxCgdjcmV3X2lkEiYKJGYyYWVlYTYzLTU2OWUtNDUz
      NS1iZTY0LTRiZjYzZmU5NjhjN0ocCgxjcmV3X3Byb2Nlc3MSDAoKc2VxdWVudGlhbEoRCgtjcmV3
      X21lbW9yeRICEABKGgoUY3Jld19udW1iZXJfb2ZfdGFza3MSAhgBShsKFWNyZXdfbnVtYmVyX29m
      X2FnZW50cxICGAFK0QIKC2NyZXdfYWdlbnRzEsECCr4CW3sia2V5IjogImI1OWNmNzdiNmU3NjU4
      NDg3MGViMWMzODgyM2Q3ZTI4IiwgImlkIjogImJiZjNkM2E4LWEwMjUtNGI0ZC1hY2Q0LTFmNzcz
      NTI3MWJmMCIsICJyb2xlIjogIlJlc2VhcmNoZXIiLCAidmVyYm9zZT8iOiBmYWxzZSwgIm1heF9p
      dGVyIjogMjAsICJtYXhfcnBtIjogbnVsbCwgImZ1bmN0aW9uX2NhbGxpbmdfbGxtIjogIiIsICJs
      bG0iOiAiZ3B0LTRvLW1pbmkiLCAiZGVsZWdhdGlvbl9lbmFibGVkPyI6IGZhbHNlLCAiYWxsb3df
      Y29kZV9leGVjdXRpb24/IjogZmFsc2UsICJtYXhfcmV0cnlfbGltaXQiOiAyLCAidG9vbHNfbmFt
      ZXMiOiBbXX1dSv8BCgpjcmV3X3Rhc2tzEvABCu0BW3sia2V5IjogImE1ZTVjNThjZWExYjlkMDAz
      MzJlNjg0NDFkMzI3YmRmIiwgImlkIjogIjBiOTRiMTY0LTM5NTktNGFmYS05Njg4LWJjNmEwZWMy
      MWYzOCIsICJhc3luY19leGVjdXRpb24/IjogZmFsc2UsICJodW1hbl9pbnB1dD8iOiBmYWxzZSwg
      ImFnZW50X3JvbGUiOiAiUmVzZWFyY2hlciIsICJhZ2VudF9rZXkiOiAiYjU5Y2Y3N2I2ZTc2NTg0
      ODcwZWIxYzM4ODIzZDdlMjgiLCAidG9vbHNfbmFtZXMiOiBbXX1degIYAYUBAAEAABKOAgoQyYfi
      Ftim717svttBZY3p5hIIUxR5bBHzWWkqDFRhc2sgQ3JlYXRlZDABOV4OBiA4jBUYQbLjBiA4jBUY
      Si4KCGNyZXdfa2V5EiIKIGI2NzM2ODZmYzgyMmMyMDNjN2U4NzljNjc1NDI0Njk5SjEKB2NyZXdf
      aWQSJgokZjJhZWVhNjMtNTY5ZS00NTM1LWJlNjQtNGJmNjNmZTk2OGM3Si4KCHRhc2tfa2V5EiIK
      IGE1ZTVjNThjZWExYjlkMDAzMzJlNjg0NDFkMzI3YmRmSjEKB3Rhc2tfaWQSJgokMGI5NGIxNjQt
      Mzk1OS00YWZhLTk2ODgtYmM2YTBlYzIxZjM4egIYAYUBAAEAAA==
    headers:
      Accept:
      - '*/*'
      Accept-Encoding:
      - gzip, deflate
      Connection:
      - keep-alive
      Content-Length:
      - '3685'
      Content-Type:
      - application/x-protobuf
      User-Agent:
      - OTel-OTLP-Exporter-Python/1.27.0
    method: POST
    uri: https://telemetry.crewai.com:4319/v1/traces
  response:
    body:
      string: "\n\0"
    headers:
      Content-Length:
      - '2'
      Content-Type:
      - application/x-protobuf
      Date:
      - Sun, 29 Dec 2024 04:43:27 GMT
    status:
      code: 200
      message: OK
- request:
    body: '{"messages": [{"role": "system", "content": "You are Researcher. You have
      extensive AI research experience.\nYour personal goal is: Analyze AI topics\nTo
      give my best complete final answer to the task use the exact following format:\n\nThought:
      I now can give a great answer\nFinal Answer: Your final answer must be the great
      and the most complete as possible, it must be outcome described.\n\nI MUST use
      these formats, my job depends on it!"}, {"role": "user", "content": "\nCurrent
      Task: Explain the advantages of AI.\n\nThis is the expect criteria for your
      final answer: A summary of the main advantages, bullet points recommended.\nyou
      MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
      This is VERY important to you, use the tools available and give your best Final
      Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop":
      ["\nObservation:"], "stream": false}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '922'
      content-type:
      - application/json
      cookie:
      - _cfuvid=eff7OIkJ0zWRunpA6z67LHqscmSe6XjNxXiPw1R3xCc-1733770413538-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.52.1
      x-stainless-arch:
      - x64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - Linux
      x-stainless-package-version:
      - 1.52.1
      x-stainless-raw-response:
      - 'true'
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.7
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    content: "{\n \"id\": \"chatcmpl-AjfR6FDuTw7NGzy8w7sxjvOkUQlru\",\n \"object\":
      \"chat.completion\",\n \"created\": 1735447404,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
      \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
      \"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
      Answer: \\n**Advantages of AI** \\n\\n1. **Increased Efficiency and Productivity**
      \ \\n - AI systems can process large amounts of data quickly and accurately,
      leading to faster decision-making and increased productivity in various sectors.\\n\\n2.
      **Cost Savings** \\n - Automation of repetitive and time-consuming tasks
      reduces labor costs and increases operational efficiency, allowing businesses
      to allocate resources more effectively.\\n\\n3. **Enhanced Data Analysis** \\n
      \ - AI excels at analyzing big data, identifying patterns, and providing insights
      that support better strategic planning and business decision-making.\\n\\n4.
      **24/7 Availability** \\n - AI solutions, such as chatbots and virtual assistants,
      operate continuously without breaks, offering constant support and customer
      service, enhancing user experience.\\n\\n5. **Personalization** \\n - AI
      enables the customization of content, products, and services based on user preferences
      and behaviors, leading to improved customer satisfaction and loyalty.\\n\\n6.
      **Improved Accuracy** \\n - AI technologies, such as machine learning algorithms,
      reduce the likelihood of human error in various processes, leading to greater
      accuracy and reliability.\\n\\n7. **Enhanced Innovation** \\n - AI fosters
      innovative solutions by providing new tools and approaches to problem-solving,
      enabling companies to develop cutting-edge products and services.\\n\\n8. **Scalability**
      \ \\n - AI can be scaled to handle varying amounts of workloads without significant
      changes to infrastructure, making it easier for organizations to expand operations.\\n\\n9.
      **Predictive Capabilities** \\n - Advanced analytics powered by AI can anticipate
      trends and outcomes, allowing businesses to proactively adjust strategies and
      improve forecasting.\\n\\n10. **Health Benefits** \\n - In healthcare, AI
      assists in diagnostics, personalized treatment plans, and predictive analytics,
      leading to better patient care and improved health outcomes.\\n\\n11. **Safety
      and Risk Mitigation** \\n - AI can enhance safety in various industries
      by taking over dangerous tasks, monitoring for hazards, and predicting maintenance
      needs for critical machinery, thereby preventing accidents.\\n\\n12. **Reduced
      Environmental Impact** \\n - AI can optimize resource usage in areas such
      as energy consumption and supply chain logistics, contributing to sustainability
      efforts and reducing overall environmental footprints.\",\n \"refusal\":
      null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
      \ }\n ],\n \"usage\": {\n \"prompt_tokens\": 168,\n \"completion_tokens\":
      440,\n \"total_tokens\": 608,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
      0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
      \ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
      0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
      \"fp_0aa8d3e20b\"\n}\n"
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 8f9721053d1eb9f1-SEA
      Connection:
      - keep-alive
      Content-Encoding:
      - gzip
      Content-Type:
      - application/json
      Date:
      - Sun, 29 Dec 2024 04:43:32 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=5enubNIoQSGMYEgy8Q2FpzzhphA0y.0lXukRZrWFvMk-1735447412-1.0.1.1-FIK1sMkUl3YnW1gTC6ftDtb2mKsbosb4mwabdFAlWCfJ6pXeavYq.bPsfKNvzAb5WYq60yVGH5lHsJT05bhSgw;
        path=/; expires=Sun, 29-Dec-24 05:13:32 GMT; domain=.api.openai.com; HttpOnly;
        Secure; SameSite=None
      - _cfuvid=63wmKMTuFamkLN8FBI4fP8JZWbjWiRxWm7wb3kz.z_A-1735447412038-0.0.1.1-604800000;
        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - nosniff
      access-control-expose-headers:
      - X-Request-ID
      alt-svc:
      - h3=":443"; ma=86400
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '7577'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains; preload
      x-ratelimit-limit-requests:
      - '30000'
      x-ratelimit-limit-tokens:
      - '150000000'
      x-ratelimit-remaining-requests:
      - '29999'
      x-ratelimit-remaining-tokens:
      - '149999793'
      x-ratelimit-reset-requests:
      - 2ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_55b8d714656e8f10f4e23cbe9034d66b
    http_version: HTTP/1.1
    status_code: 200
version: 1
@@ -0,0 +1,236 @@
interactions:
- request:
    body: '{"messages": [{"role": "system", "content": "You are TestAgent. Testing
      interpolation logic\nYour personal goal is: Validate output_file interpolation
      with kickoff_for_each\nTo give my best complete final answer to the task use
      the exact following format:\n\nThought: I now can give a great answer\nFinal
      Answer: Your final answer must be the great and the most complete as possible,
      it must be outcome described.\n\nI MUST use these formats, my job depends on
      it!"}, {"role": "user", "content": "\nCurrent Task: Create a brief summary for
      the ID: alpha\n\nThis is the expect criteria for your final answer: Should write
      a file with the ID included in the path\nyou MUST return the actual complete
      content as the final answer, not a summary.\n\nBegin! This is VERY important
      to you, use the tools available and give your best Final Answer, your job depends
      on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"], "stream":
      false}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '948'
      content-type:
      - application/json
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.52.1
      x-stainless-arch:
      - x64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - Linux
      x-stainless-package-version:
      - 1.52.1
      x-stainless-raw-response:
      - 'true'
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.7
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    content: "{\n \"id\": \"chatcmpl-AjflQm9Vq8QvtY5ORMahjQVL061VL\",\n \"object\":
      \"chat.completion\",\n \"created\": 1735448664,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
      \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
      \"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
      Answer: The content for the file with ID 'alpha' is as follows:\\n\\n```\\nThis
      is the content specific to ID: alpha. This file serves as a reference for the
      alpha ID and may include various information relevant to its usage. The ID aids
      in categorization and organization of data. \\n```\\n\\nThe file should be
      created with the path including the ID, for example: `./output/alpha_file.txt`.\",\n
      \ \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
      \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 175,\n \"completion_tokens\":
      94,\n \"total_tokens\": 269,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
      0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
      \ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
      0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
      \"fp_0aa8d3e20b\"\n}\n"
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 8f973fc8a8e176cd-SEA
      Connection:
      - keep-alive
      Content-Encoding:
      - gzip
      Content-Type:
      - application/json
      Date:
      - Sun, 29 Dec 2024 05:04:25 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=Td9pau1aHgSMMxQpKkO8mpGk5DWBDjUinUYlfJPsgcw-1735448665-1.0.1.1-wH0kiPhZtNwT_fIqYCb3obZgU2VIxoPL.QrruewXdLVLT9FeI4b6BU.sHqsBjMODx6Tvtuzz9GQRvoY1nWs6Fw;
        path=/; expires=Sun, 29-Dec-24 05:34:25 GMT; domain=.api.openai.com; HttpOnly;
        Secure; SameSite=None
      - _cfuvid=P0VQx4Uj_TpVJBRs2gSXmq3srUR2DW7oj5Vmq54U47M-1735448665828-0.0.1.1-604800000;
        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - nosniff
      access-control-expose-headers:
      - X-Request-ID
      alt-svc:
      - h3=":443"; ma=86400
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '1288'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains; preload
      x-ratelimit-limit-requests:
      - '30000'
      x-ratelimit-limit-tokens:
      - '150000000'
      x-ratelimit-remaining-requests:
      - '29999'
      x-ratelimit-remaining-tokens:
      - '149999786'
      x-ratelimit-reset-requests:
      - 2ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_9dd7c0e64935fade58b5b8359fcc9c02
    http_version: HTTP/1.1
    status_code: 200
- request:
    body: '{"messages": [{"role": "system", "content": "You are TestAgent. Testing
      interpolation logic\nYour personal goal is: Validate output_file interpolation
      with kickoff_for_each\nTo give my best complete final answer to the task use
      the exact following format:\n\nThought: I now can give a great answer\nFinal
      Answer: Your final answer must be the great and the most complete as possible,
      it must be outcome described.\n\nI MUST use these formats, my job depends on
      it!"}, {"role": "user", "content": "\nCurrent Task: Create a brief summary for
      the ID: beta\n\nThis is the expect criteria for your final answer: Should write
      a file with the ID included in the path\nyou MUST return the actual complete
      content as the final answer, not a summary.\n\nBegin! This is VERY important
      to you, use the tools available and give your best Final Answer, your job depends
      on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"], "stream":
      false}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '947'
      content-type:
      - application/json
      cookie:
      - __cf_bm=Td9pau1aHgSMMxQpKkO8mpGk5DWBDjUinUYlfJPsgcw-1735448665-1.0.1.1-wH0kiPhZtNwT_fIqYCb3obZgU2VIxoPL.QrruewXdLVLT9FeI4b6BU.sHqsBjMODx6Tvtuzz9GQRvoY1nWs6Fw;
        _cfuvid=P0VQx4Uj_TpVJBRs2gSXmq3srUR2DW7oj5Vmq54U47M-1735448665828-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.52.1
      x-stainless-arch:
      - x64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - Linux
      x-stainless-package-version:
      - 1.52.1
      x-stainless-raw-response:
      - 'true'
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.7
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    content: "{\n \"id\": \"chatcmpl-AjflSvzNtfBXfJSerUuWwQM47tE4N\",\n \"object\":
      \"chat.completion\",\n \"created\": 1735448666,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
      \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
      \"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
      Answer: Here is the content to be written to the file for ID: beta\\n\\n```\\nThis
      is the content specifically related to the ID: beta.\\n```\\n\\nThe file will
      be created at the following path: `/path/to/directory/beta.txt` \\nThank you
      for the opportunity to complete this task!\",\n \"refusal\": null\n },\n
      \ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
      \ \"usage\": {\n \"prompt_tokens\": 175,\n \"completion_tokens\": 72,\n
      \ \"total_tokens\": 247,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
      0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
      \ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
      0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
      \"fp_0aa8d3e20b\"\n}\n"
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 8f973fd1af7576cd-SEA
      Connection:
      - keep-alive
      Content-Encoding:
      - gzip
      Content-Type:
      - application/json
      Date:
      - Sun, 29 Dec 2024 05:04:27 GMT
      Server:
      - cloudflare
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - nosniff
      access-control-expose-headers:
      - X-Request-ID
      alt-svc:
      - h3=":443"; ma=86400
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '1475'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains; preload
      x-ratelimit-limit-requests:
      - '30000'
      x-ratelimit-limit-tokens:
      - '150000000'
      x-ratelimit-remaining-requests:
      - '29999'
      x-ratelimit-remaining-tokens:
      - '149999785'
      x-ratelimit-reset-requests:
      - 2ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_dc9a842ed35e213e52c2d6fe9f11c36a
    http_version: HTTP/1.1
    status_code: 200
version: 1
@@ -1183,6 +1183,53 @@ def test_kickoff_for_each_error_handling():
        crew.kickoff_for_each(inputs=inputs)


@pytest.mark.vcr(filter_headers=["authorization"])
def test_kickoff_for_each_output_file_interpolation(tmp_path):
    """Test that output_file paths are correctly interpolated when using kickoff_for_each."""
    # Create test directories
    output_dir = tmp_path / "output_files"
    output_dir.mkdir()

    agent = Agent(
        role="TestAgent",
        goal="Validate output_file interpolation with kickoff_for_each",
        backstory="Testing interpolation logic",
        allow_code_execution=False,
    )

    task = Task(
        description="Create a brief summary for the ID: {id_var}",
        expected_output="Should write a file with the ID included in the path",
        agent=agent,
        output_file=str(output_dir / "summary_{id_var}.md"),
    )

    crew = Crew(agents=[agent], tasks=[task], verbose=True)

    inputs = [
        {"id_var": "alpha"},
        {"id_var": "beta"},
    ]

    # Use kickoff_for_each
    results = crew.kickoff_for_each(inputs=inputs)

    # Verify each iteration's output_file was properly interpolated
    for input_data, result in zip(inputs, results):
        expected_path = output_dir / f"summary_{input_data['id_var']}.md"

        # Verify the output file was created with the correct name
        assert expected_path.exists(), (
            f"Expected output file {expected_path} was not created"
        )

        # Verify the file contains some content
        assert expected_path.stat().st_size > 0, (
            f"Output file {expected_path} is empty"
        )

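# --- Editor's sketch (not part of the diff above): the behavior this test pins
# down is that kickoff_for_each resolves "{id_var}" in output_file separately
# for each input dict. The illustrative equivalent below uses plain str.format,
# which is an assumption about the mechanism, not CrewAI's actual internals.
for _inputs in [{"id_var": "alpha"}, {"id_var": "beta"}]:
    print("output_files/summary_{id_var}.md".format(**_inputs))
# -> output_files/summary_alpha.md
# -> output_files/summary_beta.md
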
@pytest.mark.vcr(filter_headers=["authorization"])
@pytest.mark.asyncio
async def test_kickoff_async_basic_functionality_and_output():
@@ -1941,6 +1988,90 @@ def test_crew_log_file_output(tmp_path):
    assert test_file.exists()


@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_output_file_end_to_end(tmp_path):
    """Test output file functionality in a full crew context."""
    # Create an agent
    agent = Agent(
        role="Researcher",
        goal="Analyze AI topics",
        backstory="You have extensive AI research experience.",
        allow_delegation=False,
    )

    # Create a task with a dynamic output file path
    dynamic_path = tmp_path / "output_{topic}.txt"
    task = Task(
        description="Explain the advantages of {topic}.",
        expected_output="A summary of the main advantages, bullet points recommended.",
        agent=agent,
        output_file=str(dynamic_path),
    )

    # Create and run the crew
    crew = Crew(
        agents=[agent],
        tasks=[task],
        process=Process.sequential,
    )
    crew.kickoff(inputs={"topic": "AI"})

    # Verify file creation (tmp_path is cleaned up automatically by pytest)
    expected_file = tmp_path / "output_AI.txt"
    assert expected_file.exists(), f"Output file {expected_file} was not created"

@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_output_file_validation_failures():
    """Test output file validation failures in a crew context."""
    agent = Agent(
        role="Researcher",
        goal="Analyze data",
        backstory="You analyze data files.",
        allow_delegation=False,
    )

    # Test path traversal
    with pytest.raises(ValueError, match="Path traversal"):
        task = Task(
            description="Analyze data",
            expected_output="Analysis results",
            agent=agent,
            output_file="../output.txt",
        )
        Crew(agents=[agent], tasks=[task]).kickoff()

    # Test shell special characters
    with pytest.raises(ValueError, match="Shell special characters"):
        task = Task(
            description="Analyze data",
            expected_output="Analysis results",
            agent=agent,
            output_file="output.txt | rm -rf /",
        )
        Crew(agents=[agent], tasks=[task]).kickoff()

    # Test shell expansion
    with pytest.raises(ValueError, match="Shell expansion"):
        task = Task(
            description="Analyze data",
            expected_output="Analysis results",
            agent=agent,
            output_file="~/output.txt",
        )
        Crew(agents=[agent], tasks=[task]).kickoff()

    # Test invalid template variable
    with pytest.raises(ValueError, match="Invalid template variable"):
        task = Task(
            description="Analyze data",
            expected_output="Analysis results",
            agent=agent,
            output_file="{invalid-name}/output.txt",
        )
        Crew(agents=[agent], tasks=[task]).kickoff()

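# --- Editor's sketch (not part of the diff above): one plausible shape for the
# output_file validator these tests exercise. The helper name
# `validate_output_file` and the exact error messages are assumptions; only the
# rejected patterns and the pytest match strings come from the tests themselves.
import re

_TEMPLATE_VAR = re.compile(r"\{([^{}]*)\}")
_VALID_IDENTIFIER = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def validate_output_file(path: str) -> str:
    """Reject unsafe output_file values; return a normalized relative path."""
    if ".." in path.split("/"):
        raise ValueError("Path traversal is not allowed in output_file")
    if path.startswith("~") or "$" in path:
        raise ValueError("Shell expansion is not allowed in output_file")
    if any(ch in path for ch in "|&;<>`"):
        raise ValueError("Shell special characters are not allowed in output_file")
    for name in _TEMPLATE_VAR.findall(path):
        if not _VALID_IDENTIFIER.match(name):
            raise ValueError(f"Invalid template variable: {{{name}}}")
    # test_output_file_validation below suggests a leading slash is stripped
    return path.lstrip("/")
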
@pytest.mark.vcr(filter_headers=["authorization"])
def test_manager_agent():
    from unittest.mock import patch
@@ -3125,4 +3256,4 @@ def test_multimodal_agent_live_image_analysis():
    # Verify we got a meaningful response
    assert isinstance(result.raw, str)
    assert len(result.raw) > 100  # Expecting a detailed analysis
    assert "error" not in result.raw.lower()  # No error messages in response

@@ -584,3 +584,28 @@ def test_docling_source_with_local_file():
    docling_source = CrewDoclingSource(file_paths=[pdf_path])
    assert docling_source.file_paths == [pdf_path]
    assert docling_source.content is not None


def test_file_path_validation():
    """Test file path validation for knowledge sources."""
    current_dir = Path(__file__).parent
    pdf_path = current_dir / "crewai_quickstart.pdf"

    # Test valid single file_path
    source = PDFKnowledgeSource(file_path=pdf_path)
    assert source.safe_file_paths == [pdf_path]

    # Test valid file_paths list
    source = PDFKnowledgeSource(file_paths=[pdf_path])
    assert source.safe_file_paths == [pdf_path]

    # Test both file_path and file_paths provided (should use file_paths)
    source = PDFKnowledgeSource(file_path=pdf_path, file_paths=[pdf_path])
    assert source.safe_file_paths == [pdf_path]

    # Test neither file_path nor file_paths provided
    with pytest.raises(
        ValueError,
        match="file_path/file_paths must be a Path, str, or a list of these types",
    ):
        PDFKnowledgeSource()

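# --- Editor's sketch (not part of the diff above): the file_path/file_paths
# normalization this test implies could look like the helper below.
# `normalize_file_paths` is a hypothetical name; the precedence (file_paths
# wins when both are given) and the error message come from the assertions.
from pathlib import Path
from typing import List, Union

def normalize_file_paths(
    file_path: Union[Path, str, List[Union[Path, str]], None] = None,
    file_paths: Union[List[Union[Path, str]], None] = None,
) -> List[Path]:
    # Prefer file_paths; fall back to file_path; reject when neither is given.
    chosen = file_paths if file_paths is not None else file_path
    if chosen is None:
        raise ValueError(
            "file_path/file_paths must be a Path, str, or a list of these types"
        )
    if isinstance(chosen, (str, Path)):
        chosen = [chosen]
    return [Path(p) for p in chosen]
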
@@ -1,246 +0,0 @@
import pytest
from unittest.mock import patch, MagicMock

from crewai.llm import LLM
from crewai.agent import Agent


@pytest.mark.vcr(filter_headers=["authorization"])
@patch("litellm.Router")
@patch.object(LLM, '_initialize_router')
def test_llm_with_model_list(mock_initialize_router, mock_router):
    """Test that LLM can be initialized with a model_list for multiple model configurations."""
    mock_initialize_router.return_value = None

    mock_router_instance = MagicMock()
    mock_router.return_value = mock_router_instance

    model_list = [
        {
            "model_name": "gpt-4o-mini",
            "litellm_params": {
                "model": "gpt-4o-mini",
                "api_key": "test-key-1"
            }
        },
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {
                "model": "gpt-3.5-turbo",
                "api_key": "test-key-2"
            }
        }
    ]

    llm = LLM(model="gpt-4o-mini", model_list=model_list)
    llm.router = mock_router_instance

    assert llm.model == "gpt-4o-mini"
    assert llm.model_list == model_list
    assert llm.router is not None


@pytest.mark.vcr(filter_headers=["authorization"])
@patch("litellm.Router")
@patch.object(LLM, '_initialize_router')
def test_llm_with_routing_strategy(mock_initialize_router, mock_router):
    """Test that LLM can be initialized with a routing strategy."""
    mock_initialize_router.return_value = None

    mock_router_instance = MagicMock()
    mock_router.return_value = mock_router_instance

    model_list = [
        {
            "model_name": "gpt-4o-mini",
            "litellm_params": {
                "model": "gpt-4o-mini",
                "api_key": "test-key-1"
            }
        },
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {
                "model": "gpt-3.5-turbo",
                "api_key": "test-key-2"
            }
        }
    ]

    llm = LLM(
        model="gpt-4o-mini",
        model_list=model_list,
        routing_strategy="simple-shuffle"
    )
    llm.router = mock_router_instance

    assert llm.routing_strategy == "simple-shuffle"
    assert llm.router is not None


@pytest.mark.vcr(filter_headers=["authorization"])
@patch("litellm.Router")
@patch.object(LLM, '_initialize_router')
def test_agent_with_model_list(mock_initialize_router, mock_router):
    """Test that Agent can be initialized with a model_list for multiple model configurations."""
    mock_initialize_router.return_value = None

    mock_router_instance = MagicMock()
    mock_router.return_value = mock_router_instance

    model_list = [
        {
            "model_name": "gpt-4o-mini",
            "litellm_params": {
                "model": "gpt-4o-mini",
                "api_key": "test-key-1"
            }
        },
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {
                "model": "gpt-3.5-turbo",
                "api_key": "test-key-2"
            }
        }
    ]

    with patch.object(Agent, 'post_init_setup', wraps=Agent.post_init_setup) as mock_post_init:
        agent = Agent(
            role="test",
            goal="test",
            backstory="test",
            model_list=model_list
        )

        agent.llm.router = mock_router_instance

        assert agent.model_list == model_list
        assert agent.llm.model_list == model_list
        assert agent.llm.router is not None


@pytest.mark.vcr(filter_headers=["authorization"])
@patch("litellm.Router")
@patch.object(LLM, '_initialize_router')
def test_llm_call_with_router(mock_initialize_router, mock_router):
    """Test that LLM.call uses the router when model_list is provided."""
    mock_initialize_router.return_value = None

    mock_router_instance = MagicMock()
    mock_router.return_value = mock_router_instance

    mock_response = {
        "choices": [{"message": {"content": "Test response"}}]
    }
    mock_router_instance.completion.return_value = mock_response

    model_list = [
        {
            "model_name": "gpt-4o-mini",
            "litellm_params": {
                "model": "gpt-4o-mini",
                "api_key": "test-key-1"
            }
        }
    ]

    # Create LLM with model_list
    llm = LLM(model="gpt-4o-mini", model_list=model_list)

    llm.router = mock_router_instance

    messages = [{"role": "user", "content": "Hello"}]
    response = llm.call(messages)

    mock_router_instance.completion.assert_called_once()
    assert response == "Test response"


@pytest.mark.vcr(filter_headers=["authorization"])
@patch("litellm.completion")
def test_llm_call_without_router(mock_completion):
    """Test that LLM.call uses litellm.completion when no model_list is provided."""
    mock_response = {
        "choices": [{"message": {"content": "Test response"}}]
    }
    mock_completion.return_value = mock_response

    llm = LLM(model="gpt-4o-mini")

    messages = [{"role": "user", "content": "Hello"}]
    response = llm.call(messages)

    mock_completion.assert_called_once()
    assert response == "Test response"


@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_with_invalid_routing_strategy():
    """Test that LLM initialization raises an error with an invalid routing strategy."""
    model_list = [
        {
            "model_name": "gpt-4o-mini",
            "litellm_params": {
                "model": "gpt-4o-mini",
                "api_key": "test-key-1"
            }
        }
    ]

    with pytest.raises(RuntimeError) as exc_info:
        LLM(
            model="gpt-4o-mini",
            model_list=model_list,
            routing_strategy="invalid-strategy"
        )

    assert "Invalid routing strategy" in str(exc_info.value)


@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_with_invalid_routing_strategy():
    """Test that Agent initialization raises an error with an invalid routing strategy."""
    model_list = [
        {
            "model_name": "gpt-4o-mini",
            "litellm_params": {
                "model": "gpt-4o-mini",
                "api_key": "test-key-1"
            }
        }
    ]

    with pytest.raises(Exception) as exc_info:
        Agent(
            role="test",
            goal="test",
            backstory="test",
            model_list=model_list,
            routing_strategy="invalid-strategy"
        )

    assert "Input should be" in str(exc_info.value)
    assert "simple-shuffle" in str(exc_info.value)
    assert "least-busy" in str(exc_info.value)


@pytest.mark.vcr(filter_headers=["authorization"])
@patch.object(LLM, '_initialize_router')
def test_llm_with_missing_model_in_litellm_params(mock_initialize_router):
    """Test that LLM initialization raises an error when model is missing in litellm_params."""
    mock_initialize_router.side_effect = RuntimeError("Router initialization failed: Missing required 'model' in litellm_params")

    model_list = [
        {
            "model_name": "gpt-4o-mini",
            "litellm_params": {
                "api_key": "test-key-1"
            }
        }
    ]

    with pytest.raises(RuntimeError) as exc_info:
        LLM(model="gpt-4o-mini", model_list=model_list)

    assert "Router initialization failed" in str(exc_info.value)
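# --- Editor's sketch (not part of the diff above, which removes these tests):
# the router setup they exercised could plausibly have looked like this. The
# strategy set and the RuntimeError wrapping are assumptions inferred from the
# assertions; litellm.Router does accept model_list and routing_strategy.
import litellm

_VALID_STRATEGIES = {"simple-shuffle", "least-busy"}  # the two named in the tests

def initialize_router(model_list, routing_strategy="simple-shuffle"):
    if routing_strategy not in _VALID_STRATEGIES:
        raise RuntimeError(f"Invalid routing strategy: {routing_strategy}")
    try:
        return litellm.Router(model_list=model_list, routing_strategy=routing_strategy)
    except Exception as e:
        raise RuntimeError(f"Router initialization failed: {e}") from e
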
@@ -719,21 +719,24 @@ def test_interpolate_inputs():
    task = Task(
        description="Give me a list of 5 interesting ideas about {topic} to explore for an article, what makes them unique and interesting.",
        expected_output="Bullet point list of 5 interesting ideas about {topic}.",
        output_file="/tmp/{topic}/output_{date}.txt",
    )

    task.interpolate_inputs(inputs={"topic": "AI", "date": "2024"})
    assert (
        task.description
        == "Give me a list of 5 interesting ideas about AI to explore for an article, what makes them unique and interesting."
    )
    assert task.expected_output == "Bullet point list of 5 interesting ideas about AI."
    assert task.output_file == "/tmp/AI/output_2024.txt"

    task.interpolate_inputs(inputs={"topic": "ML", "date": "2025"})
    assert (
        task.description
        == "Give me a list of 5 interesting ideas about ML to explore for an article, what makes them unique and interesting."
    )
    assert task.expected_output == "Bullet point list of 5 interesting ideas about ML."
    assert task.output_file == "/tmp/ML/output_2025.txt"


def test_interpolate_only():
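# --- Editor's sketch (not part of the diff above): the hunk extends
# interpolate_inputs to cover output_file. A minimal placeholder substitution
# with str.format behaves the way the assertions expect; `_interpolate` is a
# hypothetical helper, not CrewAI's actual implementation.
def _interpolate(text: str, inputs: dict) -> str:
    return text.format(**inputs) if text else text

# _interpolate("/tmp/{topic}/output_{date}.txt", {"topic": "AI", "date": "2024"})
# -> "/tmp/AI/output_2024.txt"
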
@@ -872,3 +875,61 @@ def test_key():
    assert (
        task.key == hash
    ), "The key should be the hash of the non-interpolated description."


def test_output_file_validation():
    """Test output file path validation."""
    # Valid paths
    assert Task(
        description="Test task",
        expected_output="Test output",
        output_file="output.txt"
    ).output_file == "output.txt"
    assert Task(
        description="Test task",
        expected_output="Test output",
        output_file="/tmp/output.txt"
    ).output_file == "tmp/output.txt"
    assert Task(
        description="Test task",
        expected_output="Test output",
        output_file="{dir}/output_{date}.txt"
    ).output_file == "{dir}/output_{date}.txt"

    # Invalid paths
    with pytest.raises(ValueError, match="Path traversal"):
        Task(
            description="Test task",
            expected_output="Test output",
            output_file="../output.txt"
        )
    with pytest.raises(ValueError, match="Path traversal"):
        Task(
            description="Test task",
            expected_output="Test output",
            output_file="folder/../output.txt"
        )
    with pytest.raises(ValueError, match="Shell special characters"):
        Task(
            description="Test task",
            expected_output="Test output",
            output_file="output.txt | rm -rf /"
        )
    with pytest.raises(ValueError, match="Shell expansion"):
        Task(
            description="Test task",
            expected_output="Test output",
            output_file="~/output.txt"
        )
    with pytest.raises(ValueError, match="Shell expansion"):
        Task(
            description="Test task",
            expected_output="Test output",
            output_file="$HOME/output.txt"
        )
    with pytest.raises(ValueError, match="Invalid template variable"):
        Task(
            description="Test task",
            expected_output="Test output",
            output_file="{invalid-name}/output.txt"
        )