Mirror of https://github.com/crewAIInc/crewAI.git — synced 2025-12-28 02:08:29 +00:00

Compare commits: fix/python ... devin/1735 (27 commits)
Commits compared (SHA1): 82f9b26848, a548463fae, 45b802a625, ba0965ef87, d85898cf29, 09fd6058b0, 73f328860b, a0c322a535, 86f58c95de, 99fe91586d, 0c2d23dfe0, 2433819c4f, 97fc44c930, 409892d65f, 62f3df7ed5, 4cf8913d31, 82647358b2, 6cc2f510bf, 9a65abf6b8, b3185ad90c, c887ff1f47, 22e5d39884, 9ee6824ccd, da73865f25, 627b9f1abb, 1b8001bf98, e59e07e4f7
175 README.md
@@ -4,7 +4,7 @@

# **CrewAI**

🤖 **CrewAI**: Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.

🤖 **CrewAI**: Production-grade framework for orchestrating sophisticated AI agent systems. From simple automations to complex real-world applications, CrewAI provides precise control and deep customization. By fostering collaborative intelligence through flexible, production-ready architecture, CrewAI empowers agents to work together seamlessly, tackling complex business challenges with predictable, consistent results.

<h3>
@@ -22,13 +22,17 @@

- [Why CrewAI?](#why-crewai)
- [Getting Started](#getting-started)
- [Key Features](#key-features)
- [Understanding Flows and Crews](#understanding-flows-and-crews)
- [CrewAI vs LangGraph](#how-crewai-compares)
- [Examples](#examples)
  - [Quick Tutorial](#quick-tutorial)
  - [Write Job Descriptions](#write-job-descriptions)
  - [Trip Planner](#trip-planner)
  - [Stock Analysis](#stock-analysis)
  - [Using Crews and Flows Together](#using-crews-and-flows-together)
- [Connecting Your Crew to a Model](#connecting-your-crew-to-a-model)
- [How CrewAI Compares](#how-crewai-compares)
- [Frequently Asked Questions (FAQ)](#frequently-asked-questions-faq)
- [Contribution](#contribution)
- [Telemetry](#telemetry)
- [License](#license)
@@ -36,10 +40,40 @@

## Why CrewAI?

The power of AI collaboration has too much to offer.
CrewAI is designed to enable AI agents to assume roles, share goals, and operate in a cohesive unit - much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions.
CrewAI is a standalone framework, built from the ground up without dependencies on Langchain or other agent frameworks. It's designed to enable AI agents to assume roles, share goals, and operate in a cohesive unit - much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions.

## Getting Started

### Learning Resources

Learn CrewAI through our comprehensive courses:
- [Multi AI Agent Systems with CrewAI](https://www.deeplearning.ai/short-courses/multi-ai-agent-systems-with-crewai/) - Master the fundamentals of multi-agent systems
- [Practical Multi AI Agents and Advanced Use Cases](https://www.deeplearning.ai/short-courses/practical-multi-ai-agents-and-advanced-use-cases-with-crewai/) - Deep dive into advanced implementations

### Understanding Flows and Crews

CrewAI offers two powerful, complementary approaches that work seamlessly together to build sophisticated AI applications:

1. **Crews**: Teams of AI agents with true autonomy and agency, working together to accomplish complex tasks through role-based collaboration. Crews enable:
   - Natural, autonomous decision-making between agents
   - Dynamic task delegation and collaboration
   - Specialized roles with defined goals and expertise
   - Flexible problem-solving approaches

2. **Flows**: Production-ready, event-driven workflows that deliver precise control over complex automations. Flows provide:
   - Fine-grained control over execution paths for real-world scenarios
   - Secure, consistent state management between tasks
   - Clean integration of AI agents with production Python code
   - Conditional branching for complex business logic

The true power of CrewAI emerges when combining Crews and Flows. This synergy allows you to:
- Build complex, production-grade applications
- Balance autonomy with precise control
- Handle sophisticated real-world scenarios
- Maintain clean, maintainable code structure

### Getting Started with Installation

To get started with CrewAI, follow these simple steps:

### 1. Installation
@@ -51,7 +85,6 @@ First, install CrewAI:

```shell
pip install crewai
```

If you want to install the 'crewai' package along with its optional features that include additional tools for agents, you can do so by using the following command:

```shell
@@ -59,6 +92,22 @@ pip install 'crewai[tools]'
```
The command above installs the basic package and also adds extra components which require more dependencies to function.

### Troubleshooting Dependencies

If you encounter issues during installation or usage, here are some common solutions:

#### Common Issues

1. **ModuleNotFoundError: No module named 'tiktoken'**
   - Install tiktoken explicitly: `pip install 'crewai[embeddings]'`
   - If using embedchain or other tools: `pip install 'crewai[tools]'`

2. **Failed building wheel for tiktoken**
   - Ensure Rust compiler is installed (see installation steps above)
   - For Windows: Verify Visual C++ Build Tools are installed
   - Try upgrading pip: `pip install --upgrade pip`
   - If issues persist, use a pre-built wheel: `pip install tiktoken --prefer-binary`

### 2. Setting Up Your Crew with the YAML Configuration

To create a new CrewAI project, run the following CLI (Command Line Interface) command:
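The command itself is elided in this hunk; as a hedged sketch (the project name is hypothetical, and the `crewai create crew` subcommand is assumed from the CrewAI CLI):

```shell
# Hypothetical project name; creates the YAML-based project scaffold
crewai create crew my_project
```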
@@ -264,13 +313,16 @@ In addition to the sequential process, you can use the hierarchical process, whi

## Key Features

- **Role-Based Agent Design**: Customize agents with specific roles, goals, and tools.
- **Autonomous Inter-Agent Delegation**: Agents can autonomously delegate tasks and inquire amongst themselves, enhancing problem-solving efficiency.
- **Flexible Task Management**: Define tasks with customizable tools and assign them to agents dynamically.
- **Processes Driven**: Currently only supports `sequential` task execution and `hierarchical` processes, but more complex processes like consensual and autonomous are being worked on.
- **Save output as file**: Save the output of individual tasks as a file, so you can use it later.
- **Parse output as Pydantic or Json**: Parse the output of individual tasks as a Pydantic model or as a Json if you want to.
- **Works with Open Source Models**: Run your crew using OpenAI or open source models; refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models, even ones running locally!
**Note**: CrewAI is a standalone framework built from the ground up, without dependencies on Langchain or other agent frameworks.

- **Deep Customization**: Build sophisticated agents with full control over the system - from overriding inner prompts to accessing low-level APIs. Customize roles, goals, tools, and behaviors while maintaining clean abstractions.
- **Autonomous Inter-Agent Delegation**: Agents can autonomously delegate tasks and inquire amongst themselves, enabling complex problem-solving in real-world scenarios.
- **Flexible Task Management**: Define and customize tasks with granular control, from simple operations to complex multi-step processes.
- **Production-Grade Architecture**: Support for both high-level abstractions and low-level customization, with robust error handling and state management.
- **Predictable Results**: Ensure consistent, accurate outputs through programmatic guardrails, agent training capabilities, and flow-based execution control. See our [documentation on guardrails](https://docs.crewai.com/how-to/guardrails/) for implementation details.
- **Model Flexibility**: Run your crew using OpenAI or open source models with production-ready integrations. See [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) for detailed configuration options.
- **Event-Driven Flows**: Build complex, real-world workflows with precise control over execution paths, state management, and conditional logic.
- **Process Orchestration**: Achieve any workflow pattern through flows - from simple sequential and hierarchical processes to complex, custom orchestration patterns with conditional branching and parallel execution.


@@ -305,6 +357,98 @@ You can test different real life examples of AI crews in the [CrewAI-examples re

[](https://www.youtube.com/watch?v=e0Uj4yWdaAg "Stock Analysis")

### Using Crews and Flows Together

CrewAI's power truly shines when combining Crews with Flows to create sophisticated automation pipelines. Here's how you can orchestrate multiple Crews within a Flow:
```python
from crewai.flow.flow import Flow, listen, start, router
from crewai import Crew, Agent, Task, Process  # Process is needed for process=Process.sequential below
from pydantic import BaseModel

# Define structured state for precise control
class MarketState(BaseModel):
    sentiment: str = "neutral"
    confidence: float = 0.0
    recommendations: list = []

class AdvancedAnalysisFlow(Flow[MarketState]):
    @start()
    def fetch_market_data(self):
        # Demonstrate low-level control with structured state
        self.state.sentiment = "analyzing"
        return {"sector": "tech", "timeframe": "1W"}  # These parameters match the task description template

    @listen(fetch_market_data)
    def analyze_with_crew(self, market_data):
        # Show crew agency through specialized roles
        analyst = Agent(
            role="Senior Market Analyst",
            goal="Conduct deep market analysis with expert insight",
            backstory="You're a veteran analyst known for identifying subtle market patterns"
        )
        researcher = Agent(
            role="Data Researcher",
            goal="Gather and validate supporting market data",
            backstory="You excel at finding and correlating multiple data sources"
        )

        analysis_task = Task(
            description="Analyze {sector} sector data for the past {timeframe}",
            expected_output="Detailed market analysis with confidence score",
            agent=analyst
        )
        research_task = Task(
            description="Find supporting data to validate the analysis",
            expected_output="Corroborating evidence and potential contradictions",
            agent=researcher
        )

        # Demonstrate crew autonomy
        analysis_crew = Crew(
            agents=[analyst, researcher],
            tasks=[analysis_task, research_task],
            process=Process.sequential,
            verbose=True
        )
        return analysis_crew.kickoff(inputs=market_data)  # Pass market_data as named inputs

    @router(analyze_with_crew)
    def determine_next_steps(self):
        # Show flow control with conditional routing
        if self.state.confidence > 0.8:
            return "high_confidence"
        elif self.state.confidence > 0.5:
            return "medium_confidence"
        return "low_confidence"

    @listen("high_confidence")
    def execute_strategy(self):
        # Demonstrate complex decision making
        strategist = Agent(
            role="Strategy Expert",
            goal="Develop optimal market strategy",
            backstory="You turn market analyses into actionable strategies"  # backstory is required on Agent
        )
        strategy_crew = Crew(
            agents=[strategist],
            tasks=[
                Task(description="Create detailed strategy based on analysis",
                     expected_output="Step-by-step action plan",
                     agent=strategist)  # tasks in a sequential crew need an assigned agent
            ]
        )
        return strategy_crew.kickoff()

    @listen("medium_confidence", "low_confidence")
    def request_additional_analysis(self):
        self.state.recommendations.append("Gather more data")
        return "Additional analysis required"
```
This example demonstrates how to:
1. Use Python code for basic data operations
2. Create and execute Crews as steps in your workflow
3. Use Flow decorators to manage the sequence of operations
4. Implement conditional branching based on Crew results

## Connecting Your Crew to a Model

CrewAI supports using various LLMs through a variety of connection options. By default your agents will use the OpenAI API when querying the model. However, there are several other ways to allow your agents to connect to models. For example, you can configure your agents to use a local model via the Ollama tool.
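For instance, a minimal sketch of pointing an agent at a locally running Ollama model (the model name and base URL here are assumptions; adjust them to your local setup):

```python
from crewai import Agent, LLM

# Hypothetical local setup: assumes Ollama is serving llama3.1 at its default port
local_llm = LLM(model="ollama/llama3.1", base_url="http://localhost:11434")

researcher = Agent(
    role="Local Researcher",
    goal="Answer questions using a locally hosted model",
    backstory="Runs entirely against a local LLM for privacy-sensitive work",
    llm=local_llm,  # the agent now queries Ollama instead of the OpenAI API
)
```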
@@ -313,9 +457,13 @@ Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-

## How CrewAI Compares

**CrewAI's Advantage**: CrewAI is built with production in mind. It offers the flexibility of Autogen's conversational agents and the structured process approach of ChatDev, but without the rigidity. CrewAI's processes are designed to be dynamic and adaptable, fitting seamlessly into both development and production workflows.

**CrewAI's Advantage**: CrewAI combines autonomous agent intelligence with precise workflow control through its unique Crews and Flows architecture. The framework excels at both high-level orchestration and low-level customization, enabling complex, production-grade systems with granular control.

- **Autogen**: While Autogen does good in creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
- **LangGraph**: While LangGraph provides a foundation for building agent workflows, its approach requires significant boilerplate code and complex state management patterns. The framework's tight coupling with LangChain can limit flexibility when implementing custom agent behaviors or integrating with external systems.

*P.S. CrewAI demonstrates significant performance advantages over LangGraph, executing 5.76x faster in certain cases like this QA task example ([see comparison](https://github.com/crewAIInc/crewAI-examples/tree/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/QA%20Agent)) while achieving higher evaluation scores with faster completion times in certain coding tasks, like in this example ([detailed analysis](https://github.com/crewAIInc/crewAI-examples/blob/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/Coding%20Assistant/coding_assistant_eval.ipynb)).*

- **Autogen**: While Autogen excels at creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.

- **ChatDev**: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications.
@@ -440,5 +588,8 @@ A: CrewAI uses anonymous telemetry to collect usage data for improvement purpose

### Q: Where can I find examples of CrewAI in action?
A: You can find various real-life examples in the [CrewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), including trip planners, stock analysis tools, and more.

### Q: What is the difference between Crews and Flows?
A: Crews and Flows serve different but complementary purposes in CrewAI. Crews are teams of AI agents working together to accomplish specific tasks through role-based collaboration, delivering accurate and predictable results. Flows, on the other hand, are event-driven workflows that can orchestrate both Crews and regular Python code, allowing you to build complex automation pipelines with secure state management and conditional execution paths.

### Q: How can I contribute to CrewAI?
A: Contributions are welcome! You can fork the repository, create a new branch for your feature, add your improvement, and send a pull request. Check the Contribution section in the README for more details.
@@ -79,6 +79,55 @@ crew = Crew(
result = crew.kickoff(inputs={"question": "What city does John live in and how old is he?"})
```

Here's another example with the `CrewDoclingSource`:
```python Code
from crewai import LLM, Agent, Crew, Process, Task
from crewai.knowledge.source.crew_docling_source import CrewDoclingSource

# Create a knowledge source
content_source = CrewDoclingSource(
    file_paths=[
        "https://lilianweng.github.io/posts/2024-11-28-reward-hacking",
        "https://lilianweng.github.io/posts/2024-07-07-hallucination",
    ],
)

# Create an LLM with a temperature of 0 to ensure deterministic outputs
llm = LLM(model="gpt-4o-mini", temperature=0)

# Create an agent with the knowledge store
agent = Agent(
    role="About papers",
    goal="You know everything about the papers.",
    backstory="""You are a master at understanding papers and their content.""",
    verbose=True,
    allow_delegation=False,
    llm=llm,
)
task = Task(
    description="Answer the following questions about the papers: {question}",
    expected_output="An answer to the question.",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,
    process=Process.sequential,
    knowledge_sources=[
        content_source
    ],  # Enable knowledge by adding the sources here. You can also add more sources to the sources list.
)

result = crew.kickoff(
    inputs={
        "question": "What is the reward hacking paper about? Be sure to provide sources."
    }
)
```
## Knowledge Configuration

### Chunking Configuration
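The chunking options themselves are elided in this hunk; as a hedged sketch (the `chunk_size` and `chunk_overlap` parameter names are assumptions based on the CrewAI knowledge API — verify against the docs), configuration might look like:

```python Code
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

# Assumed parameter names and values; adjust to the documented defaults
source = StringKnowledgeSource(
    content="Long document text...",
    chunk_size=4000,     # characters per chunk
    chunk_overlap=200,   # overlap between consecutive chunks
)
```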
@@ -122,6 +171,58 @@ crewai reset-memories --knowledge

This is useful when you've updated your knowledge sources and want to ensure that the agents are using the most recent information.

## Agent-Specific Knowledge

While knowledge can be provided at the crew level using `crew.knowledge_sources`, individual agents can also have their own knowledge sources using the `knowledge_sources` parameter:

```python Code
from crewai import Agent, Task, Crew
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

# Create agent-specific knowledge about a product
product_specs = StringKnowledgeSource(
    content="""The XPS 13 laptop features:
    - 13.4-inch 4K display
    - Intel Core i7 processor
    - 16GB RAM
    - 512GB SSD storage
    - 12-hour battery life""",
    metadata={"category": "product_specs"}
)

# Create a support agent with product knowledge
support_agent = Agent(
    role="Technical Support Specialist",
    goal="Provide accurate product information and support.",
    backstory="You are an expert on our laptop products and specifications.",
    knowledge_sources=[product_specs]  # Agent-specific knowledge
)

# Create a task that requires product knowledge
support_task = Task(
    description="Answer this customer question: {question}",
    expected_output="An accurate answer based on the product specifications",  # expected_output is required on Task
    agent=support_agent
)

# Create and run the crew
crew = Crew(
    agents=[support_agent],
    tasks=[support_task]
)

# Get answer about the laptop's specifications
result = crew.kickoff(
    inputs={"question": "What is the storage capacity of the XPS 13?"}
)
```

<Info>
Benefits of agent-specific knowledge:
- Give agents specialized information for their roles
- Maintain separation of concerns between agents
- Combine with crew-level knowledge for layered information access
</Info>

## Custom Knowledge Sources

CrewAI allows you to create custom knowledge sources for any type of data by extending the `BaseKnowledgeSource` class. Let's create a practical example that fetches and processes space news articles.
@@ -29,7 +29,7 @@ Large Language Models (LLMs) are the core intelligence behind CrewAI agents. The

## Available Models and Their Capabilities

Here's a detailed breakdown of supported models and their capabilities; you can compare performance at [lmarena.ai](https://lmarena.ai/):
Here's a detailed breakdown of supported models and their capabilities; you can compare performance at [lmarena.ai](https://lmarena.ai/?leaderboard) and [artificialanalysis.ai](https://artificialanalysis.ai/):

<Tabs>
<Tab title="OpenAI">
@@ -121,12 +121,18 @@ Here's a detailed breakdown of supported models and their capabilities, you can
<Tab title="Gemini">
| Model | Context Window | Best For |
|-------|---------------|-----------|
| Gemini 1.5 Flash | 1M tokens | Balanced multimodal model, good for most tasks |
| Gemini 1.5 Flash 8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| Gemini 1.5 Pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
| gemini-2.0-flash-exp | 1M tokens | Higher quality at faster speed, multimodal model, good for most tasks |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |

<Tip>
Google's Gemini models are all multimodal, supporting audio, images, video, and text, with support for context caching, JSON schema, function calling, and more.

These models are available via API_KEY from
[The Gemini API](https://ai.google.dev/gemini-api/docs) and also from
[Google Cloud Vertex](https://cloud.google.com/vertex-ai/generative-ai/docs/migrate/migrate-google-ai) as part of the
[Model Garden](https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/explore-models).
</Tip>
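As a hedged sketch of selecting one of these models from CrewAI (the `gemini/<model>` string follows the LiteLLM naming convention and is an assumption; verify it for your provider):

```python Code
import os
from crewai import LLM

gemini_llm = LLM(
    model="gemini/gemini-1.5-pro",          # assumed LiteLLM-style model id
    api_key=os.environ["GEMINI_API_KEY"],   # key obtained from the Gemini API console
)
```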
</Tab>
<Tab title="Groq">
@@ -135,7 +141,6 @@ Here's a detailed breakdown of supported models and their capabilities, you can
| Llama 3.1 70B/8B | 131,072 tokens | High-performance, large context tasks |
| Llama 3.2 Series | 8,192 tokens | General-purpose tasks |
| Mixtral 8x7B | 32,768 tokens | Balanced performance and context |
| Gemma Series | 8,192 tokens | Efficient, smaller-scale tasks |

<Tip>
Groq is known for its fast inference speeds, making it suitable for real-time applications.
@@ -146,7 +151,7 @@ Here's a detailed breakdown of supported models and their capabilities, you can
|----------|---------------|--------------|
| Deepseek Chat | 128,000 tokens | Specialized in technical discussions |
| Claude 3 | Up to 200K tokens | Strong reasoning, code understanding |
| Gemini | Varies by model | Multimodal capabilities |
| Gemma Series | 8,192 tokens | Efficient, smaller-scale tasks |

<Info>
Provider selection should consider factors like:
@@ -6,7 +6,7 @@ icon: list-check

## Overview of a Task

In the CrewAI framework, a `Task` is a specific assignment completed by an `Agent`.

Tasks provide all necessary details for execution, such as a description, the agent responsible, required tools, and more, facilitating a wide range of action complexities.
@@ -263,8 +263,148 @@ analysis_task = Task(
)
```

## Task Guardrails

Task guardrails provide a way to validate and transform task outputs before they are passed to the next task. This feature helps ensure data quality and provides feedback to agents when their output doesn't meet specific criteria.

### Using Task Guardrails

To add a guardrail to a task, provide a validation function through the `guardrail` parameter:

```python Code
from typing import Tuple, Union, Dict, Any

def validate_blog_content(result: str) -> Tuple[bool, Union[Dict[str, Any], str]]:
    """Validate blog content meets requirements."""
    try:
        # Check word count
        word_count = len(result.split())
        if word_count > 200:
            return (False, {
                "error": "Blog content exceeds 200 words",
                "code": "WORD_COUNT_ERROR",
                "context": {"word_count": word_count}
            })

        # Additional validation logic here
        return (True, result.strip())
    except Exception as e:
        return (False, {
            "error": "Unexpected error during validation",
            "code": "SYSTEM_ERROR"
        })

blog_task = Task(
    description="Write a blog post about AI",
    expected_output="A blog post under 200 words",
    agent=blog_agent,
    guardrail=validate_blog_content  # Add the guardrail function
)
```
### Guardrail Function Requirements

1. **Function Signature**:
   - Must accept exactly one parameter (the task output)
   - Should return a tuple of `(bool, Any)`
   - Type hints are recommended but optional

2. **Return Values**:
   - Success: Return `(True, validated_result)`
   - Failure: Return `(False, error_details)`

### Error Handling Best Practices

1. **Structured Error Responses**:
```python Code
def validate_with_context(result: str) -> Tuple[bool, Union[Dict[str, Any], str]]:
    # perform_validation and ValidationError are placeholders for your own validation logic
    try:
        # Main validation logic
        validated_data = perform_validation(result)
        return (True, validated_data)
    except ValidationError as e:
        return (False, {
            "error": str(e),
            "code": "VALIDATION_ERROR",
            "context": {"input": result}
        })
    except Exception as e:
        return (False, {
            "error": "Unexpected error",
            "code": "SYSTEM_ERROR"
        })
```
2. **Error Categories**:
   - Use specific error codes
   - Include relevant context
   - Provide actionable feedback

3. **Validation Chain**:
```python Code
from typing import Any, Dict, List, Tuple, Union

def complex_validation(result: str) -> Tuple[bool, Union[str, Dict[str, Any]]]:
    """Chain multiple validation steps."""
    # validate_content and format_output are placeholders for your own helpers
    # Step 1: Basic validation
    if not result:
        return (False, {"error": "Empty result", "code": "EMPTY_INPUT"})

    # Step 2: Content validation
    try:
        validated = validate_content(result)
        if not validated:
            return (False, {"error": "Invalid content", "code": "CONTENT_ERROR"})

        # Step 3: Format validation
        formatted = format_output(validated)
        return (True, formatted)
    except Exception as e:
        return (False, {
            "error": str(e),
            "code": "VALIDATION_ERROR",
            "context": {"step": "content_validation"}
        })
```
### Handling Guardrail Results

When a guardrail returns `(False, error)`:
1. The error is sent back to the agent
2. The agent attempts to fix the issue
3. The process repeats until:
   - The guardrail returns `(True, result)`
   - Maximum retries are reached

Example with retry handling:
```python Code
import json
from typing import Any, Dict, Tuple, Union  # Dict/Any are used in the return annotation below

def validate_json_output(result: str) -> Tuple[bool, Union[Dict[str, Any], str]]:
    """Validate and parse JSON output."""
    try:
        # Try to parse as JSON
        data = json.loads(result)
        return (True, data)
    except json.JSONDecodeError as e:
        return (False, {
            "error": "Invalid JSON format",
            "code": "JSON_ERROR",
            "context": {"line": e.lineno, "column": e.colno}
        })

task = Task(
    description="Generate a JSON report",
    expected_output="A valid JSON object",
    agent=analyst,
    guardrail=validate_json_output,
    max_retries=3  # Limit retry attempts
)
```
## Getting Structured Consistent Outputs from Tasks

When you need to ensure that a task outputs a structured and consistent format, you can use the `output_pydantic` or `output_json` properties on a task. These properties allow you to define the expected output structure, making it easier to parse and utilize the results in your application.
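For example, a minimal sketch of `output_pydantic` (the `ReportOutput` model and the analyst agent are illustrative assumptions):

```python Code
from pydantic import BaseModel
from crewai import Agent, Task

class ReportOutput(BaseModel):
    title: str
    summary: str

analyst = Agent(
    role="Research Analyst",
    goal="Produce structured research summaries",
    backstory="You distill findings into clear, structured reports",
)

report_task = Task(
    description="Summarize the research findings",
    expected_output="A short structured report with a title and summary",
    agent=analyst,
    output_pydantic=ReportOutput,  # the task result is parsed into ReportOutput
)
```

The task result can then be consumed as a validated object rather than free-form text.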
<Note>
It's also important to note that the output of the final task of a crew becomes the final output of the actual crew itself.
@@ -608,6 +748,114 @@ While creating and executing tasks, certain validation mechanisms are in place t

These validations help in maintaining the consistency and reliability of task executions within the crewAI framework.

## Task Guardrails

Task guardrails provide a powerful way to validate, transform, or filter task outputs before they are passed to the next task. Guardrails are optional functions that execute before the next task starts, allowing you to ensure that task outputs meet specific requirements or formats.
### Basic Usage

```python Code
import json
from typing import Tuple, Union
from crewai import Task

def validate_json_output(result: str) -> Tuple[bool, Union[dict, str]]:
    """Validate that the output is valid JSON."""
    try:
        json_data = json.loads(result)
        return (True, json_data)
    except json.JSONDecodeError:
        return (False, "Output must be valid JSON")

task = Task(
    description="Generate JSON data",
    expected_output="Valid JSON object",
    guardrail=validate_json_output
)
```
### How Guardrails Work

1. **Optional Attribute**: Guardrails are an optional attribute at the task level, allowing you to add validation only where needed.
2. **Execution Timing**: The guardrail function is executed before the next task starts, ensuring valid data flow between tasks.
3. **Return Format**: Guardrails must return a tuple of `(success, data)`:
   - If `success` is `True`, `data` is the validated/transformed result
   - If `success` is `False`, `data` is the error message
4. **Result Routing**:
   - On success (`True`), the result is automatically passed to the next task
   - On failure (`False`), the error is sent back to the agent to generate a new answer

### Common Use Cases
#### Data Format Validation
```python Code
import re
from typing import Tuple

def validate_email_format(result: str) -> Tuple[bool, str]:
    """Ensure the output contains a valid email address."""
    email_pattern = r'^[\w\.-]+@[\w\.-]+\.\w+$'
    if re.match(email_pattern, result.strip()):
        return (True, result.strip())
    return (False, "Output must be a valid email address")
```
#### Content Filtering
```python Code
from typing import Tuple

def filter_sensitive_info(result: str) -> Tuple[bool, str]:
    """Remove or validate sensitive information."""
    sensitive_patterns = ['SSN:', 'password:', 'secret:']
    for pattern in sensitive_patterns:
        if pattern.lower() in result.lower():
            return (False, f"Output contains sensitive information ({pattern})")
    return (True, result)
```
#### Data Transformation
```python Code
import re
from typing import Tuple

def normalize_phone_number(result: str) -> Tuple[bool, str]:
    """Ensure phone numbers are in a consistent format."""
    digits = re.sub(r'\D', '', result)
    if len(digits) == 10:
        formatted = f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"
        return (True, formatted)
    return (False, "Output must be a 10-digit phone number")
```
### Advanced Features

#### Chaining Multiple Validations
```python Code
def chain_validations(*validators):
    """Chain multiple validators together."""
    def combined_validator(result):
        for validator in validators:
            success, data = validator(result)
            if not success:
                return (False, data)
            result = data
        return (True, result)
    return combined_validator

# Usage
task = Task(
    description="Get user contact info",
    expected_output="Email and phone",
    guardrail=chain_validations(
        validate_email_format,
        filter_sensitive_info
    )
)
```
#### Custom Retry Logic
```python Code
task = Task(
    description="Generate data",
    expected_output="Valid data",
    guardrail=validate_data,
    max_retries=5  # Override default retry limit
)
```
## Creating Directories when Saving Files

You can now specify if a task should create directories when saving its output to a file. This is particularly useful for organizing outputs and ensuring that file paths are correctly structured.
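The concrete example is elided in the hunk below; a minimal sketch of the idea (the `create_directory` flag and the file path are assumptions for illustration):

```python Code
save_output_task = Task(
    description="Summarize today's findings",
    expected_output="A markdown summary",
    agent=reporting_agent,  # assumes an Agent defined elsewhere
    output_file="reports/2025/summary.md",  # nested output path
    create_directory=True,  # assumed flag: create reports/2025 if it doesn't exist
)
```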
@@ -629,7 +877,7 @@ save_output_task = Task(

## Conclusion

Tasks are the driving force behind the actions of agents in CrewAI.
By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit.
Equipping tasks with appropriate tools, understanding the execution process, and following robust validation practices are crucial for maximizing CrewAI's potential,
ensuring agents are effectively prepared for their assignments and that tasks are executed as intended.
docs/how-to/Portkey-Observability-and-Guardrails.md (211 lines, new file)
@@ -0,0 +1,211 @@
|
||||
# Portkey Integration with CrewAI
|
||||
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-CrewAI.png" alt="Portkey CrewAI Header Image" width="70%" />
|
||||
|
||||
|
||||
[Portkey](https://portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai) is a 2-line upgrade to make your CrewAI agents reliable, cost-efficient, and fast.
|
||||
|
||||
Portkey adds 4 core production capabilities to any CrewAI agent:
|
||||
1. Routing to **200+ LLMs**
|
||||
2. Making each LLM call more robust
|
||||
3. Full-stack tracing & cost, performance analytics
|
||||
4. Real-time guardrails to enforce behavior
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
## Getting Started
|
||||
|
||||
1. **Install Required Packages:**
|
||||
|
||||
```bash
|
||||
pip install -qU crewai portkey-ai
|
||||
```
|
||||
|
||||
2. **Configure the LLM Client:**
|
||||
|
||||
To build CrewAI Agents with Portkey, you'll need two keys:
|
||||
- **Portkey API Key**: Sign up on the [Portkey app](https://app.portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai) and copy your API key
|
||||
- **Virtual Key**: Virtual Keys securely manage your LLM API keys in one place. Store your LLM provider API keys securely in Portkey's vault
|
||||
|
||||
```python
|
||||
from crewai import LLM
|
||||
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL
|
||||
|
||||
gpt_llm = LLM(
|
||||
model="gpt-4",
|
||||
base_url=PORTKEY_GATEWAY_URL,
|
||||
api_key="dummy", # We are using Virtual key
|
||||
extra_headers=createHeaders(
|
||||
api_key="YOUR_PORTKEY_API_KEY",
|
||||
virtual_key="YOUR_VIRTUAL_KEY", # Enter your Virtual key from Portkey
|
||||
)
|
||||
)
|
||||
```
|
||||
|
||||
3. **Create and Run Your First Agent:**
|
||||
|
||||
```python
|
||||
from crewai import Agent, Task, Crew
|
||||
|
||||
# Define your agents with roles and goals
|
||||
coder = Agent(
|
||||
role='Software developer',
|
||||
goal='Write clear, concise code on demand',
|
||||
backstory='An expert coder with a keen eye for software trends.',
|
||||
llm=gpt_llm
|
||||
)
|
||||
|
||||
# Create tasks for your agents
|
||||
task1 = Task(
|
||||
description="Define the HTML for making a simple website with heading- Hello World! Portkey is working!",
|
||||
expected_output="A clear and concise HTML code",
|
||||
agent=coder
|
||||
)
|
||||
|
||||
# Instantiate your crew
|
||||
crew = Crew(
|
||||
agents=[coder],
|
||||
tasks=[task1],
|
||||
)
|
||||
|
||||
result = crew.kickoff()
|
||||
print(result)
|
||||
```
|
||||
|
||||
|
||||
## Key Features
|
||||
|
||||
| Feature | Description |
|
||||
|---------|-------------|
|
||||
| 🌐 Multi-LLM Support | Access OpenAI, Anthropic, Gemini, Azure, and 250+ providers through a unified interface |
|
||||
| 🛡️ Production Reliability | Implement retries, timeouts, load balancing, and fallbacks |
|
||||
| 📊 Advanced Observability | Track 40+ metrics including costs, tokens, latency, and custom metadata |
|
||||
| 🔍 Comprehensive Logging | Debug with detailed execution traces and function call logs |
|
||||
| 🚧 Security Controls | Set budget limits and implement role-based access control |
|
||||
| 🔄 Performance Analytics | Capture and analyze feedback for continuous improvement |
|
||||
| 💾 Intelligent Caching | Reduce costs and latency with semantic or simple caching |
|
||||
|
||||
|
||||
## Production Features with Portkey Configs
|
||||
|
||||
All features mentioned below are through Portkey's Config system. Portkey's Config system allows you to define routing strategies using simple JSON objects in your LLM API calls. You can create and manage Configs directly in your code or through the Portkey Dashboard. Each Config has a unique ID for easy reference.
|
||||
|
||||
<Frame>
|
||||
<img src="https://raw.githubusercontent.com/Portkey-AI/docs-core/refs/heads/main/images/libraries/libraries-3.avif"/>
|
||||
</Frame>
|
||||
|
||||
|
||||
### 1. Use 250+ LLMs
|
||||
Access various LLMs like Anthropic, Gemini, Mistral, Azure OpenAI, and more with minimal code changes. Switch between providers or use them together seamlessly. [Learn more about Universal API](https://portkey.ai/docs/product/ai-gateway/universal-api)
|
||||
|
||||
|
||||
Easily switch between different LLM providers:
|
||||
|
||||
```python
|
||||
# Anthropic Configuration
|
||||
anthropic_llm = LLM(
|
||||
model="claude-3-5-sonnet-latest",
|
||||
base_url=PORTKEY_GATEWAY_URL,
|
||||
api_key="dummy",
|
||||
extra_headers=createHeaders(
|
||||
api_key="YOUR_PORTKEY_API_KEY",
|
||||
virtual_key="YOUR_ANTHROPIC_VIRTUAL_KEY", #You don't need provider when using Virtual keys
|
||||
trace_id="anthropic_agent"
|
||||
)
|
||||
)
|
||||
|
||||
# Azure OpenAI Configuration
|
||||
azure_llm = LLM(
|
||||
model="gpt-4",
|
||||
base_url=PORTKEY_GATEWAY_URL,
|
||||
api_key="dummy",
|
||||
extra_headers=createHeaders(
|
||||
api_key="YOUR_PORTKEY_API_KEY",
|
||||
virtual_key="YOUR_AZURE_VIRTUAL_KEY", #You don't need provider when using Virtual keys
|
||||
trace_id="azure_agent"
|
||||
)
|
||||
)
|
||||
```
|
||||
|
||||
|
||||
### 2. Caching
|
||||
Improve response times and reduce costs with two powerful caching modes:
|
||||
- **Simple Cache**: Perfect for exact matches
|
||||
- **Semantic Cache**: Matches responses for requests that are semantically similar
|
||||
[Learn more about Caching](https://portkey.ai/docs/product/ai-gateway/cache-simple-and-semantic)
|
||||
|
||||
```py
|
||||
config = {
|
||||
"cache": {
|
||||
"mode": "semantic", # or "simple" for exact matching
|
||||
}
|
||||
}
|
||||
```
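To apply a config like this from CrewAI, it can be passed along when building the Portkey-backed LLM client — a sketch, assuming the `config` parameter of `createHeaders` accepts the dictionary above:

```python
# Hypothetical wiring of the cache config into the Portkey-backed LLM client
cached_llm = LLM(
    model="gpt-4",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",  # using a Virtual key, as above
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_VIRTUAL_KEY",
        config=config,  # the cache config defined above
    )
)
```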
|
||||
|
||||
### 3. Production Reliability
|
||||
Portkey provides comprehensive reliability features:
|
||||
- **Automatic Retries**: Handle temporary failures gracefully
|
||||
- **Request Timeouts**: Prevent hanging operations
|
||||
- **Conditional Routing**: Route requests based on specific conditions
|
||||
- **Fallbacks**: Set up automatic provider failovers
|
||||
- **Load Balancing**: Distribute requests efficiently
|
||||
|
||||
[Learn more about Reliability Features](https://portkey.ai/docs/product/ai-gateway/)
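As a hedged sketch of what such a config could look like (key names follow Portkey's config schema as described in the linked docs; treat them as assumptions to verify):

```python
# Assumed Portkey config combining retries with a provider fallback
reliability_config = {
    "retry": {"attempts": 3},
    "strategy": {"mode": "fallback"},
    "targets": [
        {"virtual_key": "YOUR_OPENAI_VIRTUAL_KEY"},
        {"virtual_key": "YOUR_ANTHROPIC_VIRTUAL_KEY"},
    ],
}
```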
|
||||
|
||||
|
||||
|
||||
### 4. Metrics
|
||||
|
||||
Agent runs are complex. Portkey automatically logs **40+ comprehensive metrics** for your AI agents, including cost, tokens used, latency, etc. Whether you need a broad overview or granular insights into your agent runs, Portkey's customizable filters provide the metrics you need.
|
||||
|
||||
|
||||
- Cost per agent interaction
|
||||
- Response times and latency
|
||||
- Token usage and efficiency
|
||||
- Success/failure rates
|
||||
- Cache hit rates
|
||||
|
||||
<img src="https://github.com/siddharthsambharia-portkey/Portkey-Product-Images/blob/main/Portkey-Dashboard.png?raw=true" width="70%" alt="Portkey Dashboard" />
|
||||
|
||||
### 5. Detailed Logging
|
||||
Logs are essential for understanding agent behavior, diagnosing issues, and improving performance. They provide a detailed record of agent activities and tool use, which is crucial for debugging and optimizing processes.
|
||||
|
||||
|
||||
Access a dedicated section to view records of agent executions, including parameters, outcomes, function calls, and errors. Filter logs based on multiple parameters such as trace ID, model, tokens used, and metadata.
|
||||
|
||||
<details>
|
||||
<summary><b>Traces</b></summary>
|
||||
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-Traces.png" alt="Portkey Traces" width="70%" />
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary><b>Logs</b></summary>
|
||||
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-Logs.png" alt="Portkey Logs" width="70%" />
|
||||
</details>
|
||||
|
||||
### 6. Enterprise Security Features
|
||||
- Set budget limits and rate limits per Virtual Key (disposable API keys)
|
||||
- Implement role-based access control
|
||||
- Track system changes with audit logs
|
||||
- Configure data retention policies
|
||||
|
||||
|
||||
|
||||
For detailed information on creating and managing Configs, visit the [Portkey documentation](https://docs.portkey.ai/product/ai-gateway/configs).
|
||||
|
||||
## Resources
|
||||
|
||||
- [📘 Portkey Documentation](https://docs.portkey.ai)
|
||||
- [📊 Portkey Dashboard](https://app.portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai)
|
||||
- [🐦 Twitter](https://twitter.com/portkeyai)
|
||||
- [💬 Discord Community](https://discord.gg/DD7vgKK299)
docs/how-to/multimodal-agents.mdx (138 lines, new file)
@@ -0,0 +1,138 @@
|
||||
---
|
||||
title: Using Multimodal Agents
|
||||
description: Learn how to enable and use multimodal capabilities in your agents for processing images and other non-text content within the CrewAI framework.
|
||||
icon: image
|
||||
---
|
||||
|
||||
# Using Multimodal Agents
|
||||
|
||||
CrewAI supports multimodal agents that can process both text and non-text content like images. This guide will show you how to enable and use multimodal capabilities in your agents.
|
||||
|
||||
## Enabling Multimodal Capabilities
|
||||
|
||||
To create a multimodal agent, simply set the `multimodal` parameter to `True` when initializing your agent:
|
||||
|
||||
```python
from crewai import Agent

agent = Agent(
    role="Image Analyst",
    goal="Analyze and extract insights from images",
    backstory="An expert in visual content interpretation with years of experience in image analysis",
    multimodal=True  # This enables multimodal capabilities
)
```
|
||||
|
||||
When you set `multimodal=True`, the agent is automatically configured with the necessary tools for handling non-text content, including the `AddImageTool`.
|
||||
|
||||
## Working with Images
|
||||
|
||||
The multimodal agent comes pre-configured with the `AddImageTool`, which allows it to process images. You don't need to manually add this tool - it's automatically included when you enable multimodal capabilities.
|
||||
|
||||
Here's a complete example showing how to use a multimodal agent to analyze an image:
|
||||
|
||||
```python
from crewai import Agent, Task, Crew

# Create a multimodal agent
image_analyst = Agent(
    role="Product Analyst",
    goal="Analyze product images and provide detailed descriptions",
    backstory="Expert in visual product analysis with deep knowledge of design and features",
    multimodal=True
)

# Create a task for image analysis
task = Task(
    description="Analyze the product image at https://example.com/product.jpg and provide a detailed description",
    expected_output="A detailed description of the product in the image",  # expected_output is required on Task
    agent=image_analyst
)

# Create and run the crew
crew = Crew(
    agents=[image_analyst],
    tasks=[task]
)

result = crew.kickoff()
```
|
||||
|
||||
### Advanced Usage with Context
|
||||
|
||||
You can provide additional context or specific questions about the image when creating tasks for multimodal agents. The task description can include specific aspects you want the agent to focus on:
|
||||
|
||||
```python
from crewai import Agent, Task, Crew

# Create a multimodal agent for detailed analysis
expert_analyst = Agent(
    role="Visual Quality Inspector",
    goal="Perform detailed quality analysis of product images",
    backstory="Senior quality control expert with expertise in visual inspection",
    multimodal=True  # AddImageTool is automatically included
)

# Create a task with specific analysis requirements
inspection_task = Task(
    description="""
    Analyze the product image at https://example.com/product.jpg with focus on:
    1. Quality of materials
    2. Manufacturing defects
    3. Compliance with standards
    Provide a detailed report highlighting any issues found.
    """,
    expected_output="A detailed quality inspection report highlighting any issues found",  # expected_output is required on Task
    agent=expert_analyst
)

# Create and run the crew
crew = Crew(
    agents=[expert_analyst],
    tasks=[inspection_task]
)

result = crew.kickoff()
```
|
||||
|
||||
### Tool Details
|
||||
|
||||
When working with multimodal agents, the `AddImageTool` is automatically configured with the following schema:
|
||||
|
||||
```python
from typing import Optional

class AddImageToolSchema:
    image_url: str  # Required: The URL or path of the image to process
    action: Optional[str] = None  # Optional: Additional context or specific questions about the image
```
|
||||
|
||||
The multimodal agent will automatically handle the image processing through its built-in tools, allowing it to:
|
||||
- Access images via URLs or local file paths
|
||||
- Process image content with optional context or specific questions
|
||||
- Provide analysis and insights based on the visual information and task requirements
|
||||
|
||||
## Best Practices
|
||||
|
||||
When working with multimodal agents, keep these best practices in mind:
|
||||
|
||||
1. **Image Access**
|
||||
- Ensure your images are accessible via URLs that the agent can reach
|
||||
- For local images, consider hosting them temporarily or using absolute file paths
|
||||
- Verify that image URLs are valid and accessible before running tasks
|
||||
|
||||
2. **Task Description**
|
||||
- Be specific about what aspects of the image you want the agent to analyze
|
||||
- Include clear questions or requirements in the task description
|
||||
- Consider using the optional `action` parameter for focused analysis
|
||||
|
||||
3. **Resource Management**
|
||||
- Image processing may require more computational resources than text-only tasks
|
||||
- Some language models may require base64 encoding for image data
|
||||
- Consider batch processing for multiple images to optimize performance
|
||||
|
||||
4. **Environment Setup**
|
||||
- Verify that your environment has the necessary dependencies for image processing
|
||||
- Ensure your language model supports multimodal capabilities
|
||||
- Test with small images first to validate your setup
|
||||
|
||||
5. **Error Handling**
|
||||
- Implement proper error handling for image loading failures
|
||||
- Have fallback strategies for when image processing fails
|
||||
- Monitor and log image processing operations for debugging
|
||||
docs/tools/brave-search-tool.mdx (222 lines, new file)
@@ -0,0 +1,222 @@
|
||||
---
|
||||
title: BraveSearchTool
|
||||
description: A tool for performing web searches using the Brave Search API
|
||||
icon: search
|
||||
---
|
||||
|
||||
## BraveSearchTool
|
||||
|
||||
The BraveSearchTool enables web searches using the Brave Search API, providing customizable result counts, country-specific searches, and rate-limited operations. It formats search results with titles, URLs, and snippets for easy consumption.
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
pip install 'crewai[tools]'
|
||||
```
|
||||
|
||||
## Authentication
|
||||
|
||||
Set up your Brave Search API key:
|
||||
```bash
|
||||
export BRAVE_API_KEY='your-brave-api-key'
|
||||
```
|
||||
|
||||
## Usage Example
|
||||
|
||||
```python
|
||||
from crewai import Agent
|
||||
from crewai_tools import BraveSearchTool
|
||||
|
||||
# Basic initialization
|
||||
search_tool = BraveSearchTool()
|
||||
|
||||
# Advanced initialization with custom parameters
|
||||
search_tool = BraveSearchTool(
|
||||
country="US", # Country-specific search
|
||||
n_results=5, # Number of results to return
|
||||
save_file=True # Save results to file
|
||||
)
|
||||
|
||||
# Create an agent with the tool
|
||||
researcher = Agent(
|
||||
role='Web Researcher',
|
||||
goal='Search and analyze web content',
|
||||
backstory='Expert at finding relevant information online.',
|
||||
tools=[search_tool],
|
||||
verbose=True
|
||||
)
|
||||
```
|
||||
|
||||
## Input Schema
|
||||
|
||||
```python
from pydantic import BaseModel, Field

class BraveSearchToolSchema(BaseModel):
    search_query: str = Field(
        description="Mandatory search query you want to use to search the internet"
    )
```
|
||||
|
||||
## Function Signature
|
||||
|
||||
```python
|
||||
def __init__(
|
||||
self,
|
||||
country: Optional[str] = "",
|
||||
n_results: int = 10,
|
||||
save_file: bool = False,
|
||||
*args,
|
||||
**kwargs
|
||||
):
|
||||
"""
|
||||
Initialize the Brave search tool.
|
||||
|
||||
Args:
|
||||
country (Optional[str]): Country code for region-specific search
|
||||
n_results (int): Number of results to return (default: 10)
|
||||
save_file (bool): Whether to save results to file (default: False)
|
||||
"""
|
||||
|
||||
def _run(
|
||||
self,
|
||||
**kwargs: Any
|
||||
) -> str:
|
||||
"""
|
||||
Execute web search using Brave Search API.
|
||||
|
||||
Args:
|
||||
search_query (str): Query to search
|
||||
save_file (bool, optional): Override save_file setting
|
||||
n_results (int, optional): Override n_results setting
|
||||
|
||||
Returns:
|
||||
str: Formatted search results with titles, URLs, and snippets
|
||||
"""
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. API Authentication:
|
||||
- Securely store BRAVE_API_KEY
|
||||
- Keep API key confidential
|
||||
- Handle authentication errors
|
||||
|
||||
2. Rate Limiting:
|
||||
- Tool automatically handles rate limiting
|
||||
- Minimum 1-second interval between requests
|
||||
- Consider implementing additional rate limits
|
||||
|
||||
3. Search Optimization:
|
||||
- Use specific search queries
|
||||
- Adjust result count based on needs
|
||||
- Consider regional search requirements
|
||||
|
||||
4. Error Handling:
|
||||
- Handle API request failures
|
||||
- Manage parsing errors
|
||||
- Monitor rate limit errors
|
||||
|
||||
## Integration Example

```python
from crewai import Agent, Task, Crew
from crewai_tools import BraveSearchTool

# Initialize tool with custom configuration
search_tool = BraveSearchTool(
    country="GB",     # UK-specific search
    n_results=3,      # Limit to 3 results
    save_file=True    # Save results to file
)

# Create agent
researcher = Agent(
    role='Web Researcher',
    goal='Research latest AI developments',
    backstory='Expert at finding and analyzing tech news.',
    tools=[search_tool]
)

# Define task
research_task = Task(
    description="""Find the latest news about artificial
    intelligence developments in quantum computing.""",
    agent=researcher
)

# The tool will use:
# {
#     "search_query": "latest quantum computing AI developments"
# }

# Create crew
crew = Crew(
    agents=[researcher],
    tasks=[research_task]
)

# Execute
result = crew.kickoff()
```

## Advanced Usage

### Country-Specific Search
```python
# Initialize tools for different regions
us_search = BraveSearchTool(country="US")
uk_search = BraveSearchTool(country="GB")
jp_search = BraveSearchTool(country="JP")

# Compare results across regions
us_results = us_search.run(search_query="local news")
uk_results = uk_search.run(search_query="local news")
jp_results = jp_search.run(search_query="local news")
```

### Result Management
```python
# Save results to file
archival_search = BraveSearchTool(
    save_file=True,
    n_results=20
)

# Search and save
results = archival_search.run(
    search_query="historical events 2023"
)
# Results saved to search_results_YYYY-MM-DD_HH-MM-SS.txt
```

### Error Handling Example
```python
try:
    search_tool = BraveSearchTool()
    results = search_tool.run(
        search_query="important topic"
    )
    print(results)
except ValueError as e:  # API key missing
    print(f"Authentication error: {str(e)}")
except Exception as e:
    print(f"Search error: {str(e)}")
```

## Notes

- Requires a Brave Search API key
- Implements automatic rate limiting
- Supports country-specific searches
- Customizable result count
- Optional file saving feature
- Thread-safe operations
- Efficient result formatting
- Handles API errors gracefully
- Supports parallel searches (see the sketch below)
- Maintains search context
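Since the notes above describe the tool as thread-safe and able to serve parallel searches, several regional queries can be fanned out with a thread pool. The snippet below is a minimal sketch under that assumption; the region list and query are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

from crewai_tools import BraveSearchTool

# One tool instance per region (regions are illustrative choices)
regions = ["US", "GB", "JP"]
tools = {code: BraveSearchTool(country=code) for code in regions}


def regional_search(code: str) -> tuple[str, str]:
    # Each call still goes through the tool's built-in rate limiting.
    return code, tools[code].run(search_query="local news")


with ThreadPoolExecutor(max_workers=len(regions)) as pool:
    for code, result in pool.map(regional_search, regions):
        print(f"--- {code} ---")
        print(result)
```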
docs/tools/code-docs-search-tool.mdx (new file, 164 lines)
@@ -0,0 +1,164 @@
|
||||
---
|
||||
title: CodeDocsSearchTool
|
||||
description: A semantic search tool for code documentation websites using RAG capabilities
|
||||
icon: book-open
|
||||
---
|
||||
|
||||
## CodeDocsSearchTool
|
||||
|
||||
The CodeDocsSearchTool is a specialized Retrieval-Augmented Generation (RAG) tool that enables semantic search within code documentation websites. It inherits from the base RagTool class and provides both fixed and dynamic documentation URL searching capabilities.
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
pip install 'crewai[tools]'
|
||||
```
|
||||
|
||||
## Usage Example
|
||||
|
||||
```python
|
||||
from crewai import Agent
|
||||
from crewai_tools import CodeDocsSearchTool
|
||||
|
||||
# Method 1: Dynamic documentation URL
|
||||
docs_search = CodeDocsSearchTool()
|
||||
|
||||
# Method 2: Fixed documentation URL
|
||||
fixed_docs_search = CodeDocsSearchTool(
|
||||
docs_url="https://docs.example.com"
|
||||
)
|
||||
|
||||
# Create an agent with the tool
|
||||
researcher = Agent(
|
||||
role='Documentation Researcher',
|
||||
goal='Search through code documentation semantically',
|
||||
backstory='Expert at finding relevant information in technical documentation.',
|
||||
tools=[docs_search],
|
||||
verbose=True
|
||||
)
|
||||
```
|
||||
|
||||
## Input Schema
|
||||
|
||||
The tool supports two input schemas depending on initialization:
|
||||
|
||||
### Dynamic URL Schema
|
||||
```python
|
||||
class CodeDocsSearchToolSchema(BaseModel):
|
||||
search_query: str # The semantic search query
|
||||
docs_url: str # URL of the documentation site to search
|
||||
```
|
||||
|
||||
### Fixed URL Schema
|
||||
```python
|
||||
class FixedCodeDocsSearchToolSchema(BaseModel):
|
||||
search_query: str # The semantic search query
|
||||
```
|
||||
|
||||
## Function Signature
|
||||
|
||||
```python
|
||||
def __init__(self, docs_url: Optional[str] = None, **kwargs):
|
||||
"""
|
||||
Initialize the documentation search tool.
|
||||
|
||||
Args:
|
||||
docs_url (Optional[str]): Fixed URL to a documentation site. If provided,
|
||||
the tool will only search this documentation.
|
||||
**kwargs: Additional arguments passed to the parent RagTool
|
||||
"""
|
||||
|
||||
def _run(self, search_query: str, **kwargs: Any) -> Any:
|
||||
"""
|
||||
Perform semantic search on the documentation site.
|
||||
|
||||
Args:
|
||||
search_query (str): The semantic search query
|
||||
**kwargs: Additional arguments (including 'docs_url' for dynamic mode)
|
||||
|
||||
Returns:
|
||||
str: Relevant documentation passages based on semantic search
|
||||
"""
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. Choose initialization method based on use case:
|
||||
- Use fixed URL when repeatedly searching the same documentation
|
||||
- Use dynamic URL when searching different documentation sites
|
||||
2. Write clear, semantic search queries
|
||||
3. Ensure documentation sites are accessible
|
||||
4. Consider documentation structure and size
|
||||
5. Handle potential URL access errors in agent prompts
|
||||
|
||||
## Integration Example
|
||||
|
||||
```python
|
||||
from crewai import Agent, Task, Crew
|
||||
from crewai_tools import CodeDocsSearchTool
|
||||
|
||||
# Example 1: Fixed documentation search
|
||||
api_docs_search = CodeDocsSearchTool(
|
||||
docs_url="https://api.example.com/docs"
|
||||
)
|
||||
|
||||
# Example 2: Dynamic documentation search
|
||||
flexible_docs_search = CodeDocsSearchTool()
|
||||
|
||||
# Create agents
|
||||
api_analyst = Agent(
|
||||
role='API Documentation Analyst',
|
||||
goal='Find relevant API endpoints and usage examples',
|
||||
backstory='Expert at analyzing API documentation.',
|
||||
tools=[api_docs_search]
|
||||
)
|
||||
|
||||
docs_researcher = Agent(
|
||||
role='Documentation Researcher',
|
||||
goal='Search through various documentation sites',
|
||||
backstory='Specialist in finding information across multiple docs.',
|
||||
tools=[flexible_docs_search]
|
||||
)
|
||||
|
||||
# Define tasks
|
||||
fixed_search_task = Task(
|
||||
description="""Find all authentication-related endpoints
|
||||
in the API documentation.""",
|
||||
agent=api_analyst
|
||||
)
|
||||
|
||||
# The agent will use:
|
||||
# {
|
||||
# "search_query": "authentication endpoints and methods"
|
||||
# }
|
||||
|
||||
dynamic_search_task = Task(
|
||||
description="""Search through the Python documentation at
|
||||
docs.python.org for information about async/await.""",
|
||||
agent=docs_researcher
|
||||
)
|
||||
|
||||
# The agent will use:
|
||||
# {
|
||||
# "search_query": "async await syntax and usage",
|
||||
# "docs_url": "https://docs.python.org"
|
||||
# }
|
||||
|
||||
# Create crew
|
||||
crew = Crew(
|
||||
agents=[api_analyst, docs_researcher],
|
||||
tasks=[fixed_search_task, dynamic_search_task]
|
||||
)
|
||||
|
||||
# Execute
|
||||
result = crew.kickoff()
|
||||
```
|
||||
|
||||
## Notes
|
||||
|
||||
- Inherits from RagTool for semantic search capabilities
|
||||
- Supports both fixed and dynamic documentation URLs
|
||||
- Uses embeddings for semantic search
|
||||
- Thread-safe operations
|
||||
- Automatically handles documentation loading and embedding
|
||||
- Optimized for technical documentation search
|
||||
docs/tools/code-interpreter-tool.mdx (new file, 224 lines)
@@ -0,0 +1,224 @@
|
||||
---
|
||||
title: CodeInterpreterTool
|
||||
description: A tool for secure Python code execution in isolated Docker environments
|
||||
icon: code
|
||||
---
|
||||
|
||||
## CodeInterpreterTool
|
||||
|
||||
The CodeInterpreterTool provides secure Python code execution capabilities using Docker containers. It supports dynamic library installation and offers both safe (Docker-based) and unsafe (direct) execution modes.
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
pip install 'crewai[tools]'
|
||||
```
|
||||
|
||||
## Usage Example
|
||||
|
||||
```python
|
||||
from crewai import Agent
|
||||
from crewai_tools import CodeInterpreterTool
|
||||
|
||||
# Initialize the tool
|
||||
code_tool = CodeInterpreterTool()
|
||||
|
||||
# Create an agent with the tool
|
||||
programmer = Agent(
|
||||
role='Code Executor',
|
||||
goal='Execute and analyze Python code',
|
||||
backstory='Expert at writing and executing Python code.',
|
||||
tools=[code_tool],
|
||||
verbose=True
|
||||
)
|
||||
```
|
||||
|
||||
## Input Schema
|
||||
|
||||
```python
|
||||
class CodeInterpreterSchema(BaseModel):
|
||||
code: str = Field(
|
||||
description="Python3 code used to be interpreted in the Docker container. ALWAYS PRINT the final result and the output of the code"
|
||||
)
|
||||
libraries_used: List[str] = Field(
|
||||
description="List of libraries used in the code with proper installing names separated by commas. Example: numpy,pandas,beautifulsoup4"
|
||||
)
|
||||
```
|
||||
|
||||
## Function Signature
|
||||
|
||||
```python
|
||||
def __init__(
|
||||
self,
|
||||
code: Optional[str] = None,
|
||||
user_dockerfile_path: Optional[str] = None,
|
||||
user_docker_base_url: Optional[str] = None,
|
||||
unsafe_mode: bool = False,
|
||||
**kwargs
|
||||
):
|
||||
"""
|
||||
Initialize the code interpreter tool.
|
||||
|
||||
Args:
|
||||
code (Optional[str]): Default code to execute
|
||||
user_dockerfile_path (Optional[str]): Custom Dockerfile path
|
||||
user_docker_base_url (Optional[str]): Custom Docker daemon URL
|
||||
unsafe_mode (bool): Enable direct code execution
|
||||
**kwargs: Additional arguments for base tool
|
||||
"""
|
||||
|
||||
def _run(
|
||||
self,
|
||||
code: str,
|
||||
libraries_used: List[str],
|
||||
**kwargs: Any
|
||||
) -> str:
|
||||
"""
|
||||
Execute Python code in Docker container or directly.
|
||||
|
||||
Args:
|
||||
code (str): Python code to execute
|
||||
libraries_used (List[str]): Required libraries
|
||||
**kwargs: Additional arguments
|
||||
|
||||
Returns:
|
||||
str: Execution output or error message
|
||||
"""
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. Security Considerations:
|
||||
- Use Docker mode by default
|
||||
- Validate input code before execution (see the sketch after this list)
|
||||
- Control library access
|
||||
- Monitor execution time
|
||||
|
||||
2. Docker Configuration:
|
||||
- Use custom Dockerfile when needed
|
||||
- Handle container lifecycle
|
||||
- Manage resource limits
|
||||
- Clean up after execution
|
||||
|
||||
3. Library Management:
|
||||
- Specify exact versions
|
||||
- Use trusted packages
|
||||
- Handle dependencies
|
||||
- Verify installations
|
||||
|
||||
4. Error Handling:
|
||||
- Catch execution errors
|
||||
- Handle timeouts
|
||||
- Manage Docker errors
|
||||
- Provide clear messages
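For the "validate input code" practice above, a lightweight denylist check can run before the code reaches the interpreter. This is a minimal sketch, not a sandbox: the patterns are illustrative assumptions, and Docker-based isolation remains the primary safety layer.

```python
import re

from crewai_tools import CodeInterpreterTool

# Illustrative patterns only; adjust to your own policy.
DENYLIST = [r"\bos\.system\b", r"\bsubprocess\b", r"\bshutil\.rmtree\b"]


def run_checked(code: str, libraries_used: list[str]) -> str:
    """Reject obviously dangerous code before handing it to the interpreter."""
    for pattern in DENYLIST:
        if re.search(pattern, code):
            raise ValueError(f"Code rejected by validation rule: {pattern}")
    return CodeInterpreterTool().run(code=code, libraries_used=libraries_used)


# Hypothetical usage:
# print(run_checked("print(sum(range(10)))", libraries_used=[]))
```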
|
||||
|
||||
## Integration Example
|
||||
|
||||
```python
|
||||
from crewai import Agent, Task, Crew
|
||||
from crewai_tools import CodeInterpreterTool
|
||||
|
||||
# Initialize tool
|
||||
code_tool = CodeInterpreterTool()
|
||||
|
||||
# Create agent
|
||||
programmer = Agent(
|
||||
role='Code Executor',
|
||||
goal='Execute data analysis code',
|
||||
backstory='Expert Python programmer specializing in data analysis.',
|
||||
tools=[code_tool]
|
||||
)
|
||||
|
||||
# Define task
|
||||
analysis_task = Task(
|
||||
description="""Analyze the dataset using pandas and
|
||||
create a summary visualization with matplotlib.""",
|
||||
agent=programmer
|
||||
)
|
||||
|
||||
# The tool will use:
|
||||
# {
|
||||
# "code": """
|
||||
# import pandas as pd
|
||||
# import matplotlib.pyplot as plt
|
||||
#
|
||||
# # Load and analyze data
|
||||
# df = pd.read_csv('data.csv')
|
||||
# summary = df.describe()
|
||||
#
|
||||
# # Create visualization
|
||||
# plt.figure(figsize=(10, 6))
|
||||
# df['column'].hist()
|
||||
# plt.savefig('output.png')
|
||||
#
|
||||
# print(summary)
|
||||
# """,
|
||||
# "libraries_used": "pandas,matplotlib"
|
||||
# }
|
||||
|
||||
# Create crew
|
||||
crew = Crew(
|
||||
agents=[programmer],
|
||||
tasks=[analysis_task]
|
||||
)
|
||||
|
||||
# Execute
|
||||
result = crew.kickoff()
|
||||
```
|
||||
|
||||
## Advanced Usage
|
||||
|
||||
### Custom Docker Configuration
|
||||
```python
|
||||
# Use custom Dockerfile
|
||||
tool = CodeInterpreterTool(
|
||||
user_dockerfile_path="/path/to/Dockerfile"
|
||||
)
|
||||
|
||||
# Use custom Docker daemon
|
||||
tool = CodeInterpreterTool(
|
||||
user_docker_base_url="tcp://remote-docker:2375"
|
||||
)
|
||||
```
|
||||
|
||||
### Direct Execution Mode
|
||||
```python
|
||||
# Enable unsafe mode (not recommended)
|
||||
tool = CodeInterpreterTool(unsafe_mode=True)
|
||||
|
||||
# Execute code directly
|
||||
result = tool.run(
|
||||
code="print('Hello, World!')",
|
||||
libraries_used=[]
|
||||
)
|
||||
```
|
||||
|
||||
### Error Handling Example
|
||||
```python
|
||||
try:
|
||||
code_tool = CodeInterpreterTool()
|
||||
result = code_tool.run(
|
||||
code="""
|
||||
import numpy as np
|
||||
arr = np.array([1, 2, 3])
|
||||
print(f"Array mean: {arr.mean()}")
|
||||
""",
|
||||
libraries_used=["numpy"]
|
||||
)
|
||||
print(result)
|
||||
except Exception as e:
|
||||
print(f"Error executing code: {str(e)}")
|
||||
```
|
||||
|
||||
## Notes
|
||||
|
||||
- Inherits from BaseTool
|
||||
- Docker-based isolation
|
||||
- Dynamic library installation
|
||||
- Secure code execution
|
||||
- Custom Docker support
|
||||
- Comprehensive error handling
|
||||
- Resource management
|
||||
- Container cleanup
|
||||
- Library dependency handling
|
||||
- Execution output capture
|
||||
docs/tools/csv-search-tool.mdx (new file, 207 lines)
@@ -0,0 +1,207 @@
|
||||
---
|
||||
title: CSVSearchTool
|
||||
description: A tool for semantic search within CSV files using RAG capabilities
|
||||
icon: table
|
||||
---
|
||||
|
||||
## CSVSearchTool
|
||||
|
||||
The CSVSearchTool enables semantic search capabilities for CSV files using Retrieval-Augmented Generation (RAG). It can process CSV files either specified during initialization or at runtime, making it flexible for various use cases.
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
pip install 'crewai[tools]'
|
||||
```
|
||||
|
||||
## Usage Example
|
||||
|
||||
```python
|
||||
from crewai import Agent
|
||||
from crewai_tools import CSVSearchTool
|
||||
|
||||
# Method 1: Initialize with specific CSV file
|
||||
csv_tool = CSVSearchTool(csv="path/to/data.csv")
|
||||
|
||||
# Method 2: Initialize without CSV (specify at runtime)
|
||||
flexible_csv_tool = CSVSearchTool()
|
||||
|
||||
# Create an agent with the tool
|
||||
data_analyst = Agent(
|
||||
role='Data Analyst',
|
||||
goal='Search and analyze CSV data semantically',
|
||||
backstory='Expert at analyzing and extracting insights from CSV data.',
|
||||
tools=[csv_tool],
|
||||
verbose=True
|
||||
)
|
||||
```
|
||||
|
||||
## Input Schema
|
||||
|
||||
### Fixed CSV Schema (when CSV path provided during initialization)
|
||||
```python
|
||||
class FixedCSVSearchToolSchema(BaseModel):
|
||||
search_query: str = Field(
|
||||
description="Mandatory search query you want to use to search the CSV's content"
|
||||
)
|
||||
```
|
||||
|
||||
### Flexible CSV Schema (when CSV path provided at runtime)
|
||||
```python
|
||||
class CSVSearchToolSchema(FixedCSVSearchToolSchema):
|
||||
csv: str = Field(
|
||||
description="Mandatory csv path you want to search"
|
||||
)
|
||||
```
|
||||
|
||||
## Function Signature
|
||||
|
||||
```python
|
||||
def __init__(
|
||||
self,
|
||||
csv: Optional[str] = None,
|
||||
**kwargs
|
||||
):
|
||||
"""
|
||||
Initialize the CSV search tool.
|
||||
|
||||
Args:
|
||||
csv (Optional[str]): Path to CSV file (optional)
|
||||
**kwargs: Additional arguments for RAG tool configuration
|
||||
"""
|
||||
|
||||
def _run(
|
||||
self,
|
||||
search_query: str,
|
||||
**kwargs: Any
|
||||
) -> str:
|
||||
"""
|
||||
Execute semantic search on CSV content.
|
||||
|
||||
Args:
|
||||
search_query (str): Query to search in the CSV
|
||||
**kwargs: Additional arguments including csv path if not initialized
|
||||
|
||||
Returns:
|
||||
str: Relevant content from the CSV matching the query
|
||||
"""
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. CSV File Handling:
|
||||
- Ensure CSV files are properly formatted
|
||||
- Use absolute paths for reliability
|
||||
- Verify file permissions before processing
|
||||
|
||||
2. Search Optimization:
|
||||
- Use specific, focused search queries
|
||||
- Consider column names and data structure
|
||||
- Test with sample queries first
|
||||
|
||||
3. Performance Considerations:
|
||||
- Pre-initialize with CSV for repeated searches
|
||||
- Handle large CSV files appropriately
|
||||
- Monitor memory usage with big datasets
|
||||
|
||||
4. Error Handling:
|
||||
- Verify CSV file existence
|
||||
- Handle malformed CSV data
|
||||
- Manage file access permissions
|
||||
|
||||
## Integration Example
|
||||
|
||||
```python
|
||||
from crewai import Agent, Task, Crew
|
||||
from crewai_tools import CSVSearchTool
|
||||
|
||||
# Initialize tool with specific CSV
|
||||
csv_tool = CSVSearchTool(csv="/path/to/sales_data.csv")
|
||||
|
||||
# Create agent
|
||||
analyst = Agent(
|
||||
role='Data Analyst',
|
||||
goal='Extract insights from sales data',
|
||||
backstory='Expert at analyzing sales data and trends.',
|
||||
tools=[csv_tool]
|
||||
)
|
||||
|
||||
# Define task
|
||||
analysis_task = Task(
|
||||
description="""Find all sales records from the CSV
|
||||
that relate to product returns in Q4 2023.""",
|
||||
agent=analyst
|
||||
)
|
||||
|
||||
# The tool will use:
|
||||
# {
|
||||
# "search_query": "product returns Q4 2023"
|
||||
# }
|
||||
|
||||
# Create crew
|
||||
crew = Crew(
|
||||
agents=[analyst],
|
||||
tasks=[analysis_task]
|
||||
)
|
||||
|
||||
# Execute
|
||||
result = crew.kickoff()
|
||||
```
|
||||
|
||||
## Advanced Usage
|
||||
|
||||
### Dynamic CSV Selection
|
||||
```python
|
||||
# Initialize without CSV
|
||||
flexible_tool = CSVSearchTool()
|
||||
|
||||
# Search different CSVs
|
||||
result1 = flexible_tool.run(
|
||||
search_query="revenue 2023",
|
||||
csv="/path/to/finance.csv"
|
||||
)
|
||||
|
||||
result2 = flexible_tool.run(
|
||||
search_query="customer feedback",
|
||||
csv="/path/to/surveys.csv"
|
||||
)
|
||||
```
|
||||
|
||||
### Multiple CSV Analysis
|
||||
```python
|
||||
# Create tools for different CSVs
|
||||
sales_tool = CSVSearchTool(csv="/path/to/sales.csv")
|
||||
inventory_tool = CSVSearchTool(csv="/path/to/inventory.csv")
|
||||
|
||||
# Create agent with multiple tools
|
||||
analyst = Agent(
|
||||
role='Business Analyst',
|
||||
goal='Cross-reference sales and inventory data',
|
||||
tools=[sales_tool, inventory_tool]
|
||||
)
|
||||
```
|
||||
|
||||
### Error Handling Example
|
||||
```python
|
||||
try:
|
||||
csv_tool = CSVSearchTool(csv="/path/to/data.csv")
|
||||
result = csv_tool.run(
|
||||
search_query="important metrics"
|
||||
)
|
||||
print(result)
|
||||
except Exception as e:
|
||||
print(f"Error processing CSV: {str(e)}")
|
||||
```
|
||||
|
||||
## Notes
|
||||
|
||||
- Inherits from RagTool for semantic search
|
||||
- Supports dynamic CSV file specification
|
||||
- Uses embedchain for data processing
|
||||
- Maintains search context across queries
|
||||
- Thread-safe operations
|
||||
- Efficient semantic search capabilities
|
||||
- Supports various CSV formats
|
||||
- Handles large datasets effectively
|
||||
- Preserves CSV structure in search
|
||||
- Enables natural language queries
|
||||
docs/tools/directory-read-tool.mdx (new file, 217 lines)
@@ -0,0 +1,217 @@
|
||||
---
|
||||
title: Directory Read Tool
|
||||
description: A tool for recursively listing directory contents
|
||||
---
|
||||
|
||||
# Directory Read Tool
|
||||
|
||||
The Directory Read Tool provides functionality to recursively list all files within a directory. It supports both fixed and dynamic directory path modes, allowing you to specify the directory at initialization or runtime.
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
pip install 'crewai[tools]'
|
||||
```
|
||||
|
||||
## Usage
|
||||
|
||||
You can use the Directory Read Tool in two ways:
|
||||
|
||||
### 1. Fixed Directory Path
|
||||
|
||||
Initialize the tool with a specific directory path:
|
||||
|
||||
```python
|
||||
from crewai import Agent
|
||||
from crewai_tools import DirectoryReadTool
|
||||
|
||||
# Initialize with a fixed directory
|
||||
tool = DirectoryReadTool(directory="/path/to/your/directory")
|
||||
|
||||
# Create an agent with the tool
|
||||
agent = Agent(
|
||||
role='File System Analyst',
|
||||
goal='Analyze directory contents',
|
||||
backstory='I help analyze and organize file systems',
|
||||
tools=[tool]
|
||||
)
|
||||
|
||||
# Use in a task
|
||||
task = Task(
|
||||
description="List all files in the project directory",
|
||||
agent=agent
|
||||
)
|
||||
```
|
||||
|
||||
### 2. Dynamic Directory Path
|
||||
|
||||
Initialize the tool without a specific directory path to provide it at runtime:
|
||||
|
||||
```python
|
||||
from crewai import Agent
|
||||
from crewai_tools import DirectoryReadTool
|
||||
|
||||
# Initialize without a fixed directory
|
||||
tool = DirectoryReadTool()
|
||||
|
||||
# Create an agent with the tool
|
||||
agent = Agent(
|
||||
role='File System Explorer',
|
||||
goal='Explore different directories',
|
||||
backstory='I analyze various directory structures',
|
||||
tools=[tool]
|
||||
)
|
||||
|
||||
# Use in a task with dynamic directory path
|
||||
task = Task(
|
||||
description="List all files in the specified directory",
|
||||
agent=agent,
|
||||
context={
|
||||
"directory": "/path/to/explore"
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
## Input Schema
|
||||
|
||||
### Fixed Directory Mode
|
||||
```python
|
||||
class FixedDirectoryReadToolSchema(BaseModel):
|
||||
pass # No additional parameters needed when directory is fixed
|
||||
```
|
||||
|
||||
### Dynamic Directory Mode
|
||||
```python
|
||||
class DirectoryReadToolSchema(BaseModel):
|
||||
directory: str # The path to the directory to list contents
|
||||
```
|
||||
|
||||
## Function Signatures
|
||||
|
||||
```python
|
||||
def __init__(self, directory: Optional[str] = None, **kwargs):
|
||||
"""
|
||||
Initialize the Directory Read Tool.
|
||||
|
||||
Args:
|
||||
directory (Optional[str]): Path to the directory (optional)
|
||||
**kwargs: Additional arguments passed to BaseTool
|
||||
"""
|
||||
|
||||
def _run(
|
||||
self,
|
||||
**kwargs: Any,
|
||||
) -> str:
|
||||
"""
|
||||
Execute the directory listing.
|
||||
|
||||
Args:
|
||||
**kwargs: Arguments including 'directory' for dynamic mode
|
||||
|
||||
Returns:
|
||||
str: A formatted string containing all file paths in the directory
|
||||
"""
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **Path Handling**:
|
||||
- Use absolute paths to avoid path resolution issues
|
||||
- Handle trailing slashes appropriately
|
||||
- Verify directory existence before listing
|
||||
|
||||
2. **Performance Considerations**:
|
||||
- Be mindful of directory size when listing large directories
|
||||
- Consider implementing pagination for large directories
|
||||
- Handle symlinks appropriately
|
||||
|
||||
3. **Error Handling**:
|
||||
- Handle directory not found errors gracefully
|
||||
- Manage permission issues appropriately
|
||||
- Validate input parameters before processing
|
||||
|
||||
## Example Integration
|
||||
|
||||
Here's a complete example showing how to integrate the Directory Read Tool with CrewAI:
|
||||
|
||||
```python
|
||||
from crewai import Agent, Task, Crew
|
||||
from crewai_tools import DirectoryReadTool
|
||||
|
||||
# Initialize the tool
|
||||
dir_tool = DirectoryReadTool()
|
||||
|
||||
# Create an agent with the tool
|
||||
file_analyst = Agent(
|
||||
role='File System Analyst',
|
||||
goal='Analyze and report on directory structures',
|
||||
backstory='I am an expert at analyzing file system organization',
|
||||
tools=[dir_tool]
|
||||
)
|
||||
|
||||
# Create tasks
|
||||
analysis_task = Task(
|
||||
description="""
|
||||
Analyze the project directory structure:
|
||||
1. List all files recursively
|
||||
2. Identify key file types
|
||||
3. Report on directory organization
|
||||
|
||||
Provide a comprehensive analysis of the findings.
|
||||
""",
|
||||
agent=file_analyst,
|
||||
context={
|
||||
"directory": "/path/to/project"
|
||||
}
|
||||
)
|
||||
|
||||
# Create and run the crew
|
||||
crew = Crew(
|
||||
agents=[file_analyst],
|
||||
tasks=[analysis_task]
|
||||
)
|
||||
|
||||
result = crew.kickoff()
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
The tool handles various error scenarios:
|
||||
|
||||
1. **Directory Not Found**:
|
||||
```python
|
||||
try:
|
||||
tool = DirectoryReadTool(directory="/nonexistent/path")
|
||||
except FileNotFoundError:
|
||||
print("Directory not found. Please verify the path.")
|
||||
```
|
||||
|
||||
2. **Permission Issues**:
|
||||
```python
|
||||
try:
|
||||
tool = DirectoryReadTool(directory="/restricted/path")
|
||||
except PermissionError:
|
||||
print("Insufficient permissions to access the directory.")
|
||||
```
|
||||
|
||||
3. **Invalid Path**:
|
||||
```python
|
||||
try:
|
||||
result = tool._run(directory="invalid/path")
|
||||
except ValueError:
|
||||
print("Invalid directory path provided.")
|
||||
```
|
||||
|
||||
## Output Format
|
||||
|
||||
The tool returns a formatted string containing all file paths in the directory:
|
||||
|
||||
```
|
||||
File paths:
|
||||
- /path/to/directory/file1.txt
|
||||
- /path/to/directory/subdirectory/file2.txt
|
||||
- /path/to/directory/subdirectory/file3.py
|
||||
```
|
||||
|
||||
|
||||
Each file path is listed on a new line with a hyphen prefix, making it easy to parse and read the output.
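Given that format, the tool's output can be converted back into a Python list in a few lines. The sketch below assumes the exact "File paths:" header and hyphen-prefixed lines shown above; the sample string is illustrative.

```python
def parse_directory_listing(output: str) -> list[str]:
    """Convert the tool's formatted output into a list of file paths."""
    paths = []
    for line in output.splitlines():
        line = line.strip()
        if line.startswith("- "):
            paths.append(line[2:])  # drop the "- " prefix
    return paths


# Hypothetical output string:
sample = "File paths:\n- /path/to/directory/file1.txt\n- /path/to/directory/subdirectory/file2.txt"
print(parse_directory_listing(sample))
# ['/path/to/directory/file1.txt', '/path/to/directory/subdirectory/file2.txt']
```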
|
||||
docs/tools/directory-search-tool.mdx (new file, 214 lines)
@@ -0,0 +1,214 @@
|
||||
---
|
||||
title: DirectorySearchTool
|
||||
description: A tool for semantic search within directory contents using RAG capabilities
|
||||
icon: folder-search
|
||||
---
|
||||
|
||||
## DirectorySearchTool
|
||||
|
||||
The DirectorySearchTool enables semantic search capabilities for directory contents using Retrieval-Augmented Generation (RAG). It processes files recursively within a directory and allows searching through their contents using natural language queries.
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
pip install 'crewai[tools]'
|
||||
```
|
||||
|
||||
## Usage Example
|
||||
|
||||
```python
|
||||
from crewai import Agent
|
||||
from crewai_tools import DirectorySearchTool
|
||||
|
||||
# Method 1: Initialize with specific directory
|
||||
dir_tool = DirectorySearchTool(directory="/path/to/documents")
|
||||
|
||||
# Method 2: Initialize without directory (specify at runtime)
|
||||
flexible_dir_tool = DirectorySearchTool()
|
||||
|
||||
# Create an agent with the tool
|
||||
researcher = Agent(
|
||||
role='Directory Researcher',
|
||||
goal='Search and analyze directory contents',
|
||||
backstory='Expert at finding relevant information in document collections.',
|
||||
tools=[dir_tool],
|
||||
verbose=True
|
||||
)
|
||||
```
|
||||
|
||||
## Input Schema
|
||||
|
||||
### Fixed Directory Schema (when path provided during initialization)
|
||||
```python
|
||||
class FixedDirectorySearchToolSchema(BaseModel):
|
||||
search_query: str = Field(
|
||||
description="Mandatory search query you want to use to search the directory's content"
|
||||
)
|
||||
```
|
||||
|
||||
### Flexible Directory Schema (when path provided at runtime)
|
||||
```python
|
||||
class DirectorySearchToolSchema(FixedDirectorySearchToolSchema):
|
||||
directory: str = Field(
|
||||
description="Mandatory directory you want to search"
|
||||
)
|
||||
```
|
||||
|
||||
## Function Signature
|
||||
|
||||
```python
|
||||
def __init__(
|
||||
self,
|
||||
directory: Optional[str] = None,
|
||||
**kwargs
|
||||
):
|
||||
"""
|
||||
Initialize the directory search tool.
|
||||
|
||||
Args:
|
||||
directory (Optional[str]): Path to directory (optional)
|
||||
**kwargs: Additional arguments for RAG tool configuration
|
||||
"""
|
||||
|
||||
def _run(
|
||||
self,
|
||||
search_query: str,
|
||||
**kwargs: Any
|
||||
) -> str:
|
||||
"""
|
||||
Execute semantic search on directory contents.
|
||||
|
||||
Args:
|
||||
search_query (str): Query to search in the directory
|
||||
**kwargs: Additional arguments including directory if not initialized
|
||||
|
||||
Returns:
|
||||
str: Relevant content from the directory matching the query
|
||||
"""
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. Directory Management:
|
||||
- Use absolute paths
|
||||
- Verify directory existence
|
||||
- Handle permissions properly
|
||||
|
||||
2. Search Optimization:
|
||||
- Use specific queries
|
||||
- Consider file types
|
||||
- Test with sample queries
|
||||
|
||||
3. Performance Considerations:
|
||||
- Pre-initialize for repeated searches
|
||||
- Handle large directories
|
||||
- Monitor processing time
|
||||
|
||||
4. Error Handling:
|
||||
- Verify directory access
|
||||
- Handle missing files
|
||||
- Manage permissions
|
||||
|
||||
## Integration Example
|
||||
|
||||
```python
|
||||
from crewai import Agent, Task, Crew
|
||||
from crewai_tools import DirectorySearchTool
|
||||
|
||||
# Initialize tool with specific directory
|
||||
dir_tool = DirectorySearchTool(
|
||||
directory="/path/to/documents"
|
||||
)
|
||||
|
||||
# Create agent
|
||||
researcher = Agent(
|
||||
role='Directory Researcher',
|
||||
goal='Extract insights from document collections',
|
||||
backstory='Expert at analyzing document collections.',
|
||||
tools=[dir_tool]
|
||||
)
|
||||
|
||||
# Define task
|
||||
research_task = Task(
|
||||
description="""Find all mentions of machine learning
|
||||
applications from the directory contents.""",
|
||||
agent=researcher
|
||||
)
|
||||
|
||||
# The tool will use:
|
||||
# {
|
||||
# "search_query": "machine learning applications"
|
||||
# }
|
||||
|
||||
# Create crew
|
||||
crew = Crew(
|
||||
agents=[researcher],
|
||||
tasks=[research_task]
|
||||
)
|
||||
|
||||
# Execute
|
||||
result = crew.kickoff()
|
||||
```
|
||||
|
||||
## Advanced Usage
|
||||
|
||||
### Dynamic Directory Selection
|
||||
```python
|
||||
# Initialize without directory path
|
||||
flexible_tool = DirectorySearchTool()
|
||||
|
||||
# Search different directories
|
||||
docs_results = flexible_tool.run(
|
||||
search_query="technical specifications",
|
||||
directory="/path/to/docs"
|
||||
)
|
||||
|
||||
reports_results = flexible_tool.run(
|
||||
search_query="financial metrics",
|
||||
directory="/path/to/reports"
|
||||
)
|
||||
```
|
||||
|
||||
### Multiple Directory Analysis
|
||||
```python
|
||||
# Create tools for different directories
|
||||
docs_tool = DirectorySearchTool(
|
||||
directory="/path/to/docs"
|
||||
)
|
||||
reports_tool = DirectorySearchTool(
|
||||
directory="/path/to/reports"
|
||||
)
|
||||
|
||||
# Create agent with multiple tools
|
||||
analyst = Agent(
|
||||
role='Content Analyst',
|
||||
goal='Cross-reference multiple document collections',
|
||||
tools=[docs_tool, reports_tool]
|
||||
)
|
||||
```
|
||||
|
||||
### Error Handling Example
|
||||
```python
|
||||
try:
|
||||
dir_tool = DirectorySearchTool()
|
||||
results = dir_tool.run(
|
||||
search_query="key concepts",
|
||||
directory="/path/to/documents"
|
||||
)
|
||||
print(results)
|
||||
except Exception as e:
|
||||
print(f"Error processing directory: {str(e)}")
|
||||
```
|
||||
|
||||
## Notes
|
||||
|
||||
- Inherits from RagTool
|
||||
- Uses DirectoryLoader
|
||||
- Supports recursive search
|
||||
- Dynamic directory specification
|
||||
- Efficient content retrieval
|
||||
- Thread-safe operations
|
||||
- Maintains search context
|
||||
- Processes multiple file types
|
||||
- Handles nested directories
|
||||
- Memory-efficient processing
|
||||
docs/tools/docx-search-tool.mdx (new file, 224 lines)
@@ -0,0 +1,224 @@
|
||||
---
|
||||
title: DOCXSearchTool
|
||||
description: A tool for semantic search within DOCX documents using RAG capabilities
|
||||
icon: file-text
|
||||
---
|
||||
|
||||
## DOCXSearchTool
|
||||
|
||||
The DOCXSearchTool enables semantic search capabilities for Microsoft Word (DOCX) documents using Retrieval-Augmented Generation (RAG). It supports both fixed and dynamic document selection modes.
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
pip install 'crewai[tools]'
|
||||
```
|
||||
|
||||
## Usage Example
|
||||
|
||||
```python
|
||||
from crewai import Agent
|
||||
from crewai_tools import DOCXSearchTool
|
||||
|
||||
# Method 1: Fixed document (specified at initialization)
|
||||
fixed_tool = DOCXSearchTool(
|
||||
docx="path/to/document.docx"
|
||||
)
|
||||
|
||||
# Method 2: Dynamic document (specified at runtime)
|
||||
dynamic_tool = DOCXSearchTool()
|
||||
|
||||
# Create an agent with the tool
|
||||
researcher = Agent(
|
||||
role='Document Researcher',
|
||||
goal='Search and analyze document contents',
|
||||
backstory='Expert at finding relevant information in documents.',
|
||||
tools=[fixed_tool], # or [dynamic_tool]
|
||||
verbose=True
|
||||
)
|
||||
```
|
||||
|
||||
## Input Schema
|
||||
|
||||
### Fixed Document Mode
|
||||
```python
|
||||
class FixedDOCXSearchToolSchema(BaseModel):
|
||||
search_query: str = Field(
|
||||
description="Mandatory search query you want to use to search the DOCX's content"
|
||||
)
|
||||
```
|
||||
|
||||
### Dynamic Document Mode
|
||||
```python
|
||||
class DOCXSearchToolSchema(BaseModel):
|
||||
docx: str = Field(
|
||||
description="Mandatory docx path you want to search"
|
||||
)
|
||||
search_query: str = Field(
|
||||
description="Mandatory search query you want to use to search the DOCX's content"
|
||||
)
|
||||
```
|
||||
|
||||
## Function Signature
|
||||
|
||||
```python
|
||||
def __init__(
|
||||
self,
|
||||
docx: Optional[str] = None,
|
||||
**kwargs
|
||||
):
|
||||
"""
|
||||
Initialize the DOCX search tool.
|
||||
|
||||
Args:
|
||||
docx (Optional[str]): Path to DOCX file (optional for dynamic mode)
|
||||
**kwargs: Additional arguments for RAG tool configuration
|
||||
"""
|
||||
|
||||
def _run(
|
||||
self,
|
||||
search_query: str,
|
||||
docx: Optional[str] = None,
|
||||
**kwargs: Any
|
||||
) -> str:
|
||||
"""
|
||||
Execute semantic search on document contents.
|
||||
|
||||
Args:
|
||||
search_query (str): Query to search in the document
|
||||
docx (Optional[str]): Document path (required for dynamic mode)
|
||||
**kwargs: Additional arguments
|
||||
|
||||
Returns:
|
||||
str: Relevant content from the document matching the query
|
||||
"""
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. Document Handling:
|
||||
- Use absolute file paths
|
||||
- Verify file existence
|
||||
- Handle large documents
|
||||
- Monitor memory usage
|
||||
|
||||
2. Query Optimization:
|
||||
- Structure queries clearly
|
||||
- Consider document size
|
||||
- Handle formatting
|
||||
- Monitor performance
|
||||
|
||||
3. Error Handling:
|
||||
- Check file access
|
||||
- Validate file format
|
||||
- Handle corrupted files
|
||||
- Log issues
|
||||
|
||||
4. Mode Selection:
|
||||
- Choose fixed mode for static documents
|
||||
- Use dynamic mode for runtime selection
|
||||
- Consider memory implications
|
||||
- Manage document lifecycle
|
||||
|
||||
## Integration Example
|
||||
|
||||
```python
|
||||
from crewai import Agent, Task, Crew
|
||||
from crewai_tools import DOCXSearchTool
|
||||
|
||||
# Initialize tool
|
||||
docx_tool = DOCXSearchTool(
|
||||
docx="reports/annual_report_2023.docx"
|
||||
)
|
||||
|
||||
# Create agent
|
||||
researcher = Agent(
|
||||
role='Document Analyst',
|
||||
goal='Extract insights from annual report',
|
||||
backstory='Expert at analyzing business documents.',
|
||||
tools=[docx_tool]
|
||||
)
|
||||
|
||||
# Define task
|
||||
analysis_task = Task(
|
||||
description="""Find all mentions of revenue growth
|
||||
and market expansion.""",
|
||||
agent=researcher
|
||||
)
|
||||
|
||||
# Create crew
|
||||
crew = Crew(
|
||||
agents=[researcher],
|
||||
tasks=[analysis_task]
|
||||
)
|
||||
|
||||
# Execute
|
||||
result = crew.kickoff()
|
||||
```
|
||||
|
||||
## Advanced Usage
|
||||
|
||||
### Multiple Document Analysis
|
||||
```python
|
||||
# Create tools for different documents
|
||||
report_tool = DOCXSearchTool(
|
||||
docx="reports/annual_report.docx"
|
||||
)
|
||||
|
||||
policy_tool = DOCXSearchTool(
|
||||
docx="policies/compliance.docx"
|
||||
)
|
||||
|
||||
# Create agent with multiple tools
|
||||
analyst = Agent(
|
||||
role='Document Analyst',
|
||||
goal='Cross-reference reports and policies',
|
||||
tools=[report_tool, policy_tool]
|
||||
)
|
||||
```
|
||||
|
||||
### Dynamic Document Loading
|
||||
```python
|
||||
# Initialize dynamic tool
|
||||
dynamic_tool = DOCXSearchTool()
|
||||
|
||||
# Use with different documents
|
||||
result1 = dynamic_tool.run(
|
||||
docx="document1.docx",
|
||||
search_query="project timeline"
|
||||
)
|
||||
|
||||
result2 = dynamic_tool.run(
|
||||
docx="document2.docx",
|
||||
search_query="budget allocation"
|
||||
)
|
||||
```
|
||||
|
||||
### Error Handling Example
|
||||
```python
|
||||
try:
|
||||
docx_tool = DOCXSearchTool(
|
||||
docx="reports/quarterly_report.docx"
|
||||
)
|
||||
results = docx_tool.run(
|
||||
search_query="Q3 performance metrics"
|
||||
)
|
||||
print(results)
|
||||
except FileNotFoundError as e:
|
||||
print(f"Document not found: {str(e)}")
|
||||
except Exception as e:
|
||||
print(f"Error processing document: {str(e)}")
|
||||
```
|
||||
|
||||
## Notes
|
||||
|
||||
- Inherits from RagTool
|
||||
- Supports fixed/dynamic modes
|
||||
- Document path validation
|
||||
- Memory management
|
||||
- Performance optimization
|
||||
- Error handling
|
||||
- Search capabilities
|
||||
- Content extraction
|
||||
- Format handling
|
||||
- Security features
|
||||
docs/tools/file-read-tool.mdx (new file, 193 lines)
@@ -0,0 +1,193 @@
|
||||
---
|
||||
title: FileReadTool
|
||||
description: A tool for reading file contents with flexible path specification
|
||||
icon: file-text
|
||||
---
|
||||
|
||||
## FileReadTool
|
||||
|
||||
The FileReadTool provides functionality to read file contents with support for both fixed and dynamic file path specification. It includes comprehensive error handling for common file operations and maintains clear descriptions of its configured state.
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
pip install 'crewai[tools]'
|
||||
```
|
||||
|
||||
## Usage Example
|
||||
|
||||
```python
|
||||
from crewai import Agent
|
||||
from crewai_tools import FileReadTool
|
||||
|
||||
# Method 1: Initialize with specific file
|
||||
reader = FileReadTool(file_path="/path/to/data.txt")
|
||||
|
||||
# Method 2: Initialize without file (specify at runtime)
|
||||
flexible_reader = FileReadTool()
|
||||
|
||||
# Create an agent with the tool
|
||||
file_processor = Agent(
|
||||
role='File Processor',
|
||||
goal='Read and process file contents',
|
||||
backstory='Expert at handling file operations and content processing.',
|
||||
tools=[reader],
|
||||
verbose=True
|
||||
)
|
||||
```
|
||||
|
||||
## Input Schema
|
||||
|
||||
```python
|
||||
class FileReadToolSchema(BaseModel):
|
||||
file_path: str = Field(
|
||||
description="Mandatory file full path to read the file"
|
||||
)
|
||||
```
|
||||
|
||||
## Function Signature
|
||||
|
||||
```python
|
||||
def __init__(
|
||||
self,
|
||||
file_path: Optional[str] = None,
|
||||
**kwargs: Any
|
||||
) -> None:
|
||||
"""
|
||||
Initialize the file read tool.
|
||||
|
||||
Args:
|
||||
file_path (Optional[str]): Path to file to read (optional)
|
||||
**kwargs: Additional arguments passed to BaseTool
|
||||
"""
|
||||
|
||||
def _run(
|
||||
self,
|
||||
**kwargs: Any
|
||||
) -> str:
|
||||
"""
|
||||
Read and return file contents.
|
||||
|
||||
Args:
|
||||
file_path (str, optional): Override default file path
|
||||
**kwargs: Additional arguments
|
||||
|
||||
Returns:
|
||||
str: File contents or error message
|
||||
"""
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. File Path Management:
|
||||
- Use absolute paths for reliability
|
||||
- Verify file existence before operations
|
||||
- Handle path resolution properly
|
||||
|
||||
2. Error Handling:
|
||||
- Check for file existence
|
||||
- Handle permission issues
|
||||
- Manage encoding errors
|
||||
- Process file access failures
|
||||
|
||||
3. Performance Considerations:
|
||||
- Close files after reading
|
||||
- Handle large files appropriately
|
||||
- Consider memory constraints
|
||||
|
||||
4. Security Practices:
|
||||
- Validate file paths
|
||||
- Check file permissions
|
||||
- Avoid path traversal issues
|
||||
|
||||
## Integration Example
|
||||
|
||||
```python
|
||||
from crewai import Agent, Task, Crew
|
||||
from crewai_tools import FileReadTool
|
||||
|
||||
# Initialize tool with specific file
|
||||
reader = FileReadTool(file_path="/path/to/config.txt")
|
||||
|
||||
# Create agent
|
||||
processor = Agent(
|
||||
role='File Processor',
|
||||
goal='Process configuration files',
|
||||
backstory='Expert at reading and analyzing configuration files.',
|
||||
tools=[reader]
|
||||
)
|
||||
|
||||
# Define task
|
||||
read_task = Task(
|
||||
description="""Read and analyze the contents of
|
||||
the configuration file.""",
|
||||
agent=processor
|
||||
)
|
||||
|
||||
# The tool will use the default file path
|
||||
|
||||
# Create crew
|
||||
crew = Crew(
|
||||
agents=[processor],
|
||||
tasks=[read_task]
|
||||
)
|
||||
|
||||
# Execute
|
||||
result = crew.kickoff()
|
||||
```
|
||||
|
||||
## Advanced Usage
|
||||
|
||||
### Dynamic File Selection
|
||||
```python
|
||||
# Initialize without file path
|
||||
flexible_reader = FileReadTool()
|
||||
|
||||
# Read different files
|
||||
config_content = flexible_reader.run(
|
||||
file_path="/path/to/config.txt"
|
||||
)
|
||||
|
||||
log_content = flexible_reader.run(
|
||||
file_path="/path/to/logs.txt"
|
||||
)
|
||||
```
|
||||
|
||||
### Multiple File Processing
|
||||
```python
|
||||
# Create tools for different files
|
||||
config_reader = FileReadTool(file_path="/path/to/config.txt")
|
||||
log_reader = FileReadTool(file_path="/path/to/logs.txt")
|
||||
|
||||
# Create agent with multiple tools
|
||||
processor = Agent(
|
||||
role='File Analyst',
|
||||
goal='Analyze multiple file types',
|
||||
tools=[config_reader, log_reader]
|
||||
)
|
||||
```
|
||||
|
||||
### Error Handling Example
|
||||
```python
|
||||
try:
|
||||
reader = FileReadTool()
|
||||
content = reader.run(
|
||||
file_path="/path/to/file.txt"
|
||||
)
|
||||
print(content)
|
||||
except Exception as e:
|
||||
print(f"Error reading file: {str(e)}")
|
||||
```
|
||||
|
||||
## Notes
|
||||
|
||||
- Inherits from BaseTool
|
||||
- Supports fixed or dynamic file paths
|
||||
- Comprehensive error handling
|
||||
- Thread-safe operations
|
||||
- Clear error messages
|
||||
- Flexible path specification
|
||||
- Maintains tool description
|
||||
- Handles common file errors
|
||||
- Supports various file types
|
||||
- Memory-efficient operations
|
||||
docs/tools/filewritertool.mdx (new file, 141 lines)
@@ -0,0 +1,141 @@
---
title: FileWriterTool
description: A tool for writing content to files with support for various file formats.
icon: file-pen
---

## FileWriterTool

The FileWriterTool provides agents with the capability to write content to files, supporting various file formats and ensuring proper file handling.

## Installation

```bash
pip install 'crewai[tools]'
```

## Usage Example

```python
from crewai import Agent, Task
from crewai_tools import FileWriterTool

# Initialize the tool
file_writer = FileWriterTool()

# Create an agent with the tool
writer_agent = Agent(
    role='Content Writer',
    goal='Write and save content to files',
    backstory='Expert at creating and managing file content.',
    tools=[file_writer],
    verbose=True
)

# Use in a task
task = Task(
    description='Write a report and save it to report.txt',
    agent=writer_agent
)
```

## Tool Attributes

| Attribute | Type | Description |
| :-------- | :--- | :---------- |
| name | str | "File Writer Tool" |
| description | str | "A tool that writes content to a file." |

## Input Schema

```python
class FileWriterToolInput(BaseModel):
    filename: str             # Name of the file to write
    directory: str = "./"     # Optional directory path, defaults to current directory
    overwrite: str = "False"  # Whether to overwrite existing file ("True"/"False")
    content: str              # Content to write to the file
```

## Function Signature

```python
def _run(self, **kwargs: Any) -> str:
    """
    Write content to a file with specified parameters.

    Args:
        filename (str): Name of the file to write
        content (str): Content to write to the file
        directory (str, optional): Directory path. Defaults to "./".
        overwrite (str, optional): Whether to overwrite existing file. Defaults to "False".

    Returns:
        str: Success message with filepath or error message
    """
```

## Error Handling

The tool includes error handling for common file operations (a minimal sketch follows):
- FileExistsError: When the file exists and overwrite is not allowed
- KeyError: When required parameters are missing
- Directory Creation: Automatically creates directories if they don't exist
- General Exceptions: Catches and reports any other file operation errors
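The sketch below shows one way to guard a direct call. It assumes the input fields from the schema above; the file name and content are illustrative, and because the tool may also report failures through its return string rather than raising, the except branches are defensive.

```python
from crewai_tools import FileWriterTool

writer = FileWriterTool()

try:
    # overwrite is passed as a string, matching the input schema above
    result = writer.run(
        filename="report.txt",
        directory="./output",
        overwrite="False",
        content="Quarterly summary...",
    )
    print(result)
except FileExistsError:
    print("report.txt already exists; pass overwrite='True' to replace it.")
except KeyError as e:
    print(f"Missing required parameter: {e}")
except Exception as e:
    print(f"File operation failed: {e}")
```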
## Best Practices

1. Always provide absolute file paths
2. Ensure proper file permissions
3. Handle potential errors in your agent prompts
4. Verify file contents after writing

## Integration Example

```python
from crewai import Agent, Task, Crew
from crewai_tools import FileWriterTool

# Initialize tool
file_writer = FileWriterTool()

# Create agent
writer = Agent(
    role='Technical Writer',
    goal='Create and save technical documentation',
    backstory='Expert technical writer with experience in documentation.',
    tools=[file_writer]
)

# Define task
writing_task = Task(
    description="""Write a technical guide about Python best practices and save it
    to the docs directory. The file should be named 'python_guide.md'.
    Include sections on code style, documentation, and testing.
    If a file already exists, overwrite it.""",
    agent=writer
)

# The agent can use the tool with these parameters:
# {
#     "filename": "python_guide.md",
#     "directory": "docs",
#     "overwrite": "True",
#     "content": "# Python Best Practices\n\n## Code Style\n..."
# }

# Create crew
crew = Crew(
    agents=[writer],
    tasks=[writing_task]
)

# Execute
result = crew.kickoff()
```

## Notes

- The tool automatically creates directories in the file path if they don't exist
- Supports various file formats (txt, md, json, etc.)
- Returns descriptive error messages for better debugging
- Thread-safe file operations
docs/tools/firecrawl-crawl-website-tool.mdx (new file, 181 lines)
@@ -0,0 +1,181 @@
|
||||
---
|
||||
title: FirecrawlCrawlWebsiteTool
|
||||
description: A web crawling tool powered by Firecrawl API for comprehensive website content extraction
|
||||
icon: spider-web
|
||||
---
|
||||
|
||||
## FirecrawlCrawlWebsiteTool
|
||||
|
||||
The FirecrawlCrawlWebsiteTool provides website crawling capabilities using the Firecrawl API. It allows for customizable crawling with options for polling intervals, idempotency, and URL parameters.
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
pip install 'crewai[tools]'
|
||||
pip install firecrawl-py # Required dependency
|
||||
```
|
||||
|
||||
## Usage Example
|
||||
|
||||
```python
|
||||
from crewai import Agent
|
||||
from crewai_tools import FirecrawlCrawlWebsiteTool
|
||||
|
||||
# Method 1: Using environment variable
|
||||
# export FIRECRAWL_API_KEY='your-api-key'
|
||||
crawler = FirecrawlCrawlWebsiteTool()
|
||||
|
||||
# Method 2: Providing API key directly
|
||||
crawler = FirecrawlCrawlWebsiteTool(
|
||||
api_key="your-firecrawl-api-key"
|
||||
)
|
||||
|
||||
# Method 3: With custom configuration
|
||||
crawler = FirecrawlCrawlWebsiteTool(
|
||||
api_key="your-firecrawl-api-key",
|
||||
url="https://example.com", # Base URL
|
||||
poll_interval=5, # Custom polling interval
|
||||
idempotency_key="unique-key"
|
||||
)
|
||||
|
||||
# Create an agent with the tool
|
||||
researcher = Agent(
|
||||
role='Web Crawler',
|
||||
goal='Extract and analyze website content',
|
||||
backstory='Expert at crawling and analyzing web content.',
|
||||
tools=[crawler],
|
||||
verbose=True
|
||||
)
|
||||
```
|
||||
|
||||
## Input Schema
|
||||
|
||||
```python
|
||||
class FirecrawlCrawlWebsiteToolSchema(BaseModel):
|
||||
url: str = Field(description="Website URL")
|
||||
```
|
||||
|
||||
## Function Signature
|
||||
|
||||
```python
|
||||
def __init__(
|
||||
self,
|
||||
api_key: Optional[str] = None,
|
||||
url: Optional[str] = None,
|
||||
params: Optional[Dict[str, Any]] = None,
|
||||
poll_interval: Optional[int] = 2,
|
||||
idempotency_key: Optional[str] = None,
|
||||
**kwargs
|
||||
):
|
||||
"""
|
||||
Initialize the website crawling tool.
|
||||
|
||||
Args:
|
||||
api_key (Optional[str]): Firecrawl API key. If not provided, checks FIRECRAWL_API_KEY env var
|
||||
url (Optional[str]): Base URL to crawl. Can be overridden in _run
|
||||
params (Optional[Dict[str, Any]]): Additional parameters for FirecrawlApp
|
||||
poll_interval (Optional[int]): Poll interval for FirecrawlApp
|
||||
idempotency_key (Optional[str]): Idempotency key for FirecrawlApp
|
||||
**kwargs: Additional arguments for tool creation
|
||||
"""
|
||||
|
||||
def _run(self, url: str) -> Any:
|
||||
"""
|
||||
Crawl a website using Firecrawl.
|
||||
|
||||
Args:
|
||||
url (str): Website URL to crawl (overrides constructor URL if provided)
|
||||
|
||||
Returns:
|
||||
Any: Crawled website content from Firecrawl API
|
||||
"""
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. Set up API authentication:
|
||||
- Use environment variable: `export FIRECRAWL_API_KEY='your-api-key'`
|
||||
- Or provide directly in constructor
|
||||
2. Configure crawling parameters:
|
||||
- Set appropriate poll intervals
|
||||
- Use idempotency keys for retry safety
|
||||
- Customize URL parameters as needed
|
||||
3. Handle rate limits and quotas
|
||||
4. Consider website robots.txt policies
|
||||
5. Handle potential crawling errors in agent prompts (a direct-call sketch follows this list)
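A minimal sketch of guarding a direct call to the crawler is shown below. It assumes the constructor and crawl signature documented above; the URL is illustrative, and the specific exception types raised by the Firecrawl client are assumptions, hence the broad fallback.

```python
from crewai_tools import FirecrawlCrawlWebsiteTool

try:
    crawler = FirecrawlCrawlWebsiteTool(api_key="your-firecrawl-api-key")
    content = crawler.run(url="https://docs.example.com")
    print(content)
except ValueError as e:
    # Assumed case: missing API key or invalid parameters
    print(f"Configuration error: {e}")
except Exception as e:
    print(f"Crawling error: {e}")
```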
|
||||
|
||||
## Integration Example
|
||||
|
||||
```python
|
||||
from crewai import Agent, Task, Crew
|
||||
from crewai_tools import FirecrawlCrawlWebsiteTool
|
||||
|
||||
# Initialize crawler with configuration
|
||||
crawler = FirecrawlCrawlWebsiteTool(
|
||||
api_key="your-firecrawl-api-key",
|
||||
poll_interval=5,
|
||||
params={
|
||||
"max_depth": 3,
|
||||
"follow_links": True
|
||||
}
|
||||
)
|
||||
|
||||
# Create agent
|
||||
web_analyst = Agent(
|
||||
role='Web Content Analyst',
|
||||
goal='Extract and analyze website content comprehensively',
|
||||
backstory='Expert at web crawling and content analysis.',
|
||||
tools=[crawler]
|
||||
)
|
||||
|
||||
# Define task
|
||||
crawl_task = Task(
|
||||
description="""Crawl the documentation website at docs.example.com
|
||||
and extract all API-related content.""",
|
||||
agent=web_analyst
|
||||
)
|
||||
|
||||
# The agent will use:
|
||||
# {
|
||||
# "url": "https://docs.example.com"
|
||||
# }
|
||||
|
||||
# Create crew
|
||||
crew = Crew(
|
||||
agents=[web_analyst],
|
||||
tasks=[crawl_task]
|
||||
)
|
||||
|
||||
# Execute
|
||||
result = crew.kickoff()
|
||||
```
|
||||
|
||||
## Configuration Options
|
||||
|
||||
### URL Parameters
|
||||
```python
|
||||
params = {
|
||||
"max_depth": 3, # Maximum crawl depth
|
||||
"follow_links": True, # Follow internal links
|
||||
"exclude_patterns": [], # URL patterns to exclude
|
||||
"include_patterns": [] # URL patterns to include
|
||||
}
|
||||
```
|
||||
|
||||
### Polling Configuration
|
||||
```python
|
||||
crawler = FirecrawlCrawlWebsiteTool(
|
||||
poll_interval=5, # Poll every 5 seconds
|
||||
idempotency_key="unique-key-123" # For retry safety
|
||||
)
|
||||
```
|
||||
|
||||
## Notes
|
||||
|
||||
- Requires valid Firecrawl API key
|
||||
- Supports both environment variable and direct API key configuration
|
||||
- Configurable polling intervals for crawl status
|
||||
- Idempotency support for safe retries
|
||||
- Thread-safe operations
|
||||
- Customizable crawling parameters
|
||||
- Respects robots.txt by default
|
||||
docs/tools/firecrawl-search-tool.mdx (new file, 154 lines)
@@ -0,0 +1,154 @@
|
||||
---
|
||||
title: FirecrawlSearchTool
|
||||
description: A web search tool powered by Firecrawl API for comprehensive web search capabilities
|
||||
icon: magnifying-glass-chart
|
||||
---
|
||||
|
||||
## FirecrawlSearchTool
|
||||
|
||||
The FirecrawlSearchTool provides web search capabilities using the Firecrawl API. It allows for customizable search queries with options for result formatting and search parameters.
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
pip install 'crewai[tools]'
|
||||
pip install firecrawl-py # Required dependency
|
||||
```
|
||||
|
||||
## Usage Example
|
||||
|
||||
```python
|
||||
from crewai import Agent
|
||||
from crewai_tools import FirecrawlSearchTool
|
||||
|
||||
# Initialize the tool with your API key
|
||||
search_tool = FirecrawlSearchTool(api_key="your-firecrawl-api-key")
|
||||
|
||||
# Create an agent with the tool
|
||||
researcher = Agent(
|
||||
role='Web Researcher',
|
||||
goal='Find relevant information across the web',
|
||||
backstory='Expert at web research and information gathering.',
|
||||
tools=[search_tool],
|
||||
verbose=True
|
||||
)
|
||||
```
|
||||
|
||||
## Input Schema
|
||||
|
||||
```python
|
||||
class FirecrawlSearchToolSchema(BaseModel):
|
||||
query: str = Field(description="Search query")
|
||||
page_options: Optional[Dict[str, Any]] = Field(
|
||||
default=None,
|
||||
description="Options for result formatting"
|
||||
)
|
||||
search_options: Optional[Dict[str, Any]] = Field(
|
||||
default=None,
|
||||
description="Options for searching"
|
||||
)
|
||||
```
|
||||
|
||||
## Function Signature
|
||||
|
||||
```python
|
||||
def __init__(self, api_key: Optional[str] = None, **kwargs):
|
||||
"""
|
||||
Initialize the Firecrawl search tool.
|
||||
|
||||
Args:
|
||||
api_key (Optional[str]): Firecrawl API key
|
||||
**kwargs: Additional arguments for tool creation
|
||||
"""
|
||||
|
||||
def _run(
|
||||
self,
|
||||
query: str,
|
||||
page_options: Optional[Dict[str, Any]] = None,
|
||||
result_options: Optional[Dict[str, Any]] = None,
|
||||
) -> Any:
|
||||
"""
|
||||
Perform a web search using Firecrawl.
|
||||
|
||||
Args:
|
||||
query (str): Search query string
|
||||
page_options (Optional[Dict[str, Any]]): Options for result formatting
|
||||
result_options (Optional[Dict[str, Any]]): Options for search results
|
||||
|
||||
Returns:
|
||||
Any: Search results from Firecrawl API
|
||||
"""
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. Always provide a valid API key
|
||||
2. Use specific, focused search queries
|
||||
3. Customize page and result options for better results
|
||||
4. Handle potential API errors in agent prompts
|
||||
5. Consider rate limits and usage quotas
|
||||
|
||||
## Integration Example
|
||||
|
||||
```python
|
||||
from crewai import Agent, Task, Crew
|
||||
from crewai_tools import FirecrawlSearchTool
|
||||
|
||||
# Initialize tool with API key
|
||||
search_tool = FirecrawlSearchTool(api_key="your-firecrawl-api-key")
|
||||
|
||||
# Create agent
|
||||
researcher = Agent(
|
||||
role='Market Researcher',
|
||||
goal='Research market trends and competitor analysis',
|
||||
backstory='Expert market analyst with deep research skills.',
|
||||
tools=[search_tool]
|
||||
)
|
||||
|
||||
# Define task
|
||||
research_task = Task(
|
||||
description="""Research the latest developments in electric vehicles,
|
||||
focusing on market leaders and emerging technologies. Format the results
|
||||
in a structured way.""",
|
||||
agent=researcher
|
||||
)
|
||||
|
||||
# The agent will use:
|
||||
# {
|
||||
# "query": "electric vehicle market leaders emerging technologies",
|
||||
# "page_options": {
|
||||
# "format": "structured",
|
||||
# "maxLength": 1000
|
||||
# },
|
||||
# "result_options": {
|
||||
# "limit": 5,
|
||||
# "sortBy": "relevance"
|
||||
# }
|
||||
# }
|
||||
|
||||
# Create crew
|
||||
crew = Crew(
|
||||
agents=[researcher],
|
||||
tasks=[research_task]
|
||||
)
|
||||
|
||||
# Execute
|
||||
result = crew.kickoff()
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
The tool includes error handling for the following cases (a minimal sketch follows the list):
|
||||
- Missing API key
|
||||
- Missing firecrawl-py package
|
||||
- API request failures
|
||||
- Invalid options parameters
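A minimal sketch of guarding a direct search call is shown below; the query and options are illustrative, and the exception types are assumptions since the exact errors depend on the Firecrawl client.

```python
from crewai_tools import FirecrawlSearchTool

try:
    search_tool = FirecrawlSearchTool(api_key="your-firecrawl-api-key")
    results = search_tool.run(
        query="electric vehicle market trends",
        page_options={"limit": 5},  # illustrative options
    )
    print(results)
except ImportError:
    print("Install the required dependency: pip install firecrawl-py")
except Exception as e:
    print(f"Search error: {e}")
```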

## Notes

- Requires valid Firecrawl API key
- Supports customizable search parameters
- Provides structured web search results
- Thread-safe operations
- Efficient for large-scale web searches
- Handles rate limiting automatically
233
docs/tools/github-search-tool.mdx
Normal file
@@ -0,0 +1,233 @@

---
title: GithubSearchTool
description: A tool for semantic search within GitHub repositories using RAG capabilities
icon: github
---

## GithubSearchTool

The GithubSearchTool enables semantic search capabilities for GitHub repositories using Retrieval-Augmented Generation (RAG). It processes various content types including code, repository information, pull requests, and issues, allowing natural language queries across repository content.

## Installation

```bash
pip install 'crewai[tools]'
```

## Usage Example

```python
from crewai import Agent
from crewai_tools import GithubSearchTool

# Method 1: Initialize with specific repository
github_tool = GithubSearchTool(
    github_repo="owner/repo",
    gh_token="your_github_token",
    content_types=["code", "pr", "issue"]
)

# Method 2: Initialize without repository (specify at runtime)
flexible_github_tool = GithubSearchTool(
    gh_token="your_github_token",
    content_types=["code", "repo"]
)

# Create an agent with the tool
researcher = Agent(
    role='GitHub Researcher',
    goal='Search and analyze repository contents',
    backstory='Expert at finding relevant information in GitHub repositories.',
    tools=[github_tool],
    verbose=True
)
```

## Input Schema

### Fixed Repository Schema (when repo provided during initialization)
```python
class FixedGithubSearchToolSchema(BaseModel):
    search_query: str = Field(
        description="Mandatory search query you want to use to search the github repo's content"
    )
```

### Flexible Repository Schema (when repo provided at runtime)
```python
class GithubSearchToolSchema(FixedGithubSearchToolSchema):
    github_repo: str = Field(
        description="Mandatory github you want to search"
    )
    content_types: List[str] = Field(
        description="Mandatory content types you want to be included search, options: [code, repo, pr, issue]"
    )
```

## Function Signature

```python
def __init__(
    self,
    gh_token: str,
    content_types: List[str],
    github_repo: Optional[str] = None,
    **kwargs
):
    """
    Initialize the GitHub search tool.

    Args:
        github_repo (Optional[str]): Repository to search (optional)
        gh_token (str): GitHub authentication token
        content_types (List[str]): Content types to search
        **kwargs: Additional arguments for RAG tool configuration
    """

def _run(
    self,
    search_query: str,
    **kwargs: Any
) -> str:
    """
    Execute semantic search on repository contents.

    Args:
        search_query (str): Query to search in the repository
        **kwargs: Additional arguments including github_repo and content_types if not initialized

    Returns:
        str: Relevant content from the repository matching the query
    """
```

## Best Practices

1. Authentication:
   - Secure token management
   - Use environment variables (see the sketch after this list)
   - Handle token expiration

2. Search Optimization:
   - Target specific content types
   - Use focused queries
   - Consider rate limits

3. Performance Considerations:
   - Pre-initialize for repeated searches
   - Handle large repositories
   - Monitor API usage

4. Error Handling:
   - Verify repository access
   - Handle API limits
   - Manage authentication errors
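
Following the authentication guidance above, a minimal sketch of environment-variable based setup. It assumes the constructor documented on this page and a `GITHUB_TOKEN` environment variable name of your choosing:

```python
import os
from crewai_tools import GithubSearchTool

# Read the token from the environment instead of hard-coding it
gh_token = os.getenv("GITHUB_TOKEN")
if not gh_token:
    raise RuntimeError("GITHUB_TOKEN is not set")

github_tool = GithubSearchTool(
    gh_token=gh_token,
    content_types=["code", "issue"]
)
```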

## Integration Example

```python
from crewai import Agent, Task, Crew
from crewai_tools import GithubSearchTool

# Initialize tool with specific repository
github_tool = GithubSearchTool(
    github_repo="owner/repo",
    gh_token="your_github_token",
    content_types=["code", "pr", "issue"]
)

# Create agent
researcher = Agent(
    role='GitHub Researcher',
    goal='Extract insights from repository content',
    backstory='Expert at analyzing GitHub repositories.',
    tools=[github_tool]
)

# Define task
research_task = Task(
    description="""Find all implementations of
    machine learning algorithms in the codebase.""",
    agent=researcher
)

# The tool will use:
# {
#   "search_query": "machine learning implementation"
# }

# Create crew
crew = Crew(
    agents=[researcher],
    tasks=[research_task]
)

# Execute
result = crew.kickoff()
```

## Advanced Usage

### Dynamic Repository Selection
```python
# Initialize without repository
flexible_tool = GithubSearchTool(
    gh_token="your_github_token",
    content_types=["code", "repo"]
)

# Search different repositories
backend_results = flexible_tool.run(
    search_query="authentication implementation",
    github_repo="owner/backend-repo"
)

frontend_results = flexible_tool.run(
    search_query="component architecture",
    github_repo="owner/frontend-repo"
)
```

### Multiple Content Type Analysis
```python
# Create tool with multiple content types
multi_tool = GithubSearchTool(
    github_repo="owner/repo",
    gh_token="your_github_token",
    content_types=["code", "pr", "issue", "repo"]
)

# Search across all content types
results = multi_tool.run(
    search_query="feature implementation status"
)
```

### Error Handling Example
```python
try:
    github_tool = GithubSearchTool(
        gh_token="your_github_token",
        content_types=["code"]
    )
    results = github_tool.run(
        search_query="api endpoints",
        github_repo="owner/repo"
    )
    print(results)
except Exception as e:
    print(f"Error searching repository: {str(e)}")
```

## Notes

- Inherits from RagTool
- Uses GithubLoader
- Requires authentication
- Supports multiple content types
- Dynamic repository specification
- Efficient content retrieval
- Thread-safe operations
- Maintains search context
- Handles API rate limits
- Memory-efficient processing
220
docs/tools/jina-scrape-website-tool.mdx
Normal file
@@ -0,0 +1,220 @@

---
title: JinaScrapeWebsiteTool
description: A tool for scraping website content using Jina.ai's reader service with markdown output
icon: globe
---

## JinaScrapeWebsiteTool

The JinaScrapeWebsiteTool provides website content scraping capabilities using Jina.ai's reader service. It converts web content into clean markdown format and supports both fixed and dynamic URL modes with optional authentication.

## Installation

```bash
pip install 'crewai[tools]'
```

## Usage Example

```python
from crewai import Agent
from crewai_tools import JinaScrapeWebsiteTool

# Method 1: Fixed URL (specified at initialization)
fixed_tool = JinaScrapeWebsiteTool(
    website_url="https://example.com",
    api_key="your-jina-api-key"  # Optional
)

# Method 2: Dynamic URL (specified at runtime)
dynamic_tool = JinaScrapeWebsiteTool(
    api_key="your-jina-api-key"  # Optional
)

# Create an agent with the tool
researcher = Agent(
    role='Web Content Researcher',
    goal='Extract and analyze website content',
    backstory='Expert at gathering and processing web information.',
    tools=[fixed_tool],  # or [dynamic_tool]
    verbose=True
)
```

## Input Schema

```python
class JinaScrapeWebsiteToolInput(BaseModel):
    website_url: str = Field(
        description="Mandatory website url to read the file"
    )
```

## Function Signature

```python
def __init__(
    self,
    website_url: Optional[str] = None,
    api_key: Optional[str] = None,
    custom_headers: Optional[dict] = None,
    **kwargs
):
    """
    Initialize the website scraping tool.

    Args:
        website_url (Optional[str]): URL to scrape (optional for dynamic mode)
        api_key (Optional[str]): Jina.ai API key for authentication
        custom_headers (Optional[dict]): Custom HTTP headers
        **kwargs: Additional arguments for base tool
    """

def _run(
    self,
    website_url: Optional[str] = None
) -> str:
    """
    Execute website scraping.

    Args:
        website_url (Optional[str]): URL to scrape (required for dynamic mode)

    Returns:
        str: Markdown-formatted website content
    """
```

## Best Practices

1. URL Handling:
   - Use complete URLs
   - Validate URL format (see the sketch after this list)
   - Handle redirects
   - Monitor timeouts

2. Authentication:
   - Secure API key storage
   - Use environment variables
   - Manage headers properly
   - Handle auth errors

3. Content Processing:
   - Handle large pages
   - Process markdown output
   - Manage encoding
   - Handle errors

4. Mode Selection:
   - Choose fixed mode for static sites
   - Use dynamic mode for variable URLs
   - Consider caching
   - Manage timeouts
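
A minimal sketch of the URL validation suggested above, using only the standard library before handing the URL to the tool (reuses `dynamic_tool` from the usage example):

```python
from urllib.parse import urlparse

def is_valid_url(url: str) -> bool:
    """Accept only absolute http(s) URLs."""
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

url = "https://example.com/blog"
if is_valid_url(url):
    content = dynamic_tool.run(website_url=url)
else:
    raise ValueError(f"Not a complete http(s) URL: {url}")
```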

## Integration Example

```python
from crewai import Agent, Task, Crew
from crewai_tools import JinaScrapeWebsiteTool
import os

# Initialize tool with API key
scraper_tool = JinaScrapeWebsiteTool(
    api_key=os.getenv('JINA_API_KEY'),
    custom_headers={
        'User-Agent': 'CrewAI Bot 1.0'
    }
)

# Create agent
researcher = Agent(
    role='Web Content Analyst',
    goal='Extract and analyze website content',
    backstory='Expert at processing web information.',
    tools=[scraper_tool]
)

# Define task
analysis_task = Task(
    description="""Analyze the content of
    https://example.com/blog for key insights.""",
    agent=researcher
)

# Create crew
crew = Crew(
    agents=[researcher],
    tasks=[analysis_task]
)

# Execute
result = crew.kickoff()
```

## Advanced Usage

### Multiple Site Analysis
```python
# Initialize tool
scraper = JinaScrapeWebsiteTool(
    api_key=os.getenv('JINA_API_KEY')
)

# Analyze multiple sites
results = []
sites = [
    "https://site1.com",
    "https://site2.com",
    "https://site3.com"
]

for site in sites:
    content = scraper.run(
        website_url=site
    )
    results.append(content)
```

### Custom Headers Configuration
```python
# Initialize with custom headers
tool = JinaScrapeWebsiteTool(
    custom_headers={
        'User-Agent': 'Custom Bot 1.0',
        'Accept-Language': 'en-US,en;q=0.9',
        'Accept': 'text/html,application/xhtml+xml'
    }
)

# Use the tool
content = tool.run(
    website_url="https://example.com"
)
```

### Error Handling Example
```python
import requests

try:
    scraper = JinaScrapeWebsiteTool()
    content = scraper.run(
        website_url="https://example.com"
    )
    print(content)
except requests.exceptions.RequestException as e:
    print(f"Error accessing website: {str(e)}")
except Exception as e:
    print(f"Error processing content: {str(e)}")
```

## Notes

- Uses Jina.ai reader service
- Markdown output format
- API key authentication
- Custom headers support
- Error handling
- Timeout management
- Content processing
- URL validation
- Redirect handling
- Response formatting
224
docs/tools/json-search-tool.mdx
Normal file
@@ -0,0 +1,224 @@

---
title: JSONSearchTool
description: A tool for semantic search within JSON files using RAG capabilities
icon: braces
---

## JSONSearchTool

The JSONSearchTool enables semantic search capabilities for JSON files using Retrieval-Augmented Generation (RAG). It supports both fixed and dynamic file path modes, allowing flexible usage patterns.

## Installation

```bash
pip install 'crewai[tools]'
```

## Usage Example

```python
from crewai import Agent
from crewai_tools import JSONSearchTool

# Method 1: Fixed path (specified at initialization)
fixed_tool = JSONSearchTool(
    json_path="path/to/data.json"
)

# Method 2: Dynamic path (specified at runtime)
dynamic_tool = JSONSearchTool()

# Create an agent with the tool
researcher = Agent(
    role='JSON Data Researcher',
    goal='Search and analyze JSON data',
    backstory='Expert at finding relevant information in JSON files.',
    tools=[fixed_tool],  # or [dynamic_tool]
    verbose=True
)
```

## Input Schema

### Fixed Path Mode
```python
class FixedJSONSearchToolSchema(BaseModel):
    search_query: str = Field(
        description="Mandatory search query you want to use to search the JSON's content"
    )
```

### Dynamic Path Mode
```python
class JSONSearchToolSchema(BaseModel):
    json_path: str = Field(
        description="Mandatory json path you want to search"
    )
    search_query: str = Field(
        description="Mandatory search query you want to use to search the JSON's content"
    )
```

## Function Signature

```python
def __init__(
    self,
    json_path: Optional[str] = None,
    **kwargs
):
    """
    Initialize the JSON search tool.

    Args:
        json_path (Optional[str]): Path to JSON file (optional for dynamic mode)
        **kwargs: Additional arguments for RAG tool configuration
    """

def _run(
    self,
    search_query: str,
    **kwargs: Any
) -> str:
    """
    Execute semantic search on JSON contents.

    Args:
        search_query (str): Query to search in the JSON
        **kwargs: Additional arguments

    Returns:
        str: Relevant content from the JSON matching the query
    """
```

## Best Practices

1. File Handling:
   - Use absolute file paths
   - Verify file existence (see the sketch after this list)
   - Handle large JSON files
   - Monitor memory usage

2. Query Optimization:
   - Structure queries clearly
   - Consider JSON structure
   - Handle nested data
   - Monitor performance

3. Error Handling:
   - Check file access
   - Validate JSON format
   - Handle malformed JSON
   - Log issues

4. Mode Selection:
   - Choose fixed mode for static files
   - Use dynamic mode for runtime selection
   - Consider caching
   - Manage file lifecycle
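
A minimal sketch of the pre-flight checks suggested above (file existence and JSON validity), using only the standard library before initializing the tool; the file path is hypothetical:

```python
import json
import os

from crewai_tools import JSONSearchTool

json_path = os.path.abspath("data/config.json")

if not os.path.isfile(json_path):
    raise FileNotFoundError(f"JSON file not found: {json_path}")

with open(json_path, "r", encoding="utf-8") as f:
    json.load(f)  # raises json.JSONDecodeError if the file is malformed

json_tool = JSONSearchTool(json_path=json_path)
```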

## Integration Example

```python
from crewai import Agent, Task, Crew
from crewai_tools import JSONSearchTool

# Initialize tool
json_tool = JSONSearchTool(
    json_path="data/config.json"
)

# Create agent
researcher = Agent(
    role='JSON Data Analyst',
    goal='Extract insights from JSON configuration',
    backstory='Expert at analyzing JSON data structures.',
    tools=[json_tool]
)

# Define task
analysis_task = Task(
    description="""Find all configuration settings
    related to security.""",
    agent=researcher
)

# Create crew
crew = Crew(
    agents=[researcher],
    tasks=[analysis_task]
)

# Execute
result = crew.kickoff()
```

## Advanced Usage

### Multiple File Analysis
```python
# Create tools for different JSON files
config_tool = JSONSearchTool(
    json_path="config/settings.json"
)

data_tool = JSONSearchTool(
    json_path="data/records.json"
)

# Create agent with multiple tools
analyst = Agent(
    role='JSON Data Analyst',
    goal='Cross-reference configuration and data',
    backstory='Expert at analyzing JSON data structures.',
    tools=[config_tool, data_tool]
)
```

### Dynamic File Loading
```python
# Initialize dynamic tool
dynamic_tool = JSONSearchTool()

# Use with different JSON files
result1 = dynamic_tool.run(
    json_path="file1.json",
    search_query="security settings"
)

result2 = dynamic_tool.run(
    json_path="file2.json",
    search_query="user preferences"
)
```

### Error Handling Example
```python
try:
    json_tool = JSONSearchTool(
        json_path="config/settings.json"
    )
    results = json_tool.run(
        search_query="encryption settings"
    )
    print(results)
except FileNotFoundError as e:
    print(f"JSON file not found: {str(e)}")
except ValueError as e:
    print(f"Invalid JSON format: {str(e)}")
except Exception as e:
    print(f"Error processing JSON: {str(e)}")
```

## Notes

- Inherits from RagTool
- Supports fixed/dynamic modes
- JSON path validation
- Memory management
- Performance optimization
- Error handling
- Search capabilities
- Content extraction
- Format validation
- Security features
184
docs/tools/linkup-search-tool.mdx
Normal file
@@ -0,0 +1,184 @@

---
title: LinkupSearchTool
description: A search tool powered by Linkup API for retrieving contextual information
icon: search
---

## LinkupSearchTool

The LinkupSearchTool provides search capabilities using the Linkup API. It allows for customizable search depth and output formatting, returning structured results with contextual information.

## Installation

```bash
pip install 'crewai[tools]'
pip install linkup  # Required dependency
```

## Usage Example

```python
from crewai import Agent
from crewai_tools import LinkupSearchTool

# Initialize the tool with your API key
search_tool = LinkupSearchTool(api_key="your-linkup-api-key")

# Create an agent with the tool
researcher = Agent(
    role='Information Researcher',
    goal='Find relevant contextual information',
    backstory='Expert at retrieving and analyzing contextual data.',
    tools=[search_tool],
    verbose=True
)
```

## Function Signature

```python
def __init__(self, api_key: str):
    """
    Initialize the Linkup search tool.

    Args:
        api_key (str): Linkup API key for authentication
    """

def _run(
    self,
    query: str,
    depth: str = "standard",
    output_type: str = "searchResults"
) -> dict:
    """
    Perform a search using the Linkup API.

    Args:
        query (str): The search query
        depth (str): Search depth ("standard" by default)
        output_type (str): Desired result type ("searchResults" by default)

    Returns:
        dict: {
            "success": bool,
            "results": List[Dict] | None,
            "error": str | None
        }

        On success, results contains a list of:
        {
            "name": str,
            "url": str,
            "content": str
        }
    """
```

## Best Practices

1. Always provide a valid API key
2. Use specific, focused search queries
3. Choose appropriate search depth based on needs
4. Handle potential API errors in agent prompts
5. Process structured results effectively

## Integration Example

```python
from crewai import Agent, Task, Crew
from crewai_tools import LinkupSearchTool

# Initialize tool with API key
search_tool = LinkupSearchTool(api_key="your-linkup-api-key")

# Create agent
researcher = Agent(
    role='Context Researcher',
    goal='Find detailed contextual information about topics',
    backstory='Expert at discovering and analyzing contextual data.',
    tools=[search_tool]
)

# Define task
research_task = Task(
    description="""Research the latest developments in quantum computing,
    focusing on recent breakthroughs and applications. Use standard depth
    for comprehensive results.""",
    agent=researcher
)

# The tool will use:
# query: "quantum computing recent breakthroughs applications"
# depth: "standard"
# output_type: "searchResults"

# Create crew
crew = Crew(
    agents=[researcher],
    tasks=[research_task]
)

# Execute
result = crew.kickoff()
```

## Advanced Usage

### Search Depth Options
```python
# Quick surface-level search
results = search_tool._run(
    query="quantum computing",
    depth="basic"
)

# Standard comprehensive search
results = search_tool._run(
    query="quantum computing",
    depth="standard"
)

# Deep detailed search
results = search_tool._run(
    query="quantum computing",
    depth="deep"
)
```

### Output Type Options
```python
# Default search results
results = search_tool._run(
    query="quantum computing",
    output_type="searchResults"
)

# Custom output format
results = search_tool._run(
    query="quantum computing",
    output_type="customFormat"
)
```

### Error Handling
```python
results = search_tool._run(query="quantum computing")
if results["success"]:
    for result in results["results"]:
        print(f"Name: {result['name']}")
        print(f"URL: {result['url']}")
        print(f"Content: {result['content']}")
else:
    print(f"Error: {results['error']}")
```

## Notes

- Requires valid Linkup API key
- Returns structured search results
- Supports multiple search depths
- Configurable output formats
- Built-in error handling
- Thread-safe operations
- Efficient for contextual searches
192
docs/tools/llamaindex-tool.mdx
Normal file
@@ -0,0 +1,192 @@

---
title: LlamaIndexTool
description: A wrapper tool for integrating LlamaIndex tools and query engines with CrewAI
icon: link
---

## LlamaIndexTool

The LlamaIndexTool serves as a bridge between CrewAI and LlamaIndex, allowing you to use LlamaIndex tools and query engines within your CrewAI agents. It supports both direct tool wrapping and query engine integration.

## Installation

```bash
pip install 'crewai[tools]'
pip install llama-index  # Required for LlamaIndex integration
```

## Usage Examples

### Using with LlamaIndex Tools

```python
from crewai import Agent
from crewai_tools import LlamaIndexTool
from llama_index.core.tools import FunctionTool

# Create a LlamaIndex tool from a plain Python function;
# FunctionTool derives the fn_schema that from_tool expects
def process_query(query: str) -> str:
    """Process a query and return the result."""
    return f"Processed: {query}"

llama_tool = FunctionTool.from_defaults(
    fn=process_query,
    name="Custom Llama Tool",
    description="A custom LlamaIndex tool"
)

# Wrap the LlamaIndex tool for CrewAI
wrapped_tool = LlamaIndexTool.from_tool(llama_tool)

# Create an agent with the tool
agent = Agent(
    role='LlamaIndex Integration Agent',
    goal='Process queries using LlamaIndex tools',
    backstory='Specialist in integrating LlamaIndex capabilities.',
    tools=[wrapped_tool]
)
```

### Using with Query Engines

```python
from crewai import Agent
from crewai_tools import LlamaIndexTool
from llama_index.core import VectorStoreIndex, Document

# Create a query engine
documents = [Document(text="Sample document content")]
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# Create the tool
query_tool = LlamaIndexTool.from_query_engine(
    query_engine,
    name="Document Search",
    description="Search through indexed documents"
)

# Create an agent with the tool
agent = Agent(
    role='Document Researcher',
    goal='Find relevant information in documents',
    backstory='Expert at searching through document collections.',
    tools=[query_tool]
)
```

## Tool Creation Methods

### From LlamaIndex Tool

```python
@classmethod
def from_tool(cls, tool: Any, **kwargs: Any) -> "LlamaIndexTool":
    """
    Create a CrewAI tool from a LlamaIndex tool.

    Args:
        tool (LlamaBaseTool): A LlamaIndex tool to wrap
        **kwargs: Additional arguments for tool creation

    Returns:
        LlamaIndexTool: A CrewAI-compatible tool wrapper

    Raises:
        ValueError: If tool is not a LlamaBaseTool or lacks fn_schema
    """
```

### From Query Engine

```python
@classmethod
def from_query_engine(
    cls,
    query_engine: Any,
    name: Optional[str] = None,
    description: Optional[str] = None,
    return_direct: bool = False,
    **kwargs: Any
) -> "LlamaIndexTool":
    """
    Create a CrewAI tool from a LlamaIndex query engine.

    Args:
        query_engine (BaseQueryEngine): The query engine to wrap
        name (Optional[str]): Custom name for the tool
        description (Optional[str]): Custom description
        return_direct (bool): Whether to return query engine response directly
        **kwargs: Additional arguments for tool creation

    Returns:
        LlamaIndexTool: A CrewAI-compatible tool wrapper

    Raises:
        ValueError: If query_engine is not a BaseQueryEngine
    """
```
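
The `return_direct` flag is not exercised in the examples on this page. A minimal sketch, reusing the `query_engine` built in the usage example above:

```python
# Per the docstring above: return the query engine's response directly
direct_tool = LlamaIndexTool.from_query_engine(
    query_engine,
    name="Direct Document Answer",
    description="Answer questions directly from the indexed documents",
    return_direct=True
)
```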

## Integration Example

```python
from crewai import Agent, Task, Crew
from crewai_tools import LlamaIndexTool
from llama_index.core import VectorStoreIndex, Document

# Create documents and index
documents = [
    Document(text="AI is a technology that simulates human intelligence."),
    Document(text="Machine learning is a subset of AI.")
]
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# Create the tool
search_tool = LlamaIndexTool.from_query_engine(
    query_engine,
    name="AI Knowledge Base",
    description="Search through AI-related documents"
)

# Create agent
researcher = Agent(
    role='AI Researcher',
    goal='Research AI concepts',
    backstory='Expert at finding and explaining AI concepts.',
    tools=[search_tool]
)

# Define task
research_task = Task(
    description="""Find and explain what AI is and its relationship
    with machine learning.""",
    agent=researcher
)

# The agent will use:
# {
#   "query": "What is AI and how does it relate to machine learning?"
# }

# Create crew
crew = Crew(
    agents=[researcher],
    tasks=[research_task]
)

# Execute
result = crew.kickoff()
```

## Notes

- Automatically adapts LlamaIndex tool schemas for CrewAI compatibility
- Renames 'input' parameter to 'query' for better integration
- Supports both direct tool wrapping and query engine integration
- Handles schema validation and error resolution
- Thread-safe operations
- Compatible with all LlamaIndex tool types and query engines
209
docs/tools/mdx-search-tool.mdx
Normal file
@@ -0,0 +1,209 @@

---
title: MDX Search Tool
description: A tool for semantic searching within MDX files using RAG capabilities
---

# MDX Search Tool

The MDX Search Tool enables semantic searching within MDX (Markdown with JSX) files using Retrieval-Augmented Generation (RAG) capabilities. It supports both fixed and dynamic file path modes, allowing you to specify the MDX file at initialization or runtime.

## Installation

```bash
pip install 'crewai[tools]'
```

## Usage

You can use the MDX Search Tool in two ways:

### 1. Fixed MDX File Path

Initialize the tool with a specific MDX file path:

```python
from crewai import Agent, Task
from crewai_tools import MDXSearchTool

# Initialize with a fixed MDX file
tool = MDXSearchTool(mdx="/path/to/your/document.mdx")

# Create an agent with the tool
agent = Agent(
    role='Technical Writer',
    goal='Search through MDX documentation',
    backstory='I help find relevant information in MDX documentation',
    tools=[tool]
)

# Use in a task
task = Task(
    description="Find information about API endpoints in the documentation",
    agent=agent
)
```

### 2. Dynamic MDX File Path

Initialize the tool without a specific file path to provide it at runtime:

```python
from crewai import Agent, Task
from crewai_tools import MDXSearchTool

# Initialize without a fixed MDX file
tool = MDXSearchTool()

# Create an agent with the tool
agent = Agent(
    role='Documentation Analyst',
    goal='Search through various MDX files',
    backstory='I analyze different MDX documentation files',
    tools=[tool]
)

# Use in a task with dynamic file path
task = Task(
    description="Search for 'authentication' in the API documentation",
    agent=agent,
    context={
        "mdx": "/path/to/api-docs.mdx",
        "search_query": "authentication"
    }
)
```

## Input Schema

### Fixed MDX File Mode
```python
class FixedMDXSearchToolSchema(BaseModel):
    search_query: str  # The search query to find content in the MDX file
```

### Dynamic MDX File Mode
```python
class MDXSearchToolSchema(BaseModel):
    search_query: str  # The search query to find content in the MDX file
    mdx: str  # The path to the MDX file to search
```

## Function Signatures

```python
def __init__(self, mdx: Optional[str] = None, **kwargs):
    """
    Initialize the MDX Search Tool.

    Args:
        mdx (Optional[str]): Path to the MDX file (optional)
        **kwargs: Additional arguments passed to RagTool
    """

def _run(
    self,
    search_query: str,
    **kwargs: Any,
) -> str:
    """
    Execute the search on the MDX file.

    Args:
        search_query (str): The query to search for
        **kwargs: Additional arguments including 'mdx' for dynamic mode

    Returns:
        str: The search results from the MDX content
    """
```

## Best Practices

1. **File Path Handling**:
   - Use absolute paths to avoid path resolution issues
   - Verify file existence before searching
   - Handle file permissions appropriately

2. **Query Optimization**:
   - Use specific, focused search queries
   - Consider context when formulating queries
   - Break down complex searches into smaller queries

3. **Error Handling**:
   - Handle file not found errors gracefully
   - Manage permission issues appropriately
   - Validate input parameters before processing

## Example Integration

Here's a complete example showing how to integrate the MDX Search Tool with CrewAI:

```python
from crewai import Agent, Task, Crew
from crewai_tools import MDXSearchTool

# Initialize the tool
mdx_tool = MDXSearchTool()

# Create an agent with the tool
researcher = Agent(
    role='Documentation Researcher',
    goal='Find and analyze information in MDX documentation',
    backstory='I am an expert at finding relevant information in documentation',
    tools=[mdx_tool]
)

# Create tasks
search_task = Task(
    description="""
    Search through the API documentation for information about authentication methods.
    Look for:
    1. Authentication endpoints
    2. Security best practices
    3. Token handling

    Provide a comprehensive summary of the findings.
    """,
    agent=researcher,
    context={
        "mdx": "/path/to/api-docs.mdx",
        "search_query": "authentication security tokens"
    }
)

# Create and run the crew
crew = Crew(
    agents=[researcher],
    tasks=[search_task]
)

result = crew.kickoff()
```

## Error Handling

The tool handles various error scenarios:

1. **File Not Found**:
   ```python
   try:
       tool = MDXSearchTool(mdx="/path/to/nonexistent.mdx")
   except FileNotFoundError:
       print("MDX file not found. Please verify the file path.")
   ```

2. **Permission Issues**:
   ```python
   try:
       tool = MDXSearchTool(mdx="/restricted/docs.mdx")
   except PermissionError:
       print("Insufficient permissions to access the MDX file.")
   ```

3. **Invalid Content**:
   ```python
   try:
       result = tool._run(search_query="query", mdx="/path/to/invalid.mdx")
   except ValueError:
       print("Invalid MDX content or format.")
   ```
217
docs/tools/mysql-search-tool.mdx
Normal file
@@ -0,0 +1,217 @@

---
title: MySQLSearchTool
description: A tool for semantic search within MySQL database tables using RAG capabilities
icon: database
---

## MySQLSearchTool

The MySQLSearchTool enables semantic search capabilities for MySQL database tables using Retrieval-Augmented Generation (RAG). It processes table contents and allows natural language queries to search through the data.

## Installation

```bash
pip install 'crewai[tools]'
```

## Usage Example

```python
from crewai import Agent
from crewai_tools import MySQLSearchTool

# Initialize the tool
mysql_tool = MySQLSearchTool(
    table_name="users",
    db_uri="mysql://user:pass@localhost:3306/database"
)

# Create an agent with the tool
researcher = Agent(
    role='Database Researcher',
    goal='Search and analyze database contents',
    backstory='Expert at finding relevant information in databases.',
    tools=[mysql_tool],
    verbose=True
)
```

## Input Schema

```python
class MySQLSearchToolSchema(BaseModel):
    search_query: str = Field(
        description="Mandatory semantic search query you want to use to search the database's content"
    )
```

## Function Signature

```python
def __init__(
    self,
    table_name: str,
    db_uri: str,
    **kwargs
):
    """
    Initialize the MySQL search tool.

    Args:
        table_name (str): Name of the table to search
        db_uri (str): Database connection URI
        **kwargs: Additional arguments for RAG tool configuration
    """

def _run(
    self,
    search_query: str,
    **kwargs: Any
) -> str:
    """
    Execute semantic search on table contents.

    Args:
        search_query (str): Query to search in the table
        **kwargs: Additional arguments

    Returns:
        str: Relevant content from the table matching the query
    """
```

## Best Practices

1. Database Connection:
   - Use secure connection URIs
   - Handle authentication properly
   - Manage connection lifecycle
   - Monitor timeouts

2. Query Optimization:
   - Structure queries clearly
   - Consider table size
   - Handle large datasets
   - Monitor performance

3. Security Considerations:
   - Protect credentials
   - Use environment variables
   - Limit table access
   - Validate inputs

4. Error Handling:
   - Handle connection errors
   - Manage query timeouts
   - Provide clear messages
   - Log issues

## Integration Example

```python
from crewai import Agent, Task, Crew
from crewai_tools import MySQLSearchTool

# Initialize tool
mysql_tool = MySQLSearchTool(
    table_name="customers",
    db_uri="mysql://user:pass@localhost:3306/crm"
)

# Create agent
researcher = Agent(
    role='Database Analyst',
    goal='Extract customer insights from database',
    backstory='Expert at analyzing customer data.',
    tools=[mysql_tool]
)

# Define task
analysis_task = Task(
    description="""Find all premium customers
    with recent purchases.""",
    agent=researcher
)

# The tool will use:
# {
#   "search_query": "premium customers recent purchases"
# }

# Create crew
crew = Crew(
    agents=[researcher],
    tasks=[analysis_task]
)

# Execute
result = crew.kickoff()
```

## Advanced Usage

### Multiple Table Analysis
```python
# Create tools for different tables
customers_tool = MySQLSearchTool(
    table_name="customers",
    db_uri="mysql://user:pass@localhost:3306/crm"
)

orders_tool = MySQLSearchTool(
    table_name="orders",
    db_uri="mysql://user:pass@localhost:3306/crm"
)

# Create agent with multiple tools
analyst = Agent(
    role='Data Analyst',
    goal='Cross-reference customer and order data',
    backstory='Expert at cross-referencing customer and order data.',
    tools=[customers_tool, orders_tool]
)
```

### Secure Connection Configuration
```python
import os

# Use environment variables for credentials
db_uri = (
    f"mysql://{os.getenv('DB_USER')}:{os.getenv('DB_PASS')}"
    f"@{os.getenv('DB_HOST')}:{os.getenv('DB_PORT')}"
    f"/{os.getenv('DB_NAME')}"
)

tool = MySQLSearchTool(
    table_name="sensitive_data",
    db_uri=db_uri
)
```

### Error Handling Example
```python
try:
    mysql_tool = MySQLSearchTool(
        table_name="users",
        db_uri="mysql://user:pass@localhost:3306/app"
    )
    results = mysql_tool.run(
        search_query="active users in California"
    )
    print(results)
except Exception as e:
    print(f"Error querying database: {str(e)}")
```

## Notes

- Inherits from RagTool
- Uses MySQLLoader
- Requires database URI
- Table-specific search
- Semantic query support
- Connection management
- Error handling
- Performance optimization
- Security features
- Memory efficiency
208
docs/tools/pdf-search-tool.mdx
Normal file
@@ -0,0 +1,208 @@

---
title: PDFSearchTool
description: A tool for semantic search within PDF documents using RAG capabilities
icon: file-search
---

## PDFSearchTool

The PDFSearchTool enables semantic search capabilities for PDF documents using Retrieval-Augmented Generation (RAG). It leverages embedchain's PDFEmbedchainAdapter for efficient PDF processing and supports both fixed and dynamic PDF path specification.

## Installation

```bash
pip install 'crewai[tools]'
```

## Usage Example

```python
from crewai import Agent
from crewai_tools import PDFSearchTool

# Method 1: Initialize with specific PDF
pdf_tool = PDFSearchTool(pdf="/path/to/document.pdf")

# Method 2: Initialize without PDF (specify at runtime)
flexible_pdf_tool = PDFSearchTool()

# Create an agent with the tool
researcher = Agent(
    role='PDF Researcher',
    goal='Search and analyze PDF documents',
    backstory='Expert at finding relevant information in PDFs.',
    tools=[pdf_tool],
    verbose=True
)
```

## Input Schema

### Fixed PDF Schema (when PDF path provided during initialization)
```python
class FixedPDFSearchToolSchema(BaseModel):
    query: str = Field(
        description="Mandatory query you want to use to search the PDF's content"
    )
```

### Flexible PDF Schema (when PDF path provided at runtime)
```python
class PDFSearchToolSchema(FixedPDFSearchToolSchema):
    pdf: str = Field(
        description="Mandatory pdf path you want to search"
    )
```

## Function Signature

```python
def __init__(
    self,
    pdf: Optional[str] = None,
    **kwargs
):
    """
    Initialize the PDF search tool.

    Args:
        pdf (Optional[str]): Path to PDF file (optional)
        **kwargs: Additional arguments for RAG tool configuration
    """

def _run(
    self,
    query: str,
    **kwargs: Any
) -> str:
    """
    Execute semantic search on PDF content.

    Args:
        query (str): Search query for the PDF
        **kwargs: Additional arguments including pdf path if not initialized

    Returns:
        str: Relevant content from the PDF matching the query
    """
```

## Best Practices

1. PDF File Handling:
   - Use absolute paths for reliability
   - Verify PDF file existence
   - Handle large PDFs appropriately

2. Search Optimization:
   - Use specific, focused queries
   - Consider document structure
   - Test with sample queries first

3. Performance Considerations:
   - Pre-initialize with PDF for repeated searches
   - Handle large documents efficiently
   - Monitor memory usage

4. Error Handling:
   - Verify PDF file existence
   - Handle malformed PDFs
   - Manage file access permissions

## Integration Example

```python
from crewai import Agent, Task, Crew
from crewai_tools import PDFSearchTool

# Initialize tool with specific PDF
pdf_tool = PDFSearchTool(pdf="/path/to/research.pdf")

# Create agent
researcher = Agent(
    role='PDF Researcher',
    goal='Extract insights from research papers',
    backstory='Expert at analyzing research documents.',
    tools=[pdf_tool]
)

# Define task
research_task = Task(
    description="""Find all mentions of machine learning
    applications in healthcare from the PDF.""",
    agent=researcher
)

# The tool will use:
# {
#   "query": "machine learning applications healthcare"
# }

# Create crew
crew = Crew(
    agents=[researcher],
    tasks=[research_task]
)

# Execute
result = crew.kickoff()
```

## Advanced Usage

### Dynamic PDF Selection
```python
# Initialize without PDF
flexible_tool = PDFSearchTool()

# Search different PDFs
research_results = flexible_tool.run(
    query="quantum computing",
    pdf="/path/to/research.pdf"
)

report_results = flexible_tool.run(
    query="financial metrics",
    pdf="/path/to/report.pdf"
)
```

### Multiple PDF Analysis
```python
# Create tools for different PDFs
research_tool = PDFSearchTool(pdf="/path/to/research.pdf")
report_tool = PDFSearchTool(pdf="/path/to/report.pdf")

# Create agent with multiple tools
analyst = Agent(
    role='Document Analyst',
    goal='Cross-reference multiple documents',
    backstory='Expert at cross-referencing research papers and reports.',
    tools=[research_tool, report_tool]
)
```

### Error Handling Example
```python
try:
    pdf_tool = PDFSearchTool()
    results = pdf_tool.run(
        query="important findings",
        pdf="/path/to/document.pdf"
    )
    print(results)
except Exception as e:
    print(f"Error processing PDF: {str(e)}")
```

## Notes

- Inherits from RagTool
- Uses PDFEmbedchainAdapter
- Supports semantic search
- Dynamic PDF specification
- Efficient content retrieval
- Thread-safe operations
- Maintains search context
- Handles large documents
- Supports various PDF formats
- Memory-efficient processing
234
docs/tools/pdf-text-writing-tool.mdx
Normal file
@@ -0,0 +1,234 @@

---
title: PDFTextWritingTool
description: A tool for adding text to specific positions in PDF documents with custom font support
icon: file-pdf
---

## PDFTextWritingTool

The PDFTextWritingTool allows you to add text to specific positions in PDF documents with support for custom fonts, colors, and positioning. It's particularly useful for adding annotations, watermarks, or any text overlay to existing PDFs.

## Installation

```bash
pip install 'crewai[tools]'
```

## Usage Example

```python
from crewai import Agent
from crewai_tools import PDFTextWritingTool

# Basic initialization
pdf_tool = PDFTextWritingTool()

# Create an agent with the tool
document_processor = Agent(
    role='Document Processor',
    goal='Add text annotations to PDF documents',
    backstory='Expert at PDF document processing and text manipulation.',
    tools=[pdf_tool],
    verbose=True
)
```

## Input Schema

```python
class PDFTextWritingToolSchema(BaseModel):
    pdf_path: str = Field(
        description="Path to the PDF file to modify"
    )
    text: str = Field(
        description="Text to add to the PDF"
    )
    position: tuple = Field(
        description="Tuple of (x, y) coordinates for text placement"
    )
    font_size: int = Field(
        default=12,
        description="Font size of the text"
    )
    font_color: str = Field(
        default="0 0 0 rg",
        description="RGB color code for the text"
    )
    font_name: Optional[str] = Field(
        default="F1",
        description="Font name for standard fonts"
    )
    font_file: Optional[str] = Field(
        default=None,
        description="Path to a .ttf font file for custom font usage"
    )
    page_number: int = Field(
        default=0,
        description="Page number to add text to"
    )
```

## Function Signature

```python
def run(
    self,
    pdf_path: str,
    text: str,
    position: tuple,
    font_size: int,
    font_color: str,
    font_name: str = "F1",
    font_file: Optional[str] = None,
    page_number: int = 0,
    **kwargs
) -> str:
    """
    Add text to a specific position in a PDF document.

    Args:
        pdf_path (str): Path to the PDF file to modify
        text (str): Text to add to the PDF
        position (tuple): (x, y) coordinates for text placement
        font_size (int): Font size of the text
        font_color (str): RGB color code for the text (e.g., "0 0 0 rg" for black)
        font_name (str, optional): Font name for standard fonts (default: "F1")
        font_file (str, optional): Path to a .ttf font file for custom font
        page_number (int, optional): Page number to add text to (default: 0)

    Returns:
        str: Success message with output file path
    """
```

## Best Practices

1. File Handling:
   - Ensure PDF files exist before processing (see the sketch after this list)
   - Use absolute paths for reliability
   - Handle file permissions appropriately

2. Text Positioning:
   - Use appropriate coordinates based on PDF dimensions
   - Consider page orientation and margins
   - Test positioning with small changes first

3. Font Usage:
   - Verify custom font files exist
   - Use standard fonts when possible
   - Test font rendering before production use

4. Error Handling:
   - Check page numbers are valid
   - Verify font file accessibility
   - Handle file writing permissions
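
A minimal sketch of the pre-flight checks suggested above, using only the standard library and the `run` signature documented on this page; the paths are hypothetical, and `pdf_tool` is the instance from the usage example:

```python
import os

pdf_path = "/path/to/document.pdf"
font_file = "/path/to/custom_font.ttf"

# Verify inputs exist before calling the tool
if not os.path.isfile(pdf_path):
    raise FileNotFoundError(f"PDF not found: {pdf_path}")
if font_file and not os.path.isfile(font_file):
    raise FileNotFoundError(f"Font file not found: {font_file}")

result = pdf_tool.run(
    pdf_path=pdf_path,
    text="Reviewed",
    position=(100, 700),
    font_size=12,
    font_color="0 0 0 rg",
    font_file=font_file
)
```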

## Integration Example

```python
from crewai import Agent, Task, Crew
from crewai_tools import PDFTextWritingTool

# Initialize tool
pdf_tool = PDFTextWritingTool()

# Create agent
document_processor = Agent(
    role='Document Processor',
    goal='Process and annotate PDF documents',
    backstory='Expert at PDF manipulation and text placement.',
    tools=[pdf_tool]
)

# Define task
annotation_task = Task(
    description="""Add a watermark saying 'CONFIDENTIAL' to
    the center of the first page of the document at
    '/path/to/document.pdf'.""",
    agent=document_processor
)

# The tool will use:
# {
#   "pdf_path": "/path/to/document.pdf",
#   "text": "CONFIDENTIAL",
#   "position": (300, 400),
#   "font_size": 24,
#   "font_color": "1 0 0 rg",  # Red color
#   "page_number": 0
# }

# Create crew
crew = Crew(
    agents=[document_processor],
    tasks=[annotation_task]
)

# Execute
result = crew.kickoff()
```

## Advanced Usage

### Custom Font Example
```python
# Using a custom font
result = pdf_tool.run(
    pdf_path="/path/to/input.pdf",
    text="Custom Font Text",
    position=(100, 500),
    font_size=16,
    font_color="0 0 1 rg",  # Blue color
    font_file="/path/to/custom_font.ttf",
    page_number=0
)
```

### Multiple Text Elements
```python
# Add multiple text elements
positions = [(100, 700), (100, 650), (100, 600)]
texts = ["Header", "Subheader", "Body Text"]
font_sizes = [18, 14, 12]

for text, position, size in zip(texts, positions, font_sizes):
    pdf_tool.run(
        pdf_path="/path/to/input.pdf",
        text=text,
        position=position,
        font_size=size,
        font_color="0 0 0 rg"  # Black color
    )
```

### Color Text Example
```python
# Add colored text
colors = {
    "red": "1 0 0 rg",
    "green": "0 1 0 rg",
    "blue": "0 0 1 rg"
}

for y_pos, (color_name, color_code) in enumerate(colors.items()):
    pdf_tool.run(
        pdf_path="/path/to/input.pdf",
        text=f"This text is {color_name}",
        position=(100, 700 - y_pos * 50),
        font_size=14,
        font_color=color_code
    )
```

## Notes

- Supports custom TrueType fonts (.ttf)
- Allows RGB color specifications
- Handles multi-page PDFs
- Preserves original PDF content
- Supports text positioning with x,y coordinates
- Maintains PDF structure and metadata
- Creates new output file for safety
- Thread-safe operations
- Efficient PDF manipulation
- Supports various text attributes
181
docs/tools/pg-search-tool.mdx
Normal file
@@ -0,0 +1,181 @@

---
title: PGSearchTool
description: A RAG-based semantic search tool for PostgreSQL database content
icon: database-search
---

## PGSearchTool

The PGSearchTool provides semantic search capabilities for PostgreSQL database content using RAG (Retrieval-Augmented Generation). It allows for natural language queries over database table content by leveraging embeddings and semantic search.

## Installation

```bash
pip install 'crewai[tools]'
pip install embedchain  # Required dependency
```

## Usage Example

```python
from crewai import Agent
from crewai_tools import PGSearchTool

# Initialize the tool with database configuration
search_tool = PGSearchTool(
    db_uri="postgresql://user:password@localhost:5432/dbname",
    table_name="your_table"
)

# Create an agent with the tool
researcher = Agent(
    role='Database Researcher',
    goal='Find relevant information in database content',
    backstory='Expert at searching and analyzing database content.',
    tools=[search_tool],
    verbose=True
)
```

## Input Schema

```python
class PGSearchToolSchema(BaseModel):
    search_query: str = Field(
        description="Mandatory semantic search query for searching the database's content"
    )
```

## Function Signature

```python
def __init__(self, table_name: str, **kwargs):
    """
    Initialize the PostgreSQL search tool.

    Args:
        table_name (str): Name of the table to search
        db_uri (str): PostgreSQL database URI (required in kwargs)
        **kwargs: Additional arguments for RagTool initialization
    """

def _run(
    self,
    search_query: str,
    **kwargs: Any
) -> Any:
    """
    Perform semantic search on database content.

    Args:
        search_query (str): Semantic search query
        **kwargs: Additional search parameters

    Returns:
        Any: Relevant database content based on semantic search
    """
```

## Best Practices

1. Secure database credentials:
   ```python
   # Use environment variables for sensitive data
   import os

   db_uri = (
       f"postgresql://{os.getenv('DB_USER')}:{os.getenv('DB_PASS')}"
       f"@{os.getenv('DB_HOST')}:{os.getenv('DB_PORT')}/{os.getenv('DB_NAME')}"
   )
   ```

2. Optimize table selection
3. Use specific semantic queries
4. Handle database connection errors
5. Consider table size and query performance

## Integration Example

```python
from crewai import Agent, Task, Crew
from crewai_tools import PGSearchTool

# Initialize tool with database configuration
db_search = PGSearchTool(
    db_uri="postgresql://user:password@localhost:5432/dbname",
    table_name="customer_feedback"
)

# Create agent
analyst = Agent(
    role='Database Analyst',
    goal='Analyze customer feedback data',
    backstory='Expert at finding insights in customer feedback.',
    tools=[db_search]
)

# Define task
analysis_task = Task(
    description="""Find all customer feedback related to product usability
    and ease of use. Focus on common patterns and issues.""",
    agent=analyst
)

# The tool will use:
# {
#   "search_query": "product usability feedback ease of use issues"
# }

# Create crew
crew = Crew(
    agents=[analyst],
    tasks=[analysis_task]
)

# Execute
result = crew.kickoff()
```

## Advanced Usage

### Multiple Table Search
```python
# Create tools for different tables
customer_search = PGSearchTool(
    db_uri="postgresql://user:password@localhost:5432/dbname",
    table_name="customers"
)

orders_search = PGSearchTool(
    db_uri="postgresql://user:password@localhost:5432/dbname",
    table_name="orders"
)

# Use both tools in an agent
analyst = Agent(
    role='Multi-table Analyst',
    goal='Analyze customer and order data',
    backstory='Expert at analyzing customer and order data.',
    tools=[customer_search, orders_search]
)
```

### Error Handling
```python
try:
    results = search_tool._run(
        search_query="customer satisfaction ratings"
    )
    # Process results
except Exception as e:
    print(f"Database search error: {str(e)}")
```

## Notes

- Inherits from RagTool for semantic search
- Uses embedchain's PostgresLoader
- Requires valid PostgreSQL connection
- Supports semantic natural language queries
- Thread-safe operations
- Efficient for large tables
- Handles connection pooling automatically
282
docs/tools/rag-tool.mdx
Normal file
282
docs/tools/rag-tool.mdx
Normal file
@@ -0,0 +1,282 @@
|
||||
---
|
||||
title: RagTool
|
||||
description: Base class for Retrieval-Augmented Generation (RAG) tools with flexible adapter support
|
||||
icon: database
|
||||
---
|
||||
|
||||
## RagTool
|
||||
|
||||
The RagTool serves as the base class for all Retrieval-Augmented Generation (RAG) tools in the CrewAI ecosystem. It provides a flexible adapter-based architecture for implementing knowledge base functionality with semantic search capabilities.
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
pip install 'crewai[tools]'
|
||||
```
|
||||
|
||||
## Usage Example
|
||||
|
||||
```python
|
||||
from crewai import Agent
|
||||
from crewai_tools import RagTool
|
||||
from crewai_tools.adapters import EmbedchainAdapter
|
||||
from embedchain import App
|
||||
|
||||
# Create custom adapter
|
||||
class CustomAdapter(RagTool.Adapter):
|
||||
def query(self, question: str) -> str:
|
||||
# Implement custom query logic
|
||||
return "Answer based on knowledge base"
|
||||
|
||||
def add(self, *args, **kwargs) -> None:
|
||||
# Implement custom add logic
|
||||
pass
|
||||
|
||||
# Method 1: Use default EmbedchainAdapter
|
||||
rag_tool = RagTool(
|
||||
name="Custom Knowledge Base",
|
||||
description="Specialized knowledge base for domain data",
|
||||
summarize=True
|
||||
)
|
||||
|
||||
# Method 2: Use custom adapter
|
||||
custom_tool = RagTool(
|
||||
name="Custom Knowledge Base",
|
||||
adapter=CustomAdapter(),
|
||||
summarize=False
|
||||
)
|
||||
|
||||
# Create an agent with the tool
|
||||
researcher = Agent(
|
||||
role='Knowledge Base Researcher',
|
||||
goal='Search and analyze knowledge base content',
|
||||
backstory='Expert at finding relevant information in specialized datasets.',
|
||||
tools=[rag_tool],
|
||||
verbose=True
|
||||
)
|
||||
```
|
||||
|
||||
## Adapter Interface
|
||||
|
||||
```python
|
||||
class Adapter(BaseModel, ABC):
|
||||
@abstractmethod
|
||||
def query(self, question: str) -> str:
|
||||
"""
|
||||
Query the knowledge base with a question.
|
||||
|
||||
Args:
|
||||
question (str): Query to search in knowledge base
|
||||
|
||||
Returns:
|
||||
str: Answer based on knowledge base content
|
||||
"""
|
||||
|
||||
@abstractmethod
|
||||
def add(self, *args: Any, **kwargs: Any) -> None:
|
||||
"""
|
||||
Add content to the knowledge base.
|
||||
|
||||
Args:
|
||||
*args: Variable length argument list
|
||||
**kwargs: Arbitrary keyword arguments
|
||||
"""
|
||||
```
|
||||
|
||||
## Function Signature
|
||||
|
||||
```python
|
||||
def __init__(
|
||||
self,
|
||||
name: str = "Knowledge base",
|
||||
description: str = "A knowledge base that can be used to answer questions.",
|
||||
summarize: bool = False,
|
||||
adapter: Optional[Adapter] = None,
|
||||
config: Optional[dict[str, Any]] = None,
|
||||
**kwargs
|
||||
):
|
||||
"""
|
||||
Initialize the RAG tool.
|
||||
|
||||
Args:
|
||||
name (str): Tool name
|
||||
description (str): Tool description
|
||||
summarize (bool): Enable answer summarization
|
||||
adapter (Optional[Adapter]): Custom adapter implementation
|
||||
config (Optional[dict]): Configuration for default adapter
|
||||
**kwargs: Additional arguments for base tool
|
||||
"""
|
||||
|
||||
def _run(
|
||||
self,
|
||||
query: str,
|
||||
**kwargs: Any
|
||||
) -> str:
|
||||
"""
|
||||
Execute query against knowledge base.
|
||||
|
||||
Args:
|
||||
query (str): Question to ask
|
||||
**kwargs: Additional arguments
|
||||
|
||||
Returns:
|
||||
str: Answer from knowledge base
|
||||
"""
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. Adapter Implementation:
|
||||
- Define clear interfaces
|
||||
- Handle edge cases
|
||||
- Implement error handling
|
||||
- Document behavior
|
||||
|
||||
2. Knowledge Base Management:
|
||||
- Organize content logically
|
||||
- Update content regularly
|
||||
- Monitor performance
|
||||
- Handle large datasets
|
||||
|
||||
3. Query Optimization:
|
||||
- Structure queries clearly
|
||||
- Consider context
|
||||
- Handle ambiguity
|
||||
- Validate inputs
|
||||
|
||||
4. Error Handling:
|
||||
- Handle missing data
|
||||
- Manage timeouts
|
||||
- Provide clear messages
|
||||
- Log issues
|
||||
|
||||
## Integration Example
|
||||
|
||||
```python
|
||||
from crewai import Agent, Task, Crew
|
||||
from crewai_tools import RagTool
|
||||
from embedchain import App
|
||||
|
||||
# Initialize tool with custom configuration
|
||||
rag_tool = RagTool(
|
||||
name="Technical Documentation KB",
|
||||
description="Knowledge base for technical documentation",
|
||||
summarize=True,
|
||||
config={
|
||||
"collection_name": "tech_docs",
|
||||
"chunking": {
|
||||
"chunk_size": 500,
|
||||
"chunk_overlap": 50
|
||||
}
|
||||
}
|
||||
)
|
||||
|
||||
# Add content to knowledge base
|
||||
rag_tool.add(
|
||||
"Technical documentation content here...",
|
||||
data_type="text"
|
||||
)
|
||||
|
||||
# Create agent
|
||||
researcher = Agent(
|
||||
role='Documentation Expert',
|
||||
goal='Extract technical information from documentation',
|
||||
backstory='Expert at analyzing technical documentation.',
|
||||
tools=[rag_tool]
|
||||
)
|
||||
|
||||
# Define task
|
||||
research_task = Task(
|
||||
description="""Find all mentions of API endpoints
|
||||
and their authentication requirements.""",
|
||||
agent=researcher
|
||||
)
|
||||
|
||||
# Create crew
|
||||
crew = Crew(
|
||||
agents=[researcher],
|
||||
tasks=[research_task]
|
||||
)
|
||||
|
||||
# Execute
|
||||
result = crew.kickoff()
|
||||
```
|
||||
|
||||
## Advanced Usage
|
||||
|
||||
### Custom Adapter Implementation
|
||||
```python
|
||||
from typing import Any
|
||||
from pydantic import BaseModel
|
||||
from abc import ABC, abstractmethod
|
||||
|
||||
class SpecializedAdapter(RagTool.Adapter):
|
||||
def __init__(self, config: dict):
|
||||
self.config = config
|
||||
self.knowledge_base = {}
|
||||
|
||||
def query(self, question: str) -> str:
|
||||
# Implement specialized query logic
|
||||
return self._process_query(question)
|
||||
|
||||
def add(self, content: str, **kwargs: Any) -> None:
|
||||
# Implement specialized content addition
|
||||
self._process_content(content, **kwargs)
|
||||
|
||||
# Use custom adapter
|
||||
specialized_tool = RagTool(
|
||||
name="Specialized KB",
|
||||
adapter=SpecializedAdapter(config={"mode": "advanced"})
|
||||
)
|
||||
```
|
||||
|
||||
### Configuration Management
|
||||
```python
|
||||
# Configure default EmbedchainAdapter
|
||||
config = {
|
||||
"collection_name": "custom_collection",
|
||||
"embedding": {
|
||||
"model": "sentence-transformers/all-mpnet-base-v2",
|
||||
"dimensions": 768
|
||||
},
|
||||
"chunking": {
|
||||
"chunk_size": 1000,
|
||||
"chunk_overlap": 100
|
||||
}
|
||||
}
|
||||
|
||||
tool = RagTool(config=config)
|
||||
```
|
||||
|
||||
### Error Handling Example
|
||||
```python
|
||||
try:
|
||||
rag_tool = RagTool()
|
||||
|
||||
# Add content
|
||||
rag_tool.add(
|
||||
"Documentation content...",
|
||||
data_type="text"
|
||||
)
|
||||
|
||||
# Query content
|
||||
result = rag_tool.run(
|
||||
query="What are the system requirements?"
|
||||
)
|
||||
print(result)
|
||||
except Exception as e:
|
||||
print(f"Error using knowledge base: {str(e)}")
|
||||
```
|
||||
|
||||
## Notes
|
||||
|
||||
- Base class for RAG tools
|
||||
- Flexible adapter pattern
|
||||
- Default EmbedchainAdapter
|
||||
- Custom adapter support
|
||||
- Content management
|
||||
- Query processing
|
||||
- Error handling
|
||||
- Configuration options
|
||||
- Performance optimization
|
||||
- Memory management
|
||||
229
docs/tools/serpapi-google-search-tool.mdx
Normal file
229
docs/tools/serpapi-google-search-tool.mdx
Normal file
@@ -0,0 +1,229 @@
|
||||
---
|
||||
title: SerpApi Google Search Tool
|
||||
description: A tool for performing Google searches using the SerpApi service
|
||||
---
|
||||
|
||||
# SerpApi Google Search Tool
|
||||
|
||||
The SerpApi Google Search Tool enables performing Google searches using the SerpApi service. It provides location-aware search capabilities with comprehensive result filtering.
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
pip install 'crewai[tools]'
|
||||
pip install serpapi
|
||||
```
|
||||
|
||||
## Prerequisites
|
||||
|
||||
You need a SerpApi API key to use this tool. You can get one from [SerpApi's website](https://serpapi.com/manage-api-key).
|
||||
|
||||
Set your API key as an environment variable:
|
||||
```bash
|
||||
export SERPAPI_API_KEY="your_api_key_here"
|
||||
```
|
||||
|
||||
## Usage
|
||||
|
||||
Here's how to use the SerpApi Google Search Tool:
|
||||
|
||||
```python
|
||||
from crewai import Agent
|
||||
from crewai_tools import SerpApiGoogleSearchTool
|
||||
|
||||
# Initialize the tool
|
||||
search_tool = SerpApiGoogleSearchTool()
|
||||
|
||||
# Create an agent with the tool
|
||||
search_agent = Agent(
|
||||
role='Web Researcher',
|
||||
goal='Find accurate information online',
|
||||
backstory='I help research and analyze online information',
|
||||
tools=[search_tool]
|
||||
)
|
||||
|
||||
# Use in a task
|
||||
task = Task(
|
||||
description="Research recent AI developments",
|
||||
agent=search_agent,
|
||||
context={
|
||||
"search_query": "latest artificial intelligence breakthroughs 2024",
|
||||
"location": "United States" # Optional
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
## Input Schema
|
||||
|
||||
```python
|
||||
class SerpApiGoogleSearchToolSchema(BaseModel):
|
||||
search_query: str # The search query for Google Search
|
||||
location: Optional[str] = None # Optional location for localized results
|
||||
```
|
||||
|
||||
## Function Signatures
|
||||
|
||||
### Base Tool Initialization
|
||||
```python
|
||||
def __init__(self, **kwargs):
|
||||
"""
|
||||
Initialize the SerpApi tool with API credentials.
|
||||
|
||||
Raises:
|
||||
ImportError: If serpapi package is not installed
|
||||
ValueError: If SERPAPI_API_KEY environment variable is not set
|
||||
"""
|
||||
```
|
||||
|
||||
### Search Execution
|
||||
```python
|
||||
def _run(
|
||||
self,
|
||||
**kwargs: Any,
|
||||
) -> dict:
|
||||
"""
|
||||
Execute the Google search.
|
||||
|
||||
Args:
|
||||
search_query (str): The search query
|
||||
location (Optional[str]): Optional location for results
|
||||
|
||||
Returns:
|
||||
dict: Filtered search results from Google
|
||||
|
||||
Raises:
|
||||
HTTPError: If the API request fails
|
||||
"""
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **API Key Management**:
|
||||
- Store the API key securely in environment variables
|
||||
- Never hardcode the API key in your code
|
||||
- Verify API key validity before making requests
|
||||
|
||||
2. **Search Optimization**:
|
||||
- Use specific, targeted search queries
|
||||
- Include relevant keywords and time frames
|
||||
- Leverage location parameter for regional results
|
||||
|
||||
3. **Error Handling**:
|
||||
- Handle API rate limits gracefully
|
||||
- Implement retry logic for failed requests
|
||||
- Validate input parameters before making requests
|
||||
|
||||
## Example Integration
|
||||
|
||||
Here's a complete example showing how to integrate the SerpApi Google Search Tool with CrewAI:
|
||||
|
||||
```python
|
||||
from crewai import Agent, Task, Crew
|
||||
from crewai_tools import SerpApiGoogleSearchTool
|
||||
|
||||
# Initialize the tool
|
||||
search_tool = SerpApiGoogleSearchTool()
|
||||
|
||||
# Create an agent with the tool
|
||||
researcher = Agent(
|
||||
role='Research Analyst',
|
||||
goal='Find and analyze current information',
|
||||
backstory="""I am an expert at finding and analyzing
|
||||
information from various online sources.""",
|
||||
tools=[search_tool]
|
||||
)
|
||||
|
||||
# Create tasks
|
||||
research_task = Task(
|
||||
description="""
|
||||
Research the following topic:
|
||||
1. Latest developments in quantum computing
|
||||
2. Focus on practical applications
|
||||
3. Include major company announcements
|
||||
|
||||
Provide a comprehensive analysis of the findings.
|
||||
""",
|
||||
agent=researcher,
|
||||
context={
|
||||
"search_query": "quantum computing breakthroughs applications companies",
|
||||
"location": "United States"
|
||||
}
|
||||
)
|
||||
|
||||
# Create and run the crew
|
||||
crew = Crew(
|
||||
agents=[researcher],
|
||||
tasks=[research_task]
|
||||
)
|
||||
|
||||
result = crew.kickoff()
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
The tool handles various error scenarios:
|
||||
|
||||
1. **Missing API Key**:
|
||||
```python
|
||||
try:
|
||||
tool = SerpApiGoogleSearchTool()
|
||||
except ValueError as e:
|
||||
print("API key not found. Set SERPAPI_API_KEY environment variable.")
|
||||
```
|
||||
|
||||
2. **API Request Errors**:
|
||||
```python
|
||||
try:
|
||||
results = tool._run(
|
||||
search_query="quantum computing",
|
||||
location="United States"
|
||||
)
|
||||
except HTTPError as e:
|
||||
print(f"API request failed: {str(e)}")
|
||||
```
|
||||
|
||||
3. **Invalid Parameters**:
|
||||
```python
|
||||
try:
|
||||
results = tool._run(
|
||||
search_query="", # Empty query
|
||||
location="Invalid Location"
|
||||
)
|
||||
except ValueError as e:
|
||||
print("Invalid search parameters provided.")
|
||||
```
|
||||
|
||||
## Response Format
|
||||
|
||||
The tool returns a filtered dictionary containing Google search results. Example response structure:
|
||||
|
||||
```python
|
||||
{
|
||||
"organic_results": [
|
||||
{
|
||||
"title": "Page Title",
|
||||
"link": "https://...",
|
||||
"snippet": "Page description or excerpt...",
|
||||
"position": 1
|
||||
}
|
||||
# Additional results...
|
||||
],
|
||||
"knowledge_graph": {
|
||||
"title": "Topic Title",
|
||||
"description": "Topic description...",
|
||||
"source": {
|
||||
"name": "Source Name",
|
||||
"link": "https://..."
|
||||
}
|
||||
},
|
||||
"related_questions": [
|
||||
{
|
||||
"question": "Related question?",
|
||||
"answer": "Answer to related question..."
|
||||
}
|
||||
# Additional related questions...
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
The response is automatically filtered to remove metadata and unnecessary fields, focusing on the most relevant search information. Fields like search metadata, parameters, and pagination are omitted for clarity.
|
||||
225
docs/tools/serpapi-google-shopping-tool.mdx
Normal file
225
docs/tools/serpapi-google-shopping-tool.mdx
Normal file
@@ -0,0 +1,225 @@
|
||||
---
|
||||
title: SerpApi Google Shopping Tool
|
||||
description: A tool for searching Google Shopping using the SerpApi service
|
||||
---
|
||||
|
||||
# SerpApi Google Shopping Tool
|
||||
|
||||
The SerpApi Google Shopping Tool enables searching Google Shopping results using the SerpApi service. It provides location-aware shopping search capabilities with comprehensive result filtering.
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
pip install 'crewai[tools]'
|
||||
pip install serpapi
|
||||
```
|
||||
|
||||
## Prerequisites
|
||||
|
||||
You need a SerpApi API key to use this tool. You can get one from [SerpApi's website](https://serpapi.com/manage-api-key).
|
||||
|
||||
Set your API key as an environment variable:
|
||||
```bash
|
||||
export SERPAPI_API_KEY="your_api_key_here"
|
||||
```
|
||||
|
||||
## Usage
|
||||
|
||||
Here's how to use the SerpApi Google Shopping Tool:
|
||||
|
||||
```python
|
||||
from crewai import Agent
|
||||
from crewai_tools import SerpApiGoogleShoppingTool
|
||||
|
||||
# Initialize the tool
|
||||
shopping_tool = SerpApiGoogleShoppingTool()
|
||||
|
||||
# Create an agent with the tool
|
||||
shopping_agent = Agent(
|
||||
role='Shopping Researcher',
|
||||
goal='Find the best shopping deals',
|
||||
backstory='I help find and analyze shopping options',
|
||||
tools=[shopping_tool]
|
||||
)
|
||||
|
||||
# Use in a task
|
||||
task = Task(
|
||||
description="Find best deals for gaming laptops",
|
||||
agent=shopping_agent,
|
||||
context={
|
||||
"search_query": "gaming laptop deals",
|
||||
"location": "United States" # Optional
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
## Input Schema
|
||||
|
||||
```python
|
||||
class SerpApiGoogleShoppingToolSchema(BaseModel):
|
||||
search_query: str # The search query for Google Shopping
|
||||
location: Optional[str] = None # Optional location for localized results
|
||||
```
|
||||
|
||||
## Function Signatures
|
||||
|
||||
### Base Tool Initialization
|
||||
```python
|
||||
def __init__(self, **kwargs):
|
||||
"""
|
||||
Initialize the SerpApi tool with API credentials.
|
||||
|
||||
Raises:
|
||||
ImportError: If serpapi package is not installed
|
||||
ValueError: If SERPAPI_API_KEY environment variable is not set
|
||||
"""
|
||||
```
|
||||
|
||||
### Search Execution
|
||||
```python
|
||||
def _run(
|
||||
self,
|
||||
**kwargs: Any,
|
||||
) -> dict:
|
||||
"""
|
||||
Execute the Google Shopping search.
|
||||
|
||||
Args:
|
||||
search_query (str): The search query for Google Shopping
|
||||
location (Optional[str]): Optional location for results
|
||||
|
||||
Returns:
|
||||
dict: Filtered search results from Google Shopping
|
||||
|
||||
Raises:
|
||||
HTTPError: If the API request fails
|
||||
"""
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **API Key Management**:
|
||||
- Store the API key securely in environment variables
|
||||
- Never hardcode the API key in your code
|
||||
- Verify API key validity before making requests
|
||||
|
||||
2. **Search Optimization**:
|
||||
- Use specific, targeted search queries
|
||||
- Include relevant product details in queries
|
||||
- Leverage location parameter for regional pricing
|
||||
|
||||
3. **Error Handling**:
|
||||
- Handle API rate limits gracefully
|
||||
- Implement retry logic for failed requests
|
||||
- Validate input parameters before making requests
|
||||
|
||||
## Example Integration
|
||||
|
||||
Here's a complete example showing how to integrate the SerpApi Google Shopping Tool with CrewAI:
|
||||
|
||||
```python
|
||||
from crewai import Agent, Task, Crew
|
||||
from crewai_tools import SerpApiGoogleShoppingTool
|
||||
|
||||
# Initialize the tool
|
||||
shopping_tool = SerpApiGoogleShoppingTool()
|
||||
|
||||
# Create an agent with the tool
|
||||
researcher = Agent(
|
||||
role='Shopping Analyst',
|
||||
goal='Find and analyze the best shopping deals',
|
||||
backstory="""I am an expert at finding the best shopping deals
|
||||
and analyzing product offerings across different regions.""",
|
||||
tools=[shopping_tool]
|
||||
)
|
||||
|
||||
# Create tasks
|
||||
search_task = Task(
|
||||
description="""
|
||||
Research gaming laptops with the following criteria:
|
||||
1. Price range: $800-$1500
|
||||
2. Released in the last year
|
||||
3. Compare prices across different retailers
|
||||
|
||||
Provide a comprehensive analysis of the findings.
|
||||
""",
|
||||
agent=researcher,
|
||||
context={
|
||||
"search_query": "gaming laptop RTX 4060 2023",
|
||||
"location": "United States"
|
||||
}
|
||||
)
|
||||
|
||||
# Create and run the crew
|
||||
crew = Crew(
|
||||
agents=[researcher],
|
||||
tasks=[search_task]
|
||||
)
|
||||
|
||||
result = crew.kickoff()
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
The tool handles various error scenarios:
|
||||
|
||||
1. **Missing API Key**:
|
||||
```python
|
||||
try:
|
||||
tool = SerpApiGoogleShoppingTool()
|
||||
except ValueError as e:
|
||||
print("API key not found. Set SERPAPI_API_KEY environment variable.")
|
||||
```
|
||||
|
||||
2. **API Request Errors**:
|
||||
```python
|
||||
try:
|
||||
results = tool._run(
|
||||
search_query="gaming laptop",
|
||||
location="United States"
|
||||
)
|
||||
except HTTPError as e:
|
||||
print(f"API request failed: {str(e)}")
|
||||
```
|
||||
|
||||
3. **Invalid Parameters**:
|
||||
```python
|
||||
try:
|
||||
results = tool._run(
|
||||
search_query="", # Empty query
|
||||
location="Invalid Location"
|
||||
)
|
||||
except ValueError as e:
|
||||
print("Invalid search parameters provided.")
|
||||
```
|
||||
|
||||
## Response Format
|
||||
|
||||
The tool returns a filtered dictionary containing Google Shopping results. Example response structure:
|
||||
|
||||
```python
|
||||
{
|
||||
"shopping_results": [
|
||||
{
|
||||
"title": "Product Title",
|
||||
"price": "$999.99",
|
||||
"link": "https://...",
|
||||
"source": "Retailer Name",
|
||||
"rating": 4.5,
|
||||
"reviews": 123,
|
||||
"thumbnail": "https://..."
|
||||
}
|
||||
# Additional results...
|
||||
],
|
||||
"organic_results": [
|
||||
{
|
||||
"title": "Related Product",
|
||||
"link": "https://...",
|
||||
"snippet": "Product description..."
|
||||
}
|
||||
# Additional organic results...
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
The response is automatically filtered to remove metadata and unnecessary fields, focusing on the most relevant shopping information.
|
||||
184
docs/tools/serply-job-search-tool.mdx
Normal file
184
docs/tools/serply-job-search-tool.mdx
Normal file
@@ -0,0 +1,184 @@
|
||||
---
|
||||
title: SerplyJobSearchTool
|
||||
description: A tool for searching US job postings using the Serply API
|
||||
icon: briefcase
|
||||
---
|
||||
|
||||
## SerplyJobSearchTool
|
||||
|
||||
The SerplyJobSearchTool provides job search capabilities using the Serply API. It allows for searching job postings in the US market, returning structured information about positions, employers, locations, and remote work status.
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
pip install 'crewai[tools]'
|
||||
```
|
||||
|
||||
## Usage Example
|
||||
|
||||
```python
|
||||
from crewai import Agent
|
||||
from crewai_tools import SerplyJobSearchTool
|
||||
|
||||
# Set environment variable
|
||||
# export SERPLY_API_KEY='your-api-key'
|
||||
|
||||
# Initialize the tool
|
||||
search_tool = SerplyJobSearchTool()
|
||||
|
||||
# Create an agent with the tool
|
||||
job_researcher = Agent(
|
||||
role='Job Market Researcher',
|
||||
goal='Find relevant job opportunities',
|
||||
backstory='Expert at analyzing job market trends and opportunities.',
|
||||
tools=[search_tool],
|
||||
verbose=True
|
||||
)
|
||||
```
|
||||
|
||||
## Input Schema
|
||||
|
||||
```python
|
||||
class SerplyJobSearchToolSchema(BaseModel):
|
||||
search_query: str = Field(
|
||||
description="Mandatory search query for fetching job postings"
|
||||
)
|
||||
```
|
||||
|
||||
## Function Signature
|
||||
|
||||
```python
|
||||
def __init__(self, **kwargs):
|
||||
"""
|
||||
Initialize the job search tool.
|
||||
|
||||
Args:
|
||||
**kwargs: Additional arguments for RagTool initialization
|
||||
|
||||
Note:
|
||||
Requires SERPLY_API_KEY environment variable
|
||||
"""
|
||||
|
||||
def _run(
|
||||
self,
|
||||
**kwargs: Any
|
||||
) -> str:
|
||||
"""
|
||||
Perform job search using Serply API.
|
||||
|
||||
Args:
|
||||
search_query (str): Job search query
|
||||
**kwargs: Additional search parameters
|
||||
|
||||
Returns:
|
||||
str: Formatted string containing job listings with details:
|
||||
- Position
|
||||
- Employer
|
||||
- Location
|
||||
- Link
|
||||
- Highlights
|
||||
- Remote/Hybrid status
|
||||
"""
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. Set up API authentication:
|
||||
```bash
|
||||
export SERPLY_API_KEY='your-serply-api-key'
|
||||
```
|
||||
|
||||
2. Use specific search queries
|
||||
3. Handle potential API errors
|
||||
4. Process structured results effectively
|
||||
5. Consider rate limits and quotas
|
||||
|
||||
## Integration Example
|
||||
|
||||
```python
|
||||
from crewai import Agent, Task, Crew
|
||||
from crewai_tools import SerplyJobSearchTool
|
||||
|
||||
# Initialize tool
|
||||
job_search = SerplyJobSearchTool()
|
||||
|
||||
# Create agent
|
||||
recruiter = Agent(
|
||||
role='Technical Recruiter',
|
||||
goal='Find relevant job opportunities in tech',
|
||||
backstory='Expert at identifying promising tech positions.',
|
||||
tools=[job_search]
|
||||
)
|
||||
|
||||
# Define task
|
||||
search_task = Task(
|
||||
description="""Search for senior software engineer positions
|
||||
with remote work options in the US. Focus on positions
|
||||
requiring Python expertise.""",
|
||||
agent=recruiter
|
||||
)
|
||||
|
||||
# The tool will use:
|
||||
# {
|
||||
# "search_query": "senior software engineer python remote"
|
||||
# }
|
||||
|
||||
# Create crew
|
||||
crew = Crew(
|
||||
agents=[recruiter],
|
||||
tasks=[search_task]
|
||||
)
|
||||
|
||||
# Execute
|
||||
result = crew.kickoff()
|
||||
```
|
||||
|
||||
## Advanced Usage
|
||||
|
||||
### Handling Search Results
|
||||
```python
|
||||
# Example of processing structured results
|
||||
results = search_tool._run(
|
||||
search_query="machine learning engineer"
|
||||
)
|
||||
|
||||
# Results format:
|
||||
"""
|
||||
Search results:
|
||||
Position: Senior Machine Learning Engineer
|
||||
Employer: TechCorp Inc
|
||||
Location: San Francisco, CA
|
||||
Link: https://example.com/job/123
|
||||
Highlights: Python, TensorFlow, 5+ years experience
|
||||
Is Remote: True
|
||||
Is Hybrid: False
|
||||
---
|
||||
Position: ML Engineer
|
||||
...
|
||||
"""
|
||||
```
|
||||
|
||||
### Error Handling
|
||||
```python
|
||||
try:
|
||||
results = search_tool._run(
|
||||
search_query="data scientist"
|
||||
)
|
||||
if not results:
|
||||
print("No jobs found")
|
||||
else:
|
||||
print(results)
|
||||
except Exception as e:
|
||||
print(f"Job search error: {str(e)}")
|
||||
```
|
||||
|
||||
## Notes
|
||||
|
||||
- Requires valid Serply API key
|
||||
- Currently supports US job market only
|
||||
- Returns structured job information
|
||||
- Includes remote/hybrid status
|
||||
- Thread-safe operations
|
||||
- Efficient job search capabilities
|
||||
- Handles API rate limiting automatically
|
||||
- Provides detailed job highlights
|
||||
209
docs/tools/serply-news-search-tool.mdx
Normal file
209
docs/tools/serply-news-search-tool.mdx
Normal file
@@ -0,0 +1,209 @@
|
||||
---
|
||||
title: SerplyNewsSearchTool
|
||||
description: A news article search tool powered by Serply API with configurable search parameters
|
||||
icon: newspaper
|
||||
---
|
||||
|
||||
## SerplyNewsSearchTool
|
||||
|
||||
The SerplyNewsSearchTool provides news article search capabilities using the Serply API. It allows for customizable search parameters including result limits and proxy location for region-specific news results.
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
pip install 'crewai[tools]'
|
||||
```
|
||||
|
||||
## Usage Example
|
||||
|
||||
```python
|
||||
from crewai import Agent
|
||||
from crewai_tools import SerplyNewsSearchTool
|
||||
|
||||
# Set environment variable
|
||||
# export SERPLY_API_KEY='your-api-key'
|
||||
|
||||
# Basic initialization
|
||||
news_tool = SerplyNewsSearchTool()
|
||||
|
||||
# Advanced initialization with custom parameters
|
||||
news_tool = SerplyNewsSearchTool(
|
||||
limit=20, # Return 20 results
|
||||
proxy_location="FR" # Search from France
|
||||
)
|
||||
|
||||
# Create an agent with the tool
|
||||
news_researcher = Agent(
|
||||
role='News Researcher',
|
||||
goal='Find relevant news articles',
|
||||
backstory='Expert at news research and information gathering.',
|
||||
tools=[news_tool],
|
||||
verbose=True
|
||||
)
|
||||
```
|
||||
|
||||
## Input Schema
|
||||
|
||||
```python
|
||||
class SerplyNewsSearchToolSchema(BaseModel):
|
||||
search_query: str = Field(
|
||||
description="Mandatory search query for fetching news articles"
|
||||
)
|
||||
```
|
||||
|
||||
## Function Signature
|
||||
|
||||
```python
|
||||
def __init__(
|
||||
self,
|
||||
limit: Optional[int] = 10,
|
||||
proxy_location: Optional[str] = "US",
|
||||
**kwargs
|
||||
):
|
||||
"""
|
||||
Initialize the news search tool.
|
||||
|
||||
Args:
|
||||
limit (int): Maximum number of results [10-100] (default: 10)
|
||||
proxy_location (str): Region for local news results (default: "US")
|
||||
Options: US, CA, IE, GB, FR, DE, SE, IN, JP, KR, SG, AU, BR
|
||||
**kwargs: Additional arguments for tool creation
|
||||
"""
|
||||
|
||||
def _run(
|
||||
self,
|
||||
**kwargs: Any
|
||||
) -> str:
|
||||
"""
|
||||
Perform news search using Serply API.
|
||||
|
||||
Args:
|
||||
search_query (str): News search query
|
||||
|
||||
Returns:
|
||||
str: Formatted string containing news results:
|
||||
- Title
|
||||
- Link
|
||||
- Source
|
||||
- Published Date
|
||||
"""
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. Set up API authentication:
|
||||
```bash
|
||||
export SERPLY_API_KEY='your-serply-api-key'
|
||||
```
|
||||
|
||||
2. Configure search parameters appropriately:
|
||||
- Set reasonable result limits
|
||||
- Select relevant proxy location for regional news
|
||||
- Consider time sensitivity of news content
|
||||
|
||||
3. Handle potential API errors
|
||||
4. Process structured results effectively
|
||||
5. Consider rate limits and quotas
|
||||
|
||||
## Integration Example
|
||||
|
||||
```python
|
||||
from crewai import Agent, Task, Crew
|
||||
from crewai_tools import SerplyNewsSearchTool
|
||||
|
||||
# Initialize tool with custom configuration
|
||||
news_tool = SerplyNewsSearchTool(
|
||||
limit=15, # 15 results
|
||||
proxy_location="US" # US news sources
|
||||
)
|
||||
|
||||
# Create agent
|
||||
news_analyst = Agent(
|
||||
role='News Analyst',
|
||||
goal='Research breaking news and developments',
|
||||
backstory='Expert at analyzing news trends and developments.',
|
||||
tools=[news_tool]
|
||||
)
|
||||
|
||||
# Define task
|
||||
news_task = Task(
|
||||
description="""Research the latest developments in renewable
|
||||
energy technology and investments, focusing on major
|
||||
announcements and industry trends.""",
|
||||
agent=news_analyst
|
||||
)
|
||||
|
||||
# The tool will use:
|
||||
# {
|
||||
# "search_query": "renewable energy technology investments news"
|
||||
# }
|
||||
|
||||
# Create crew
|
||||
crew = Crew(
|
||||
agents=[news_analyst],
|
||||
tasks=[news_task]
|
||||
)
|
||||
|
||||
# Execute
|
||||
result = crew.kickoff()
|
||||
```
|
||||
|
||||
## Advanced Usage
|
||||
|
||||
### Regional News Configuration
|
||||
```python
|
||||
# French news sources
|
||||
fr_news = SerplyNewsSearchTool(
|
||||
proxy_location="FR",
|
||||
limit=20
|
||||
)
|
||||
|
||||
# Japanese news sources
|
||||
jp_news = SerplyNewsSearchTool(
|
||||
proxy_location="JP",
|
||||
limit=20
|
||||
)
|
||||
```
|
||||
|
||||
### Result Processing
|
||||
```python
|
||||
# Get news results
|
||||
try:
|
||||
results = news_tool._run(
|
||||
search_query="renewable energy investments"
|
||||
)
|
||||
print(results)
|
||||
except Exception as e:
|
||||
print(f"News search error: {str(e)}")
|
||||
```
|
||||
|
||||
### Multiple Region Search
|
||||
```python
|
||||
# Search across multiple regions
|
||||
regions = ["US", "GB", "DE"]
|
||||
all_results = []
|
||||
|
||||
for region in regions:
|
||||
regional_tool = SerplyNewsSearchTool(
|
||||
proxy_location=region,
|
||||
limit=5
|
||||
)
|
||||
results = regional_tool._run(
|
||||
search_query="global tech innovations"
|
||||
)
|
||||
all_results.append(f"Results from {region}:\n{results}")
|
||||
|
||||
combined_results = "\n\n".join(all_results)
|
||||
```
|
||||
|
||||
## Notes
|
||||
|
||||
- Requires valid Serply API key
|
||||
- Supports multiple regions for news sources
|
||||
- Configurable result limits (10-100)
|
||||
- Returns structured news article data
|
||||
- Thread-safe operations
|
||||
- Efficient news search capabilities
|
||||
- Handles API rate limiting automatically
|
||||
- Includes source attribution and publication dates
|
||||
- Follows redirects for final article URLs
|
||||
209
docs/tools/serply-scholar-search-tool.mdx
Normal file
209
docs/tools/serply-scholar-search-tool.mdx
Normal file
@@ -0,0 +1,209 @@
|
||||
---
|
||||
title: SerplyScholarSearchTool
|
||||
description: A scholarly literature search tool powered by Serply API with configurable search parameters
|
||||
icon: book
|
||||
---
|
||||
|
||||
## SerplyScholarSearchTool
|
||||
|
||||
The SerplyScholarSearchTool provides scholarly literature search capabilities using the Serply API. It allows for customizable search parameters including language and proxy location for region-specific academic results.
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
pip install 'crewai[tools]'
|
||||
```
|
||||
|
||||
## Usage Example
|
||||
|
||||
```python
|
||||
from crewai import Agent
|
||||
from crewai_tools import SerplyScholarSearchTool
|
||||
|
||||
# Set environment variable
|
||||
# export SERPLY_API_KEY='your-api-key'
|
||||
|
||||
# Basic initialization
|
||||
scholar_tool = SerplyScholarSearchTool()
|
||||
|
||||
# Advanced initialization with custom parameters
|
||||
scholar_tool = SerplyScholarSearchTool(
|
||||
hl="fr", # French language results
|
||||
proxy_location="FR" # Search from France
|
||||
)
|
||||
|
||||
# Create an agent with the tool
|
||||
academic_researcher = Agent(
|
||||
role='Academic Researcher',
|
||||
goal='Find relevant scholarly literature',
|
||||
backstory='Expert at academic research and literature review.',
|
||||
tools=[scholar_tool],
|
||||
verbose=True
|
||||
)
|
||||
```
|
||||
|
||||
## Input Schema
|
||||
|
||||
```python
|
||||
class SerplyScholarSearchToolSchema(BaseModel):
|
||||
search_query: str = Field(
|
||||
description="Mandatory search query for fetching scholarly literature"
|
||||
)
|
||||
```
|
||||
|
||||
## Function Signature
|
||||
|
||||
```python
|
||||
def __init__(
|
||||
self,
|
||||
hl: str = "us",
|
||||
proxy_location: Optional[str] = "US",
|
||||
**kwargs
|
||||
):
|
||||
"""
|
||||
Initialize the scholar search tool.
|
||||
|
||||
Args:
|
||||
hl (str): Host language code for results (default: "us")
|
||||
Reference: https://developers.google.com/custom-search/docs/xml_results?hl=en#wsInterfaceLanguages
|
||||
proxy_location (str): Region for local results (default: "US")
|
||||
Options: US, CA, IE, GB, FR, DE, SE, IN, JP, KR, SG, AU, BR
|
||||
**kwargs: Additional arguments for tool creation
|
||||
"""
|
||||
|
||||
def _run(
|
||||
self,
|
||||
**kwargs: Any
|
||||
) -> str:
|
||||
"""
|
||||
Perform scholarly literature search using Serply API.
|
||||
|
||||
Args:
|
||||
search_query (str): Academic search query
|
||||
|
||||
Returns:
|
||||
str: Formatted string containing scholarly results:
|
||||
- Title
|
||||
- Link
|
||||
- Description
|
||||
- Citation
|
||||
- Authors
|
||||
"""
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. Set up API authentication:
|
||||
```bash
|
||||
export SERPLY_API_KEY='your-serply-api-key'
|
||||
```
|
||||
|
||||
2. Configure search parameters appropriately:
|
||||
- Use relevant language codes
|
||||
- Select appropriate proxy location
|
||||
- Provide specific academic search terms
|
||||
|
||||
3. Handle potential API errors
|
||||
4. Process structured results effectively
|
||||
5. Consider rate limits and quotas
|
||||
|
||||
## Integration Example
|
||||
|
||||
```python
|
||||
from crewai import Agent, Task, Crew
|
||||
from crewai_tools import SerplyScholarSearchTool
|
||||
|
||||
# Initialize tool with custom configuration
|
||||
scholar_tool = SerplyScholarSearchTool(
|
||||
hl="en", # English results
|
||||
proxy_location="US" # US academic sources
|
||||
)
|
||||
|
||||
# Create agent
|
||||
researcher = Agent(
|
||||
role='Academic Researcher',
|
||||
goal='Research recent academic publications',
|
||||
backstory='Expert at analyzing academic literature and research trends.',
|
||||
tools=[scholar_tool]
|
||||
)
|
||||
|
||||
# Define task
|
||||
research_task = Task(
|
||||
description="""Research recent academic publications on
|
||||
machine learning applications in healthcare, focusing on
|
||||
peer-reviewed articles from the last two years.""",
|
||||
agent=researcher
|
||||
)
|
||||
|
||||
# The tool will use:
|
||||
# {
|
||||
# "search_query": "machine learning healthcare applications"
|
||||
# }
|
||||
|
||||
# Create crew
|
||||
crew = Crew(
|
||||
agents=[researcher],
|
||||
tasks=[research_task]
|
||||
)
|
||||
|
||||
# Execute
|
||||
result = crew.kickoff()
|
||||
```
|
||||
|
||||
## Advanced Usage
|
||||
|
||||
### Language and Region Configuration
|
||||
```python
|
||||
# French academic sources
|
||||
fr_scholar = SerplyScholarSearchTool(
|
||||
hl="fr",
|
||||
proxy_location="FR"
|
||||
)
|
||||
|
||||
# German academic sources
|
||||
de_scholar = SerplyScholarSearchTool(
|
||||
hl="de",
|
||||
proxy_location="DE"
|
||||
)
|
||||
```
|
||||
|
||||
### Result Processing
|
||||
```python
|
||||
try:
|
||||
results = scholar_tool._run(
|
||||
search_query="machine learning healthcare applications"
|
||||
)
|
||||
print(results)
|
||||
except Exception as e:
|
||||
print(f"Scholar search error: {str(e)}")
|
||||
```
|
||||
|
||||
### Citation Analysis
|
||||
```python
|
||||
# Extract and analyze citations
|
||||
def analyze_citations(results):
|
||||
citations = []
|
||||
for result in results.split("---"):
|
||||
if "Cite:" in result:
|
||||
citation = result.split("Cite:")[1].split("\n")[0].strip()
|
||||
citations.append(citation)
|
||||
return citations
|
||||
|
||||
results = scholar_tool._run(
|
||||
search_query="artificial intelligence ethics"
|
||||
)
|
||||
citations = analyze_citations(results)
|
||||
```
|
||||
|
||||
## Notes
|
||||
|
||||
- Requires valid Serply API key
|
||||
- Supports multiple languages and regions
|
||||
- Returns structured academic article data
|
||||
- Includes citation information
|
||||
- Lists all authors of publications
|
||||
- Thread-safe operations
|
||||
- Efficient scholarly search capabilities
|
||||
- Handles API rate limiting automatically
|
||||
- Supports both direct and document links
|
||||
- Provides comprehensive article metadata
|
||||
213
docs/tools/serply-web-search-tool.mdx
Normal file
213
docs/tools/serply-web-search-tool.mdx
Normal file
@@ -0,0 +1,213 @@
|
||||
---
|
||||
title: SerplyWebSearchTool
|
||||
description: A Google search tool powered by Serply API with configurable search parameters
|
||||
icon: search
|
||||
---
|
||||
|
||||
## SerplyWebSearchTool
|
||||
|
||||
The SerplyWebSearchTool provides Google search capabilities using the Serply API. It allows for customizable search parameters including language, result limits, device type, and proxy location for region-specific results.
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
pip install 'crewai[tools]'
|
||||
```
|
||||
|
||||
## Usage Example
|
||||
|
||||
```python
|
||||
from crewai import Agent
|
||||
from crewai_tools import SerplyWebSearchTool
|
||||
|
||||
# Set environment variable
|
||||
# export SERPLY_API_KEY='your-api-key'
|
||||
|
||||
# Basic initialization
|
||||
search_tool = SerplyWebSearchTool()
|
||||
|
||||
# Advanced initialization with custom parameters
|
||||
search_tool = SerplyWebSearchTool(
|
||||
hl="fr", # French language results
|
||||
limit=20, # Return 20 results
|
||||
device_type="mobile", # Mobile search results
|
||||
proxy_location="FR" # Search from France
|
||||
)
|
||||
|
||||
# Create an agent with the tool
|
||||
researcher = Agent(
|
||||
role='Web Researcher',
|
||||
goal='Find relevant information online',
|
||||
backstory='Expert at web research and information gathering.',
|
||||
tools=[search_tool],
|
||||
verbose=True
|
||||
)
|
||||
```
|
||||
|
||||
## Input Schema
|
||||
|
||||
```python
|
||||
class SerplyWebSearchToolSchema(BaseModel):
|
||||
search_query: str = Field(
|
||||
description="Mandatory search query for Google search"
|
||||
)
|
||||
```
|
||||
|
||||
## Function Signature
|
||||
|
||||
```python
|
||||
def __init__(
|
||||
self,
|
||||
hl: str = "us",
|
||||
limit: int = 10,
|
||||
device_type: str = "desktop",
|
||||
proxy_location: str = "US",
|
||||
**kwargs
|
||||
):
|
||||
"""
|
||||
Initialize the Google search tool.
|
||||
|
||||
Args:
|
||||
hl (str): Host language code for results (default: "us")
|
||||
Reference: https://developers.google.com/custom-search/docs/xml_results?hl=en#wsInterfaceLanguages
|
||||
limit (int): Maximum number of results [10-100] (default: 10)
|
||||
device_type (str): "desktop" or "mobile" results (default: "desktop")
|
||||
proxy_location (str): Region for local results (default: "US")
|
||||
Options: US, CA, IE, GB, FR, DE, SE, IN, JP, KR, SG, AU, BR
|
||||
**kwargs: Additional arguments for tool creation
|
||||
"""
|
||||
|
||||
def _run(
|
||||
self,
|
||||
**kwargs: Any
|
||||
) -> str:
|
||||
"""
|
||||
Perform Google search using Serply API.
|
||||
|
||||
Args:
|
||||
search_query (str): Search query
|
||||
|
||||
Returns:
|
||||
str: Formatted string containing search results:
|
||||
- Title
|
||||
- Link
|
||||
- Description
|
||||
"""
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. Set up API authentication:
|
||||
```bash
|
||||
export SERPLY_API_KEY='your-serply-api-key'
|
||||
```
|
||||
|
||||
2. Configure search parameters appropriately:
|
||||
- Use relevant language codes
|
||||
- Set reasonable result limits
|
||||
- Choose appropriate device type
|
||||
- Select relevant proxy location
|
||||
|
||||
3. Handle potential API errors
|
||||
4. Process structured results effectively
|
||||
5. Consider rate limits and quotas
|
||||
|
||||
## Integration Example
|
||||
|
||||
```python
|
||||
from crewai import Agent, Task, Crew
|
||||
from crewai_tools import SerplyWebSearchTool
|
||||
|
||||
# Initialize tool with custom configuration
|
||||
search_tool = SerplyWebSearchTool(
|
||||
hl="en", # English results
|
||||
limit=15, # 15 results
|
||||
device_type="desktop",
|
||||
proxy_location="US"
|
||||
)
|
||||
|
||||
# Create agent
|
||||
researcher = Agent(
|
||||
role='Web Researcher',
|
||||
goal='Research emerging technology trends',
|
||||
backstory='Expert at finding and analyzing tech trends.',
|
||||
tools=[search_tool]
|
||||
)
|
||||
|
||||
# Define task
|
||||
research_task = Task(
|
||||
description="""Research the latest developments in artificial
|
||||
intelligence and machine learning, focusing on practical
|
||||
applications in business.""",
|
||||
agent=researcher
|
||||
)
|
||||
|
||||
# The tool will use:
|
||||
# {
|
||||
# "search_query": "latest AI ML developments business applications"
|
||||
# }
|
||||
|
||||
# Create crew
|
||||
crew = Crew(
|
||||
agents=[researcher],
|
||||
tasks=[research_task]
|
||||
)
|
||||
|
||||
# Execute
|
||||
result = crew.kickoff()
|
||||
```
|
||||
|
||||
## Advanced Usage
|
||||
|
||||
### Language and Region Configuration
|
||||
```python
|
||||
# French search from France
|
||||
fr_search = SerplyWebSearchTool(
|
||||
hl="fr",
|
||||
proxy_location="FR"
|
||||
)
|
||||
|
||||
# Japanese search from Japan
|
||||
jp_search = SerplyWebSearchTool(
|
||||
hl="ja",
|
||||
proxy_location="JP"
|
||||
)
|
||||
```
|
||||
|
||||
### Device-Specific Results
|
||||
```python
|
||||
# Mobile results
|
||||
mobile_search = SerplyWebSearchTool(
|
||||
device_type="mobile",
|
||||
limit=20
|
||||
)
|
||||
|
||||
# Desktop results
|
||||
desktop_search = SerplyWebSearchTool(
|
||||
device_type="desktop",
|
||||
limit=20
|
||||
)
|
||||
```
|
||||
|
||||
### Error Handling
|
||||
```python
|
||||
try:
|
||||
results = search_tool._run(
|
||||
search_query="artificial intelligence trends"
|
||||
)
|
||||
print(results)
|
||||
except Exception as e:
|
||||
print(f"Search error: {str(e)}")
|
||||
```
|
||||
|
||||
## Notes
|
||||
|
||||
|
||||
- Requires valid Serply API key
|
||||
- Supports multiple languages and regions
|
||||
- Configurable result limits (10-100)
|
||||
- Device-specific search results
|
||||
- Thread-safe operations
|
||||
- Efficient search capabilities
|
||||
- Handles API rate limiting automatically
|
||||
- Returns structured search results
|
||||
201
docs/tools/serply-webpage-to-markdown-tool.mdx
Normal file
201
docs/tools/serply-webpage-to-markdown-tool.mdx
Normal file
@@ -0,0 +1,201 @@
|
||||
---
|
||||
title: SerplyWebpageToMarkdownTool
|
||||
description: A tool for converting web pages to markdown format using Serply API
|
||||
icon: markdown
|
||||
---
|
||||
|
||||
## SerplyWebpageToMarkdownTool
|
||||
|
||||
The SerplyWebpageToMarkdownTool converts web pages to markdown format using the Serply API, making it easier for LLMs to process and understand web content. It supports configurable proxy locations for region-specific access.
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
pip install 'crewai[tools]'
|
||||
```
|
||||
|
||||
## Usage Example
|
||||
|
||||
```python
|
||||
from crewai import Agent
|
||||
from crewai_tools import SerplyWebpageToMarkdownTool
|
||||
|
||||
# Set environment variable
|
||||
# export SERPLY_API_KEY='your-api-key'
|
||||
|
||||
# Basic initialization
|
||||
markdown_tool = SerplyWebpageToMarkdownTool()
|
||||
|
||||
# Advanced initialization with custom parameters
|
||||
markdown_tool = SerplyWebpageToMarkdownTool(
|
||||
proxy_location="FR" # Access from France
|
||||
)
|
||||
|
||||
# Create an agent with the tool
|
||||
web_processor = Agent(
|
||||
role='Web Content Processor',
|
||||
goal='Convert web content to markdown format',
|
||||
backstory='Expert at processing and formatting web content.',
|
||||
tools=[markdown_tool],
|
||||
verbose=True
|
||||
)
|
||||
```
|
||||
|
||||
## Input Schema
|
||||
|
||||
```python
|
||||
class SerplyWebpageToMarkdownToolSchema(BaseModel):
|
||||
url: str = Field(
|
||||
description="Mandatory URL of the webpage to convert to markdown"
|
||||
)
|
||||
```
|
||||
|
||||
## Function Signature
|
||||
|
||||
```python
|
||||
def __init__(
|
||||
self,
|
||||
proxy_location: Optional[str] = "US",
|
||||
**kwargs
|
||||
):
|
||||
"""
|
||||
Initialize the webpage to markdown conversion tool.
|
||||
|
||||
Args:
|
||||
proxy_location (str): Region for accessing the webpage (default: "US")
|
||||
Options: US, CA, IE, GB, FR, DE, SE, IN, JP, KR, SG, AU, BR
|
||||
**kwargs: Additional arguments for tool creation
|
||||
"""
|
||||
|
||||
def _run(
|
||||
self,
|
||||
**kwargs: Any
|
||||
) -> str:
|
||||
"""
|
||||
Convert webpage to markdown using Serply API.
|
||||
|
||||
Args:
|
||||
url (str): URL of the webpage to convert
|
||||
|
||||
Returns:
|
||||
str: Markdown formatted content of the webpage
|
||||
"""
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. Set up API authentication:
|
||||
```bash
|
||||
export SERPLY_API_KEY='your-serply-api-key'
|
||||
```
|
||||
|
||||
2. Configure proxy location appropriately:
|
||||
- Select relevant region for access
|
||||
- Consider content accessibility
|
||||
- Handle region-specific content
|
||||
|
||||
3. Handle potential API errors
|
||||
4. Process markdown output effectively
|
||||
5. Consider rate limits and quotas
|
||||
|
||||
## Integration Example
|
||||
|
||||
```python
|
||||
from crewai import Agent, Task, Crew
|
||||
from crewai_tools import SerplyWebpageToMarkdownTool
|
||||
|
||||
# Initialize tool with custom configuration
|
||||
markdown_tool = SerplyWebpageToMarkdownTool(
|
||||
proxy_location="US" # US access point
|
||||
)
|
||||
|
||||
# Create agent
|
||||
processor = Agent(
|
||||
role='Content Processor',
|
||||
goal='Convert web content to structured markdown',
|
||||
backstory='Expert at processing web content into structured formats.',
|
||||
tools=[markdown_tool]
|
||||
)
|
||||
|
||||
# Define task
|
||||
conversion_task = Task(
|
||||
description="""Convert the documentation page at
|
||||
https://example.com/docs into markdown format for
|
||||
further processing.""",
|
||||
agent=processor
|
||||
)
|
||||
|
||||
# The tool will use:
|
||||
# {
|
||||
# "url": "https://example.com/docs"
|
||||
# }
|
||||
|
||||
# Create crew
|
||||
crew = Crew(
|
||||
agents=[processor],
|
||||
tasks=[conversion_task]
|
||||
)
|
||||
|
||||
# Execute
|
||||
result = crew.kickoff()
|
||||
```
|
||||
|
||||
## Advanced Usage
|
||||
|
||||
### Regional Access Configuration
|
||||
```python
|
||||
# European access points
|
||||
fr_processor = SerplyWebpageToMarkdownTool(
|
||||
proxy_location="FR"
|
||||
)
|
||||
|
||||
de_processor = SerplyWebpageToMarkdownTool(
|
||||
proxy_location="DE"
|
||||
)
|
||||
```
|
||||
|
||||
### Error Handling
|
||||
```python
|
||||
try:
|
||||
markdown_content = markdown_tool._run(
|
||||
url="https://example.com/page"
|
||||
)
|
||||
print(markdown_content)
|
||||
except Exception as e:
|
||||
print(f"Conversion error: {str(e)}")
|
||||
```
|
||||
|
||||
### Content Processing
|
||||
```python
|
||||
# Process multiple pages
|
||||
urls = [
|
||||
"https://example.com/page1",
|
||||
"https://example.com/page2",
|
||||
"https://example.com/page3"
|
||||
]
|
||||
|
||||
markdown_contents = []
|
||||
for url in urls:
|
||||
try:
|
||||
content = markdown_tool._run(url=url)
|
||||
markdown_contents.append(content)
|
||||
except Exception as e:
|
||||
print(f"Error processing {url}: {str(e)}")
|
||||
continue
|
||||
|
||||
# Combine contents
|
||||
combined_markdown = "\n\n---\n\n".join(markdown_contents)
|
||||
```
|
||||
|
||||
## Notes
|
||||
|
||||
- Requires valid Serply API key
|
||||
- Supports multiple proxy locations
|
||||
- Returns markdown-formatted content
|
||||
- Simplifies web content for LLM processing
|
||||
- Thread-safe operations
|
||||
- Efficient content conversion
|
||||
- Handles API rate limiting automatically
|
||||
- Preserves content structure in markdown
|
||||
- Supports various webpage formats
|
||||
- Makes web content more accessible to AI agents
|
||||
158
docs/tools/txt-search-tool.mdx
Normal file
158
docs/tools/txt-search-tool.mdx
Normal file
@@ -0,0 +1,158 @@
|
||||
---
|
||||
title: TXTSearchTool
|
||||
description: A semantic search tool for text files using RAG capabilities
|
||||
icon: magnifying-glass-document
|
||||
---
|
||||
|
||||
## TXTSearchTool
|
||||
|
||||
The TXTSearchTool is a specialized Retrieval-Augmented Generation (RAG) tool that enables semantic search within text files. It inherits from the base RagTool class and provides both fixed and dynamic text file searching capabilities.
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
pip install 'crewai[tools]'
|
||||
```
|
||||
|
||||
## Usage Example
|
||||
|
||||
```python
|
||||
from crewai import Agent
|
||||
from crewai_tools import TXTSearchTool
|
||||
|
||||
# Method 1: Dynamic file path
|
||||
txt_search = TXTSearchTool()
|
||||
|
||||
# Method 2: Fixed file path
|
||||
fixed_txt_search = TXTSearchTool(txt="path/to/fixed/document.txt")
|
||||
|
||||
# Create an agent with the tool
|
||||
researcher = Agent(
|
||||
role='Research Assistant',
|
||||
goal='Search through text documents semantically',
|
||||
backstory='Expert at finding relevant information in documents using semantic search.',
|
||||
tools=[txt_search],
|
||||
verbose=True
|
||||
)
|
||||
```
|
||||
|
||||
## Input Schema
|
||||
|
||||
The tool supports two input schemas depending on initialization:
|
||||
|
||||
### Dynamic File Path Schema
|
||||
```python
|
||||
class TXTSearchToolSchema(BaseModel):
|
||||
search_query: str # The semantic search query
|
||||
txt: str # Path to the text file to search
|
||||
```
|
||||
|
||||
### Fixed File Path Schema
|
||||
```python
|
||||
class FixedTXTSearchToolSchema(BaseModel):
|
||||
search_query: str # The semantic search query
|
||||
```
|
||||
|
||||
## Function Signature
|
||||
|
||||
```python
|
||||
def __init__(self, txt: Optional[str] = None, **kwargs):
|
||||
"""
|
||||
Initialize the TXT search tool.
|
||||
|
||||
Args:
|
||||
txt (Optional[str]): Fixed path to a text file. If provided, the tool will only search this file.
|
||||
**kwargs: Additional arguments passed to the parent RagTool
|
||||
"""
|
||||
|
||||
def _run(self, search_query: str, **kwargs: Any) -> Any:
|
||||
"""
|
||||
Perform semantic search on the text file.
|
||||
|
||||
Args:
|
||||
search_query (str): The semantic search query
|
||||
**kwargs: Additional arguments (including 'txt' for dynamic file path)
|
||||
|
||||
Returns:
|
||||
str: Relevant text passages based on semantic search
|
||||
"""
|
||||
```
|
||||
|
||||
## Best Practices

1. Choose initialization method based on use case:
   - Use fixed file path when repeatedly searching the same document
   - Use dynamic file path when searching different documents
2. Write clear, semantic search queries
3. Handle potential file access errors in agent prompts
4. Consider memory usage for large text files

## Integration Example

```python
from crewai import Agent, Task, Crew
from crewai_tools import TXTSearchTool

# Example 1: Fixed document search
documentation_search = TXTSearchTool(txt="api_documentation.txt")

# Example 2: Dynamic document search
flexible_search = TXTSearchTool()

# Create agents
doc_analyst = Agent(
    role='Documentation Analyst',
    goal='Find relevant API documentation sections',
    backstory='Expert at analyzing technical documentation.',
    tools=[documentation_search]
)

file_analyst = Agent(
    role='File Analyst',
    goal='Search through various text files',
    backstory='Specialist in finding information across multiple documents.',
    tools=[flexible_search]
)

# Define tasks
fixed_search_task = Task(
    description="""Find all API endpoints related to user authentication
    in the documentation.""",
    agent=doc_analyst
)

# The agent will use:
# {
#   "search_query": "user authentication API endpoints"
# }

dynamic_search_task = Task(
    description="""Search through the logs.txt file for any database
    connection errors.""",
    agent=file_analyst
)

# The agent will use:
# {
#   "search_query": "database connection errors",
#   "txt": "logs.txt"
# }

# Create crew
crew = Crew(
    agents=[doc_analyst, file_analyst],
    tasks=[fixed_search_task, dynamic_search_task]
)

# Execute
result = crew.kickoff()
```

## Notes

- Inherits from RagTool for semantic search capabilities
- Supports both fixed and dynamic text file paths
- Uses embeddings for semantic search
- Optimized for text file analysis
- Thread-safe operations
- Automatically handles file loading and embedding
docs/tools/youtube-channel-search-tool.mdx (new file, 159 lines)
@@ -0,0 +1,159 @@
---
title: YoutubeChannelSearchTool
description: A semantic search tool for YouTube channel content using RAG capabilities
icon: youtube
---

## YoutubeChannelSearchTool

The YoutubeChannelSearchTool is a specialized Retrieval-Augmented Generation (RAG) tool that enables semantic search within YouTube channel content. It inherits from the base RagTool class and provides both fixed and dynamic YouTube channel searching capabilities.

## Installation

```bash
pip install 'crewai[tools]'
```

## Usage Example

```python
from crewai import Agent
from crewai_tools import YoutubeChannelSearchTool

# Method 1: Dynamic channel handle
youtube_search = YoutubeChannelSearchTool()

# Method 2: Fixed channel handle
fixed_channel_search = YoutubeChannelSearchTool(youtube_channel_handle="@example_channel")

# Create an agent with the tool
researcher = Agent(
    role='Content Researcher',
    goal='Search through YouTube channel content semantically',
    backstory='Expert at finding relevant information in YouTube content.',
    tools=[youtube_search],
    verbose=True
)
```

## Input Schema

The tool supports two input schemas depending on initialization:

### Dynamic Channel Schema
```python
class YoutubeChannelSearchToolSchema(BaseModel):
    search_query: str  # The semantic search query
    youtube_channel_handle: str  # YouTube channel handle (with or without @)
```

### Fixed Channel Schema
```python
class FixedYoutubeChannelSearchToolSchema(BaseModel):
    search_query: str  # The semantic search query
```

## Function Signature

```python
def __init__(self, youtube_channel_handle: Optional[str] = None, **kwargs):
    """
    Initialize the YouTube channel search tool.

    Args:
        youtube_channel_handle (Optional[str]): Fixed channel handle. If provided,
            the tool will only search this channel.
        **kwargs: Additional arguments passed to the parent RagTool
    """

def _run(self, search_query: str, **kwargs: Any) -> Any:
    """
    Perform semantic search on the YouTube channel content.

    Args:
        search_query (str): The semantic search query
        **kwargs: Additional arguments (including 'youtube_channel_handle' for dynamic mode)

    Returns:
        str: Relevant content from the YouTube channel based on semantic search
    """
```

## Best Practices

1. Choose initialization method based on use case:
   - Use fixed channel handle when repeatedly searching the same channel
   - Use dynamic handle when searching different channels
2. Write clear, semantic search queries
3. Channel handles can be provided with or without '@' prefix
4. Consider content availability and channel size

## Integration Example

```python
from crewai import Agent, Task, Crew
from crewai_tools import YoutubeChannelSearchTool

# Example 1: Fixed channel search
tech_channel_search = YoutubeChannelSearchTool(youtube_channel_handle="@TechChannel")

# Example 2: Dynamic channel search
flexible_search = YoutubeChannelSearchTool()

# Create agents
tech_analyst = Agent(
    role='Tech Content Analyst',
    goal='Find relevant tech tutorials and explanations',
    backstory='Expert at analyzing technical YouTube content.',
    tools=[tech_channel_search]
)

content_researcher = Agent(
    role='Content Researcher',
    goal='Search across multiple YouTube channels',
    backstory='Specialist in finding information across various channels.',
    tools=[flexible_search]
)

# Define tasks
fixed_search_task = Task(
    description="""Find all tutorials related to machine learning
    basics in the channel.""",
    agent=tech_analyst
)

# The agent will use:
# {
#   "search_query": "machine learning basics tutorial"
# }

dynamic_search_task = Task(
    description="""Search through the @AIResearch channel for
    content about neural networks.""",
    agent=content_researcher
)

# The agent will use:
# {
#   "search_query": "neural networks explanation",
#   "youtube_channel_handle": "@AIResearch"
# }

# Create crew
crew = Crew(
    agents=[tech_analyst, content_researcher],
    tasks=[fixed_search_task, dynamic_search_task]
)

# Execute
result = crew.kickoff()
```

## Notes

- Inherits from RagTool for semantic search capabilities
- Supports both fixed and dynamic YouTube channel handles
- Automatically adds '@' prefix to channel handles if missing
- Uses embeddings for semantic search
- Thread-safe operations
- Automatically handles YouTube content loading and embedding
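For a quick sanity check outside of a crew, the channel tool can also be called directly. This is a hedged sketch rather than part of the documented diff above: the channel handle is hypothetical, and the '@' prefix is intentionally omitted since the notes state it is added automatically when missing:

```python
from crewai_tools import YoutubeChannelSearchTool

# Dynamic mode: the handle is supplied per call
channel_tool = YoutubeChannelSearchTool()
results = channel_tool.run(
    search_query="agent orchestration tutorial",
    youtube_channel_handle="example_channel",  # hypothetical handle, treated as "@example_channel"
)
print(results)
```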
docs/tools/youtube-video-search-tool.mdx (new file, 216 lines)
@@ -0,0 +1,216 @@
---
title: YoutubeVideoSearchTool
description: A tool for semantic search within YouTube video content using RAG capabilities
icon: video
---

## YoutubeVideoSearchTool

The YoutubeVideoSearchTool enables semantic search capabilities for YouTube video content using Retrieval-Augmented Generation (RAG). It processes video content and allows searching through transcripts and metadata using natural language queries.

## Installation

```bash
pip install 'crewai[tools]'
```

## Usage Example

```python
from crewai import Agent
from crewai_tools import YoutubeVideoSearchTool

# Method 1: Initialize with specific video
video_tool = YoutubeVideoSearchTool(
    youtube_video_url="https://www.youtube.com/watch?v=example"
)

# Method 2: Initialize without video (specify at runtime)
flexible_video_tool = YoutubeVideoSearchTool()

# Create an agent with the tool
researcher = Agent(
    role='Video Researcher',
    goal='Search and analyze video content',
    backstory='Expert at finding relevant information in videos.',
    tools=[video_tool],
    verbose=True
)
```

## Input Schema

### Fixed Video Schema (when URL provided during initialization)
```python
class FixedYoutubeVideoSearchToolSchema(BaseModel):
    search_query: str = Field(
        description="Mandatory search query you want to use to search the Youtube Video content"
    )
```

### Flexible Video Schema (when URL provided at runtime)
```python
class YoutubeVideoSearchToolSchema(FixedYoutubeVideoSearchToolSchema):
    youtube_video_url: str = Field(
        description="Mandatory youtube_video_url path you want to search"
    )
```

## Function Signature

```python
def __init__(
    self,
    youtube_video_url: Optional[str] = None,
    **kwargs
):
    """
    Initialize the YouTube video search tool.

    Args:
        youtube_video_url (Optional[str]): URL of YouTube video (optional)
        **kwargs: Additional arguments for RAG tool configuration
    """

def _run(
    self,
    search_query: str,
    **kwargs: Any
) -> str:
    """
    Execute semantic search on video content.

    Args:
        search_query (str): Query to search in the video
        **kwargs: Additional arguments including youtube_video_url if not initialized

    Returns:
        str: Relevant content from the video matching the query
    """
```

## Best Practices

1. Video URL Management:
   - Use complete YouTube URLs
   - Verify video accessibility
   - Handle region restrictions

2. Search Optimization:
   - Use specific, focused queries
   - Consider video context
   - Test with sample queries first

3. Performance Considerations:
   - Pre-initialize for repeated searches
   - Handle long videos appropriately
   - Monitor processing time

4. Error Handling:
   - Verify video availability
   - Handle unavailable videos
   - Manage API limitations

## Integration Example

```python
from crewai import Agent, Task, Crew
from crewai_tools import YoutubeVideoSearchTool

# Initialize tool with specific video
video_tool = YoutubeVideoSearchTool(
    youtube_video_url="https://www.youtube.com/watch?v=example"
)

# Create agent
researcher = Agent(
    role='Video Researcher',
    goal='Extract insights from video content',
    backstory='Expert at analyzing video content.',
    tools=[video_tool]
)

# Define task
research_task = Task(
    description="""Find all mentions of machine learning
    applications from the video content.""",
    agent=researcher
)

# The tool will use:
# {
#   "search_query": "machine learning applications"
# }

# Create crew
crew = Crew(
    agents=[researcher],
    tasks=[research_task]
)

# Execute
result = crew.kickoff()
```

## Advanced Usage

### Dynamic Video Selection
```python
# Initialize without video URL
flexible_tool = YoutubeVideoSearchTool()

# Search different videos
tech_results = flexible_tool.run(
    search_query="quantum computing",
    youtube_video_url="https://youtube.com/watch?v=tech123"
)

science_results = flexible_tool.run(
    search_query="particle physics",
    youtube_video_url="https://youtube.com/watch?v=science456"
)
```

### Multiple Video Analysis
```python
# Create tools for different videos
tech_tool = YoutubeVideoSearchTool(
    youtube_video_url="https://youtube.com/watch?v=tech123"
)
science_tool = YoutubeVideoSearchTool(
    youtube_video_url="https://youtube.com/watch?v=science456"
)

# Create agent with multiple tools
analyst = Agent(
    role='Content Analyst',
    goal='Cross-reference multiple videos',
    tools=[tech_tool, science_tool]
)
```

### Error Handling Example
```python
try:
    video_tool = YoutubeVideoSearchTool()
    results = video_tool.run(
        search_query="key concepts",
        youtube_video_url="https://youtube.com/watch?v=example"
    )
    print(results)
except Exception as e:
    print(f"Error processing video: {str(e)}")
```

## Notes

- Inherits from RagTool
- Uses embedchain for processing
- Supports semantic search
- Dynamic video specification
- Efficient content retrieval
- Thread-safe operations
- Maintains search context
- Handles video transcripts
- Processes video metadata
- Memory-efficient processing
@@ -8,27 +8,38 @@ authors = [
    { name = "Joao Moura", email = "joao@crewai.com" }
]
dependencies = [
    # Core Dependencies
    "pydantic>=2.4.2",
    "openai>=1.13.3",
    "litellm>=1.44.22",
    "instructor>=1.3.3",

    # Text Processing
    "pdfplumber>=0.11.4",
    "regex>=2024.9.11",

    # Telemetry and Monitoring
    "opentelemetry-api>=1.22.0",
    "opentelemetry-sdk>=1.22.0",
    "opentelemetry-exporter-otlp-proto-http>=1.22.0",
    "instructor>=1.3.3",
    "regex>=2024.9.11",
    "click>=8.1.7",

    # Data Handling
    "chromadb>=0.5.23",
    "openpyxl>=3.1.5",
    "pyvis>=0.3.2",

    # Authentication and Security
    "auth0-python>=4.7.1",
    "python-dotenv>=1.0.0",

    # Configuration and Utils
    "click>=8.1.7",
    "appdirs>=1.4.4",
    "jsonref>=1.1.0",
    "json-repair>=0.25.2",
    "auth0-python>=4.7.1",
    "litellm>=1.44.22",
    "pyvis>=0.3.2",
    "uv>=0.4.25",
    "tomli-w>=1.1.0",
    "tomli>=2.0.2",
    "chromadb>=0.5.23",
    "pdfplumber>=0.11.4",
    "openpyxl>=3.1.5",
    "blinker>=1.9.0",
]

@@ -39,6 +50,9 @@ Repository = "https://github.com/crewAIInc/crewAI"

[project.optional-dependencies]
tools = ["crewai-tools>=0.17.0"]
embeddings = [
    "tiktoken~=0.7.0"
]
agentops = ["agentops>=0.3.0"]
fastembed = ["fastembed>=0.4.1"]
pdfplumber = [
@@ -51,6 +65,9 @@ openpyxl = [
    "openpyxl>=3.1.5",
]
mem0 = ["mem0ai>=0.1.29"]
docling = [
    "docling>=2.12.0",
]

[tool.uv]
dev-dependencies = [
@@ -64,7 +81,6 @@ dev-dependencies = [
    "mkdocs-material-extensions>=1.3.1",
    "pillow>=10.2.0",
    "cairosvg>=2.7.1",
    "crewai-tools>=0.17.0",
    "pytest>=8.0.0",
    "pytest-vcr>=1.0.2",
    "python-dotenv>=1.0.0",

@@ -17,6 +17,7 @@ from crewai.memory.contextual.contextual_memory import ContextualMemory
from crewai.task import Task
from crewai.tools import BaseTool
from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.tools.base_tool import Tool
from crewai.utilities import Converter, Prompts
from crewai.utilities.constants import TRAINED_AGENTS_DATA_FILE, TRAINING_DATA_FILE
from crewai.utilities.converter import generate_model_description
@@ -114,6 +115,10 @@ class Agent(BaseAgent):
        default=2,
        description="Maximum number of retries for an agent to execute a task when an error occurs.",
    )
    multimodal: bool = Field(
        default=False,
        description="Whether the agent is multimodal.",
    )
    code_execution_mode: Literal["safe", "unsafe"] = Field(
        default="safe",
        description="Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct execution).",
@@ -406,6 +411,10 @@ class Agent(BaseAgent):
        tools = agent_tools.tools()
        return tools

    def get_multimodal_tools(self) -> List[Tool]:
        from crewai.tools.agent_tools.add_image_tool import AddImageTool
        return [AddImageTool()]

    def get_code_execution_tools(self):
        try:
            from crewai_tools import CodeInterpreterTool

@@ -143,10 +143,20 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
            tool_result = self._execute_tool_and_check_finality(
                formatted_answer
            )
            if self.step_callback:
                self.step_callback(tool_result)

            formatted_answer.text += f"\nObservation: {tool_result.result}"
            # Directly append the result to the messages if the
            # tool is "Add image to content" in case of multimodal
            # agents
            if formatted_answer.tool == self._i18n.tools("add_image")["name"]:
                self.messages.append(tool_result.result)
                continue

            else:
                if self.step_callback:
                    self.step_callback(tool_result)

                formatted_answer.text += f"\nObservation: {tool_result.result}"

            formatted_answer.result = tool_result.result
            if tool_result.result_as_answer:
                return AgentFinish(

@@ -18,3 +18,6 @@ test = "{{folder_name}}.main:test"
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.crewai]
type = "crew"

@@ -5,7 +5,7 @@ from pydantic import BaseModel

from crewai.flow.flow import Flow, listen, start

from .crews.poem_crew.poem_crew import PoemCrew
from {{folder_name}}.crews.poem_crew.poem_crew import PoemCrew


class PoemState(BaseModel):

@@ -15,3 +15,6 @@ plot = "{{folder_name}}.main:plot"
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.crewai]
type = "flow"

@@ -8,3 +8,5 @@ dependencies = [
    "crewai[tools]>=0.86.0"
]

[tool.crewai]
type = "tool"

@@ -35,6 +35,7 @@ from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.task_output import TaskOutput
from crewai.telemetry import Telemetry
from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.tools.base_tool import Tool
from crewai.types.usage_metrics import UsageMetrics
from crewai.utilities import I18N, FileHandler, Logger, RPMController
from crewai.utilities.constants import TRAINING_DATA_FILE
@@ -533,9 +534,6 @@ class Crew(BaseModel):
            if not agent.function_calling_llm:  # type: ignore # "BaseAgent" has no attribute "function_calling_llm"
                agent.function_calling_llm = self.function_calling_llm  # type: ignore # "BaseAgent" has no attribute "function_calling_llm"

            if agent.allow_code_execution:  # type: ignore # BaseAgent" has no attribute "allow_code_execution"
                agent.tools += agent.get_code_execution_tools()  # type: ignore # "BaseAgent" has no attribute "get_code_execution_tools"; maybe "get_delegation_tools"?

            if not agent.step_callback:  # type: ignore # "BaseAgent" has no attribute "step_callback"
                agent.step_callback = self.step_callback  # type: ignore # "BaseAgent" has no attribute "step_callback"

@@ -672,7 +670,6 @@ class Crew(BaseModel):
                )
                manager.tools = []
                raise Exception("Manager agent should not have tools")
            manager.tools = self.manager_agent.get_delegation_tools(self.agents)
        else:
            self.manager_llm = (
                getattr(self.manager_llm, "model_name", None)
@@ -684,6 +681,7 @@ class Crew(BaseModel):
                goal=i18n.retrieve("hierarchical_manager_agent", "goal"),
                backstory=i18n.retrieve("hierarchical_manager_agent", "backstory"),
                tools=AgentTools(agents=self.agents).tools(),
                allow_delegation=True,
                llm=self.manager_llm,
                verbose=self.verbose,
            )
@@ -726,7 +724,14 @@ class Crew(BaseModel):
                    f"No agent available for task: {task.description}. Ensure that either the task has an assigned agent or a manager agent is provided."
                )

            self._prepare_agent_tools(task)
            # Determine which tools to use - task tools take precedence over agent tools
            tools_for_task = task.tools or agent_to_use.tools or []
            tools_for_task = self._prepare_tools(
                agent_to_use,
                task,
                tools_for_task
            )

            self._log_task_start(task, agent_to_use.role)

            if isinstance(task, ConditionalTask):
@@ -743,7 +748,7 @@ class Crew(BaseModel):
                future = task.execute_async(
                    agent=agent_to_use,
                    context=context,
                    tools=agent_to_use.tools,
                    tools=tools_for_task,
                )
                futures.append((task, future, task_index))
            else:
@@ -755,7 +760,7 @@ class Crew(BaseModel):
                task_output = task.execute_sync(
                    agent=agent_to_use,
                    context=context,
                    tools=agent_to_use.tools,
                    tools=tools_for_task,
                )
                task_outputs = [task_output]
                self._process_task_result(task, task_output)
@@ -792,45 +797,67 @@ class Crew(BaseModel):
            return skipped_task_output
        return None

    def _prepare_agent_tools(self, task: Task):
        if self.process == Process.hierarchical:
            if self.manager_agent:
                self._update_manager_tools(task)
            else:
                raise ValueError("Manager agent is required for hierarchical process.")
        elif task.agent and task.agent.allow_delegation:
            self._add_delegation_tools(task)
    def _prepare_tools(self, agent: BaseAgent, task: Task, tools: List[Tool]) -> List[Tool]:
        # Add delegation tools if agent allows delegation
        if agent.allow_delegation:
            if self.process == Process.hierarchical:
                if self.manager_agent:
                    tools = self._update_manager_tools(task, tools)
                else:
                    raise ValueError("Manager agent is required for hierarchical process.")

            elif agent and agent.allow_delegation:
                tools = self._add_delegation_tools(task, tools)

        # Add code execution tools if agent allows code execution
        if agent.allow_code_execution:
            tools = self._add_code_execution_tools(agent, tools)

        if agent and agent.multimodal:
            tools = self._add_multimodal_tools(agent, tools)

        return tools

    def _get_agent_to_use(self, task: Task) -> Optional[BaseAgent]:
        if self.process == Process.hierarchical:
            return self.manager_agent
        return task.agent

    def _add_delegation_tools(self, task: Task):
    def _merge_tools(self, existing_tools: List[Tool], new_tools: List[Tool]) -> List[Tool]:
        """Merge new tools into existing tools list, avoiding duplicates by tool name."""
        if not new_tools:
            return existing_tools

        # Create mapping of tool names to new tools
        new_tool_map = {tool.name: tool for tool in new_tools}

        # Remove any existing tools that will be replaced
        tools = [tool for tool in existing_tools if tool.name not in new_tool_map]

        # Add all new tools
        tools.extend(new_tools)

        return tools

    def _inject_delegation_tools(self, tools: List[Tool], task_agent: BaseAgent, agents: List[BaseAgent]):
        delegation_tools = task_agent.get_delegation_tools(agents)
        return self._merge_tools(tools, delegation_tools)

    def _add_multimodal_tools(self, agent: BaseAgent, tools: List[Tool]):
        multimodal_tools = agent.get_multimodal_tools()
        return self._merge_tools(tools, multimodal_tools)

    def _add_code_execution_tools(self, agent: BaseAgent, tools: List[Tool]):
        code_tools = agent.get_code_execution_tools()
        return self._merge_tools(tools, code_tools)

    def _add_delegation_tools(self, task: Task, tools: List[Tool]):
        agents_for_delegation = [agent for agent in self.agents if agent != task.agent]
        if len(self.agents) > 1 and len(agents_for_delegation) > 0 and task.agent:
            delegation_tools = task.agent.get_delegation_tools(agents_for_delegation)

            # Add tools if they are not already in task.tools
            for new_tool in delegation_tools:
                # Find the index of the tool with the same name
                existing_tool_index = next(
                    (
                        index
                        for index, tool in enumerate(task.tools or [])
                        if tool.name == new_tool.name
                    ),
                    None,
                )
                if not task.tools:
                    task.tools = []

                if existing_tool_index is not None:
                    # Replace the existing tool
                    task.tools[existing_tool_index] = new_tool
                else:
                    # Add the new tool
                    task.tools.append(new_tool)
            if not tools:
                tools = []
            tools = self._inject_delegation_tools(tools, task.agent, agents_for_delegation)
        return tools

    def _log_task_start(self, task: Task, role: str = "None"):
        if self.output_log_file:
@@ -838,14 +865,13 @@ class Crew(BaseModel):
                task_name=task.name, task=task.description, agent=role, status="started"
            )

    def _update_manager_tools(self, task: Task):
    def _update_manager_tools(self, task: Task, tools: List[Tool]):
        if self.manager_agent:
            if task.agent:
                self.manager_agent.tools = task.agent.get_delegation_tools([task.agent])
                tools = self._inject_delegation_tools(tools, task.agent, [task.agent])
            else:
                self.manager_agent.tools = self.manager_agent.get_delegation_tools(
                    self.agents
                )
                tools = self._inject_delegation_tools(tools, self.manager_agent, self.agents)
        return tools

    def _get_context(self, task: Task, task_outputs: List[TaskOutput]):
        context = (

@@ -30,7 +30,47 @@ from crewai.telemetry import Telemetry
T = TypeVar("T", bound=Union[BaseModel, Dict[str, Any]])


def start(condition=None):
def start(condition: Optional[Union[str, dict, Callable]] = None) -> Callable:
    """
    Marks a method as a flow's starting point.

    This decorator designates a method as an entry point for the flow execution.
    It can optionally specify conditions that trigger the start based on other
    method executions.

    Parameters
    ----------
    condition : Optional[Union[str, dict, Callable]], optional
        Defines when the start method should execute. Can be:
        - str: Name of a method that triggers this start
        - dict: Contains "type" ("AND"/"OR") and "methods" (list of triggers)
        - Callable: A method reference that triggers this start
        Default is None, meaning unconditional start.

    Returns
    -------
    Callable
        A decorator function that marks the method as a flow start point.

    Raises
    ------
    ValueError
        If the condition format is invalid.

    Examples
    --------
    >>> @start()  # Unconditional start
    >>> def begin_flow(self):
    ...     pass

    >>> @start("method_name")  # Start after specific method
    >>> def conditional_start(self):
    ...     pass

    >>> @start(and_("method1", "method2"))  # Start after multiple methods
    >>> def complex_start(self):
    ...     pass
    """
    def decorator(func):
        func.__is_start_method__ = True
        if condition is not None:
@@ -55,8 +95,42 @@ def start(condition=None):

    return decorator

def listen(condition: Union[str, dict, Callable]) -> Callable:
    """
    Creates a listener that executes when specified conditions are met.

def listen(condition):
    This decorator sets up a method to execute in response to other method
    executions in the flow. It supports both simple and complex triggering
    conditions.

    Parameters
    ----------
    condition : Union[str, dict, Callable]
        Specifies when the listener should execute. Can be:
        - str: Name of a method that triggers this listener
        - dict: Contains "type" ("AND"/"OR") and "methods" (list of triggers)
        - Callable: A method reference that triggers this listener

    Returns
    -------
    Callable
        A decorator function that sets up the method as a listener.

    Raises
    ------
    ValueError
        If the condition format is invalid.

    Examples
    --------
    >>> @listen("process_data")  # Listen to single method
    >>> def handle_processed_data(self):
    ...     pass

    >>> @listen(or_("success", "failure"))  # Listen to multiple methods
    >>> def handle_completion(self):
    ...     pass
    """
    def decorator(func):
        if isinstance(condition, str):
            func.__trigger_methods__ = [condition]
@@ -80,16 +154,103 @@ def listen(condition):
    return decorator


def router(method):
def router(condition: Union[str, dict, Callable]) -> Callable:
    """
    Creates a routing method that directs flow execution based on conditions.

    This decorator marks a method as a router, which can dynamically determine
    the next steps in the flow based on its return value. Routers are triggered
    by specified conditions and can return constants that determine which path
    the flow should take.

    Parameters
    ----------
    condition : Union[str, dict, Callable]
        Specifies when the router should execute. Can be:
        - str: Name of a method that triggers this router
        - dict: Contains "type" ("AND"/"OR") and "methods" (list of triggers)
        - Callable: A method reference that triggers this router

    Returns
    -------
    Callable
        A decorator function that sets up the method as a router.

    Raises
    ------
    ValueError
        If the condition format is invalid.

    Examples
    --------
    >>> @router("check_status")
    >>> def route_based_on_status(self):
    ...     if self.state.status == "success":
    ...         return SUCCESS
    ...     return FAILURE

    >>> @router(and_("validate", "process"))
    >>> def complex_routing(self):
    ...     if all([self.state.valid, self.state.processed]):
    ...         return CONTINUE
    ...     return STOP
    """
    def decorator(func):
        func.__is_router__ = True
        func.__router_for__ = method.__name__
        if isinstance(condition, str):
            func.__trigger_methods__ = [condition]
            func.__condition_type__ = "OR"
        elif (
            isinstance(condition, dict)
            and "type" in condition
            and "methods" in condition
        ):
            func.__trigger_methods__ = condition["methods"]
            func.__condition_type__ = condition["type"]
        elif callable(condition) and hasattr(condition, "__name__"):
            func.__trigger_methods__ = [condition.__name__]
            func.__condition_type__ = "OR"
        else:
            raise ValueError(
                "Condition must be a method, string, or a result of or_() or and_()"
            )
        return func

    return decorator

def or_(*conditions: Union[str, dict, Callable]) -> dict:
    """
    Combines multiple conditions with OR logic for flow control.

def or_(*conditions):
    Creates a condition that is satisfied when any of the specified conditions
    are met. This is used with @start, @listen, or @router decorators to create
    complex triggering conditions.

    Parameters
    ----------
    *conditions : Union[str, dict, Callable]
        Variable number of conditions that can be:
        - str: Method names
        - dict: Existing condition dictionaries
        - Callable: Method references

    Returns
    -------
    dict
        A condition dictionary with format:
        {"type": "OR", "methods": list_of_method_names}

    Raises
    ------
    ValueError
        If any condition is invalid.

    Examples
    --------
    >>> @listen(or_("success", "timeout"))
    >>> def handle_completion(self):
    ...     pass
    """
    methods = []
    for condition in conditions:
        if isinstance(condition, dict) and "methods" in condition:
@@ -103,7 +264,39 @@ def or_(*conditions):
    return {"type": "OR", "methods": methods}


def and_(*conditions):
def and_(*conditions: Union[str, dict, Callable]) -> dict:
    """
    Combines multiple conditions with AND logic for flow control.

    Creates a condition that is satisfied only when all specified conditions
    are met. This is used with @start, @listen, or @router decorators to create
    complex triggering conditions.

    Parameters
    ----------
    *conditions : Union[str, dict, Callable]
        Variable number of conditions that can be:
        - str: Method names
        - dict: Existing condition dictionaries
        - Callable: Method references

    Returns
    -------
    dict
        A condition dictionary with format:
        {"type": "AND", "methods": list_of_method_names}

    Raises
    ------
    ValueError
        If any condition is invalid.

    Examples
    --------
    >>> @listen(and_("validated", "processed"))
    >>> def handle_complete_data(self):
    ...     pass
    """
    methods = []
    for condition in conditions:
        if isinstance(condition, dict) and "methods" in condition:
@@ -123,8 +316,8 @@ class FlowMeta(type):

        start_methods = []
        listeners = {}
        routers = {}
        router_paths = {}
        routers = set()

        for attr_name, attr_value in dct.items():
            if hasattr(attr_value, "__is_start_method__"):
@@ -137,18 +330,11 @@ class FlowMeta(type):
                methods = attr_value.__trigger_methods__
                condition_type = getattr(attr_value, "__condition_type__", "OR")
                listeners[attr_name] = (condition_type, methods)

            elif hasattr(attr_value, "__is_router__"):
                routers[attr_value.__router_for__] = attr_name
                possible_returns = get_possible_return_constants(attr_value)
                if possible_returns:
                    router_paths[attr_name] = possible_returns

                # Register router as a listener to its triggering method
                trigger_method_name = attr_value.__router_for__
                methods = [trigger_method_name]
                condition_type = "OR"
                listeners[attr_name] = (condition_type, methods)
                if hasattr(attr_value, "__is_router__") and attr_value.__is_router__:
                    routers.add(attr_name)
                    possible_returns = get_possible_return_constants(attr_value)
                    if possible_returns:
                        router_paths[attr_name] = possible_returns

        setattr(cls, "_start_methods", start_methods)
        setattr(cls, "_listeners", listeners)
@@ -163,7 +349,7 @@ class Flow(Generic[T], metaclass=FlowMeta):

    _start_methods: List[str] = []
    _listeners: Dict[str, tuple[str, List[str]]] = {}
    _routers: Dict[str, str] = {}
    _routers: Set[str] = set()
    _router_paths: Dict[str, List[str]] = {}
    initial_state: Union[Type[T], T, None] = None
    event_emitter = Signal("event_emitter")
@@ -210,20 +396,10 @@ class Flow(Generic[T], metaclass=FlowMeta):
        return self._method_outputs

    def _initialize_state(self, inputs: Dict[str, Any]) -> None:
        """
        Initializes or updates the state with the provided inputs.

        Args:
            inputs: Dictionary of inputs to initialize or update the state.

        Raises:
            ValueError: If inputs do not match the structured state model.
            TypeError: If state is neither a BaseModel instance nor a dictionary.
        """
        if isinstance(self._state, BaseModel):
            # Structured state management
            # Structured state
            try:
                # Define a function to create the dynamic class

                def create_model_with_extra_forbid(
                    base_model: Type[BaseModel],
                ) -> Type[BaseModel]:
@@ -233,34 +409,20 @@ class Flow(Generic[T], metaclass=FlowMeta):

                    return ModelWithExtraForbid

                # Create the dynamic class
                ModelWithExtraForbid = create_model_with_extra_forbid(
                    self._state.__class__
                )

                # Create a new instance using the combined state and inputs
                self._state = cast(
                    T, ModelWithExtraForbid(**{**self._state.model_dump(), **inputs})
                )

            except ValidationError as e:
                raise ValueError(f"Invalid inputs for structured state: {e}") from e
        elif isinstance(self._state, dict):
            # Unstructured state management
            self._state.update(inputs)
        else:
            raise TypeError("State must be a BaseModel instance or a dictionary.")

    def kickoff(self, inputs: Optional[Dict[str, Any]] = None) -> Any:
        """
        Starts the execution of the flow synchronously.

        Args:
            inputs: Optional dictionary of inputs to initialize or update the state.

        Returns:
            The final output from the flow execution.
        """
        self.event_emitter.send(
            self,
            event=FlowStartedEvent(
@@ -274,15 +436,6 @@ class Flow(Generic[T], metaclass=FlowMeta):
        return asyncio.run(self.kickoff_async())

    async def kickoff_async(self, inputs: Optional[Dict[str, Any]] = None) -> Any:
        """
        Starts the execution of the flow asynchronously.

        Args:
            inputs: Optional dictionary of inputs to initialize or update the state.

        Returns:
            The final output from the flow execution.
        """
        if not self._start_methods:
            raise ValueError("No start method defined")

@@ -290,16 +443,12 @@ class Flow(Generic[T], metaclass=FlowMeta):
            self.__class__.__name__, list(self._methods.keys())
        )

        # Create tasks for all start methods
        tasks = [
            self._execute_start_method(start_method)
            for start_method in self._start_methods
        ]

        # Run all start methods concurrently
        await asyncio.gather(*tasks)

        # Determine the final output (from the last executed method)
        final_output = self._method_outputs[-1] if self._method_outputs else None

        self.event_emitter.send(
@@ -310,10 +459,26 @@ class Flow(Generic[T], metaclass=FlowMeta):
                result=final_output,
            ),
        )

        return final_output

    async def _execute_start_method(self, start_method_name: str) -> None:
        """
        Executes a flow's start method and its triggered listeners.

        This internal method handles the execution of methods marked with @start
        decorator and manages the subsequent chain of listener executions.

        Parameters
        ----------
        start_method_name : str
            The name of the start method to execute.

        Notes
        -----
        - Executes the start method and captures its result
        - Triggers execution of any listeners waiting on this start method
        - Part of the flow's initialization sequence
        """
        result = await self._execute_method(
            start_method_name, self._methods[start_method_name]
        )
@@ -327,51 +492,146 @@ class Flow(Generic[T], metaclass=FlowMeta):
            if asyncio.iscoroutinefunction(method)
            else method(*args, **kwargs)
        )
        self._method_outputs.append(result)  # Store the output

        # Track method execution counts
        self._method_outputs.append(result)
        self._method_execution_counts[method_name] = (
            self._method_execution_counts.get(method_name, 0) + 1
        )

        return result

    async def _execute_listeners(self, trigger_method: str, result: Any) -> None:
        listener_tasks = []
        """
        Executes all listeners and routers triggered by a method completion.

        if trigger_method in self._routers:
            router_method = self._methods[self._routers[trigger_method]]
            path = await self._execute_method(
                self._routers[trigger_method], router_method
        This internal method manages the execution flow by:
        1. First executing all triggered routers sequentially
        2. Then executing all triggered listeners in parallel

        Parameters
        ----------
        trigger_method : str
            The name of the method that triggered these listeners.
        result : Any
            The result from the triggering method, passed to listeners
            that accept parameters.

        Notes
        -----
        - Routers are executed sequentially to maintain flow control
        - Each router's result becomes the new trigger_method
        - Normal listeners are executed in parallel for efficiency
        - Listeners can receive the trigger method's result as a parameter
        """
        # First, handle routers repeatedly until no router triggers anymore
        while True:
            routers_triggered = self._find_triggered_methods(
                trigger_method, router_only=True
            )
            trigger_method = path
            if not routers_triggered:
                break
            for router_name in routers_triggered:
                await self._execute_single_listener(router_name, result)
                # After executing router, the router's result is the path
                # The last router executed sets the trigger_method
                # The router result is the last element in self._method_outputs
                trigger_method = self._method_outputs[-1]

        # Now that no more routers are triggered by current trigger_method,
        # execute normal listeners
        listeners_triggered = self._find_triggered_methods(
            trigger_method, router_only=False
        )
        if listeners_triggered:
            tasks = [
                self._execute_single_listener(listener_name, result)
                for listener_name in listeners_triggered
            ]
            await asyncio.gather(*tasks)

    def _find_triggered_methods(
        self, trigger_method: str, router_only: bool
    ) -> List[str]:
        """
        Finds all methods that should be triggered based on conditions.

        This internal method evaluates both OR and AND conditions to determine
        which methods should be executed next in the flow.

        Parameters
        ----------
        trigger_method : str
            The name of the method that just completed execution.
        router_only : bool
            If True, only consider router methods.
            If False, only consider non-router methods.

        Returns
        -------
        List[str]
            Names of methods that should be triggered.

        Notes
        -----
        - Handles both OR and AND conditions:
          * OR: Triggers if any condition is met
          * AND: Triggers only when all conditions are met
        - Maintains state for AND conditions using _pending_and_listeners
        - Separates router and normal listener evaluation
        """
        triggered = []
        for listener_name, (condition_type, methods) in self._listeners.items():
            is_router = listener_name in self._routers

            if router_only != is_router:
                continue

            if condition_type == "OR":
                # If the trigger_method matches any in methods, run this
                if trigger_method in methods:
                    # Schedule the listener without preventing re-execution
                    listener_tasks.append(
                        self._execute_single_listener(listener_name, result)
                    )
                    triggered.append(listener_name)
            elif condition_type == "AND":
                # Initialize pending methods for this listener if not already done
                if listener_name not in self._pending_and_listeners:
                    self._pending_and_listeners[listener_name] = set(methods)
                # Remove the trigger method from pending methods
                self._pending_and_listeners[listener_name].discard(trigger_method)
                if trigger_method in self._pending_and_listeners[listener_name]:
                    self._pending_and_listeners[listener_name].discard(trigger_method)

                if not self._pending_and_listeners[listener_name]:
                    # All required methods have been executed
                    listener_tasks.append(
                        self._execute_single_listener(listener_name, result)
                    )
                    triggered.append(listener_name)
                    # Reset pending methods for this listener
                    self._pending_and_listeners.pop(listener_name, None)

        # Run all listener tasks concurrently and wait for them to complete
        if listener_tasks:
            await asyncio.gather(*listener_tasks)
        return triggered

    async def _execute_single_listener(self, listener_name: str, result: Any) -> None:
        """
        Executes a single listener method with proper event handling.

        This internal method manages the execution of an individual listener,
        including parameter inspection, event emission, and error handling.

        Parameters
        ----------
        listener_name : str
            The name of the listener method to execute.
        result : Any
            The result from the triggering method, which may be passed
            to the listener if it accepts parameters.

        Notes
        -----
        - Inspects method signature to determine if it accepts the trigger result
        - Emits events for method execution start and finish
        - Handles errors gracefully with detailed logging
        - Recursively triggers listeners of this listener
        - Supports both parameterized and parameter-less listeners

        Error Handling
        --------------
        Catches and logs any exceptions during execution, preventing
        individual listener failures from breaking the entire flow.
        """
        try:
            method = self._methods[listener_name]

@@ -386,17 +646,13 @@ class Flow(Generic[T], metaclass=FlowMeta):

            sig = inspect.signature(method)
            params = list(sig.parameters.values())

            # Exclude 'self' parameter
            method_params = [p for p in params if p.name != "self"]

            if method_params:
                # If listener expects parameters, pass the result
                listener_result = await self._execute_method(
                    listener_name, method, result
                )
            else:
                # If listener does not expect parameters, call without arguments
                listener_result = await self._execute_method(listener_name, method)

            self.event_emitter.send(
@@ -408,8 +664,9 @@ class Flow(Generic[T], metaclass=FlowMeta):
                ),
            )

            # Execute listeners of this listener
            # Execute listeners (and possibly routers) of this listener
            await self._execute_listeners(listener_name, listener_result)

        except Exception as e:
            print(
                f"[Flow._execute_single_listener] Error in method {listener_name}: {e}"
@@ -422,5 +679,4 @@ class Flow(Generic[T], metaclass=FlowMeta):
        self._telemetry.flow_plotting_span(
            self.__class__.__name__, list(self._methods.keys())
        )

        plot_flow(self, filename)

@@ -1,12 +1,14 @@
# flow_visualizer.py

import os
from pathlib import Path

from pyvis.network import Network

from crewai.flow.config import COLORS, NODE_STYLES
from crewai.flow.html_template_handler import HTMLTemplateHandler
from crewai.flow.legend_generator import generate_legend_items_html, get_legend_items
from crewai.flow.path_utils import safe_path_join, validate_path_exists
from crewai.flow.utils import calculate_node_levels
from crewai.flow.visualization_utils import (
    add_edges,
@@ -16,89 +18,209 @@ from crewai.flow.visualization_utils import (
|
||||
|
||||
|
||||
class FlowPlot:
|
||||
"""Handles the creation and rendering of flow visualization diagrams."""
|
||||
|
||||
def __init__(self, flow):
|
||||
"""
|
||||
Initialize FlowPlot with a flow object.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
flow : Flow
|
||||
A Flow instance to visualize.
|
||||
|
||||
Raises
|
||||
------
|
||||
ValueError
|
||||
If flow object is invalid or missing required attributes.
|
||||
"""
|
||||
if not hasattr(flow, '_methods'):
|
||||
raise ValueError("Invalid flow object: missing '_methods' attribute")
|
||||
if not hasattr(flow, '_listeners'):
|
||||
raise ValueError("Invalid flow object: missing '_listeners' attribute")
|
||||
if not hasattr(flow, '_start_methods'):
|
||||
raise ValueError("Invalid flow object: missing '_start_methods' attribute")
|
||||
|
||||
self.flow = flow
|
||||
self.colors = COLORS
|
||||
self.node_styles = NODE_STYLES
|
||||
|
||||
def plot(self, filename):
|
||||
net = Network(
|
||||
directed=True,
|
||||
height="750px",
|
||||
width="100%",
|
||||
bgcolor=self.colors["bg"],
|
||||
layout=None,
|
||||
)
|
||||
|
||||
# Set options to disable physics
|
||||
net.set_options(
|
||||
"""
|
||||
var options = {
|
||||
"nodes": {
|
||||
"font": {
|
||||
"multi": "html"
|
||||
}
|
||||
},
|
||||
"physics": {
|
||||
"enabled": false
|
||||
}
|
||||
}
|
||||
"""
|
||||
)
|
||||
Generate and save an HTML visualization of the flow.
|
||||
|
||||
# Calculate levels for nodes
|
||||
node_levels = calculate_node_levels(self.flow)
|
||||
Parameters
|
||||
----------
|
||||
filename : str
|
||||
Name of the output file (without extension).
|
||||
|
||||
# Compute positions
|
||||
node_positions = compute_positions(self.flow, node_levels)
|
||||
Raises
|
||||
------
|
||||
ValueError
|
||||
If filename is invalid or network generation fails.
|
||||
IOError
|
||||
If file operations fail or visualization cannot be generated.
|
||||
RuntimeError
|
||||
If network visualization generation fails.
|
||||
"""
|
||||
if not filename or not isinstance(filename, str):
|
||||
raise ValueError("Filename must be a non-empty string")
|
||||
|
||||
try:
|
||||
# Initialize network
|
||||
net = Network(
|
||||
directed=True,
|
||||
height="750px",
|
||||
width="100%",
|
||||
bgcolor=self.colors["bg"],
|
||||
layout=None,
|
||||
)
|
||||
|
||||
# Add nodes to the network
|
||||
add_nodes_to_network(net, self.flow, node_positions, self.node_styles)
|
||||
# Set options to disable physics
|
||||
net.set_options(
|
||||
"""
|
||||
var options = {
|
||||
"nodes": {
|
||||
"font": {
|
||||
"multi": "html"
|
||||
}
|
||||
},
|
||||
"physics": {
|
||||
"enabled": false
|
||||
}
|
||||
}
|
||||
"""
|
||||
)
|
||||
|
||||
# Add edges to the network
|
||||
add_edges(net, self.flow, node_positions, self.colors)
|
||||
# Calculate levels for nodes
|
||||
try:
|
||||
node_levels = calculate_node_levels(self.flow)
|
||||
except Exception as e:
|
||||
raise ValueError(f"Failed to calculate node levels: {str(e)}")
|
||||
|
||||
network_html = net.generate_html()
|
||||
final_html_content = self._generate_final_html(network_html)
|
||||
# Compute positions
|
||||
try:
|
||||
node_positions = compute_positions(self.flow, node_levels)
|
||||
except Exception as e:
|
||||
raise ValueError(f"Failed to compute node positions: {str(e)}")
|
||||
|
||||
# Save the final HTML content to the file
|
||||
with open(f"{filename}.html", "w", encoding="utf-8") as f:
|
||||
f.write(final_html_content)
|
||||
print(f"Plot saved as {filename}.html")
|
||||
# Add nodes to the network
|
||||
try:
|
||||
add_nodes_to_network(net, self.flow, node_positions, self.node_styles)
|
||||
except Exception as e:
|
||||
raise RuntimeError(f"Failed to add nodes to network: {str(e)}")
|
||||
|
||||
self._cleanup_pyvis_lib()
|
||||
# Add edges to the network
|
||||
try:
|
||||
add_edges(net, self.flow, node_positions, self.colors)
|
||||
except Exception as e:
|
||||
raise RuntimeError(f"Failed to add edges to network: {str(e)}")
|
||||
|
||||
# Generate HTML
|
||||
try:
|
||||
network_html = net.generate_html()
|
||||
final_html_content = self._generate_final_html(network_html)
|
||||
except Exception as e:
|
||||
raise RuntimeError(f"Failed to generate network visualization: {str(e)}")
|
||||
|
||||
# Save the final HTML content to the file
|
||||
try:
|
||||
with open(f"{filename}.html", "w", encoding="utf-8") as f:
|
||||
f.write(final_html_content)
|
||||
print(f"Plot saved as {filename}.html")
|
||||
except IOError as e:
|
||||
raise IOError(f"Failed to save flow visualization to {filename}.html: {str(e)}")
|
||||
|
||||
except (ValueError, RuntimeError, IOError) as e:
|
||||
raise e
|
||||
except Exception as e:
|
||||
raise RuntimeError(f"Unexpected error during flow visualization: {str(e)}")
|
||||
finally:
|
||||
self._cleanup_pyvis_lib()
|
||||
|
||||
def _generate_final_html(self, network_html):
|
||||
# Extract just the body content from the generated HTML
|
||||
current_dir = os.path.dirname(__file__)
|
||||
template_path = os.path.join(
|
||||
current_dir, "assets", "crewai_flow_visual_template.html"
|
||||
)
|
||||
logo_path = os.path.join(current_dir, "assets", "crewai_logo.svg")
|
||||
"""
|
||||
Generate the final HTML content with network visualization and legend.
|
||||
|
||||
html_handler = HTMLTemplateHandler(template_path, logo_path)
|
||||
network_body = html_handler.extract_body_content(network_html)
|
||||
Parameters
|
||||
----------
|
||||
network_html : str
|
||||
HTML content generated by pyvis Network.
|
||||
|
||||
# Generate the legend items HTML
|
||||
legend_items = get_legend_items(self.colors)
|
||||
legend_items_html = generate_legend_items_html(legend_items)
|
||||
final_html_content = html_handler.generate_final_html(
|
||||
network_body, legend_items_html
|
||||
)
|
||||
return final_html_content
|
||||
Returns
|
||||
-------
|
||||
str
|
||||
Complete HTML content with styling and legend.
|
||||
|
||||
Raises
|
||||
------
|
||||
IOError
|
||||
If template or logo files cannot be accessed.
|
||||
ValueError
|
||||
If network_html is invalid.
|
||||
"""
|
||||
if not network_html:
|
||||
raise ValueError("Invalid network HTML content")
|
||||
|
||||
try:
|
||||
# Extract just the body content from the generated HTML
|
||||
current_dir = os.path.dirname(__file__)
|
||||
template_path = safe_path_join("assets", "crewai_flow_visual_template.html", root=current_dir)
|
||||
logo_path = safe_path_join("assets", "crewai_logo.svg", root=current_dir)
|
||||
|
||||
if not os.path.exists(template_path):
|
||||
raise IOError(f"Template file not found: {template_path}")
|
||||
if not os.path.exists(logo_path):
|
||||
raise IOError(f"Logo file not found: {logo_path}")
|
||||
|
||||
html_handler = HTMLTemplateHandler(template_path, logo_path)
|
||||
network_body = html_handler.extract_body_content(network_html)
|
||||
|
||||
# Generate the legend items HTML
|
||||
legend_items = get_legend_items(self.colors)
|
||||
legend_items_html = generate_legend_items_html(legend_items)
|
||||
final_html_content = html_handler.generate_final_html(
|
||||
network_body, legend_items_html
|
||||
)
|
||||
return final_html_content
|
||||
except Exception as e:
|
||||
raise IOError(f"Failed to generate visualization HTML: {str(e)}")
|
||||
|
||||
def _cleanup_pyvis_lib(self):
|
||||
# Clean up the generated lib folder
|
||||
lib_folder = os.path.join(os.getcwd(), "lib")
|
||||
"""
|
||||
Clean up the generated lib folder from pyvis.
|
||||
|
||||
This method safely removes the temporary lib directory created by pyvis
|
||||
during network visualization generation.
|
||||
"""
|
||||
try:
|
||||
lib_folder = safe_path_join("lib", root=os.getcwd())
|
||||
if os.path.exists(lib_folder) and os.path.isdir(lib_folder):
|
||||
import shutil
|
||||
|
||||
shutil.rmtree(lib_folder)
|
||||
except ValueError as e:
|
||||
print(f"Error validating lib folder path: {e}")
|
||||
except Exception as e:
|
||||
print(f"Error cleaning up {lib_folder}: {e}")
|
||||
print(f"Error cleaning up lib folder: {e}")
|
||||
|
||||
|
||||
def plot_flow(flow, filename="flow_plot"):
    """
    Convenience function to create and save a flow visualization.

    Parameters
    ----------
    flow : Flow
        Flow instance to visualize.
    filename : str, optional
        Output filename without extension, by default "flow_plot".

    Raises
    ------
    ValueError
        If flow object or filename is invalid.
    IOError
        If file operations fail.
    """
    visualizer = FlowPlot(flow)
    visualizer.plot(filename)
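For reference, a minimal usage sketch of the plotting entry points above (illustrative, not part of this diff; it assumes `Flow.plot()` delegates to the `FlowPlot`/`plot_flow` helpers shown here):

```python
from crewai.flow.flow import Flow, listen, start


class TinyFlow(Flow):
    @start()
    def begin(self):
        return "hello"

    @listen(begin)
    def finish(self, result):
        return result.upper()


# Writes an interactive tiny_flow.html next to the script.
TinyFlow().plot("tiny_flow")
```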
@@ -1,26 +1,53 @@
|
||||
import base64
|
||||
import re
|
||||
from pathlib import Path
|
||||
|
||||
from crewai.flow.path_utils import safe_path_join, validate_path_exists
|
||||
|
||||
|
||||
class HTMLTemplateHandler:
|
||||
"""Handles HTML template processing and generation for flow visualization diagrams."""
|
||||
|
||||
def __init__(self, template_path, logo_path):
|
||||
self.template_path = template_path
|
||||
self.logo_path = logo_path
|
||||
"""
|
||||
Initialize HTMLTemplateHandler with validated template and logo paths.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
template_path : str
|
||||
Path to the HTML template file.
|
||||
logo_path : str
|
||||
Path to the logo image file.
|
||||
|
||||
Raises
|
||||
------
|
||||
ValueError
|
||||
If template or logo paths are invalid or files don't exist.
|
||||
"""
|
||||
try:
|
||||
self.template_path = validate_path_exists(template_path, "file")
|
||||
self.logo_path = validate_path_exists(logo_path, "file")
|
||||
except ValueError as e:
|
||||
raise ValueError(f"Invalid template or logo path: {e}")
|
||||
|
||||
def read_template(self):
|
||||
"""Read and return the HTML template file contents."""
|
||||
with open(self.template_path, "r", encoding="utf-8") as f:
|
||||
return f.read()
|
||||
|
||||
def encode_logo(self):
|
||||
"""Convert the logo SVG file to base64 encoded string."""
|
||||
with open(self.logo_path, "rb") as logo_file:
|
||||
logo_svg_data = logo_file.read()
|
||||
return base64.b64encode(logo_svg_data).decode("utf-8")
|
||||
|
||||
def extract_body_content(self, html):
|
||||
"""Extract and return content between body tags from HTML string."""
|
||||
match = re.search("<body.*?>(.*?)</body>", html, re.DOTALL)
|
||||
return match.group(1) if match else ""
|
||||
|
||||
def generate_legend_items_html(self, legend_items):
|
||||
"""Generate HTML markup for the legend items."""
|
||||
legend_items_html = ""
|
||||
for item in legend_items:
|
||||
if "border" in item:
|
||||
@@ -48,6 +75,7 @@ class HTMLTemplateHandler:
|
||||
return legend_items_html
|
||||
|
||||
def generate_final_html(self, network_body, legend_items_html, title="Flow Plot"):
|
||||
"""Combine all components into final HTML document with network visualization."""
|
||||
html_template = self.read_template()
|
||||
logo_svg_base64 = self.encode_logo()
|
||||
|
||||
|
||||
@@ -1,3 +1,4 @@
|
||||
|
||||
def get_legend_items(colors):
|
||||
return [
|
||||
{"label": "Start Method", "color": colors["start"]},
|
||||
|
||||
src/crewai/flow/path_utils.py (new file, 135 additions)
@@ -0,0 +1,135 @@
"""
Path utilities for secure file operations in CrewAI flow module.

This module provides utilities for secure path handling to prevent directory
traversal attacks and ensure paths remain within allowed boundaries.
"""

import os
from pathlib import Path
from typing import List, Union


def safe_path_join(*parts: str, root: Union[str, Path, None] = None) -> str:
    """
    Safely join path components and ensure the result is within allowed boundaries.

    Parameters
    ----------
    *parts : str
        Variable number of path components to join.
    root : Union[str, Path, None], optional
        Root directory to use as base. If None, uses current working directory.

    Returns
    -------
    str
        String representation of the resolved path.

    Raises
    ------
    ValueError
        If the resulting path would be outside the root directory
        or if any path component is invalid.
    """
    if not parts:
        raise ValueError("No path components provided")

    try:
        # Convert all parts to strings and clean them
        clean_parts = [str(part).strip() for part in parts if part]
        if not clean_parts:
            raise ValueError("No valid path components provided")

        # Establish root directory
        root_path = Path(root).resolve() if root else Path.cwd()

        # Join and resolve the full path
        full_path = Path(root_path, *clean_parts).resolve()

        # Check if the resolved path is within root
        if not str(full_path).startswith(str(root_path)):
            raise ValueError(
                f"Invalid path: Potential directory traversal. Path must be within {root_path}"
            )

        return str(full_path)

    except Exception as e:
        if isinstance(e, ValueError):
            raise
        raise ValueError(f"Invalid path components: {str(e)}")


def validate_path_exists(path: Union[str, Path], file_type: str = "file") -> str:
    """
    Validate that a path exists and is of the expected type.

    Parameters
    ----------
    path : Union[str, Path]
        Path to validate.
    file_type : str, optional
        Expected type ('file' or 'directory'), by default 'file'.

    Returns
    -------
    str
        Validated path as string.

    Raises
    ------
    ValueError
        If path doesn't exist or is not of expected type.
    """
    try:
        path_obj = Path(path).resolve()

        if not path_obj.exists():
            raise ValueError(f"Path does not exist: {path}")

        if file_type == "file" and not path_obj.is_file():
            raise ValueError(f"Path is not a file: {path}")
        elif file_type == "directory" and not path_obj.is_dir():
            raise ValueError(f"Path is not a directory: {path}")

        return str(path_obj)

    except Exception as e:
        if isinstance(e, ValueError):
            raise
        raise ValueError(f"Invalid path: {str(e)}")


def list_files(directory: Union[str, Path], pattern: str = "*") -> List[str]:
    """
    Safely list files in a directory matching a pattern.

    Parameters
    ----------
    directory : Union[str, Path]
        Directory to search in.
    pattern : str, optional
        Glob pattern to match files against, by default "*".

    Returns
    -------
    List[str]
        List of matching file paths.

    Raises
    ------
    ValueError
        If directory is invalid or inaccessible.
    """
    try:
        dir_path = Path(directory).resolve()
        if not dir_path.is_dir():
            raise ValueError(f"Not a directory: {directory}")

        return [str(p) for p in dir_path.glob(pattern) if p.is_file()]

    except Exception as e:
        if isinstance(e, ValueError):
            raise
        raise ValueError(f"Error listing files: {str(e)}")
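A small sketch of how these helpers behave (illustrative only; the root directory here is a hypothetical example):

```python
from crewai.flow.path_utils import safe_path_join

root = "/tmp/flow_assets"  # hypothetical root for the example

# Joins stay inside the root and come back fully resolved.
print(safe_path_join("assets", "crewai_logo.svg", root=root))
# -> /tmp/flow_assets/assets/crewai_logo.svg

# Anything that resolves outside the root is rejected.
try:
    safe_path_join("..", "etc", "passwd", root=root)
except ValueError as e:
    print(f"rejected: {e}")
```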
@@ -1,9 +1,25 @@
|
||||
"""
|
||||
Utility functions for flow visualization and dependency analysis.
|
||||
|
||||
This module provides core functionality for analyzing and manipulating flow structures,
|
||||
including node level calculation, ancestor tracking, and return value analysis.
|
||||
Functions in this module are primarily used by the visualization system to create
|
||||
accurate and informative flow diagrams.
|
||||
|
||||
Example
|
||||
-------
|
||||
>>> flow = Flow()
|
||||
>>> node_levels = calculate_node_levels(flow)
|
||||
>>> ancestors = build_ancestor_dict(flow)
|
||||
"""
|
||||
|
||||
import ast
|
||||
import inspect
|
||||
import textwrap
|
||||
from typing import Any, Dict, List, Optional, Set, Union
|
||||
|
||||
|
||||
def get_possible_return_constants(function):
|
||||
def get_possible_return_constants(function: Any) -> Optional[List[str]]:
|
||||
try:
|
||||
source = inspect.getsource(function)
|
||||
except OSError:
|
||||
@@ -31,23 +47,80 @@ def get_possible_return_constants(function):
|
||||
print(f"Source code:\n{source}")
|
||||
return None
|
||||
|
||||
return_values = []
|
||||
return_values = set()
|
||||
dict_definitions = {}
|
||||
|
||||
class DictionaryAssignmentVisitor(ast.NodeVisitor):
|
||||
def visit_Assign(self, node):
|
||||
# Check if this assignment is assigning a dictionary literal to a variable
|
||||
if isinstance(node.value, ast.Dict) and len(node.targets) == 1:
|
||||
target = node.targets[0]
|
||||
if isinstance(target, ast.Name):
|
||||
var_name = target.id
|
||||
dict_values = []
|
||||
# Extract string values from the dictionary
|
||||
for val in node.value.values:
|
||||
if isinstance(val, ast.Constant) and isinstance(val.value, str):
|
||||
dict_values.append(val.value)
|
||||
# If non-string, skip or just ignore
|
||||
if dict_values:
|
||||
dict_definitions[var_name] = dict_values
|
||||
self.generic_visit(node)
|
||||
|
||||
class ReturnVisitor(ast.NodeVisitor):
|
||||
def visit_Return(self, node):
|
||||
# Check if the return value is a constant (Python 3.8+)
|
||||
if isinstance(node.value, ast.Constant):
|
||||
return_values.append(node.value.value)
|
||||
# Direct string return
|
||||
if isinstance(node.value, ast.Constant) and isinstance(
|
||||
node.value.value, str
|
||||
):
|
||||
return_values.add(node.value.value)
|
||||
# Dictionary-based return, like return paths[result]
|
||||
elif isinstance(node.value, ast.Subscript):
|
||||
# Check if we're subscripting a known dictionary variable
|
||||
if isinstance(node.value.value, ast.Name):
|
||||
var_name = node.value.value.id
|
||||
if var_name in dict_definitions:
|
||||
# Add all possible dictionary values
|
||||
for v in dict_definitions[var_name]:
|
||||
return_values.add(v)
|
||||
self.generic_visit(node)
|
||||
|
||||
# First pass: identify dictionary assignments
|
||||
DictionaryAssignmentVisitor().visit(code_ast)
|
||||
# Second pass: identify returns
|
||||
ReturnVisitor().visit(code_ast)
|
||||
return return_values
|
||||
|
||||
return list(return_values) if return_values else None
|
||||
|
||||
|
||||
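The two-pass AST analysis above is easiest to see on a router-style method; a hypothetical example of what it is meant to recover:

```python
# Hypothetical router method. The DictionaryAssignmentVisitor records
# {"paths": ["approved", "rejected"]}, and the ReturnVisitor then collects
# the dict values (from `return paths[result]`) plus the literal
# "needs_review", so the analysis yields {"approved", "rejected", "needs_review"}.
def route_decision(self):
    paths = {"ok": "approved", "bad": "rejected"}
    result = self.state.get("status", "ok")
    if result in paths:
        return paths[result]
    return "needs_review"
```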
def calculate_node_levels(flow):
|
||||
levels = {}
|
||||
queue = []
|
||||
visited = set()
|
||||
pending_and_listeners = {}
|
||||
def calculate_node_levels(flow: Any) -> Dict[str, int]:
|
||||
"""
|
||||
Calculate the hierarchical level of each node in the flow.
|
||||
|
||||
Performs a breadth-first traversal of the flow graph to assign levels
|
||||
to nodes, starting with start methods at level 0.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
flow : Any
|
||||
The flow instance containing methods, listeners, and router configurations.
|
||||
|
||||
Returns
|
||||
-------
|
||||
Dict[str, int]
|
||||
Dictionary mapping method names to their hierarchical levels.
|
||||
|
||||
Notes
|
||||
-----
|
||||
- Start methods are assigned level 0
|
||||
- Each subsequent connected node is assigned level = parent_level + 1
|
||||
- Handles both OR and AND conditions for listeners
|
||||
- Processes router paths separately
|
||||
"""
|
||||
levels: Dict[str, int] = {}
|
||||
queue: List[str] = []
|
||||
visited: Set[str] = set()
|
||||
pending_and_listeners: Dict[str, Set[str]] = {}
|
||||
|
||||
# Make all start methods at level 0
|
||||
for method_name, method in flow._methods.items():
|
||||
@@ -61,10 +134,7 @@ def calculate_node_levels(flow):
|
||||
current_level = levels[current]
|
||||
visited.add(current)
|
||||
|
||||
for listener_name, (
|
||||
condition_type,
|
||||
trigger_methods,
|
||||
) in flow._listeners.items():
|
||||
for listener_name, (condition_type, trigger_methods) in flow._listeners.items():
|
||||
if condition_type == "OR":
|
||||
if current in trigger_methods:
|
||||
if (
|
||||
@@ -89,7 +159,7 @@ def calculate_node_levels(flow):
|
||||
queue.append(listener_name)
|
||||
|
||||
# Handle router connections
|
||||
if current in flow._routers.values():
|
||||
if current in flow._routers:
|
||||
router_method_name = current
|
||||
paths = flow._router_paths.get(router_method_name, [])
|
||||
for path in paths:
|
||||
@@ -105,10 +175,24 @@ def calculate_node_levels(flow):
|
||||
levels[listener_name] = current_level + 1
|
||||
if listener_name not in visited:
|
||||
queue.append(listener_name)
|
||||
|
||||
return levels
|
||||
|
||||
|
||||
def count_outgoing_edges(flow):
|
||||
def count_outgoing_edges(flow: Any) -> Dict[str, int]:
|
||||
"""
|
||||
Count the number of outgoing edges for each method in the flow.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
flow : Any
|
||||
The flow instance to analyze.
|
||||
|
||||
Returns
|
||||
-------
|
||||
Dict[str, int]
|
||||
Dictionary mapping method names to their outgoing edge count.
|
||||
"""
|
||||
counts = {}
|
||||
for method_name in flow._methods:
|
||||
counts[method_name] = 0
|
||||
@@ -120,16 +204,53 @@ def count_outgoing_edges(flow):
|
||||
return counts
|
||||
|
||||
|
||||
def build_ancestor_dict(flow):
|
||||
ancestors = {node: set() for node in flow._methods}
|
||||
visited = set()
|
||||
def build_ancestor_dict(flow: Any) -> Dict[str, Set[str]]:
|
||||
"""
|
||||
Build a dictionary mapping each node to its ancestor nodes.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
flow : Any
|
||||
The flow instance to analyze.
|
||||
|
||||
Returns
|
||||
-------
|
||||
Dict[str, Set[str]]
|
||||
Dictionary mapping each node to a set of its ancestor nodes.
|
||||
"""
|
||||
ancestors: Dict[str, Set[str]] = {node: set() for node in flow._methods}
|
||||
visited: Set[str] = set()
|
||||
for node in flow._methods:
|
||||
if node not in visited:
|
||||
dfs_ancestors(node, ancestors, visited, flow)
|
||||
return ancestors
|
||||
|
||||
|
||||
def dfs_ancestors(node, ancestors, visited, flow):
|
||||
def dfs_ancestors(
|
||||
node: str,
|
||||
ancestors: Dict[str, Set[str]],
|
||||
visited: Set[str],
|
||||
flow: Any
|
||||
) -> None:
|
||||
"""
|
||||
Perform depth-first search to build ancestor relationships.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
node : str
|
||||
Current node being processed.
|
||||
ancestors : Dict[str, Set[str]]
|
||||
Dictionary tracking ancestor relationships.
|
||||
visited : Set[str]
|
||||
Set of already visited nodes.
|
||||
flow : Any
|
||||
The flow instance being analyzed.
|
||||
|
||||
Notes
|
||||
-----
|
||||
This function modifies the ancestors dictionary in-place to build
|
||||
the complete ancestor graph.
|
||||
"""
|
||||
if node in visited:
|
||||
return
|
||||
visited.add(node)
|
||||
@@ -142,7 +263,7 @@ def dfs_ancestors(node, ancestors, visited, flow):
|
||||
dfs_ancestors(listener_name, ancestors, visited, flow)
|
||||
|
||||
# Handle router methods separately
|
||||
if node in flow._routers.values():
|
||||
if node in flow._routers:
|
||||
router_method_name = node
|
||||
paths = flow._router_paths.get(router_method_name, [])
|
||||
for path in paths:
|
||||
@@ -153,12 +274,48 @@ def dfs_ancestors(node, ancestors, visited, flow):
|
||||
dfs_ancestors(listener_name, ancestors, visited, flow)
|
||||
|
||||
|
||||
def is_ancestor(node, ancestor_candidate, ancestors):
|
||||
def is_ancestor(node: str, ancestor_candidate: str, ancestors: Dict[str, Set[str]]) -> bool:
|
||||
"""
|
||||
Check if one node is an ancestor of another.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
node : str
|
||||
The node to check ancestors for.
|
||||
ancestor_candidate : str
|
||||
The potential ancestor node.
|
||||
ancestors : Dict[str, Set[str]]
|
||||
Dictionary containing ancestor relationships.
|
||||
|
||||
Returns
|
||||
-------
|
||||
bool
|
||||
True if ancestor_candidate is an ancestor of node, False otherwise.
|
||||
"""
|
||||
return ancestor_candidate in ancestors.get(node, set())
|
||||
|
||||
|
||||
def build_parent_children_dict(flow):
|
||||
parent_children = {}
|
||||
def build_parent_children_dict(flow: Any) -> Dict[str, List[str]]:
|
||||
"""
|
||||
Build a dictionary mapping parent nodes to their children.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
flow : Any
|
||||
The flow instance to analyze.
|
||||
|
||||
Returns
|
||||
-------
|
||||
Dict[str, List[str]]
|
||||
Dictionary mapping parent method names to lists of their child method names.
|
||||
|
||||
Notes
|
||||
-----
|
||||
- Maps listeners to their trigger methods
|
||||
- Maps router methods to their paths and listeners
|
||||
- Children lists are sorted for consistent ordering
|
||||
"""
|
||||
parent_children: Dict[str, List[str]] = {}
|
||||
|
||||
# Map listeners to their trigger methods
|
||||
for listener_name, (_, trigger_methods) in flow._listeners.items():
|
||||
@@ -182,7 +339,24 @@ def build_parent_children_dict(flow):
|
||||
return parent_children
|
||||
|
||||
|
||||
def get_child_index(parent, child, parent_children):
|
||||
def get_child_index(parent: str, child: str, parent_children: Dict[str, List[str]]) -> int:
|
||||
"""
|
||||
Get the index of a child node in its parent's sorted children list.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
parent : str
|
||||
The parent node name.
|
||||
child : str
|
||||
The child node name to find the index for.
|
||||
parent_children : Dict[str, List[str]]
|
||||
Dictionary mapping parents to their children lists.
|
||||
|
||||
Returns
|
||||
-------
|
||||
int
|
||||
Zero-based index of the child in its parent's sorted children list.
|
||||
"""
|
||||
children = parent_children.get(parent, [])
|
||||
children.sort()
|
||||
return children.index(child)
|
||||
|
||||
@@ -1,5 +1,23 @@
|
||||
"""
|
||||
Utilities for creating visual representations of flow structures.
|
||||
|
||||
This module provides functions for generating network visualizations of flows,
|
||||
including node placement, edge creation, and visual styling. It handles the
|
||||
conversion of flow structures into visual network graphs with appropriate
|
||||
styling and layout.
|
||||
|
||||
Example
|
||||
-------
|
||||
>>> flow = Flow()
|
||||
>>> net = Network(directed=True)
|
||||
>>> node_positions = compute_positions(flow, node_levels)
|
||||
>>> add_nodes_to_network(net, flow, node_positions, node_styles)
|
||||
>>> add_edges(net, flow, node_positions, colors)
|
||||
"""
|
||||
|
||||
import ast
|
||||
import inspect
|
||||
from typing import Any, Dict, List, Optional, Tuple, Union
|
||||
|
||||
from .utils import (
|
||||
build_ancestor_dict,
|
||||
@@ -9,8 +27,25 @@ from .utils import (
|
||||
)
|
||||
|
||||
|
||||
def method_calls_crew(method):
|
||||
"""Check if the method calls `.crew()`."""
|
||||
def method_calls_crew(method: Any) -> bool:
|
||||
"""
|
||||
Check if the method contains a call to `.crew()`.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
method : Any
|
||||
The method to analyze for crew() calls.
|
||||
|
||||
Returns
|
||||
-------
|
||||
bool
|
||||
True if the method calls .crew(), False otherwise.
|
||||
|
||||
Notes
|
||||
-----
|
||||
Uses AST analysis to detect method calls, specifically looking for
|
||||
attribute access of 'crew'.
|
||||
"""
|
||||
try:
|
||||
source = inspect.getsource(method)
|
||||
source = inspect.cleandoc(source)
|
||||
@@ -20,6 +55,7 @@ def method_calls_crew(method):
|
||||
return False
|
||||
|
||||
class CrewCallVisitor(ast.NodeVisitor):
|
||||
"""AST visitor to detect .crew() method calls."""
|
||||
def __init__(self):
|
||||
self.found = False
|
||||
|
||||
@@ -34,7 +70,34 @@ def method_calls_crew(method):
|
||||
return visitor.found
|
||||
|
||||
|
||||
def add_nodes_to_network(net, flow, node_positions, node_styles):
|
||||
def add_nodes_to_network(
|
||||
net: Any,
|
||||
flow: Any,
|
||||
node_positions: Dict[str, Tuple[float, float]],
|
||||
node_styles: Dict[str, Dict[str, Any]]
|
||||
) -> None:
|
||||
"""
|
||||
Add nodes to the network visualization with appropriate styling.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
net : Any
|
||||
The pyvis Network instance to add nodes to.
|
||||
flow : Any
|
||||
The flow instance containing method information.
|
||||
node_positions : Dict[str, Tuple[float, float]]
|
||||
Dictionary mapping node names to their (x, y) positions.
|
||||
node_styles : Dict[str, Dict[str, Any]]
|
||||
Dictionary containing style configurations for different node types.
|
||||
|
||||
Notes
|
||||
-----
|
||||
Node types include:
|
||||
- Start methods
|
||||
- Router methods
|
||||
- Crew methods
|
||||
- Regular methods
|
||||
"""
|
||||
def human_friendly_label(method_name):
|
||||
return method_name.replace("_", " ").title()
|
||||
|
||||
@@ -73,9 +136,33 @@ def add_nodes_to_network(net, flow, node_positions, node_styles):
|
||||
)
|
||||
|
||||
|
||||
def compute_positions(flow, node_levels, y_spacing=150, x_spacing=150):
|
||||
level_nodes = {}
|
||||
node_positions = {}
|
||||
def compute_positions(
|
||||
flow: Any,
|
||||
node_levels: Dict[str, int],
|
||||
y_spacing: float = 150,
|
||||
x_spacing: float = 150
|
||||
) -> Dict[str, Tuple[float, float]]:
|
||||
"""
|
||||
Compute the (x, y) positions for each node in the flow graph.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
flow : Any
|
||||
The flow instance to compute positions for.
|
||||
node_levels : Dict[str, int]
|
||||
Dictionary mapping node names to their hierarchical levels.
|
||||
y_spacing : float, optional
|
||||
Vertical spacing between levels, by default 150.
|
||||
x_spacing : float, optional
|
||||
Horizontal spacing between nodes, by default 150.
|
||||
|
||||
Returns
|
||||
-------
|
||||
Dict[str, Tuple[float, float]]
|
||||
Dictionary mapping node names to their (x, y) coordinates.
|
||||
"""
|
||||
level_nodes: Dict[int, List[str]] = {}
|
||||
node_positions: Dict[str, Tuple[float, float]] = {}
|
||||
|
||||
for method_name, level in node_levels.items():
|
||||
level_nodes.setdefault(level, []).append(method_name)
|
||||
@@ -90,16 +177,44 @@ def compute_positions(flow, node_levels, y_spacing=150, x_spacing=150):
|
||||
return node_positions
|
||||
|
||||
|
||||
def add_edges(net, flow, node_positions, colors):
|
||||
def add_edges(
|
||||
net: Any,
|
||||
flow: Any,
|
||||
node_positions: Dict[str, Tuple[float, float]],
|
||||
colors: Dict[str, str]
|
||||
) -> None:
|
||||
edge_smooth: Dict[str, Union[str, float]] = {"type": "continuous"} # Default value
|
||||
"""
|
||||
Add edges to the network visualization with appropriate styling.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
net : Any
|
||||
The pyvis Network instance to add edges to.
|
||||
flow : Any
|
||||
The flow instance containing edge information.
|
||||
node_positions : Dict[str, Tuple[float, float]]
|
||||
Dictionary mapping node names to their positions.
|
||||
colors : Dict[str, str]
|
||||
Dictionary mapping edge types to their colors.
|
||||
|
||||
Notes
|
||||
-----
|
||||
- Handles both normal listener edges and router edges
|
||||
- Applies appropriate styling (color, dashes) based on edge type
|
||||
- Adds curvature to edges when needed (cycles or multiple children)
|
||||
"""
|
||||
ancestors = build_ancestor_dict(flow)
|
||||
parent_children = build_parent_children_dict(flow)
|
||||
|
||||
# Edges for normal listeners
|
||||
for method_name in flow._listeners:
|
||||
condition_type, trigger_methods = flow._listeners[method_name]
|
||||
is_and_condition = condition_type == "AND"
|
||||
|
||||
for trigger in trigger_methods:
|
||||
if trigger in flow._methods or trigger in flow._routers.values():
|
||||
# Check if nodes exist before adding edges
|
||||
if trigger in node_positions and method_name in node_positions:
|
||||
is_router_edge = any(
|
||||
trigger in paths for paths in flow._router_paths.values()
|
||||
)
|
||||
@@ -124,7 +239,7 @@ def add_edges(net, flow, node_positions, colors):
|
||||
else:
|
||||
edge_smooth = {"type": "cubicBezier"}
|
||||
else:
|
||||
edge_smooth = False
|
||||
edge_smooth.update({"type": "continuous"})
|
||||
|
||||
edge_style = {
|
||||
"color": edge_color,
|
||||
@@ -135,7 +250,22 @@ def add_edges(net, flow, node_positions, colors):
|
||||
}
|
||||
|
||||
net.add_edge(trigger, method_name, **edge_style)
|
||||
else:
|
||||
# Nodes not found in node_positions. Check if it's a known router outcome and a known method.
|
||||
is_router_edge = any(
|
||||
trigger in paths for paths in flow._router_paths.values()
|
||||
)
|
||||
# Check if method_name is a known method
|
||||
method_known = method_name in flow._methods
|
||||
|
||||
# If it's a known router edge and the method is known, don't warn.
|
||||
# This means the path is legitimate, just not reflected as nodes here.
|
||||
if not (is_router_edge and method_known):
|
||||
print(
|
||||
f"Warning: No node found for '{trigger}' or '{method_name}'. Skipping edge."
|
||||
)
|
||||
|
||||
# Edges for router return paths
|
||||
for router_method_name, paths in flow._router_paths.items():
|
||||
for path in paths:
|
||||
for listener_name, (
|
||||
@@ -143,36 +273,49 @@ def add_edges(net, flow, node_positions, colors):
|
||||
trigger_methods,
|
||||
) in flow._listeners.items():
|
||||
if path in trigger_methods:
|
||||
is_cycle_edge = is_ancestor(trigger, method_name, ancestors)
|
||||
parent_has_multiple_children = (
|
||||
len(parent_children.get(router_method_name, [])) > 1
|
||||
)
|
||||
needs_curvature = is_cycle_edge or parent_has_multiple_children
|
||||
if (
|
||||
router_method_name in node_positions
|
||||
and listener_name in node_positions
|
||||
):
|
||||
is_cycle_edge = is_ancestor(
|
||||
router_method_name, listener_name, ancestors
|
||||
)
|
||||
parent_has_multiple_children = (
|
||||
len(parent_children.get(router_method_name, [])) > 1
|
||||
)
|
||||
needs_curvature = is_cycle_edge or parent_has_multiple_children
|
||||
|
||||
if needs_curvature:
|
||||
source_pos = node_positions.get(router_method_name)
|
||||
target_pos = node_positions.get(listener_name)
|
||||
if needs_curvature:
|
||||
source_pos = node_positions.get(router_method_name)
|
||||
target_pos = node_positions.get(listener_name)
|
||||
|
||||
if source_pos and target_pos:
|
||||
dx = target_pos[0] - source_pos[0]
|
||||
smooth_type = "curvedCCW" if dx <= 0 else "curvedCW"
|
||||
index = get_child_index(
|
||||
router_method_name, listener_name, parent_children
|
||||
)
|
||||
edge_smooth = {
|
||||
"type": smooth_type,
|
||||
"roundness": 0.2 + (0.1 * index),
|
||||
}
|
||||
if source_pos and target_pos:
|
||||
dx = target_pos[0] - source_pos[0]
|
||||
smooth_type = "curvedCCW" if dx <= 0 else "curvedCW"
|
||||
index = get_child_index(
|
||||
router_method_name, listener_name, parent_children
|
||||
)
|
||||
edge_smooth = {
|
||||
"type": smooth_type,
|
||||
"roundness": 0.2 + (0.1 * index),
|
||||
}
|
||||
else:
|
||||
edge_smooth = {"type": "cubicBezier"}
|
||||
else:
|
||||
edge_smooth = {"type": "cubicBezier"}
|
||||
else:
|
||||
edge_smooth = False
|
||||
edge_smooth.update({"type": "continuous"})
|
||||
|
||||
edge_style = {
|
||||
"color": colors["router_edge"],
|
||||
"width": 2,
|
||||
"arrows": "to",
|
||||
"dashes": True,
|
||||
"smooth": edge_smooth,
|
||||
}
|
||||
net.add_edge(router_method_name, listener_name, **edge_style)
|
||||
edge_style = {
|
||||
"color": colors["router_edge"],
|
||||
"width": 2,
|
||||
"arrows": "to",
|
||||
"dashes": True,
|
||||
"smooth": edge_smooth,
|
||||
}
|
||||
net.add_edge(router_method_name, listener_name, **edge_style)
|
||||
else:
|
||||
# Same check here: known router edge and known method?
|
||||
method_known = listener_name in flow._methods
|
||||
if not method_known:
|
||||
print(
|
||||
f"Warning: No node found for '{router_method_name}' or '{listener_name}'. Skipping edge."
|
||||
)
|
||||
|
||||
@@ -14,13 +14,13 @@ class Knowledge(BaseModel):
|
||||
Knowledge is a collection of sources and setup for the vector store to save and query relevant context.
|
||||
Args:
|
||||
sources: List[BaseKnowledgeSource] = Field(default_factory=list)
|
||||
storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
|
||||
storage: Optional[KnowledgeStorage] = Field(default=None)
|
||||
embedder_config: Optional[Dict[str, Any]] = None
|
||||
"""
|
||||
|
||||
sources: List[BaseKnowledgeSource] = Field(default_factory=list)
|
||||
model_config = ConfigDict(arbitrary_types_allowed=True)
|
||||
storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
|
||||
storage: Optional[KnowledgeStorage] = Field(default=None)
|
||||
embedder_config: Optional[Dict[str, Any]] = None
|
||||
collection_name: Optional[str] = None
|
||||
|
||||
@@ -49,8 +49,13 @@ class Knowledge(BaseModel):
|
||||
"""
|
||||
Query across all knowledge sources to find the most relevant information.
|
||||
Returns the top_k most relevant chunks.
|
||||
|
||||
Raises:
|
||||
ValueError: If storage is not initialized.
|
||||
"""
|
||||
|
||||
if self.storage is None:
|
||||
raise ValueError("Storage is not initialized.")
|
||||
|
||||
results = self.storage.search(
|
||||
query,
|
||||
limit,
|
||||
|
||||
@@ -1,8 +1,8 @@
|
||||
from abc import ABC, abstractmethod
|
||||
from pathlib import Path
|
||||
from typing import Dict, List, Union
|
||||
from typing import Dict, List, Optional, Union
|
||||
|
||||
from pydantic import Field
|
||||
from pydantic import Field, field_validator
|
||||
|
||||
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
|
||||
from crewai.knowledge.storage.knowledge_storage import KnowledgeStorage
|
||||
@@ -14,17 +14,29 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
|
||||
"""Base class for knowledge sources that load content from files."""
|
||||
|
||||
_logger: Logger = Logger(verbose=True)
|
||||
file_path: Union[Path, List[Path], str, List[str]] = Field(
|
||||
..., description="The path to the file"
|
||||
file_path: Optional[Union[Path, List[Path], str, List[str]]] = Field(
|
||||
default=None,
|
||||
description="[Deprecated] The path to the file. Use file_paths instead.",
|
||||
)
|
||||
file_paths: Optional[Union[Path, List[Path], str, List[str]]] = Field(
|
||||
default_factory=list, description="The path to the file"
|
||||
)
|
||||
content: Dict[Path, str] = Field(init=False, default_factory=dict)
|
||||
storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
|
||||
storage: Optional[KnowledgeStorage] = Field(default=None)
|
||||
safe_file_paths: List[Path] = Field(default_factory=list)
|
||||
|
||||
@field_validator("file_path", "file_paths", mode="before")
|
||||
def validate_file_path(cls, v, info):
|
||||
"""Validate that at least one of file_path or file_paths is provided."""
|
||||
# Single check if both are None, O(1) instead of nested conditions
|
||||
if v is None and info.data.get("file_path" if info.field_name == "file_paths" else "file_paths") is None:
|
||||
raise ValueError("Either file_path or file_paths must be provided")
|
||||
return v
|
||||
|
||||
def model_post_init(self, _):
|
||||
"""Post-initialization method to load content."""
|
||||
self.safe_file_paths = self._process_file_paths()
|
||||
self.validate_paths()
|
||||
self.validate_content()
|
||||
self.content = self.load_content()
|
||||
|
||||
@abstractmethod
|
||||
@@ -32,7 +44,7 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
|
||||
"""Load and preprocess file content. Should be overridden by subclasses. Assume that the file path is relative to the project root in the knowledge directory."""
|
||||
pass
|
||||
|
||||
def validate_paths(self):
|
||||
def validate_content(self):
|
||||
"""Validate the paths."""
|
||||
for path in self.safe_file_paths:
|
||||
if not path.exists():
|
||||
@@ -51,7 +63,10 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
|
||||
|
||||
def _save_documents(self):
|
||||
"""Save the documents to the storage."""
|
||||
self.storage.save(self.chunks)
|
||||
if self.storage:
|
||||
self.storage.save(self.chunks)
|
||||
else:
|
||||
raise ValueError("No storage found to save documents.")
|
||||
|
||||
def convert_to_path(self, path: Union[Path, str]) -> Path:
|
||||
"""Convert a path to a Path object."""
|
||||
@@ -59,13 +74,30 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
|
||||
|
||||
def _process_file_paths(self) -> List[Path]:
|
||||
"""Convert file_path to a list of Path objects."""
|
||||
paths = (
|
||||
[self.file_path]
|
||||
if isinstance(self.file_path, (str, Path))
|
||||
else self.file_path
|
||||
|
||||
if hasattr(self, "file_path") and self.file_path is not None:
|
||||
self._logger.log(
|
||||
"warning",
|
||||
"The 'file_path' attribute is deprecated and will be removed in a future version. Please use 'file_paths' instead.",
|
||||
color="yellow",
|
||||
)
|
||||
self.file_paths = self.file_path
|
||||
|
||||
if self.file_paths is None:
|
||||
raise ValueError("Your source must be provided with a file_paths: []")
|
||||
|
||||
# Convert single path to list
|
||||
path_list: List[Union[Path, str]] = (
|
||||
[self.file_paths]
|
||||
if isinstance(self.file_paths, (str, Path))
|
||||
else list(self.file_paths)
|
||||
if isinstance(self.file_paths, list)
|
||||
else []
|
||||
)
|
||||
|
||||
if not isinstance(paths, list):
|
||||
raise ValueError("file_path must be a Path, str, or a list of these types")
|
||||
if not path_list:
|
||||
raise ValueError(
|
||||
"file_path/file_paths must be a Path, str, or a list of these types"
|
||||
)
|
||||
|
||||
return [self.convert_to_path(path) for path in paths]
|
||||
return [self.convert_to_path(path) for path in path_list]
|
||||
|
||||
@@ -16,12 +16,12 @@ class BaseKnowledgeSource(BaseModel, ABC):
|
||||
chunk_embeddings: List[np.ndarray] = Field(default_factory=list)
|
||||
|
||||
model_config = ConfigDict(arbitrary_types_allowed=True)
|
||||
storage: KnowledgeStorage = Field(default_factory=KnowledgeStorage)
|
||||
storage: Optional[KnowledgeStorage] = Field(default=None)
|
||||
metadata: Dict[str, Any] = Field(default_factory=dict) # Currently unused
|
||||
collection_name: Optional[str] = Field(default=None)
|
||||
|
||||
@abstractmethod
|
||||
def load_content(self) -> Dict[Any, str]:
|
||||
def validate_content(self) -> Any:
|
||||
"""Load and preprocess content from the source."""
|
||||
pass
|
||||
|
||||
@@ -46,4 +46,7 @@ class BaseKnowledgeSource(BaseModel, ABC):
|
||||
Save the documents to the storage.
|
||||
This method should be called after the chunks and embeddings are generated.
|
||||
"""
|
||||
self.storage.save(self.chunks)
|
||||
if self.storage:
|
||||
self.storage.save(self.chunks)
|
||||
else:
|
||||
raise ValueError("No storage found to save documents.")
|
||||
|
||||
src/crewai/knowledge/source/crew_docling_source.py (new file, 120 additions)
@@ -0,0 +1,120 @@
from pathlib import Path
from typing import Iterator, List, Optional, Union
from urllib.parse import urlparse

from docling.datamodel.base_models import InputFormat
from docling.document_converter import DocumentConverter
from docling.exceptions import ConversionError
from docling_core.transforms.chunker.hierarchical_chunker import HierarchicalChunker
from docling_core.types.doc.document import DoclingDocument
from pydantic import Field

from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.utilities.constants import KNOWLEDGE_DIRECTORY
from crewai.utilities.logger import Logger


class CrewDoclingSource(BaseKnowledgeSource):
    """Default Source class for converting documents to markdown or json
    This will auto support PDF, DOCX, and TXT, XLSX, Images, and HTML files without any additional dependencies and follows the docling package as the source of truth.
    """

    _logger: Logger = Logger(verbose=True)

    file_path: Optional[List[Union[Path, str]]] = Field(default=None)
    file_paths: List[Union[Path, str]] = Field(default_factory=list)
    chunks: List[str] = Field(default_factory=list)
    safe_file_paths: List[Union[Path, str]] = Field(default_factory=list)
    content: List[DoclingDocument] = Field(default_factory=list)
    document_converter: DocumentConverter = Field(
        default_factory=lambda: DocumentConverter(
            allowed_formats=[
                InputFormat.MD,
                InputFormat.ASCIIDOC,
                InputFormat.PDF,
                InputFormat.DOCX,
                InputFormat.HTML,
                InputFormat.IMAGE,
                InputFormat.XLSX,
                InputFormat.PPTX,
            ]
        )
    )

    def model_post_init(self, _) -> None:
        if self.file_path:
            self._logger.log(
                "warning",
                "The 'file_path' attribute is deprecated and will be removed in a future version. Please use 'file_paths' instead.",
                color="yellow",
            )
            self.file_paths = self.file_path
        self.safe_file_paths = self.validate_content()
        self.content = self._load_content()

    def _load_content(self) -> List[DoclingDocument]:
        try:
            return self._convert_source_to_docling_documents()
        except ConversionError as e:
            self._logger.log(
                "error",
                f"Error loading content: {e}. Supported formats: {self.document_converter.allowed_formats}",
                "red",
            )
            raise e
        except Exception as e:
            self._logger.log("error", f"Error loading content: {e}")
            raise e

    def add(self) -> None:
        if self.content is None:
            return
        for doc in self.content:
            new_chunks_iterable = self._chunk_doc(doc)
            self.chunks.extend(list(new_chunks_iterable))
        self._save_documents()

    def _convert_source_to_docling_documents(self) -> List[DoclingDocument]:
        conv_results_iter = self.document_converter.convert_all(self.safe_file_paths)
        return [result.document for result in conv_results_iter]

    def _chunk_doc(self, doc: DoclingDocument) -> Iterator[str]:
        chunker = HierarchicalChunker()
        for chunk in chunker.chunk(doc):
            yield chunk.text

    def validate_content(self) -> List[Union[Path, str]]:
        processed_paths: List[Union[Path, str]] = []
        for path in self.file_paths:
            if isinstance(path, str):
                if path.startswith(("http://", "https://")):
                    try:
                        if self._validate_url(path):
                            processed_paths.append(path)
                        else:
                            raise ValueError(f"Invalid URL format: {path}")
                    except Exception as e:
                        raise ValueError(f"Invalid URL: {path}. Error: {str(e)}")
                else:
                    local_path = Path(KNOWLEDGE_DIRECTORY + "/" + path)
                    if local_path.exists():
                        processed_paths.append(local_path)
                    else:
                        raise FileNotFoundError(f"File not found: {local_path}")
            else:
                # this is an instance of Path
                processed_paths.append(path)
        return processed_paths

    def _validate_url(self, url: str) -> bool:
        try:
            result = urlparse(url)
            return all(
                [
                    result.scheme in ("http", "https"),
                    result.netloc,
                    len(result.netloc.split(".")) >= 2,  # Ensure domain has TLD
                ]
            )
        except Exception:
            return False
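A usage sketch for the new source (illustrative; the file names are hypothetical, local paths resolve relative to the knowledge/ directory, and URLs are passed through to docling):

```python
from crewai.knowledge.source.crew_docling_source import CrewDoclingSource

# Local files are looked up under KNOWLEDGE_DIRECTORY ("knowledge/"),
# URLs are validated and converted directly.
source = CrewDoclingSource(
    file_paths=[
        "annual_report.pdf",
        "https://example.com/product-spec.html",
    ]
)
# Typically attached via `knowledge_sources=[source]` on an Agent or Crew,
# which chunks the converted documents and saves them to the vector store.
```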
@@ -13,9 +13,9 @@ class StringKnowledgeSource(BaseKnowledgeSource):
|
||||
|
||||
def model_post_init(self, _):
|
||||
"""Post-initialization method to validate content."""
|
||||
self.load_content()
|
||||
self.validate_content()
|
||||
|
||||
def load_content(self):
|
||||
def validate_content(self):
|
||||
"""Validate string content."""
|
||||
if not isinstance(self.content, str):
|
||||
raise ValueError("StringKnowledgeSource only accepts string content")
|
||||
|
||||
@@ -44,6 +44,7 @@ LLM_CONTEXT_WINDOW_SIZES = {
    "o1-preview": 128000,
    "o1-mini": 128000,
    # gemini
    "gemini-2.0-flash": 1048576,
    "gemini-1.5-pro": 2097152,
    "gemini-1.5-flash": 1048576,
    "gemini-1.5-flash-8b": 1048576,
@@ -63,6 +64,8 @@ LLM_CONTEXT_WINDOW_SIZES = {
    "llama3-70b-8192": 8192,
    "llama3-8b-8192": 8192,
    "mixtral-8x7b-32768": 32768,
    "llama-3.3-70b-versatile": 128000,
    "llama-3.3-70b-instruct": 128000,
}

DEFAULT_CONTEXT_WINDOW_SIZE = 8192
@@ -1,12 +1,25 @@
|
||||
import datetime
|
||||
import inspect
|
||||
import json
|
||||
import logging
|
||||
import threading
|
||||
import uuid
|
||||
from concurrent.futures import Future
|
||||
from copy import copy
|
||||
from hashlib import md5
|
||||
from pathlib import Path
|
||||
from typing import Any, Dict, List, Optional, Set, Tuple, Type, Union
|
||||
from typing import (
|
||||
Any,
|
||||
Callable,
|
||||
ClassVar,
|
||||
Dict,
|
||||
List,
|
||||
Optional,
|
||||
Set,
|
||||
Tuple,
|
||||
Type,
|
||||
Union,
|
||||
)
|
||||
|
||||
from opentelemetry.trace import Span
|
||||
from pydantic import (
|
||||
@@ -20,6 +33,7 @@ from pydantic import (
|
||||
from pydantic_core import PydanticCustomError
|
||||
|
||||
from crewai.agents.agent_builder.base_agent import BaseAgent
|
||||
from crewai.tasks.guardrail_result import GuardrailResult
|
||||
from crewai.tasks.output_format import OutputFormat
|
||||
from crewai.tasks.task_output import TaskOutput
|
||||
from crewai.telemetry.telemetry import Telemetry
|
||||
@@ -49,6 +63,7 @@ class Task(BaseModel):
|
||||
"""
|
||||
|
||||
__hash__ = object.__hash__ # type: ignore
|
||||
logger: ClassVar[logging.Logger] = logging.getLogger(__name__)
|
||||
used_tools: int = 0
|
||||
tools_errors: int = 0
|
||||
delegations: int = 0
|
||||
@@ -110,11 +125,61 @@ class Task(BaseModel):
|
||||
default=None,
|
||||
)
|
||||
processed_by_agents: Set[str] = Field(default_factory=set)
|
||||
guardrail: Optional[Callable[[TaskOutput], Tuple[bool, Any]]] = Field(
|
||||
default=None,
|
||||
description="Function to validate task output before proceeding to next task"
|
||||
)
|
||||
max_retries: int = Field(
|
||||
default=3,
|
||||
description="Maximum number of retries when guardrail fails"
|
||||
)
|
||||
retry_count: int = Field(
|
||||
default=0,
|
||||
description="Current number of retries"
|
||||
)
|
||||
|
||||
@field_validator("guardrail")
|
||||
@classmethod
|
||||
def validate_guardrail_function(cls, v: Optional[Callable]) -> Optional[Callable]:
|
||||
"""Validate that the guardrail function has the correct signature and behavior.
|
||||
|
||||
While type hints provide static checking, this validator ensures runtime safety by:
|
||||
1. Verifying the function accepts exactly one parameter (the TaskOutput)
|
||||
2. Checking return type annotations match Tuple[bool, Any] if present
|
||||
3. Providing clear, immediate error messages for debugging
|
||||
|
||||
This runtime validation is crucial because:
|
||||
- Type hints are optional and can be ignored at runtime
|
||||
- Function signatures need immediate validation before task execution
|
||||
- Clear error messages help users debug guardrail implementation issues
|
||||
|
||||
Args:
|
||||
v: The guardrail function to validate
|
||||
|
||||
Returns:
|
||||
The validated guardrail function
|
||||
|
||||
Raises:
|
||||
ValueError: If the function signature is invalid or return annotation
|
||||
doesn't match Tuple[bool, Any]
|
||||
"""
|
||||
if v is not None:
|
||||
sig = inspect.signature(v)
|
||||
if len(sig.parameters) != 1:
|
||||
raise ValueError("Guardrail function must accept exactly one parameter")
|
||||
|
||||
# Check return annotation if present, but don't require it
|
||||
return_annotation = sig.return_annotation
|
||||
if return_annotation != inspect.Signature.empty:
|
||||
if not (return_annotation == Tuple[bool, Any] or str(return_annotation) == 'Tuple[bool, Any]'):
|
||||
raise ValueError("If return type is annotated, it must be Tuple[bool, Any]")
|
||||
return v
|
||||
|
||||
_telemetry: Telemetry = PrivateAttr(default_factory=Telemetry)
|
||||
_execution_span: Optional[Span] = PrivateAttr(default=None)
|
||||
_original_description: Optional[str] = PrivateAttr(default=None)
|
||||
_original_expected_output: Optional[str] = PrivateAttr(default=None)
|
||||
_original_output_file: Optional[str] = PrivateAttr(default=None)
|
||||
_thread: Optional[threading.Thread] = PrivateAttr(default=None)
|
||||
_execution_time: Optional[float] = PrivateAttr(default=None)
|
||||
|
||||
@@ -149,8 +214,46 @@ class Task(BaseModel):
|
||||
|
||||
@field_validator("output_file")
|
||||
@classmethod
|
||||
def output_file_validation(cls, value: str) -> str:
|
||||
"""Validate the output file path by removing the / from the beginning of the path."""
|
||||
def output_file_validation(cls, value: Optional[str]) -> Optional[str]:
|
||||
"""Validate the output file path.
|
||||
|
||||
Args:
|
||||
value: The output file path to validate. Can be None or a string.
|
||||
If the path contains template variables (e.g. {var}), leading slashes are preserved.
|
||||
For regular paths, leading slashes are stripped.
|
||||
|
||||
Returns:
|
||||
The validated and potentially modified path, or None if no path was provided.
|
||||
|
||||
Raises:
|
||||
ValueError: If the path contains invalid characters, path traversal attempts,
|
||||
or other security concerns.
|
||||
"""
|
||||
if value is None:
|
||||
return None
|
||||
|
||||
# Basic security checks
|
||||
if ".." in value:
|
||||
raise ValueError("Path traversal attempts are not allowed in output_file paths")
|
||||
|
||||
# Check for shell expansion first
|
||||
if value.startswith('~') or value.startswith('$'):
|
||||
raise ValueError("Shell expansion characters are not allowed in output_file paths")
|
||||
|
||||
# Then check other shell special characters
|
||||
if any(char in value for char in ['|', '>', '<', '&', ';']):
|
||||
raise ValueError("Shell special characters are not allowed in output_file paths")
|
||||
|
||||
# Don't strip leading slash if it's a template path with variables
|
||||
if "{" in value or "}" in value:
|
||||
# Validate template variable format
|
||||
template_vars = [part.split("}")[0] for part in value.split("{")[1:]]
|
||||
for var in template_vars:
|
||||
if not var.isidentifier():
|
||||
raise ValueError(f"Invalid template variable name: {var}")
|
||||
return value
|
||||
|
||||
# Strip leading slash for regular paths
|
||||
if value.startswith("/"):
|
||||
return value[1:]
|
||||
return value
|
||||
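Roughly how the `output_file` validator above treats different values (illustrative sketch; behavior inferred from the validator shown here):

```python
from crewai import Task

# Leading slash on a plain path is stripped.
t = Task(description="d", expected_output="o", output_file="/reports/summary.md")
assert t.output_file == "reports/summary.md"

# Template paths keep their placeholders for later interpolation.
t = Task(description="d", expected_output="o", output_file="reports/{topic}_summary.md")
assert t.output_file == "reports/{topic}_summary.md"

# Traversal, shell expansion, and shell metacharacters are rejected up front.
for bad in ("../secrets.txt", "~/notes.txt", "out.md; rm -rf /"):
    try:
        Task(description="d", expected_output="o", output_file=bad)
    except Exception as e:
        print(f"rejected {bad!r}: {e}")
```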
@@ -254,7 +357,6 @@ class Task(BaseModel):
|
||||
)
|
||||
|
||||
pydantic_output, json_output = self._export_output(result)
|
||||
|
||||
task_output = TaskOutput(
|
||||
name=self.name,
|
||||
description=self.description,
|
||||
@@ -265,6 +367,37 @@ class Task(BaseModel):
|
||||
agent=agent.role,
|
||||
output_format=self._get_output_format(),
|
||||
)
|
||||
|
||||
if self.guardrail:
|
||||
guardrail_result = GuardrailResult.from_tuple(self.guardrail(task_output))
|
||||
if not guardrail_result.success:
|
||||
if self.retry_count >= self.max_retries:
|
||||
raise Exception(
|
||||
f"Task failed guardrail validation after {self.max_retries} retries. "
|
||||
f"Last error: {guardrail_result.error}"
|
||||
)
|
||||
|
||||
self.retry_count += 1
|
||||
context = (
|
||||
f"### Previous attempt failed validation: {guardrail_result.error}\n\n\n"
|
||||
f"### Previous result:\n{task_output.raw}\n\n\n"
|
||||
"Try again, making sure to address the validation error."
|
||||
)
|
||||
return self._execute_core(agent, context, tools)
|
||||
|
||||
if guardrail_result.result is None:
|
||||
raise Exception(
|
||||
"Task guardrail returned None as result. This is not allowed."
|
||||
)
|
||||
|
||||
if isinstance(guardrail_result.result, str):
|
||||
task_output.raw = guardrail_result.result
|
||||
pydantic_output, json_output = self._export_output(guardrail_result.result)
|
||||
task_output.pydantic = pydantic_output
|
||||
task_output.json_dict = json_output
|
||||
elif isinstance(guardrail_result.result, TaskOutput):
|
||||
task_output = guardrail_result.result
|
||||
|
||||
self.output = task_output
|
||||
|
||||
self._set_end_execution_time(start_time)
|
||||
@@ -299,16 +432,89 @@ class Task(BaseModel):
|
||||
tasks_slices = [self.description, output]
|
||||
return "\n".join(tasks_slices)
|
||||
|
||||
def interpolate_inputs(self, inputs: Dict[str, Any]) -> None:
|
||||
"""Interpolate inputs into the task description and expected output."""
|
||||
def interpolate_inputs(self, inputs: Dict[str, Union[str, int, float]]) -> None:
|
||||
"""Interpolate inputs into the task description, expected output, and output file path.
|
||||
|
||||
Args:
|
||||
inputs: Dictionary mapping template variables to their values.
|
||||
Supported value types are strings, integers, and floats.
|
||||
|
||||
Raises:
|
||||
ValueError: If a required template variable is missing from inputs.
|
||||
"""
|
||||
if self._original_description is None:
|
||||
self._original_description = self.description
|
||||
if self._original_expected_output is None:
|
||||
self._original_expected_output = self.expected_output
|
||||
if self.output_file is not None and self._original_output_file is None:
|
||||
self._original_output_file = self.output_file
|
||||
|
||||
if inputs:
|
||||
if not inputs:
|
||||
return
|
||||
|
||||
try:
|
||||
self.description = self._original_description.format(**inputs)
|
||||
self.expected_output = self._original_expected_output.format(**inputs)
|
||||
except KeyError as e:
|
||||
raise ValueError(f"Missing required template variable '{e.args[0]}' in description") from e
|
||||
except ValueError as e:
|
||||
raise ValueError(f"Error interpolating description: {str(e)}") from e
|
||||
|
||||
try:
|
||||
self.expected_output = self.interpolate_only(
|
||||
input_string=self._original_expected_output, inputs=inputs
|
||||
)
|
||||
except (KeyError, ValueError) as e:
|
||||
raise ValueError(f"Error interpolating expected_output: {str(e)}") from e
|
||||
|
||||
if self.output_file is not None:
|
||||
try:
|
||||
self.output_file = self.interpolate_only(
|
||||
input_string=self._original_output_file, inputs=inputs
|
||||
)
|
||||
except (KeyError, ValueError) as e:
|
||||
raise ValueError(f"Error interpolating output_file path: {str(e)}") from e
|
||||
|
||||
def interpolate_only(self, input_string: Optional[str], inputs: Dict[str, Union[str, int, float]]) -> str:
|
||||
"""Interpolate placeholders (e.g., {key}) in a string while leaving JSON untouched.
|
||||
|
||||
Args:
|
||||
input_string: The string containing template variables to interpolate.
|
||||
Can be None or empty, in which case an empty string is returned.
|
||||
inputs: Dictionary mapping template variables to their values.
|
||||
Supported value types are strings, integers, and floats.
|
||||
If input_string is empty or has no placeholders, inputs can be empty.
|
||||
|
||||
Returns:
|
||||
The interpolated string with all template variables replaced with their values.
|
||||
Empty string if input_string is None or empty.
|
||||
|
||||
Raises:
|
||||
ValueError: If a required template variable is missing from inputs.
|
||||
KeyError: If a template variable is not found in the inputs dictionary.
|
||||
"""
|
||||
if input_string is None or not input_string:
|
||||
return ""
|
||||
if "{" not in input_string and "}" not in input_string:
|
||||
return input_string
|
||||
if not inputs:
|
||||
raise ValueError("Inputs dictionary cannot be empty when interpolating variables")
|
||||
|
||||
try:
|
||||
# Validate input types
|
||||
for key, value in inputs.items():
|
||||
if not isinstance(value, (str, int, float)):
|
||||
raise ValueError(f"Value for key '{key}' must be a string, integer, or float, got {type(value).__name__}")
|
||||
|
||||
escaped_string = input_string.replace("{", "{{").replace("}", "}}")
|
||||
|
||||
for key in inputs.keys():
|
||||
escaped_string = escaped_string.replace(f"{{{{{key}}}}}", f"{{{key}}}")
|
||||
|
||||
return escaped_string.format(**inputs)
|
||||
except KeyError as e:
|
||||
raise KeyError(f"Template variable '{e.args[0]}' not found in inputs dictionary") from e
|
||||
except ValueError as e:
|
||||
raise ValueError(f"Error during string interpolation: {str(e)}") from e
|
||||
|
||||
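The brace-escaping step is the subtle part of `interpolate_only`; a standalone sketch of the same transformation:

```python
# Literal JSON braces survive, and only placeholders whose keys appear in
# `inputs` are turned back into format fields before .format() runs.
template = 'Return JSON like {"topic": "{topic}", "year": {year}}'
inputs = {"topic": "AI agents", "year": 2025}

escaped = template.replace("{", "{{").replace("}", "}}")
for key in inputs:
    escaped = escaped.replace(f"{{{{{key}}}}}", f"{{{key}}}")

print(escaped.format(**inputs))
# Return JSON like {"topic": "AI agents", "year": 2025}
```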
def increment_tools_errors(self) -> None:
|
||||
"""Increment the tools errors counter."""
|
||||
@@ -390,22 +596,33 @@ class Task(BaseModel):
|
||||
return OutputFormat.RAW
|
||||
|
||||
def _save_file(self, result: Any) -> None:
|
||||
"""Save task output to a file.
|
||||
|
||||
Args:
|
||||
result: The result to save to the file. Can be a dict or any stringifiable object.
|
||||
|
||||
Raises:
|
||||
ValueError: If output_file is not set
|
||||
RuntimeError: If there is an error writing to the file
|
||||
"""
|
||||
if self.output_file is None:
|
||||
raise ValueError("output_file is not set.")
|
||||
|
||||
resolved_path = Path(self.output_file).expanduser().resolve()
|
||||
directory = resolved_path.parent
|
||||
try:
|
||||
resolved_path = Path(self.output_file).expanduser().resolve()
|
||||
directory = resolved_path.parent
|
||||
|
||||
if not directory.exists():
|
||||
directory.mkdir(parents=True, exist_ok=True)
|
||||
if not directory.exists():
|
||||
directory.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
with resolved_path.open("w", encoding="utf-8") as file:
|
||||
if isinstance(result, dict):
|
||||
import json
|
||||
|
||||
json.dump(result, file, ensure_ascii=False, indent=2)
|
||||
else:
|
||||
file.write(str(result))
|
||||
with resolved_path.open("w", encoding="utf-8") as file:
|
||||
if isinstance(result, dict):
|
||||
import json
|
||||
json.dump(result, file, ensure_ascii=False, indent=2)
|
||||
else:
|
||||
file.write(str(result))
|
||||
except (OSError, IOError) as e:
|
||||
raise RuntimeError(f"Failed to save output file: {e}")
|
||||
return None
|
||||
|
||||
def __repr__(self):
|
||||
|
||||
src/crewai/tasks/guardrail_result.py (new file, 56 additions)
@@ -0,0 +1,56 @@
"""
Module for handling task guardrail validation results.

This module provides the GuardrailResult class which standardizes
the way task guardrails return their validation results.
"""

from typing import Any, Optional, Tuple, Union

from pydantic import BaseModel, field_validator


class GuardrailResult(BaseModel):
    """Result from a task guardrail execution.

    This class standardizes the return format of task guardrails,
    converting tuple responses into a structured format that can
    be easily handled by the task execution system.

    Attributes:
        success (bool): Whether the guardrail validation passed
        result (Any, optional): The validated/transformed result if successful
        error (str, optional): Error message if validation failed
    """
    success: bool
    result: Optional[Any] = None
    error: Optional[str] = None

    @field_validator("result", "error")
    @classmethod
    def validate_result_error_exclusivity(cls, v: Any, info) -> Any:
        values = info.data
        if "success" in values:
            if values["success"] and v and "error" in values and values["error"]:
                raise ValueError("Cannot have both result and error when success is True")
            if not values["success"] and v and "result" in values and values["result"]:
                raise ValueError("Cannot have both result and error when success is False")
        return v

    @classmethod
    def from_tuple(cls, result: Tuple[bool, Union[Any, str]]) -> "GuardrailResult":
        """Create a GuardrailResult from a validation tuple.

        Args:
            result: A tuple of (success, data) where data is either
                the validated result or error message.

        Returns:
            GuardrailResult: A new instance with the tuple data.
        """
        success, data = result
        return cls(
            success=success,
            result=data if success else None,
            error=data if not success else None
        )
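How a task guardrail and `GuardrailResult` fit together, as a sketch (the JSON check below is only an example validator, not part of the diff):

```python
import json
from typing import Any, Tuple

from crewai.tasks.guardrail_result import GuardrailResult
from crewai.tasks.task_output import TaskOutput


def must_be_json(output: TaskOutput) -> Tuple[bool, Any]:
    """Guardrail: accept the output only if it parses as JSON."""
    try:
        return True, json.loads(output.raw)
    except json.JSONDecodeError as e:
        return False, f"Output is not valid JSON: {e}"


# Task(..., guardrail=must_be_json, max_retries=2) calls the guardrail after
# each attempt; a failing tuple becomes a GuardrailResult carrying the error.
failed = GuardrailResult.from_tuple((False, "Output is not valid JSON: ..."))
assert failed.success is False and failed.error is not None
```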
src/crewai/tools/agent_tools/add_image_tool.py (new file, 45 additions)
@@ -0,0 +1,45 @@
|
||||
from typing import Dict, Optional, Union

from pydantic import BaseModel, Field

from crewai.tools.base_tool import BaseTool
from crewai.utilities import I18N

i18n = I18N()


class AddImageToolSchema(BaseModel):
    image_url: str = Field(..., description="The URL or path of the image to add")
    action: Optional[str] = Field(
        default=None,
        description="Optional context or question about the image"
    )


class AddImageTool(BaseTool):
    """Tool for adding images to the content"""

    name: str = Field(default_factory=lambda: i18n.tools("add_image")["name"])  # type: ignore
    description: str = Field(default_factory=lambda: i18n.tools("add_image")["description"])  # type: ignore
    args_schema: type[BaseModel] = AddImageToolSchema

    def _run(
        self,
        image_url: str,
        action: Optional[str] = None,
        **kwargs,
    ) -> dict:
        action = action or i18n.tools("add_image")["default_action"]  # type: ignore
        content = [
            {"type": "text", "text": action},
            {
                "type": "image_url",
                "image_url": {
                    "url": image_url,
                },
            }
        ]

        return {
            "role": "user",
            "content": content
        }
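A brief sketch of what this tool produces (not part of the diff): `_run` builds a multimodal "user" message combining the optional question with the image URL. The URL and question below are placeholders, and the private `_run` call stands in for the normal tool-use flow an agent would go through.

```python
# Minimal sketch, assuming this branch of crewAI is installed.
from crewai.tools.agent_tools.add_image_tool import AddImageTool

tool = AddImageTool()
message = tool._run(
    image_url="https://example.com/diagram.png",          # placeholder URL
    action="What architecture does this diagram show?",   # optional question
)

# Expected shape: a user message whose content mixes a text part and an image_url part:
# {'role': 'user', 'content': [{'type': 'text', 'text': '...'},
#                              {'type': 'image_url', 'image_url': {'url': '...'}}]}
print(message["role"], [part["type"] for part in message["content"]])
```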
@@ -20,13 +20,13 @@ class AgentTools:
        delegate_tool = DelegateWorkTool(
            agents=self.agents,
            i18n=self.i18n,
            description=self.i18n.tools("delegate_work").format(coworkers=coworkers),
            description=self.i18n.tools("delegate_work").format(coworkers=coworkers),  # type: ignore
        )

        ask_tool = AskQuestionTool(
            agents=self.agents,
            i18n=self.i18n,
            description=self.i18n.tools("ask_question").format(coworkers=coworkers),
            description=self.i18n.tools("ask_question").format(coworkers=coworkers),  # type: ignore
        )

        return [delegate_tool, ask_tool]
@@ -1,3 +1,4 @@
import logging
from typing import Optional, Union

from pydantic import Field
@@ -7,6 +8,8 @@ from crewai.task import Task
from crewai.tools.base_tool import BaseTool
from crewai.utilities import I18N

logger = logging.getLogger(__name__)


class BaseAgentTool(BaseTool):
    """Base class for agent-related tools"""
@@ -16,6 +19,25 @@ class BaseAgentTool(BaseTool):
        default_factory=I18N, description="Internationalization settings"
    )

    def sanitize_agent_name(self, name: str) -> str:
        """
        Sanitize agent role name by normalizing whitespace and setting to lowercase.
        Converts all whitespace (including newlines) to single spaces and removes quotes.

        Args:
            name (str): The agent role name to sanitize

        Returns:
            str: The sanitized agent role name, with whitespace normalized,
                converted to lowercase, and quotes removed
        """
        if not name:
            return ""
        # Normalize all whitespace (including newlines) to single spaces
        normalized = " ".join(name.split())
        # Remove quotes and convert to lowercase
        return normalized.replace('"', "").casefold()

    def _get_coworker(self, coworker: Optional[str], **kwargs) -> Optional[str]:
        coworker = coworker or kwargs.get("co_worker") or kwargs.get("coworker")
        if coworker:
@@ -25,11 +47,27 @@ class BaseAgentTool(BaseTool):
        return coworker

    def _execute(
        self, agent_name: Union[str, None], task: str, context: Union[str, None]
        self,
        agent_name: Optional[str],
        task: str,
        context: Optional[str] = None
    ) -> str:
        """
        Execute delegation to an agent with case-insensitive and whitespace-tolerant matching.

        Args:
            agent_name: Name/role of the agent to delegate to (case-insensitive)
            task: The specific question or task to delegate
            context: Optional additional context for the task execution

        Returns:
            str: The execution result from the delegated agent or an error message
                if the agent cannot be found
        """
        try:
            if agent_name is None:
                agent_name = ""
                logger.debug("No agent name provided, using empty string")

            # It is important to remove the quotes from the agent name.
            # The reason we have to do this is because less-powerful LLM's
@@ -38,31 +76,49 @@ class BaseAgentTool(BaseTool):
            # {"task": "....", "coworker": "....
            # when it should look like this:
            # {"task": "....", "coworker": "...."}
            agent_name = agent_name.casefold().replace('"', "").replace("\n", "")
            sanitized_name = self.sanitize_agent_name(agent_name)
            logger.debug(f"Sanitized agent name from '{agent_name}' to '{sanitized_name}'")

            available_agents = [agent.role for agent in self.agents]
            logger.debug(f"Available agents: {available_agents}")

            agent = [  # type: ignore # Incompatible types in assignment (expression has type "list[BaseAgent]", variable has type "str | None")
                available_agent
                for available_agent in self.agents
                if available_agent.role.casefold().replace("\n", "") == agent_name
                if self.sanitize_agent_name(available_agent.role) == sanitized_name
            ]
        except Exception as _:
            logger.debug(f"Found {len(agent)} matching agents for role '{sanitized_name}'")
        except (AttributeError, ValueError) as e:
            # Handle specific exceptions that might occur during role name processing
            return self.i18n.errors("agent_tool_unexisting_coworker").format(
                coworkers="\n".join(
                    [f"- {agent.role.casefold()}" for agent in self.agents]
                )
                    [f"- {self.sanitize_agent_name(agent.role)}" for agent in self.agents]
                ),
                error=str(e)
            )

        if not agent:
            # No matching agent found after sanitization
            return self.i18n.errors("agent_tool_unexisting_coworker").format(
                coworkers="\n".join(
                    [f"- {agent.role.casefold()}" for agent in self.agents]
                )
                    [f"- {self.sanitize_agent_name(agent.role)}" for agent in self.agents]
                ),
                error=f"No agent found with role '{sanitized_name}'"
            )

        agent = agent[0]
        task_with_assigned_agent = Task(  # type: ignore # Incompatible types in assignment (expression has type "Task", variable has type "str")
            description=task,
            agent=agent,
            expected_output=agent.i18n.slice("manager_request"),
            i18n=agent.i18n,
        )
        return agent.execute_task(task_with_assigned_agent, context)
        try:
            task_with_assigned_agent = Task(
                description=task,
                agent=agent,
                expected_output=agent.i18n.slice("manager_request"),
                i18n=agent.i18n,
            )
            logger.debug(f"Created task for agent '{self.sanitize_agent_name(agent.role)}': {task}")
            return agent.execute_task(task_with_assigned_agent, context)
        except Exception as e:
            # Handle task creation or execution errors
            return self.i18n.errors("agent_tool_execution_error").format(
                agent_role=self.sanitize_agent_name(agent.role),
                error=str(e)
            )
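To illustrate why the sanitization step above matters, here is a standalone replica of the normalization rule (a sketch, not the library code itself): role names emitted by less capable LLMs often carry stray quotes, newlines, or different casing, and this normalization lets them still match the configured coworker.

```python
# Standalone replica of the sanitization logic added in this diff.
def sanitize_agent_name(name: str) -> str:
    if not name:
        return ""
    normalized = " ".join(name.split())            # collapse whitespace and newlines
    return normalized.replace('"', "").casefold()  # strip quotes, lowercase

# A role emitted with stray quotes and a newline still matches "Senior Writer".
assert sanitize_agent_name('"Senior\nWriter"') == sanitize_agent_name("Senior Writer")
print(sanitize_agent_name('"Senior\nWriter"'))  # senior writer
```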
@@ -10,6 +10,7 @@ from crewai.agents.tools_handler import ToolsHandler
from crewai.task import Task
from crewai.telemetry import Telemetry
from crewai.tools import BaseTool
from crewai.tools.structured_tool import CrewStructuredTool
from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling
from crewai.tools.tool_usage_events import ToolUsageError, ToolUsageFinished
from crewai.utilities import I18N, Converter, ConverterError, Printer
@@ -18,8 +19,7 @@ try:
    import agentops  # type: ignore
except ImportError:
    agentops = None

OPENAI_BIGGER_MODELS = ["gpt-4", "gpt-4o", "o1-preview", "o1-mini"]
OPENAI_BIGGER_MODELS = ["gpt-4", "gpt-4o", "o1-preview", "o1-mini", "o1", "o3", "o3-mini"]


class ToolUsageErrorException(Exception):
@@ -103,6 +103,19 @@ class ToolUsage:
            if self.agent.verbose:
                self._printer.print(content=f"\n\n{error}\n", color="red")
            return error

        if isinstance(tool, CrewStructuredTool) and tool.name == self._i18n.tools("add_image")["name"]:  # type: ignore
            try:
                result = self._use(tool_string=tool_string, tool=tool, calling=calling)
                return result

            except Exception as e:
                error = getattr(e, "message", str(e))
                self.task.increment_tools_errors()
                if self.agent.verbose:
                    self._printer.print(content=f"\n\n{error}\n", color="red")
                return error

        return f"{self._use(tool_string=tool_string, tool=tool, calling=calling)}"  # type: ignore # BUG?: "_use" of "ToolUsage" does not return a value (it only ever returns None)

    def _use(
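A minimal sketch of the routing this hunk introduces (the surrounding objects are stubbed for illustration, and the dispatcher function below is hypothetical): results from the add-image tool are passed through unchanged as a multimodal message dict, while every other tool result keeps the legacy behavior of being coerced to a string observation.

```python
# Hypothetical stand-in for the branch added to ToolUsage.use above.
def dispatch(tool_name: str, raw_result):
    if tool_name == "Add image to content":
        return raw_result      # keep the {'role': 'user', 'content': [...]} dict intact
    return f"{raw_result}"     # legacy behavior: stringified observation


multimodal = {"role": "user", "content": [{"type": "text", "text": "describe this image"}]}
print(dispatch("Add image to content", multimodal))  # dict preserved for the LLM call
print(dispatch("Some other tool", {"answer": 42}))   # plain string observation
```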
@@ -12,7 +12,7 @@
    "tools": "\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\n{tools}\n\nUse the following format:\n\nThought: you should always think about what to do\nAction: the action to take, only one name of [{tool_names}], just the name, exactly as it's written.\nAction Input: the input to the action, just a simple python dictionary, enclosed in curly braces, using \" to wrap keys and values.\nObservation: the result of the action\n\nOnce all necessary information is gathered:\n\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n",
    "no_tools": "\nTo give my best complete final answer to the task use the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!",
    "format": "I MUST either use a tool (use one at time) OR give my best final answer not both at the same time. To Use the following format:\n\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action, dictionary enclosed in curly braces\nObservation: the result of the action\n... (this Thought/Action/Action Input/Result can repeat N times)\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described\n\n",
    "final_answer_format": "If you don't need to use any more tools, you must give your best complete final answer, make sure it satisfy the expect criteria, use the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\n\n",
    "final_answer_format": "If you don't need to use any more tools, you must give your best complete final answer, make sure it satisfies the expected criteria, use the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\n\n",
    "format_without_tools": "\nSorry, I didn't use the right format. I MUST either use a tool (among the available ones), OR give my best final answer.\nI just remembered the expected format I must follow:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Result can repeat N times)\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described\n\n",
    "task_with_context": "{task}\n\nThis is the context you're working with:\n{context}",
    "expected_output": "\nThis is the expect criteria for your final answer: {expected_output}\nyou MUST return the actual complete content as the final answer, not a summary.",
@@ -33,10 +33,16 @@
    "tool_usage_error": "I encountered an error: {error}",
    "tool_arguments_error": "Error: the Action Input is not a valid key, value dictionary.",
    "wrong_tool_name": "You tried to use the tool {tool}, but it doesn't exist. You must use one of the following tools, use one at time: {tools}.",
    "tool_usage_exception": "I encountered an error while trying to use the tool. This was the error: {error}.\n Tool {tool} accepts these inputs: {tool_inputs}"
    "tool_usage_exception": "I encountered an error while trying to use the tool. This was the error: {error}.\n Tool {tool} accepts these inputs: {tool_inputs}",
    "agent_tool_execution_error": "Error executing task with agent '{agent_role}'. Error: {error}"
  },
  "tools": {
    "delegate_work": "Delegate a specific task to one of the following coworkers: {coworkers}\nThe input to this tool should be the coworker, the task you want them to do, and ALL necessary context to execute the task, they know nothing about the task, so share absolute everything you know, don't reference things but instead explain them.",
    "ask_question": "Ask a specific question to one of the following coworkers: {coworkers}\nThe input to this tool should be the coworker, the question you have for them, and ALL necessary context to ask the question properly, they know nothing about the question, so share absolute everything you know, don't reference things but instead explain them."
    "ask_question": "Ask a specific question to one of the following coworkers: {coworkers}\nThe input to this tool should be the coworker, the question you have for them, and ALL necessary context to ask the question properly, they know nothing about the question, so share absolute everything you know, don't reference things but instead explain them.",
    "add_image": {
      "name": "Add image to content",
      "description": "See image to understand it's content, you can optionally ask a question about the image",
      "default_action": "Please provide a detailed description of this image, including all visual elements, context, and any notable details you can observe."
    }
  }
}
@@ -1,6 +1,6 @@
import json
import os
from typing import Dict, Optional
from typing import Dict, Optional, Union

from pydantic import BaseModel, Field, PrivateAttr, model_validator

@@ -41,8 +41,8 @@ class I18N(BaseModel):
    def errors(self, error: str) -> str:
        return self.retrieve("errors", error)

    def tools(self, error: str) -> str:
        return self.retrieve("tools", error)
    def tools(self, tool: str) -> Union[str, Dict[str, str]]:
        return self.retrieve("tools", tool)

    def retrieve(self, kind, key) -> str:
        try:
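A short sketch of the widened `I18N.tools` return type introduced above (assuming the default en.json translations shipped with this branch): plain prompt entries such as "delegate_work" stay strings, while the new "add_image" entry comes back as a nested dict.

```python
# Minimal sketch, assuming this branch of crewAI is installed with the default
# en.json translations shown in the diff above.
from crewai.utilities import I18N

i18n = I18N()

delegate_prompt = i18n.tools("delegate_work")   # str, still formatted with {coworkers} later
add_image_entry = i18n.tools("add_image")       # dict with name/description/default_action

print(type(delegate_prompt).__name__)           # str
print(add_image_entry["name"])                  # "Add image to content"
```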
@@ -26237,7 +26237,7 @@ interactions:
|
||||
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
|
||||
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
|
||||
I have already used.\n\nIf you don''t need to use any more tools, you must give
|
||||
your best complete final answer, make sure it satisfy the expect criteria, use
|
||||
your best complete final answer, make sure it satisfies the expected criteria, use
|
||||
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
|
||||
my best complete final answer to the task.\n\n"}], "model": "gpt-4o"}'
|
||||
headers:
|
||||
@@ -26590,7 +26590,7 @@ interactions:
|
||||
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
|
||||
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
|
||||
I have already used.\n\nIf you don''t need to use any more tools, you must give
|
||||
your best complete final answer, make sure it satisfy the expect criteria, use
|
||||
your best complete final answer, make sure it satisfies the expected criteria, use
|
||||
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
|
||||
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
|
||||
"I did it wrong. Tried to both perform Action and give a Final Answer at the
|
||||
@@ -26941,7 +26941,7 @@ interactions:
|
||||
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
|
||||
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
|
||||
I have already used.\n\nIf you don''t need to use any more tools, you must give
|
||||
your best complete final answer, make sure it satisfy the expect criteria, use
|
||||
your best complete final answer, make sure it satisfies the expected criteria, use
|
||||
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
|
||||
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
|
||||
"I did it wrong. Tried to both perform Action and give a Final Answer at the
|
||||
@@ -27292,7 +27292,7 @@ interactions:
|
||||
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
|
||||
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
|
||||
I have already used.\n\nIf you don''t need to use any more tools, you must give
|
||||
your best complete final answer, make sure it satisfy the expect criteria, use
|
||||
your best complete final answer, make sure it satisfies the expected criteria, use
|
||||
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
|
||||
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
|
||||
"I did it wrong. Tried to both perform Action and give a Final Answer at the
|
||||
@@ -27647,7 +27647,7 @@ interactions:
|
||||
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
|
||||
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
|
||||
I have already used.\n\nIf you don''t need to use any more tools, you must give
|
||||
your best complete final answer, make sure it satisfy the expect criteria, use
|
||||
your best complete final answer, make sure it satisfies the expected criteria, use
|
||||
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
|
||||
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
|
||||
"I did it wrong. Tried to both perform Action and give a Final Answer at the
|
||||
@@ -28005,7 +28005,7 @@ interactions:
|
||||
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
|
||||
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
|
||||
I have already used.\n\nIf you don''t need to use any more tools, you must give
|
||||
your best complete final answer, make sure it satisfy the expect criteria, use
|
||||
your best complete final answer, make sure it satisfies the expected criteria, use
|
||||
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
|
||||
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
|
||||
"I did it wrong. Tried to both perform Action and give a Final Answer at the
|
||||
@@ -28364,7 +28364,7 @@ interactions:
|
||||
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
|
||||
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
|
||||
I have already used.\n\nIf you don''t need to use any more tools, you must give
|
||||
your best complete final answer, make sure it satisfy the expect criteria, use
|
||||
your best complete final answer, make sure it satisfies the expected criteria, use
|
||||
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
|
||||
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
|
||||
"I did it wrong. Tried to both perform Action and give a Final Answer at the
|
||||
@@ -28718,7 +28718,7 @@ interactions:
|
||||
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
|
||||
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
|
||||
I have already used.\n\nIf you don''t need to use any more tools, you must give
|
||||
your best complete final answer, make sure it satisfy the expect criteria, use
|
||||
your best complete final answer, make sure it satisfies the expected criteria, use
|
||||
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
|
||||
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
|
||||
"I did it wrong. Tried to both perform Action and give a Final Answer at the
|
||||
@@ -29082,7 +29082,7 @@ interactions:
|
||||
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
|
||||
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
|
||||
I have already used.\n\nIf you don''t need to use any more tools, you must give
|
||||
your best complete final answer, make sure it satisfy the expect criteria, use
|
||||
your best complete final answer, make sure it satisfies the expected criteria, use
|
||||
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
|
||||
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
|
||||
"I did it wrong. Tried to both perform Action and give a Final Answer at the
|
||||
@@ -29441,7 +29441,7 @@ interactions:
|
||||
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
|
||||
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
|
||||
I have already used.\n\nIf you don''t need to use any more tools, you must give
|
||||
your best complete final answer, make sure it satisfy the expect criteria, use
|
||||
your best complete final answer, make sure it satisfies the expected criteria, use
|
||||
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
|
||||
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
|
||||
"I did it wrong. Tried to both perform Action and give a Final Answer at the
|
||||
@@ -29802,7 +29802,7 @@ interactions:
|
||||
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
|
||||
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
|
||||
I have already used.\n\nIf you don''t need to use any more tools, you must give
|
||||
your best complete final answer, make sure it satisfy the expect criteria, use
|
||||
your best complete final answer, make sure it satisfies the expected criteria, use
|
||||
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
|
||||
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
|
||||
"I did it wrong. Tried to both perform Action and give a Final Answer at the
|
||||
@@ -30170,7 +30170,7 @@ interactions:
|
||||
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
|
||||
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
|
||||
I have already used.\n\nIf you don''t need to use any more tools, you must give
|
||||
your best complete final answer, make sure it satisfy the expect criteria, use
|
||||
your best complete final answer, make sure it satisfies the expected criteria, use
|
||||
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
|
||||
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
|
||||
"I did it wrong. Tried to both perform Action and give a Final Answer at the
|
||||
@@ -30533,7 +30533,7 @@ interactions:
|
||||
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
|
||||
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
|
||||
I have already used.\n\nIf you don''t need to use any more tools, you must give
|
||||
your best complete final answer, make sure it satisfy the expect criteria, use
|
||||
your best complete final answer, make sure it satisfies the expected criteria, use
|
||||
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
|
||||
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
|
||||
"I did it wrong. Tried to both perform Action and give a Final Answer at the
|
||||
@@ -30907,7 +30907,7 @@ interactions:
|
||||
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
|
||||
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
|
||||
I have already used.\n\nIf you don''t need to use any more tools, you must give
|
||||
your best complete final answer, make sure it satisfy the expect criteria, use
|
||||
your best complete final answer, make sure it satisfies the expected criteria, use
|
||||
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
|
||||
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
|
||||
"I did it wrong. Tried to both perform Action and give a Final Answer at the
|
||||
@@ -31273,7 +31273,7 @@ interactions:
|
||||
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
|
||||
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
|
||||
I have already used.\n\nIf you don''t need to use any more tools, you must give
|
||||
your best complete final answer, make sure it satisfy the expect criteria, use
|
||||
your best complete final answer, make sure it satisfies the expected criteria, use
|
||||
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
|
||||
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
|
||||
"I did it wrong. Tried to both perform Action and give a Final Answer at the
|
||||
@@ -31644,7 +31644,7 @@ interactions:
|
||||
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
|
||||
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
|
||||
I have already used.\n\nIf you don''t need to use any more tools, you must give
|
||||
your best complete final answer, make sure it satisfy the expect criteria, use
|
||||
your best complete final answer, make sure it satisfies the expected criteria, use
|
||||
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
|
||||
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
|
||||
"I did it wrong. Tried to both perform Action and give a Final Answer at the
|
||||
@@ -32015,7 +32015,7 @@ interactions:
|
||||
answer."}, {"role": "user", "content": "I did it wrong. Invalid Format: I missed
|
||||
the ''Action:'' after ''Thought:''. I will do right next, and don''t use a tool
|
||||
I have already used.\n\nIf you don''t need to use any more tools, you must give
|
||||
your best complete final answer, make sure it satisfy the expect criteria, use
|
||||
your best complete final answer, make sure it satisfies the expected criteria, use
|
||||
the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer:
|
||||
my best complete final answer to the task.\n\n"}, {"role": "user", "content":
|
||||
"I did it wrong. Tried to both perform Action and give a Final Answer at the
|
||||
|
||||
@@ -247,7 +247,7 @@ interactions:
|
||||
{"role": "user", "content": "I did it wrong. Invalid Format: I missed the ''Action:''
|
||||
after ''Thought:''. I will do right next, and don''t use a tool I have already
|
||||
used.\n\nIf you don''t need to use any more tools, you must give your best complete
|
||||
final answer, make sure it satisfy the expect criteria, use the EXACT format
|
||||
final answer, make sure it satisfies the expected criteria, use the EXACT format
|
||||
below:\n\nThought: I now can give a great answer\nFinal Answer: my best complete
|
||||
final answer to the task.\n\n"}], "model": "o1-preview"}'
|
||||
headers:
|
||||
|
||||
tests/cassettes/test_crew_output_file_end_to_end.yaml (new file, 243 lines)
@@ -0,0 +1,243 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: !!binary |
|
||||
CuIcCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkSuRwKEgoQY3Jld2FpLnRl
|
||||
bGVtZXRyeRKjBwoQXK7w4+uvyEkrI9D5qyvcJxII5UmQ7hmczdIqDENyZXcgQ3JlYXRlZDABOfxQ
|
||||
/hs4jBUYQUi3DBw4jBUYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoaCg5weXRob25fdmVy
|
||||
c2lvbhIICgYzLjEyLjdKLgoIY3Jld19rZXkSIgogYzk3YjVmZWI1ZDFiNjZiYjU5MDA2YWFhMDFh
|
||||
MjljZDZKMQoHY3Jld19pZBImCiRkZjY3NGMwYi1hOTc0LTQ3NTAtYjlkMS0yZWQxNjM3MzFiNTZK
|
||||
HAoMY3Jld19wcm9jZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdf
|
||||
bnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgBStECCgtjcmV3
|
||||
X2FnZW50cxLBAgq+Alt7ImtleSI6ICIwN2Q5OWI2MzA0MTFkMzVmZDkwNDdhNTMyZDUzZGRhNyIs
|
||||
ICJpZCI6ICI5MDYwYTQ2Zi02MDY3LTQ1N2MtOGU3ZC04NjAyN2YzY2U5ZDUiLCAicm9sZSI6ICJS
|
||||
ZXNlYXJjaGVyIiwgInZlcmJvc2U/IjogZmFsc2UsICJtYXhfaXRlciI6IDIwLCAibWF4X3JwbSI6
|
||||
IG51bGwsICJmdW5jdGlvbl9jYWxsaW5nX2xsbSI6ICIiLCAibGxtIjogImdwdC00by1taW5pIiwg
|
||||
ImRlbGVnYXRpb25fZW5hYmxlZD8iOiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZh
|
||||
bHNlLCAibWF4X3JldHJ5X2xpbWl0IjogMiwgInRvb2xzX25hbWVzIjogW119XUr/AQoKY3Jld190
|
||||
YXNrcxLwAQrtAVt7ImtleSI6ICI2Mzk5NjUxN2YzZjNmMWM5NGQ2YmI2MTdhYTBiMWM0ZiIsICJp
|
||||
ZCI6ICJjYTA4ZjkyOS0yMmI0LTQyZmQtYjViMC05N2M3MjM0ZDk5OTEiLCAiYXN5bmNfZXhlY3V0
|
||||
aW9uPyI6IGZhbHNlLCAiaHVtYW5faW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjogIlJlc2Vh
|
||||
cmNoZXIiLCAiYWdlbnRfa2V5IjogIjA3ZDk5YjYzMDQxMWQzNWZkOTA0N2E1MzJkNTNkZGE3Iiwg
|
||||
InRvb2xzX25hbWVzIjogW119XXoCGAGFAQABAAASjgIKEOTJZh9R45IwgGVg9cinZmISCJopKRMf
|
||||
bpMJKgxUYXNrIENyZWF0ZWQwATlG+zQcOIwVGEHk0zUcOIwVGEouCghjcmV3X2tleRIiCiBjOTdi
|
||||
NWZlYjVkMWI2NmJiNTkwMDZhYWEwMWEyOWNkNkoxCgdjcmV3X2lkEiYKJGRmNjc0YzBiLWE5NzQt
|
||||
NDc1MC1iOWQxLTJlZDE2MzczMWI1NkouCgh0YXNrX2tleRIiCiA2Mzk5NjUxN2YzZjNmMWM5NGQ2
|
||||
YmI2MTdhYTBiMWM0ZkoxCgd0YXNrX2lkEiYKJGNhMDhmOTI5LTIyYjQtNDJmZC1iNWIwLTk3Yzcy
|
||||
MzRkOTk5MXoCGAGFAQABAAASowcKEEvwrN8+tNMIBwtnA+ip7jASCI78Hrh2wlsBKgxDcmV3IENy
|
||||
ZWF0ZWQwATkcRqYeOIwVGEE8erQeOIwVGEoaCg5jcmV3YWlfdmVyc2lvbhIICgYwLjg2LjBKGgoO
|
||||
cHl0aG9uX3ZlcnNpb24SCAoGMy4xMi43Si4KCGNyZXdfa2V5EiIKIDhjMjc1MmY0OWU1YjlkMmI2
|
||||
OGNiMzVjYWM4ZmNjODZkSjEKB2NyZXdfaWQSJgokZmRkYzA4ZTMtNDUyNi00N2Q2LThlNWMtNjY0
|
||||
YzIyMjc4ZDgyShwKDGNyZXdfcHJvY2VzcxIMCgpzZXF1ZW50aWFsShEKC2NyZXdfbWVtb3J5EgIQ
|
||||
AEoaChRjcmV3X251bWJlcl9vZl90YXNrcxICGAFKGwoVY3Jld19udW1iZXJfb2ZfYWdlbnRzEgIY
|
||||
AUrRAgoLY3Jld19hZ2VudHMSwQIKvgJbeyJrZXkiOiAiOGJkMjEzOWI1OTc1MTgxNTA2ZTQxZmQ5
|
||||
YzQ1NjNkNzUiLCAiaWQiOiAiY2UxNjA2YjktMjdiOS00ZDc4LWEyODctNDZiMDNlZDg3ZTA1Iiwg
|
||||
InJvbGUiOiAiUmVzZWFyY2hlciIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAyMCwg
|
||||
Im1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQt
|
||||
NG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1
|
||||
dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtdfV1K
|
||||
/wEKCmNyZXdfdGFza3MS8AEK7QFbeyJrZXkiOiAiMGQ2ODVhMjE5OTRkOTQ5MDk3YmM1YTU2ZDcz
|
||||
N2U2ZDEiLCAiaWQiOiAiNDdkMzRjZjktMGYxZS00Y2JkLTgzMzItNzRjZjY0YWRlOThlIiwgImFz
|
||||
eW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9s
|
||||
ZSI6ICJSZXNlYXJjaGVyIiwgImFnZW50X2tleSI6ICI4YmQyMTM5YjU5NzUxODE1MDZlNDFmZDlj
|
||||
NDU2M2Q3NSIsICJ0b29sc19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEo4CChAf4TXS782b0PBJ4NSB
|
||||
JXwsEgjXnd13GkMzlyoMVGFzayBDcmVhdGVkMAE5mb/cHjiMFRhBGRTiHjiMFRhKLgoIY3Jld19r
|
||||
ZXkSIgogOGMyNzUyZjQ5ZTViOWQyYjY4Y2IzNWNhYzhmY2M4NmRKMQoHY3Jld19pZBImCiRmZGRj
|
||||
MDhlMy00NTI2LTQ3ZDYtOGU1Yy02NjRjMjIyNzhkODJKLgoIdGFza19rZXkSIgogMGQ2ODVhMjE5
|
||||
OTRkOTQ5MDk3YmM1YTU2ZDczN2U2ZDFKMQoHdGFza19pZBImCiQ0N2QzNGNmOS0wZjFlLTRjYmQt
|
||||
ODMzMi03NGNmNjRhZGU5OGV6AhgBhQEAAQAAEqMHChAyBGKhzDhROB5pmAoXrikyEgj6SCwzj1dU
|
||||
LyoMQ3JldyBDcmVhdGVkMAE5vkjTHziMFRhBRDbhHziMFRhKGgoOY3Jld2FpX3ZlcnNpb24SCAoG
|
||||
MC44Ni4wShoKDnB5dGhvbl92ZXJzaW9uEggKBjMuMTIuN0ouCghjcmV3X2tleRIiCiBiNjczNjg2
|
||||
ZmM4MjJjMjAzYzdlODc5YzY3NTQyNDY5OUoxCgdjcmV3X2lkEiYKJGYyYWVlYTYzLTU2OWUtNDUz
|
||||
NS1iZTY0LTRiZjYzZmU5NjhjN0ocCgxjcmV3X3Byb2Nlc3MSDAoKc2VxdWVudGlhbEoRCgtjcmV3
|
||||
X21lbW9yeRICEABKGgoUY3Jld19udW1iZXJfb2ZfdGFza3MSAhgBShsKFWNyZXdfbnVtYmVyX29m
|
||||
X2FnZW50cxICGAFK0QIKC2NyZXdfYWdlbnRzEsECCr4CW3sia2V5IjogImI1OWNmNzdiNmU3NjU4
|
||||
NDg3MGViMWMzODgyM2Q3ZTI4IiwgImlkIjogImJiZjNkM2E4LWEwMjUtNGI0ZC1hY2Q0LTFmNzcz
|
||||
NTI3MWJmMCIsICJyb2xlIjogIlJlc2VhcmNoZXIiLCAidmVyYm9zZT8iOiBmYWxzZSwgIm1heF9p
|
||||
dGVyIjogMjAsICJtYXhfcnBtIjogbnVsbCwgImZ1bmN0aW9uX2NhbGxpbmdfbGxtIjogIiIsICJs
|
||||
bG0iOiAiZ3B0LTRvLW1pbmkiLCAiZGVsZWdhdGlvbl9lbmFibGVkPyI6IGZhbHNlLCAiYWxsb3df
|
||||
Y29kZV9leGVjdXRpb24/IjogZmFsc2UsICJtYXhfcmV0cnlfbGltaXQiOiAyLCAidG9vbHNfbmFt
|
||||
ZXMiOiBbXX1dSv8BCgpjcmV3X3Rhc2tzEvABCu0BW3sia2V5IjogImE1ZTVjNThjZWExYjlkMDAz
|
||||
MzJlNjg0NDFkMzI3YmRmIiwgImlkIjogIjBiOTRiMTY0LTM5NTktNGFmYS05Njg4LWJjNmEwZWMy
|
||||
MWYzOCIsICJhc3luY19leGVjdXRpb24/IjogZmFsc2UsICJodW1hbl9pbnB1dD8iOiBmYWxzZSwg
|
||||
ImFnZW50X3JvbGUiOiAiUmVzZWFyY2hlciIsICJhZ2VudF9rZXkiOiAiYjU5Y2Y3N2I2ZTc2NTg0
|
||||
ODcwZWIxYzM4ODIzZDdlMjgiLCAidG9vbHNfbmFtZXMiOiBbXX1degIYAYUBAAEAABKOAgoQyYfi
|
||||
Ftim717svttBZY3p5hIIUxR5bBHzWWkqDFRhc2sgQ3JlYXRlZDABOV4OBiA4jBUYQbLjBiA4jBUY
|
||||
Si4KCGNyZXdfa2V5EiIKIGI2NzM2ODZmYzgyMmMyMDNjN2U4NzljNjc1NDI0Njk5SjEKB2NyZXdf
|
||||
aWQSJgokZjJhZWVhNjMtNTY5ZS00NTM1LWJlNjQtNGJmNjNmZTk2OGM3Si4KCHRhc2tfa2V5EiIK
|
||||
IGE1ZTVjNThjZWExYjlkMDAzMzJlNjg0NDFkMzI3YmRmSjEKB3Rhc2tfaWQSJgokMGI5NGIxNjQt
|
||||
Mzk1OS00YWZhLTk2ODgtYmM2YTBlYzIxZjM4egIYAYUBAAEAAA==
|
||||
headers:
|
||||
Accept:
|
||||
- '*/*'
|
||||
Accept-Encoding:
|
||||
- gzip, deflate
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Length:
|
||||
- '3685'
|
||||
Content-Type:
|
||||
- application/x-protobuf
|
||||
User-Agent:
|
||||
- OTel-OTLP-Exporter-Python/1.27.0
|
||||
method: POST
|
||||
uri: https://telemetry.crewai.com:4319/v1/traces
|
||||
response:
|
||||
body:
|
||||
string: "\n\0"
|
||||
headers:
|
||||
Content-Length:
|
||||
- '2'
|
||||
Content-Type:
|
||||
- application/x-protobuf
|
||||
Date:
|
||||
- Sun, 29 Dec 2024 04:43:27 GMT
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Researcher. You have
|
||||
extensive AI research experience.\nYour personal goal is: Analyze AI topics\nTo
|
||||
give my best complete final answer to the task use the exact following format:\n\nThought:
|
||||
I now can give a great answer\nFinal Answer: Your final answer must be the great
|
||||
and the most complete as possible, it must be outcome described.\n\nI MUST use
|
||||
these formats, my job depends on it!"}, {"role": "user", "content": "\nCurrent
|
||||
Task: Explain the advantages of AI.\n\nThis is the expect criteria for your
|
||||
final answer: A summary of the main advantages, bullet points recommended.\nyou
|
||||
MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
|
||||
This is VERY important to you, use the tools available and give your best Final
|
||||
Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop":
|
||||
["\nObservation:"], "stream": false}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '922'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- _cfuvid=eff7OIkJ0zWRunpA6z67LHqscmSe6XjNxXiPw1R3xCc-1733770413538-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.52.1
|
||||
x-stainless-arch:
|
||||
- x64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- Linux
|
||||
x-stainless-package-version:
|
||||
- 1.52.1
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.7
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
content: "{\n \"id\": \"chatcmpl-AjfR6FDuTw7NGzy8w7sxjvOkUQlru\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1735447404,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
|
||||
Answer: \\n**Advantages of AI** \\n\\n1. **Increased Efficiency and Productivity**
|
||||
\ \\n - AI systems can process large amounts of data quickly and accurately,
|
||||
leading to faster decision-making and increased productivity in various sectors.\\n\\n2.
|
||||
**Cost Savings** \\n - Automation of repetitive and time-consuming tasks
|
||||
reduces labor costs and increases operational efficiency, allowing businesses
|
||||
to allocate resources more effectively.\\n\\n3. **Enhanced Data Analysis** \\n
|
||||
\ - AI excels at analyzing big data, identifying patterns, and providing insights
|
||||
that support better strategic planning and business decision-making.\\n\\n4.
|
||||
**24/7 Availability** \\n - AI solutions, such as chatbots and virtual assistants,
|
||||
operate continuously without breaks, offering constant support and customer
|
||||
service, enhancing user experience.\\n\\n5. **Personalization** \\n - AI
|
||||
enables the customization of content, products, and services based on user preferences
|
||||
and behaviors, leading to improved customer satisfaction and loyalty.\\n\\n6.
|
||||
**Improved Accuracy** \\n - AI technologies, such as machine learning algorithms,
|
||||
reduce the likelihood of human error in various processes, leading to greater
|
||||
accuracy and reliability.\\n\\n7. **Enhanced Innovation** \\n - AI fosters
|
||||
innovative solutions by providing new tools and approaches to problem-solving,
|
||||
enabling companies to develop cutting-edge products and services.\\n\\n8. **Scalability**
|
||||
\ \\n - AI can be scaled to handle varying amounts of workloads without significant
|
||||
changes to infrastructure, making it easier for organizations to expand operations.\\n\\n9.
|
||||
**Predictive Capabilities** \\n - Advanced analytics powered by AI can anticipate
|
||||
trends and outcomes, allowing businesses to proactively adjust strategies and
|
||||
improve forecasting.\\n\\n10. **Health Benefits** \\n - In healthcare, AI
|
||||
assists in diagnostics, personalized treatment plans, and predictive analytics,
|
||||
leading to better patient care and improved health outcomes.\\n\\n11. **Safety
|
||||
and Risk Mitigation** \\n - AI can enhance safety in various industries
|
||||
by taking over dangerous tasks, monitoring for hazards, and predicting maintenance
|
||||
needs for critical machinery, thereby preventing accidents.\\n\\n12. **Reduced
|
||||
Environmental Impact** \\n - AI can optimize resource usage in areas such
|
||||
as energy consumption and supply chain logistics, contributing to sustainability
|
||||
efforts and reducing overall environmental footprints.\",\n \"refusal\":
|
||||
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
|
||||
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 168,\n \"completion_tokens\":
|
||||
440,\n \"total_tokens\": 608,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
|
||||
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
|
||||
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
|
||||
\"fp_0aa8d3e20b\"\n}\n"
|
||||
headers:
|
||||
CF-Cache-Status:
|
||||
- DYNAMIC
|
||||
CF-RAY:
|
||||
- 8f9721053d1eb9f1-SEA
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Sun, 29 Dec 2024 04:43:32 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=5enubNIoQSGMYEgy8Q2FpzzhphA0y.0lXukRZrWFvMk-1735447412-1.0.1.1-FIK1sMkUl3YnW1gTC6ftDtb2mKsbosb4mwabdFAlWCfJ6pXeavYq.bPsfKNvzAb5WYq60yVGH5lHsJT05bhSgw;
|
||||
path=/; expires=Sun, 29-Dec-24 05:13:32 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=63wmKMTuFamkLN8FBI4fP8JZWbjWiRxWm7wb3kz.z_A-1735447412038-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '7577'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999793'
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_55b8d714656e8f10f4e23cbe9034d66b
|
||||
http_version: HTTP/1.1
|
||||
status_code: 200
|
||||
version: 1
|
||||
@@ -3,223 +3,17 @@ interactions:
|
||||
body: '{"messages": [{"role": "system", "content": "You are CEO. You''re an long
|
||||
time CEO of a content creation agency with a Senior Writer on the team. You''re
|
||||
now working on a new project and want to make sure the content produced is amazing.\nYour
|
||||
personal goal is: Make sure the writers in your company produce amazing content.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Delegate work to coworker(task: str, context:
|
||||
str, coworker: Optional[str] = None, **kwargs)\nTool Description: Delegate a
|
||||
specific task to one of the following coworkers: Senior Writer\nThe input to
|
||||
this tool should be the coworker, the task you want them to do, and ALL necessary
|
||||
context to execute the task, they know nothing about the task, so share absolute
|
||||
everything you know, don''t reference things but instead explain them.\nTool
|
||||
Arguments: {''task'': {''title'': ''Task'', ''type'': ''string''}, ''context'':
|
||||
{''title'': ''Context'', ''type'': ''string''}, ''coworker'': {''title'': ''Coworker'',
|
||||
''type'': ''string''}, ''kwargs'': {''title'': ''Kwargs'', ''type'': ''object''}}\nTool
|
||||
Name: Ask question to coworker(question: str, context: str, coworker: Optional[str]
|
||||
= None, **kwargs)\nTool Description: Ask a specific question to one of the following
|
||||
coworkers: Senior Writer\nThe input to this tool should be the coworker, the
|
||||
question you have for them, and ALL necessary context to ask the question properly,
|
||||
they know nothing about the question, so share absolute everything you know,
|
||||
don''t reference things but instead explain them.\nTool Arguments: {''question'':
|
||||
{''title'': ''Question'', ''type'': ''string''}, ''context'': {''title'': ''Context'',
|
||||
''type'': ''string''}, ''coworker'': {''title'': ''Coworker'', ''type'': ''string''},
|
||||
''kwargs'': {''title'': ''Kwargs'', ''type'': ''object''}}\n\nUse the following
|
||||
format:\n\nThought: you should always think about what to do\nAction: the action
|
||||
to take, only one name of [Delegate work to coworker, Ask question to coworker],
|
||||
just the name, exactly as it''s written.\nAction Input: the input to the action,
|
||||
just a simple python dictionary, enclosed in curly braces, using \" to wrap
|
||||
keys and values.\nObservation: the result of the action\n\nOnce all necessary
|
||||
information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
|
||||
the final answer to the original input question\n"}, {"role": "user", "content":
|
||||
"\nCurrent Task: Produce and amazing 1 paragraph draft of an article about AI
|
||||
Agents.\n\nThis is the expect criteria for your final answer: A 4 paragraph
|
||||
article about AI.\nyou MUST return the actual complete content as the final
|
||||
answer, not a summary.\n\nBegin! This is VERY important to you, use the tools
|
||||
available and give your best Final Answer, your job depends on it!\n\nThought:"}],
|
||||
"model": "gpt-4o"}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '2762'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g;
|
||||
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.47.0
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.47.0
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.11.7
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
content: "{\n \"id\": \"chatcmpl-AB7ZvxqgeOayGTQWwR61ASlZp0s74\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1727214103,\n \"model\": \"gpt-4o-2024-05-13\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"Thought: To ensure the content is amazing,
|
||||
I'll delegate the task of producing a one-paragraph draft of an article about
|
||||
AI Agents to the Senior Writer with all necessary context.\\n\\nAction: Delegate
|
||||
work to coworker\\nAction Input: \\n{\\n \\\"coworker\\\": \\\"Senior Writer\\\",
|
||||
\\n \\\"task\\\": \\\"Produce a one paragraph draft of an article about AI
|
||||
Agents\\\", \\n \\\"context\\\": \\\"We need an amazing one-paragraph draft
|
||||
as the beginning of a 4-paragraph article about AI Agents. This is for a high-stakes
|
||||
project that critically impacts our company. The paragraph should highlight
|
||||
what AI Agents are, their significance, and how they are transforming various
|
||||
industries. The tone should be professional yet engaging. Make sure the content
|
||||
is original, insightful, and thoroughly researched.\\\"\\n}\",\n \"refusal\":
|
||||
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
|
||||
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 608,\n \"completion_tokens\":
|
||||
160,\n \"total_tokens\": 768,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
|
||||
0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
|
||||
headers:
|
||||
CF-Cache-Status:
|
||||
- DYNAMIC
|
||||
CF-RAY:
|
||||
- 8c85f0b038a71cf3-GRU
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Tue, 24 Sep 2024 21:41:45 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '1826'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29999325'
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 1ms
|
||||
x-request-id:
|
||||
- req_79054638deeb01da76c5bba273bffc28
|
||||
http_version: HTTP/1.1
|
||||
status_code: 200
|
||||
- request:
|
||||
body: !!binary |
|
||||
Cq8OCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkShg4KEgoQY3Jld2FpLnRl
|
||||
bGVtZXRyeRKQAgoQg15EMIBbDpydrcK3GAUYfBII5VYz5B10kmgqDlRhc2sgRXhlY3V0aW9uMAE5
|
||||
aGIpYwtM+BdBIO6VVRNM+BdKLgoIY3Jld19rZXkSIgogZTNmZGEwZjMxMTBmZTgwYjE4OTQ3YzAx
|
||||
NDcxNDMwYTRKMQoHY3Jld19pZBImCiRjNzM1NzdhYi0xYThhLTQzMGYtYjYyZi01MTBlYWMyMWI3
|
||||
MThKLgoIdGFza19rZXkSIgogNWZhNjVjMDZhOWUzMWYyYzY5NTQzMjY2OGFjZDYyZGRKMQoHdGFz
|
||||
a19pZBImCiQ3MjAzMjYyMC0yMzJmLTQ5ZTMtOGMyNy0xYzBlOWJhNjFiZDB6AhgBhQEAAQAAEssJ
|
||||
ChB+du4H1wHcku5blhLQBtuoEgiXVguc5KA1RyoMQ3JldyBDcmVhdGVkMAE54IJsVxNM+BdBcCN4
|
||||
VxNM+BdKGgoOY3Jld2FpX3ZlcnNpb24SCAoGMC42MS4wShoKDnB5dGhvbl92ZXJzaW9uEggKBjMu
|
||||
MTEuN0ouCghjcmV3X2tleRIiCiBlNjQ5NTczYTI2ZTU4NzkwY2FjMjFhMzdjZDQ0NDM3YUoxCgdj
|
||||
cmV3X2lkEiYKJDI4ZTY0YmQ3LWNlYWMtNDYxOS04MmM3LTIzNmRkNTQxOGM4N0ocCgxjcmV3X3By
|
||||
b2Nlc3MSDAoKc2VxdWVudGlhbEoRCgtjcmV3X21lbW9yeRICEABKGgoUY3Jld19udW1iZXJfb2Zf
|
||||
dGFza3MSAhgBShsKFWNyZXdfbnVtYmVyX29mX2FnZW50cxICGAJKgAUKC2NyZXdfYWdlbnRzEvAE
|
||||
Cu0EW3sia2V5IjogIjMyODIxN2I2YzI5NTliZGZjNDdjYWQwMGU4NDg5MGQwIiwgImlkIjogIjQ1
|
||||
NjMxMmU3LThkMmMtNDcyMi1iNWNkLTlhMGRhMzg5MmM3OCIsICJyb2xlIjogIkNFTyIsICJ2ZXJi
|
||||
b3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAxNSwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25f
|
||||
Y2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8iLCAiZGVsZWdhdGlvbl9lbmFibGVkPyI6
|
||||
IHRydWUsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6
|
||||
IDIsICJ0b29sc19uYW1lcyI6IFtdfSwgeyJrZXkiOiAiOWE1MDE1ZWY0ODk1ZGM2Mjc4ZDU0ODE4
|
||||
YmE0NDZhZjciLCAiaWQiOiAiMTM0MDg5MjAtNzVjOC00MTk3LWIwNmQtY2I4MmNkZjhkZDhhIiwg
|
||||
InJvbGUiOiAiU2VuaW9yIFdyaXRlciIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAx
|
||||
NSwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJn
|
||||
cHQtNG8iLCAiZGVsZWdhdGlvbl9lbmFibGVkPyI6IGZhbHNlLCAiYWxsb3dfY29kZV9leGVjdXRp
|
||||
b24/IjogZmFsc2UsICJtYXhfcmV0cnlfbGltaXQiOiAyLCAidG9vbHNfbmFtZXMiOiBbXX1dSvgB
|
||||
CgpjcmV3X3Rhc2tzEukBCuYBW3sia2V5IjogIjBiOWQ2NWRiNmI3YWVkZmIzOThjNTllMmE5Zjcx
|
||||
ZWM1IiwgImlkIjogImQ0YjVhZmE2LTczNTEtNDUxMy04NzY2LTIzOGNjYTk5ZjRlZiIsICJhc3lu
|
||||
Y19leGVjdXRpb24/IjogZmFsc2UsICJodW1hbl9pbnB1dD8iOiBmYWxzZSwgImFnZW50X3JvbGUi
|
||||
OiAiQ0VPIiwgImFnZW50X2tleSI6ICIzMjgyMTdiNmMyOTU5YmRmYzQ3Y2FkMDBlODQ4OTBkMCIs
|
||||
ICJ0b29sc19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEo4CChCLEGLGYlBkv0YucoYjY1NeEghRpGin
|
||||
zpZUiSoMVGFzayBDcmVhdGVkMAE5KCA2WBNM+BdBaLw2WBNM+BdKLgoIY3Jld19rZXkSIgogZTY0
|
||||
OTU3M2EyNmU1ODc5MGNhYzIxYTM3Y2Q0NDQzN2FKMQoHY3Jld19pZBImCiQyOGU2NGJkNy1jZWFj
|
||||
LTQ2MTktODJjNy0yMzZkZDU0MThjODdKLgoIdGFza19rZXkSIgogMGI5ZDY1ZGI2YjdhZWRmYjM5
|
||||
OGM1OWUyYTlmNzFlYzVKMQoHdGFza19pZBImCiRkNGI1YWZhNi03MzUxLTQ1MTMtODc2Ni0yMzhj
|
||||
Y2E5OWY0ZWZ6AhgBhQEAAQAA
|
||||
headers:
|
||||
Accept:
|
||||
- '*/*'
|
||||
Accept-Encoding:
|
||||
- gzip, deflate
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Length:
|
||||
- '1842'
|
||||
Content-Type:
|
||||
- application/x-protobuf
|
||||
User-Agent:
|
||||
- OTel-OTLP-Exporter-Python/1.27.0
|
||||
method: POST
|
||||
uri: https://telemetry.crewai.com:4319/v1/traces
|
||||
response:
|
||||
body:
|
||||
string: "\n\0"
|
||||
headers:
|
||||
Content-Length:
|
||||
- '2'
|
||||
Content-Type:
|
||||
- application/x-protobuf
|
||||
Date:
|
||||
- Tue, 24 Sep 2024 21:41:46 GMT
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Senior Writer. You''re
|
||||
a senior writer, specialized in technology, software engineering, AI and startups.
|
||||
You work as a freelancer and are now working on writing content for a new customer.\nYour
|
||||
personal goal is: Write the best content about AI and AI agents.\nTo give my
|
||||
best complete final answer to the task use the exact following format:\n\nThought:
|
||||
personal goal is: Make sure the writers in your company produce amazing content.\nTo
|
||||
give my best complete final answer to the task use the exact following format:\n\nThought:
|
||||
I now can give a great answer\nFinal Answer: Your final answer must be the great
|
||||
and the most complete as possible, it must be outcome described.\n\nI MUST use
|
||||
these formats, my job depends on it!"}, {"role": "user", "content": "\nCurrent
|
||||
Task: Produce a one paragraph draft of an article about AI Agents\n\nThis is
|
||||
the expect criteria for your final answer: Your best answer to your coworker
|
||||
asking you this, accounting for the context shared.\nyou MUST return the actual
|
||||
complete content as the final answer, not a summary.\n\nThis is the context
|
||||
you''re working with:\nWe need an amazing one-paragraph draft as the beginning
|
||||
of a 4-paragraph article about AI Agents. This is for a high-stakes project
|
||||
that critically impacts our company. The paragraph should highlight what AI
|
||||
Agents are, their significance, and how they are transforming various industries.
|
||||
The tone should be professional yet engaging. Make sure the content is original,
|
||||
insightful, and thoroughly researched.\n\nBegin! This is VERY important to you,
|
||||
use the tools available and give your best Final Answer, your job depends on
|
||||
it!\n\nThought:"}], "model": "gpt-4o"}'
|
||||
Task: Produce and amazing 1 paragraph draft of an article about AI Agents.\n\nThis
|
||||
is the expect criteria for your final answer: A 4 paragraph article about AI.\nyou
|
||||
MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
|
||||
This is VERY important to you, use the tools available and give your best Final
|
||||
Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop":
|
||||
["\nObservation:"], "stream": false}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
@@ -228,16 +22,13 @@ interactions:
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1545'
|
||||
- '1105'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g;
|
||||
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.47.0
|
||||
- OpenAI/Python 1.52.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
@@ -247,9 +38,11 @@ interactions:
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.47.0
|
||||
- 1.52.1
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
@@ -257,31 +50,51 @@ interactions:
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
content: "{\n \"id\": \"chatcmpl-AB7ZxDYcPlSiBZsftdRs2cWbUJllW\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1727214105,\n \"model\": \"gpt-4o-2024-05-13\",\n
|
||||
content: "{\n \"id\": \"chatcmpl-Ahe7liUPejwfqxMe8aEWmKGJ837em\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1734965705,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
|
||||
Answer: Artificial Intelligence (AI) Agents are sophisticated computer programs
|
||||
designed to perform tasks that typically require human intelligence, such as
|
||||
decision making, problem-solving, and learning. These agents operate autonomously,
|
||||
utilizing vast amounts of data, advanced algorithms, and machine learning techniques
|
||||
to analyze their environment, adapt to new information, and improve their performance
|
||||
over time. The significance of AI Agents lies in their transformative potential
|
||||
across various industries. In healthcare, they assist in diagnosing diseases
|
||||
with greater accuracy; in finance, they predict market trends and manage risks;
|
||||
in customer service, they provide personalized and efficient responses. As these
|
||||
AI-powered entities continue to evolve, they are not only enhancing operational
|
||||
efficiencies but also driving innovation and creating new opportunities for
|
||||
growth and development in every sector they penetrate.\",\n \"refusal\":
|
||||
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
|
||||
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 297,\n \"completion_tokens\":
|
||||
160,\n \"total_tokens\": 457,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
|
||||
0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
|
||||
Answer: In the rapidly evolving landscape of technology, AI agents have emerged
|
||||
as formidable tools, revolutionizing how we interact with data and automate
|
||||
tasks. These sophisticated systems leverage machine learning and natural language
|
||||
processing to perform a myriad of functions, from virtual personal assistants
|
||||
to complex decision-making companions in industries such as finance, healthcare,
|
||||
and education. By mimicking human intelligence, AI agents can analyze massive
|
||||
data sets at unparalleled speeds, enabling businesses to uncover valuable insights,
|
||||
enhance productivity, and elevate user experiences to unprecedented levels.\\n\\nOne
|
||||
of the most striking aspects of AI agents is their adaptability; they learn
|
||||
from their interactions and continuously improve their performance over time.
|
||||
This feature is particularly valuable in customer service where AI agents can
|
||||
address inquiries, resolve issues, and provide personalized recommendations
|
||||
without the limitations of human fatigue. Moreover, with intuitive interfaces,
|
||||
AI agents enhance user interactions, making technology more accessible and user-friendly,
|
||||
thereby breaking down barriers that have historically hindered digital engagement.\\n\\nDespite
|
||||
their immense potential, the deployment of AI agents raises important ethical
|
||||
and practical considerations. Issues related to privacy, data security, and
|
||||
the potential for job displacement necessitate thoughtful dialogue and proactive
|
||||
measures. Striking a balance between technological innovation and societal impact
|
||||
will be crucial as organizations integrate these agents into their operations.
|
||||
Additionally, ensuring transparency in AI decision-making processes is vital
|
||||
to maintain public trust as AI agents become an integral part of daily life.\\n\\nLooking
|
||||
ahead, the future of AI agents appears bright, with ongoing advancements promising
|
||||
even greater capabilities. As we continue to harness the power of AI, we can
|
||||
expect these agents to play a transformative role in shaping various sectors\u2014streamlining
|
||||
workflows, enabling smarter decision-making, and fostering more personalized
|
||||
experiences. Embracing this technology responsibly can lead to a future where
|
||||
AI agents not only augment human effort but also inspire creativity and efficiency
|
||||
across the board, ultimately redefining our interaction with the digital world.\",\n
|
||||
\ \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
|
||||
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 208,\n \"completion_tokens\":
|
||||
382,\n \"total_tokens\": 590,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
|
||||
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
|
||||
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
|
||||
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c85f0c0cf961cf3-GRU
- 8f6930c97a33ae54-GRU
Connection:
- keep-alive
Content-Encoding:
@@ -289,45 +102,77 @@ interactions:
Content-Type:
- application/json
Date:
- Tue, 24 Sep 2024 21:41:48 GMT
- Mon, 23 Dec 2024 14:55:10 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=g58erGPkGAltcfYpDRU4IsdEEzb955dGmBOAZacFlPA-1734965710-1.0.1.1-IiodiX3uxbT5xSa4seI7M.gRM4Jj46h2d6ZW2wCkSUYUAX.ivRh_sGQN2hucEMzdG8O87pc00dCl7E5W8KkyEA;
path=/; expires=Mon, 23-Dec-24 15:25:10 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=eQzzWvIXDS8Me1OIBdCG5F1qFyVfAo3sumvYRE7J41E-1734965710778-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '2468'
- '5401'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
- '30000'
x-ratelimit-limit-tokens:
- '30000000'
- '150000000'
x-ratelimit-remaining-requests:
- '9999'
- '29999'
x-ratelimit-remaining-tokens:
- '29999625'
- '149999746'
x-ratelimit-reset-requests:
- 6ms
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_66c8801b42ac865249246d98225c1492
- req_30791533923ae20626ef35a03ae66172
http_version: HTTP/1.1
status_code: 200
- request:
body: !!binary |
CtwBCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkSswEKEgoQY3Jld2FpLnRl
|
||||
bGVtZXRyeRKcAQoQROg/k5NCUGdgfvfLrFlQDxIIlfh6oMbmqu0qClRvb2wgVXNhZ2UwATlws+Wj
|
||||
FEz4F0EwBeijFEz4F0oaCg5jcmV3YWlfdmVyc2lvbhIICgYwLjYxLjBKKAoJdG9vbF9uYW1lEhsK
|
||||
GURlbGVnYXRlIHdvcmsgdG8gY293b3JrZXJKDgoIYXR0ZW1wdHMSAhgBegIYAYUBAAEAAA==
|
||||
CqYMCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkS/QsKEgoQY3Jld2FpLnRl
|
||||
bGVtZXRyeRLVCQoQLH3VghpS+l/DatJl8rrpvRIIUpNEm7ELU08qDENyZXcgQ3JlYXRlZDABObgs
|
||||
nNId1hMYQfgVpdId1hMYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoaCg5weXRob25fdmVy
|
||||
c2lvbhIICgYzLjExLjdKLgoIY3Jld19rZXkSIgogZTY0OTU3M2EyNmU1ODc5MGNhYzIxYTM3Y2Q0
|
||||
NDQzN2FKMQoHY3Jld19pZBImCiQzYjVkNDFjNC1kZWJiLTQ2MzItYmIwMC1mNTdhNmM2M2QwMThK
|
||||
HAoMY3Jld19wcm9jZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdf
|
||||
bnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgCSooFCgtjcmV3
|
||||
X2FnZW50cxL6BAr3BFt7ImtleSI6ICIzMjgyMTdiNmMyOTU5YmRmYzQ3Y2FkMDBlODQ4OTBkMCIs
|
||||
ICJpZCI6ICI1Yjk4NDA2OS03MjVlLTQxOWYtYjdiZS1mMDdjMTYyOGNkZjIiLCAicm9sZSI6ICJD
|
||||
RU8iLCAidmVyYm9zZT8iOiBmYWxzZSwgIm1heF9pdGVyIjogMjAsICJtYXhfcnBtIjogbnVsbCwg
|
||||
ImZ1bmN0aW9uX2NhbGxpbmdfbGxtIjogIiIsICJsbG0iOiAiZ3B0LTRvLW1pbmkiLCAiZGVsZWdh
|
||||
dGlvbl9lbmFibGVkPyI6IHRydWUsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1h
|
||||
eF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtdfSwgeyJrZXkiOiAiOWE1MDE1ZWY0
|
||||
ODk1ZGM2Mjc4ZDU0ODE4YmE0NDZhZjciLCAiaWQiOiAiZjkwZWI0ZmItMzUyMC00ZDAyLTlhNDYt
|
||||
NDE2ZTNlNTQ5NWYxIiwgInJvbGUiOiAiU2VuaW9yIFdyaXRlciIsICJ2ZXJib3NlPyI6IGZhbHNl
|
||||
LCAibWF4X2l0ZXIiOiAyMCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0i
|
||||
OiAiIiwgImxsbSI6ICJncHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2Us
|
||||
ICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0
|
||||
b29sc19uYW1lcyI6IFtdfV1K+AEKCmNyZXdfdGFza3MS6QEK5gFbeyJrZXkiOiAiMGI5ZDY1ZGI2
|
||||
YjdhZWRmYjM5OGM1OWUyYTlmNzFlYzUiLCAiaWQiOiAiNzdmNDY3MDYtNzRjZi00ZGVkLThlMDUt
|
||||
NmRlZGM0MmYwZDliIiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6
|
||||
IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJDRU8iLCAiYWdlbnRfa2V5IjogIjMyODIxN2I2YzI5NTli
|
||||
ZGZjNDdjYWQwMGU4NDg5MGQwIiwgInRvb2xzX25hbWVzIjogW119XXoCGAGFAQABAAASjgIKEBvb
|
||||
LkoAnHiD1gUnbftefpYSCNb1+4JxldizKgxUYXNrIENyZWF0ZWQwATmwYcTSHdYTGEEQz8TSHdYT
|
||||
GEouCghjcmV3X2tleRIiCiBlNjQ5NTczYTI2ZTU4NzkwY2FjMjFhMzdjZDQ0NDM3YUoxCgdjcmV3
|
||||
X2lkEiYKJDNiNWQ0MWM0LWRlYmItNDYzMi1iYjAwLWY1N2E2YzYzZDAxOEouCgh0YXNrX2tleRIi
|
||||
CiAwYjlkNjVkYjZiN2FlZGZiMzk4YzU5ZTJhOWY3MWVjNUoxCgd0YXNrX2lkEiYKJDc3ZjQ2NzA2
|
||||
LTc0Y2YtNGRlZC04ZTA1LTZkZWRjNDJmMGQ5YnoCGAGFAQABAAA=
headers:
Accept:
- '*/*'
@@ -336,7 +181,7 @@ interactions:
Connection:
- keep-alive
Content-Length:
- '223'
- '1577'
Content-Type:
- application/x-protobuf
User-Agent:
@@ -352,213 +197,8 @@ interactions:
Content-Type:
- application/x-protobuf
Date:
- Tue, 24 Sep 2024 21:41:51 GMT
- Mon, 23 Dec 2024 14:55:10 GMT
status:
code: 200
message: OK
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are CEO. You''re an long
|
||||
time CEO of a content creation agency with a Senior Writer on the team. You''re
|
||||
now working on a new project and want to make sure the content produced is amazing.\nYour
|
||||
personal goal is: Make sure the writers in your company produce amazing content.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Delegate work to coworker(task: str, context:
|
||||
str, coworker: Optional[str] = None, **kwargs)\nTool Description: Delegate a
|
||||
specific task to one of the following coworkers: Senior Writer\nThe input to
|
||||
this tool should be the coworker, the task you want them to do, and ALL necessary
|
||||
context to execute the task, they know nothing about the task, so share absolute
|
||||
everything you know, don''t reference things but instead explain them.\nTool
|
||||
Arguments: {''task'': {''title'': ''Task'', ''type'': ''string''}, ''context'':
|
||||
{''title'': ''Context'', ''type'': ''string''}, ''coworker'': {''title'': ''Coworker'',
|
||||
''type'': ''string''}, ''kwargs'': {''title'': ''Kwargs'', ''type'': ''object''}}\nTool
|
||||
Name: Ask question to coworker(question: str, context: str, coworker: Optional[str]
|
||||
= None, **kwargs)\nTool Description: Ask a specific question to one of the following
|
||||
coworkers: Senior Writer\nThe input to this tool should be the coworker, the
|
||||
question you have for them, and ALL necessary context to ask the question properly,
|
||||
they know nothing about the question, so share absolute everything you know,
|
||||
don''t reference things but instead explain them.\nTool Arguments: {''question'':
|
||||
{''title'': ''Question'', ''type'': ''string''}, ''context'': {''title'': ''Context'',
|
||||
''type'': ''string''}, ''coworker'': {''title'': ''Coworker'', ''type'': ''string''},
|
||||
''kwargs'': {''title'': ''Kwargs'', ''type'': ''object''}}\n\nUse the following
|
||||
format:\n\nThought: you should always think about what to do\nAction: the action
|
||||
to take, only one name of [Delegate work to coworker, Ask question to coworker],
|
||||
just the name, exactly as it''s written.\nAction Input: the input to the action,
|
||||
just a simple python dictionary, enclosed in curly braces, using \" to wrap
|
||||
keys and values.\nObservation: the result of the action\n\nOnce all necessary
|
||||
information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
|
||||
the final answer to the original input question\n"}, {"role": "user", "content":
|
||||
"\nCurrent Task: Produce and amazing 1 paragraph draft of an article about AI
|
||||
Agents.\n\nThis is the expect criteria for your final answer: A 4 paragraph
|
||||
article about AI.\nyou MUST return the actual complete content as the final
|
||||
answer, not a summary.\n\nBegin! This is VERY important to you, use the tools
|
||||
available and give your best Final Answer, your job depends on it!\n\nThought:"},
|
||||
{"role": "assistant", "content": "Thought: To ensure the content is amazing,
|
||||
I''ll delegate the task of producing a one-paragraph draft of an article about
|
||||
AI Agents to the Senior Writer with all necessary context.\n\nAction: Delegate
|
||||
work to coworker\nAction Input: \n{\n \"coworker\": \"Senior Writer\", \n \"task\":
|
||||
\"Produce a one paragraph draft of an article about AI Agents\", \n \"context\":
|
||||
\"We need an amazing one-paragraph draft as the beginning of a 4-paragraph article
|
||||
about AI Agents. This is for a high-stakes project that critically impacts our
|
||||
company. The paragraph should highlight what AI Agents are, their significance,
|
||||
and how they are transforming various industries. The tone should be professional
|
||||
yet engaging. Make sure the content is original, insightful, and thoroughly
|
||||
researched.\"\n}\nObservation: Artificial Intelligence (AI) Agents are sophisticated
|
||||
computer programs designed to perform tasks that typically require human intelligence,
|
||||
such as decision making, problem-solving, and learning. These agents operate
|
||||
autonomously, utilizing vast amounts of data, advanced algorithms, and machine
|
||||
learning techniques to analyze their environment, adapt to new information,
|
||||
and improve their performance over time. The significance of AI Agents lies
|
||||
in their transformative potential across various industries. In healthcare,
|
||||
they assist in diagnosing diseases with greater accuracy; in finance, they predict
|
||||
market trends and manage risks; in customer service, they provide personalized
|
||||
and efficient responses. As these AI-powered entities continue to evolve, they
|
||||
are not only enhancing operational efficiencies but also driving innovation
|
||||
and creating new opportunities for growth and development in every sector they
|
||||
penetrate."}], "model": "gpt-4o"}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '4536'
content-type:
- application/json
cookie:
- __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g;
_cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.47.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.47.0
x-stainless-raw-response:
- 'true'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AB7a1PO3pMNybn76wXDFc5HE7ZRsL\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1727214109,\n \"model\": \"gpt-4o-2024-05-13\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"Thought: The paragraph provided by the
|
||||
Senior Writer is well-detailed and engaging. I will now structure the 4-paragraph
|
||||
article around this draft, adding an introduction, expanding on the specific
|
||||
applications in various industries, and concluding with the potential future
|
||||
impact of AI Agents.\\n\\nFinal Answer:\\n\\nArtificial Intelligence (AI) Agents
|
||||
are sophisticated computer programs designed to perform tasks that typically
|
||||
require human intelligence, such as decision making, problem-solving, and learning.
|
||||
These agents operate autonomously, utilizing vast amounts of data, advanced
|
||||
algorithms, and machine learning techniques to analyze their environment, adapt
|
||||
to new information, and improve their performance over time.\\n\\nThe significance
|
||||
of AI Agents lies in their transformative potential across various industries.
|
||||
In healthcare, for example, they assist in diagnosing diseases with greater
|
||||
accuracy and speed than human practitioners, offering personalized treatment
|
||||
plans by analyzing patient data. In finance, AI Agents predict market trends,
|
||||
manage risks, and even execute trades, contributing to more stable and profitable
|
||||
financial systems. Customer service sectors benefit significantly from AI Agents,
|
||||
as they provide personalized and efficient responses, often resolving issues
|
||||
faster than traditional methods.\\n\\nMoreover, AI Agents are also making substantial
|
||||
contributions in fields like education and manufacturing. In education, they
|
||||
offer tailored learning experiences by assessing individual student needs and
|
||||
adjusting teaching methods accordingly. They help educators identify students
|
||||
who might need additional support and provide resources to enhance learning
|
||||
outcomes. In manufacturing, AI Agents optimize production lines, predict equipment
|
||||
failures, and improve supply chain management, thus boosting productivity and
|
||||
reducing downtime.\\n\\nAs these AI-powered entities continue to evolve, they
|
||||
are not only enhancing operational efficiencies but also driving innovation
|
||||
and creating new opportunities for growth and development in every sector they
|
||||
penetrate. The future of AI Agents looks promising, with the potential to revolutionize
|
||||
the way we live and work, making processes more efficient, decisions more data-driven,
|
||||
and solutions more innovative than ever before.\\n\\nThis is now a well-rounded,
|
||||
four-paragraph article that comprehensively covers the topic of AI Agents.\\n\\nFinal
|
||||
Answer: This is the complete content as specified:\\nArtificial Intelligence
|
||||
(AI) Agents are sophisticated computer programs designed to perform tasks that
|
||||
typically require human intelligence, such as decision making, problem-solving,
|
||||
and learning. These agents operate autonomously, utilizing vast amounts of data,
|
||||
advanced algorithms, and machine learning techniques to analyze their environment,
|
||||
adapt to new information, and improve their performance over time.\\n\\nThe
|
||||
significance of AI Agents lies in their transformative potential across various
|
||||
industries. In healthcare, for example, they assist in diagnosing diseases with
|
||||
greater accuracy and speed than human practitioners, offering personalized treatment
|
||||
plans by analyzing patient data. In finance, AI Agents predict market trends,
|
||||
manage risks, and even execute trades, contributing to more stable and profitable
|
||||
financial systems. Customer service sectors benefit significantly from AI Agents,
|
||||
as they provide personalized and efficient responses, often resolving issues
|
||||
faster than traditional methods.\\n\\nMoreover, AI Agents are also making substantial
|
||||
contributions in fields like education and manufacturing. In education, they
|
||||
offer tailored learning experiences by assessing individual student needs and
|
||||
adjusting teaching methods accordingly. They help educators identify students
|
||||
who might need additional support and provide resources to enhance learning
|
||||
outcomes. In manufacturing, AI Agents optimize production lines, predict equipment
|
||||
failures, and improve supply chain management, thus boosting productivity and
|
||||
reducing downtime.\\n\\nAs these AI-powered entities continue to evolve, they
|
||||
are not only enhancing operational efficiencies but also driving innovation
|
||||
and creating new opportunities for growth and development in every sector they
|
||||
penetrate. The future of AI Agents looks promising, with the potential to revolutionize
|
||||
the way we live and work, making processes more efficient, decisions more data-driven,
|
||||
and solutions more innovative than ever before.\",\n \"refusal\": null\n
|
||||
\ },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n
|
||||
\ ],\n \"usage\": {\n \"prompt_tokens\": 923,\n \"completion_tokens\":
|
||||
715,\n \"total_tokens\": 1638,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
|
||||
0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8c85f0d2f90c1cf3-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 24 Sep 2024 21:41:58 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '8591'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29998895'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 2ms
x-request-id:
- req_2b51b5cff02148d29b04284b40ca6081
http_version: HTTP/1.1
status_code: 200
version: 1
|
||||
@@ -0,0 +1,480 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are CEO. You''re an long
|
||||
time CEO of a content creation agency with a Senior Writer on the team. You''re
|
||||
now working on a new project and want to make sure the content produced is amazing.\nYour
|
||||
personal goal is: Make sure the writers in your company produce amazing content.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Test Tool\nTool Arguments: {''query'': {''description'':
|
||||
''Query to process'', ''type'': ''str''}}\nTool Description: A test tool that
|
||||
just returns the input\n\nUse the following format:\n\nThought: you should always
|
||||
think about what to do\nAction: the action to take, only one name of [Test Tool],
|
||||
just the name, exactly as it''s written.\nAction Input: the input to the action,
|
||||
just a simple python dictionary, enclosed in curly braces, using \" to wrap
|
||||
keys and values.\nObservation: the result of the action\n\nOnce all necessary
|
||||
information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
|
||||
the final answer to the original input question"}, {"role": "user", "content":
|
||||
"\nCurrent Task: Produce and amazing 1 paragraph draft of an article about AI
|
||||
Agents.\n\nThis is the expect criteria for your final answer: A 4 paragraph
|
||||
article about AI.\nyou MUST return the actual complete content as the final
|
||||
answer, not a summary.\n\nBegin! This is VERY important to you, use the tools
|
||||
available and give your best Final Answer, your job depends on it!\n\nThought:"}],
|
||||
"model": "gpt-4o-mini", "stop": ["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1581'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AhLsKP8xKkISk8ntUscyUKL30xRXW\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1734895556,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"I need to gather information to create
|
||||
an amazing paragraph draft about AI Agents that aligns with the expected criteria
|
||||
of a 4-paragraph article about AI. \\n\\nAction: Test Tool \\nAction Input:
|
||||
{\\\"query\\\": \\\"Write a captivating and informative paragraph about AI Agents,
|
||||
focusing on their capabilities, applications, and significance in modern technology.\\\"}
|
||||
\ \",\n \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
|
||||
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 309,\n \"completion_tokens\":
|
||||
68,\n \"total_tokens\": 377,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
|
||||
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
|
||||
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
|
||||
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f62802d0b3f00d5-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Sun, 22 Dec 2024 19:25:57 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=vwBNilrHRgMLd8ALWzYrBO5Lm8ieJzbQ3WCVOgmuJ.s-1734895557-1.0.1.1-z.QnDsynL_Ndu.JkWrh_wGMo57vvpK88nWDBTA8P.6prlSRmA91GQLpP62yRUbCW6yoKFbDxroSaYO6qrzZPRg;
path=/; expires=Sun, 22-Dec-24 19:55:57 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=2u_Xw.i716TDjD2vb2mvMyWxhA4q1MM1JvbrA8CNZpI-1734895557894-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '1075'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999630'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_80fbcef3505afac708a24ef167b701bb
http_version: HTTP/1.1
status_code: 200
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are CEO. You''re an long
|
||||
time CEO of a content creation agency with a Senior Writer on the team. You''re
|
||||
now working on a new project and want to make sure the content produced is amazing.\nYour
|
||||
personal goal is: Make sure the writers in your company produce amazing content.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Test Tool\nTool Arguments: {''query'': {''description'':
|
||||
''Query to process'', ''type'': ''str''}}\nTool Description: A test tool that
|
||||
just returns the input\n\nUse the following format:\n\nThought: you should always
|
||||
think about what to do\nAction: the action to take, only one name of [Test Tool],
|
||||
just the name, exactly as it''s written.\nAction Input: the input to the action,
|
||||
just a simple python dictionary, enclosed in curly braces, using \" to wrap
|
||||
keys and values.\nObservation: the result of the action\n\nOnce all necessary
|
||||
information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
|
||||
the final answer to the original input question"}, {"role": "user", "content":
|
||||
"\nCurrent Task: Produce and amazing 1 paragraph draft of an article about AI
|
||||
Agents.\n\nThis is the expect criteria for your final answer: A 4 paragraph
|
||||
article about AI.\nyou MUST return the actual complete content as the final
|
||||
answer, not a summary.\n\nBegin! This is VERY important to you, use the tools
|
||||
available and give your best Final Answer, your job depends on it!\n\nThought:"},
|
||||
{"role": "assistant", "content": "I need to gather information to create an
|
||||
amazing paragraph draft about AI Agents that aligns with the expected criteria
|
||||
of a 4-paragraph article about AI. \n\nAction: Test Tool \nAction Input: {\"query\":
|
||||
\"Write a captivating and informative paragraph about AI Agents, focusing on
|
||||
their capabilities, applications, and significance in modern technology.\"} \nObservation:
|
||||
Processed: Write a captivating and informative paragraph about AI Agents, focusing
|
||||
on their capabilities, applications, and significance in modern technology."}],
|
||||
"model": "gpt-4o-mini", "stop": ["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '2153'
content-type:
- application/json
cookie:
- __cf_bm=vwBNilrHRgMLd8ALWzYrBO5Lm8ieJzbQ3WCVOgmuJ.s-1734895557-1.0.1.1-z.QnDsynL_Ndu.JkWrh_wGMo57vvpK88nWDBTA8P.6prlSRmA91GQLpP62yRUbCW6yoKFbDxroSaYO6qrzZPRg;
_cfuvid=2u_Xw.i716TDjD2vb2mvMyWxhA4q1MM1JvbrA8CNZpI-1734895557894-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AhLsMt1AgrzynC2TSJZZSwr9El8FV\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1734895558,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"Thought: I have received the content
|
||||
related to AI Agents, which I need to now use as a foundation for creating a
|
||||
complete 4-paragraph article about AI. \\n\\nAction: Test Tool \\nAction Input:
|
||||
{\\\"query\\\": \\\"Based on the previous paragraph about AI Agents, write a
|
||||
4-paragraph article about AI, including an introduction, discussion of AI Agents,
|
||||
their applications, and a conclusion on the future of AI.\\\"} \",\n \"refusal\":
|
||||
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
|
||||
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 409,\n \"completion_tokens\":
|
||||
88,\n \"total_tokens\": 497,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
|
||||
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
|
||||
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
|
||||
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f6280352b9400d5-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Sun, 22 Dec 2024 19:25:59 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '1346'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999498'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_e25b377af34ef03b9a6955c9cfca5738
http_version: HTTP/1.1
status_code: 200
- request:
body: !!binary |
CtoOCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkSsQ4KEgoQY3Jld2FpLnRl
|
||||
bGVtZXRyeRLrCQoQHzrcBLmZm6+CB9ZGtTnz1BIISnyRX3cExT4qDENyZXcgQ3JlYXRlZDABOdCK
|
||||
UxFRlhMYQdiyWhFRlhMYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoaCg5weXRob25fdmVy
|
||||
c2lvbhIICgYzLjExLjdKLgoIY3Jld19rZXkSIgogZTY0OTU3M2EyNmU1ODc5MGNhYzIxYTM3Y2Q0
|
||||
NDQzN2FKMQoHY3Jld19pZBImCiQyYWFjYzYwZC0xYzE5LTRjZGYtYmJhNy1iM2RiMGM4YzFlZWZK
|
||||
HAoMY3Jld19wcm9jZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdf
|
||||
bnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgCSpUFCgtjcmV3
|
||||
X2FnZW50cxKFBQqCBVt7ImtleSI6ICIzMjgyMTdiNmMyOTU5YmRmYzQ3Y2FkMDBlODQ4OTBkMCIs
|
||||
ICJpZCI6ICJlZmE4ZWRlNS0wN2IyLTQzOWUtYWQ4Yi1iNmQ0Nzg5NjBkNzkiLCAicm9sZSI6ICJD
|
||||
RU8iLCAidmVyYm9zZT8iOiBmYWxzZSwgIm1heF9pdGVyIjogMjAsICJtYXhfcnBtIjogbnVsbCwg
|
||||
ImZ1bmN0aW9uX2NhbGxpbmdfbGxtIjogIiIsICJsbG0iOiAiZ3B0LTRvLW1pbmkiLCAiZGVsZWdh
|
||||
dGlvbl9lbmFibGVkPyI6IHRydWUsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1h
|
||||
eF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFsidGVzdCB0b29sIl19LCB7ImtleSI6
|
||||
ICI5YTUwMTVlZjQ4OTVkYzYyNzhkNTQ4MThiYTQ0NmFmNyIsICJpZCI6ICIxMDE2MGEzMC0zM2U4
|
||||
LTRlN2YtOTAzOC1lODU3Zjc2MzI0ZTUiLCAicm9sZSI6ICJTZW5pb3IgV3JpdGVyIiwgInZlcmJv
|
||||
c2U/IjogZmFsc2UsICJtYXhfaXRlciI6IDIwLCAibWF4X3JwbSI6IG51bGwsICJmdW5jdGlvbl9j
|
||||
YWxsaW5nX2xsbSI6ICIiLCAibGxtIjogImdwdC00by1taW5pIiwgImRlbGVnYXRpb25fZW5hYmxl
|
||||
ZD8iOiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZhbHNlLCAibWF4X3JldHJ5X2xp
|
||||
bWl0IjogMiwgInRvb2xzX25hbWVzIjogW119XUqDAgoKY3Jld190YXNrcxL0AQrxAVt7ImtleSI6
|
||||
ICIwYjlkNjVkYjZiN2FlZGZiMzk4YzU5ZTJhOWY3MWVjNSIsICJpZCI6ICJiNjYyZWVkOS1kYzcy
|
||||
LTQ1NTEtYTdmMC1kY2E4ZTk3MmU3NjciLCAiYXN5bmNfZXhlY3V0aW9uPyI6IGZhbHNlLCAiaHVt
|
||||
YW5faW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjogIkNFTyIsICJhZ2VudF9rZXkiOiAiMzI4
|
||||
MjE3YjZjMjk1OWJkZmM0N2NhZDAwZTg0ODkwZDAiLCAidG9vbHNfbmFtZXMiOiBbInRlc3QgdG9v
|
||||
bCJdfV16AhgBhQEAAQAAEo4CChDkOw+7vfeJwW1bc0PIIqxeEggzmQQt0SPl+ioMVGFzayBDcmVh
|
||||
dGVkMAE5OBlxEVGWExhBwKlxEVGWExhKLgoIY3Jld19rZXkSIgogZTY0OTU3M2EyNmU1ODc5MGNh
|
||||
YzIxYTM3Y2Q0NDQzN2FKMQoHY3Jld19pZBImCiQyYWFjYzYwZC0xYzE5LTRjZGYtYmJhNy1iM2Ri
|
||||
MGM4YzFlZWZKLgoIdGFza19rZXkSIgogMGI5ZDY1ZGI2YjdhZWRmYjM5OGM1OWUyYTlmNzFlYzVK
|
||||
MQoHdGFza19pZBImCiRiNjYyZWVkOS1kYzcyLTQ1NTEtYTdmMC1kY2E4ZTk3MmU3Njd6AhgBhQEA
|
||||
AQAAEowBChDS1rm7Q+c0w96t+encwsGJEgjRF+jTQh1PCyoKVG9vbCBVc2FnZTABOaAiFGtRlhMY
|
||||
QdiVImtRlhMYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoYCgl0b29sX25hbWUSCwoJVGVz
|
||||
dCBUb29sSg4KCGF0dGVtcHRzEgIYAXoCGAGFAQABAAASjAEKECYGxNLnTRLCS76uAAOuzGwSCPmX
|
||||
kSTjWKCcKgpUb29sIFVzYWdlMAE5CH3Wx1GWExhBGH/xx1GWExhKGgoOY3Jld2FpX3ZlcnNpb24S
|
||||
CAoGMC44Ni4wShgKCXRvb2xfbmFtZRILCglUZXN0IFRvb2xKDgoIYXR0ZW1wdHMSAhgBegIYAYUB
|
||||
AAEAAA==
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '1885'
Content-Type:
- application/x-protobuf
User-Agent:
- OTel-OTLP-Exporter-Python/1.27.0
method: POST
uri: https://telemetry.crewai.com:4319/v1/traces
response:
body:
string: "\n\0"
headers:
Content-Length:
- '2'
Content-Type:
- application/x-protobuf
Date:
- Sun, 22 Dec 2024 19:26:01 GMT
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are CEO. You''re an long
|
||||
time CEO of a content creation agency with a Senior Writer on the team. You''re
|
||||
now working on a new project and want to make sure the content produced is amazing.\nYour
|
||||
personal goal is: Make sure the writers in your company produce amazing content.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Test Tool\nTool Arguments: {''query'': {''description'':
|
||||
''Query to process'', ''type'': ''str''}}\nTool Description: A test tool that
|
||||
just returns the input\n\nUse the following format:\n\nThought: you should always
|
||||
think about what to do\nAction: the action to take, only one name of [Test Tool],
|
||||
just the name, exactly as it''s written.\nAction Input: the input to the action,
|
||||
just a simple python dictionary, enclosed in curly braces, using \" to wrap
|
||||
keys and values.\nObservation: the result of the action\n\nOnce all necessary
|
||||
information is gathered:\n\nThought: I now know the final answer\nFinal Answer:
|
||||
the final answer to the original input question"}, {"role": "user", "content":
|
||||
"\nCurrent Task: Produce and amazing 1 paragraph draft of an article about AI
|
||||
Agents.\n\nThis is the expect criteria for your final answer: A 4 paragraph
|
||||
article about AI.\nyou MUST return the actual complete content as the final
|
||||
answer, not a summary.\n\nBegin! This is VERY important to you, use the tools
|
||||
available and give your best Final Answer, your job depends on it!\n\nThought:"},
|
||||
{"role": "assistant", "content": "I need to gather information to create an
|
||||
amazing paragraph draft about AI Agents that aligns with the expected criteria
|
||||
of a 4-paragraph article about AI. \n\nAction: Test Tool \nAction Input: {\"query\":
|
||||
\"Write a captivating and informative paragraph about AI Agents, focusing on
|
||||
their capabilities, applications, and significance in modern technology.\"} \nObservation:
|
||||
Processed: Write a captivating and informative paragraph about AI Agents, focusing
|
||||
on their capabilities, applications, and significance in modern technology."},
|
||||
{"role": "assistant", "content": "Thought: I have received the content related
|
||||
to AI Agents, which I need to now use as a foundation for creating a complete
|
||||
4-paragraph article about AI. \n\nAction: Test Tool \nAction Input: {\"query\":
|
||||
\"Based on the previous paragraph about AI Agents, write a 4-paragraph article
|
||||
about AI, including an introduction, discussion of AI Agents, their applications,
|
||||
and a conclusion on the future of AI.\"} \nObservation: Processed: Based on
|
||||
the previous paragraph about AI Agents, write a 4-paragraph article about AI,
|
||||
including an introduction, discussion of AI Agents, their applications, and
|
||||
a conclusion on the future of AI."}], "model": "gpt-4o-mini", "stop": ["\nObservation:"],
|
||||
"stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '2820'
content-type:
- application/json
cookie:
- __cf_bm=vwBNilrHRgMLd8ALWzYrBO5Lm8ieJzbQ3WCVOgmuJ.s-1734895557-1.0.1.1-z.QnDsynL_Ndu.JkWrh_wGMo57vvpK88nWDBTA8P.6prlSRmA91GQLpP62yRUbCW6yoKFbDxroSaYO6qrzZPRg;
_cfuvid=2u_Xw.i716TDjD2vb2mvMyWxhA4q1MM1JvbrA8CNZpI-1734895557894-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AhLsNJa6GxRIHF8l8eViU7D6CyBHP\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1734895559,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"Thought: I have gathered the complete
|
||||
article on AI, which aligns with the expected criteria. Now I will present the
|
||||
final answer as required. \\n\\nFinal Answer: \\n\\nArtificial Intelligence
|
||||
(AI) has rapidly evolved to become an integral part of our modern world, driving
|
||||
significant advancements across various industries. AI refers to the simulation
|
||||
of human intelligence in machines programmed to think and learn like humans.
|
||||
This technology enables machines to analyze data, recognize patterns, and make
|
||||
decisions with minimal human intervention, paving the way for innovation in
|
||||
fields like healthcare, finance, and transportation.\\n\\nAI Agents, in particular,
|
||||
embody the future of artificial intelligence by acting autonomously to perform
|
||||
complex tasks. These agents leverage machine learning and natural language processing
|
||||
to interact with users and understand their needs. They're deployed in customer
|
||||
service applications, virtual assistants, and personal scheduling tools, showcasing
|
||||
their capability to streamline processes and enhance user experience. By mimicking
|
||||
human reasoning, AI Agents can adapt to changing situations and provide personalized
|
||||
solutions.\\n\\nThe applications of AI Agents extend beyond mere task completion;
|
||||
they are transforming the way businesses operate. In the realm of customer engagement,
|
||||
AI Agents analyze customer behavior to provide insights that help companies
|
||||
tailor their offerings. In healthcare, they assist in diagnosing illnesses by
|
||||
analyzing patient data and suggesting treatments. The versatility of AI Agents
|
||||
makes them invaluable assets in our increasingly automated world.\\n\\nAs we
|
||||
look to the future, the potential of AI continues to expand. With ongoing advancements
|
||||
in technology, AI Agents are set to become even more sophisticated, further
|
||||
bridging the gap between humans and machines. The prospects of AI promise not
|
||||
only to improve efficiency and productivity but also to change the way we live
|
||||
and work, promising a future where intelligent, autonomous agents support us
|
||||
in our daily lives.\",\n \"refusal\": null\n },\n \"logprobs\":
|
||||
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
|
||||
546,\n \"completion_tokens\": 343,\n \"total_tokens\": 889,\n \"prompt_tokens_details\":
|
||||
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
|
||||
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
|
||||
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f62803eed8100d5-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Sun, 22 Dec 2024 19:26:04 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '4897'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999342'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_65fdf94aa8bbc10f64f2a27ccdcc5cc8
http_version: HTTP/1.1
status_code: 200
version: 1
@@ -0,0 +1,623 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are CEO. You''re an long
|
||||
time CEO of a content creation agency with a Senior Writer on the team. You''re
|
||||
now working on a new project and want to make sure the content produced is amazing.\nYour
|
||||
personal goal is: Make sure the writers in your company produce amazing content.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Test Tool\nTool Arguments: {''query'': {''description'':
|
||||
''Query to process'', ''type'': ''str''}}\nTool Description: A test tool that
|
||||
just returns the input\nTool Name: Delegate work to coworker\nTool Arguments:
|
||||
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
|
||||
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
|
||||
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
|
||||
''str''}}\nTool Description: Delegate a specific task to one of the following
|
||||
coworkers: Senior Writer\nThe input to this tool should be the coworker, the
|
||||
task you want them to do, and ALL necessary context to execute the task, they
|
||||
know nothing about the task, so share absolute everything you know, don''t reference
|
||||
things but instead explain them.\nTool Name: Ask question to coworker\nTool
|
||||
Arguments: {''question'': {''description'': ''The question to ask'', ''type'':
|
||||
''str''}, ''context'': {''description'': ''The context for the question'', ''type'':
|
||||
''str''}, ''coworker'': {''description'': ''The role/name of the coworker to
|
||||
ask'', ''type'': ''str''}}\nTool Description: Ask a specific question to one
|
||||
of the following coworkers: Senior Writer\nThe input to this tool should be
|
||||
the coworker, the question you have for them, and ALL necessary context to ask
|
||||
the question properly, they know nothing about the question, so share absolute
|
||||
everything you know, don''t reference things but instead explain them.\n\nUse
|
||||
the following format:\n\nThought: you should always think about what to do\nAction:
|
||||
the action to take, only one name of [Test Tool, Delegate work to coworker,
|
||||
Ask question to coworker], just the name, exactly as it''s written.\nAction
|
||||
Input: the input to the action, just a simple python dictionary, enclosed in
|
||||
curly braces, using \" to wrap keys and values.\nObservation: the result of
|
||||
the action\n\nOnce all necessary information is gathered:\n\nThought: I now
|
||||
know the final answer\nFinal Answer: the final answer to the original input
|
||||
question"}, {"role": "user", "content": "\nCurrent Task: Produce and amazing
|
||||
1 paragraph draft of an article about AI Agents.\n\nThis is the expect criteria
|
||||
for your final answer: A 4 paragraph article about AI.\nyou MUST return the
|
||||
actual complete content as the final answer, not a summary.\n\nBegin! This is
|
||||
VERY important to you, use the tools available and give your best Final Answer,
|
||||
your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"],
|
||||
"stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '2892'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AhLQELAjJpn76wiLmWBinm3sqf32l\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1734893814,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"I need to gather information and insights
|
||||
to ensure the Senior Writer produces a high-quality draft about AI Agents, which
|
||||
will then serve as a foundation for the complete article.\\n\\nAction: Ask question
|
||||
to coworker \\nAction Input: {\\\"question\\\":\\\"Can you provide a detailed
|
||||
overview of what AI Agents are, their functionalities, and their applications
|
||||
in real-world scenarios? Please include examples of how they are being used
|
||||
in various industries, and discuss their potential impact on the future of technology
|
||||
and society.\\\",\\\"context\\\":\\\"We are looking to create a comprehensive
|
||||
understanding of AI Agents as part of a four-paragraph article. This will help
|
||||
generate a high-quality draft for the article.\\\",\\\"coworker\\\":\\\"Senior
|
||||
Writer\\\"} \",\n \"refusal\": null\n },\n \"logprobs\": null,\n
|
||||
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
|
||||
604,\n \"completion_tokens\": 138,\n \"total_tokens\": 742,\n \"prompt_tokens_details\":
|
||||
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
|
||||
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
|
||||
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f6255a1bf08a519-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Sun, 22 Dec 2024 18:56:56 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=rVquIlcnYXc7wMkbKG5Ii90HxfQ_ukNJjDgSHhsWb1k-1734893816-1.0.1.1-33qDl8KNWcxLAGBuPhT8FrZ6QUnEy9oOYh2Fp2hIjDnF.cQlyrgxiWcuHljTTxG_mH7eQrf1AHJ6p8sxZJZ30A;
path=/; expires=Sun, 22-Dec-24 19:26:56 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=xqgLJsf3h5lBKZFTADRNNUqizeChNBBFoLvWiR2WPnw-1734893816555-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '2340'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999305'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_53956b48bd1188451efc104e8a234ef4
http_version: HTTP/1.1
status_code: 200
- request:
body: !!binary |
CrEMCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkSiAwKEgoQY3Jld2FpLnRl
|
||||
bGVtZXRyeRLgCQoQg/HA64g3phKbzz/hvUtbahIIpu+Csq+uWc0qDENyZXcgQ3JlYXRlZDABOSDm
|
||||
Uli7lBMYQcgRXFi7lBMYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoaCg5weXRob25fdmVy
|
||||
c2lvbhIICgYzLjExLjdKLgoIY3Jld19rZXkSIgogZTY0OTU3M2EyNmU1ODc5MGNhYzIxYTM3Y2Q0
|
||||
NDQzN2FKMQoHY3Jld19pZBImCiRhMWFjMTc0Ny0xMTA0LTRlZjItODZkNi02ZGRhNTFmMDlmMTdK
|
||||
HAoMY3Jld19wcm9jZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdf
|
||||
bnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgCSooFCgtjcmV3
|
||||
X2FnZW50cxL6BAr3BFt7ImtleSI6ICIzMjgyMTdiNmMyOTU5YmRmYzQ3Y2FkMDBlODQ4OTBkMCIs
|
||||
ICJpZCI6ICI4YWUwNGY0Yy0wMjNiLTRkNWQtODAwZC02ZjlkMWFmMWExOTkiLCAicm9sZSI6ICJD
|
||||
RU8iLCAidmVyYm9zZT8iOiBmYWxzZSwgIm1heF9pdGVyIjogMjAsICJtYXhfcnBtIjogbnVsbCwg
|
||||
ImZ1bmN0aW9uX2NhbGxpbmdfbGxtIjogIiIsICJsbG0iOiAiZ3B0LTRvLW1pbmkiLCAiZGVsZWdh
|
||||
dGlvbl9lbmFibGVkPyI6IHRydWUsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1h
|
||||
eF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtdfSwgeyJrZXkiOiAiOWE1MDE1ZWY0
|
||||
ODk1ZGM2Mjc4ZDU0ODE4YmE0NDZhZjciLCAiaWQiOiAiNDg2MWQ4YTMtMjMxYS00Mzc5LTk2ZmEt
|
||||
MWQwZmQyZDI1MGYxIiwgInJvbGUiOiAiU2VuaW9yIFdyaXRlciIsICJ2ZXJib3NlPyI6IGZhbHNl
|
||||
LCAibWF4X2l0ZXIiOiAyMCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0i
|
||||
OiAiIiwgImxsbSI6ICJncHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2Us
|
||||
ICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0
|
||||
b29sc19uYW1lcyI6IFtdfV1KgwIKCmNyZXdfdGFza3MS9AEK8QFbeyJrZXkiOiAiMGI5ZDY1ZGI2
|
||||
YjdhZWRmYjM5OGM1OWUyYTlmNzFlYzUiLCAiaWQiOiAiY2IyMmIxMzctZTA3ZC00NDA5LWI5NmMt
|
||||
ZWQ2ZDU3MjFhNDNiIiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6
|
||||
IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJDRU8iLCAiYWdlbnRfa2V5IjogIjMyODIxN2I2YzI5NTli
|
||||
ZGZjNDdjYWQwMGU4NDg5MGQwIiwgInRvb2xzX25hbWVzIjogWyJ0ZXN0IHRvb2wiXX1degIYAYUB
|
||||
AAEAABKOAgoQB7Z9AEDI9OTStHqguBSbLxIIj9dttVFJs9cqDFRhc2sgQ3JlYXRlZDABOYDae1i7
|
||||
lBMYQeBHfFi7lBMYSi4KCGNyZXdfa2V5EiIKIGU2NDk1NzNhMjZlNTg3OTBjYWMyMWEzN2NkNDQ0
|
||||
MzdhSjEKB2NyZXdfaWQSJgokYTFhYzE3NDctMTEwNC00ZWYyLTg2ZDYtNmRkYTUxZjA5ZjE3Si4K
|
||||
CHRhc2tfa2V5EiIKIDBiOWQ2NWRiNmI3YWVkZmIzOThjNTllMmE5ZjcxZWM1SjEKB3Rhc2tfaWQS
|
||||
JgokY2IyMmIxMzctZTA3ZC00NDA5LWI5NmMtZWQ2ZDU3MjFhNDNiegIYAYUBAAEAAA==
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '1588'
Content-Type:
- application/x-protobuf
User-Agent:
- OTel-OTLP-Exporter-Python/1.27.0
method: POST
uri: https://telemetry.crewai.com:4319/v1/traces
response:
body:
string: "\n\0"
headers:
Content-Length:
- '2'
Content-Type:
- application/x-protobuf
Date:
- Sun, 22 Dec 2024 18:56:59 GMT
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Senior Writer. You''re
|
||||
a senior writer, specialized in technology, software engineering, AI and startups.
|
||||
You work as a freelancer and are now working on writing content for a new customer.\nYour
|
||||
personal goal is: Write the best content about AI and AI agents.\nTo give my
|
||||
best complete final answer to the task use the exact following format:\n\nThought:
|
||||
I now can give a great answer\nFinal Answer: Your final answer must be the great
|
||||
and the most complete as possible, it must be outcome described.\n\nI MUST use
|
||||
these formats, my job depends on it!"}, {"role": "user", "content": "\nCurrent
|
||||
Task: Can you provide a detailed overview of what AI Agents are, their functionalities,
|
||||
and their applications in real-world scenarios? Please include examples of how
|
||||
they are being used in various industries, and discuss their potential impact
|
||||
on the future of technology and society.\n\nThis is the expect criteria for
|
||||
your final answer: Your best answer to your coworker asking you this, accounting
|
||||
for the context shared.\nyou MUST return the actual complete content as the
|
||||
final answer, not a summary.\n\nThis is the context you''re working with:\nWe
|
||||
are looking to create a comprehensive understanding of AI Agents as part of
|
||||
a four-paragraph article. This will help generate a high-quality draft for the
|
||||
article.\n\nBegin! This is VERY important to you, use the tools available and
|
||||
give your best Final Answer, your job depends on it!\n\nThought:"}], "model":
|
||||
"gpt-4o-mini", "stop": ["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1572'
content-type:
- application/json
cookie:
- __cf_bm=rVquIlcnYXc7wMkbKG5Ii90HxfQ_ukNJjDgSHhsWb1k-1734893816-1.0.1.1-33qDl8KNWcxLAGBuPhT8FrZ6QUnEy9oOYh2Fp2hIjDnF.cQlyrgxiWcuHljTTxG_mH7eQrf1AHJ6p8sxZJZ30A;
_cfuvid=xqgLJsf3h5lBKZFTADRNNUqizeChNBBFoLvWiR2WPnw-1734893816555-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AhLQG5ubl99yeBYm6TTV0sodagMND\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1734893816,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
|
||||
Answer: \\n\\n**Overview of AI Agents** \\nAI agents are advanced software
|
||||
systems designed to autonomously perform tasks, make decisions, and learn from
|
||||
their environments without needing constant human intervention. They leverage
|
||||
machine learning, natural language processing, and various AI techniques to
|
||||
simulate human-like understanding and autonomy. These agents can be categorized
|
||||
into three types: reactive agents (which operate purely based on their environment),
|
||||
deliberative agents (which can make decisions based on reasoning), and hybrid
|
||||
agents that incorporate aspects of both types. Their ability to adapt and learn
|
||||
over time makes them instrumental in automating processes across various domains.\\n\\n**Functionalities
|
||||
of AI Agents** \\nThe core functionalities of AI agents include perception,
|
||||
action, learning, and interaction. They perceive data through sensors or data
|
||||
feeds, process information through algorithms, and take actions based on this
|
||||
data. Machine learning allows them to refine their performance over time by
|
||||
analyzing outcomes and adjusting their strategies accordingly. Interaction capabilities
|
||||
enable them to communicate with users, providing insights, answering queries,
|
||||
or even negotiating in some cases. These functionalities make AI agents invaluable
|
||||
for tasks such as predictive analytics, personal assistance, and real-time decision-making
|
||||
in complex systems.\\n\\n**Applications in Various Industries** \\nAI agents
|
||||
are already being utilized across multiple industries, demonstrating their versatility
|
||||
and efficiency. In healthcare, AI agents assist in diagnostics by analyzing
|
||||
medical records and suggesting treatment plans tailored to individual patients.
|
||||
In finance, they power robo-advisors that manage investment portfolios, automate
|
||||
trading strategies, and provide financial advice based on real-time market analysis.
|
||||
Furthermore, in customer service, AI chatbots serve as virtual assistants, enhancing
|
||||
user experience by providing instant support and resolving queries without human
|
||||
intervention. The logistics and supply chain industries have also seen AI agents
|
||||
optimize inventory management and route planning, significantly improving operational
|
||||
efficiency.\\n\\n**Future Impact on Technology and Society** \\nThe ongoing
|
||||
development of AI agents is poised to have a profound impact on technology and
|
||||
society. As these agents become more sophisticated, we can anticipate a shift
|
||||
towards increased automation in both professional and personal spheres, leading
|
||||
to enhanced productivity and new business models. However, this automation introduces
|
||||
challenges such as job displacement and ethical considerations regarding decision-making
|
||||
by AI. It is essential to foster an ongoing dialogue on the implications of
|
||||
AI agents to ensure responsible development and integration into our daily lives.
|
||||
As AI agents continue to evolve, they will undoubtedly play a pivotal role in
|
||||
shaping the future of technology and its intersection with societal dynamics,
|
||||
making it critical for us to engage thoughtfully with this emerging paradigm.\",\n
|
||||
\ \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
|
||||
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 289,\n \"completion_tokens\":
|
||||
506,\n \"total_tokens\": 795,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
|
||||
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
|
||||
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
|
||||
\"fp_0aa8d3e20b\"\n}\n"
|
||||
headers:
|
||||
CF-Cache-Status:
|
||||
- DYNAMIC
|
||||
CF-RAY:
|
||||
- 8f6255b1f832a519-GRU
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Sun, 22 Dec 2024 18:57:04 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '7836'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999630'
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_c14268346d6ce72ceea4b1472f73c5ae
|
||||
http_version: HTTP/1.1
|
||||
status_code: 200
|
||||
- request:
|
||||
body: !!binary |
|
||||
CtsBCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkSsgEKEgoQY3Jld2FpLnRl
|
||||
bGVtZXRyeRKbAQoQ7U/1ZgBSTkCXtesUNPA2URIIrnRWFVT58Z8qClRvb2wgVXNhZ2UwATl4lN3i
|
||||
vZQTGEGgevzivZQTGEoaCg5jcmV3YWlfdmVyc2lvbhIICgYwLjg2LjBKJwoJdG9vbF9uYW1lEhoK
|
||||
GEFzayBxdWVzdGlvbiB0byBjb3dvcmtlckoOCghhdHRlbXB0cxICGAF6AhgBhQEAAQAA
|
||||
headers:
|
||||
Accept:
|
||||
- '*/*'
|
||||
Accept-Encoding:
|
||||
- gzip, deflate
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Length:
|
||||
- '222'
|
||||
Content-Type:
|
||||
- application/x-protobuf
|
||||
User-Agent:
|
||||
- OTel-OTLP-Exporter-Python/1.27.0
|
||||
method: POST
|
||||
uri: https://telemetry.crewai.com:4319/v1/traces
|
||||
response:
|
||||
body:
|
||||
string: "\n\0"
|
||||
headers:
|
||||
Content-Length:
|
||||
- '2'
|
||||
Content-Type:
|
||||
- application/x-protobuf
|
||||
Date:
|
||||
- Sun, 22 Dec 2024 18:57:09 GMT
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are CEO. You''re an long
|
||||
time CEO of a content creation agency with a Senior Writer on the team. You''re
|
||||
now working on a new project and want to make sure the content produced is amazing.\nYour
|
||||
personal goal is: Make sure the writers in your company produce amazing content.\nYou
|
||||
ONLY have access to the following tools, and should NEVER make up tools that
|
||||
are not listed here:\n\nTool Name: Test Tool\nTool Arguments: {''query'': {''description'':
|
||||
''Query to process'', ''type'': ''str''}}\nTool Description: A test tool that
|
||||
just returns the input\nTool Name: Delegate work to coworker\nTool Arguments:
|
||||
{''task'': {''description'': ''The task to delegate'', ''type'': ''str''}, ''context'':
|
||||
{''description'': ''The context for the task'', ''type'': ''str''}, ''coworker'':
|
||||
{''description'': ''The role/name of the coworker to delegate to'', ''type'':
|
||||
''str''}}\nTool Description: Delegate a specific task to one of the following
|
||||
coworkers: Senior Writer\nThe input to this tool should be the coworker, the
|
||||
task you want them to do, and ALL necessary context to execute the task, they
|
||||
know nothing about the task, so share absolute everything you know, don''t reference
|
||||
things but instead explain them.\nTool Name: Ask question to coworker\nTool
|
||||
Arguments: {''question'': {''description'': ''The question to ask'', ''type'':
|
||||
''str''}, ''context'': {''description'': ''The context for the question'', ''type'':
|
||||
''str''}, ''coworker'': {''description'': ''The role/name of the coworker to
|
||||
ask'', ''type'': ''str''}}\nTool Description: Ask a specific question to one
|
||||
of the following coworkers: Senior Writer\nThe input to this tool should be
|
||||
the coworker, the question you have for them, and ALL necessary context to ask
|
||||
the question properly, they know nothing about the question, so share absolute
|
||||
everything you know, don''t reference things but instead explain them.\n\nUse
|
||||
the following format:\n\nThought: you should always think about what to do\nAction:
|
||||
the action to take, only one name of [Test Tool, Delegate work to coworker,
|
||||
Ask question to coworker], just the name, exactly as it''s written.\nAction
|
||||
Input: the input to the action, just a simple python dictionary, enclosed in
|
||||
curly braces, using \" to wrap keys and values.\nObservation: the result of
|
||||
the action\n\nOnce all necessary information is gathered:\n\nThought: I now
|
||||
know the final answer\nFinal Answer: the final answer to the original input
|
||||
question"}, {"role": "user", "content": "\nCurrent Task: Produce and amazing
|
||||
1 paragraph draft of an article about AI Agents.\n\nThis is the expect criteria
|
||||
for your final answer: A 4 paragraph article about AI.\nyou MUST return the
|
||||
actual complete content as the final answer, not a summary.\n\nBegin! This is
|
||||
VERY important to you, use the tools available and give your best Final Answer,
|
||||
your job depends on it!\n\nThought:"}, {"role": "assistant", "content": "I need
|
||||
to gather information and insights to ensure the Senior Writer produces a high-quality
|
||||
draft about AI Agents, which will then serve as a foundation for the complete
|
||||
article.\n\nAction: Ask question to coworker \nAction Input: {\"question\":\"Can
|
||||
you provide a detailed overview of what AI Agents are, their functionalities,
|
||||
and their applications in real-world scenarios? Please include examples of how
|
||||
they are being used in various industries, and discuss their potential impact
|
||||
on the future of technology and society.\",\"context\":\"We are looking to create
|
||||
a comprehensive understanding of AI Agents as part of a four-paragraph article.
|
||||
This will help generate a high-quality draft for the article.\",\"coworker\":\"Senior
|
||||
Writer\"} \nObservation: **Overview of AI Agents** \nAI agents are advanced
|
||||
software systems designed to autonomously perform tasks, make decisions, and
|
||||
learn from their environments without needing constant human intervention. They
|
||||
leverage machine learning, natural language processing, and various AI techniques
|
||||
to simulate human-like understanding and autonomy. These agents can be categorized
|
||||
into three types: reactive agents (which operate purely based on their environment),
|
||||
deliberative agents (which can make decisions based on reasoning), and hybrid
|
||||
agents that incorporate aspects of both types. Their ability to adapt and learn
|
||||
over time makes them instrumental in automating processes across various domains.\n\n**Functionalities
|
||||
of AI Agents** \nThe core functionalities of AI agents include perception,
|
||||
action, learning, and interaction. They perceive data through sensors or data
|
||||
feeds, process information through algorithms, and take actions based on this
|
||||
data. Machine learning allows them to refine their performance over time by
|
||||
analyzing outcomes and adjusting their strategies accordingly. Interaction capabilities
|
||||
enable them to communicate with users, providing insights, answering queries,
|
||||
or even negotiating in some cases. These functionalities make AI agents invaluable
|
||||
for tasks such as predictive analytics, personal assistance, and real-time decision-making
|
||||
in complex systems.\n\n**Applications in Various Industries** \nAI agents are
|
||||
already being utilized across multiple industries, demonstrating their versatility
|
||||
and efficiency. In healthcare, AI agents assist in diagnostics by analyzing
|
||||
medical records and suggesting treatment plans tailored to individual patients.
|
||||
In finance, they power robo-advisors that manage investment portfolios, automate
|
||||
trading strategies, and provide financial advice based on real-time market analysis.
|
||||
Furthermore, in customer service, AI chatbots serve as virtual assistants, enhancing
|
||||
user experience by providing instant support and resolving queries without human
|
||||
intervention. The logistics and supply chain industries have also seen AI agents
|
||||
optimize inventory management and route planning, significantly improving operational
|
||||
efficiency.\n\n**Future Impact on Technology and Society** \nThe ongoing development
|
||||
of AI agents is poised to have a profound impact on technology and society.
|
||||
As these agents become more sophisticated, we can anticipate a shift towards
|
||||
increased automation in both professional and personal spheres, leading to enhanced
|
||||
productivity and new business models. However, this automation introduces challenges
|
||||
such as job displacement and ethical considerations regarding decision-making
|
||||
by AI. It is essential to foster an ongoing dialogue on the implications of
|
||||
AI agents to ensure responsible development and integration into our daily lives.
|
||||
As AI agents continue to evolve, they will undoubtedly play a pivotal role in
|
||||
shaping the future of technology and its intersection with societal dynamics,
|
||||
making it critical for us to engage thoughtfully with this emerging paradigm."}],
|
||||
"model": "gpt-4o-mini", "stop": ["\nObservation:"], "stream": false}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '6755'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=rVquIlcnYXc7wMkbKG5Ii90HxfQ_ukNJjDgSHhsWb1k-1734893816-1.0.1.1-33qDl8KNWcxLAGBuPhT8FrZ6QUnEy9oOYh2Fp2hIjDnF.cQlyrgxiWcuHljTTxG_mH7eQrf1AHJ6p8sxZJZ30A;
|
||||
_cfuvid=xqgLJsf3h5lBKZFTADRNNUqizeChNBBFoLvWiR2WPnw-1734893816555-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.52.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.52.1
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.11.7
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
content: "{\n \"id\": \"chatcmpl-AhLQOmLKuevpaWtRyyhHjHVYqvNVk\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1734893824,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"Thought: I have received a comprehensive
|
||||
overview from the Senior Writer that includes the necessary information about
|
||||
AI Agents, their functionalities, applications, and future implications. Now,
|
||||
I can compile this into a final answer following the specified format: a complete
|
||||
four-paragraph article.\\n\\nFinal Answer: \\n\\n**Overview of AI Agents** \\nAI
|
||||
agents are advanced software systems designed to autonomously perform tasks,
|
||||
make decisions, and learn from their environments without needing constant human
|
||||
intervention. They leverage machine learning, natural language processing, and
|
||||
various AI techniques to simulate human-like understanding and autonomy. These
|
||||
agents can be categorized into three types: reactive agents (which operate purely
|
||||
based on their environment), deliberative agents (which can make decisions based
|
||||
on reasoning), and hybrid agents that incorporate aspects of both types. Their
|
||||
ability to adapt and learn over time makes them instrumental in automating processes
|
||||
across various domains.\\n\\n**Functionalities of AI Agents** \\nThe core functionalities
|
||||
of AI agents include perception, action, learning, and interaction. They perceive
|
||||
data through sensors or data feeds, process information through algorithms,
|
||||
and take actions based on this data. Machine learning allows them to refine
|
||||
their performance over time by analyzing outcomes and adjusting their strategies
|
||||
accordingly. Interaction capabilities enable them to communicate with users,
|
||||
providing insights, answering queries, or even negotiating in some cases. These
|
||||
functionalities make AI agents invaluable for tasks such as predictive analytics,
|
||||
personal assistance, and real-time decision-making in complex systems.\\n\\n**Applications
|
||||
in Various Industries** \\nAI agents are already being utilized across multiple
|
||||
industries, demonstrating their versatility and efficiency. In healthcare, AI
|
||||
agents assist in diagnostics by analyzing medical records and suggesting treatment
|
||||
plans tailored to individual patients. In finance, they power robo-advisors
|
||||
that manage investment portfolios, automate trading strategies, and provide
|
||||
financial advice based on real-time market analysis. Furthermore, in customer
|
||||
service, AI chatbots serve as virtual assistants, enhancing user experience
|
||||
by providing instant support and resolving queries without human intervention.
|
||||
The logistics and supply chain industries have also seen AI agents optimize
|
||||
inventory management and route planning, significantly improving operational
|
||||
efficiency.\\n\\n**Future Impact on Technology and Society** \\nThe ongoing
|
||||
development of AI agents is poised to have a profound impact on technology and
|
||||
society. As these agents become more sophisticated, we can anticipate a shift
|
||||
towards increased automation in both professional and personal spheres, leading
|
||||
to enhanced productivity and new business models. However, this automation introduces
|
||||
challenges such as job displacement and ethical considerations regarding decision-making
|
||||
by AI. It is essential to foster an ongoing dialogue on the implications of
|
||||
AI agents to ensure responsible development and integration into our daily lives.
|
||||
As AI agents continue to evolve, they will undoubtedly play a pivotal role in
|
||||
shaping the future of technology and its intersection with societal dynamics,
|
||||
making it critical for us to engage thoughtfully with this emerging paradigm.\",\n
|
||||
\ \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
|
||||
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 1242,\n \"completion_tokens\":
|
||||
550,\n \"total_tokens\": 1792,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
|
||||
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
|
||||
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
|
||||
\"fp_0aa8d3e20b\"\n}\n"
|
||||
headers:
|
||||
CF-Cache-Status:
|
||||
- DYNAMIC
|
||||
CF-RAY:
|
||||
- 8f6255e49b37a519-GRU
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Sun, 22 Dec 2024 18:57:12 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '7562'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149998353'
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_a812bbb85b3d785660c4662212614ab9
|
||||
http_version: HTTP/1.1
|
||||
status_code: 200
|
||||
version: 1
481
tests/cassettes/test_multimodal_agent_live_image_analysis.yaml
Normal file
@@ -0,0 +1,481 @@
interactions:
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Image Analyst. You''re
|
||||
an expert at visual analysis, trained to notice and describe details in images.\nYour
|
||||
personal goal is: Analyze images with high attention to detail\nYou ONLY have
|
||||
access to the following tools, and should NEVER make up tools that are not listed
|
||||
here:\n\nTool Name: Add image to content\nTool Arguments: {''image_url'': {''description'':
|
||||
''The URL or path of the image to add'', ''type'': ''str''}, ''action'': {''description'':
|
||||
''Optional context or question about the image'', ''type'': ''str''}}\nTool
|
||||
Description: See image to understand it''s content, you can optionally ask a
|
||||
question about the image\n\nUse the following format:\n\nThought: you should
|
||||
always think about what to do\nAction: the action to take, only one name of
|
||||
[Add image to content], just the name, exactly as it''s written.\nAction Input:
|
||||
the input to the action, just a simple python dictionary, enclosed in curly
|
||||
braces, using \" to wrap keys and values.\nObservation: the result of the action\n\nOnce
|
||||
all necessary information is gathered:\n\nThought: I now know the final answer\nFinal
|
||||
Answer: the final answer to the original input question"}, {"role": "user",
|
||||
"content": "\nCurrent Task: \n Analyze the provided image and describe
|
||||
what you see in detail.\n Focus on main elements, colors, composition,
|
||||
and any notable details.\n Image: https://media.istockphoto.com/id/946087016/photo/aerial-view-of-lower-manhattan-new-york.jpg?s=612x612&w=0&k=20&c=viLiMRznQ8v5LzKTt_LvtfPFUVl1oiyiemVdSlm29_k=\n \n\nThis
|
||||
is the expect criteria for your final answer: A comprehensive description of
|
||||
the image contents.\nyou MUST return the actual complete content as the final
|
||||
answer, not a summary.\n\nBegin! This is VERY important to you, use the tools
|
||||
available and give your best Final Answer, your job depends on it!\n\nThought:"}],
|
||||
"model": "gpt-4o", "stop": ["\nObservation:"], "stream": false}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1948'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.52.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.52.1
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.11.7
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
content: "{\n \"id\": \"chatcmpl-AiuIfzzcje5KdvKIG5CkFeORroiKk\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1735266213,\n \"model\": \"gpt-4o-2024-08-06\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"Action: Add image to content\\nAction
|
||||
Input: {\\\"image_url\\\": \\\"https://media.istockphoto.com/id/946087016/photo/aerial-view-of-lower-manhattan-new-york.jpg?s=612x612&w=0&k=20&c=viLiMRznQ8v5LzKTt_LvtfPFUVl1oiyiemVdSlm29_k=\\\",
|
||||
\\\"action\\\": \\\"Analyze the provided image and describe what you see in
|
||||
detail.\\\"}\",\n \"refusal\": null\n },\n \"logprobs\": null,\n
|
||||
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
|
||||
417,\n \"completion_tokens\": 103,\n \"total_tokens\": 520,\n \"prompt_tokens_details\":
|
||||
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
|
||||
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
|
||||
\"fp_5f20662549\"\n}\n"
|
||||
headers:
|
||||
CF-Cache-Status:
|
||||
- DYNAMIC
|
||||
CF-RAY:
|
||||
- 8f85d96b280df217-GRU
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Fri, 27 Dec 2024 02:23:35 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=kJ1pw1xjCMSxjHSS8iJC5z_j2PZxl.i387KCpj9xNZU-1735266215-1.0.1.1-Ybg0wVTsrBlpVZmtQyA1ullY8m3v2Ix0N_SYlhr9z7zKfbLeqGZEVL37YSY.dvIiLVY3XPZzMtG8Xwo6UucW6A;
|
||||
path=/; expires=Fri, 27-Dec-24 02:53:35 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
- _cfuvid=v_wJZ5m7qCjrnRfks0gT2GAk9yR14BdIDAQiQR7xxI8-1735266215000-0.0.1.1-604800000;
|
||||
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '1212'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29999539'
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_663a2b18099a18361d6b02befc175289
|
||||
http_version: HTTP/1.1
|
||||
status_code: 200
|
||||
- request:
|
||||
body: !!binary |
|
||||
Co4LCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkS5QoKEgoQY3Jld2FpLnRl
|
||||
bGVtZXRyeRKjBwoQHmzzumMNXHOgpJ4zCIxJSxII72WnLlLfRyYqDENyZXcgQ3JlYXRlZDABOQjB
|
||||
gFxt5xQYQYhMiVxt5xQYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoaCg5weXRob25fdmVy
|
||||
c2lvbhIICgYzLjExLjdKLgoIY3Jld19rZXkSIgogZTM5NTY3YjUwNTI5MDljYTMzNDA5ODRiODM4
|
||||
OTgwZWFKMQoHY3Jld19pZBImCiQ4MDA0YTA1NC0zYjNkLTQ4OGEtYTlkNC1kZWQzMDVhMDIxY2FK
|
||||
HAoMY3Jld19wcm9jZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdf
|
||||
bnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgBSs4CCgtjcmV3
|
||||
X2FnZW50cxK+Agq7Alt7ImtleSI6ICI5ZGM4Y2NlMDMwNDY4MTk2MDQxYjRjMzgwYjYxN2NiMCIs
|
||||
ICJpZCI6ICJjNTZhZGI2Mi1lMGIwLTQzYzAtYmQ4OC0xYzEwYTNhNmU5NDQiLCAicm9sZSI6ICJJ
|
||||
bWFnZSBBbmFseXN0IiwgInZlcmJvc2U/IjogdHJ1ZSwgIm1heF9pdGVyIjogMjAsICJtYXhfcnBt
|
||||
IjogbnVsbCwgImZ1bmN0aW9uX2NhbGxpbmdfbGxtIjogIiIsICJsbG0iOiAiZ3B0LTRvIiwgImRl
|
||||
bGVnYXRpb25fZW5hYmxlZD8iOiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZhbHNl
|
||||
LCAibWF4X3JldHJ5X2xpbWl0IjogMiwgInRvb2xzX25hbWVzIjogW119XUqCAgoKY3Jld190YXNr
|
||||
cxLzAQrwAVt7ImtleSI6ICJhOWE3NmNhNjk1N2QwYmZmYTY5ZWFiMjBiNjY0ODIyYiIsICJpZCI6
|
||||
ICJhNzFiZDllNC0wNzdkLTRmMTQtODg0MS03MGMwZWM4MGZkMmMiLCAiYXN5bmNfZXhlY3V0aW9u
|
||||
PyI6IGZhbHNlLCAiaHVtYW5faW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjogIkltYWdlIEFu
|
||||
YWx5c3QiLCAiYWdlbnRfa2V5IjogIjlkYzhjY2UwMzA0NjgxOTYwNDFiNGMzODBiNjE3Y2IwIiwg
|
||||
InRvb2xzX25hbWVzIjogW119XXoCGAGFAQABAAASjgIKEOZ5pMdq9ep85DrP1Vv8Y8MSCE7ahOkm
|
||||
2IDHKgxUYXNrIENyZWF0ZWQwATlIg85cbecUGEGQ9M5cbecUGEouCghjcmV3X2tleRIiCiBlMzk1
|
||||
NjdiNTA1MjkwOWNhMzM0MDk4NGI4Mzg5ODBlYUoxCgdjcmV3X2lkEiYKJDgwMDRhMDU0LTNiM2Qt
|
||||
NDg4YS1hOWQ0LWRlZDMwNWEwMjFjYUouCgh0YXNrX2tleRIiCiBhOWE3NmNhNjk1N2QwYmZmYTY5
|
||||
ZWFiMjBiNjY0ODIyYkoxCgd0YXNrX2lkEiYKJGE3MWJkOWU0LTA3N2QtNGYxNC04ODQxLTcwYzBl
|
||||
YzgwZmQyY3oCGAGFAQABAAASlwEKECyaQQK8JkKLh6S2mWHTeDgSCPWCpr7v9CQZKgpUb29sIFVz
|
||||
YWdlMAE5MLyst23nFBhBOJy/t23nFBhKGgoOY3Jld2FpX3ZlcnNpb24SCAoGMC44Ni4wSiMKCXRv
|
||||
b2xfbmFtZRIWChRBZGQgaW1hZ2UgdG8gY29udGVudEoOCghhdHRlbXB0cxICGAF6AhgBhQEAAQAA
|
||||
headers:
|
||||
Accept:
|
||||
- '*/*'
|
||||
Accept-Encoding:
|
||||
- gzip, deflate
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Length:
|
||||
- '1425'
|
||||
Content-Type:
|
||||
- application/x-protobuf
|
||||
User-Agent:
|
||||
- OTel-OTLP-Exporter-Python/1.27.0
|
||||
method: POST
|
||||
uri: https://telemetry.crewai.com:4319/v1/traces
|
||||
response:
|
||||
body:
|
||||
string: "\n\0"
|
||||
headers:
|
||||
Content-Length:
|
||||
- '2'
|
||||
Content-Type:
|
||||
- application/x-protobuf
|
||||
Date:
|
||||
- Fri, 27 Dec 2024 02:23:39 GMT
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Image Analyst. You''re
|
||||
an expert at visual analysis, trained to notice and describe details in images.\nYour
|
||||
personal goal is: Analyze images with high attention to detail\nYou ONLY have
|
||||
access to the following tools, and should NEVER make up tools that are not listed
|
||||
here:\n\nTool Name: Add image to content\nTool Arguments: {''image_url'': {''description'':
|
||||
''The URL or path of the image to add'', ''type'': ''str''}, ''action'': {''description'':
|
||||
''Optional context or question about the image'', ''type'': ''str''}}\nTool
|
||||
Description: See image to understand it''s content, you can optionally ask a
|
||||
question about the image\n\nUse the following format:\n\nThought: you should
|
||||
always think about what to do\nAction: the action to take, only one name of
|
||||
[Add image to content], just the name, exactly as it''s written.\nAction Input:
|
||||
the input to the action, just a simple python dictionary, enclosed in curly
|
||||
braces, using \" to wrap keys and values.\nObservation: the result of the action\n\nOnce
|
||||
all necessary information is gathered:\n\nThought: I now know the final answer\nFinal
|
||||
Answer: the final answer to the original input question"}, {"role": "user",
|
||||
"content": "\nCurrent Task: \n Analyze the provided image and describe
|
||||
what you see in detail.\n Focus on main elements, colors, composition,
|
||||
and any notable details.\n Image: https://media.istockphoto.com/id/946087016/photo/aerial-view-of-lower-manhattan-new-york.jpg?s=612x612&w=0&k=20&c=viLiMRznQ8v5LzKTt_LvtfPFUVl1oiyiemVdSlm29_k=\n \n\nThis
|
||||
is the expect criteria for your final answer: A comprehensive description of
|
||||
the image contents.\nyou MUST return the actual complete content as the final
|
||||
answer, not a summary.\n\nBegin! This is VERY important to you, use the tools
|
||||
available and give your best Final Answer, your job depends on it!\n\nThought:"},
|
||||
{"role": "user", "content": [{"type": "text", "text": "Analyze the provided
|
||||
image and describe what you see in detail."}, {"type": "image_url", "image_url":
|
||||
{"url": "https://media.istockphoto.com/id/946087016/photo/aerial-view-of-lower-manhattan-new-york.jpg?s=612x612&w=0&k=20&c=viLiMRznQ8v5LzKTt_LvtfPFUVl1oiyiemVdSlm29_k="}}]}],
|
||||
"model": "gpt-4o", "stop": ["\nObservation:"], "stream": false}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '2279'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=kJ1pw1xjCMSxjHSS8iJC5z_j2PZxl.i387KCpj9xNZU-1735266215-1.0.1.1-Ybg0wVTsrBlpVZmtQyA1ullY8m3v2Ix0N_SYlhr9z7zKfbLeqGZEVL37YSY.dvIiLVY3XPZzMtG8Xwo6UucW6A;
|
||||
_cfuvid=v_wJZ5m7qCjrnRfks0gT2GAk9yR14BdIDAQiQR7xxI8-1735266215000-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.52.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.52.1
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.11.7
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
content: "{\n \"id\": \"chatcmpl-AiuIiqT33ROFMdw1gNmqH9jiw6PfF\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1735266216,\n \"model\": \"gpt-4o-2024-08-06\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"The image is an aerial view of Lower
|
||||
Manhattan in New York City. \\n\\nMain Elements:\\n- The One World Trade Center
|
||||
tower stands prominently, distinguishable by its sleek, tapering structure reaching
|
||||
into the sky, surrounded by other skyscrapers.\\n- Skyscrapers in varying heights
|
||||
and architectural styles, fill the densely packed urban landscape.\\n- A waterfront
|
||||
is visible at the edges, with docks and piers extending into the water.\\n\\nColors:\\n-
|
||||
The buildings exhibit a mix of colors, predominantly grays, whites, and browns,
|
||||
against the blues of the sky and water.\\n- There's a section of greenery visible,
|
||||
likely a park or recreational space, offering a contrast with its vibrant green
|
||||
hues.\\n\\nComposition:\\n- The angle of the photograph showcases the expanse
|
||||
of the city, highlighting the density and scale of the buildings.\\n- Water
|
||||
borders the city on two prominent sides, creating a natural boundary and enhancing
|
||||
the island's urban island feel.\\n\\nNotable Details:\\n- The image captures
|
||||
the iconic layout of Manhattan, with the surrounding Hudson River and New York
|
||||
Harbor visible in the background.\\n- Beyond Lower Manhattan, more of the cityscape
|
||||
stretches into the distance, illustrating the vastness of New York City.\\n-
|
||||
The day appears clear and sunny, with shadows casting from the buildings, indicating
|
||||
time in the morning or late afternoon.\\n\\nOverall, the image is a striking
|
||||
depiction of the dynamic and bustling environment of New York's Lower Manhattan,
|
||||
encapsulating its urban character and proximity to the water.\",\n \"refusal\":
|
||||
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
|
||||
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 858,\n \"completion_tokens\":
|
||||
295,\n \"total_tokens\": 1153,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
|
||||
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
|
||||
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
|
||||
\"fp_5f20662549\"\n}\n"
|
||||
headers:
|
||||
CF-Cache-Status:
|
||||
- DYNAMIC
|
||||
CF-RAY:
|
||||
- 8f85d9741d0cf217-GRU
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Fri, 27 Dec 2024 02:23:40 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '5136'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
x-ratelimit-limit-input-images:
|
||||
- '50000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-input-images:
|
||||
- '49999'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29998756'
|
||||
x-ratelimit-reset-input-images:
|
||||
- 1ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 2ms
|
||||
x-request-id:
|
||||
- req_57a7430712d4ff4a81f600ffb94d3b6e
|
||||
http_version: HTTP/1.1
|
||||
status_code: 200
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Image Analyst. You''re
|
||||
an expert at visual analysis, trained to notice and describe details in images.\nYour
|
||||
personal goal is: Analyze images with high attention to detail\nYou ONLY have
|
||||
access to the following tools, and should NEVER make up tools that are not listed
|
||||
here:\n\nTool Name: Add image to content\nTool Arguments: {''image_url'': {''description'':
|
||||
''The URL or path of the image to add'', ''type'': ''str''}, ''action'': {''description'':
|
||||
''Optional context or question about the image'', ''type'': ''str''}}\nTool
|
||||
Description: See image to understand it''s content, you can optionally ask a
|
||||
question about the image\n\nUse the following format:\n\nThought: you should
|
||||
always think about what to do\nAction: the action to take, only one name of
|
||||
[Add image to content], just the name, exactly as it''s written.\nAction Input:
|
||||
the input to the action, just a simple python dictionary, enclosed in curly
|
||||
braces, using \" to wrap keys and values.\nObservation: the result of the action\n\nOnce
|
||||
all necessary information is gathered:\n\nThought: I now know the final answer\nFinal
|
||||
Answer: the final answer to the original input question"}, {"role": "user",
|
||||
"content": "\nCurrent Task: \n Analyze the provided image and describe
|
||||
what you see in detail.\n Focus on main elements, colors, composition,
|
||||
and any notable details.\n Image: https://media.istockphoto.com/id/946087016/photo/aerial-view-of-lower-manhattan-new-york.jpg?s=612x612&w=0&k=20&c=viLiMRznQ8v5LzKTt_LvtfPFUVl1oiyiemVdSlm29_k=\n \n\nThis
|
||||
is the expect criteria for your final answer: A comprehensive description of
|
||||
the image contents.\nyou MUST return the actual complete content as the final
|
||||
answer, not a summary.\n\nBegin! This is VERY important to you, use the tools
|
||||
available and give your best Final Answer, your job depends on it!\n\nThought:"},
|
||||
{"role": "user", "content": [{"type": "text", "text": "Analyze the provided
|
||||
image and describe what you see in detail."}, {"type": "image_url", "image_url":
|
||||
{"url": "https://media.istockphoto.com/id/946087016/photo/aerial-view-of-lower-manhattan-new-york.jpg?s=612x612&w=0&k=20&c=viLiMRznQ8v5LzKTt_LvtfPFUVl1oiyiemVdSlm29_k="}}]},
|
||||
{"role": "user", "content": "I did it wrong. Invalid Format: I missed the ''Action:''
|
||||
after ''Thought:''. I will do right next, and don''t use a tool I have already
|
||||
used.\n\nIf you don''t need to use any more tools, you must give your best complete
|
||||
final answer, make sure it satisfies the expected criteria, use the EXACT format
|
||||
below:\n\nThought: I now can give a great answer\nFinal Answer: my best complete
|
||||
final answer to the task.\n\n"}], "model": "gpt-4o", "stop": ["\nObservation:"],
|
||||
"stream": false}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '2717'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- __cf_bm=kJ1pw1xjCMSxjHSS8iJC5z_j2PZxl.i387KCpj9xNZU-1735266215-1.0.1.1-Ybg0wVTsrBlpVZmtQyA1ullY8m3v2Ix0N_SYlhr9z7zKfbLeqGZEVL37YSY.dvIiLVY3XPZzMtG8Xwo6UucW6A;
|
||||
_cfuvid=v_wJZ5m7qCjrnRfks0gT2GAk9yR14BdIDAQiQR7xxI8-1735266215000-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.52.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.52.1
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.11.7
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
content: "{\n \"id\": \"chatcmpl-AiuInuYNldaQVo6B1EsEquT1VFMN7\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1735266221,\n \"model\": \"gpt-4o-2024-08-06\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"Thought: I now can give a great answer\\nFinal
|
||||
Answer: The image is an aerial view of Lower Manhattan in New York City. The
|
||||
photograph prominently features the cluster of skyscrapers that characterizes
|
||||
the area, with One World Trade Center standing out as a particularly tall and
|
||||
iconic structure. The buildings vary in color, with shades of glassy blue, grey,
|
||||
and natural stone dominating the skyline. In the bottom part of the image, there
|
||||
is a green space, likely Battery Park, providing a stark contrast to the dense
|
||||
urban environment, with trees and pathways visible. The water surrounding Manhattan
|
||||
is a deep blue, and several piers jut into the harbor. The Hudson River is visible
|
||||
on the left, and the East River can be seen on the right, framing the island.
|
||||
The overall composition captures the bustling and vibrant nature of New York\u2019s
|
||||
financial hub, with bright sunlight illuminating the buildings, casting sharp
|
||||
shadows and enhancing the depth of the cityscape. The sky is clear, suggesting
|
||||
a sunny day with good visibility.\",\n \"refusal\": null\n },\n
|
||||
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
|
||||
\ \"usage\": {\n \"prompt_tokens\": 952,\n \"completion_tokens\": 203,\n
|
||||
\ \"total_tokens\": 1155,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
|
||||
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
|
||||
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
|
||||
\"fp_5f20662549\"\n}\n"
|
||||
headers:
|
||||
CF-Cache-Status:
|
||||
- DYNAMIC
|
||||
CF-RAY:
|
||||
- 8f85d995ad1ef217-GRU
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Fri, 27 Dec 2024 02:23:43 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '3108'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
x-ratelimit-limit-input-images:
|
||||
- '50000'
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '30000000'
|
||||
x-ratelimit-remaining-input-images:
|
||||
- '49999'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '29998656'
|
||||
x-ratelimit-reset-input-images:
|
||||
- 1ms
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 2ms
|
||||
x-request-id:
|
||||
- req_45f0e3d457a18f973a59074d16f137b6
|
||||
http_version: HTTP/1.1
|
||||
status_code: 200
|
||||
version: 1
569
tests/cassettes/test_task_tools_override_agent_tools.yaml
Normal file
@@ -0,0 +1,569 @@
interactions:
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Researcher. You''re
|
||||
an expert researcher, specialized in technology, software engineering, AI and
|
||||
startups. You work as a freelancer and is now working on doing research and
|
||||
analysis for a new customer.\nYour personal goal is: Make the best research
|
||||
and analysis on content about AI and AI agents\nYou ONLY have access to the
|
||||
following tools, and should NEVER make up tools that are not listed here:\n\nTool
|
||||
Name: Test Tool\nTool Arguments: {''query'': {''description'': ''Query to process'',
|
||||
''type'': ''str''}}\nTool Description: A test tool that just returns the input\n\nUse
|
||||
the following format:\n\nThought: you should always think about what to do\nAction:
|
||||
the action to take, only one name of [Test Tool], just the name, exactly as
|
||||
it''s written.\nAction Input: the input to the action, just a simple python
|
||||
dictionary, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
|
||||
the result of the action\n\nOnce all necessary information is gathered:\n\nThought:
|
||||
I now know the final answer\nFinal Answer: the final answer to the original
|
||||
input question"}, {"role": "user", "content": "\nCurrent Task: Write a test
|
||||
task\n\nThis is the expect criteria for your final answer: Test output\nyou
|
||||
MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
|
||||
This is VERY important to you, use the tools available and give your best Final
|
||||
Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop":
|
||||
["\nObservation:"], "stream": false}'
|
||||
headers:
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1536'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- _cfuvid=2u_Xw.i716TDjD2vb2mvMyWxhA4q1MM1JvbrA8CNZpI-1734895557894-0.0.1.1-604800000
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.52.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
x-stainless-package-version:
|
||||
- 1.52.1
|
||||
x-stainless-raw-response:
|
||||
- 'true'
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.11.7
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
content: "{\n \"id\": \"chatcmpl-AhQfznhDMtsr58XvTuRDZoB1kxwfK\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1734914011,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"I need to come up with a suitable test
|
||||
task that meets the criteria provided. I will focus on outlining a clear and
|
||||
effective test task related to AI and AI agents.\\n\\nAction: Test Tool\\nAction
|
||||
Input: {\\\"query\\\": \\\"Create a test task that involves evaluating the performance
|
||||
of an AI agent in a given scenario, including criteria for success, tools required,
|
||||
and process for assessment.\\\"}\",\n \"refusal\": null\n },\n \"logprobs\":
|
||||
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
|
||||
298,\n \"completion_tokens\": 78,\n \"total_tokens\": 376,\n \"prompt_tokens_details\":
|
||||
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
|
||||
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
|
||||
\"fp_d02d531b47\"\n}\n"
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 8f6442b868fda486-GRU
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Encoding:
|
||||
- gzip
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 23 Dec 2024 00:33:32 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=i6jvNjhsDne300GPAeEmyiJJKYqy7OPuamFG_kht3KE-1734914012-1.0.1.1-tCeVANAF521vkXpBdgYw.ov.fYUr6t5QC4LG_DugWyzu4C60Pi2CruTVniUgfCvkcu6rdHA5DwnaEZf2jFaRCQ;
|
||||
path=/; expires=Mon, 23-Dec-24 01:03:32 GMT; domain=.api.openai.com; HttpOnly;
|
||||
Secure; SameSite=None
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '1400'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
x-ratelimit-limit-requests:
|
||||
- '30000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '150000000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '29999'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '149999642'
|
||||
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_c3e50e9ca9dc22de5572692e1a9c0f16
|
||||
http_version: HTTP/1.1
|
||||
status_code: 200
|
||||
- request:
|
||||
body: !!binary |
|
||||
CrBzCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkSh3MKEgoQY3Jld2FpLnRl
|
||||
bGVtZXRyeRLUCwoQEr8cFisEEEEUtXBvovq6lhIIYdkQ+ekBh3wqDENyZXcgQ3JlYXRlZDABOThc
|
||||
YLAZpxMYQfCuabAZpxMYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoaCg5weXRob25fdmVy
|
||||
c2lvbhIICgYzLjExLjdKLgoIY3Jld19rZXkSIgogZGUxMDFkODU1M2VhMDI0NTM3YTA4ZjgxMmVl
|
||||
NmI3NGFKMQoHY3Jld19pZBImCiRmNTc2MjViZC1jZmY3LTRlNGMtYWM1Zi0xZWFiNjQyMzJjMmRK
|
||||
HAoMY3Jld19wcm9jZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdf
|
||||
bnVtYmVyX29mX3Rhc2tzEgIYAkobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgCSpIFCgtjcmV3
|
||||
X2FnZW50cxKCBQr/BFt7ImtleSI6ICI4YmQyMTM5YjU5NzUxODE1MDZlNDFmZDljNDU2M2Q3NSIs
|
||||
ICJpZCI6ICI1Y2Y0OWVjNy05NWYzLTRkZDctODU3Mi1mODAwNDA4NjBiMjgiLCAicm9sZSI6ICJS
|
||||
ZXNlYXJjaGVyIiwgInZlcmJvc2U/IjogZmFsc2UsICJtYXhfaXRlciI6IDIwLCAibWF4X3JwbSI6
|
||||
IG51bGwsICJmdW5jdGlvbl9jYWxsaW5nX2xsbSI6ICIiLCAibGxtIjogImdwdC00by1taW5pIiwg
|
||||
ImRlbGVnYXRpb25fZW5hYmxlZD8iOiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZh
|
||||
bHNlLCAibWF4X3JldHJ5X2xpbWl0IjogMiwgInRvb2xzX25hbWVzIjogW119LCB7ImtleSI6ICI5
|
||||
YTUwMTVlZjQ4OTVkYzYyNzhkNTQ4MThiYTQ0NmFmNyIsICJpZCI6ICI0MTEyM2QzZC01NmEwLTRh
|
||||
NTgtYTljNi1mZjUwNjRmZjNmNTEiLCAicm9sZSI6ICJTZW5pb3IgV3JpdGVyIiwgInZlcmJvc2U/
|
||||
IjogZmFsc2UsICJtYXhfaXRlciI6IDIwLCAibWF4X3JwbSI6IG51bGwsICJmdW5jdGlvbl9jYWxs
|
||||
aW5nX2xsbSI6ICIiLCAibGxtIjogImdwdC00by1taW5pIiwgImRlbGVnYXRpb25fZW5hYmxlZD8i
|
||||
OiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZhbHNlLCAibWF4X3JldHJ5X2xpbWl0
|
||||
IjogMiwgInRvb2xzX25hbWVzIjogW119XUrvAwoKY3Jld190YXNrcxLgAwrdA1t7ImtleSI6ICI5
|
||||
NDRhZWYwYmFjODQwZjFjMjdiZDgzYTkzN2JjMzYxYiIsICJpZCI6ICI3ZDM2NDFhNi1hZmM4LTRj
|
||||
NmMtYjkzMy0wNGZlZjY2NjUxN2MiLCAiYXN5bmNfZXhlY3V0aW9uPyI6IGZhbHNlLCAiaHVtYW5f
|
||||
aW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjogIlJlc2VhcmNoZXIiLCAiYWdlbnRfa2V5Ijog
|
||||
IjhiZDIxMzliNTk3NTE4MTUwNmU0MWZkOWM0NTYzZDc1IiwgInRvb2xzX25hbWVzIjogW119LCB7
|
||||
ImtleSI6ICI5ZjJkNGU5M2FiNTkwYzcyNTg4NzAyNzUwOGFmOTI3OCIsICJpZCI6ICIzNTVjZjFh
|
||||
OS1lOTkzLTQxMTQtOWM0NC0yZDM5MDlhMDljNWYiLCAiYXN5bmNfZXhlY3V0aW9uPyI6IGZhbHNl
|
||||
LCAiaHVtYW5faW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjogIlNlbmlvciBXcml0ZXIiLCAi
|
||||
YWdlbnRfa2V5IjogIjlhNTAxNWVmNDg5NWRjNjI3OGQ1NDgxOGJhNDQ2YWY3IiwgInRvb2xzX25h
|
||||
bWVzIjogW119XXoCGAGFAQABAAASjgIKEHbV3nDt+ndNQNix1f+5+cASCL+l6KV3+FEpKgxUYXNr
|
||||
IENyZWF0ZWQwATmgfo+wGacTGEEQE5CwGacTGEouCghjcmV3X2tleRIiCiBkZTEwMWQ4NTUzZWEw
|
||||
MjQ1MzdhMDhmODEyZWU2Yjc0YUoxCgdjcmV3X2lkEiYKJGY1NzYyNWJkLWNmZjctNGU0Yy1hYzVm
|
||||
LTFlYWI2NDIzMmMyZEouCgh0YXNrX2tleRIiCiA5NDRhZWYwYmFjODQwZjFjMjdiZDgzYTkzN2Jj
|
||||
MzYxYkoxCgd0YXNrX2lkEiYKJDdkMzY0MWE2LWFmYzgtNGM2Yy1iOTMzLTA0ZmVmNjY2NTE3Y3oC
|
||||
GAGFAQABAAASjgIKECqDENVoAz+3ybVKR/wz7dMSCKI9ILLFYx8SKgxUYXNrIENyZWF0ZWQwATng
|
||||
63CzGacTGEE4AXKzGacTGEouCghjcmV3X2tleRIiCiBkZTEwMWQ4NTUzZWEwMjQ1MzdhMDhmODEy
|
||||
ZWU2Yjc0YUoxCgdjcmV3X2lkEiYKJGY1NzYyNWJkLWNmZjctNGU0Yy1hYzVmLTFlYWI2NDIzMmMy
|
||||
ZEouCgh0YXNrX2tleRIiCiA5ZjJkNGU5M2FiNTkwYzcyNTg4NzAyNzUwOGFmOTI3OEoxCgd0YXNr
|
||||
X2lkEiYKJDM1NWNmMWE5LWU5OTMtNDExNC05YzQ0LTJkMzkwOWEwOWM1ZnoCGAGFAQABAAAS1AsK
|
||||
EOofSLF1HDmhYMt7eIAeFo8SCCaKUQMuWNdnKgxDcmV3IENyZWF0ZWQwATkYKA62GacTGEFwlhW2
|
||||
GacTGEoaCg5jcmV3YWlfdmVyc2lvbhIICgYwLjg2LjBKGgoOcHl0aG9uX3ZlcnNpb24SCAoGMy4x
|
||||
MS43Si4KCGNyZXdfa2V5EiIKIDRlOGU0MmNmMWVhN2U2NjhhMGU5MzJhNzAyMDY1NzQ5SjEKB2Ny
|
||||
ZXdfaWQSJgokMmIzNTVjZDMtY2MwNi00Y2QxLTk0YjgtZTU5YjM5OGI3MjEzShwKDGNyZXdfcHJv
|
||||
[... base64-encoded OTLP protobuf request body truncated: recorded "Crew Created", "Task Created", and "Tool Usage" telemetry spans (crewai 0.86.0, Python 3.11.7) ...]
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '14771'
Content-Type:
- application/x-protobuf
User-Agent:
- OTel-OTLP-Exporter-Python/1.27.0
method: POST
uri: https://telemetry.crewai.com:4319/v1/traces
response:
body:
string: "\n\0"
headers:
Content-Length:
- '2'
Content-Type:
- application/x-protobuf
Date:
- Mon, 23 Dec 2024 00:33:37 GMT
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Researcher. You''re
an expert researcher, specialized in technology, software engineering, AI and
startups. You work as a freelancer and is now working on doing research and
analysis for a new customer.\nYour personal goal is: Make the best research
and analysis on content about AI and AI agents\nYou ONLY have access to the
following tools, and should NEVER make up tools that are not listed here:\n\nTool
Name: Test Tool\nTool Arguments: {''query'': {''description'': ''Query to process'',
''type'': ''str''}}\nTool Description: A test tool that just returns the input\n\nUse
the following format:\n\nThought: you should always think about what to do\nAction:
the action to take, only one name of [Test Tool], just the name, exactly as
it''s written.\nAction Input: the input to the action, just a simple python
dictionary, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
the result of the action\n\nOnce all necessary information is gathered:\n\nThought:
I now know the final answer\nFinal Answer: the final answer to the original
input question"}, {"role": "user", "content": "\nCurrent Task: Write a test
task\n\nThis is the expect criteria for your final answer: Test output\nyou
MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
This is VERY important to you, use the tools available and give your best Final
Answer, your job depends on it!\n\nThought:"}, {"role": "assistant", "content":
"I need to come up with a suitable test task that meets the criteria provided.
I will focus on outlining a clear and effective test task related to AI and
AI agents.\n\nAction: Test Tool\nAction Input: {\"query\": \"Create a test task
that involves evaluating the performance of an AI agent in a given scenario,
including criteria for success, tools required, and process for assessment.\"}\nObservation:
Processed: Create a test task that involves evaluating the performance of an
AI agent in a given scenario, including criteria for success, tools required,
and process for assessment."}], "model": "gpt-4o-mini", "stop": ["\nObservation:"],
"stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '2160'
content-type:
- application/json
cookie:
- _cfuvid=2u_Xw.i716TDjD2vb2mvMyWxhA4q1MM1JvbrA8CNZpI-1734895557894-0.0.1.1-604800000;
__cf_bm=i6jvNjhsDne300GPAeEmyiJJKYqy7OPuamFG_kht3KE-1734914012-1.0.1.1-tCeVANAF521vkXpBdgYw.ov.fYUr6t5QC4LG_DugWyzu4C60Pi2CruTVniUgfCvkcu6rdHA5DwnaEZf2jFaRCQ
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AhQg0Cx1FED7gOwMpJUD5cQxVzQBK\",\n \"object\":
\"chat.completion\",\n \"created\": 1734914012,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I need to compile a complete
and detailed test task based on the processed input, ensuring it includes all
necessary components as per the expect criteria.\\n\\nFinal Answer: \\n### Test
Task: Evaluating the Performance of an AI Agent\\n\\n**Objective:** \\nTo evaluate
the performance of an AI agent in simulating conversation with users in a customer
service context.\\n\\n**Scenario:**\\nAn AI agent is tasked with handling customer
inquiries on an e-commerce platform. The agent must respond to questions about
product availability, order tracking, returns, and other common queries.\\n\\n**Criteria
for Success:**\\n1. **Accuracy:** The AI should provide correct information
in at least 90% of the interactions.\\n2. **Response Time:** The average response
time should be under 2 seconds.\\n3. **User Satisfaction:** Aim for a user satisfaction
score of 85% or higher based on follow-up surveys after interactions.\\n4. **Fallback
Rate:** The AI should not default to a human agent more than 10% of the time.\\n\\n**Tools
Required:**\\n- Chatbot development platform (e.g., Dialogflow, Rasa)\\n- Metrics
tracking software (to measure accuracy, response times, and user satisfaction)\\n-
Survey tool (e.g., Google Forms, SurveyMonkey) for feedback collection\\n\\n**Process
for Assessment:**\\n1. **Setup:** Deploy the AI agent on a testing environment
simulating real customer inquiries.\\n2. **Data Collection:** Run the test for
a predetermined period (e.g., one week) or until a set number of interactions
(e.g., 1000).\\n3. **Measurement:**\\n - Record the interactions and analyze
the accuracy of the AI's responses.\\n - Measure the average response time
for each interaction.\\n - Collect user satisfaction scores via surveys sent
after the interaction.\\n4. **Analysis:** Compile the data to see if the AI
met the success criteria. Identify strengths and weaknesses in the responses.\\n5.
**Review:** Share findings with the development team to strategize improvements.\\n\\nThis
detailed task will help assess the AI agent\u2019s capabilities and provide
insights for further enhancements.\",\n \"refusal\": null\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 416,\n \"completion_tokens\": 422,\n
\ \"total_tokens\": 838,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_d02d531b47\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f6442c2ba15a486-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Mon, 23 Dec 2024 00:33:39 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '6734'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999497'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_7d8df8b840e279bd64280d161d854161
http_version: HTTP/1.1
status_code: 200
version: 1
@@ -332,22 +332,31 @@ def test_manager_agent_delegating_to_assigned_task_agent():
tasks=[task],
)

crew.kickoff()

# Check if the manager agent has the correct tools
assert crew.manager_agent is not None
assert crew.manager_agent.tools is not None

assert len(crew.manager_agent.tools) == 2
assert (
"Delegate a specific task to one of the following coworkers: Researcher\n"
in crew.manager_agent.tools[0].description
)
assert (
"Ask a specific question to one of the following coworkers: Researcher\n"
in crew.manager_agent.tools[1].description
mock_task_output = TaskOutput(
description="Mock description",
raw="mocked output",
agent="mocked agent"
)

# Because we are mocking execute_sync, we never hit the underlying _execute_core
# which sets the output attribute of the task
task.output = mock_task_output

with patch.object(Task, 'execute_sync', return_value=mock_task_output) as mock_execute_sync:
crew.kickoff()

# Verify execute_sync was called once
mock_execute_sync.assert_called_once()

# Get the tools argument from the call
_, kwargs = mock_execute_sync.call_args
tools = kwargs['tools']

# Verify the delegation tools were passed correctly
assert len(tools) == 2
assert any("Delegate a specific task to one of the following coworkers: Researcher" in tool.description for tool in tools)
assert any("Ask a specific question to one of the following coworkers: Researcher" in tool.description for tool in tools)


@pytest.mark.vcr(filter_headers=["authorization"])
def test_manager_agent_delegating_to_all_agents():
@@ -382,6 +391,71 @@ def test_manager_agent_delegating_to_all_agents():
|
||||
)
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_manager_agent_delegates_with_varied_role_cases():
|
||||
"""
|
||||
Test that the manager agent can delegate to agents regardless of case or whitespace variations in role names.
|
||||
This test verifies the fix for issue #1503 where role matching was too strict.
|
||||
"""
|
||||
# Create agents with varied case and whitespace in roles
|
||||
researcher_spaced = Agent(
|
||||
role=" Researcher ", # Extra spaces
|
||||
goal="Research with spaces in role",
|
||||
backstory="A researcher with spaces in role name",
|
||||
allow_delegation=False,
|
||||
)
|
||||
|
||||
writer_caps = Agent(
|
||||
role="SENIOR WRITER", # All caps
|
||||
goal="Write with caps in role",
|
||||
backstory="A writer with caps in role name",
|
||||
allow_delegation=False,
|
||||
)
|
||||
|
||||
task = Task(
|
||||
description="Research and write about AI. The researcher should do the research, and the writer should write it up.",
|
||||
expected_output="A well-researched article about AI.",
|
||||
agent=researcher_spaced, # Assign to researcher with spaces
|
||||
)
|
||||
|
||||
crew = Crew(
|
||||
agents=[researcher_spaced, writer_caps],
|
||||
process=Process.hierarchical,
|
||||
manager_llm="gpt-4o",
|
||||
tasks=[task],
|
||||
)
|
||||
|
||||
mock_task_output = TaskOutput(
|
||||
description="Mock description",
|
||||
raw="mocked output",
|
||||
agent="mocked agent"
|
||||
)
|
||||
task.output = mock_task_output
|
||||
|
||||
with patch.object(Task, 'execute_sync', return_value=mock_task_output) as mock_execute_sync:
|
||||
crew.kickoff()
|
||||
|
||||
# Verify execute_sync was called once
|
||||
mock_execute_sync.assert_called_once()
|
||||
|
||||
# Get the tools argument from the call
|
||||
_, kwargs = mock_execute_sync.call_args
|
||||
tools = kwargs['tools']
|
||||
|
||||
# Verify the delegation tools were passed correctly and can handle case/whitespace variations
|
||||
assert len(tools) == 2
|
||||
|
||||
# Check delegation tool descriptions (should work despite case/whitespace differences)
|
||||
delegation_tool = tools[0]
|
||||
question_tool = tools[1]
|
||||
|
||||
assert "Delegate a specific task to one of the following coworkers:" in delegation_tool.description
|
||||
assert " Researcher " in delegation_tool.description or "SENIOR WRITER" in delegation_tool.description
|
||||
|
||||
assert "Ask a specific question to one of the following coworkers:" in question_tool.description
|
||||
assert " Researcher " in question_tool.description or "SENIOR WRITER" in question_tool.description
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_crew_with_delegating_agents():
|
||||
tasks = [
|
||||
@@ -402,9 +476,255 @@ def test_crew_with_delegating_agents():
|
||||
|
||||
assert (
|
||||
result.raw
|
||||
== "This is the complete content as specified:\nArtificial Intelligence (AI) Agents are sophisticated computer programs designed to perform tasks that typically require human intelligence, such as decision making, problem-solving, and learning. These agents operate autonomously, utilizing vast amounts of data, advanced algorithms, and machine learning techniques to analyze their environment, adapt to new information, and improve their performance over time.\n\nThe significance of AI Agents lies in their transformative potential across various industries. In healthcare, for example, they assist in diagnosing diseases with greater accuracy and speed than human practitioners, offering personalized treatment plans by analyzing patient data. In finance, AI Agents predict market trends, manage risks, and even execute trades, contributing to more stable and profitable financial systems. Customer service sectors benefit significantly from AI Agents, as they provide personalized and efficient responses, often resolving issues faster than traditional methods.\n\nMoreover, AI Agents are also making substantial contributions in fields like education and manufacturing. In education, they offer tailored learning experiences by assessing individual student needs and adjusting teaching methods accordingly. They help educators identify students who might need additional support and provide resources to enhance learning outcomes. In manufacturing, AI Agents optimize production lines, predict equipment failures, and improve supply chain management, thus boosting productivity and reducing downtime.\n\nAs these AI-powered entities continue to evolve, they are not only enhancing operational efficiencies but also driving innovation and creating new opportunities for growth and development in every sector they penetrate. The future of AI Agents looks promising, with the potential to revolutionize the way we live and work, making processes more efficient, decisions more data-driven, and solutions more innovative than ever before."
|
||||
== "In the rapidly evolving landscape of technology, AI agents have emerged as formidable tools, revolutionizing how we interact with data and automate tasks. These sophisticated systems leverage machine learning and natural language processing to perform a myriad of functions, from virtual personal assistants to complex decision-making companions in industries such as finance, healthcare, and education. By mimicking human intelligence, AI agents can analyze massive data sets at unparalleled speeds, enabling businesses to uncover valuable insights, enhance productivity, and elevate user experiences to unprecedented levels.\n\nOne of the most striking aspects of AI agents is their adaptability; they learn from their interactions and continuously improve their performance over time. This feature is particularly valuable in customer service where AI agents can address inquiries, resolve issues, and provide personalized recommendations without the limitations of human fatigue. Moreover, with intuitive interfaces, AI agents enhance user interactions, making technology more accessible and user-friendly, thereby breaking down barriers that have historically hindered digital engagement.\n\nDespite their immense potential, the deployment of AI agents raises important ethical and practical considerations. Issues related to privacy, data security, and the potential for job displacement necessitate thoughtful dialogue and proactive measures. Striking a balance between technological innovation and societal impact will be crucial as organizations integrate these agents into their operations. Additionally, ensuring transparency in AI decision-making processes is vital to maintain public trust as AI agents become an integral part of daily life.\n\nLooking ahead, the future of AI agents appears bright, with ongoing advancements promising even greater capabilities. As we continue to harness the power of AI, we can expect these agents to play a transformative role in shaping various sectors—streamlining workflows, enabling smarter decision-making, and fostering more personalized experiences. Embracing this technology responsibly can lead to a future where AI agents not only augment human effort but also inspire creativity and efficiency across the board, ultimately redefining our interaction with the digital world."
|
||||
)
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_crew_with_delegating_agents_should_not_override_task_tools():
|
||||
from typing import Type
|
||||
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
from crewai.tools import BaseTool
|
||||
|
||||
class TestToolInput(BaseModel):
|
||||
"""Input schema for TestTool."""
|
||||
query: str = Field(..., description="Query to process")
|
||||
|
||||
class TestTool(BaseTool):
|
||||
name: str = "Test Tool"
|
||||
description: str = "A test tool that just returns the input"
|
||||
args_schema: Type[BaseModel] = TestToolInput
|
||||
|
||||
def _run(self, query: str) -> str:
|
||||
return f"Processed: {query}"
|
||||
|
||||
# Create a task with the test tool
|
||||
tasks = [
|
||||
Task(
|
||||
description="Produce and amazing 1 paragraph draft of an article about AI Agents.",
|
||||
expected_output="A 4 paragraph article about AI.",
|
||||
agent=ceo,
|
||||
tools=[TestTool()],
|
||||
)
|
||||
]
|
||||
|
||||
crew = Crew(
|
||||
agents=[ceo, writer],
|
||||
process=Process.sequential,
|
||||
tasks=tasks,
|
||||
)
|
||||
|
||||
mock_task_output = TaskOutput(
|
||||
description="Mock description",
|
||||
raw="mocked output",
|
||||
agent="mocked agent"
|
||||
)
|
||||
|
||||
# Because we are mocking execute_sync, we never hit the underlying _execute_core
|
||||
# which sets the output attribute of the task
|
||||
tasks[0].output = mock_task_output
|
||||
|
||||
with patch.object(Task, 'execute_sync', return_value=mock_task_output) as mock_execute_sync:
|
||||
crew.kickoff()
|
||||
|
||||
# Execute the task and verify both tools are present
|
||||
_, kwargs = mock_execute_sync.call_args
|
||||
tools = kwargs['tools']
|
||||
|
||||
assert any(isinstance(tool, TestTool) for tool in tools), "TestTool should be present"
|
||||
assert any("delegate" in tool.name.lower() for tool in tools), "Delegation tool should be present"
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_crew_with_delegating_agents_should_not_override_agent_tools():
|
||||
from typing import Type
|
||||
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
from crewai.tools import BaseTool
|
||||
|
||||
class TestToolInput(BaseModel):
|
||||
"""Input schema for TestTool."""
|
||||
query: str = Field(..., description="Query to process")
|
||||
|
||||
class TestTool(BaseTool):
|
||||
name: str = "Test Tool"
|
||||
description: str = "A test tool that just returns the input"
|
||||
args_schema: Type[BaseModel] = TestToolInput
|
||||
|
||||
def _run(self, query: str) -> str:
|
||||
return f"Processed: {query}"
|
||||
|
||||
new_ceo = ceo.model_copy()
|
||||
new_ceo.tools = [TestTool()]
|
||||
|
||||
# Create a task with the test tool
|
||||
tasks = [
|
||||
Task(
|
||||
description="Produce and amazing 1 paragraph draft of an article about AI Agents.",
|
||||
expected_output="A 4 paragraph article about AI.",
|
||||
agent=new_ceo
|
||||
)
|
||||
]
|
||||
|
||||
crew = Crew(
|
||||
agents=[new_ceo, writer],
|
||||
process=Process.sequential,
|
||||
tasks=tasks,
|
||||
)
|
||||
|
||||
mock_task_output = TaskOutput(
|
||||
description="Mock description",
|
||||
raw="mocked output",
|
||||
agent="mocked agent"
|
||||
)
|
||||
|
||||
# Because we are mocking execute_sync, we never hit the underlying _execute_core
|
||||
# which sets the output attribute of the task
|
||||
tasks[0].output = mock_task_output
|
||||
|
||||
with patch.object(Task, 'execute_sync', return_value=mock_task_output) as mock_execute_sync:
|
||||
crew.kickoff()
|
||||
|
||||
# Execute the task and verify both tools are present
|
||||
_, kwargs = mock_execute_sync.call_args
|
||||
tools = kwargs['tools']
|
||||
|
||||
assert any(isinstance(tool, TestTool) for tool in new_ceo.tools), "TestTool should be present"
|
||||
assert any("delegate" in tool.name.lower() for tool in tools), "Delegation tool should be present"
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_task_tools_override_agent_tools():
|
||||
from typing import Type
|
||||
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
from crewai.tools import BaseTool
|
||||
|
||||
class TestToolInput(BaseModel):
|
||||
"""Input schema for TestTool."""
|
||||
query: str = Field(..., description="Query to process")
|
||||
|
||||
class TestTool(BaseTool):
|
||||
name: str = "Test Tool"
|
||||
description: str = "A test tool that just returns the input"
|
||||
args_schema: Type[BaseModel] = TestToolInput
|
||||
|
||||
def _run(self, query: str) -> str:
|
||||
return f"Processed: {query}"
|
||||
|
||||
class AnotherTestTool(BaseTool):
|
||||
name: str = "Another Test Tool"
|
||||
description: str = "Another test tool"
|
||||
args_schema: Type[BaseModel] = TestToolInput
|
||||
|
||||
def _run(self, query: str) -> str:
|
||||
return f"Another processed: {query}"
|
||||
|
||||
# Set agent tools
|
||||
new_researcher = researcher.model_copy()
|
||||
new_researcher.tools = [TestTool()]
|
||||
|
||||
# Create task with different tools
|
||||
task = Task(
|
||||
description="Write a test task",
|
||||
expected_output="Test output",
|
||||
agent=new_researcher,
|
||||
tools=[AnotherTestTool()]
|
||||
)
|
||||
|
||||
crew = Crew(
|
||||
agents=[new_researcher],
|
||||
tasks=[task],
|
||||
process=Process.sequential
|
||||
)
|
||||
|
||||
crew.kickoff()
|
||||
|
||||
# Verify task tools override agent tools
|
||||
assert len(task.tools) == 1 # AnotherTestTool
|
||||
assert any(isinstance(tool, AnotherTestTool) for tool in task.tools)
|
||||
assert not any(isinstance(tool, TestTool) for tool in task.tools)
|
||||
|
||||
# Verify agent tools remain unchanged
|
||||
assert len(new_researcher.tools) == 1
|
||||
assert isinstance(new_researcher.tools[0], TestTool)
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_task_tools_override_agent_tools_with_allow_delegation():
|
||||
"""
|
||||
Test that task tools override agent tools while preserving delegation tools when allow_delegation=True
|
||||
"""
|
||||
from typing import Type
|
||||
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
from crewai.tools import BaseTool
|
||||
|
||||
class TestToolInput(BaseModel):
|
||||
query: str = Field(..., description="Query to process")
|
||||
|
||||
class TestTool(BaseTool):
|
||||
name: str = "Test Tool"
|
||||
description: str = "A test tool that just returns the input"
|
||||
args_schema: Type[BaseModel] = TestToolInput
|
||||
|
||||
def _run(self, query: str) -> str:
|
||||
return f"Processed: {query}"
|
||||
|
||||
class AnotherTestTool(BaseTool):
|
||||
name: str = "Another Test Tool"
|
||||
description: str = "Another test tool"
|
||||
args_schema: Type[BaseModel] = TestToolInput
|
||||
|
||||
def _run(self, query: str) -> str:
|
||||
return f"Another processed: {query}"
|
||||
|
||||
# Set up agents with tools and allow_delegation
|
||||
researcher_with_delegation = researcher.model_copy()
|
||||
researcher_with_delegation.allow_delegation = True
|
||||
researcher_with_delegation.tools = [TestTool()]
|
||||
|
||||
writer_for_delegation = writer.model_copy()
|
||||
|
||||
# Create a task with different tools
|
||||
task = Task(
|
||||
description="Write a test task",
|
||||
expected_output="Test output",
|
||||
agent=researcher_with_delegation,
|
||||
tools=[AnotherTestTool()],
|
||||
)
|
||||
|
||||
crew = Crew(
|
||||
agents=[researcher_with_delegation, writer_for_delegation],
|
||||
tasks=[task],
|
||||
process=Process.sequential,
|
||||
)
|
||||
|
||||
mock_task_output = TaskOutput(
|
||||
description="Mock description",
|
||||
raw="mocked output",
|
||||
agent="mocked agent"
|
||||
)
|
||||
|
||||
# We mock execute_sync to verify which tools get used at runtime
|
||||
with patch.object(Task, "execute_sync", return_value=mock_task_output) as mock_execute_sync:
|
||||
crew.kickoff()
|
||||
|
||||
# Inspect the call kwargs to verify the actual tools passed to execution
|
||||
_, kwargs = mock_execute_sync.call_args
|
||||
used_tools = kwargs["tools"]
|
||||
|
||||
# Confirm AnotherTestTool is present but TestTool is not
|
||||
assert any(isinstance(tool, AnotherTestTool) for tool in used_tools), "AnotherTestTool should be present"
|
||||
assert not any(isinstance(tool, TestTool) for tool in used_tools), "TestTool should not be present among used tools"
|
||||
|
||||
# Confirm delegation tool(s) are present
|
||||
assert any("delegate" in tool.name.lower() for tool in used_tools), "Delegation tool should be present"
|
||||
|
||||
# Finally, make sure the agent's original tools remain unchanged
|
||||
assert len(researcher_with_delegation.tools) == 1
|
||||
assert isinstance(researcher_with_delegation.tools[0], TestTool)
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_crew_verbose_output(capsys):
|
||||
@@ -1193,12 +1513,22 @@ def test_code_execution_flag_adds_code_tool_upon_kickoff():
|
||||
|
||||
crew = Crew(agents=[programmer], tasks=[task])
|
||||
|
||||
with patch.object(Agent, "execute_task") as executor:
|
||||
executor.return_value = "ok"
|
||||
crew.kickoff()
|
||||
assert len(programmer.tools) == 1
|
||||
assert programmer.tools[0].__class__ == CodeInterpreterTool
|
||||
mock_task_output = TaskOutput(
|
||||
description="Mock description",
|
||||
raw="mocked output",
|
||||
agent="mocked agent"
|
||||
)
|
||||
|
||||
with patch.object(Task, "execute_sync", return_value=mock_task_output) as mock_execute_sync:
|
||||
crew.kickoff()
|
||||
|
||||
# Get the tools that were actually used in execution
|
||||
_, kwargs = mock_execute_sync.call_args
|
||||
used_tools = kwargs["tools"]
|
||||
|
||||
# Verify that exactly one tool was used and it was a CodeInterpreterTool
|
||||
assert len(used_tools) == 1, "Should have exactly one tool"
|
||||
assert isinstance(used_tools[0], CodeInterpreterTool), "Tool should be CodeInterpreterTool"
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_delegation_is_not_enabled_if_there_are_only_one_agent():
|
||||
@@ -1307,21 +1637,37 @@ def test_hierarchical_crew_creation_tasks_with_agents():
|
||||
process=Process.hierarchical,
|
||||
manager_llm="gpt-4o",
|
||||
)
|
||||
crew.kickoff()
|
||||
|
||||
assert crew.manager_agent is not None
|
||||
assert crew.manager_agent.tools is not None
|
||||
assert (
|
||||
"Delegate a specific task to one of the following coworkers: Senior Writer\n"
|
||||
in crew.manager_agent.tools[0].description
|
||||
mock_task_output = TaskOutput(
|
||||
description="Mock description",
|
||||
raw="mocked output",
|
||||
agent="mocked agent"
|
||||
)
|
||||
|
||||
# Because we are mocking execute_sync, we never hit the underlying _execute_core
|
||||
# which sets the output attribute of the task
|
||||
task.output = mock_task_output
|
||||
|
||||
with patch.object(Task, 'execute_sync', return_value=mock_task_output) as mock_execute_sync:
|
||||
crew.kickoff()
|
||||
|
||||
# Verify execute_sync was called once
|
||||
mock_execute_sync.assert_called_once()
|
||||
|
||||
# Get the tools argument from the call
|
||||
_, kwargs = mock_execute_sync.call_args
|
||||
tools = kwargs['tools']
|
||||
|
||||
# Verify the delegation tools were passed correctly
|
||||
assert len(tools) == 2
|
||||
assert any("Delegate a specific task to one of the following coworkers: Senior Writer" in tool.description for tool in tools)
|
||||
assert any("Ask a specific question to one of the following coworkers: Senior Writer" in tool.description for tool in tools)
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_hierarchical_crew_creation_tasks_with_async_execution():
|
||||
"""
|
||||
Agents are not required for tasks in a hierarchical process but sometimes they are still added
|
||||
This test makes sure that the manager still delegates the task to the agent even if the agent is passed in the task
|
||||
Tests that async tasks in hierarchical crews are handled correctly with proper delegation tools
|
||||
"""
|
||||
task = Task(
|
||||
description="Write one amazing paragraph about AI.",
|
||||
@@ -1337,14 +1683,35 @@ def test_hierarchical_crew_creation_tasks_with_async_execution():
|
||||
manager_llm="gpt-4o",
|
||||
)
|
||||
|
||||
crew.kickoff()
|
||||
assert crew.manager_agent is not None
|
||||
assert crew.manager_agent.tools is not None
|
||||
assert (
|
||||
"Delegate a specific task to one of the following coworkers: Senior Writer\n"
|
||||
in crew.manager_agent.tools[0].description
|
||||
mock_task_output = TaskOutput(
|
||||
description="Mock description",
|
||||
raw="mocked output",
|
||||
agent="mocked agent"
|
||||
)
|
||||
|
||||
# Create a mock Future that returns our TaskOutput
|
||||
mock_future = MagicMock(spec=Future)
|
||||
mock_future.result.return_value = mock_task_output
|
||||
|
||||
# Because we are mocking execute_async, we never hit the underlying _execute_core
|
||||
# which sets the output attribute of the task
|
||||
task.output = mock_task_output
|
||||
|
||||
with patch.object(Task, 'execute_async', return_value=mock_future) as mock_execute_async:
|
||||
crew.kickoff()
|
||||
|
||||
# Verify execute_async was called once
|
||||
mock_execute_async.assert_called_once()
|
||||
|
||||
# Get the tools argument from the call
|
||||
_, kwargs = mock_execute_async.call_args
|
||||
tools = kwargs['tools']
|
||||
|
||||
# Verify the delegation tools were passed correctly
|
||||
assert len(tools) == 2
|
||||
assert any("Delegate a specific task to one of the following coworkers: Senior Writer\n" in tool.description for tool in tools)
|
||||
assert any("Ask a specific question to one of the following coworkers: Senior Writer\n" in tool.description for tool in tools)
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_hierarchical_crew_creation_tasks_with_sync_last():
|
||||
@@ -1639,6 +2006,90 @@ def test_crew_log_file_output(tmp_path):
|
||||
assert test_file.exists()
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_crew_output_file_end_to_end(tmp_path):
|
||||
"""Test output file functionality in a full crew context."""
|
||||
# Create an agent
|
||||
agent = Agent(
|
||||
role="Researcher",
|
||||
goal="Analyze AI topics",
|
||||
backstory="You have extensive AI research experience.",
|
||||
allow_delegation=False,
|
||||
)
|
||||
|
||||
# Create a task with dynamic output file path
|
||||
dynamic_path = tmp_path / "output_{topic}.txt"
|
||||
task = Task(
|
||||
description="Explain the advantages of {topic}.",
|
||||
expected_output="A summary of the main advantages, bullet points recommended.",
|
||||
agent=agent,
|
||||
output_file=str(dynamic_path),
|
||||
)
|
||||
|
||||
# Create and run the crew
|
||||
crew = Crew(
|
||||
agents=[agent],
|
||||
tasks=[task],
|
||||
process=Process.sequential,
|
||||
)
|
||||
crew.kickoff(inputs={"topic": "AI"})
|
||||
|
||||
# Verify file creation and cleanup
|
||||
expected_file = tmp_path / "output_AI.txt"
|
||||
assert expected_file.exists(), f"Output file {expected_file} was not created"
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_crew_output_file_validation_failures():
|
||||
"""Test output file validation failures in a crew context."""
|
||||
agent = Agent(
|
||||
role="Researcher",
|
||||
goal="Analyze data",
|
||||
backstory="You analyze data files.",
|
||||
allow_delegation=False,
|
||||
)
|
||||
|
||||
# Test path traversal
|
||||
with pytest.raises(ValueError, match="Path traversal"):
|
||||
task = Task(
|
||||
description="Analyze data",
|
||||
expected_output="Analysis results",
|
||||
agent=agent,
|
||||
output_file="../output.txt"
|
||||
)
|
||||
Crew(agents=[agent], tasks=[task]).kickoff()
|
||||
|
||||
# Test shell special characters
|
||||
with pytest.raises(ValueError, match="Shell special characters"):
|
||||
task = Task(
|
||||
description="Analyze data",
|
||||
expected_output="Analysis results",
|
||||
agent=agent,
|
||||
output_file="output.txt | rm -rf /"
|
||||
)
|
||||
Crew(agents=[agent], tasks=[task]).kickoff()
|
||||
|
||||
# Test shell expansion
|
||||
with pytest.raises(ValueError, match="Shell expansion"):
|
||||
task = Task(
|
||||
description="Analyze data",
|
||||
expected_output="Analysis results",
|
||||
agent=agent,
|
||||
output_file="~/output.txt"
|
||||
)
|
||||
Crew(agents=[agent], tasks=[task]).kickoff()
|
||||
|
||||
# Test invalid template variable
|
||||
with pytest.raises(ValueError, match="Invalid template variable"):
|
||||
task = Task(
|
||||
description="Analyze data",
|
||||
expected_output="Analysis results",
|
||||
agent=agent,
|
||||
output_file="{invalid-name}/output.txt"
|
||||
)
|
||||
Crew(agents=[agent], tasks=[task]).kickoff()
|
||||
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_manager_agent():
|
||||
from unittest.mock import patch
|
||||
@@ -2583,3 +3034,244 @@ def test_hierarchical_verbose_false_manager_agent():
|
||||
|
||||
assert crew.manager_agent is not None
|
||||
assert not crew.manager_agent.verbose
|
||||
|
||||
|
||||
def test_task_tools_preserve_code_execution_tools():
|
||||
"""
|
||||
Test that task tools don't override code execution tools when allow_code_execution=True
|
||||
"""
|
||||
from typing import Type
|
||||
|
||||
from crewai_tools import CodeInterpreterTool
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
from crewai.tools import BaseTool
|
||||
|
||||
class TestToolInput(BaseModel):
|
||||
"""Input schema for TestTool."""
|
||||
query: str = Field(..., description="Query to process")
|
||||
|
||||
class TestTool(BaseTool):
|
||||
name: str = "Test Tool"
|
||||
description: str = "A test tool that just returns the input"
|
||||
args_schema: Type[BaseModel] = TestToolInput
|
||||
|
||||
def _run(self, query: str) -> str:
|
||||
return f"Processed: {query}"
|
||||
|
||||
# Create a programmer agent with code execution enabled
|
||||
programmer = Agent(
|
||||
role="Programmer",
|
||||
goal="Write code to solve problems.",
|
||||
backstory="You're a programmer who loves to solve problems with code.",
|
||||
allow_delegation=True,
|
||||
allow_code_execution=True,
|
||||
)
|
||||
|
||||
# Create a code reviewer agent
|
||||
reviewer = Agent(
|
||||
role="Code Reviewer",
|
||||
goal="Review code for bugs and improvements",
|
||||
backstory="You're an experienced code reviewer who ensures code quality and best practices.",
|
||||
allow_delegation=True,
|
||||
allow_code_execution=True,
|
||||
)
|
||||
|
||||
# Create a task with its own tools
|
||||
task = Task(
|
||||
description="Write a program to calculate fibonacci numbers.",
|
||||
expected_output="A working fibonacci calculator.",
|
||||
agent=programmer,
|
||||
tools=[TestTool()]
|
||||
)
|
||||
|
||||
crew = Crew(
|
||||
agents=[programmer, reviewer],
|
||||
tasks=[task],
|
||||
process=Process.sequential,
|
||||
)
|
||||
|
||||
mock_task_output = TaskOutput(
|
||||
description="Mock description",
|
||||
raw="mocked output",
|
||||
agent="mocked agent"
|
||||
)
|
||||
|
||||
with patch.object(Task, "execute_sync", return_value=mock_task_output) as mock_execute_sync:
|
||||
crew.kickoff()
|
||||
|
||||
# Get the tools that were actually used in execution
|
||||
_, kwargs = mock_execute_sync.call_args
|
||||
used_tools = kwargs["tools"]
|
||||
|
||||
# Verify all expected tools are present
|
||||
assert any(isinstance(tool, TestTool) for tool in used_tools), "Task's TestTool should be present"
|
||||
assert any(isinstance(tool, CodeInterpreterTool) for tool in used_tools), "CodeInterpreterTool should be present"
|
||||
assert any("delegate" in tool.name.lower() for tool in used_tools), "Delegation tool should be present"
|
||||
|
||||
# Verify the total number of tools (TestTool + CodeInterpreter + 2 delegation tools)
|
||||
assert len(used_tools) == 4, "Should have TestTool, CodeInterpreter, and 2 delegation tools"
|
||||
|
||||
@pytest.mark.vcr(filter_headers=["authorization"])
|
||||
def test_multimodal_flag_adds_multimodal_tools():
|
||||
"""
|
||||
Test that an agent with multimodal=True automatically has multimodal tools added to the task execution.
|
||||
"""
|
||||
from crewai.tools.agent_tools.add_image_tool import AddImageTool
|
||||
|
||||
# Create an agent that supports multimodal
|
||||
multimodal_agent = Agent(
|
||||
role="Multimodal Analyst",
|
||||
goal="Handle multiple media types (text, images, etc.).",
|
||||
backstory="You're an agent specialized in analyzing text, images, and other media.",
|
||||
allow_delegation=False,
|
||||
multimodal=True, # crucial for adding the multimodal tool
|
||||
)
|
||||
|
||||
# Create a dummy task
|
||||
task = Task(
|
||||
description="Describe what's in this image and generate relevant metadata.",
|
||||
expected_output="An image description plus any relevant metadata.",
|
||||
agent=multimodal_agent,
|
||||
)
|
||||
|
||||
# Define a crew with the multimodal agent
|
||||
    crew = Crew(agents=[multimodal_agent], tasks=[task], process=Process.sequential)

    mock_task_output = TaskOutput(
        description="Mock description",
        raw="mocked output",
        agent="mocked agent"
    )

    # Mock execute_sync to verify the tools passed at runtime
    with patch.object(Task, "execute_sync", return_value=mock_task_output) as mock_execute_sync:
        crew.kickoff()

        # Get the tools that were actually used in execution
        _, kwargs = mock_execute_sync.call_args
        used_tools = kwargs["tools"]

        # Check that the multimodal tool was added
        assert any(isinstance(tool, AddImageTool) for tool in used_tools), (
            "AddImageTool should be present when agent is multimodal"
        )

        # Verify we have exactly one tool (just the AddImageTool)
        assert len(used_tools) == 1, "Should only have the AddImageTool"


@pytest.mark.vcr(filter_headers=["authorization"])
def test_multimodal_agent_image_tool_handling():
    """
    Test that multimodal agents properly handle image tools in the CrewAgentExecutor
    """
    # Create a multimodal agent
    multimodal_agent = Agent(
        role="Image Analyst",
        goal="Analyze images and provide descriptions",
        backstory="You're an expert at analyzing and describing images.",
        allow_delegation=False,
        multimodal=True,
    )

    # Create a task that involves image analysis
    task = Task(
        description="Analyze this image and describe what you see.",
        expected_output="A detailed description of the image.",
        agent=multimodal_agent,
    )

    crew = Crew(agents=[multimodal_agent], tasks=[task])

    # Mock the image tool response
    mock_image_tool_result = {
        "role": "user",
        "content": [
            {"type": "text", "text": "Please analyze this image"},
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://example.com/test-image.jpg",
                },
            },
        ],
    }

    # Create a mock task output for the final result
    mock_task_output = TaskOutput(
        description="Mock description",
        raw="A detailed analysis of the image",
        agent="Image Analyst"
    )

    with patch.object(Task, 'execute_sync') as mock_execute_sync:
        # Set up the mock to return our task output
        mock_execute_sync.return_value = mock_task_output

        # Execute the crew
        crew.kickoff()

        # Get the tools that were passed to execute_sync
        _, kwargs = mock_execute_sync.call_args
        tools = kwargs['tools']

        # Verify the AddImageTool is present and properly configured
        image_tools = [tool for tool in tools if tool.name == "Add image to content"]
        assert len(image_tools) == 1, "Should have exactly one AddImageTool"

        # Test the tool's execution
        image_tool = image_tools[0]
        result = image_tool._run(
            image_url="https://example.com/test-image.jpg",
            action="Please analyze this image"
        )

        # Verify the tool returns the expected format
        assert result == mock_image_tool_result
        assert result["role"] == "user"
        assert len(result["content"]) == 2
        assert result["content"][0]["type"] == "text"
        assert result["content"][1]["type"] == "image_url"


@pytest.mark.vcr(filter_headers=["authorization"])
def test_multimodal_agent_live_image_analysis():
    """
    Test that multimodal agents can analyze images through a real API call
    """
    # Create a multimodal agent
    image_analyst = Agent(
        role="Image Analyst",
        goal="Analyze images with high attention to detail",
        backstory="You're an expert at visual analysis, trained to notice and describe details in images.",
        allow_delegation=False,
        multimodal=True,
        verbose=True,
        llm="gpt-4o"
    )

    # Create a task for image analysis
    analyze_image = Task(
        description="""
        Analyze the provided image and describe what you see in detail.
        Focus on main elements, colors, composition, and any notable details.
        Image: {image_url}
        """,
        expected_output="A comprehensive description of the image contents.",
        agent=image_analyst
    )

    # Create and run the crew
    crew = Crew(
        agents=[image_analyst],
        tasks=[analyze_image]
    )

    # Execute with an image URL
    result = crew.kickoff(inputs={
        "image_url": "https://media.istockphoto.com/id/946087016/photo/aerial-view-of-lower-manhattan-new-york.jpg?s=612x612&w=0&k=20&c=viLiMRznQ8v5LzKTt_LvtfPFUVl1oiyiemVdSlm29_k="
    })

    # Verify we got a meaningful response
    assert isinstance(result.raw, str)
    assert len(result.raw) > 100  # Expecting a detailed analysis
    assert "error" not in result.raw.lower()  # No error messages in response

@@ -263,3 +263,62 @@ def test_flow_with_custom_state():
    flow = StateFlow()
    flow.kickoff()
    assert flow.counter == 2


def test_router_with_multiple_conditions():
    """Test a router that triggers when any of multiple steps complete (OR condition),
    and another router that triggers only after all specified steps complete (AND condition).
    """

    execution_order = []

    class ComplexRouterFlow(Flow):
        @start()
        def step_a(self):
            execution_order.append("step_a")

        @start()
        def step_b(self):
            execution_order.append("step_b")

        @router(or_("step_a", "step_b"))
        def router_or(self):
            execution_order.append("router_or")
            return "next_step_or"

        @listen("next_step_or")
        def handle_next_step_or_event(self):
            execution_order.append("handle_next_step_or_event")

        @listen(handle_next_step_or_event)
        def branch_2_step(self):
            execution_order.append("branch_2_step")

        @router(and_(handle_next_step_or_event, branch_2_step))
        def router_and(self):
            execution_order.append("router_and")
            return "final_step"

        @listen("final_step")
        def log_final_step(self):
            execution_order.append("log_final_step")

    flow = ComplexRouterFlow()
    flow.kickoff()

    assert "step_a" in execution_order
    assert "step_b" in execution_order
    assert "router_or" in execution_order
    assert "handle_next_step_or_event" in execution_order
    assert "branch_2_step" in execution_order
    assert "router_and" in execution_order
    assert "log_final_step" in execution_order

    # Check that the AND router triggered after both relevant steps:
    assert execution_order.index("router_and") > execution_order.index(
        "handle_next_step_or_event"
    )
    assert execution_order.index("router_and") > execution_order.index("branch_2_step")

    # final_step should run after router_and
    assert execution_order.index("log_final_step") > execution_order.index("router_and")

@@ -1,10 +1,12 @@
"""Test Knowledge creation and querying functionality."""

from pathlib import Path
from typing import List, Union
from unittest.mock import patch

import pytest

from crewai.knowledge.source.crew_docling_source import CrewDoclingSource
from crewai.knowledge.source.csv_knowledge_source import CSVKnowledgeSource
from crewai.knowledge.source.excel_knowledge_source import ExcelKnowledgeSource
from crewai.knowledge.source.json_knowledge_source import JSONKnowledgeSource
@@ -200,7 +202,7 @@ def test_single_short_file(mock_vector_db, tmpdir):
        f.write(content)

    file_source = TextFileKnowledgeSource(
-       file_path=file_path, metadata={"preference": "personal"}
+       file_paths=[file_path], metadata={"preference": "personal"}
    )
    mock_vector_db.sources = [file_source]
    mock_vector_db.query.return_value = [{"context": content, "score": 0.9}]
@@ -242,7 +244,7 @@ def test_single_2k_character_file(mock_vector_db, tmpdir):
        f.write(content)

    file_source = TextFileKnowledgeSource(
-       file_path=file_path, metadata={"preference": "personal"}
+       file_paths=[file_path], metadata={"preference": "personal"}
    )
    mock_vector_db.sources = [file_source]
    mock_vector_db.query.return_value = [{"context": content, "score": 0.9}]
@@ -279,7 +281,7 @@ def test_multiple_short_files(mock_vector_db, tmpdir):
        file_paths.append((file_path, item["metadata"]))

    file_sources = [
-       TextFileKnowledgeSource(file_path=path, metadata=metadata)
+       TextFileKnowledgeSource(file_paths=[path], metadata=metadata)
        for path, metadata in file_paths
    ]
    mock_vector_db.sources = file_sources
@@ -352,7 +354,7 @@ def test_multiple_2k_character_files(mock_vector_db, tmpdir):
        file_paths.append(file_path)

    file_sources = [
-       TextFileKnowledgeSource(file_path=path, metadata={"preference": "personal"})
+       TextFileKnowledgeSource(file_paths=[path], metadata={"preference": "personal"})
        for path in file_paths
    ]
    mock_vector_db.sources = file_sources
@@ -399,7 +401,7 @@ def test_hybrid_string_and_files(mock_vector_db, tmpdir):
        file_paths.append(file_path)

    file_sources = [
-       TextFileKnowledgeSource(file_path=path, metadata={"preference": "personal"})
+       TextFileKnowledgeSource(file_paths=[path], metadata={"preference": "personal"})
        for path in file_paths
    ]

@@ -424,7 +426,7 @@ def test_pdf_knowledge_source(mock_vector_db):

    # Create a PDFKnowledgeSource
    pdf_source = PDFKnowledgeSource(
-       file_path=pdf_path, metadata={"preference": "personal"}
+       file_paths=[pdf_path], metadata={"preference": "personal"}
    )
    mock_vector_db.sources = [pdf_source]
    mock_vector_db.query.return_value = [
@@ -461,7 +463,7 @@ def test_csv_knowledge_source(mock_vector_db, tmpdir):

    # Create a CSVKnowledgeSource
    csv_source = CSVKnowledgeSource(
-       file_path=csv_path, metadata={"preference": "personal"}
+       file_paths=[csv_path], metadata={"preference": "personal"}
    )
    mock_vector_db.sources = [csv_source]
    mock_vector_db.query.return_value = [
@@ -496,7 +498,7 @@ def test_json_knowledge_source(mock_vector_db, tmpdir):

    # Create a JSONKnowledgeSource
    json_source = JSONKnowledgeSource(
-       file_path=json_path, metadata={"preference": "personal"}
+       file_paths=[json_path], metadata={"preference": "personal"}
    )
    mock_vector_db.sources = [json_source]
    mock_vector_db.query.return_value = [
@@ -529,7 +531,7 @@ def test_excel_knowledge_source(mock_vector_db, tmpdir):

    # Create an ExcelKnowledgeSource
    excel_source = ExcelKnowledgeSource(
-       file_path=excel_path, metadata={"preference": "personal"}
+       file_paths=[excel_path], metadata={"preference": "personal"}
    )
    mock_vector_db.sources = [excel_source]
    mock_vector_db.query.return_value = [
@@ -543,3 +545,67 @@ def test_excel_knowledge_source(mock_vector_db, tmpdir):
    # Assert that the correct information is retrieved
    assert any("30" in result["context"] for result in results)
    mock_vector_db.query.assert_called_once()


def test_docling_source(mock_vector_db):
    docling_source = CrewDoclingSource(
        file_paths=[
            "https://lilianweng.github.io/posts/2024-11-28-reward-hacking/",
        ],
    )
    mock_vector_db.sources = [docling_source]
    mock_vector_db.query.return_value = [
        {
            "context": "Reward hacking is a technique used to improve the performance of reinforcement learning agents.",
            "score": 0.9,
        }
    ]
    # Perform a query
    query = "What is reward hacking?"
    results = mock_vector_db.query(query)
    assert any("reward hacking" in result["context"].lower() for result in results)
    mock_vector_db.query.assert_called_once()


def test_multiple_docling_sources():
    urls: List[Union[Path, str]] = [
        "https://lilianweng.github.io/posts/2024-11-28-reward-hacking/",
        "https://lilianweng.github.io/posts/2024-07-07-hallucination/",
    ]
    docling_source = CrewDoclingSource(file_paths=urls)

    assert docling_source.file_paths == urls
    assert docling_source.content is not None


def test_docling_source_with_local_file():
    current_dir = Path(__file__).parent
    pdf_path = current_dir / "crewai_quickstart.pdf"
    docling_source = CrewDoclingSource(file_paths=[pdf_path])
    assert docling_source.file_paths == [pdf_path]
    assert docling_source.content is not None


def test_file_path_validation():
    """Test file path validation for knowledge sources."""
    current_dir = Path(__file__).parent
    pdf_path = current_dir / "crewai_quickstart.pdf"

    # Test valid single file_path
    source = PDFKnowledgeSource(file_path=pdf_path)
    assert source.safe_file_paths == [pdf_path]

    # Test valid file_paths list
    source = PDFKnowledgeSource(file_paths=[pdf_path])
    assert source.safe_file_paths == [pdf_path]

    # Test both file_path and file_paths provided (should use file_paths)
    source = PDFKnowledgeSource(file_path=pdf_path, file_paths=[pdf_path])
    assert source.safe_file_paths == [pdf_path]

    # Test neither file_path nor file_paths provided
    with pytest.raises(
        ValueError,
        match="file_path/file_paths must be a Path, str, or a list of these types"
    ):
        PDFKnowledgeSource()
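
The knowledge-source hunks above all make the same change: the singular `file_path` argument is replaced by the list-valued `file_paths` argument, and `test_file_path_validation` shows the old keyword is still accepted and normalized into `safe_file_paths` (exercised there on `PDFKnowledgeSource`). A rough usage sketch of the new form, with a hypothetical file name that is not part of the diff:

```python
from crewai.knowledge.source.csv_knowledge_source import CSVKnowledgeSource

# List-based form used by the updated tests; the older singular file_path
# keyword remains accepted for backward compatibility.
csv_source = CSVKnowledgeSource(
    file_paths=["users.csv"],  # hypothetical file, not from the diff
    metadata={"preference": "personal"},
)
```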

@@ -719,21 +719,66 @@ def test_interpolate_inputs():
    task = Task(
        description="Give me a list of 5 interesting ideas about {topic} to explore for an article, what makes them unique and interesting.",
        expected_output="Bullet point list of 5 interesting ideas about {topic}.",
        output_file="/tmp/{topic}/output_{date}.txt"
    )

-   task.interpolate_inputs(inputs={"topic": "AI"})
+   task.interpolate_inputs(inputs={"topic": "AI", "date": "2024"})
    assert (
        task.description
        == "Give me a list of 5 interesting ideas about AI to explore for an article, what makes them unique and interesting."
    )
    assert task.expected_output == "Bullet point list of 5 interesting ideas about AI."
    assert task.output_file == "/tmp/AI/output_2024.txt"

-   task.interpolate_inputs(inputs={"topic": "ML"})
+   task.interpolate_inputs(inputs={"topic": "ML", "date": "2025"})
    assert (
        task.description
        == "Give me a list of 5 interesting ideas about ML to explore for an article, what makes them unique and interesting."
    )
    assert task.expected_output == "Bullet point list of 5 interesting ideas about ML."
    assert task.output_file == "/tmp/ML/output_2025.txt"


def test_interpolate_only():
    """Test the interpolate_only method for various scenarios including JSON structure preservation."""
    task = Task(
        description="Unused in this test",
        expected_output="Unused in this test"
    )

    # Test JSON structure preservation
    json_string = '{"info": "Look at {placeholder}", "nested": {"val": "{nestedVal}"}}'
    result = task.interpolate_only(
        input_string=json_string,
        inputs={"placeholder": "the data", "nestedVal": "something else"}
    )
    assert '"info": "Look at the data"' in result
    assert '"val": "something else"' in result
    assert "{placeholder}" not in result
    assert "{nestedVal}" not in result

    # Test normal string interpolation
    normal_string = "Hello {name}, welcome to {place}!"
    result = task.interpolate_only(
        input_string=normal_string,
        inputs={"name": "John", "place": "CrewAI"}
    )
    assert result == "Hello John, welcome to CrewAI!"

    # Test empty string
    result = task.interpolate_only(
        input_string="",
        inputs={"unused": "value"}
    )
    assert result == ""

    # Test string with no placeholders
    no_placeholders = "Hello, this is a test"
    result = task.interpolate_only(
        input_string=no_placeholders,
        inputs={"unused": "value"}
    )
    assert result == no_placeholders


def test_task_output_str_with_pydantic():
@@ -830,3 +875,61 @@ def test_key():
    assert (
        task.key == hash
    ), "The key should be the hash of the non-interpolated description."


def test_output_file_validation():
    """Test output file path validation."""
    # Valid paths
    assert Task(
        description="Test task",
        expected_output="Test output",
        output_file="output.txt"
    ).output_file == "output.txt"
    assert Task(
        description="Test task",
        expected_output="Test output",
        output_file="/tmp/output.txt"
    ).output_file == "tmp/output.txt"
    assert Task(
        description="Test task",
        expected_output="Test output",
        output_file="{dir}/output_{date}.txt"
    ).output_file == "{dir}/output_{date}.txt"

    # Invalid paths
    with pytest.raises(ValueError, match="Path traversal"):
        Task(
            description="Test task",
            expected_output="Test output",
            output_file="../output.txt"
        )
    with pytest.raises(ValueError, match="Path traversal"):
        Task(
            description="Test task",
            expected_output="Test output",
            output_file="folder/../output.txt"
        )
    with pytest.raises(ValueError, match="Shell special characters"):
        Task(
            description="Test task",
            expected_output="Test output",
            output_file="output.txt | rm -rf /"
        )
    with pytest.raises(ValueError, match="Shell expansion"):
        Task(
            description="Test task",
            expected_output="Test output",
            output_file="~/output.txt"
        )
    with pytest.raises(ValueError, match="Shell expansion"):
        Task(
            description="Test task",
            expected_output="Test output",
            output_file="$HOME/output.txt"
        )
    with pytest.raises(ValueError, match="Invalid template variable"):
        Task(
            description="Test task",
            expected_output="Test output",
            output_file="{invalid-name}/output.txt"
        )
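
Read together, these task hunks pin down two behaviors: template variables such as `{topic}` and `{date}` are interpolated into `output_file` as well as into `description` and `expected_output`, and `output_file` values are validated against path traversal, shell special characters, shell expansion, and malformed template names. A minimal sketch of the behavior the assertions describe, with illustrative file names:

```python
from crewai import Task

task = Task(
    description="Summarize the latest research on {topic}.",
    expected_output="A short summary about {topic}.",
    output_file="reports/{topic}/summary_{date}.txt",  # validated at construction time
)
task.interpolate_inputs(inputs={"topic": "AI", "date": "2024"})
assert task.output_file == "reports/AI/summary_2024.txt"

# Values such as "../escape.txt", "output.txt | rm -rf /", or "~/report.txt"
# are rejected with a ValueError, per test_output_file_validation above.
```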

tests/test_manager_llm_delegation.py (new file, 55 lines)
@@ -0,0 +1,55 @@
from unittest.mock import MagicMock

import pytest

from crewai import Agent, Task
from crewai.tools.agent_tools.base_agent_tools import BaseAgentTool


class TestAgentTool(BaseAgentTool):
    """Concrete implementation of BaseAgentTool for testing."""
    def _run(self, *args, **kwargs):
        """Implement required _run method."""
        return "Test response"


@pytest.mark.parametrize("role_name,should_match", [
    ('Futel Official Infopoint', True),  # exact match
    (' "Futel Official Infopoint" ', True),  # extra quotes and spaces
    ('Futel Official Infopoint\n', True),  # trailing newline
    ('"Futel Official Infopoint"', True),  # embedded quotes
    (' FUTEL\nOFFICIAL INFOPOINT ', True),  # multiple whitespace and newline
    ('futel official infopoint', True),  # lowercase
    ('FUTEL OFFICIAL INFOPOINT', True),  # uppercase
    ('Non Existent Agent', False),  # non-existent agent
    (None, False),  # None agent name
])
def test_agent_tool_role_matching(role_name, should_match):
    """Test that agent tools can match roles regardless of case, whitespace, and special characters."""
    # Create test agent
    test_agent = Agent(
        role='Futel Official Infopoint',
        goal='Answer questions about Futel',
        backstory='Futel Football Club info',
        allow_delegation=False
    )

    # Create test agent tool
    agent_tool = TestAgentTool(
        name="test_tool",
        description="Test tool",
        agents=[test_agent]
    )

    # Test role matching
    result = agent_tool._execute(
        agent_name=role_name,
        task='Test task',
        context=None
    )

    if should_match:
        assert "coworker mentioned not found" not in result.lower(), \
            f"Should find agent with role name: {role_name}"
    else:
        assert "coworker mentioned not found" in result.lower(), \
            f"Should not find agent with role name: {role_name}"

tests/test_task_guardrails.py (new file, 134 lines)
@@ -0,0 +1,134 @@
"""Tests for task guardrails functionality."""

from unittest.mock import Mock

import pytest

from crewai.task import Task
from crewai.tasks.task_output import TaskOutput


def test_task_without_guardrail():
    """Test that tasks work normally without guardrails (backward compatibility)."""
    agent = Mock()
    agent.role = "test_agent"
    agent.execute_task.return_value = "test result"
    agent.crew = None

    task = Task(
        description="Test task",
        expected_output="Output"
    )

    result = task.execute_sync(agent=agent)
    assert isinstance(result, TaskOutput)
    assert result.raw == "test result"


def test_task_with_successful_guardrail():
    """Test that successful guardrail validation passes transformed result."""
    def guardrail(result: TaskOutput):
        return (True, result.raw.upper())

    agent = Mock()
    agent.role = "test_agent"
    agent.execute_task.return_value = "test result"
    agent.crew = None

    task = Task(
        description="Test task",
        expected_output="Output",
        guardrail=guardrail
    )

    result = task.execute_sync(agent=agent)
    assert isinstance(result, TaskOutput)
    assert result.raw == "TEST RESULT"


def test_task_with_failing_guardrail():
    """Test that failing guardrail triggers retry with error context."""
    def guardrail(result: TaskOutput):
        return (False, "Invalid format")

    agent = Mock()
    agent.role = "test_agent"
    agent.execute_task.side_effect = [
        "bad result",
        "good result"
    ]
    agent.crew = None

    task = Task(
        description="Test task",
        expected_output="Output",
        guardrail=guardrail,
        max_retries=1
    )

    # First execution fails guardrail, second succeeds
    agent.execute_task.side_effect = ["bad result", "good result"]
    with pytest.raises(Exception) as exc_info:
        task.execute_sync(agent=agent)

    assert "Task failed guardrail validation" in str(exc_info.value)
    assert task.retry_count == 1


def test_task_with_guardrail_retries():
    """Test that guardrail respects max_retries configuration."""
    def guardrail(result: TaskOutput):
        return (False, "Invalid format")

    agent = Mock()
    agent.role = "test_agent"
    agent.execute_task.return_value = "bad result"
    agent.crew = None

    task = Task(
        description="Test task",
        expected_output="Output",
        guardrail=guardrail,
        max_retries=2
    )

    with pytest.raises(Exception) as exc_info:
        task.execute_sync(agent=agent)

    assert task.retry_count == 2
    assert "Task failed guardrail validation after 2 retries" in str(exc_info.value)
    assert "Invalid format" in str(exc_info.value)


def test_guardrail_error_in_context():
    """Test that guardrail error is passed in context for retry."""
    def guardrail(result: TaskOutput):
        return (False, "Expected JSON, got string")

    agent = Mock()
    agent.role = "test_agent"
    agent.crew = None

    task = Task(
        description="Test task",
        expected_output="Output",
        guardrail=guardrail,
        max_retries=1
    )

    # Mock execute_task to succeed on second attempt
    first_call = True
    def execute_task(task, context, tools):
        nonlocal first_call
        if first_call:
            first_call = False
            return "invalid"
        return '{"valid": "json"}'

    agent.execute_task.side_effect = execute_task

    with pytest.raises(Exception) as exc_info:
        task.execute_sync(agent=agent)

    assert "Task failed guardrail validation" in str(exc_info.value)
    assert "Expected JSON, got string" in str(exc_info.value)
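
For reference, the contract these guardrail tests rely on: a guardrail is any callable that receives the `TaskOutput` and returns a `(success, value)` tuple. On success the second element becomes the task result; on failure it is treated as an error message and the task is retried until `max_retries` is exhausted, after which execution raises. A hedged sketch of that shape (the JSON check is an illustrative guardrail, not part of the diff):

```python
import json

from crewai.task import Task
from crewai.tasks.task_output import TaskOutput


def require_json(result: TaskOutput):
    """Return (True, value) to accept the output, or (False, message) to trigger a retry."""
    try:
        json.loads(result.raw)
        return (True, result.raw)
    except json.JSONDecodeError:
        return (False, "Expected JSON, got a plain string")


# Hypothetical wiring; the tests above exercise this path with a mocked agent.
task = Task(
    description="Return the user profile as JSON.",
    expected_output="A JSON object.",
    guardrail=require_json,
    max_retries=2,
)
```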