Compare commits


11 Commits

Author SHA1 Message Date
João Moura
82f9b26848 Merge branch 'main' into devin/1735488202-add-tool-documentation 2024-12-31 01:52:01 -03:00
Marco Vinciguerra
a548463fae feat: add docstring (#1819)
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-12-31 01:51:43 -03:00
devin-ai-integration[bot]
45b802a625 Docstring, Error Handling, and Type Hints Improvements (#1828)
* docs: add comprehensive docstrings to Flow class and methods

- Added NumPy-style docstrings to all decorator functions
- Added detailed documentation to Flow class methods
- Included parameter types, return types, and examples
- Enhanced documentation clarity and completeness

Co-Authored-By: Joe Moura <joao@crewai.com>

* feat: add secure path handling utilities

- Add path_utils.py with safe path handling functions
- Implement path validation and security checks
- Integrate secure path handling in flow_visualizer.py
- Add path validation in html_template_handler.py
- Add comprehensive error handling for path operations

Co-Authored-By: Joe Moura <joao@crewai.com>

* docs: add comprehensive docstrings and type hints to flow utils (#1819)

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: add type annotations and fix import sorting

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: add type annotations to flow utils and visualization utils

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: resolve import sorting and type annotation issues

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: properly initialize and update edge_smooth variable

Co-Authored-By: Joe Moura <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
2024-12-31 01:39:19 -03:00
devin-ai-integration[bot]
ba0965ef87 fix: add tiktoken as explicit dependency and document Rust requirement (#1826)
* feat: add tiktoken as explicit dependency and document Rust requirement

- Add tiktoken>=0.8.0 as explicit dependency to ensure pre-built wheels are used
- Document Rust compiler requirement as fallback in README.md
- Addresses issue #1824 tiktoken build failure

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: adjust tiktoken version to ~=0.7.0 for dependency compatibility

- Update tiktoken dependency to ~=0.7.0 to resolve conflict with embedchain
- Maintain compatibility with crewai-tools dependency chain
- Addresses CI build failures

Co-Authored-By: Joe Moura <joao@crewai.com>

* docs: add troubleshooting section and make tiktoken optional

Co-Authored-By: Joe Moura <joao@crewai.com>

* Update README.md

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-12-30 17:10:56 -03:00
devin-ai-integration[bot]
d85898cf29 fix(manager_llm): handle coworker role name case/whitespace properly (#1820)
* fix(manager_llm): handle coworker role name case/whitespace properly

- Add .strip() to agent name and role comparisons in base_agent_tools.py
- Add test case for varied role name cases and whitespace
- Fix issue #1503 with manager LLM delegation

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix(manager_llm): improve error handling and add debug logging

- Add debug logging for better observability
- Add sanitize_agent_name helper method
- Enhance error messages with more context
- Add parameterized tests for edge cases:
  - Embedded quotes
  - Trailing newlines
  - Multiple whitespace
  - Case variations
  - None values
- Improve error handling with specific exceptions

Co-Authored-By: Joe Moura <joao@crewai.com>

* style: fix import sorting in base_agent_tools and test_manager_llm_delegation

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix(manager_llm): improve whitespace normalization in role name matching

Co-Authored-By: Joe Moura <joao@crewai.com>

* style: fix import sorting in base_agent_tools and test_manager_llm_delegation

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix(manager_llm): add error message template for agent tool execution errors

Co-Authored-By: Joe Moura <joao@crewai.com>

* style: fix import sorting in test_manager_llm_delegation.py

Co-Authored-By: Joe Moura <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
2024-12-30 16:58:18 -03:00
Devin AI
09fd6058b0 Add comprehensive documentation for all tools
- Added documentation for file operation tools
- Added documentation for search tools
- Added documentation for web scraping tools
- Added documentation for specialized tools (RAG, code interpreter)
- Added documentation for API-based tools (SerpApi, Serply)

Link to Devin run: https://app.devin.ai/sessions/d2f72a2dfb214659aeb3e9f67ed961f7

Co-Authored-By: Joe Moura <joao@crewai.com>
2024-12-29 16:03:22 +00:00
devin-ai-integration[bot]
73f328860b Fix interpolation for output_file in Task (#1803) (#1814)
* fix: interpolate output_file attribute from YAML

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: add security validation for output_file paths

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: add _original_output_file private attribute to fix type-checker error

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: update interpolate_only to handle None inputs and remove duplicate attribute

Co-Authored-By: Joe Moura <joao@crewai.com>

* fix: improve output_file validation and error messages

Co-Authored-By: Joe Moura <joao@crewai.com>

* test: add end-to-end tests for output_file functionality

Co-Authored-By: Joe Moura <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
2024-12-29 01:57:59 -03:00
João Moura
a0c322a535 fixing file paths for knowledge source 2024-12-28 02:05:19 -03:00
devin-ai-integration[bot]
86f58c95de docs: add agent-specific knowledge documentation and examples (#1811)
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
2024-12-28 01:48:51 -03:00
João Moura
99fe91586d Update README.md 2024-12-28 01:03:33 -03:00
devin-ai-integration[bot]
0c2d23dfe0 docs: update README to highlight Flows (#1809)
* docs: highlight Flows feature in README

Co-Authored-By: Joe Moura <joao@crewai.com>

* docs: enhance README with LangGraph comparison and flows-crews synergy

Co-Authored-By: Joe Moura <joao@crewai.com>

* docs: replace initial Flow example with advanced Flow+Crew example; enhance LangGraph comparison

Co-Authored-By: Joe Moura <joao@crewai.com>

* docs: incorporate key terms and enhance feature descriptions

Co-Authored-By: Joe Moura <joao@crewai.com>

* docs: refine technical language, enhance feature descriptions, fix string interpolation

Co-Authored-By: Joe Moura <joao@crewai.com>

* docs: update README with performance metrics, feature enhancements, and course links

Co-Authored-By: Joe Moura <joao@crewai.com>

* docs: update LangGraph comparison with paragraph and P.S. section

Co-Authored-By: Joe Moura <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
2024-12-28 01:00:58 -03:00
71 changed files with 8493 additions and 527 deletions

README.md

@@ -4,7 +4,7 @@
# **CrewAI**
🤖 **CrewAI**: Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
🤖 **CrewAI**: Production-grade framework for orchestrating sophisticated AI agent systems. From simple automations to complex real-world applications, CrewAI provides precise control and deep customization. By fostering collaborative intelligence through flexible, production-ready architecture, CrewAI empowers agents to work together seamlessly, tackling complex business challenges with predictable, consistent results.
<h3>
@@ -22,13 +22,17 @@
- [Why CrewAI?](#why-crewai)
- [Getting Started](#getting-started)
- [Key Features](#key-features)
- [Understanding Flows and Crews](#understanding-flows-and-crews)
- [CrewAI vs LangGraph](#how-crewai-compares)
- [Examples](#examples)
- [Quick Tutorial](#quick-tutorial)
- [Write Job Descriptions](#write-job-descriptions)
- [Trip Planner](#trip-planner)
- [Stock Analysis](#stock-analysis)
- [Using Crews and Flows Together](#using-crews-and-flows-together)
- [Connecting Your Crew to a Model](#connecting-your-crew-to-a-model)
- [How CrewAI Compares](#how-crewai-compares)
- [Frequently Asked Questions (FAQ)](#frequently-asked-questions-faq)
- [Contribution](#contribution)
- [Telemetry](#telemetry)
- [License](#license)
@@ -36,10 +40,40 @@
## Why CrewAI?
The power of AI collaboration has too much to offer.
CrewAI is designed to enable AI agents to assume roles, share goals, and operate in a cohesive unit - much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions.
CrewAI is a standalone framework, built from the ground up without dependencies on Langchain or other agent frameworks. It's designed to enable AI agents to assume roles, share goals, and operate in a cohesive unit - much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions.
## Getting Started
### Learning Resources
Learn CrewAI through our comprehensive courses:
- [Multi AI Agent Systems with CrewAI](https://www.deeplearning.ai/short-courses/multi-ai-agent-systems-with-crewai/) - Master the fundamentals of multi-agent systems
- [Practical Multi AI Agents and Advanced Use Cases](https://www.deeplearning.ai/short-courses/practical-multi-ai-agents-and-advanced-use-cases-with-crewai/) - Deep dive into advanced implementations
### Understanding Flows and Crews
CrewAI offers two powerful, complementary approaches that work seamlessly together to build sophisticated AI applications:
1. **Crews**: Teams of AI agents with true autonomy and agency, working together to accomplish complex tasks through role-based collaboration. Crews enable:
- Natural, autonomous decision-making between agents
- Dynamic task delegation and collaboration
- Specialized roles with defined goals and expertise
- Flexible problem-solving approaches
2. **Flows**: Production-ready, event-driven workflows that deliver precise control over complex automations. Flows provide:
- Fine-grained control over execution paths for real-world scenarios
- Secure, consistent state management between tasks
- Clean integration of AI agents with production Python code
- Conditional branching for complex business logic
The true power of CrewAI emerges when combining Crews and Flows. This synergy allows you to:
- Build complex, production-grade applications
- Balance autonomy with precise control
- Handle sophisticated real-world scenarios
- Maintain clean, maintainable code structure
### Getting Started with Installation
To get started with CrewAI, follow these simple steps:
### 1. Installation
@@ -51,7 +85,6 @@ First, install CrewAI:
```shell
pip install crewai
```
To install the 'crewai' package along with optional features that include additional tools for agents, use the following command:
```shell
@@ -59,6 +92,22 @@ pip install 'crewai[tools]'
```
The command above installs the basic package and also adds extra components which require more dependencies to function.
### Troubleshooting Dependencies
If you encounter issues during installation or usage, here are some common solutions:
#### Common Issues
1. **ModuleNotFoundError: No module named 'tiktoken'**
- Install tiktoken explicitly: `pip install 'crewai[embeddings]'`
- If using embedchain or other tools: `pip install 'crewai[tools]'`
2. **Failed building wheel for tiktoken**
   - Ensure the Rust compiler is installed (see installation steps above, or the rustup command below)
- For Windows: Verify Visual C++ Build Tools are installed
- Try upgrading pip: `pip install --upgrade pip`
- If issues persist, use a pre-built wheel: `pip install tiktoken --prefer-binary`
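
If the Rust toolchain is missing, one common way to install it is via rustup (shown for Unix-like shells; on Windows, use the `rustup-init.exe` installer from rustup.rs instead):
```shell
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```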
### 2. Setting Up Your Crew with the YAML Configuration
To create a new CrewAI project, run the following CLI (Command Line Interface) command:
@@ -264,13 +313,16 @@ In addition to the sequential process, you can use the hierarchical process, whi
## Key Features
- **Role-Based Agent Design**: Customize agents with specific roles, goals, and tools.
- **Autonomous Inter-Agent Delegation**: Agents can autonomously delegate tasks and inquire amongst themselves, enhancing problem-solving efficiency.
- **Flexible Task Management**: Define tasks with customizable tools and assign them to agents dynamically.
- **Processes Driven**: Currently only supports `sequential` task execution and `hierarchical` processes, but more complex processes like consensual and autonomous are being worked on.
- **Save output as file**: Save the output of individual tasks as a file, so you can use it later.
- **Parse output as Pydantic or Json**: Parse the output of individual tasks as a Pydantic model or as a Json if you want to.
- **Works with Open Source Models**: Run your crew using Open AI or open source models refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models, even ones running locally!
**Note**: CrewAI is a standalone framework built from the ground up, without dependencies on Langchain or other agent frameworks.
- **Deep Customization**: Build sophisticated agents with full control over the system - from overriding inner prompts to accessing low-level APIs. Customize roles, goals, tools, and behaviors while maintaining clean abstractions.
- **Autonomous Inter-Agent Delegation**: Agents can autonomously delegate tasks and inquire amongst themselves, enabling complex problem-solving in real-world scenarios.
- **Flexible Task Management**: Define and customize tasks with granular control, from simple operations to complex multi-step processes.
- **Production-Grade Architecture**: Support for both high-level abstractions and low-level customization, with robust error handling and state management.
- **Predictable Results**: Ensure consistent, accurate outputs through programmatic guardrails, agent training capabilities, and flow-based execution control. See our [documentation on guardrails](https://docs.crewai.com/how-to/guardrails/) for implementation details.
- **Model Flexibility**: Run your crew using OpenAI or open source models with production-ready integrations. See [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) for detailed configuration options.
- **Event-Driven Flows**: Build complex, real-world workflows with precise control over execution paths, state management, and conditional logic.
- **Process Orchestration**: Achieve any workflow pattern through flows - from simple sequential and hierarchical processes to complex, custom orchestration patterns with conditional branching and parallel execution.
![CrewAI Mind Map](./docs/crewAI-mindmap.png "CrewAI Mind Map")
@@ -305,6 +357,98 @@ You can test different real life examples of AI crews in the [CrewAI-examples re
[![Stock Analysis](https://img.youtube.com/vi/e0Uj4yWdaAg/maxresdefault.jpg)](https://www.youtube.com/watch?v=e0Uj4yWdaAg "Stock Analysis")
### Using Crews and Flows Together
CrewAI's power truly shines when combining Crews with Flows to create sophisticated automation pipelines. Here's how you can orchestrate multiple Crews within a Flow:
```python
from crewai.flow.flow import Flow, listen, or_, router, start
from crewai import Agent, Crew, Process, Task
from pydantic import BaseModel

# Define structured state for precise control
class MarketState(BaseModel):
    sentiment: str = "neutral"
    confidence: float = 0.0
    recommendations: list = []

class AdvancedAnalysisFlow(Flow[MarketState]):
    @start()
    def fetch_market_data(self):
        # Demonstrate low-level control with structured state
        self.state.sentiment = "analyzing"
        return {"sector": "tech", "timeframe": "1W"}  # These parameters match the task description template

    @listen(fetch_market_data)
    def analyze_with_crew(self, market_data):
        # Show crew agency through specialized roles
        analyst = Agent(
            role="Senior Market Analyst",
            goal="Conduct deep market analysis with expert insight",
            backstory="You're a veteran analyst known for identifying subtle market patterns"
        )
        researcher = Agent(
            role="Data Researcher",
            goal="Gather and validate supporting market data",
            backstory="You excel at finding and correlating multiple data sources"
        )

        analysis_task = Task(
            description="Analyze {sector} sector data for the past {timeframe}",
            expected_output="Detailed market analysis with confidence score",
            agent=analyst
        )
        research_task = Task(
            description="Find supporting data to validate the analysis",
            expected_output="Corroborating evidence and potential contradictions",
            agent=researcher
        )

        # Demonstrate crew autonomy
        analysis_crew = Crew(
            agents=[analyst, researcher],
            tasks=[analysis_task, research_task],
            process=Process.sequential,
            verbose=True
        )
        return analysis_crew.kickoff(inputs=market_data)  # Pass market_data as named inputs

    @router(analyze_with_crew)
    def determine_next_steps(self):
        # Show flow control with conditional routing
        if self.state.confidence > 0.8:
            return "high_confidence"
        elif self.state.confidence > 0.5:
            return "medium_confidence"
        return "low_confidence"

    @listen("high_confidence")
    def execute_strategy(self):
        # Demonstrate complex decision making
        strategy_expert = Agent(
            role="Strategy Expert",
            goal="Develop optimal market strategy",
            backstory="You translate market analysis into actionable plans"
        )
        strategy_crew = Crew(
            agents=[strategy_expert],
            tasks=[
                Task(
                    description="Create detailed strategy based on analysis",
                    expected_output="Step-by-step action plan",
                    agent=strategy_expert
                )
            ]
        )
        return strategy_crew.kickoff()

    @listen(or_("medium_confidence", "low_confidence"))
    def request_additional_analysis(self):
        self.state.recommendations.append("Gather more data")
        return "Additional analysis required"
```
This example demonstrates how to:
1. Use Python code for basic data operations
2. Create and execute Crews as steps in your workflow
3. Use Flow decorators to manage the sequence of operations
4. Implement conditional branching based on Crew results
## Connecting Your Crew to a Model
CrewAI supports using various LLMs through a variety of connection options. By default your agents will use the OpenAI API when querying the model. However, there are several other ways to allow your agents to connect to models. For example, you can configure your agents to use a local model via the Ollama tool.
@@ -313,9 +457,13 @@ Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-
## How CrewAI Compares
**CrewAI's Advantage**: CrewAI is built with production in mind. It offers the flexibility of Autogen's conversational agents and the structured process approach of ChatDev, but without the rigidity. CrewAI's processes are designed to be dynamic and adaptable, fitting seamlessly into both development and production workflows.
**CrewAI's Advantage**: CrewAI combines autonomous agent intelligence with precise workflow control through its unique Crews and Flows architecture. The framework excels at both high-level orchestration and low-level customization, enabling complex, production-grade systems with granular control.
- **Autogen**: While Autogen does good in creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
- **LangGraph**: While LangGraph provides a foundation for building agent workflows, its approach requires significant boilerplate code and complex state management patterns. The framework's tight coupling with LangChain can limit flexibility when implementing custom agent behaviors or integrating with external systems.
*P.S. CrewAI demonstrates significant performance advantages over LangGraph, executing 5.76x faster in certain cases like this QA task example ([see comparison](https://github.com/crewAIInc/crewAI-examples/tree/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/QA%20Agent)) while achieving higher evaluation scores with faster completion times in certain coding tasks, like in this example ([detailed analysis](https://github.com/crewAIInc/crewAI-examples/blob/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/Coding%20Assistant/coding_assistant_eval.ipynb)).*
- **Autogen**: While Autogen excels at creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
- **ChatDev**: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications.
@@ -440,5 +588,8 @@ A: CrewAI uses anonymous telemetry to collect usage data for improvement purpose
### Q: Where can I find examples of CrewAI in action?
A: You can find various real-life examples in the [CrewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), including trip planners, stock analysis tools, and more.
### Q: What is the difference between Crews and Flows?
A: Crews and Flows serve different but complementary purposes in CrewAI. Crews are teams of AI agents working together to accomplish specific tasks through role-based collaboration, delivering accurate and predictable results. Flows, on the other hand, are event-driven workflows that can orchestrate both Crews and regular Python code, allowing you to build complex automation pipelines with secure state management and conditional execution paths.
### Q: How can I contribute to CrewAI?
A: Contributions are welcome! You can fork the repository, create a new branch for your feature, add your improvement, and send a pull request. Check the Contribution section in the README for more details.


@@ -171,6 +171,58 @@ crewai reset-memories --knowledge
This is useful when you've updated your knowledge sources and want to ensure that the agents are using the most recent information.
## Agent-Specific Knowledge
While knowledge can be provided at the crew level using `crew.knowledge_sources`, individual agents can also have their own knowledge sources using the `knowledge_sources` parameter:
```python Code
from crewai import Agent, Task, Crew
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

# Create agent-specific knowledge about a product
product_specs = StringKnowledgeSource(
    content="""The XPS 13 laptop features:
- 13.4-inch 4K display
- Intel Core i7 processor
- 16GB RAM
- 512GB SSD storage
- 12-hour battery life""",
    metadata={"category": "product_specs"}
)

# Create a support agent with product knowledge
support_agent = Agent(
    role="Technical Support Specialist",
    goal="Provide accurate product information and support.",
    backstory="You are an expert on our laptop products and specifications.",
    knowledge_sources=[product_specs]  # Agent-specific knowledge
)

# Create a task that requires product knowledge
support_task = Task(
    description="Answer this customer question: {question}",
    expected_output="A concise, accurate answer based on the product specifications",
    agent=support_agent
)

# Create and run the crew
crew = Crew(
    agents=[support_agent],
    tasks=[support_task]
)

# Get answer about the laptop's specifications
result = crew.kickoff(
    inputs={"question": "What is the storage capacity of the XPS 13?"}
)
```
<Info>
Benefits of agent-specific knowledge:
- Give agents specialized information for their roles
- Maintain separation of concerns between agents
- Combine with crew-level knowledge for layered information access (see the sketch below)
</Info>
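
As a sketch of that layered setup, building on the example above (the `company_policy` source here is illustrative):

```python Code
# Illustrative crew-level knowledge, shared by all agents in the crew
company_policy = StringKnowledgeSource(
    content="All customer inquiries must be answered within 24 hours.",
    metadata={"category": "policy"}
)

crew = Crew(
    agents=[support_agent],  # support_agent keeps its own product_specs knowledge
    tasks=[support_task],
    knowledge_sources=[company_policy]  # crew-wide knowledge layer
)
```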
## Custom Knowledge Sources
CrewAI allows you to create custom knowledge sources for any type of data by extending the `BaseKnowledgeSource` class. Let's create a practical example that fetches and processes space news articles.

View File

@@ -0,0 +1,222 @@
---
title: BraveSearchTool
description: A tool for performing web searches using the Brave Search API
icon: search
---
## BraveSearchTool
The BraveSearchTool enables web searches using the Brave Search API, providing customizable result counts, country-specific searches, and rate-limited operations. It formats search results with titles, URLs, and snippets for easy consumption.
## Installation
```bash
pip install 'crewai[tools]'
```
## Authentication
Set up your Brave Search API key:
```bash
export BRAVE_API_KEY='your-brave-api-key'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import BraveSearchTool
# Basic initialization
search_tool = BraveSearchTool()
# Advanced initialization with custom parameters
search_tool = BraveSearchTool(
country="US", # Country-specific search
n_results=5, # Number of results to return
save_file=True # Save results to file
)
# Create an agent with the tool
researcher = Agent(
role='Web Researcher',
goal='Search and analyze web content',
backstory='Expert at finding relevant information online.',
tools=[search_tool],
verbose=True
)
```
## Input Schema
```python
class BraveSearchToolSchema(BaseModel):
search_query: str = Field(
description="Mandatory search query you want to use to search the internet"
)
```
## Function Signature
```python
def __init__(
self,
country: Optional[str] = "",
n_results: int = 10,
save_file: bool = False,
*args,
**kwargs
):
"""
Initialize the Brave search tool.
Args:
country (Optional[str]): Country code for region-specific search
n_results (int): Number of results to return (default: 10)
save_file (bool): Whether to save results to file (default: False)
"""
def _run(
self,
**kwargs: Any
) -> str:
"""
Execute web search using Brave Search API.
Args:
search_query (str): Query to search
save_file (bool, optional): Override save_file setting
n_results (int, optional): Override n_results setting
Returns:
str: Formatted search results with titles, URLs, and snippets
"""
```
## Best Practices
1. API Authentication:
- Securely store BRAVE_API_KEY
- Keep API key confidential
- Handle authentication errors
2. Rate Limiting:
- Tool automatically handles rate limiting
- Minimum 1-second interval between requests
   - Consider implementing additional rate limits (see the sketch after this list)
3. Search Optimization:
- Use specific search queries
- Adjust result count based on needs
- Consider regional search requirements
4. Error Handling:
- Handle API request failures
- Manage parsing errors
- Monitor rate limit errors
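
As a sketch of the additional client-side rate limiting mentioned in point 2 (the wrapper below is illustrative, not part of the tool):

```python
import time

from crewai_tools import BraveSearchTool

class ThrottledSearch:
    """Illustrative throttle on top of the tool's built-in 1-second interval."""

    def __init__(self, tool, min_interval: float = 2.0):
        self.tool = tool
        self.min_interval = min_interval
        self._last_call = 0.0

    def run(self, search_query: str) -> str:
        # Sleep just long enough to honor the minimum interval
        elapsed = time.time() - self._last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last_call = time.time()
        return self.tool.run(search_query=search_query)

throttled = ThrottledSearch(BraveSearchTool(), min_interval=2.0)
```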
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import BraveSearchTool

# Initialize tool with custom configuration
search_tool = BraveSearchTool(
    country="GB",   # UK-specific search
    n_results=3,    # Limit to 3 results
    save_file=True  # Save results to file
)

# Create agent
researcher = Agent(
    role='Web Researcher',
    goal='Research latest AI developments',
    backstory='Expert at finding and analyzing tech news.',
    tools=[search_tool]
)

# Define task
research_task = Task(
    description="""Find the latest news about artificial
    intelligence developments in quantum computing.""",
    expected_output="A summary of the most recent developments, with sources",
    agent=researcher
)

# The tool will use:
# {
#   "search_query": "latest quantum computing AI developments"
# }

# Create crew
crew = Crew(
    agents=[researcher],
    tasks=[research_task]
)

# Execute
result = crew.kickoff()
```
## Advanced Usage
### Country-Specific Search
```python
# Initialize tools for different regions
us_search = BraveSearchTool(country="US")
uk_search = BraveSearchTool(country="GB")
jp_search = BraveSearchTool(country="JP")
# Compare results across regions
us_results = us_search.run(
search_query="local news"
)
uk_results = uk_search.run(
search_query="local news"
)
jp_results = jp_search.run(
search_query="local news"
)
```
### Result Management
```python
# Save results to file
archival_search = BraveSearchTool(
save_file=True,
n_results=20
)
# Search and save
results = archival_search.run(
search_query="historical events 2023"
)
# Results saved to search_results_YYYY-MM-DD_HH-MM-SS.txt
```
### Error Handling Example
```python
try:
search_tool = BraveSearchTool()
results = search_tool.run(
search_query="important topic"
)
print(results)
except ValueError as e: # API key missing
print(f"Authentication error: {str(e)}")
except Exception as e:
print(f"Search error: {str(e)}")
```
## Notes
- Requires Brave Search API key
- Implements automatic rate limiting
- Supports country-specific searches
- Customizable result count
- Optional file saving feature
- Thread-safe operations
- Efficient result formatting
- Handles API errors gracefully
- Supports parallel searches
- Maintains search context


@@ -0,0 +1,164 @@
---
title: CodeDocsSearchTool
description: A semantic search tool for code documentation websites using RAG capabilities
icon: book-open
---
## CodeDocsSearchTool
The CodeDocsSearchTool is a specialized Retrieval-Augmented Generation (RAG) tool that enables semantic search within code documentation websites. It inherits from the base RagTool class and provides both fixed and dynamic documentation URL searching capabilities.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import CodeDocsSearchTool
# Method 1: Dynamic documentation URL
docs_search = CodeDocsSearchTool()
# Method 2: Fixed documentation URL
fixed_docs_search = CodeDocsSearchTool(
docs_url="https://docs.example.com"
)
# Create an agent with the tool
researcher = Agent(
role='Documentation Researcher',
goal='Search through code documentation semantically',
backstory='Expert at finding relevant information in technical documentation.',
tools=[docs_search],
verbose=True
)
```
## Input Schema
The tool supports two input schemas depending on initialization:
### Dynamic URL Schema
```python
class CodeDocsSearchToolSchema(BaseModel):
search_query: str # The semantic search query
docs_url: str # URL of the documentation site to search
```
### Fixed URL Schema
```python
class FixedCodeDocsSearchToolSchema(BaseModel):
search_query: str # The semantic search query
```
## Function Signature
```python
def __init__(self, docs_url: Optional[str] = None, **kwargs):
"""
Initialize the documentation search tool.
Args:
docs_url (Optional[str]): Fixed URL to a documentation site. If provided,
the tool will only search this documentation.
**kwargs: Additional arguments passed to the parent RagTool
"""
def _run(self, search_query: str, **kwargs: Any) -> Any:
"""
Perform semantic search on the documentation site.
Args:
search_query (str): The semantic search query
**kwargs: Additional arguments (including 'docs_url' for dynamic mode)
Returns:
str: Relevant documentation passages based on semantic search
"""
```
## Best Practices
1. Choose initialization method based on use case:
- Use fixed URL when repeatedly searching the same documentation
   - Use dynamic URL when searching different documentation sites (see the sketch after this list)
2. Write clear, semantic search queries
3. Ensure documentation sites are accessible
4. Consider documentation structure and size
5. Handle potential URL access errors in agent prompts
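
For the dynamic mode in particular, the tool can also be invoked directly outside an agent; a minimal sketch, assuming `docs_url` is accepted as a runtime argument as described above:

```python
from crewai_tools import CodeDocsSearchTool

# Dynamic mode: supply the documentation URL per call
flexible = CodeDocsSearchTool()
passages = flexible.run(
    search_query="how to configure authentication",
    docs_url="https://docs.example.com"
)
print(passages)
```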
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import CodeDocsSearchTool

# Example 1: Fixed documentation search
api_docs_search = CodeDocsSearchTool(
    docs_url="https://api.example.com/docs"
)

# Example 2: Dynamic documentation search
flexible_docs_search = CodeDocsSearchTool()

# Create agents
api_analyst = Agent(
    role='API Documentation Analyst',
    goal='Find relevant API endpoints and usage examples',
    backstory='Expert at analyzing API documentation.',
    tools=[api_docs_search]
)

docs_researcher = Agent(
    role='Documentation Researcher',
    goal='Search through various documentation sites',
    backstory='Specialist in finding information across multiple docs.',
    tools=[flexible_docs_search]
)

# Define tasks
fixed_search_task = Task(
    description="""Find all authentication-related endpoints
    in the API documentation.""",
    expected_output="A list of authentication endpoints with brief descriptions",
    agent=api_analyst
)
# The agent will use:
# {
#   "search_query": "authentication endpoints and methods"
# }

dynamic_search_task = Task(
    description="""Search through the Python documentation at
    docs.python.org for information about async/await.""",
    expected_output="Relevant passages explaining async/await usage",
    agent=docs_researcher
)
# The agent will use:
# {
#   "search_query": "async await syntax and usage",
#   "docs_url": "https://docs.python.org"
# }

# Create crew
crew = Crew(
    agents=[api_analyst, docs_researcher],
    tasks=[fixed_search_task, dynamic_search_task]
)

# Execute
result = crew.kickoff()
```
## Notes
- Inherits from RagTool for semantic search capabilities
- Supports both fixed and dynamic documentation URLs
- Uses embeddings for semantic search
- Thread-safe operations
- Automatically handles documentation loading and embedding
- Optimized for technical documentation search


@@ -0,0 +1,224 @@
---
title: CodeInterpreterTool
description: A tool for secure Python code execution in isolated Docker environments
icon: code
---
## CodeInterpreterTool
The CodeInterpreterTool provides secure Python code execution capabilities using Docker containers. It supports dynamic library installation and offers both safe (Docker-based) and unsafe (direct) execution modes.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import CodeInterpreterTool
# Initialize the tool
code_tool = CodeInterpreterTool()
# Create an agent with the tool
programmer = Agent(
role='Code Executor',
goal='Execute and analyze Python code',
backstory='Expert at writing and executing Python code.',
tools=[code_tool],
verbose=True
)
```
## Input Schema
```python
class CodeInterpreterSchema(BaseModel):
code: str = Field(
description="Python3 code used to be interpreted in the Docker container. ALWAYS PRINT the final result and the output of the code"
)
libraries_used: List[str] = Field(
description="List of libraries used in the code with proper installing names separated by commas. Example: numpy,pandas,beautifulsoup4"
)
```
## Function Signature
```python
def __init__(
self,
code: Optional[str] = None,
user_dockerfile_path: Optional[str] = None,
user_docker_base_url: Optional[str] = None,
unsafe_mode: bool = False,
**kwargs
):
"""
Initialize the code interpreter tool.
Args:
code (Optional[str]): Default code to execute
user_dockerfile_path (Optional[str]): Custom Dockerfile path
user_docker_base_url (Optional[str]): Custom Docker daemon URL
unsafe_mode (bool): Enable direct code execution
**kwargs: Additional arguments for base tool
"""
def _run(
self,
code: str,
libraries_used: List[str],
**kwargs: Any
) -> str:
"""
Execute Python code in Docker container or directly.
Args:
code (str): Python code to execute
libraries_used (List[str]): Required libraries
**kwargs: Additional arguments
Returns:
str: Execution output or error message
"""
```
## Best Practices
1. Security Considerations:
- Use Docker mode by default
- Validate input code
- Control library access
- Monitor execution time
2. Docker Configuration:
- Use custom Dockerfile when needed
- Handle container lifecycle
- Manage resource limits
- Clean up after execution
3. Library Management:
   - Specify exact versions (see the sketch after this list)
- Use trusted packages
- Handle dependencies
- Verify installations
4. Error Handling:
- Catch execution errors
- Handle timeouts
- Manage Docker errors
- Provide clear messages
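
Regarding library management, exact versions can be pinned in `libraries_used`; this sketch assumes pip-style version specifiers are passed through to the installer unchanged:

```python
from crewai_tools import CodeInterpreterTool

code_tool = CodeInterpreterTool()
# Pin a trusted, exact version (assumption: entries go straight to pip)
result = code_tool.run(
    code="import numpy as np; print(np.__version__)",
    libraries_used=["numpy==1.26.4"],
)
print(result)
```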
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import CodeInterpreterTool

# Initialize tool
code_tool = CodeInterpreterTool()

# Create agent
programmer = Agent(
    role='Code Executor',
    goal='Execute data analysis code',
    backstory='Expert Python programmer specializing in data analysis.',
    tools=[code_tool]
)

# Define task
analysis_task = Task(
    description="""Analyze the dataset using pandas and
    create a summary visualization with matplotlib.""",
    expected_output="A statistical summary and a saved visualization",
    agent=programmer
)

# The tool will use:
# {
#   "code": """
#     import pandas as pd
#     import matplotlib.pyplot as plt
#
#     # Load and analyze data
#     df = pd.read_csv('data.csv')
#     summary = df.describe()
#
#     # Create visualization
#     plt.figure(figsize=(10, 6))
#     df['column'].hist()
#     plt.savefig('output.png')
#
#     print(summary)
#   """,
#   "libraries_used": ["pandas", "matplotlib"]
# }

# Create crew
crew = Crew(
    agents=[programmer],
    tasks=[analysis_task]
)

# Execute
result = crew.kickoff()
result = crew.kickoff()
```
## Advanced Usage
### Custom Docker Configuration
```python
# Use custom Dockerfile
tool = CodeInterpreterTool(
user_dockerfile_path="/path/to/Dockerfile"
)
# Use custom Docker daemon
tool = CodeInterpreterTool(
user_docker_base_url="tcp://remote-docker:2375"
)
```
### Direct Execution Mode
```python
# Enable unsafe mode (not recommended)
tool = CodeInterpreterTool(unsafe_mode=True)
# Execute code directly
result = tool.run(
code="print('Hello, World!')",
libraries_used=[]
)
```
### Error Handling Example
```python
try:
code_tool = CodeInterpreterTool()
result = code_tool.run(
code="""
import numpy as np
arr = np.array([1, 2, 3])
print(f"Array mean: {arr.mean()}")
""",
libraries_used=["numpy"]
)
print(result)
except Exception as e:
print(f"Error executing code: {str(e)}")
```
## Notes
- Inherits from BaseTool
- Docker-based isolation
- Dynamic library installation
- Secure code execution
- Custom Docker support
- Comprehensive error handling
- Resource management
- Container cleanup
- Library dependency handling
- Execution output capture


@@ -0,0 +1,207 @@
---
title: CSVSearchTool
description: A tool for semantic search within CSV files using RAG capabilities
icon: table
---
## CSVSearchTool
The CSVSearchTool enables semantic search over CSV files using Retrieval-Augmented Generation (RAG). The CSV file can be specified either during initialization or at runtime, making the tool flexible for various use cases.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import CSVSearchTool
# Method 1: Initialize with specific CSV file
csv_tool = CSVSearchTool(csv="path/to/data.csv")
# Method 2: Initialize without CSV (specify at runtime)
flexible_csv_tool = CSVSearchTool()
# Create an agent with the tool
data_analyst = Agent(
role='Data Analyst',
goal='Search and analyze CSV data semantically',
backstory='Expert at analyzing and extracting insights from CSV data.',
tools=[csv_tool],
verbose=True
)
```
## Input Schema
### Fixed CSV Schema (when CSV path provided during initialization)
```python
class FixedCSVSearchToolSchema(BaseModel):
search_query: str = Field(
description="Mandatory search query you want to use to search the CSV's content"
)
```
### Flexible CSV Schema (when CSV path provided at runtime)
```python
class CSVSearchToolSchema(FixedCSVSearchToolSchema):
csv: str = Field(
description="Mandatory csv path you want to search"
)
```
## Function Signature
```python
def __init__(
self,
csv: Optional[str] = None,
**kwargs
):
"""
Initialize the CSV search tool.
Args:
csv (Optional[str]): Path to CSV file (optional)
**kwargs: Additional arguments for RAG tool configuration
"""
def _run(
self,
search_query: str,
**kwargs: Any
) -> str:
"""
Execute semantic search on CSV content.
Args:
search_query (str): Query to search in the CSV
**kwargs: Additional arguments including csv path if not initialized
Returns:
str: Relevant content from the CSV matching the query
"""
```
## Best Practices
1. CSV File Handling:
- Ensure CSV files are properly formatted
- Use absolute paths for reliability
- Verify file permissions before processing
2. Search Optimization:
- Use specific, focused search queries
- Consider column names and data structure
- Test with sample queries first
3. Performance Considerations:
- Pre-initialize with CSV for repeated searches
- Handle large CSV files appropriately
- Monitor memory usage with big datasets
4. Error Handling:
   - Verify CSV file existence (see the pre-flight sketch after this list)
- Handle malformed CSV data
- Manage file access permissions
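
For example, a minimal pre-flight check before handing a file to the tool:

```python
import os

from crewai_tools import CSVSearchTool

csv_path = "/path/to/data.csv"

# Verify existence and readability up front for clearer error messages
if not os.path.isfile(csv_path):
    raise FileNotFoundError(f"CSV not found: {csv_path}")
if not os.access(csv_path, os.R_OK):
    raise PermissionError(f"CSV not readable: {csv_path}")

csv_tool = CSVSearchTool(csv=csv_path)
```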
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import CSVSearchTool

# Initialize tool with specific CSV
csv_tool = CSVSearchTool(csv="/path/to/sales_data.csv")

# Create agent
analyst = Agent(
    role='Data Analyst',
    goal='Extract insights from sales data',
    backstory='Expert at analyzing sales data and trends.',
    tools=[csv_tool]
)

# Define task
analysis_task = Task(
    description="""Find all sales records from the CSV
    that relate to product returns in Q4 2023.""",
    expected_output="A summary of Q4 2023 product-return records",
    agent=analyst
)

# The tool will use:
# {
#   "search_query": "product returns Q4 2023"
# }

# Create crew
crew = Crew(
    agents=[analyst],
    tasks=[analysis_task]
)

# Execute
result = crew.kickoff()
```
## Advanced Usage
### Dynamic CSV Selection
```python
# Initialize without CSV
flexible_tool = CSVSearchTool()
# Search different CSVs
result1 = flexible_tool.run(
search_query="revenue 2023",
csv="/path/to/finance.csv"
)
result2 = flexible_tool.run(
search_query="customer feedback",
csv="/path/to/surveys.csv"
)
```
### Multiple CSV Analysis
```python
# Create tools for different CSVs
sales_tool = CSVSearchTool(csv="/path/to/sales.csv")
inventory_tool = CSVSearchTool(csv="/path/to/inventory.csv")
# Create agent with multiple tools
analyst = Agent(
role='Business Analyst',
goal='Cross-reference sales and inventory data',
tools=[sales_tool, inventory_tool]
)
```
### Error Handling Example
```python
try:
csv_tool = CSVSearchTool(csv="/path/to/data.csv")
result = csv_tool.run(
search_query="important metrics"
)
print(result)
except Exception as e:
print(f"Error processing CSV: {str(e)}")
```
## Notes
- Inherits from RagTool for semantic search
- Supports dynamic CSV file specification
- Uses embedchain for data processing
- Maintains search context across queries
- Thread-safe operations
- Efficient semantic search capabilities
- Supports various CSV formats
- Handles large datasets effectively
- Preserves CSV structure in search
- Enables natural language queries


@@ -0,0 +1,217 @@
---
title: Directory Read Tool
description: A tool for recursively listing directory contents
---
# Directory Read Tool
The Directory Read Tool provides functionality to recursively list all files within a directory. It supports both fixed and dynamic directory path modes, allowing you to specify the directory at initialization or runtime.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage
You can use the Directory Read Tool in two ways:
### 1. Fixed Directory Path
Initialize the tool with a specific directory path:
```python
from crewai import Agent, Task
from crewai_tools import DirectoryReadTool

# Initialize with a fixed directory
tool = DirectoryReadTool(directory="/path/to/your/directory")

# Create an agent with the tool
agent = Agent(
    role='File System Analyst',
    goal='Analyze directory contents',
    backstory='I help analyze and organize file systems',
    tools=[tool]
)

# Use in a task
task = Task(
    description="List all files in the project directory",
    expected_output="A list of all file paths in the directory",
    agent=agent
)
```
### 2. Dynamic Directory Path
Initialize the tool without a specific directory path to provide it at runtime:
```python
from crewai import Agent, Task
from crewai_tools import DirectoryReadTool

# Initialize without a fixed directory
tool = DirectoryReadTool()

# Create an agent with the tool
agent = Agent(
    role='File System Explorer',
    goal='Explore different directories',
    backstory='I analyze various directory structures',
    tools=[tool]
)

# Use in a task with a dynamic directory path, interpolated from crew inputs
task = Task(
    description="List all files in the directory: {directory}",
    expected_output="A list of all file paths in the directory",
    agent=agent
)

# The directory is supplied at kickoff, e.g.:
# crew.kickoff(inputs={"directory": "/path/to/explore"})
```
## Input Schema
### Fixed Directory Mode
```python
class FixedDirectoryReadToolSchema(BaseModel):
pass # No additional parameters needed when directory is fixed
```
### Dynamic Directory Mode
```python
class DirectoryReadToolSchema(BaseModel):
    directory: str  # Path to the directory whose contents will be listed
```
## Function Signatures
```python
def __init__(self, directory: Optional[str] = None, **kwargs):
"""
Initialize the Directory Read Tool.
Args:
directory (Optional[str]): Path to the directory (optional)
**kwargs: Additional arguments passed to BaseTool
"""
def _run(
self,
**kwargs: Any,
) -> str:
"""
Execute the directory listing.
Args:
**kwargs: Arguments including 'directory' for dynamic mode
Returns:
str: A formatted string containing all file paths in the directory
"""
```
## Best Practices
1. **Path Handling**:
- Use absolute paths to avoid path resolution issues
- Handle trailing slashes appropriately
- Verify directory existence before listing
2. **Performance Considerations**:
- Be mindful of directory size when listing large directories
- Consider implementing pagination for large directories
- Handle symlinks appropriately
3. **Error Handling**:
- Handle directory not found errors gracefully
- Manage permission issues appropriately
- Validate input parameters before processing
## Example Integration
Here's a complete example showing how to integrate the Directory Read Tool with CrewAI:
```python
from crewai import Agent, Task, Crew
from crewai_tools import DirectoryReadTool

# Initialize the tool
dir_tool = DirectoryReadTool()

# Create an agent with the tool
file_analyst = Agent(
    role='File System Analyst',
    goal='Analyze and report on directory structures',
    backstory='I am an expert at analyzing file system organization',
    tools=[dir_tool]
)

# Create tasks
analysis_task = Task(
    description="""
    Analyze the project directory structure at {directory}:
    1. List all files recursively
    2. Identify key file types
    3. Report on directory organization
    Provide a comprehensive analysis of the findings.
    """,
    expected_output="A comprehensive analysis of the directory structure",
    agent=file_analyst
)

# Create and run the crew
crew = Crew(
    agents=[file_analyst],
    tasks=[analysis_task]
)
result = crew.kickoff(inputs={"directory": "/path/to/project"})
```
## Error Handling
The tool handles various error scenarios:
1. **Directory Not Found**:
```python
try:
tool = DirectoryReadTool(directory="/nonexistent/path")
except FileNotFoundError:
print("Directory not found. Please verify the path.")
```
2. **Permission Issues**:
```python
try:
tool = DirectoryReadTool(directory="/restricted/path")
except PermissionError:
print("Insufficient permissions to access the directory.")
```
3. **Invalid Path**:
```python
try:
    result = tool.run(directory="invalid/path")
except ValueError:
print("Invalid directory path provided.")
```
## Output Format
The tool returns a formatted string containing all file paths in the directory:
```
File paths:
- /path/to/directory/file1.txt
- /path/to/directory/subdirectory/file2.txt
- /path/to/directory/subdirectory/file3.py
```
Each file path is listed on a new line with a hyphen prefix, making it easy to parse and read the output.
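
For example, a minimal sketch that parses the listing back into a Python list:

```python
def parse_file_listing(output: str) -> list[str]:
    """Parse the tool's 'File paths:' listing into a list of paths."""
    return [
        line[2:].strip()  # drop the leading "- "
        for line in output.splitlines()
        if line.startswith("- ")
    ]

paths = parse_file_listing(result)  # `result` from the crew run above
```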


@@ -0,0 +1,214 @@
---
title: DirectorySearchTool
description: A tool for semantic search within directory contents using RAG capabilities
icon: folder-search
---
## DirectorySearchTool
The DirectorySearchTool enables semantic search capabilities for directory contents using Retrieval-Augmented Generation (RAG). It processes files recursively within a directory and allows searching through their contents using natural language queries.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import DirectorySearchTool
# Method 1: Initialize with specific directory
dir_tool = DirectorySearchTool(directory="/path/to/documents")
# Method 2: Initialize without directory (specify at runtime)
flexible_dir_tool = DirectorySearchTool()
# Create an agent with the tool
researcher = Agent(
role='Directory Researcher',
goal='Search and analyze directory contents',
backstory='Expert at finding relevant information in document collections.',
tools=[dir_tool],
verbose=True
)
```
## Input Schema
### Fixed Directory Schema (when path provided during initialization)
```python
class FixedDirectorySearchToolSchema(BaseModel):
search_query: str = Field(
description="Mandatory search query you want to use to search the directory's content"
)
```
### Flexible Directory Schema (when path provided at runtime)
```python
class DirectorySearchToolSchema(FixedDirectorySearchToolSchema):
directory: str = Field(
description="Mandatory directory you want to search"
)
```
## Function Signature
```python
def __init__(
self,
directory: Optional[str] = None,
**kwargs
):
"""
Initialize the directory search tool.
Args:
directory (Optional[str]): Path to directory (optional)
**kwargs: Additional arguments for RAG tool configuration
"""
def _run(
self,
search_query: str,
**kwargs: Any
) -> str:
"""
Execute semantic search on directory contents.
Args:
search_query (str): Query to search in the directory
**kwargs: Additional arguments including directory if not initialized
Returns:
str: Relevant content from the directory matching the query
"""
```
## Best Practices
1. Directory Management:
- Use absolute paths
- Verify directory existence
- Handle permissions properly
2. Search Optimization:
- Use specific queries
- Consider file types
- Test with sample queries
3. Performance Considerations:
   - Pre-initialize for repeated searches (see the sketch after this list)
- Handle large directories
- Monitor processing time
4. Error Handling:
- Verify directory access
- Handle missing files
- Manage permissions
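
As a sketch of the pre-initialization advice above, build the tool once so the directory is indexed a single time, then reuse it across queries:

```python
from crewai_tools import DirectorySearchTool

# Index the directory once, reuse for many searches
dir_tool = DirectorySearchTool(directory="/path/to/documents")

for query in ["deployment steps", "rollback procedure", "on-call runbook"]:
    print(dir_tool.run(search_query=query))
```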
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import DirectorySearchTool

# Initialize tool with specific directory
dir_tool = DirectorySearchTool(
    directory="/path/to/documents"
)

# Create agent
researcher = Agent(
    role='Directory Researcher',
    goal='Extract insights from document collections',
    backstory='Expert at analyzing document collections.',
    tools=[dir_tool]
)

# Define task
research_task = Task(
    description="""Find all mentions of machine learning
    applications from the directory contents.""",
    expected_output="A summary of machine learning mentions with context",
    agent=researcher
)

# The tool will use:
# {
#   "search_query": "machine learning applications"
# }

# Create crew
crew = Crew(
    agents=[researcher],
    tasks=[research_task]
)

# Execute
result = crew.kickoff()
```
## Advanced Usage
### Dynamic Directory Selection
```python
# Initialize without directory path
flexible_tool = DirectorySearchTool()
# Search different directories
docs_results = flexible_tool.run(
search_query="technical specifications",
directory="/path/to/docs"
)
reports_results = flexible_tool.run(
search_query="financial metrics",
directory="/path/to/reports"
)
```
### Multiple Directory Analysis
```python
# Create tools for different directories
docs_tool = DirectorySearchTool(
directory="/path/to/docs"
)
reports_tool = DirectorySearchTool(
directory="/path/to/reports"
)
# Create agent with multiple tools
analyst = Agent(
role='Content Analyst',
goal='Cross-reference multiple document collections',
tools=[docs_tool, reports_tool]
)
```
### Error Handling Example
```python
try:
dir_tool = DirectorySearchTool()
results = dir_tool.run(
search_query="key concepts",
directory="/path/to/documents"
)
print(results)
except Exception as e:
print(f"Error processing directory: {str(e)}")
```
## Notes
- Inherits from RagTool
- Uses DirectoryLoader
- Supports recursive search
- Dynamic directory specification
- Efficient content retrieval
- Thread-safe operations
- Maintains search context
- Processes multiple file types
- Handles nested directories
- Memory-efficient processing


@@ -0,0 +1,224 @@
---
title: DOCXSearchTool
description: A tool for semantic search within DOCX documents using RAG capabilities
icon: file-text
---
## DOCXSearchTool
The DOCXSearchTool enables semantic search capabilities for Microsoft Word (DOCX) documents using Retrieval-Augmented Generation (RAG). It supports both fixed and dynamic document selection modes.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import DOCXSearchTool
# Method 1: Fixed document (specified at initialization)
fixed_tool = DOCXSearchTool(
docx="path/to/document.docx"
)
# Method 2: Dynamic document (specified at runtime)
dynamic_tool = DOCXSearchTool()
# Create an agent with the tool
researcher = Agent(
role='Document Researcher',
goal='Search and analyze document contents',
backstory='Expert at finding relevant information in documents.',
tools=[fixed_tool], # or [dynamic_tool]
verbose=True
)
```
## Input Schema
### Fixed Document Mode
```python
class FixedDOCXSearchToolSchema(BaseModel):
search_query: str = Field(
description="Mandatory search query you want to use to search the DOCX's content"
)
```
### Dynamic Document Mode
```python
class DOCXSearchToolSchema(BaseModel):
docx: str = Field(
description="Mandatory docx path you want to search"
)
search_query: str = Field(
description="Mandatory search query you want to use to search the DOCX's content"
)
```
## Function Signature
```python
def __init__(
self,
docx: Optional[str] = None,
**kwargs
):
"""
Initialize the DOCX search tool.
Args:
docx (Optional[str]): Path to DOCX file (optional for dynamic mode)
**kwargs: Additional arguments for RAG tool configuration
"""
def _run(
self,
search_query: str,
docx: Optional[str] = None,
**kwargs: Any
) -> str:
"""
Execute semantic search on document contents.
Args:
search_query (str): Query to search in the document
docx (Optional[str]): Document path (required for dynamic mode)
**kwargs: Additional arguments
Returns:
str: Relevant content from the document matching the query
"""
```
## Best Practices
1. Document Handling:
- Use absolute file paths
- Verify file existence
- Handle large documents
- Monitor memory usage
2. Query Optimization:
- Structure queries clearly
- Consider document size
- Handle formatting
- Monitor performance
3. Error Handling:
- Check file access
   - Validate file format (see the sketch after this list)
- Handle corrupted files
- Log issues
4. Mode Selection:
- Choose fixed mode for static documents
- Use dynamic mode for runtime selection
- Consider memory implications
- Manage document lifecycle
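
For example, a minimal validation sketch before handing a document to the tool:

```python
from pathlib import Path

from crewai_tools import DOCXSearchTool

doc_path = Path("reports/annual_report_2023.docx")

# Validate existence and extension up front
if not doc_path.is_file():
    raise FileNotFoundError(doc_path)
if doc_path.suffix.lower() != ".docx":
    raise ValueError(f"Expected a .docx file, got: {doc_path.suffix}")

docx_tool = DOCXSearchTool(docx=str(doc_path))
```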
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import DOCXSearchTool

# Initialize tool
docx_tool = DOCXSearchTool(
    docx="reports/annual_report_2023.docx"
)

# Create agent
researcher = Agent(
    role='Document Analyst',
    goal='Extract insights from annual report',
    backstory='Expert at analyzing business documents.',
    tools=[docx_tool]
)

# Define task
analysis_task = Task(
    description="""Find all mentions of revenue growth
    and market expansion.""",
    expected_output="Relevant excerpts on revenue growth and market expansion",
    agent=researcher
)

# Create crew
crew = Crew(
    agents=[researcher],
    tasks=[analysis_task]
)

# Execute
result = crew.kickoff()
```
## Advanced Usage
### Multiple Document Analysis
```python
# Create tools for different documents
report_tool = DOCXSearchTool(
docx="reports/annual_report.docx"
)
policy_tool = DOCXSearchTool(
docx="policies/compliance.docx"
)
# Create agent with multiple tools
analyst = Agent(
role='Document Analyst',
goal='Cross-reference reports and policies',
tools=[report_tool, policy_tool]
)
```
### Dynamic Document Loading
```python
# Initialize dynamic tool
dynamic_tool = DOCXSearchTool()
# Use with different documents
result1 = dynamic_tool.run(
docx="document1.docx",
search_query="project timeline"
)
result2 = dynamic_tool.run(
docx="document2.docx",
search_query="budget allocation"
)
```
### Error Handling Example
```python
try:
docx_tool = DOCXSearchTool(
docx="reports/quarterly_report.docx"
)
results = docx_tool.run(
search_query="Q3 performance metrics"
)
print(results)
except FileNotFoundError as e:
print(f"Document not found: {str(e)}")
except Exception as e:
print(f"Error processing document: {str(e)}")
```
## Notes
- Inherits from RagTool
- Supports fixed/dynamic modes
- Document path validation
- Memory management
- Performance optimization
- Error handling
- Search capabilities
- Content extraction
- Format handling
- Security features


@@ -0,0 +1,193 @@
---
title: FileReadTool
description: A tool for reading file contents with flexible path specification
icon: file-text
---
## FileReadTool
The FileReadTool provides functionality to read file contents with support for both fixed and dynamic file path specification. It includes comprehensive error handling for common file operations and maintains clear descriptions of its configured state.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import FileReadTool
# Method 1: Initialize with specific file
reader = FileReadTool(file_path="/path/to/data.txt")
# Method 2: Initialize without file (specify at runtime)
flexible_reader = FileReadTool()
# Create an agent with the tool
file_processor = Agent(
role='File Processor',
goal='Read and process file contents',
backstory='Expert at handling file operations and content processing.',
tools=[reader],
verbose=True
)
```
## Input Schema
```python
class FileReadToolSchema(BaseModel):
file_path: str = Field(
description="Mandatory file full path to read the file"
)
```
## Function Signature
```python
def __init__(
self,
file_path: Optional[str] = None,
**kwargs: Any
) -> None:
"""
Initialize the file read tool.
Args:
file_path (Optional[str]): Path to file to read (optional)
**kwargs: Additional arguments passed to BaseTool
"""
def _run(
self,
**kwargs: Any
) -> str:
"""
Read and return file contents.
Args:
file_path (str, optional): Override default file path
**kwargs: Additional arguments
Returns:
str: File contents or error message
"""
```
## Best Practices
1. File Path Management:
- Use absolute paths for reliability
- Verify file existence before operations
- Handle path resolution properly
2. Error Handling:
- Check for file existence
- Handle permission issues
- Manage encoding errors
- Process file access failures
3. Performance Considerations:
- Close files after reading
- Handle large files appropriately
- Consider memory constraints
4. Security Practices:
- Validate file paths
- Check file permissions
- Avoid path traversal issues
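A minimal sketch of the security practices above, confining reads to a known base directory (`BASE_DIR` and the helper are illustrative assumptions):
```python
import os

from crewai_tools import FileReadTool

BASE_DIR = "/srv/app/data"  # illustrative allowed root

def safe_reader(requested_path: str) -> FileReadTool:
    """Resolve the path and reject anything outside BASE_DIR."""
    resolved = os.path.realpath(os.path.join(BASE_DIR, requested_path))
    if not resolved.startswith(BASE_DIR + os.sep):
        raise ValueError(f"Path escapes base directory: {requested_path}")
    if not os.path.isfile(resolved):
        raise FileNotFoundError(resolved)
    return FileReadTool(file_path=resolved)

reader = safe_reader("config.txt")
```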
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import FileReadTool
# Initialize tool with specific file
reader = FileReadTool(file_path="/path/to/config.txt")
# Create agent
processor = Agent(
role='File Processor',
goal='Process configuration files',
backstory='Expert at reading and analyzing configuration files.',
tools=[reader]
)
# Define task
read_task = Task(
description="""Read and analyze the contents of
the configuration file.""",
agent=processor
)
# The tool will use the default file path
# Create crew
crew = Crew(
agents=[processor],
tasks=[read_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Dynamic File Selection
```python
# Initialize without file path
flexible_reader = FileReadTool()
# Read different files
config_content = flexible_reader.run(
file_path="/path/to/config.txt"
)
log_content = flexible_reader.run(
file_path="/path/to/logs.txt"
)
```
### Multiple File Processing
```python
# Create tools for different files
config_reader = FileReadTool(file_path="/path/to/config.txt")
log_reader = FileReadTool(file_path="/path/to/logs.txt")
# Create agent with multiple tools
processor = Agent(
role='File Analyst',
goal='Analyze multiple file types',
tools=[config_reader, log_reader]
)
```
### Error Handling Example
```python
try:
reader = FileReadTool()
content = reader.run(
file_path="/path/to/file.txt"
)
print(content)
except Exception as e:
print(f"Error reading file: {str(e)}")
```
## Notes
- Inherits from BaseTool
- Supports fixed or dynamic file paths
- Comprehensive error handling
- Thread-safe operations
- Clear error messages
- Flexible path specification
- Maintains tool description
- Handles common file errors
- Supports various file types
- Memory-efficient operations

View File

@@ -0,0 +1,141 @@
---
title: FileWriterTool
description: A tool for writing content to files with support for various file formats.
icon: file-pen
---
## FileWriterTool
The FileWriterTool provides agents with the capability to write content to files, supporting various file formats and ensuring proper file handling.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent, Task
from crewai_tools import FileWriterTool
# Initialize the tool
file_writer = FileWriterTool()
# Create an agent with the tool
writer_agent = Agent(
role='Content Writer',
goal='Write and save content to files',
backstory='Expert at creating and managing file content.',
tools=[file_writer],
verbose=True
)
# Use in a task
task = Task(
description='Write a report and save it to report.txt',
agent=writer_agent
)
```
## Tool Attributes
| Attribute | Type | Description |
| :-------- | :--- | :---------- |
| name | str | "File Writer Tool" |
| description | str | "A tool that writes content to a file." |
## Input Schema
```python
class FileWriterToolInput(BaseModel):
filename: str # Name of the file to write
directory: str = "./" # Optional directory path, defaults to current directory
overwrite: str = "False" # Whether to overwrite existing file ("True"/"False")
content: str # Content to write to the file
```
## Function Signature
```python
def _run(self, **kwargs: Any) -> str:
"""
Write content to a file with specified parameters.
Args:
filename (str): Name of the file to write
content (str): Content to write to the file
directory (str, optional): Directory path. Defaults to "./".
overwrite (str, optional): Whether to overwrite existing file. Defaults to "False".
Returns:
str: Success message with filepath or error message
"""
```
## Error Handling
The tool includes error handling for common file operations:
- FileExistsError: When file exists and overwrite is not allowed
- KeyError: When required parameters are missing
- Directory Creation: Automatically creates directories if they don't exist
- General Exceptions: Catches and reports any other file operation errors
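A sketch of handling the overwrite conflict from the caller's side, assuming the tool reports the conflict in its returned message rather than raising (the `"exists"` substring check is an assumption for illustration):
```python
from crewai_tools import FileWriterTool

writer = FileWriterTool()

# First attempt: do not overwrite an existing file.
result = writer.run(
    filename="report.txt",
    directory="./output",  # created automatically if missing
    overwrite="False",
    content="Quarterly summary...",
)
print(result)  # success message with filepath, or an error message

# Retry with overwrite enabled if the file already existed.
if "exists" in result.lower():  # assumption: the error text mentions the conflict
    result = writer.run(
        filename="report.txt",
        directory="./output",
        overwrite="True",
        content="Quarterly summary...",
    )
```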
## Best Practices
1. Always provide absolute file paths
2. Ensure proper file permissions
3. Handle potential errors in your agent prompts
4. Verify file contents after writing
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import FileWriterTool
# Initialize tool
file_writer = FileWriterTool()
# Create agent
writer = Agent(
role='Technical Writer',
goal='Create and save technical documentation',
backstory='Expert technical writer with experience in documentation.',
tools=[file_writer]
)
# Define task
writing_task = Task(
description="""Write a technical guide about Python best practices and save it
to the docs directory. The file should be named 'python_guide.md'.
Include sections on code style, documentation, and testing.
If a file already exists, overwrite it.""",
agent=writer
)
# The agent can use the tool with these parameters:
# {
# "filename": "python_guide.md",
# "directory": "docs",
# "overwrite": "True",
# "content": "# Python Best Practices\n\n## Code Style\n..."
# }
# Create crew
crew = Crew(
agents=[writer],
tasks=[writing_task]
)
# Execute
result = crew.kickoff()
```
## Notes
- The tool automatically creates directories in the file path if they don't exist
- Supports various file formats (txt, md, json, etc.)
- Returns descriptive error messages for better debugging
- Thread-safe file operations

View File

@@ -0,0 +1,181 @@
---
title: FirecrawlCrawlWebsiteTool
description: A web crawling tool powered by Firecrawl API for comprehensive website content extraction
icon: spider-web
---
## FirecrawlCrawlWebsiteTool
The FirecrawlCrawlWebsiteTool provides website crawling capabilities using the Firecrawl API. It allows for customizable crawling with options for polling intervals, idempotency, and URL parameters.
## Installation
```bash
pip install 'crewai[tools]'
pip install firecrawl-py # Required dependency
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import FirecrawlCrawlWebsiteTool
# Method 1: Using environment variable
# export FIRECRAWL_API_KEY='your-api-key'
crawler = FirecrawlCrawlWebsiteTool()
# Method 2: Providing API key directly
crawler = FirecrawlCrawlWebsiteTool(
api_key="your-firecrawl-api-key"
)
# Method 3: With custom configuration
crawler = FirecrawlCrawlWebsiteTool(
api_key="your-firecrawl-api-key",
url="https://example.com", # Base URL
poll_interval=5, # Custom polling interval
idempotency_key="unique-key"
)
# Create an agent with the tool
researcher = Agent(
role='Web Crawler',
goal='Extract and analyze website content',
backstory='Expert at crawling and analyzing web content.',
tools=[crawler],
verbose=True
)
```
## Input Schema
```python
class FirecrawlCrawlWebsiteToolSchema(BaseModel):
url: str = Field(description="Website URL")
```
## Function Signature
```python
def __init__(
self,
api_key: Optional[str] = None,
url: Optional[str] = None,
params: Optional[Dict[str, Any]] = None,
poll_interval: Optional[int] = 2,
idempotency_key: Optional[str] = None,
**kwargs
):
"""
Initialize the website crawling tool.
Args:
api_key (Optional[str]): Firecrawl API key. If not provided, checks FIRECRAWL_API_KEY env var
url (Optional[str]): Base URL to crawl. Can be overridden in _run
params (Optional[Dict[str, Any]]): Additional parameters for FirecrawlApp
poll_interval (Optional[int]): Poll interval for FirecrawlApp
idempotency_key (Optional[str]): Idempotency key for FirecrawlApp
**kwargs: Additional arguments for tool creation
"""
def _run(self, url: str) -> Any:
"""
Crawl a website using Firecrawl.
Args:
url (str): Website URL to crawl (overrides constructor URL if provided)
Returns:
Any: Crawled website content from Firecrawl API
"""
```
## Best Practices
1. Set up API authentication:
- Use environment variable: `export FIRECRAWL_API_KEY='your-api-key'`
- Or provide directly in constructor
2. Configure crawling parameters:
- Set appropriate poll intervals
- Use idempotency keys for retry safety
- Customize URL parameters as needed
3. Handle rate limits and quotas
4. Consider website robots.txt policies
5. Handle potential crawling errors in agent prompts
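For item 1, a short sketch that reads the key from the environment and fails fast when it is missing (the idempotency key value is illustrative):
```python
import os

from crewai_tools import FirecrawlCrawlWebsiteTool

api_key = os.getenv("FIRECRAWL_API_KEY")
if not api_key:
    raise RuntimeError("Set FIRECRAWL_API_KEY before creating the crawler")

crawler = FirecrawlCrawlWebsiteTool(
    api_key=api_key,
    poll_interval=5,                  # check crawl status every 5 seconds
    idempotency_key="nightly-crawl",  # illustrative key for retry safety
)
```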
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import FirecrawlCrawlWebsiteTool
# Initialize crawler with configuration
crawler = FirecrawlCrawlWebsiteTool(
api_key="your-firecrawl-api-key",
poll_interval=5,
params={
"max_depth": 3,
"follow_links": True
}
)
# Create agent
web_analyst = Agent(
role='Web Content Analyst',
goal='Extract and analyze website content comprehensively',
backstory='Expert at web crawling and content analysis.',
tools=[crawler]
)
# Define task
crawl_task = Task(
description="""Crawl the documentation website at docs.example.com
and extract all API-related content.""",
agent=web_analyst
)
# The agent will use:
# {
# "url": "https://docs.example.com"
# }
# Create crew
crew = Crew(
agents=[web_analyst],
tasks=[crawl_task]
)
# Execute
result = crew.kickoff()
```
## Configuration Options
### URL Parameters
```python
params = {
"max_depth": 3, # Maximum crawl depth
"follow_links": True, # Follow internal links
"exclude_patterns": [], # URL patterns to exclude
"include_patterns": [] # URL patterns to include
}
```
### Polling Configuration
```python
crawler = FirecrawlCrawlWebsiteTool(
poll_interval=5, # Poll every 5 seconds
idempotency_key="unique-key-123" # For retry safety
)
```
## Notes
- Requires valid Firecrawl API key
- Supports both environment variable and direct API key configuration
- Configurable polling intervals for crawl status
- Idempotency support for safe retries
- Thread-safe operations
- Customizable crawling parameters
- Respects robots.txt by default

View File

@@ -0,0 +1,154 @@
---
title: FirecrawlSearchTool
description: A web search tool powered by Firecrawl API for comprehensive web search capabilities
icon: magnifying-glass-chart
---
## FirecrawlSearchTool
The FirecrawlSearchTool provides web search capabilities using the Firecrawl API. It allows for customizable search queries with options for result formatting and search parameters.
## Installation
```bash
pip install 'crewai[tools]'
pip install firecrawl-py # Required dependency
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import FirecrawlSearchTool
# Initialize the tool with your API key
search_tool = FirecrawlSearchTool(api_key="your-firecrawl-api-key")
# Create an agent with the tool
researcher = Agent(
role='Web Researcher',
goal='Find relevant information across the web',
backstory='Expert at web research and information gathering.',
tools=[search_tool],
verbose=True
)
```
## Input Schema
```python
class FirecrawlSearchToolSchema(BaseModel):
query: str = Field(description="Search query")
page_options: Optional[Dict[str, Any]] = Field(
default=None,
description="Options for result formatting"
)
search_options: Optional[Dict[str, Any]] = Field(
default=None,
description="Options for searching"
)
```
## Function Signature
```python
def __init__(self, api_key: Optional[str] = None, **kwargs):
"""
Initialize the Firecrawl search tool.
Args:
api_key (Optional[str]): Firecrawl API key
**kwargs: Additional arguments for tool creation
"""
def _run(
self,
query: str,
page_options: Optional[Dict[str, Any]] = None,
result_options: Optional[Dict[str, Any]] = None,
) -> Any:
"""
Perform a web search using Firecrawl.
Args:
query (str): Search query string
page_options (Optional[Dict[str, Any]]): Options for result formatting
result_options (Optional[Dict[str, Any]]): Options for search results
Returns:
Any: Search results from Firecrawl API
"""
```
## Best Practices
1. Always provide a valid API key
2. Use specific, focused search queries
3. Customize page and result options for better results
4. Handle potential API errors in agent prompts
5. Consider rate limits and usage quotas
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import FirecrawlSearchTool
# Initialize tool with API key
search_tool = FirecrawlSearchTool(api_key="your-firecrawl-api-key")
# Create agent
researcher = Agent(
role='Market Researcher',
goal='Research market trends and competitor analysis',
backstory='Expert market analyst with deep research skills.',
tools=[search_tool]
)
# Define task
research_task = Task(
description="""Research the latest developments in electric vehicles,
focusing on market leaders and emerging technologies. Format the results
in a structured way.""",
agent=researcher
)
# The agent will use:
# {
# "query": "electric vehicle market leaders emerging technologies",
# "page_options": {
# "format": "structured",
# "maxLength": 1000
# },
# "result_options": {
# "limit": 5,
# "sortBy": "relevance"
# }
# }
# Create crew
crew = Crew(
agents=[researcher],
tasks=[research_task]
)
# Execute
result = crew.kickoff()
```
## Error Handling
The tool includes error handling for:
- Missing API key
- Missing firecrawl-py package
- API request failures
- Invalid options parameters
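A hedged sketch that guards against the first two cases before searching; the broad `except` covers the remaining cases, and exact exception types may differ:
```python
import os

try:
    import firecrawl  # noqa: F401  (verifies firecrawl-py is installed)
except ImportError as e:
    raise SystemExit("Install the dependency first: pip install firecrawl-py") from e

from crewai_tools import FirecrawlSearchTool

api_key = os.getenv("FIRECRAWL_API_KEY")
if not api_key:
    raise SystemExit("Missing FIRECRAWL_API_KEY")

search_tool = FirecrawlSearchTool(api_key=api_key)
try:
    results = search_tool.run(query="electric vehicle market leaders")
    print(results)
except Exception as e:  # API request failures, invalid options, etc.
    print(f"Firecrawl search failed: {e}")
```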
## Notes
- Requires valid Firecrawl API key
- Supports customizable search parameters
- Provides structured web search results
- Thread-safe operations
- Efficient for large-scale web searches
- Handles rate limiting automatically

View File

@@ -0,0 +1,233 @@
---
title: GithubSearchTool
description: A tool for semantic search within GitHub repositories using RAG capabilities
icon: github
---
## GithubSearchTool
The GithubSearchTool enables semantic search capabilities for GitHub repositories using Retrieval-Augmented Generation (RAG). It processes various content types including code, repository information, pull requests, and issues, allowing natural language queries across repository content.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import GithubSearchTool
# Method 1: Initialize with specific repository
github_tool = GithubSearchTool(
github_repo="owner/repo",
gh_token="your_github_token",
content_types=["code", "pr", "issue"]
)
# Method 2: Initialize without repository (specify at runtime)
flexible_github_tool = GithubSearchTool(
gh_token="your_github_token",
content_types=["code", "repo"]
)
# Create an agent with the tool
researcher = Agent(
role='GitHub Researcher',
goal='Search and analyze repository contents',
backstory='Expert at finding relevant information in GitHub repositories.',
tools=[github_tool],
verbose=True
)
```
## Input Schema
### Fixed Repository Schema (when repo provided during initialization)
```python
class FixedGithubSearchToolSchema(BaseModel):
search_query: str = Field(
description="Mandatory search query you want to use to search the github repo's content"
)
```
### Flexible Repository Schema (when repo provided at runtime)
```python
class GithubSearchToolSchema(FixedGithubSearchToolSchema):
github_repo: str = Field(
description="Mandatory github you want to search"
)
content_types: List[str] = Field(
description="Mandatory content types you want to be included search, options: [code, repo, pr, issue]"
)
```
## Function Signature
```python
def __init__(
self,
    gh_token: str,
    content_types: List[str],
    github_repo: Optional[str] = None,
**kwargs
):
"""
Initialize the GitHub search tool.
Args:
github_repo (Optional[str]): Repository to search (optional)
gh_token (str): GitHub authentication token
content_types (List[str]): Content types to search
**kwargs: Additional arguments for RAG tool configuration
"""
def _run(
self,
search_query: str,
**kwargs: Any
) -> str:
"""
Execute semantic search on repository contents.
Args:
search_query (str): Query to search in the repository
**kwargs: Additional arguments including github_repo and content_types if not initialized
Returns:
str: Relevant content from the repository matching the query
"""
```
## Best Practices
1. Authentication:
- Secure token management
- Use environment variables
- Handle token expiration
2. Search Optimization:
- Target specific content types
- Use focused queries
- Consider rate limits
3. Performance Considerations:
- Pre-initialize for repeated searches
- Handle large repositories
- Monitor API usage
4. Error Handling:
- Verify repository access
- Handle API limits
- Manage authentication errors
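A short sketch of the authentication practice above, reading the token from an environment variable instead of hard-coding it (`GITHUB_TOKEN` is a conventional name, not one the tool requires):
```python
import os

from crewai_tools import GithubSearchTool

gh_token = os.getenv("GITHUB_TOKEN")  # e.g. export GITHUB_TOKEN=ghp_...
if not gh_token:
    raise RuntimeError("Set GITHUB_TOKEN before initializing the tool")

github_tool = GithubSearchTool(
    github_repo="owner/repo",
    gh_token=gh_token,
    content_types=["code", "issue"],
)
```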
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import GithubSearchTool
# Initialize tool with specific repository
github_tool = GithubSearchTool(
github_repo="owner/repo",
gh_token="your_github_token",
content_types=["code", "pr", "issue"]
)
# Create agent
researcher = Agent(
role='GitHub Researcher',
goal='Extract insights from repository content',
backstory='Expert at analyzing GitHub repositories.',
tools=[github_tool]
)
# Define task
research_task = Task(
description="""Find all implementations of
machine learning algorithms in the codebase.""",
agent=researcher
)
# The tool will use:
# {
# "search_query": "machine learning implementation"
# }
# Create crew
crew = Crew(
agents=[researcher],
tasks=[research_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Dynamic Repository Selection
```python
# Initialize without repository
flexible_tool = GithubSearchTool(
gh_token="your_github_token",
content_types=["code", "repo"]
)
# Search different repositories
backend_results = flexible_tool.run(
search_query="authentication implementation",
github_repo="owner/backend-repo"
)
frontend_results = flexible_tool.run(
search_query="component architecture",
github_repo="owner/frontend-repo"
)
```
### Multiple Content Type Analysis
```python
# Create tool with multiple content types
multi_tool = GithubSearchTool(
github_repo="owner/repo",
gh_token="your_github_token",
content_types=["code", "pr", "issue", "repo"]
)
# Search across all content types
results = multi_tool.run(
search_query="feature implementation status"
)
```
### Error Handling Example
```python
try:
github_tool = GithubSearchTool(
gh_token="your_github_token",
content_types=["code"]
)
results = github_tool.run(
search_query="api endpoints",
github_repo="owner/repo"
)
print(results)
except Exception as e:
print(f"Error searching repository: {str(e)}")
```
## Notes
- Inherits from RagTool
- Uses GithubLoader
- Requires authentication
- Supports multiple content types
- Dynamic repository specification
- Efficient content retrieval
- Thread-safe operations
- Maintains search context
- Handles API rate limits
- Memory-efficient processing

View File

@@ -0,0 +1,220 @@
---
title: JinaScrapeWebsiteTool
description: A tool for scraping website content using Jina.ai's reader service with markdown output
icon: globe
---
## JinaScrapeWebsiteTool
The JinaScrapeWebsiteTool provides website content scraping capabilities using Jina.ai's reader service. It converts web content into clean markdown format and supports both fixed and dynamic URL modes with optional authentication.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import JinaScrapeWebsiteTool
# Method 1: Fixed URL (specified at initialization)
fixed_tool = JinaScrapeWebsiteTool(
website_url="https://example.com",
api_key="your-jina-api-key" # Optional
)
# Method 2: Dynamic URL (specified at runtime)
dynamic_tool = JinaScrapeWebsiteTool(
api_key="your-jina-api-key" # Optional
)
# Create an agent with the tool
researcher = Agent(
role='Web Content Researcher',
goal='Extract and analyze website content',
backstory='Expert at gathering and processing web information.',
tools=[fixed_tool], # or [dynamic_tool]
verbose=True
)
```
## Input Schema
```python
class JinaScrapeWebsiteToolInput(BaseModel):
website_url: str = Field(
description="Mandatory website url to read the file"
)
```
## Function Signature
```python
def __init__(
self,
website_url: Optional[str] = None,
api_key: Optional[str] = None,
custom_headers: Optional[dict] = None,
**kwargs
):
"""
Initialize the website scraping tool.
Args:
website_url (Optional[str]): URL to scrape (optional for dynamic mode)
api_key (Optional[str]): Jina.ai API key for authentication
custom_headers (Optional[dict]): Custom HTTP headers
**kwargs: Additional arguments for base tool
"""
def _run(
self,
website_url: Optional[str] = None
) -> str:
"""
Execute website scraping.
Args:
website_url (Optional[str]): URL to scrape (required for dynamic mode)
Returns:
str: Markdown-formatted website content
"""
```
## Best Practices
1. URL Handling:
- Use complete URLs
- Validate URL format
- Handle redirects
- Monitor timeouts
2. Authentication:
- Secure API key storage
- Use environment variables
- Manage headers properly
- Handle auth errors
3. Content Processing:
- Handle large pages
- Process markdown output
- Manage encoding
- Handle errors
4. Mode Selection:
- Choose fixed mode for static sites
- Use dynamic mode for variable URLs
- Consider caching
- Manage timeouts
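Because the tool returns markdown, a small sketch of the content-processing step (item 3); the heading-based splitting is a crude illustration, not library behavior:
```python
from crewai_tools import JinaScrapeWebsiteTool

scraper = JinaScrapeWebsiteTool()  # pass api_key=... for authenticated access
markdown = scraper.run(website_url="https://example.com")

# Crude split on top-level markdown headings for downstream processing.
sections = [s.strip() for s in markdown.split("\n# ") if s.strip()]
for section in sections:
    title = section.splitlines()[0]
    print(f"Section: {title} ({len(section)} chars)")
```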
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import JinaScrapeWebsiteTool
import os
# Initialize tool with API key
scraper_tool = JinaScrapeWebsiteTool(
api_key=os.getenv('JINA_API_KEY'),
custom_headers={
'User-Agent': 'CrewAI Bot 1.0'
}
)
# Create agent
researcher = Agent(
role='Web Content Analyst',
goal='Extract and analyze website content',
backstory='Expert at processing web information.',
tools=[scraper_tool]
)
# Define task
analysis_task = Task(
description="""Analyze the content of
https://example.com/blog for key insights.""",
agent=researcher
)
# Create crew
crew = Crew(
agents=[researcher],
tasks=[analysis_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Multiple Site Analysis
```python
# Initialize tool
scraper = JinaScrapeWebsiteTool(
api_key=os.getenv('JINA_API_KEY')
)
# Analyze multiple sites
results = []
sites = [
"https://site1.com",
"https://site2.com",
"https://site3.com"
]
for site in sites:
content = scraper.run(
website_url=site
)
results.append(content)
```
### Custom Headers Configuration
```python
# Initialize with custom headers
tool = JinaScrapeWebsiteTool(
custom_headers={
'User-Agent': 'Custom Bot 1.0',
'Accept-Language': 'en-US,en;q=0.9',
'Accept': 'text/html,application/xhtml+xml'
}
)
# Use the tool
content = tool.run(
website_url="https://example.com"
)
```
### Error Handling Example
```python
import requests

try:
scraper = JinaScrapeWebsiteTool()
content = scraper.run(
website_url="https://example.com"
)
print(content)
except requests.exceptions.RequestException as e:
print(f"Error accessing website: {str(e)}")
except Exception as e:
print(f"Error processing content: {str(e)}")
```
## Notes
- Uses Jina.ai reader service
- Markdown output format
- API key authentication
- Custom headers support
- Error handling
- Timeout management
- Content processing
- URL validation
- Redirect handling
- Response formatting

View File

@@ -0,0 +1,224 @@
---
title: JSONSearchTool
description: A tool for semantic search within JSON files using RAG capabilities
icon: braces
---
## JSONSearchTool
The JSONSearchTool enables semantic search capabilities for JSON files using Retrieval-Augmented Generation (RAG). It supports both fixed and dynamic file path modes, allowing flexible usage patterns.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import JSONSearchTool
# Method 1: Fixed path (specified at initialization)
fixed_tool = JSONSearchTool(
json_path="path/to/data.json"
)
# Method 2: Dynamic path (specified at runtime)
dynamic_tool = JSONSearchTool()
# Create an agent with the tool
researcher = Agent(
role='JSON Data Researcher',
goal='Search and analyze JSON data',
backstory='Expert at finding relevant information in JSON files.',
tools=[fixed_tool], # or [dynamic_tool]
verbose=True
)
```
## Input Schema
### Fixed Path Mode
```python
class FixedJSONSearchToolSchema(BaseModel):
search_query: str = Field(
description="Mandatory search query you want to use to search the JSON's content"
)
```
### Dynamic Path Mode
```python
class JSONSearchToolSchema(BaseModel):
json_path: str = Field(
description="Mandatory json path you want to search"
)
search_query: str = Field(
description="Mandatory search query you want to use to search the JSON's content"
)
```
## Function Signature
```python
def __init__(
self,
json_path: Optional[str] = None,
**kwargs
):
"""
Initialize the JSON search tool.
Args:
json_path (Optional[str]): Path to JSON file (optional for dynamic mode)
**kwargs: Additional arguments for RAG tool configuration
"""
def _run(
self,
search_query: str,
**kwargs: Any
) -> str:
"""
Execute semantic search on JSON contents.
Args:
search_query (str): Query to search in the JSON
**kwargs: Additional arguments
Returns:
str: Relevant content from the JSON matching the query
"""
```
## Best Practices
1. File Handling:
- Use absolute file paths
- Verify file existence
- Handle large JSON files
- Monitor memory usage
2. Query Optimization:
- Structure queries clearly
- Consider JSON structure
- Handle nested data
- Monitor performance
3. Error Handling:
- Check file access
- Validate JSON format
- Handle malformed JSON
- Log issues
4. Mode Selection:
- Choose fixed mode for static files
- Use dynamic mode for runtime selection
- Consider caching
- Manage file lifecycle
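A minimal sketch of the file-handling and validation steps above, confirming the file parses as JSON before building the tool (the helper is illustrative):
```python
import json
import os

from crewai_tools import JSONSearchTool

def build_json_tool(path: str) -> JSONSearchTool:
    """Check existence and well-formedness before indexing."""
    abs_path = os.path.abspath(path)
    if not os.path.isfile(abs_path):
        raise FileNotFoundError(abs_path)
    with open(abs_path, "r", encoding="utf-8") as f:
        json.load(f)  # raises json.JSONDecodeError on malformed JSON
    return JSONSearchTool(json_path=abs_path)

json_tool = build_json_tool("data/config.json")
```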
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import JSONSearchTool
# Initialize tool
json_tool = JSONSearchTool(
json_path="data/config.json"
)
# Create agent
researcher = Agent(
role='JSON Data Analyst',
goal='Extract insights from JSON configuration',
backstory='Expert at analyzing JSON data structures.',
tools=[json_tool]
)
# Define task
analysis_task = Task(
description="""Find all configuration settings
related to security.""",
agent=researcher
)
# Create crew
crew = Crew(
agents=[researcher],
tasks=[analysis_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Multiple File Analysis
```python
# Create tools for different JSON files
config_tool = JSONSearchTool(
json_path="config/settings.json"
)
data_tool = JSONSearchTool(
json_path="data/records.json"
)
# Create agent with multiple tools
analyst = Agent(
role='JSON Data Analyst',
goal='Cross-reference configuration and data',
tools=[config_tool, data_tool]
)
```
### Dynamic File Loading
```python
# Initialize dynamic tool
dynamic_tool = JSONSearchTool()
# Use with different JSON files
result1 = dynamic_tool.run(
json_path="file1.json",
search_query="security settings"
)
result2 = dynamic_tool.run(
json_path="file2.json",
search_query="user preferences"
)
```
### Error Handling Example
```python
try:
json_tool = JSONSearchTool(
json_path="config/settings.json"
)
results = json_tool.run(
search_query="encryption settings"
)
print(results)
except FileNotFoundError as e:
print(f"JSON file not found: {str(e)}")
except ValueError as e:
print(f"Invalid JSON format: {str(e)}")
except Exception as e:
print(f"Error processing JSON: {str(e)}")
```
## Notes
- Inherits from RagTool
- Supports fixed/dynamic modes
- JSON path validation
- Memory management
- Performance optimization
- Error handling
- Search capabilities
- Content extraction
- Format validation
- Security features

View File

@@ -0,0 +1,184 @@
---
title: LinkupSearchTool
description: A search tool powered by Linkup API for retrieving contextual information
icon: search
---
## LinkupSearchTool
The LinkupSearchTool provides search capabilities using the Linkup API. It allows for customizable search depth and output formatting, returning structured results with contextual information.
## Installation
```bash
pip install 'crewai[tools]'
pip install linkup # Required dependency
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import LinkupSearchTool
# Initialize the tool with your API key
search_tool = LinkupSearchTool(api_key="your-linkup-api-key")
# Create an agent with the tool
researcher = Agent(
role='Information Researcher',
goal='Find relevant contextual information',
backstory='Expert at retrieving and analyzing contextual data.',
tools=[search_tool],
verbose=True
)
```
## Function Signature
```python
def __init__(self, api_key: str):
"""
Initialize the Linkup search tool.
Args:
api_key (str): Linkup API key for authentication
"""
def _run(
self,
query: str,
depth: str = "standard",
output_type: str = "searchResults"
) -> dict:
"""
Perform a search using the Linkup API.
Args:
query (str): The search query
depth (str): Search depth ("standard" by default)
output_type (str): Desired result type ("searchResults" by default)
Returns:
dict: {
"success": bool,
"results": List[Dict] | None,
"error": str | None
}
On success, results contains list of:
{
"name": str,
"url": str,
"content": str
}
"""
```
## Best Practices
1. Always provide a valid API key
2. Use specific, focused search queries
3. Choose appropriate search depth based on needs
4. Handle potential API errors in agent prompts
5. Process structured results effectively
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import LinkupSearchTool
# Initialize tool with API key
search_tool = LinkupSearchTool(api_key="your-linkup-api-key")
# Create agent
researcher = Agent(
role='Context Researcher',
goal='Find detailed contextual information about topics',
backstory='Expert at discovering and analyzing contextual data.',
tools=[search_tool]
)
# Define task
research_task = Task(
description="""Research the latest developments in quantum computing,
focusing on recent breakthroughs and applications. Use standard depth
for comprehensive results.""",
agent=researcher
)
# The tool will use:
# query: "quantum computing recent breakthroughs applications"
# depth: "standard"
# output_type: "searchResults"
# Create crew
crew = Crew(
agents=[researcher],
tasks=[research_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Search Depth Options
```python
# Quick surface-level search
results = search_tool._run(
query="quantum computing",
depth="basic"
)
# Standard comprehensive search
results = search_tool._run(
query="quantum computing",
depth="standard"
)
# Deep detailed search
results = search_tool._run(
query="quantum computing",
depth="deep"
)
```
### Output Type Options
```python
# Default search results
results = search_tool._run(
query="quantum computing",
output_type="searchResults"
)
# Custom output format
results = search_tool._run(
query="quantum computing",
output_type="customFormat"
)
```
### Error Handling
```python
results = search_tool._run(query="quantum computing")
if results["success"]:
for result in results["results"]:
print(f"Name: {result['name']}")
print(f"URL: {result['url']}")
print(f"Content: {result['content']}")
else:
print(f"Error: {results['error']}")
```
## Notes
- Requires valid Linkup API key
- Returns structured search results
- Supports multiple search depths
- Configurable output formats
- Built-in error handling
- Thread-safe operations
- Efficient for contextual searches

View File

@@ -0,0 +1,192 @@
---
title: LlamaIndexTool
description: A wrapper tool for integrating LlamaIndex tools and query engines with CrewAI
icon: link
---
## LlamaIndexTool
The LlamaIndexTool serves as a bridge between CrewAI and LlamaIndex, allowing you to use LlamaIndex tools and query engines within your CrewAI agents. It supports both direct tool wrapping and query engine integration.
## Installation
```bash
pip install 'crewai[tools]'
pip install llama-index # Required for LlamaIndex integration
```
## Usage Examples
### Using with LlamaIndex Tools
```python
from crewai import Agent
from crewai_tools import LlamaIndexTool
from llama_index.core.tools import BaseTool as LlamaBaseTool, ToolMetadata
from pydantic import BaseModel, Field
# Create a LlamaIndex tool
class CustomLlamaSchema(BaseModel):
    query: str = Field(..., description="Query to process")

class CustomLlamaTool(LlamaBaseTool):
    # LlamaIndex tools expose name, description, and schema via ToolMetadata;
    # from_tool also requires an fn_schema to adapt the tool for CrewAI.
    metadata = ToolMetadata(
        name="Custom Llama Tool",
        description="A custom LlamaIndex tool",
        fn_schema=CustomLlamaSchema,
    )

    def __call__(self, query: str) -> str:
        return f"Processed: {query}"
# Wrap the LlamaIndex tool
llama_tool = CustomLlamaTool()
wrapped_tool = LlamaIndexTool.from_tool(llama_tool)
# Create an agent with the tool
agent = Agent(
role='LlamaIndex Integration Agent',
goal='Process queries using LlamaIndex tools',
backstory='Specialist in integrating LlamaIndex capabilities.',
tools=[wrapped_tool]
)
```
### Using with Query Engines
```python
from crewai import Agent
from crewai_tools import LlamaIndexTool
from llama_index.core import VectorStoreIndex, Document
# Create a query engine
documents = [Document(text="Sample document content")]
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
# Create the tool
query_tool = LlamaIndexTool.from_query_engine(
query_engine,
name="Document Search",
description="Search through indexed documents"
)
# Create an agent with the tool
agent = Agent(
role='Document Researcher',
goal='Find relevant information in documents',
backstory='Expert at searching through document collections.',
tools=[query_tool]
)
```
## Tool Creation Methods
### From LlamaIndex Tool
```python
@classmethod
def from_tool(cls, tool: Any, **kwargs: Any) -> "LlamaIndexTool":
"""
Create a CrewAI tool from a LlamaIndex tool.
Args:
tool (LlamaBaseTool): A LlamaIndex tool to wrap
**kwargs: Additional arguments for tool creation
Returns:
LlamaIndexTool: A CrewAI-compatible tool wrapper
Raises:
ValueError: If tool is not a LlamaBaseTool or lacks fn_schema
"""
```
### From Query Engine
```python
@classmethod
def from_query_engine(
cls,
query_engine: Any,
name: Optional[str] = None,
description: Optional[str] = None,
return_direct: bool = False,
**kwargs: Any
) -> "LlamaIndexTool":
"""
Create a CrewAI tool from a LlamaIndex query engine.
Args:
query_engine (BaseQueryEngine): The query engine to wrap
name (Optional[str]): Custom name for the tool
description (Optional[str]): Custom description
return_direct (bool): Whether to return query engine response directly
**kwargs: Additional arguments for tool creation
Returns:
LlamaIndexTool: A CrewAI-compatible tool wrapper
Raises:
ValueError: If query_engine is not a BaseQueryEngine
"""
```
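Many LlamaIndex tools are built with `FunctionTool.from_defaults`, which infers a name, description, and `fn_schema` from a plain function. Assuming `from_tool` accepts any such tool (the signature above only requires a LlamaIndex tool with an `fn_schema`), a wrapping sketch looks like this:
```python
from crewai_tools import LlamaIndexTool
from llama_index.core.tools import FunctionTool

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

# FunctionTool derives metadata (name, description, fn_schema) from the function.
llama_tool = FunctionTool.from_defaults(fn=multiply)

# Wrap it for use inside CrewAI agents.
wrapped = LlamaIndexTool.from_tool(llama_tool)
```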
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import LlamaIndexTool
from llama_index.core import VectorStoreIndex, Document
from llama_index.core.tools import QueryEngineTool
# Create documents and index
documents = [
Document(text="AI is a technology that simulates human intelligence."),
Document(text="Machine learning is a subset of AI.")
]
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
# Create the tool
search_tool = LlamaIndexTool.from_query_engine(
query_engine,
name="AI Knowledge Base",
description="Search through AI-related documents"
)
# Create agent
researcher = Agent(
role='AI Researcher',
goal='Research AI concepts',
backstory='Expert at finding and explaining AI concepts.',
tools=[search_tool]
)
# Define task
research_task = Task(
description="""Find and explain what AI is and its relationship
with machine learning.""",
agent=researcher
)
# The agent will use:
# {
# "query": "What is AI and how does it relate to machine learning?"
# }
# Create crew
crew = Crew(
agents=[researcher],
tasks=[research_task]
)
# Execute
result = crew.kickoff()
```
## Notes
- Automatically adapts LlamaIndex tool schemas for CrewAI compatibility
- Renames 'input' parameter to 'query' for better integration
- Supports both direct tool wrapping and query engine integration
- Handles schema validation and error resolution
- Thread-safe operations
- Compatible with all LlamaIndex tool types and query engines

View File

@@ -0,0 +1,209 @@
---
title: MDX Search Tool
description: A tool for semantic searching within MDX files using RAG capabilities
---
## MDX Search Tool
The MDX Search Tool enables semantic searching within MDX (Markdown with JSX) files using Retrieval-Augmented Generation (RAG) capabilities. It supports both fixed and dynamic file path modes, allowing you to specify the MDX file at initialization or runtime.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage
You can use the MDX Search Tool in two ways:
### 1. Fixed MDX File Path
Initialize the tool with a specific MDX file path:
```python
from crewai import Agent, Task
from crewai_tools import MDXSearchTool
# Initialize with a fixed MDX file
tool = MDXSearchTool(mdx="/path/to/your/document.mdx")
# Create an agent with the tool
agent = Agent(
role='Technical Writer',
goal='Search through MDX documentation',
backstory='I help find relevant information in MDX documentation',
tools=[tool]
)
# Use in a task
task = Task(
description="Find information about API endpoints in the documentation",
agent=agent
)
```
### 2. Dynamic MDX File Path
Initialize the tool without a specific file path to provide it at runtime:
```python
from crewai import Agent, Task
from crewai_tools import MDXSearchTool
# Initialize without a fixed MDX file
tool = MDXSearchTool()
# Create an agent with the tool
agent = Agent(
role='Documentation Analyst',
goal='Search through various MDX files',
backstory='I analyze different MDX documentation files',
tools=[tool]
)
# Use in a task with dynamic file path
task = Task(
description="Search for 'authentication' in the API documentation",
agent=agent,
context={
"mdx": "/path/to/api-docs.mdx",
"search_query": "authentication"
}
)
```
## Input Schema
### Fixed MDX File Mode
```python
class FixedMDXSearchToolSchema(BaseModel):
search_query: str # The search query to find content in the MDX file
```
### Dynamic MDX File Mode
```python
class MDXSearchToolSchema(BaseModel):
search_query: str # The search query to find content in the MDX file
mdx: str # The path to the MDX file to search
```
## Function Signatures
```python
def __init__(self, mdx: Optional[str] = None, **kwargs):
"""
Initialize the MDX Search Tool.
Args:
mdx (Optional[str]): Path to the MDX file (optional)
**kwargs: Additional arguments passed to RagTool
"""
def _run(
self,
search_query: str,
**kwargs: Any,
) -> str:
"""
Execute the search on the MDX file.
Args:
search_query (str): The query to search for
**kwargs: Additional arguments including 'mdx' for dynamic mode
Returns:
str: The search results from the MDX content
"""
```
## Best Practices
1. **File Path Handling**:
- Use absolute paths to avoid path resolution issues
- Verify file existence before searching
- Handle file permissions appropriately
2. **Query Optimization**:
- Use specific, focused search queries
- Consider context when formulating queries
- Break down complex searches into smaller queries
3. **Error Handling**:
- Handle file not found errors gracefully
- Manage permission issues appropriately
- Validate input parameters before processing
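For item 2, a sketch of breaking one broad question into several focused queries and collecting the results (the queries and path are illustrative):
```python
from crewai_tools import MDXSearchTool

tool = MDXSearchTool(mdx="/path/to/api-docs.mdx")

# Several focused queries instead of one broad one.
queries = [
    "authentication endpoints",
    "token refresh flow",
    "security best practices",
]
findings = {q: tool.run(search_query=q) for q in queries}
for query, result in findings.items():
    print(f"## {query}\n{result}\n")
```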
## Example Integration
Here's a complete example showing how to integrate the MDX Search Tool with CrewAI:
```python
from crewai import Agent, Task, Crew
from crewai_tools import MDXSearchTool
# Initialize the tool
mdx_tool = MDXSearchTool()
# Create an agent with the tool
researcher = Agent(
role='Documentation Researcher',
goal='Find and analyze information in MDX documentation',
backstory='I am an expert at finding relevant information in documentation',
tools=[mdx_tool]
)
# Create tasks
search_task = Task(
description="""
Search through the API documentation for information about authentication methods.
Look for:
1. Authentication endpoints
2. Security best practices
3. Token handling
Provide a comprehensive summary of the findings.
""",
agent=researcher,
context={
"mdx": "/path/to/api-docs.mdx",
"search_query": "authentication security tokens"
}
)
# Create and run the crew
crew = Crew(
agents=[researcher],
tasks=[search_task]
)
result = crew.kickoff()
```
## Error Handling
The tool handles various error scenarios:
1. **File Not Found**:
```python
try:
tool = MDXSearchTool(mdx="/path/to/nonexistent.mdx")
except FileNotFoundError:
print("MDX file not found. Please verify the file path.")
```
2. **Permission Issues**:
```python
try:
tool = MDXSearchTool(mdx="/restricted/docs.mdx")
except PermissionError:
print("Insufficient permissions to access the MDX file.")
```
3. **Invalid Content**:
```python
try:
result = tool._run(search_query="query", mdx="/path/to/invalid.mdx")
except ValueError:
print("Invalid MDX content or format.")
```

View File

@@ -0,0 +1,217 @@
---
title: MySQLSearchTool
description: A tool for semantic search within MySQL database tables using RAG capabilities
icon: database
---
## MySQLSearchTool
The MySQLSearchTool enables semantic search capabilities for MySQL database tables using Retrieval-Augmented Generation (RAG). It processes table contents and allows natural language queries to search through the data.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import MySQLSearchTool
# Initialize the tool
mysql_tool = MySQLSearchTool(
table_name="users",
db_uri="mysql://user:pass@localhost:3306/database"
)
# Create an agent with the tool
researcher = Agent(
role='Database Researcher',
goal='Search and analyze database contents',
backstory='Expert at finding relevant information in databases.',
tools=[mysql_tool],
verbose=True
)
```
## Input Schema
```python
class MySQLSearchToolSchema(BaseModel):
search_query: str = Field(
description="Mandatory semantic search query you want to use to search the database's content"
)
```
## Function Signature
```python
def __init__(
self,
table_name: str,
db_uri: str,
**kwargs
):
"""
Initialize the MySQL search tool.
Args:
table_name (str): Name of the table to search
db_uri (str): Database connection URI
**kwargs: Additional arguments for RAG tool configuration
"""
def _run(
self,
search_query: str,
**kwargs: Any
) -> str:
"""
Execute semantic search on table contents.
Args:
search_query (str): Query to search in the table
**kwargs: Additional arguments
Returns:
str: Relevant content from the table matching the query
"""
```
## Best Practices
1. Database Connection:
- Use secure connection URIs
- Handle authentication properly
- Manage connection lifecycle
- Monitor timeouts
2. Query Optimization:
- Structure queries clearly
- Consider table size
- Handle large datasets
- Monitor performance
3. Security Considerations:
- Protect credentials
- Use environment variables
- Limit table access
- Validate inputs
4. Error Handling:
- Handle connection errors
- Manage query timeouts
- Provide clear messages
- Log issues
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import MySQLSearchTool
# Initialize tool
mysql_tool = MySQLSearchTool(
table_name="customers",
db_uri="mysql://user:pass@localhost:3306/crm"
)
# Create agent
researcher = Agent(
role='Database Analyst',
goal='Extract customer insights from database',
backstory='Expert at analyzing customer data.',
tools=[mysql_tool]
)
# Define task
analysis_task = Task(
description="""Find all premium customers
with recent purchases.""",
agent=researcher
)
# The tool will use:
# {
# "search_query": "premium customers recent purchases"
# }
# Create crew
crew = Crew(
agents=[researcher],
tasks=[analysis_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Multiple Table Analysis
```python
# Create tools for different tables
customers_tool = MySQLSearchTool(
table_name="customers",
db_uri="mysql://user:pass@localhost:3306/crm"
)
orders_tool = MySQLSearchTool(
table_name="orders",
db_uri="mysql://user:pass@localhost:3306/crm"
)
# Create agent with multiple tools
analyst = Agent(
role='Data Analyst',
goal='Cross-reference customer and order data',
tools=[customers_tool, orders_tool]
)
```
### Secure Connection Configuration
```python
import os
# Use environment variables for credentials
db_uri = (
f"mysql://{os.getenv('DB_USER')}:{os.getenv('DB_PASS')}"
f"@{os.getenv('DB_HOST')}:{os.getenv('DB_PORT')}"
f"/{os.getenv('DB_NAME')}"
)
tool = MySQLSearchTool(
table_name="sensitive_data",
db_uri=db_uri
)
```
### Error Handling Example
```python
try:
mysql_tool = MySQLSearchTool(
table_name="users",
db_uri="mysql://user:pass@localhost:3306/app"
)
results = mysql_tool.run(
search_query="active users in California"
)
print(results)
except Exception as e:
print(f"Error querying database: {str(e)}")
```
## Notes
- Inherits from RagTool
- Uses MySQLLoader
- Requires database URI
- Table-specific search
- Semantic query support
- Connection management
- Error handling
- Performance optimization
- Security features
- Memory efficiency

View File

@@ -0,0 +1,208 @@
---
title: PDFSearchTool
description: A tool for semantic search within PDF documents using RAG capabilities
icon: file-search
---
## PDFSearchTool
The PDFSearchTool enables semantic search capabilities for PDF documents using Retrieval-Augmented Generation (RAG). It leverages embedchain's PDFEmbedchainAdapter for efficient PDF processing and supports both fixed and dynamic PDF path specification.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import PDFSearchTool
# Method 1: Initialize with specific PDF
pdf_tool = PDFSearchTool(pdf="/path/to/document.pdf")
# Method 2: Initialize without PDF (specify at runtime)
flexible_pdf_tool = PDFSearchTool()
# Create an agent with the tool
researcher = Agent(
role='PDF Researcher',
goal='Search and analyze PDF documents',
backstory='Expert at finding relevant information in PDFs.',
tools=[pdf_tool],
verbose=True
)
```
## Input Schema
### Fixed PDF Schema (when PDF path provided during initialization)
```python
class FixedPDFSearchToolSchema(BaseModel):
query: str = Field(
description="Mandatory query you want to use to search the PDF's content"
)
```
### Flexible PDF Schema (when PDF path provided at runtime)
```python
class PDFSearchToolSchema(FixedPDFSearchToolSchema):
pdf: str = Field(
description="Mandatory pdf path you want to search"
)
```
## Function Signature
```python
def __init__(
self,
pdf: Optional[str] = None,
**kwargs
):
"""
Initialize the PDF search tool.
Args:
pdf (Optional[str]): Path to PDF file (optional)
**kwargs: Additional arguments for RAG tool configuration
"""
def _run(
self,
query: str,
**kwargs: Any
) -> str:
"""
Execute semantic search on PDF content.
Args:
query (str): Search query for the PDF
**kwargs: Additional arguments including pdf path if not initialized
Returns:
str: Relevant content from the PDF matching the query
"""
```
## Best Practices
1. PDF File Handling:
- Use absolute paths for reliability
- Verify PDF file existence
- Handle large PDFs appropriately
2. Search Optimization:
- Use specific, focused queries
- Consider document structure
- Test with sample queries first
3. Performance Considerations:
- Pre-initialize with PDF for repeated searches
- Handle large documents efficiently
- Monitor memory usage
4. Error Handling:
- Verify PDF file existence
- Handle malformed PDFs
- Manage file access permissions
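For item 3, a sketch of pre-initializing once and reusing the tool across queries; the assumption (consistent with the RAG design described above) is that the PDF is embedded when the tool is created, so repeat searches avoid re-indexing:
```python
from crewai_tools import PDFSearchTool

# Assumption: content is embedded at initialization, so one instance
# can serve many queries without re-processing the PDF.
pdf_tool = PDFSearchTool(pdf="/path/to/research.pdf")

for query in ["methodology", "key findings", "limitations"]:
    print(pdf_tool.run(query=query))
```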
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import PDFSearchTool
# Initialize tool with specific PDF
pdf_tool = PDFSearchTool(pdf="/path/to/research.pdf")
# Create agent
researcher = Agent(
role='PDF Researcher',
goal='Extract insights from research papers',
backstory='Expert at analyzing research documents.',
tools=[pdf_tool]
)
# Define task
research_task = Task(
description="""Find all mentions of machine learning
applications in healthcare from the PDF.""",
agent=researcher
)
# The tool will use:
# {
# "query": "machine learning applications healthcare"
# }
# Create crew
crew = Crew(
agents=[researcher],
tasks=[research_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Dynamic PDF Selection
```python
# Initialize without PDF
flexible_tool = PDFSearchTool()
# Search different PDFs
research_results = flexible_tool.run(
query="quantum computing",
pdf="/path/to/research.pdf"
)
report_results = flexible_tool.run(
query="financial metrics",
pdf="/path/to/report.pdf"
)
```
### Multiple PDF Analysis
```python
# Create tools for different PDFs
research_tool = PDFSearchTool(pdf="/path/to/research.pdf")
report_tool = PDFSearchTool(pdf="/path/to/report.pdf")
# Create agent with multiple tools
analyst = Agent(
role='Document Analyst',
goal='Cross-reference multiple documents',
tools=[research_tool, report_tool]
)
```
### Error Handling Example
```python
try:
pdf_tool = PDFSearchTool()
results = pdf_tool.run(
query="important findings",
pdf="/path/to/document.pdf"
)
print(results)
except Exception as e:
print(f"Error processing PDF: {str(e)}")
```
## Notes
- Inherits from RagTool
- Uses PDFEmbedchainAdapter
- Supports semantic search
- Dynamic PDF specification
- Efficient content retrieval
- Thread-safe operations
- Maintains search context
- Handles large documents
- Supports various PDF formats
- Memory-efficient processing

View File

@@ -0,0 +1,234 @@
---
title: PDFTextWritingTool
description: A tool for adding text to specific positions in PDF documents with custom font support
icon: file-pdf
---
## PDFTextWritingTool
The PDFTextWritingTool allows you to add text to specific positions in PDF documents with support for custom fonts, colors, and positioning. It's particularly useful for adding annotations, watermarks, or any text overlay to existing PDFs.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import PDFTextWritingTool
# Basic initialization
pdf_tool = PDFTextWritingTool()
# Create an agent with the tool
document_processor = Agent(
role='Document Processor',
goal='Add text annotations to PDF documents',
backstory='Expert at PDF document processing and text manipulation.',
tools=[pdf_tool],
verbose=True
)
```
## Input Schema
```python
class PDFTextWritingToolSchema(BaseModel):
pdf_path: str = Field(
description="Path to the PDF file to modify"
)
text: str = Field(
description="Text to add to the PDF"
)
position: tuple = Field(
description="Tuple of (x, y) coordinates for text placement"
)
font_size: int = Field(
default=12,
description="Font size of the text"
)
font_color: str = Field(
default="0 0 0 rg",
description="RGB color code for the text"
)
font_name: Optional[str] = Field(
default="F1",
description="Font name for standard fonts"
)
font_file: Optional[str] = Field(
default=None,
description="Path to a .ttf font file for custom font usage"
)
page_number: int = Field(
default=0,
description="Page number to add text to"
)
```
## Function Signature
```python
def run(
self,
pdf_path: str,
text: str,
position: tuple,
font_size: int,
font_color: str,
font_name: str = "F1",
font_file: Optional[str] = None,
page_number: int = 0,
**kwargs
) -> str:
"""
Add text to a specific position in a PDF document.
Args:
pdf_path (str): Path to the PDF file to modify
text (str): Text to add to the PDF
position (tuple): (x, y) coordinates for text placement
font_size (int): Font size of the text
font_color (str): RGB color code for the text (e.g., "0 0 0 rg" for black)
font_name (str, optional): Font name for standard fonts (default: "F1")
font_file (str, optional): Path to a .ttf font file for custom font
page_number (int, optional): Page number to add text to (default: 0)
Returns:
str: Success message with output file path
"""
```
## Best Practices
1. File Handling:
- Ensure PDF files exist before processing
- Use absolute paths for reliability
- Handle file permissions appropriately
2. Text Positioning:
- Use appropriate coordinates based on PDF dimensions
- Consider page orientation and margins
- Test positioning with small changes first
3. Font Usage:
- Verify custom font files exist
- Use standard fonts when possible
- Test font rendering before production use
4. Error Handling:
- Check page numbers are valid
- Verify font file accessibility
- Handle file writing permissions
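On positioning (item 2): PDF coordinates put the origin at the bottom-left corner of the page, measured in points (1/72 inch). A small sketch for roughly centering text on a US Letter page; the glyph-width estimate is a crude assumption:
```python
PAGE_WIDTH, PAGE_HEIGHT = 612, 792  # US Letter page size, in points

text = "CONFIDENTIAL"
font_size = 24
# Crude width estimate: average glyph is ~0.5 em wide at this size.
text_width = len(text) * font_size * 0.5

x = (PAGE_WIDTH - text_width) / 2
y = PAGE_HEIGHT / 2
position = (x, y)  # pass as position=... to PDFTextWritingTool
print(position)
```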
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import PDFTextWritingTool
# Initialize tool
pdf_tool = PDFTextWritingTool()
# Create agent
document_processor = Agent(
role='Document Processor',
goal='Process and annotate PDF documents',
backstory='Expert at PDF manipulation and text placement.',
tools=[pdf_tool]
)
# Define task
annotation_task = Task(
description="""Add a watermark saying 'CONFIDENTIAL' to
the center of the first page of the document at
'/path/to/document.pdf'.""",
agent=document_processor
)
# The tool will use:
# {
# "pdf_path": "/path/to/document.pdf",
# "text": "CONFIDENTIAL",
# "position": (300, 400),
# "font_size": 24,
# "font_color": "1 0 0 rg", # Red color
# "page_number": 0
# }
# Create crew
crew = Crew(
agents=[document_processor],
tasks=[annotation_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Custom Font Example
```python
# Using a custom font
result = pdf_tool.run(
pdf_path="/path/to/input.pdf",
text="Custom Font Text",
position=(100, 500),
font_size=16,
font_color="0 0 1 rg", # Blue color
font_file="/path/to/custom_font.ttf",
page_number=0
)
```
### Multiple Text Elements
```python
# Add multiple text elements
positions = [(100, 700), (100, 650), (100, 600)]
texts = ["Header", "Subheader", "Body Text"]
font_sizes = [18, 14, 12]
for text, position, size in zip(texts, positions, font_sizes):
pdf_tool.run(
pdf_path="/path/to/input.pdf",
text=text,
position=position,
font_size=size,
font_color="0 0 0 rg" # Black color
)
```
### Color Text Example
```python
# Add colored text
colors = {
"red": "1 0 0 rg",
"green": "0 1 0 rg",
"blue": "0 0 1 rg"
}
for y_pos, (color_name, color_code) in enumerate(colors.items()):
pdf_tool.run(
pdf_path="/path/to/input.pdf",
text=f"This text is {color_name}",
position=(100, 700 - y_pos * 50),
font_size=14,
font_color=color_code
)
```
## Notes
- Supports custom TrueType fonts (.ttf)
- Allows RGB color specifications
- Handles multi-page PDFs
- Preserves original PDF content
- Supports text positioning with x,y coordinates
- Maintains PDF structure and metadata
- Creates new output file for safety
- Thread-safe operations
- Efficient PDF manipulation
- Supports various text attributes

View File

@@ -0,0 +1,181 @@
---
title: PGSearchTool
description: A RAG-based semantic search tool for PostgreSQL database content
icon: database-search
---
## PGSearchTool
The PGSearchTool provides semantic search capabilities for PostgreSQL database content using RAG (Retrieval-Augmented Generation). It allows for natural language queries over database table content by leveraging embeddings and semantic search.
## Installation
```bash
pip install 'crewai[tools]'
pip install embedchain # Required dependency
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import PGSearchTool
# Initialize the tool with database configuration
search_tool = PGSearchTool(
db_uri="postgresql://user:password@localhost:5432/dbname",
table_name="your_table"
)
# Create an agent with the tool
researcher = Agent(
role='Database Researcher',
goal='Find relevant information in database content',
backstory='Expert at searching and analyzing database content.',
tools=[search_tool],
verbose=True
)
```
## Input Schema
```python
class PGSearchToolSchema(BaseModel):
search_query: str = Field(
description="Mandatory semantic search query for searching the database's content"
)
```
## Function Signature
```python
def __init__(self, table_name: str, **kwargs):
"""
Initialize the PostgreSQL search tool.
Args:
table_name (str): Name of the table to search
db_uri (str): PostgreSQL database URI (required in kwargs)
**kwargs: Additional arguments for RagTool initialization
"""
def _run(
self,
search_query: str,
**kwargs: Any
) -> Any:
"""
Perform semantic search on database content.
Args:
search_query (str): Semantic search query
**kwargs: Additional search parameters
Returns:
Any: Relevant database content based on semantic search
"""
```
## Best Practices
1. Secure database credentials:
```python
# Use environment variables for sensitive data
import os
db_uri = (
f"postgresql://{os.getenv('DB_USER')}:{os.getenv('DB_PASS')}"
f"@{os.getenv('DB_HOST')}:{os.getenv('DB_PORT')}/{os.getenv('DB_NAME')}"
)
```
2. Optimize table selection
3. Use specific semantic queries
4. Handle database connection errors
5. Consider table size and query performance
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import PGSearchTool
# Initialize tool with database configuration
db_search = PGSearchTool(
db_uri="postgresql://user:password@localhost:5432/dbname",
table_name="customer_feedback"
)
# Create agent
analyst = Agent(
role='Database Analyst',
goal='Analyze customer feedback data',
backstory='Expert at finding insights in customer feedback.',
tools=[db_search]
)
# Define task
analysis_task = Task(
description="""Find all customer feedback related to product usability
and ease of use. Focus on common patterns and issues.""",
agent=analyst
)
# The tool will use:
# {
# "search_query": "product usability feedback ease of use issues"
# }
# Create crew
crew = Crew(
agents=[analyst],
tasks=[analysis_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Multiple Table Search
```python
# Create tools for different tables
customer_search = PGSearchTool(
db_uri="postgresql://user:password@localhost:5432/dbname",
table_name="customers"
)
orders_search = PGSearchTool(
db_uri="postgresql://user:password@localhost:5432/dbname",
table_name="orders"
)
# Use both tools in an agent
analyst = Agent(
role='Multi-table Analyst',
goal='Analyze customer and order data',
tools=[customer_search, orders_search]
)
```
### Error Handling
```python
try:
results = search_tool._run(
search_query="customer satisfaction ratings"
)
# Process results
except Exception as e:
print(f"Database search error: {str(e)}")
```
## Notes
- Inherits from RagTool for semantic search
- Uses embedchain's PostgresLoader
- Requires valid PostgreSQL connection
- Supports semantic natural language queries
- Thread-safe operations
- Efficient for large tables
- Handles connection pooling automatically

docs/tools/rag-tool.mdx Normal file
View File

@@ -0,0 +1,282 @@
---
title: RagTool
description: Base class for Retrieval-Augmented Generation (RAG) tools with flexible adapter support
icon: database
---
## RagTool
The RagTool serves as the base class for all Retrieval-Augmented Generation (RAG) tools in the CrewAI ecosystem. It provides a flexible adapter-based architecture for implementing knowledge base functionality with semantic search capabilities.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import RagTool
# Create custom adapter
class CustomAdapter(RagTool.Adapter):
def query(self, question: str) -> str:
# Implement custom query logic
return "Answer based on knowledge base"
def add(self, *args, **kwargs) -> None:
# Implement custom add logic
pass
# Method 1: Use default EmbedchainAdapter
rag_tool = RagTool(
name="Custom Knowledge Base",
description="Specialized knowledge base for domain data",
summarize=True
)
# Method 2: Use custom adapter
custom_tool = RagTool(
name="Custom Knowledge Base",
adapter=CustomAdapter(),
summarize=False
)
# Create an agent with the tool
researcher = Agent(
role='Knowledge Base Researcher',
goal='Search and analyze knowledge base content',
backstory='Expert at finding relevant information in specialized datasets.',
tools=[rag_tool],
verbose=True
)
```
## Adapter Interface
```python
class Adapter(BaseModel, ABC):
@abstractmethod
def query(self, question: str) -> str:
"""
Query the knowledge base with a question.
Args:
question (str): Query to search in knowledge base
Returns:
str: Answer based on knowledge base content
"""
@abstractmethod
def add(self, *args: Any, **kwargs: Any) -> None:
"""
Add content to the knowledge base.
Args:
*args: Variable length argument list
**kwargs: Arbitrary keyword arguments
"""
```
## Function Signature
```python
def __init__(
self,
name: str = "Knowledge base",
description: str = "A knowledge base that can be used to answer questions.",
summarize: bool = False,
adapter: Optional[Adapter] = None,
config: Optional[dict[str, Any]] = None,
**kwargs
):
"""
Initialize the RAG tool.
Args:
name (str): Tool name
description (str): Tool description
summarize (bool): Enable answer summarization
adapter (Optional[Adapter]): Custom adapter implementation
config (Optional[dict]): Configuration for default adapter
**kwargs: Additional arguments for base tool
"""
def _run(
self,
query: str,
**kwargs: Any
) -> str:
"""
Execute query against knowledge base.
Args:
query (str): Question to ask
**kwargs: Additional arguments
Returns:
str: Answer from knowledge base
"""
```
## Best Practices
1. Adapter Implementation:
- Define clear interfaces
- Handle edge cases
- Implement error handling
- Document behavior
2. Knowledge Base Management:
- Organize content logically
- Update content regularly
- Monitor performance
- Handle large datasets
3. Query Optimization:
- Structure queries clearly
- Consider context
- Handle ambiguity
- Validate inputs
4. Error Handling:
- Handle missing data
- Manage timeouts
- Provide clear messages
- Log issues (a defensive wrapper combining these points is sketched below)
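The practices above can be combined into a small defensive wrapper. The sketch below is illustrative only: `safe_query` and its fallback message are assumptions layered on top of `RagTool.run`, not part of the API.
```python
import logging

from crewai_tools import RagTool

logger = logging.getLogger(__name__)

def safe_query(tool: RagTool, query: str) -> str:
    """Validate the query, run it, and log failures (illustrative sketch)."""
    if not query or not query.strip():
        raise ValueError("Query must be a non-empty string")
    try:
        return tool.run(query=query.strip())
    except Exception as exc:
        logger.error("Knowledge base query failed: %s", exc)
        return "The knowledge base is currently unavailable."
```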
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import RagTool
# Initialize tool with custom configuration
rag_tool = RagTool(
name="Technical Documentation KB",
description="Knowledge base for technical documentation",
summarize=True,
config={
"collection_name": "tech_docs",
"chunking": {
"chunk_size": 500,
"chunk_overlap": 50
}
}
)
# Add content to knowledge base
rag_tool.add(
"Technical documentation content here...",
data_type="text"
)
# Create agent
researcher = Agent(
role='Documentation Expert',
goal='Extract technical information from documentation',
backstory='Expert at analyzing technical documentation.',
tools=[rag_tool]
)
# Define task
research_task = Task(
description="""Find all mentions of API endpoints
and their authentication requirements.""",
agent=researcher
)
# Create crew
crew = Crew(
agents=[researcher],
tasks=[research_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Custom Adapter Implementation
```python
from typing import Any
from crewai_tools import RagTool

class SpecializedAdapter(RagTool.Adapter):
    # Adapter is a pydantic model, so declare fields instead of overriding __init__
    config: dict = {}
    knowledge_base: dict = {}

    def query(self, question: str) -> str:
        # Minimal lookup-based retrieval; replace with specialized query logic
        return self.knowledge_base.get(question, "No matching content found")

    def add(self, content: str, **kwargs: Any) -> None:
        # Store content under a caller-supplied key (or a prefix of the content)
        self.knowledge_base[kwargs.get("key", content[:40])] = content
# Use custom adapter
specialized_tool = RagTool(
name="Specialized KB",
adapter=SpecializedAdapter(config={"mode": "advanced"})
)
```
### Configuration Management
```python
# Configure default EmbedchainAdapter
config = {
"collection_name": "custom_collection",
"embedding": {
"model": "sentence-transformers/all-mpnet-base-v2",
"dimensions": 768
},
"chunking": {
"chunk_size": 1000,
"chunk_overlap": 100
}
}
tool = RagTool(config=config)
```
### Error Handling Example
```python
try:
rag_tool = RagTool()
# Add content
rag_tool.add(
"Documentation content...",
data_type="text"
)
# Query content
result = rag_tool.run(
query="What are the system requirements?"
)
print(result)
except Exception as e:
print(f"Error using knowledge base: {str(e)}")
```
## Notes
- Base class for all RAG tools in CrewAI
- Flexible adapter-based architecture
- Ships with a default EmbedchainAdapter
- Supports custom adapter implementations
- Handles content addition and query processing
- Provides error handling and configuration options
- Optimized for performance and memory management

View File

@@ -0,0 +1,229 @@
---
title: SerpApi Google Search Tool
description: A tool for performing Google searches using the SerpApi service
---
# SerpApi Google Search Tool
The SerpApi Google Search Tool performs Google searches through the SerpApi service, providing location-aware results with comprehensive filtering.
## Installation
```bash
pip install 'crewai[tools]'
pip install serpapi
```
## Prerequisites
You need a SerpApi API key to use this tool. You can get one from [SerpApi's website](https://serpapi.com/manage-api-key).
Set your API key as an environment variable:
```bash
export SERPAPI_API_KEY="your_api_key_here"
```
## Usage
Here's how to use the SerpApi Google Search Tool:
```python
from crewai import Agent, Task
from crewai_tools import SerpApiGoogleSearchTool
# Initialize the tool
search_tool = SerpApiGoogleSearchTool()
# Create an agent with the tool
search_agent = Agent(
role='Web Researcher',
goal='Find accurate information online',
backstory='I help research and analyze online information',
tools=[search_tool]
)
# Use in a task
task = Task(
description="Research recent AI developments",
agent=search_agent,
context={
"search_query": "latest artificial intelligence breakthroughs 2024",
"location": "United States" # Optional
}
)
```
## Input Schema
```python
class SerpApiGoogleSearchToolSchema(BaseModel):
search_query: str # The search query for Google Search
location: Optional[str] = None # Optional location for localized results
```
## Function Signatures
### Base Tool Initialization
```python
def __init__(self, **kwargs):
"""
Initialize the SerpApi tool with API credentials.
Raises:
ImportError: If serpapi package is not installed
ValueError: If SERPAPI_API_KEY environment variable is not set
"""
```
### Search Execution
```python
def _run(
self,
**kwargs: Any,
) -> dict:
"""
Execute the Google search.
Args:
search_query (str): The search query
location (Optional[str]): Optional location for results
Returns:
dict: Filtered search results from Google
Raises:
HTTPError: If the API request fails
"""
```
## Best Practices
1. **API Key Management**:
- Store the API key securely in environment variables
- Never hardcode the API key in your code
- Verify API key validity before making requests
2. **Search Optimization**:
- Use specific, targeted search queries
- Include relevant keywords and time frames
- Leverage location parameter for regional results
3. **Error Handling**:
- Handle API rate limits gracefully
   - Implement retry logic for failed requests (see the sketch after this list)
- Validate input parameters before making requests
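For the retry practice above, a thin wrapper with exponential backoff is usually enough. The sketch below is an assumption-level helper; `search_with_retry`, its delays, and retry count are not part of the tool:
```python
import time

def search_with_retry(tool, query: str, max_retries: int = 3, base_delay: float = 1.0) -> dict:
    """Retry transient search failures with exponential backoff (illustrative sketch)."""
    for attempt in range(max_retries):
        try:
            return tool._run(search_query=query)
        except Exception:
            if attempt == max_retries - 1:
                raise
            # Back off 1s, 2s, 4s, ... between attempts
            time.sleep(base_delay * (2 ** attempt))
```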
## Example Integration
Here's a complete example showing how to integrate the SerpApi Google Search Tool with CrewAI:
```python
from crewai import Agent, Task, Crew
from crewai_tools import SerpApiGoogleSearchTool
# Initialize the tool
search_tool = SerpApiGoogleSearchTool()
# Create an agent with the tool
researcher = Agent(
role='Research Analyst',
goal='Find and analyze current information',
backstory="""I am an expert at finding and analyzing
information from various online sources.""",
tools=[search_tool]
)
# Create tasks
research_task = Task(
description="""
Research the following topic:
1. Latest developments in quantum computing
2. Focus on practical applications
3. Include major company announcements
Provide a comprehensive analysis of the findings.
""",
agent=researcher,
context={
"search_query": "quantum computing breakthroughs applications companies",
"location": "United States"
}
)
# Create and run the crew
crew = Crew(
agents=[researcher],
tasks=[research_task]
)
result = crew.kickoff()
```
## Error Handling
The tool handles various error scenarios:
1. **Missing API Key**:
```python
try:
tool = SerpApiGoogleSearchTool()
except ValueError as e:
print("API key not found. Set SERPAPI_API_KEY environment variable.")
```
2. **API Request Errors**:
```python
from requests.exceptions import HTTPError  # assumed import; the docs above only name HTTPError

try:
results = tool._run(
search_query="quantum computing",
location="United States"
)
except HTTPError as e:
print(f"API request failed: {str(e)}")
```
3. **Invalid Parameters**:
```python
try:
results = tool._run(
search_query="", # Empty query
location="Invalid Location"
)
except ValueError as e:
print("Invalid search parameters provided.")
```
## Response Format
The tool returns a filtered dictionary containing Google search results. Example response structure:
```python
{
"organic_results": [
{
"title": "Page Title",
"link": "https://...",
"snippet": "Page description or excerpt...",
"position": 1
}
# Additional results...
],
"knowledge_graph": {
"title": "Topic Title",
"description": "Topic description...",
"source": {
"name": "Source Name",
"link": "https://..."
}
},
"related_questions": [
{
"question": "Related question?",
"answer": "Answer to related question..."
}
# Additional related questions...
]
}
```
The response is automatically filtered to remove metadata and unnecessary fields, focusing on the most relevant search information. Fields like search metadata, parameters, and pagination are omitted for clarity.
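A small loop is usually enough to consume the filtered dictionary. The sketch below assumes the field names shown in the example structure above:
```python
results = search_tool._run(search_query="latest artificial intelligence breakthroughs 2024")

# Print the top organic hits
for item in results.get("organic_results", [])[:5]:
    print(f"{item['position']}. {item['title']} -> {item['link']}")

# The knowledge graph block is optional, so guard the lookup
if kg := results.get("knowledge_graph"):
    print(f"Topic: {kg['title']} - {kg.get('description', '')}")
```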

View File

@@ -0,0 +1,225 @@
---
title: SerpApi Google Shopping Tool
description: A tool for searching Google Shopping using the SerpApi service
---
# SerpApi Google Shopping Tool
The SerpApi Google Shopping Tool searches Google Shopping through the SerpApi service, providing location-aware shopping results with comprehensive filtering.
## Installation
```bash
pip install 'crewai[tools]'
pip install serpapi
```
## Prerequisites
You need a SerpApi API key to use this tool. You can get one from [SerpApi's website](https://serpapi.com/manage-api-key).
Set your API key as an environment variable:
```bash
export SERPAPI_API_KEY="your_api_key_here"
```
## Usage
Here's how to use the SerpApi Google Shopping Tool:
```python
from crewai import Agent, Task
from crewai_tools import SerpApiGoogleShoppingTool
# Initialize the tool
shopping_tool = SerpApiGoogleShoppingTool()
# Create an agent with the tool
shopping_agent = Agent(
role='Shopping Researcher',
goal='Find the best shopping deals',
backstory='I help find and analyze shopping options',
tools=[shopping_tool]
)
# Use in a task
task = Task(
description="Find best deals for gaming laptops",
agent=shopping_agent,
context={
"search_query": "gaming laptop deals",
"location": "United States" # Optional
}
)
```
## Input Schema
```python
class SerpApiGoogleShoppingToolSchema(BaseModel):
search_query: str # The search query for Google Shopping
location: Optional[str] = None # Optional location for localized results
```
## Function Signatures
### Base Tool Initialization
```python
def __init__(self, **kwargs):
"""
Initialize the SerpApi tool with API credentials.
Raises:
ImportError: If serpapi package is not installed
ValueError: If SERPAPI_API_KEY environment variable is not set
"""
```
### Search Execution
```python
def _run(
self,
**kwargs: Any,
) -> dict:
"""
Execute the Google Shopping search.
Args:
search_query (str): The search query for Google Shopping
location (Optional[str]): Optional location for results
Returns:
dict: Filtered search results from Google Shopping
Raises:
HTTPError: If the API request fails
"""
```
## Best Practices
1. **API Key Management**:
- Store the API key securely in environment variables
- Never hardcode the API key in your code
- Verify API key validity before making requests
2. **Search Optimization**:
- Use specific, targeted search queries
- Include relevant product details in queries
- Leverage location parameter for regional pricing
3. **Error Handling**:
- Handle API rate limits gracefully
- Implement retry logic for failed requests
- Validate input parameters before making requests
## Example Integration
Here's a complete example showing how to integrate the SerpApi Google Shopping Tool with CrewAI:
```python
from crewai import Agent, Task, Crew
from crewai_tools import SerpApiGoogleShoppingTool
# Initialize the tool
shopping_tool = SerpApiGoogleShoppingTool()
# Create an agent with the tool
researcher = Agent(
role='Shopping Analyst',
goal='Find and analyze the best shopping deals',
backstory="""I am an expert at finding the best shopping deals
and analyzing product offerings across different regions.""",
tools=[shopping_tool]
)
# Create tasks
search_task = Task(
description="""
Research gaming laptops with the following criteria:
1. Price range: $800-$1500
2. Released in the last year
3. Compare prices across different retailers
Provide a comprehensive analysis of the findings.
""",
agent=researcher,
context={
"search_query": "gaming laptop RTX 4060 2023",
"location": "United States"
}
)
# Create and run the crew
crew = Crew(
agents=[researcher],
tasks=[search_task]
)
result = crew.kickoff()
```
## Error Handling
The tool handles various error scenarios:
1. **Missing API Key**:
```python
try:
tool = SerpApiGoogleShoppingTool()
except ValueError as e:
print("API key not found. Set SERPAPI_API_KEY environment variable.")
```
2. **API Request Errors**:
```python
from requests.exceptions import HTTPError  # assumed import; the docs above only name HTTPError

try:
results = tool._run(
search_query="gaming laptop",
location="United States"
)
except HTTPError as e:
print(f"API request failed: {str(e)}")
```
3. **Invalid Parameters**:
```python
try:
results = tool._run(
search_query="", # Empty query
location="Invalid Location"
)
except ValueError as e:
print("Invalid search parameters provided.")
```
## Response Format
The tool returns a filtered dictionary containing Google Shopping results. Example response structure:
```python
{
"shopping_results": [
{
"title": "Product Title",
"price": "$999.99",
"link": "https://...",
"source": "Retailer Name",
"rating": 4.5,
"reviews": 123,
"thumbnail": "https://..."
}
# Additional results...
],
"organic_results": [
{
"title": "Related Product",
"link": "https://...",
"snippet": "Product description..."
}
# Additional organic results...
]
}
```
The response is automatically filtered to remove metadata and unnecessary fields, focusing on the most relevant shopping information.
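To rank offers by price, the string values shown above must first be parsed into numbers. The sketch below assumes the `"$999.99"` format from the example structure:
```python
def parse_price(price_str: str) -> float:
    """Convert a price string like '$1,299.99' to a float (illustrative)."""
    return float(price_str.replace("$", "").replace(",", ""))

results = shopping_tool._run(search_query="gaming laptop deals")
offers = results.get("shopping_results", [])

# Cheapest offers first
for offer in sorted(offers, key=lambda o: parse_price(o["price"]))[:5]:
    print(f"{offer['title']}: {offer['price']} ({offer['source']})")
```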

View File

@@ -0,0 +1,184 @@
---
title: SerplyJobSearchTool
description: A tool for searching US job postings using the Serply API
icon: briefcase
---
## SerplyJobSearchTool
The SerplyJobSearchTool provides job search capabilities using the Serply API. It allows for searching job postings in the US market, returning structured information about positions, employers, locations, and remote work status.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import SerplyJobSearchTool
# Set environment variable
# export SERPLY_API_KEY='your-api-key'
# Initialize the tool
search_tool = SerplyJobSearchTool()
# Create an agent with the tool
job_researcher = Agent(
role='Job Market Researcher',
goal='Find relevant job opportunities',
backstory='Expert at analyzing job market trends and opportunities.',
tools=[search_tool],
verbose=True
)
```
## Input Schema
```python
class SerplyJobSearchToolSchema(BaseModel):
search_query: str = Field(
description="Mandatory search query for fetching job postings"
)
```
## Function Signature
```python
def __init__(self, **kwargs):
"""
Initialize the job search tool.
Args:
**kwargs: Additional arguments for RagTool initialization
Note:
Requires SERPLY_API_KEY environment variable
"""
def _run(
self,
**kwargs: Any
) -> str:
"""
Perform job search using Serply API.
Args:
search_query (str): Job search query
**kwargs: Additional search parameters
Returns:
str: Formatted string containing job listings with details:
- Position
- Employer
- Location
- Link
- Highlights
- Remote/Hybrid status
"""
```
## Best Practices
1. Set up API authentication:
```bash
export SERPLY_API_KEY='your-serply-api-key'
```
2. Use specific search queries
3. Handle potential API errors
4. Process structured results effectively
5. Consider rate limits and quotas
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import SerplyJobSearchTool
# Initialize tool
job_search = SerplyJobSearchTool()
# Create agent
recruiter = Agent(
role='Technical Recruiter',
goal='Find relevant job opportunities in tech',
backstory='Expert at identifying promising tech positions.',
tools=[job_search]
)
# Define task
search_task = Task(
description="""Search for senior software engineer positions
with remote work options in the US. Focus on positions
requiring Python expertise.""",
agent=recruiter
)
# The tool will use:
# {
# "search_query": "senior software engineer python remote"
# }
# Create crew
crew = Crew(
agents=[recruiter],
tasks=[search_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Handling Search Results
```python
# Example of processing structured results
results = search_tool._run(
search_query="machine learning engineer"
)
# Results format:
"""
Search results:
Position: Senior Machine Learning Engineer
Employer: TechCorp Inc
Location: San Francisco, CA
Link: https://example.com/job/123
Highlights: Python, TensorFlow, 5+ years experience
Is Remote: True
Is Hybrid: False
---
Position: ML Engineer
...
"""
```
### Error Handling
```python
try:
results = search_tool._run(
search_query="data scientist"
)
if not results:
print("No jobs found")
else:
print(results)
except Exception as e:
print(f"Job search error: {str(e)}")
```
## Notes
- Requires valid Serply API key
- Currently supports US job market only
- Returns structured job information
- Includes remote/hybrid status
- Thread-safe operations
- Efficient job search capabilities
- Handles API rate limiting automatically
- Provides detailed job highlights

View File

@@ -0,0 +1,209 @@
---
title: SerplyNewsSearchTool
description: A news article search tool powered by Serply API with configurable search parameters
icon: newspaper
---
## SerplyNewsSearchTool
The SerplyNewsSearchTool provides news article search capabilities using the Serply API. It allows for customizable search parameters including result limits and proxy location for region-specific news results.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import SerplyNewsSearchTool
# Set environment variable
# export SERPLY_API_KEY='your-api-key'
# Basic initialization
news_tool = SerplyNewsSearchTool()
# Advanced initialization with custom parameters
news_tool = SerplyNewsSearchTool(
limit=20, # Return 20 results
proxy_location="FR" # Search from France
)
# Create an agent with the tool
news_researcher = Agent(
role='News Researcher',
goal='Find relevant news articles',
backstory='Expert at news research and information gathering.',
tools=[news_tool],
verbose=True
)
```
## Input Schema
```python
class SerplyNewsSearchToolSchema(BaseModel):
search_query: str = Field(
description="Mandatory search query for fetching news articles"
)
```
## Function Signature
```python
def __init__(
self,
limit: Optional[int] = 10,
proxy_location: Optional[str] = "US",
**kwargs
):
"""
Initialize the news search tool.
Args:
limit (int): Maximum number of results [10-100] (default: 10)
proxy_location (str): Region for local news results (default: "US")
Options: US, CA, IE, GB, FR, DE, SE, IN, JP, KR, SG, AU, BR
**kwargs: Additional arguments for tool creation
"""
def _run(
self,
**kwargs: Any
) -> str:
"""
Perform news search using Serply API.
Args:
search_query (str): News search query
Returns:
str: Formatted string containing news results:
- Title
- Link
- Source
- Published Date
"""
```
## Best Practices
1. Set up API authentication:
```bash
export SERPLY_API_KEY='your-serply-api-key'
```
2. Configure search parameters appropriately:
- Set reasonable result limits
- Select relevant proxy location for regional news
- Consider time sensitivity of news content
3. Handle potential API errors
4. Process structured results effectively
5. Consider rate limits and quotas
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import SerplyNewsSearchTool
# Initialize tool with custom configuration
news_tool = SerplyNewsSearchTool(
limit=15, # 15 results
proxy_location="US" # US news sources
)
# Create agent
news_analyst = Agent(
role='News Analyst',
goal='Research breaking news and developments',
backstory='Expert at analyzing news trends and developments.',
tools=[news_tool]
)
# Define task
news_task = Task(
description="""Research the latest developments in renewable
energy technology and investments, focusing on major
announcements and industry trends.""",
agent=news_analyst
)
# The tool will use:
# {
# "search_query": "renewable energy technology investments news"
# }
# Create crew
crew = Crew(
agents=[news_analyst],
tasks=[news_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Regional News Configuration
```python
# French news sources
fr_news = SerplyNewsSearchTool(
proxy_location="FR",
limit=20
)
# Japanese news sources
jp_news = SerplyNewsSearchTool(
proxy_location="JP",
limit=20
)
```
### Result Processing
```python
# Get news results
try:
results = news_tool._run(
search_query="renewable energy investments"
)
print(results)
except Exception as e:
print(f"News search error: {str(e)}")
```
### Multiple Region Search
```python
# Search across multiple regions
regions = ["US", "GB", "DE"]
all_results = []
for region in regions:
regional_tool = SerplyNewsSearchTool(
proxy_location=region,
limit=5
)
results = regional_tool._run(
search_query="global tech innovations"
)
all_results.append(f"Results from {region}:\n{results}")
combined_results = "\n\n".join(all_results)
```
## Notes
- Requires valid Serply API key
- Supports multiple regions for news sources
- Configurable result limits (10-100)
- Returns structured news article data
- Thread-safe operations
- Efficient news search capabilities
- Handles API rate limiting automatically
- Includes source attribution and publication dates
- Follows redirects for final article URLs

View File

@@ -0,0 +1,209 @@
---
title: SerplyScholarSearchTool
description: A scholarly literature search tool powered by Serply API with configurable search parameters
icon: book
---
## SerplyScholarSearchTool
The SerplyScholarSearchTool provides scholarly literature search capabilities using the Serply API. It allows for customizable search parameters including language and proxy location for region-specific academic results.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import SerplyScholarSearchTool
# Set environment variable
# export SERPLY_API_KEY='your-api-key'
# Basic initialization
scholar_tool = SerplyScholarSearchTool()
# Advanced initialization with custom parameters
scholar_tool = SerplyScholarSearchTool(
hl="fr", # French language results
proxy_location="FR" # Search from France
)
# Create an agent with the tool
academic_researcher = Agent(
role='Academic Researcher',
goal='Find relevant scholarly literature',
backstory='Expert at academic research and literature review.',
tools=[scholar_tool],
verbose=True
)
```
## Input Schema
```python
class SerplyScholarSearchToolSchema(BaseModel):
search_query: str = Field(
description="Mandatory search query for fetching scholarly literature"
)
```
## Function Signature
```python
def __init__(
self,
hl: str = "us",
proxy_location: Optional[str] = "US",
**kwargs
):
"""
Initialize the scholar search tool.
Args:
hl (str): Host language code for results (default: "us")
Reference: https://developers.google.com/custom-search/docs/xml_results?hl=en#wsInterfaceLanguages
proxy_location (str): Region for local results (default: "US")
Options: US, CA, IE, GB, FR, DE, SE, IN, JP, KR, SG, AU, BR
**kwargs: Additional arguments for tool creation
"""
def _run(
self,
**kwargs: Any
) -> str:
"""
Perform scholarly literature search using Serply API.
Args:
search_query (str): Academic search query
Returns:
str: Formatted string containing scholarly results:
- Title
- Link
- Description
- Citation
- Authors
"""
```
## Best Practices
1. Set up API authentication:
```bash
export SERPLY_API_KEY='your-serply-api-key'
```
2. Configure search parameters appropriately:
- Use relevant language codes
- Select appropriate proxy location
- Provide specific academic search terms
3. Handle potential API errors
4. Process structured results effectively
5. Consider rate limits and quotas
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import SerplyScholarSearchTool
# Initialize tool with custom configuration
scholar_tool = SerplyScholarSearchTool(
hl="en", # English results
proxy_location="US" # US academic sources
)
# Create agent
researcher = Agent(
role='Academic Researcher',
goal='Research recent academic publications',
backstory='Expert at analyzing academic literature and research trends.',
tools=[scholar_tool]
)
# Define task
research_task = Task(
description="""Research recent academic publications on
machine learning applications in healthcare, focusing on
peer-reviewed articles from the last two years.""",
agent=researcher
)
# The tool will use:
# {
# "search_query": "machine learning healthcare applications"
# }
# Create crew
crew = Crew(
agents=[researcher],
tasks=[research_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Language and Region Configuration
```python
# French academic sources
fr_scholar = SerplyScholarSearchTool(
hl="fr",
proxy_location="FR"
)
# German academic sources
de_scholar = SerplyScholarSearchTool(
hl="de",
proxy_location="DE"
)
```
### Result Processing
```python
try:
results = scholar_tool._run(
search_query="machine learning healthcare applications"
)
print(results)
except Exception as e:
print(f"Scholar search error: {str(e)}")
```
### Citation Analysis
```python
# Extract and analyze citations
def analyze_citations(results):
    """Collect the citation from each result block (assumes blocks label it as 'Cite: ...')."""
citations = []
for result in results.split("---"):
if "Cite:" in result:
citation = result.split("Cite:")[1].split("\n")[0].strip()
citations.append(citation)
return citations
results = scholar_tool._run(
search_query="artificial intelligence ethics"
)
citations = analyze_citations(results)
```
## Notes
- Requires valid Serply API key
- Supports multiple languages and regions
- Returns structured academic article data
- Includes citation information
- Lists all authors of publications
- Thread-safe operations
- Efficient scholarly search capabilities
- Handles API rate limiting automatically
- Supports both direct and document links
- Provides comprehensive article metadata

View File

@@ -0,0 +1,213 @@
---
title: SerplyWebSearchTool
description: A Google search tool powered by Serply API with configurable search parameters
icon: search
---
## SerplyWebSearchTool
The SerplyWebSearchTool provides Google search capabilities using the Serply API. It allows for customizable search parameters including language, result limits, device type, and proxy location for region-specific results.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import SerplyWebSearchTool
# Set environment variable
# export SERPLY_API_KEY='your-api-key'
# Basic initialization
search_tool = SerplyWebSearchTool()
# Advanced initialization with custom parameters
search_tool = SerplyWebSearchTool(
hl="fr", # French language results
limit=20, # Return 20 results
device_type="mobile", # Mobile search results
proxy_location="FR" # Search from France
)
# Create an agent with the tool
researcher = Agent(
role='Web Researcher',
goal='Find relevant information online',
backstory='Expert at web research and information gathering.',
tools=[search_tool],
verbose=True
)
```
## Input Schema
```python
class SerplyWebSearchToolSchema(BaseModel):
search_query: str = Field(
description="Mandatory search query for Google search"
)
```
## Function Signature
```python
def __init__(
self,
hl: str = "us",
limit: int = 10,
device_type: str = "desktop",
proxy_location: str = "US",
**kwargs
):
"""
Initialize the Google search tool.
Args:
hl (str): Host language code for results (default: "us")
Reference: https://developers.google.com/custom-search/docs/xml_results?hl=en#wsInterfaceLanguages
limit (int): Maximum number of results [10-100] (default: 10)
device_type (str): "desktop" or "mobile" results (default: "desktop")
proxy_location (str): Region for local results (default: "US")
Options: US, CA, IE, GB, FR, DE, SE, IN, JP, KR, SG, AU, BR
**kwargs: Additional arguments for tool creation
"""
def _run(
self,
**kwargs: Any
) -> str:
"""
Perform Google search using Serply API.
Args:
search_query (str): Search query
Returns:
str: Formatted string containing search results:
- Title
- Link
- Description
"""
```
## Best Practices
1. Set up API authentication:
```bash
export SERPLY_API_KEY='your-serply-api-key'
```
2. Configure search parameters appropriately:
- Use relevant language codes
- Set reasonable result limits
- Choose appropriate device type
- Select relevant proxy location
3. Handle potential API errors
4. Process structured results effectively
5. Consider rate limits and quotas
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import SerplyWebSearchTool
# Initialize tool with custom configuration
search_tool = SerplyWebSearchTool(
hl="en", # English results
limit=15, # 15 results
device_type="desktop",
proxy_location="US"
)
# Create agent
researcher = Agent(
role='Web Researcher',
goal='Research emerging technology trends',
backstory='Expert at finding and analyzing tech trends.',
tools=[search_tool]
)
# Define task
research_task = Task(
description="""Research the latest developments in artificial
intelligence and machine learning, focusing on practical
applications in business.""",
agent=researcher
)
# The tool will use:
# {
# "search_query": "latest AI ML developments business applications"
# }
# Create crew
crew = Crew(
agents=[researcher],
tasks=[research_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Language and Region Configuration
```python
# French search from France
fr_search = SerplyWebSearchTool(
hl="fr",
proxy_location="FR"
)
# Japanese search from Japan
jp_search = SerplyWebSearchTool(
hl="ja",
proxy_location="JP"
)
```
### Device-Specific Results
```python
# Mobile results
mobile_search = SerplyWebSearchTool(
device_type="mobile",
limit=20
)
# Desktop results
desktop_search = SerplyWebSearchTool(
device_type="desktop",
limit=20
)
```
### Error Handling
```python
try:
results = search_tool._run(
search_query="artificial intelligence trends"
)
print(results)
except Exception as e:
print(f"Search error: {str(e)}")
```
## Notes
- Requires valid Serply API key
- Supports multiple languages and regions
- Configurable result limits (10-100)
- Device-specific search results
- Thread-safe operations
- Efficient search capabilities
- Handles API rate limiting automatically
- Returns structured search results

View File

@@ -0,0 +1,201 @@
---
title: SerplyWebpageToMarkdownTool
description: A tool for converting web pages to markdown format using Serply API
icon: markdown
---
## SerplyWebpageToMarkdownTool
The SerplyWebpageToMarkdownTool converts web pages to markdown format using the Serply API, making it easier for LLMs to process and understand web content. It supports configurable proxy locations for region-specific access.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import SerplyWebpageToMarkdownTool
# Set environment variable
# export SERPLY_API_KEY='your-api-key'
# Basic initialization
markdown_tool = SerplyWebpageToMarkdownTool()
# Advanced initialization with custom parameters
markdown_tool = SerplyWebpageToMarkdownTool(
proxy_location="FR" # Access from France
)
# Create an agent with the tool
web_processor = Agent(
role='Web Content Processor',
goal='Convert web content to markdown format',
backstory='Expert at processing and formatting web content.',
tools=[markdown_tool],
verbose=True
)
```
## Input Schema
```python
class SerplyWebpageToMarkdownToolSchema(BaseModel):
url: str = Field(
description="Mandatory URL of the webpage to convert to markdown"
)
```
## Function Signature
```python
def __init__(
self,
proxy_location: Optional[str] = "US",
**kwargs
):
"""
Initialize the webpage to markdown conversion tool.
Args:
proxy_location (str): Region for accessing the webpage (default: "US")
Options: US, CA, IE, GB, FR, DE, SE, IN, JP, KR, SG, AU, BR
**kwargs: Additional arguments for tool creation
"""
def _run(
self,
**kwargs: Any
) -> str:
"""
Convert webpage to markdown using Serply API.
Args:
url (str): URL of the webpage to convert
Returns:
str: Markdown formatted content of the webpage
"""
```
## Best Practices
1. Set up API authentication:
```bash
export SERPLY_API_KEY='your-serply-api-key'
```
2. Configure proxy location appropriately:
- Select relevant region for access
- Consider content accessibility
- Handle region-specific content
3. Handle potential API errors
4. Process markdown output effectively
5. Consider rate limits and quotas
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import SerplyWebpageToMarkdownTool
# Initialize tool with custom configuration
markdown_tool = SerplyWebpageToMarkdownTool(
proxy_location="US" # US access point
)
# Create agent
processor = Agent(
role='Content Processor',
goal='Convert web content to structured markdown',
backstory='Expert at processing web content into structured formats.',
tools=[markdown_tool]
)
# Define task
conversion_task = Task(
description="""Convert the documentation page at
https://example.com/docs into markdown format for
further processing.""",
agent=processor
)
# The tool will use:
# {
# "url": "https://example.com/docs"
# }
# Create crew
crew = Crew(
agents=[processor],
tasks=[conversion_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Regional Access Configuration
```python
# European access points
fr_processor = SerplyWebpageToMarkdownTool(
proxy_location="FR"
)
de_processor = SerplyWebpageToMarkdownTool(
proxy_location="DE"
)
```
### Error Handling
```python
try:
markdown_content = markdown_tool._run(
url="https://example.com/page"
)
print(markdown_content)
except Exception as e:
print(f"Conversion error: {str(e)}")
```
### Content Processing
```python
# Process multiple pages
urls = [
"https://example.com/page1",
"https://example.com/page2",
"https://example.com/page3"
]
markdown_contents = []
for url in urls:
try:
content = markdown_tool._run(url=url)
markdown_contents.append(content)
except Exception as e:
print(f"Error processing {url}: {str(e)}")
continue
# Combine contents
combined_markdown = "\n\n---\n\n".join(markdown_contents)
```
## Notes
- Requires valid Serply API key
- Supports multiple proxy locations
- Returns markdown-formatted content
- Simplifies web content for LLM processing
- Thread-safe operations
- Efficient content conversion
- Handles API rate limiting automatically
- Preserves content structure in markdown
- Supports various webpage formats
- Makes web content more accessible to AI agents

View File

@@ -0,0 +1,158 @@
---
title: TXTSearchTool
description: A semantic search tool for text files using RAG capabilities
icon: magnifying-glass-document
---
## TXTSearchTool
The TXTSearchTool is a specialized Retrieval-Augmented Generation (RAG) tool that enables semantic search within text files. It inherits from the base RagTool class and provides both fixed and dynamic text file searching capabilities.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import TXTSearchTool
# Method 1: Dynamic file path
txt_search = TXTSearchTool()
# Method 2: Fixed file path
fixed_txt_search = TXTSearchTool(txt="path/to/fixed/document.txt")
# Create an agent with the tool
researcher = Agent(
role='Research Assistant',
goal='Search through text documents semantically',
backstory='Expert at finding relevant information in documents using semantic search.',
tools=[txt_search],
verbose=True
)
```
## Input Schema
The tool supports two input schemas depending on initialization:
### Dynamic File Path Schema
```python
class TXTSearchToolSchema(BaseModel):
search_query: str # The semantic search query
txt: str # Path to the text file to search
```
### Fixed File Path Schema
```python
class FixedTXTSearchToolSchema(BaseModel):
search_query: str # The semantic search query
```
## Function Signature
```python
def __init__(self, txt: Optional[str] = None, **kwargs):
"""
Initialize the TXT search tool.
Args:
txt (Optional[str]): Fixed path to a text file. If provided, the tool will only search this file.
**kwargs: Additional arguments passed to the parent RagTool
"""
def _run(self, search_query: str, **kwargs: Any) -> Any:
"""
Perform semantic search on the text file.
Args:
search_query (str): The semantic search query
**kwargs: Additional arguments (including 'txt' for dynamic file path)
Returns:
str: Relevant text passages based on semantic search
"""
```
## Best Practices
1. Choose initialization method based on use case:
- Use fixed file path when repeatedly searching the same document
- Use dynamic file path when searching different documents
2. Write clear, semantic search queries
3. Handle potential file access errors in agent prompts
4. Consider memory usage for large text files (a size check is sketched below)
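One way to act on item 4 is to check a file's size before handing it to the tool; oversized files can be split first. The 10 MB ceiling and the `search_if_small` helper below are illustrative assumptions:
```python
import os

MAX_BYTES = 10 * 1024 * 1024  # illustrative 10 MB ceiling

def search_if_small(tool, path: str, query: str) -> str:
    """Refuse files too large to embed comfortably (illustrative sketch)."""
    if os.path.getsize(path) > MAX_BYTES:
        raise ValueError(f"{path} exceeds {MAX_BYTES} bytes; split it before searching")
    return tool.run(search_query=query, txt=path)
```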
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import TXTSearchTool
# Example 1: Fixed document search
documentation_search = TXTSearchTool(txt="api_documentation.txt")
# Example 2: Dynamic document search
flexible_search = TXTSearchTool()
# Create agents
doc_analyst = Agent(
role='Documentation Analyst',
goal='Find relevant API documentation sections',
backstory='Expert at analyzing technical documentation.',
tools=[documentation_search]
)
file_analyst = Agent(
role='File Analyst',
goal='Search through various text files',
backstory='Specialist in finding information across multiple documents.',
tools=[flexible_search]
)
# Define tasks
fixed_search_task = Task(
description="""Find all API endpoints related to user authentication
in the documentation.""",
agent=doc_analyst
)
# The agent will use:
# {
# "search_query": "user authentication API endpoints"
# }
dynamic_search_task = Task(
description="""Search through the logs.txt file for any database
connection errors.""",
agent=file_analyst
)
# The agent will use:
# {
# "search_query": "database connection errors",
# "txt": "logs.txt"
# }
# Create crew
crew = Crew(
agents=[doc_analyst, file_analyst],
tasks=[fixed_search_task, dynamic_search_task]
)
# Execute
result = crew.kickoff()
```
## Notes
- Inherits from RagTool for semantic search capabilities
- Supports both fixed and dynamic text file paths
- Uses embeddings for semantic search
- Optimized for text file analysis
- Thread-safe operations
- Automatically handles file loading and embedding

View File

@@ -0,0 +1,159 @@
---
title: YoutubeChannelSearchTool
description: A semantic search tool for YouTube channel content using RAG capabilities
icon: youtube
---
## YoutubeChannelSearchTool
The YoutubeChannelSearchTool is a specialized Retrieval-Augmented Generation (RAG) tool that enables semantic search within YouTube channel content. It inherits from the base RagTool class and provides both fixed and dynamic YouTube channel searching capabilities.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import YoutubeChannelSearchTool
# Method 1: Dynamic channel handle
youtube_search = YoutubeChannelSearchTool()
# Method 2: Fixed channel handle
fixed_channel_search = YoutubeChannelSearchTool(youtube_channel_handle="@example_channel")
# Create an agent with the tool
researcher = Agent(
role='Content Researcher',
goal='Search through YouTube channel content semantically',
backstory='Expert at finding relevant information in YouTube content.',
tools=[youtube_search],
verbose=True
)
```
## Input Schema
The tool supports two input schemas depending on initialization:
### Dynamic Channel Schema
```python
class YoutubeChannelSearchToolSchema(BaseModel):
search_query: str # The semantic search query
youtube_channel_handle: str # YouTube channel handle (with or without @)
```
### Fixed Channel Schema
```python
class FixedYoutubeChannelSearchToolSchema(BaseModel):
search_query: str # The semantic search query
```
## Function Signature
```python
def __init__(self, youtube_channel_handle: Optional[str] = None, **kwargs):
"""
Initialize the YouTube channel search tool.
Args:
youtube_channel_handle (Optional[str]): Fixed channel handle. If provided,
the tool will only search this channel.
**kwargs: Additional arguments passed to the parent RagTool
"""
def _run(self, search_query: str, **kwargs: Any) -> Any:
"""
Perform semantic search on the YouTube channel content.
Args:
search_query (str): The semantic search query
**kwargs: Additional arguments (including 'youtube_channel_handle' for dynamic mode)
Returns:
str: Relevant content from the YouTube channel based on semantic search
"""
```
## Best Practices
1. Choose initialization method based on use case:
- Use fixed channel handle when repeatedly searching the same channel
- Use dynamic handle when searching different channels
2. Write clear, semantic search queries
3. Channel handles can be provided with or without '@' prefix
4. Consider content availability and channel size
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import YoutubeChannelSearchTool
# Example 1: Fixed channel search
tech_channel_search = YoutubeChannelSearchTool(youtube_channel_handle="@TechChannel")
# Example 2: Dynamic channel search
flexible_search = YoutubeChannelSearchTool()
# Create agents
tech_analyst = Agent(
role='Tech Content Analyst',
goal='Find relevant tech tutorials and explanations',
backstory='Expert at analyzing technical YouTube content.',
tools=[tech_channel_search]
)
content_researcher = Agent(
role='Content Researcher',
goal='Search across multiple YouTube channels',
backstory='Specialist in finding information across various channels.',
tools=[flexible_search]
)
# Define tasks
fixed_search_task = Task(
description="""Find all tutorials related to machine learning
basics in the channel.""",
agent=tech_analyst
)
# The agent will use:
# {
# "search_query": "machine learning basics tutorial"
# }
dynamic_search_task = Task(
description="""Search through the @AIResearch channel for
content about neural networks.""",
agent=content_researcher
)
# The agent will use:
# {
# "search_query": "neural networks explanation",
# "youtube_channel_handle": "@AIResearch"
# }
# Create crew
crew = Crew(
agents=[tech_analyst, content_researcher],
tasks=[fixed_search_task, dynamic_search_task]
)
# Execute
result = crew.kickoff()
```
## Notes
- Inherits from RagTool for semantic search capabilities
- Supports both fixed and dynamic YouTube channel handles
- Automatically adds '@' prefix to channel handles if missing
- Uses embeddings for semantic search
- Thread-safe operations
- Automatically handles YouTube content loading and embedding

View File

@@ -0,0 +1,216 @@
---
title: YoutubeVideoSearchTool
description: A tool for semantic search within YouTube video content using RAG capabilities
icon: video
---
## YoutubeVideoSearchTool
The YoutubeVideoSearchTool enables semantic search capabilities for YouTube video content using Retrieval-Augmented Generation (RAG). It processes video content and allows searching through transcripts and metadata using natural language queries.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import YoutubeVideoSearchTool
# Method 1: Initialize with specific video
video_tool = YoutubeVideoSearchTool(
youtube_video_url="https://www.youtube.com/watch?v=example"
)
# Method 2: Initialize without video (specify at runtime)
flexible_video_tool = YoutubeVideoSearchTool()
# Create an agent with the tool
researcher = Agent(
role='Video Researcher',
goal='Search and analyze video content',
backstory='Expert at finding relevant information in videos.',
tools=[video_tool],
verbose=True
)
```
## Input Schema
### Fixed Video Schema (when URL provided during initialization)
```python
class FixedYoutubeVideoSearchToolSchema(BaseModel):
search_query: str = Field(
description="Mandatory search query you want to use to search the Youtube Video content"
)
```
### Flexible Video Schema (when URL provided at runtime)
```python
class YoutubeVideoSearchToolSchema(FixedYoutubeVideoSearchToolSchema):
youtube_video_url: str = Field(
description="Mandatory youtube_video_url path you want to search"
)
```
## Function Signature
```python
def __init__(
self,
youtube_video_url: Optional[str] = None,
**kwargs
):
"""
Initialize the YouTube video search tool.
Args:
youtube_video_url (Optional[str]): URL of YouTube video (optional)
**kwargs: Additional arguments for RAG tool configuration
"""
def _run(
self,
search_query: str,
**kwargs: Any
) -> str:
"""
Execute semantic search on video content.
Args:
search_query (str): Query to search in the video
**kwargs: Additional arguments including youtube_video_url if not initialized
Returns:
str: Relevant content from the video matching the query
"""
```
## Best Practices
1. Video URL Management:
- Use complete YouTube URLs
- Verify video accessibility
- Handle region restrictions
2. Search Optimization:
- Use specific, focused queries
- Consider video context
- Test with sample queries first
3. Performance Considerations:
- Pre-initialize for repeated searches
- Handle long videos appropriately
- Monitor processing time
4. Error Handling:
- Verify video availability
- Handle unavailable videos
- Manage API limitations
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import YoutubeVideoSearchTool
# Initialize tool with specific video
video_tool = YoutubeVideoSearchTool(
youtube_video_url="https://www.youtube.com/watch?v=example"
)
# Create agent
researcher = Agent(
role='Video Researcher',
goal='Extract insights from video content',
backstory='Expert at analyzing video content.',
tools=[video_tool]
)
# Define task
research_task = Task(
description="""Find all mentions of machine learning
applications from the video content.""",
agent=researcher
)
# The tool will use:
# {
# "search_query": "machine learning applications"
# }
# Create crew
crew = Crew(
agents=[researcher],
tasks=[research_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Dynamic Video Selection
```python
# Initialize without video URL
flexible_tool = YoutubeVideoSearchTool()
# Search different videos
tech_results = flexible_tool.run(
search_query="quantum computing",
youtube_video_url="https://youtube.com/watch?v=tech123"
)
science_results = flexible_tool.run(
search_query="particle physics",
youtube_video_url="https://youtube.com/watch?v=science456"
)
```
### Multiple Video Analysis
```python
# Create tools for different videos
tech_tool = YoutubeVideoSearchTool(
youtube_video_url="https://youtube.com/watch?v=tech123"
)
science_tool = YoutubeVideoSearchTool(
youtube_video_url="https://youtube.com/watch?v=science456"
)
# Create agent with multiple tools
analyst = Agent(
role='Content Analyst',
goal='Cross-reference multiple videos',
tools=[tech_tool, science_tool]
)
```
### Error Handling Example
```python
try:
video_tool = YoutubeVideoSearchTool()
results = video_tool.run(
search_query="key concepts",
youtube_video_url="https://youtube.com/watch?v=example"
)
print(results)
except Exception as e:
print(f"Error processing video: {str(e)}")
```
## Notes
- Inherits from RagTool
- Uses embedchain for processing
- Supports semantic search
- Dynamic video specification
- Efficient content retrieval
- Thread-safe operations
- Maintains search context
- Handles video transcripts
- Processes video metadata
- Memory-efficient processing

View File

@@ -8,27 +8,38 @@ authors = [
{ name = "Joao Moura", email = "joao@crewai.com" }
]
dependencies = [
# Core Dependencies
"pydantic>=2.4.2",
"openai>=1.13.3",
"litellm>=1.44.22",
"instructor>=1.3.3",
# Text Processing
"pdfplumber>=0.11.4",
"regex>=2024.9.11",
# Telemetry and Monitoring
"opentelemetry-api>=1.22.0",
"opentelemetry-sdk>=1.22.0",
"opentelemetry-exporter-otlp-proto-http>=1.22.0",
"instructor>=1.3.3",
"regex>=2024.9.11",
"click>=8.1.7",
# Data Handling
"chromadb>=0.5.23",
"openpyxl>=3.1.5",
"pyvis>=0.3.2",
# Authentication and Security
"auth0-python>=4.7.1",
"python-dotenv>=1.0.0",
# Configuration and Utils
"click>=8.1.7",
"appdirs>=1.4.4",
"jsonref>=1.1.0",
"json-repair>=0.25.2",
"auth0-python>=4.7.1",
"litellm>=1.44.22",
"pyvis>=0.3.2",
"uv>=0.4.25",
"tomli-w>=1.1.0",
"tomli>=2.0.2",
"chromadb>=0.5.23",
"pdfplumber>=0.11.4",
"openpyxl>=3.1.5",
"blinker>=1.9.0",
]
@@ -39,6 +50,9 @@ Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = ["crewai-tools>=0.17.0"]
embeddings = [
"tiktoken~=0.7.0"
]
agentops = ["agentops>=0.3.0"]
fastembed = ["fastembed>=0.4.1"]
pdfplumber = [

View File

@@ -294,7 +294,14 @@ class Agent(BaseAgent):
)
if self.crew and self.crew.memory:
memory = self.crew.contextual_memory.build_context_for_task(task, context)
contextual_memory = ContextualMemory(
self.crew.memory_config,
self.crew._short_term_memory,
self.crew._long_term_memory,
self.crew._entity_memory,
self.crew._user_memory,
)
memory = contextual_memory.build_context_for_task(task, context)
if memory.strip() != "":
task_prompt += self.i18n.slice("memory").format(memory=memory)

View File

@@ -358,9 +358,9 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
if self.crew is not None and hasattr(self.crew, "_train_iteration"):
train_iteration = self.crew._train_iteration
if agent_id in training_data and isinstance(train_iteration, int):
training_data[agent_id][train_iteration]["improved_output"] = (
result.output
)
training_data[agent_id][train_iteration][
"improved_output"
] = result.output
training_handler.save(training_data)
else:
self._printer.print(

View File

@@ -153,12 +153,8 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
login_response_json = login_response.json()
settings = Settings()
settings.tool_repository_username = login_response_json["credential"][
"username"
]
settings.tool_repository_password = login_response_json["credential"][
"password"
]
settings.tool_repository_username = login_response_json["credential"]["username"]
settings.tool_repository_password = login_response_json["credential"]["password"]
settings.dump()
console.print(
@@ -183,7 +179,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
capture_output=False,
env=self._build_env_with_credentials(repository_handle),
text=True,
check=True,
check=True
)
if add_package_result.stderr:
@@ -208,11 +204,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
settings = Settings()
env = os.environ.copy()
env[f"UV_INDEX_{repository_handle}_USERNAME"] = str(
settings.tool_repository_username or ""
)
env[f"UV_INDEX_{repository_handle}_PASSWORD"] = str(
settings.tool_repository_password or ""
)
env[f"UV_INDEX_{repository_handle}_USERNAME"] = str(settings.tool_repository_username or "")
env[f"UV_INDEX_{repository_handle}_PASSWORD"] = str(settings.tool_repository_password or "")
return env

View File

@@ -25,7 +25,6 @@ from crewai.crews.crew_output import CrewOutput
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.llm import LLM
from crewai.memory.contextual.contextual_memory import ContextualMemory
from crewai.memory.entity.entity_memory import EntityMemory
from crewai.memory.long_term.long_term_memory import LongTermMemory
from crewai.memory.short_term.short_term_memory import ShortTermMemory
@@ -279,13 +278,6 @@ class Crew(BaseModel):
)
else:
self._user_memory = None
self.contextual_memory = ContextualMemory(
memory_config=self.memory_config,
stm=self._short_term_memory,
ltm=self._long_term_memory,
em=self._entity_memory,
um=self._user_memory,
)
return self
@model_validator(mode="after")

View File

@@ -30,7 +30,47 @@ from crewai.telemetry import Telemetry
T = TypeVar("T", bound=Union[BaseModel, Dict[str, Any]])
def start(condition=None):
def start(condition: Optional[Union[str, dict, Callable]] = None) -> Callable:
"""
Marks a method as a flow's starting point.
This decorator designates a method as an entry point for the flow execution.
It can optionally specify conditions that trigger the start based on other
method executions.
Parameters
----------
condition : Optional[Union[str, dict, Callable]], optional
Defines when the start method should execute. Can be:
- str: Name of a method that triggers this start
- dict: Contains "type" ("AND"/"OR") and "methods" (list of triggers)
- Callable: A method reference that triggers this start
Default is None, meaning unconditional start.
Returns
-------
Callable
A decorator function that marks the method as a flow start point.
Raises
------
ValueError
If the condition format is invalid.
Examples
--------
>>> @start() # Unconditional start
... def begin_flow(self):
...     pass
>>> @start("method_name") # Start after specific method
... def conditional_start(self):
...     pass
>>> @start(and_("method1", "method2")) # Start after multiple methods
... def complex_start(self):
...     pass
"""
def decorator(func):
func.__is_start_method__ = True
if condition is not None:
@@ -55,8 +95,42 @@ def start(condition=None):
return decorator
def listen(condition):
def listen(condition: Union[str, dict, Callable]) -> Callable:
"""
Creates a listener that executes when specified conditions are met.
This decorator sets up a method to execute in response to other method
executions in the flow. It supports both simple and complex triggering
conditions.
Parameters
----------
condition : Union[str, dict, Callable]
Specifies when the listener should execute. Can be:
- str: Name of a method that triggers this listener
- dict: Contains "type" ("AND"/"OR") and "methods" (list of triggers)
- Callable: A method reference that triggers this listener
Returns
-------
Callable
A decorator function that sets up the method as a listener.
Raises
------
ValueError
If the condition format is invalid.
Examples
--------
>>> @listen("process_data") # Listen to single method
... def handle_processed_data(self):
...     pass
>>> @listen(or_("success", "failure")) # Listen to multiple methods
... def handle_completion(self):
...     pass
"""
def decorator(func):
if isinstance(condition, str):
func.__trigger_methods__ = [condition]
@@ -80,10 +154,49 @@ def listen(condition):
return decorator
def router(condition):
def router(condition: Union[str, dict, Callable]) -> Callable:
"""
Creates a routing method that directs flow execution based on conditions.
This decorator marks a method as a router, which can dynamically determine
the next steps in the flow based on its return value. Routers are triggered
by specified conditions and can return constants that determine which path
the flow should take.
Parameters
----------
condition : Union[str, dict, Callable]
Specifies when the router should execute. Can be:
- str: Name of a method that triggers this router
- dict: Contains "type" ("AND"/"OR") and "methods" (list of triggers)
- Callable: A method reference that triggers this router
Returns
-------
Callable
A decorator function that sets up the method as a router.
Raises
------
ValueError
If the condition format is invalid.
Examples
--------
>>> @router("check_status")
... def route_based_on_status(self):
...     if self.state.status == "success":
...         return SUCCESS
...     return FAILURE
>>> @router(and_("validate", "process"))
... def complex_routing(self):
...     if all([self.state.valid, self.state.processed]):
...         return CONTINUE
...     return STOP
"""
def decorator(func):
func.__is_router__ = True
# Handle conditions like listen/start
if isinstance(condition, str):
func.__trigger_methods__ = [condition]
func.__condition_type__ = "OR"
@@ -105,8 +218,39 @@ def router(condition):
return decorator
def or_(*conditions):
def or_(*conditions: Union[str, dict, Callable]) -> dict:
"""
Combines multiple conditions with OR logic for flow control.
Creates a condition that is satisfied when any of the specified conditions
are met. This is used with @start, @listen, or @router decorators to create
complex triggering conditions.
Parameters
----------
*conditions : Union[str, dict, Callable]
Variable number of conditions that can be:
- str: Method names
- dict: Existing condition dictionaries
- Callable: Method references
Returns
-------
dict
A condition dictionary with format:
{"type": "OR", "methods": list_of_method_names}
Raises
------
ValueError
If any condition is invalid.
Examples
--------
>>> @listen(or_("success", "timeout"))
... def handle_completion(self):
...     pass
"""
methods = []
for condition in conditions:
if isinstance(condition, dict) and "methods" in condition:
@@ -120,7 +264,39 @@ def or_(*conditions):
return {"type": "OR", "methods": methods}
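Taken together, these decorators compose into small event graphs. A minimal sketch of how they combine at the call site (assuming the documented public API, with Flow, start, listen, and or_ importable from crewai.flow.flow; the class and method names here are illustrative):

from crewai.flow.flow import Flow, listen, or_, start

class ExampleFlow(Flow):
    @start()  # unconditional entry point
    def fetch(self):
        return "raw data"

    @listen("fetch")  # runs after fetch and receives its result
    def process(self, result):
        return f"processed {result}"

    @listen(or_("fetch", "process"))  # runs when either trigger completes
    def log_progress(self):
        pass

ExampleFlow().kickoff()

Here or_("fetch", "process") evaluates to {"type": "OR", "methods": ["fetch", "process"]}, which is exactly the condition dictionary format the docstrings above describe.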
def and_(*conditions):
def and_(*conditions: Union[str, dict, Callable]) -> dict:
"""
Combines multiple conditions with AND logic for flow control.
Creates a condition that is satisfied only when all specified conditions
are met. This is used with @start, @listen, or @router decorators to create
complex triggering conditions.
Parameters
----------
*conditions : Union[str, dict, Callable]
Variable number of conditions that can be:
- str: Method names
- dict: Existing condition dictionaries
- Callable: Method references
Returns
-------
dict
A condition dictionary with format:
{"type": "AND", "methods": list_of_method_names}
Raises
------
ValueError
If any condition is invalid.
Examples
--------
>>> @listen(and_("validated", "processed"))
... def handle_complete_data(self):
...     pass
"""
methods = []
for condition in conditions:
if isinstance(condition, dict) and "methods" in condition:
@@ -286,6 +462,23 @@ class Flow(Generic[T], metaclass=FlowMeta):
return final_output
async def _execute_start_method(self, start_method_name: str) -> None:
"""
Executes a flow's start method and its triggered listeners.
This internal method handles the execution of methods marked with @start
decorator and manages the subsequent chain of listener executions.
Parameters
----------
start_method_name : str
The name of the start method to execute.
Notes
-----
- Executes the start method and captures its result
- Triggers execution of any listeners waiting on this start method
- Part of the flow's initialization sequence
"""
result = await self._execute_method(
start_method_name, self._methods[start_method_name]
)
@@ -306,6 +499,28 @@ class Flow(Generic[T], metaclass=FlowMeta):
return result
async def _execute_listeners(self, trigger_method: str, result: Any) -> None:
"""
Executes all listeners and routers triggered by a method completion.
This internal method manages the execution flow by:
1. First executing all triggered routers sequentially
2. Then executing all triggered listeners in parallel
Parameters
----------
trigger_method : str
The name of the method that triggered these listeners.
result : Any
The result from the triggering method, passed to listeners
that accept parameters.
Notes
-----
- Routers are executed sequentially to maintain flow control
- Each router's result becomes the new trigger_method
- Normal listeners are executed in parallel for efficiency
- Listeners can receive the trigger method's result as a parameter
"""
# First, handle routers repeatedly until no router triggers anymore
while True:
routers_triggered = self._find_triggered_methods(
@@ -335,6 +550,33 @@ class Flow(Generic[T], metaclass=FlowMeta):
def _find_triggered_methods(
self, trigger_method: str, router_only: bool
) -> List[str]:
"""
Finds all methods that should be triggered based on conditions.
This internal method evaluates both OR and AND conditions to determine
which methods should be executed next in the flow.
Parameters
----------
trigger_method : str
The name of the method that just completed execution.
router_only : bool
If True, only consider router methods.
If False, only consider non-router methods.
Returns
-------
List[str]
Names of methods that should be triggered.
Notes
-----
- Handles both OR and AND conditions:
* OR: Triggers if any condition is met
* AND: Triggers only when all conditions are met
- Maintains state for AND conditions using _pending_and_listeners
- Separates router and normal listener evaluation
"""
triggered = []
for listener_name, (condition_type, methods) in self._listeners.items():
is_router = listener_name in self._routers
@@ -363,6 +605,33 @@ class Flow(Generic[T], metaclass=FlowMeta):
return triggered
async def _execute_single_listener(self, listener_name: str, result: Any) -> None:
"""
Executes a single listener method with proper event handling.
This internal method manages the execution of an individual listener,
including parameter inspection, event emission, and error handling.
Parameters
----------
listener_name : str
The name of the listener method to execute.
result : Any
The result from the triggering method, which may be passed
to the listener if it accepts parameters.
Notes
-----
- Inspects method signature to determine if it accepts the trigger result
- Emits events for method execution start and finish
- Handles errors gracefully with detailed logging
- Recursively triggers listeners of this listener
- Supports both parameterized and parameter-less listeners
Error Handling
--------------
Catches and logs any exceptions during execution, preventing
individual listener failures from breaking the entire flow.
"""
try:
method = self._methods[listener_name]

View File

@@ -1,12 +1,14 @@
# flow_visualizer.py
import os
from pathlib import Path
from pyvis.network import Network
from crewai.flow.config import COLORS, NODE_STYLES
from crewai.flow.html_template_handler import HTMLTemplateHandler
from crewai.flow.legend_generator import generate_legend_items_html, get_legend_items
from crewai.flow.path_utils import safe_path_join, validate_path_exists
from crewai.flow.utils import calculate_node_levels
from crewai.flow.visualization_utils import (
add_edges,
@@ -16,89 +18,209 @@ from crewai.flow.visualization_utils import (
class FlowPlot:
"""Handles the creation and rendering of flow visualization diagrams."""
def __init__(self, flow):
"""
Initialize FlowPlot with a flow object.
Parameters
----------
flow : Flow
A Flow instance to visualize.
Raises
------
ValueError
If flow object is invalid or missing required attributes.
"""
if not hasattr(flow, '_methods'):
raise ValueError("Invalid flow object: missing '_methods' attribute")
if not hasattr(flow, '_listeners'):
raise ValueError("Invalid flow object: missing '_listeners' attribute")
if not hasattr(flow, '_start_methods'):
raise ValueError("Invalid flow object: missing '_start_methods' attribute")
self.flow = flow
self.colors = COLORS
self.node_styles = NODE_STYLES
def plot(self, filename):
net = Network(
directed=True,
height="750px",
width="100%",
bgcolor=self.colors["bg"],
layout=None,
)
# Set options to disable physics
net.set_options(
"""
var options = {
"nodes": {
"font": {
"multi": "html"
}
},
"physics": {
"enabled": false
}
}
"""
)
# Calculate levels for nodes
node_levels = calculate_node_levels(self.flow)
# Compute positions
node_positions = compute_positions(self.flow, node_levels)
# Add nodes to the network
add_nodes_to_network(net, self.flow, node_positions, self.node_styles)
# Add edges to the network
add_edges(net, self.flow, node_positions, self.colors)
network_html = net.generate_html()
final_html_content = self._generate_final_html(network_html)
# Save the final HTML content to the file
with open(f"{filename}.html", "w", encoding="utf-8") as f:
f.write(final_html_content)
print(f"Plot saved as {filename}.html")
self._cleanup_pyvis_lib()
"""
Generate and save an HTML visualization of the flow.
Parameters
----------
filename : str
Name of the output file (without extension).
Raises
------
ValueError
If filename is invalid or network generation fails.
IOError
If file operations fail or visualization cannot be generated.
RuntimeError
If network visualization generation fails.
"""
if not filename or not isinstance(filename, str):
raise ValueError("Filename must be a non-empty string")
try:
# Initialize network
net = Network(
directed=True,
height="750px",
width="100%",
bgcolor=self.colors["bg"],
layout=None,
)
# Set options to disable physics
net.set_options(
"""
var options = {
"nodes": {
"font": {
"multi": "html"
}
},
"physics": {
"enabled": false
}
}
"""
)
# Calculate levels for nodes
try:
node_levels = calculate_node_levels(self.flow)
except Exception as e:
raise ValueError(f"Failed to calculate node levels: {str(e)}")
# Compute positions
try:
node_positions = compute_positions(self.flow, node_levels)
except Exception as e:
raise ValueError(f"Failed to compute node positions: {str(e)}")
# Add nodes to the network
try:
add_nodes_to_network(net, self.flow, node_positions, self.node_styles)
except Exception as e:
raise RuntimeError(f"Failed to add nodes to network: {str(e)}")
# Add edges to the network
try:
add_edges(net, self.flow, node_positions, self.colors)
except Exception as e:
raise RuntimeError(f"Failed to add edges to network: {str(e)}")
# Generate HTML
try:
network_html = net.generate_html()
final_html_content = self._generate_final_html(network_html)
except Exception as e:
raise RuntimeError(f"Failed to generate network visualization: {str(e)}")
# Save the final HTML content to the file
try:
with open(f"{filename}.html", "w", encoding="utf-8") as f:
f.write(final_html_content)
print(f"Plot saved as {filename}.html")
except IOError as e:
raise IOError(f"Failed to save flow visualization to {filename}.html: {str(e)}")
except (ValueError, RuntimeError, IOError) as e:
raise e
except Exception as e:
raise RuntimeError(f"Unexpected error during flow visualization: {str(e)}")
finally:
self._cleanup_pyvis_lib()
def _generate_final_html(self, network_html):
# Extract just the body content from the generated HTML
current_dir = os.path.dirname(__file__)
template_path = os.path.join(
current_dir, "assets", "crewai_flow_visual_template.html"
)
logo_path = os.path.join(current_dir, "assets", "crewai_logo.svg")
html_handler = HTMLTemplateHandler(template_path, logo_path)
network_body = html_handler.extract_body_content(network_html)
# Generate the legend items HTML
legend_items = get_legend_items(self.colors)
legend_items_html = generate_legend_items_html(legend_items)
final_html_content = html_handler.generate_final_html(
network_body, legend_items_html
)
return final_html_content
"""
Generate the final HTML content with network visualization and legend.
Parameters
----------
network_html : str
HTML content generated by pyvis Network.
Returns
-------
str
Complete HTML content with styling and legend.
Raises
------
IOError
If template or logo files cannot be accessed.
ValueError
If network_html is invalid.
"""
if not network_html:
raise ValueError("Invalid network HTML content")
try:
# Extract just the body content from the generated HTML
current_dir = os.path.dirname(__file__)
template_path = safe_path_join("assets", "crewai_flow_visual_template.html", root=current_dir)
logo_path = safe_path_join("assets", "crewai_logo.svg", root=current_dir)
if not os.path.exists(template_path):
raise IOError(f"Template file not found: {template_path}")
if not os.path.exists(logo_path):
raise IOError(f"Logo file not found: {logo_path}")
html_handler = HTMLTemplateHandler(template_path, logo_path)
network_body = html_handler.extract_body_content(network_html)
# Generate the legend items HTML
legend_items = get_legend_items(self.colors)
legend_items_html = generate_legend_items_html(legend_items)
final_html_content = html_handler.generate_final_html(
network_body, legend_items_html
)
return final_html_content
except Exception as e:
raise IOError(f"Failed to generate visualization HTML: {str(e)}")
def _cleanup_pyvis_lib(self):
# Clean up the generated lib folder
lib_folder = os.path.join(os.getcwd(), "lib")
"""
Clean up the generated lib folder from pyvis.
This method safely removes the temporary lib directory created by pyvis
during network visualization generation.
"""
try:
lib_folder = safe_path_join("lib", root=os.getcwd())
if os.path.exists(lib_folder) and os.path.isdir(lib_folder):
import shutil
shutil.rmtree(lib_folder)
except ValueError as e:
print(f"Error validating lib folder path: {e}")
except Exception as e:
print(f"Error cleaning up {lib_folder}: {e}")
print(f"Error cleaning up lib folder: {e}")
def plot_flow(flow, filename="flow_plot"):
"""
Convenience function to create and save a flow visualization.
Parameters
----------
flow : Flow
Flow instance to visualize.
filename : str, optional
Output filename without extension, by default "flow_plot".
Raises
------
ValueError
If flow object or filename is invalid.
IOError
If file operations fail.
"""
visualizer = FlowPlot(flow)
visualizer.plot(filename)
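A short usage sketch for the convenience function (assuming pyvis is installed, and reusing the hypothetical ExampleFlow pattern from earlier):

from crewai.flow.flow import Flow, start

class ExampleFlow(Flow):
    @start()
    def begin(self):
        pass

plot_flow(ExampleFlow(), "example_flow")  # writes example_flow.html to the working directory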

View File

@@ -1,26 +1,53 @@
import base64
import re
from pathlib import Path
from crewai.flow.path_utils import safe_path_join, validate_path_exists
class HTMLTemplateHandler:
"""Handles HTML template processing and generation for flow visualization diagrams."""
def __init__(self, template_path, logo_path):
self.template_path = template_path
self.logo_path = logo_path
"""
Initialize HTMLTemplateHandler with validated template and logo paths.
Parameters
----------
template_path : str
Path to the HTML template file.
logo_path : str
Path to the logo image file.
Raises
------
ValueError
If template or logo paths are invalid or files don't exist.
"""
try:
self.template_path = validate_path_exists(template_path, "file")
self.logo_path = validate_path_exists(logo_path, "file")
except ValueError as e:
raise ValueError(f"Invalid template or logo path: {e}")
def read_template(self):
"""Read and return the HTML template file contents."""
with open(self.template_path, "r", encoding="utf-8") as f:
return f.read()
def encode_logo(self):
"""Convert the logo SVG file to base64 encoded string."""
with open(self.logo_path, "rb") as logo_file:
logo_svg_data = logo_file.read()
return base64.b64encode(logo_svg_data).decode("utf-8")
def extract_body_content(self, html):
"""Extract and return content between body tags from HTML string."""
match = re.search("<body.*?>(.*?)</body>", html, re.DOTALL)
return match.group(1) if match else ""
def generate_legend_items_html(self, legend_items):
"""Generate HTML markup for the legend items."""
legend_items_html = ""
for item in legend_items:
if "border" in item:
@@ -48,6 +75,7 @@ class HTMLTemplateHandler:
return legend_items_html
def generate_final_html(self, network_body, legend_items_html, title="Flow Plot"):
"""Combine all components into final HTML document with network visualization."""
html_template = self.read_template()
logo_svg_base64 = self.encode_logo()

View File

@@ -1,3 +1,4 @@
def get_legend_items(colors):
return [
{"label": "Start Method", "color": colors["start"]},

View File

@@ -0,0 +1,135 @@
"""
Path utilities for secure file operations in CrewAI flow module.
This module provides utilities for secure path handling to prevent directory
traversal attacks and ensure paths remain within allowed boundaries.
"""
import os
from pathlib import Path
from typing import List, Union
def safe_path_join(*parts: str, root: Union[str, Path, None] = None) -> str:
"""
Safely join path components and ensure the result is within allowed boundaries.
Parameters
----------
*parts : str
Variable number of path components to join.
root : Union[str, Path, None], optional
Root directory to use as base. If None, uses current working directory.
Returns
-------
str
String representation of the resolved path.
Raises
------
ValueError
If the resulting path would be outside the root directory
or if any path component is invalid.
"""
if not parts:
raise ValueError("No path components provided")
try:
# Convert all parts to strings and clean them
clean_parts = [str(part).strip() for part in parts if part]
if not clean_parts:
raise ValueError("No valid path components provided")
# Establish root directory
root_path = Path(root).resolve() if root else Path.cwd()
# Join and resolve the full path
full_path = Path(root_path, *clean_parts).resolve()
# Check if the resolved path is within root
if not str(full_path).startswith(str(root_path)):
raise ValueError(
f"Invalid path: Potential directory traversal. Path must be within {root_path}"
)
return str(full_path)
except Exception as e:
if isinstance(e, ValueError):
raise
raise ValueError(f"Invalid path components: {str(e)}")
def validate_path_exists(path: Union[str, Path], file_type: str = "file") -> str:
"""
Validate that a path exists and is of the expected type.
Parameters
----------
path : Union[str, Path]
Path to validate.
file_type : str, optional
Expected type ('file' or 'directory'), by default 'file'.
Returns
-------
str
Validated path as string.
Raises
------
ValueError
If path doesn't exist or is not of expected type.
"""
try:
path_obj = Path(path).resolve()
if not path_obj.exists():
raise ValueError(f"Path does not exist: {path}")
if file_type == "file" and not path_obj.is_file():
raise ValueError(f"Path is not a file: {path}")
elif file_type == "directory" and not path_obj.is_dir():
raise ValueError(f"Path is not a directory: {path}")
return str(path_obj)
except Exception as e:
if isinstance(e, ValueError):
raise
raise ValueError(f"Invalid path: {str(e)}")
def list_files(directory: Union[str, Path], pattern: str = "*") -> List[str]:
"""
Safely list files in a directory matching a pattern.
Parameters
----------
directory : Union[str, Path]
Directory to search in.
pattern : str, optional
Glob pattern to match files against, by default "*".
Returns
-------
List[str]
List of matching file paths.
Raises
------
ValueError
If directory is invalid or inaccessible.
"""
try:
dir_path = Path(directory).resolve()
if not dir_path.is_dir():
raise ValueError(f"Not a directory: {directory}")
return [str(p) for p in dir_path.glob(pattern) if p.is_file()]
except Exception as e:
if isinstance(e, ValueError):
raise
raise ValueError(f"Error listing files: {str(e)}")
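To make the security guarantees concrete, a small sketch exercising both helpers against a throwaway root directory (illustrative only):

import os
import tempfile

root = tempfile.mkdtemp()
open(os.path.join(root, "template.html"), "w").close()

# A join that stays inside the root resolves normally...
path = safe_path_join("template.html", root=root)
print(validate_path_exists(path, "file"))  # absolute, resolved path

# ...while a traversal attempt is rejected
try:
    safe_path_join("..", "etc", "passwd", root=root)
except ValueError as e:
    print(e)  # Invalid path: Potential directory traversal. ...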

View File

@@ -1,9 +1,25 @@
"""
Utility functions for flow visualization and dependency analysis.
This module provides core functionality for analyzing and manipulating flow structures,
including node level calculation, ancestor tracking, and return value analysis.
Functions in this module are primarily used by the visualization system to create
accurate and informative flow diagrams.
Example
-------
>>> flow = Flow()
>>> node_levels = calculate_node_levels(flow)
>>> ancestors = build_ancestor_dict(flow)
"""
import ast
import inspect
import textwrap
from typing import Any, Dict, List, Optional, Set, Union
def get_possible_return_constants(function):
def get_possible_return_constants(function: Any) -> Optional[List[str]]:
try:
source = inspect.getsource(function)
except OSError:
@@ -77,11 +93,34 @@ def get_possible_return_constants(function):
return list(return_values) if return_values else None
def calculate_node_levels(flow):
levels = {}
queue = []
visited = set()
pending_and_listeners = {}
def calculate_node_levels(flow: Any) -> Dict[str, int]:
"""
Calculate the hierarchical level of each node in the flow.
Performs a breadth-first traversal of the flow graph to assign levels
to nodes, starting with start methods at level 0.
Parameters
----------
flow : Any
The flow instance containing methods, listeners, and router configurations.
Returns
-------
Dict[str, int]
Dictionary mapping method names to their hierarchical levels.
Notes
-----
- Start methods are assigned level 0
- Each subsequent connected node is assigned level = parent_level + 1
- Handles both OR and AND conditions for listeners
- Processes router paths separately
"""
levels: Dict[str, int] = {}
queue: List[str] = []
visited: Set[str] = set()
pending_and_listeners: Dict[str, Set[str]] = {}
# Make all start methods at level 0
for method_name, method in flow._methods.items():
@@ -140,7 +179,20 @@ def calculate_node_levels(flow):
return levels
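A concrete trace of the level assignment, using a hypothetical two-step flow (the import path for the utilities is taken from the flow_visualizer.py imports shown above):

from crewai.flow.flow import Flow, listen, start
from crewai.flow.utils import calculate_node_levels

class TwoStep(Flow):
    @start()
    def begin(self):
        pass

    @listen("begin")
    def summarize(self):
        pass

print(calculate_node_levels(TwoStep()))  # expected: {'begin': 0, 'summarize': 1}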
def count_outgoing_edges(flow):
def count_outgoing_edges(flow: Any) -> Dict[str, int]:
"""
Count the number of outgoing edges for each method in the flow.
Parameters
----------
flow : Any
The flow instance to analyze.
Returns
-------
Dict[str, int]
Dictionary mapping method names to their outgoing edge count.
"""
counts = {}
for method_name in flow._methods:
counts[method_name] = 0
@@ -152,16 +204,53 @@ def count_outgoing_edges(flow):
return counts
def build_ancestor_dict(flow):
ancestors = {node: set() for node in flow._methods}
visited = set()
def build_ancestor_dict(flow: Any) -> Dict[str, Set[str]]:
"""
Build a dictionary mapping each node to its ancestor nodes.
Parameters
----------
flow : Any
The flow instance to analyze.
Returns
-------
Dict[str, Set[str]]
Dictionary mapping each node to a set of its ancestor nodes.
"""
ancestors: Dict[str, Set[str]] = {node: set() for node in flow._methods}
visited: Set[str] = set()
for node in flow._methods:
if node not in visited:
dfs_ancestors(node, ancestors, visited, flow)
return ancestors
def dfs_ancestors(node, ancestors, visited, flow):
def dfs_ancestors(
node: str,
ancestors: Dict[str, Set[str]],
visited: Set[str],
flow: Any
) -> None:
"""
Perform depth-first search to build ancestor relationships.
Parameters
----------
node : str
Current node being processed.
ancestors : Dict[str, Set[str]]
Dictionary tracking ancestor relationships.
visited : Set[str]
Set of already visited nodes.
flow : Any
The flow instance being analyzed.
Notes
-----
This function modifies the ancestors dictionary in-place to build
the complete ancestor graph.
"""
if node in visited:
return
visited.add(node)
@@ -185,12 +274,48 @@ def dfs_ancestors(node, ancestors, visited, flow):
dfs_ancestors(listener_name, ancestors, visited, flow)
def is_ancestor(node, ancestor_candidate, ancestors):
def is_ancestor(node: str, ancestor_candidate: str, ancestors: Dict[str, Set[str]]) -> bool:
"""
Check if one node is an ancestor of another.
Parameters
----------
node : str
The node to check ancestors for.
ancestor_candidate : str
The potential ancestor node.
ancestors : Dict[str, Set[str]]
Dictionary containing ancestor relationships.
Returns
-------
bool
True if ancestor_candidate is an ancestor of node, False otherwise.
"""
return ancestor_candidate in ancestors.get(node, set())
def build_parent_children_dict(flow):
parent_children = {}
def build_parent_children_dict(flow: Any) -> Dict[str, List[str]]:
"""
Build a dictionary mapping parent nodes to their children.
Parameters
----------
flow : Any
The flow instance to analyze.
Returns
-------
Dict[str, List[str]]
Dictionary mapping parent method names to lists of their child method names.
Notes
-----
- Maps listeners to their trigger methods
- Maps router methods to their paths and listeners
- Children lists are sorted for consistent ordering
"""
parent_children: Dict[str, List[str]] = {}
# Map listeners to their trigger methods
for listener_name, (_, trigger_methods) in flow._listeners.items():
@@ -214,7 +339,24 @@ def build_parent_children_dict(flow):
return parent_children
def get_child_index(parent, child, parent_children):
def get_child_index(parent: str, child: str, parent_children: Dict[str, List[str]]) -> int:
"""
Get the index of a child node in its parent's sorted children list.
Parameters
----------
parent : str
The parent node name.
child : str
The child node name to find the index for.
parent_children : Dict[str, List[str]]
Dictionary mapping parents to their children lists.
Returns
-------
int
Zero-based index of the child in its parent's sorted children list.
"""
children = parent_children.get(parent, [])
children.sort()
return children.index(child)
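A brief sketch tying the ancestor helpers together (reusing the hypothetical TwoStep flow from the calculate_node_levels example above):

from crewai.flow.utils import build_ancestor_dict, is_ancestor

ancestors = build_ancestor_dict(TwoStep())
print(is_ancestor("summarize", "begin", ancestors))  # True: begin precedes summarize
print(is_ancestor("begin", "summarize", ancestors))  # False: no path back to begin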

View File

@@ -1,5 +1,23 @@
"""
Utilities for creating visual representations of flow structures.
This module provides functions for generating network visualizations of flows,
including node placement, edge creation, and visual styling. It handles the
conversion of flow structures into visual network graphs with appropriate
styling and layout.
Example
-------
>>> flow = Flow()
>>> net = Network(directed=True)
>>> node_positions = compute_positions(flow, node_levels)
>>> add_nodes_to_network(net, flow, node_positions, node_styles)
>>> add_edges(net, flow, node_positions, colors)
"""
import ast
import inspect
from typing import Any, Dict, List, Optional, Tuple, Union
from .utils import (
build_ancestor_dict,
@@ -9,8 +27,25 @@ from .utils import (
)
def method_calls_crew(method):
"""Check if the method calls `.crew()`."""
def method_calls_crew(method: Any) -> bool:
"""
Check if the method contains a call to `.crew()`.
Parameters
----------
method : Any
The method to analyze for crew() calls.
Returns
-------
bool
True if the method calls .crew(), False otherwise.
Notes
-----
Uses AST analysis to detect method calls, specifically looking for
attribute access of 'crew'.
"""
try:
source = inspect.getsource(method)
source = inspect.cleandoc(source)
@@ -20,6 +55,7 @@ def method_calls_crew(method):
return False
class CrewCallVisitor(ast.NodeVisitor):
"""AST visitor to detect .crew() method calls."""
def __init__(self):
self.found = False
@@ -34,7 +70,34 @@ def method_calls_crew(method):
return visitor.found
def add_nodes_to_network(net, flow, node_positions, node_styles):
def add_nodes_to_network(
net: Any,
flow: Any,
node_positions: Dict[str, Tuple[float, float]],
node_styles: Dict[str, Dict[str, Any]]
) -> None:
"""
Add nodes to the network visualization with appropriate styling.
Parameters
----------
net : Any
The pyvis Network instance to add nodes to.
flow : Any
The flow instance containing method information.
node_positions : Dict[str, Tuple[float, float]]
Dictionary mapping node names to their (x, y) positions.
node_styles : Dict[str, Dict[str, Any]]
Dictionary containing style configurations for different node types.
Notes
-----
Node types include:
- Start methods
- Router methods
- Crew methods
- Regular methods
"""
def human_friendly_label(method_name):
return method_name.replace("_", " ").title()
@@ -73,9 +136,33 @@ def add_nodes_to_network(net, flow, node_positions, node_styles):
)
def compute_positions(flow, node_levels, y_spacing=150, x_spacing=150):
level_nodes = {}
node_positions = {}
def compute_positions(
flow: Any,
node_levels: Dict[str, int],
y_spacing: float = 150,
x_spacing: float = 150
) -> Dict[str, Tuple[float, float]]:
"""
Compute the (x, y) positions for each node in the flow graph.
Parameters
----------
flow : Any
The flow instance to compute positions for.
node_levels : Dict[str, int]
Dictionary mapping node names to their hierarchical levels.
y_spacing : float, optional
Vertical spacing between levels, by default 150.
x_spacing : float, optional
Horizontal spacing between nodes, by default 150.
Returns
-------
Dict[str, Tuple[float, float]]
Dictionary mapping node names to their (x, y) coordinates.
"""
level_nodes: Dict[int, List[str]] = {}
node_positions: Dict[str, Tuple[float, float]] = {}
for method_name, level in node_levels.items():
level_nodes.setdefault(level, []).append(method_name)
@@ -90,7 +177,33 @@ def compute_positions(flow, node_levels, y_spacing=150, x_spacing=150):
return node_positions
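For intuition, positions can be inspected directly on the same hypothetical TwoStep flow (the exact x values depend on centering arithmetic elided by this hunk, so the output shown is indicative only):

from crewai.flow.utils import calculate_node_levels

node_levels = calculate_node_levels(TwoStep())
positions = compute_positions(TwoStep(), node_levels)
# plausibly {'begin': (0.0, 0), 'summarize': (0.0, 150)} with the default 150px spacings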
def add_edges(net, flow, node_positions, colors):
def add_edges(
net: Any,
flow: Any,
node_positions: Dict[str, Tuple[float, float]],
colors: Dict[str, str]
) -> None:
"""
Add edges to the network visualization with appropriate styling.
Parameters
----------
net : Any
The pyvis Network instance to add edges to.
flow : Any
The flow instance containing edge information.
node_positions : Dict[str, Tuple[float, float]]
Dictionary mapping node names to their positions.
colors : Dict[str, str]
Dictionary mapping edge types to their colors.
Notes
-----
- Handles both normal listener edges and router edges
- Applies appropriate styling (color, dashes) based on edge type
- Adds curvature to edges when needed (cycles or multiple children)
"""
edge_smooth: Dict[str, Union[str, float]] = {"type": "continuous"} # Default value
ancestors = build_ancestor_dict(flow)
parent_children = build_parent_children_dict(flow)
@@ -126,7 +239,7 @@ def add_edges(net, flow, node_positions, colors):
else:
edge_smooth = {"type": "cubicBezier"}
else:
edge_smooth = False
edge_smooth.update({"type": "continuous"})
edge_style = {
"color": edge_color,
@@ -189,7 +302,7 @@ def add_edges(net, flow, node_positions, colors):
else:
edge_smooth = {"type": "cubicBezier"}
else:
edge_smooth = False
edge_smooth.update({"type": "continuous"})
edge_style = {
"color": colors["router_edge"],

View File

@@ -26,9 +26,10 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
safe_file_paths: List[Path] = Field(default_factory=list)
@field_validator("file_path", "file_paths", mode="before")
def validate_file_path(cls, v, values):
def validate_file_path(cls, v, info):
"""Validate that at least one of file_path or file_paths is provided."""
if v is None and ("file_path" not in values or values.get("file_path") is None):
# Single check if both are None, O(1) instead of nested conditions
if v is None and info.data.get("file_path" if info.field_name == "file_paths" else "file_paths") is None:
raise ValueError("Either file_path or file_paths must be provided")
return v

View File

@@ -1,5 +1,4 @@
from typing import Any, Dict, Optional
from crewai.task import Task
from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory, UserMemory
@@ -11,7 +10,7 @@ class ContextualMemory:
stm: ShortTermMemory,
ltm: LongTermMemory,
em: EntityMemory,
um: Optional[UserMemory],
um: UserMemory,
):
if memory_config is not None:
self.memory_provider = memory_config.get("provider")
@@ -22,7 +21,7 @@ class ContextualMemory:
self.em = em
self.um = um
def build_context_for_task(self, task: Task, context: str) -> str:
def build_context_for_task(self, task, context) -> str:
"""
Automatically builds a minimal, highly relevant set of contextual information
for a given task.
@@ -40,7 +39,7 @@ class ContextualMemory:
context.append(self._fetch_user_context(query))
return "\n".join(filter(None, context))
def _fetch_stm_context(self, query: str) -> str:
def _fetch_stm_context(self, query) -> str:
"""
Fetches recent relevant insights from STM related to the task's description and expected_output,
formatted as bullet points.
@@ -54,7 +53,7 @@ class ContextualMemory:
)
return f"Recent Insights:\n{formatted_results}" if stm_results else ""
def _fetch_ltm_context(self, task: str) -> Optional[str]:
def _fetch_ltm_context(self, task) -> Optional[str]:
"""
Fetches historical data or insights from LTM that are relevant to the task's description and expected_output,
formatted as bullet points.
@@ -73,7 +72,7 @@ class ContextualMemory:
return f"Historical Data:\n{formatted_results}" if ltm_results else ""
def _fetch_entity_context(self, query: str) -> str:
def _fetch_entity_context(self, query) -> str:
"""
Fetches relevant entity information from Entity Memory related to the task's description and expected_output,
formatted as bullet points.
@@ -95,8 +94,6 @@ class ContextualMemory:
Returns:
str: Formatted user memories as bullet points, or an empty string if none found.
"""
if not self.um:
return ""
user_memories = self.um.search(query)
if not user_memories:
return ""

View File

@@ -11,7 +11,7 @@ class EntityMemory(Memory):
"""
def __init__(self, crew=None, embedder_config=None, storage=None, path=None):
if crew and hasattr(crew, "memory_config") and crew.memory_config is not None:
if hasattr(crew, "memory_config") and crew.memory_config is not None:
self.memory_provider = crew.memory_config.get("provider")
else:
self.memory_provider = None

View File

@@ -15,17 +15,8 @@ class LongTermMemory(Memory):
"""
def __init__(self, storage=None, path=None):
"""Initialize long term memory.
Args:
storage: Optional custom storage instance
path: Optional custom path for storage location
Note:
If both storage and path are provided, storage takes precedence
"""
if not storage:
storage = LTMSQLiteStorage(storage_path=path) if path else LTMSQLiteStorage()
storage = LTMSQLiteStorage(db_path=path) if path else LTMSQLiteStorage()
super().__init__(storage)
def save(self, item: LongTermMemoryItem) -> None: # type: ignore # BUG?: Signature of "save" incompatible with supertype "Memory"

View File

@@ -15,7 +15,7 @@ class ShortTermMemory(Memory):
"""
def __init__(self, crew=None, embedder_config=None, storage=None, path=None):
if crew and hasattr(crew, "memory_config") and crew.memory_config is not None:
if hasattr(crew, "memory_config") and crew.memory_config is not None:
self.memory_provider = crew.memory_config.get("provider")
else:
self.memory_provider = None

View File

@@ -1,11 +1,5 @@
from abc import ABC, abstractmethod
from pathlib import Path
import os
from typing import Any, Dict, List, Optional, TypeVar
from abc import ABC, abstractmethod
from pathlib import Path
from crewai.utilities.paths import get_default_storage_path
from typing import Any, Dict, List, Optional
class BaseRAGStorage(ABC):
@@ -18,46 +12,17 @@ class BaseRAGStorage(ABC):
def __init__(
self,
type: str,
storage_path: Optional[Path] = None,
allow_reset: bool = True,
embedder_config: Optional[Any] = None,
crew: Any = None,
) -> None:
"""Initialize the BaseRAGStorage.
Args:
type: Type of storage being used
storage_path: Optional custom path for storage location
allow_reset: Whether storage can be reset
embedder_config: Optional configuration for the embedder
crew: Optional crew instance this storage belongs to
Raises:
PermissionError: If storage path is not writable
OSError: If storage path cannot be created
"""
):
self.type = type
self.storage_path = storage_path if storage_path else get_default_storage_path('rag')
# Validate storage path
try:
self.storage_path.parent.mkdir(parents=True, exist_ok=True)
if not os.access(self.storage_path.parent, os.W_OK):
raise PermissionError(f"No write permission for storage path: {self.storage_path}")
except OSError as e:
raise OSError(f"Failed to initialize storage path: {str(e)}")
self.allow_reset = allow_reset
self.embedder_config = embedder_config
self.crew = crew
self.agents = self._initialize_agents()
def _initialize_agents(self) -> str:
"""Initialize agent identifiers for storage.
Returns:
str: Underscore-joined string of sanitized agent role names
"""
if self.crew:
return "_".join(
[self._sanitize_role(agent.role) for agent in self.crew.agents]
@@ -66,27 +31,12 @@ class BaseRAGStorage(ABC):
@abstractmethod
def _sanitize_role(self, role: str) -> str:
"""Sanitizes agent roles to ensure valid directory names.
Args:
role: The agent role name to sanitize
Returns:
str: Sanitized role name safe for use in paths
"""
"""Sanitizes agent roles to ensure valid directory names."""
pass
@abstractmethod
def save(self, value: Any, metadata: Dict[str, Any]) -> None:
"""Save a value with metadata to the storage.
Args:
value: The value to store
metadata: Additional metadata to store with the value
Raises:
OSError: If there is an error writing to storage
"""
"""Save a value with metadata to the storage."""
pass
@abstractmethod
@@ -96,55 +46,25 @@ class BaseRAGStorage(ABC):
limit: int = 3,
filter: Optional[dict] = None,
score_threshold: float = 0.35,
) -> List[Dict[str, Any]]:
"""Search for entries in the storage.
Args:
query: The search query string
limit: Maximum number of results to return
filter: Optional filter criteria
score_threshold: Minimum similarity score threshold
Returns:
List[Dict[str, Any]]: List of matching entries with their metadata
"""
) -> List[Any]:
"""Search for entries in the storage."""
pass
@abstractmethod
def reset(self) -> None:
"""Reset the storage.
Raises:
OSError: If there is an error clearing storage
PermissionError: If reset is not allowed
"""
"""Reset the storage."""
pass
@abstractmethod
def _generate_embedding(
self, text: str, metadata: Optional[Dict[str, Any]] = None
) -> List[float]:
"""Generate an embedding for the given text and metadata.
Args:
text: Text to generate embedding for
metadata: Optional metadata to include in embedding
Returns:
List[float]: Vector embedding of the text
Raises:
ValueError: If text is empty or invalid
"""
) -> Any:
"""Generate an embedding for the given text and metadata."""
pass
@abstractmethod
def _initialize_app(self) -> None:
"""Initialize the vector db.
Raises:
OSError: If vector db initialization fails
"""
def _initialize_app(self):
"""Initialize the vector db."""
pass
def setup_config(self, config: Dict[str, Any]):

View File

@@ -1,13 +1,11 @@
import json
import os
import sqlite3
from pathlib import Path
from typing import Any, Dict, List, Optional
from crewai.task import Task
from crewai.utilities import Printer
from crewai.utilities.crew_json_encoder import CrewJSONEncoder
from crewai.utilities.paths import get_default_storage_path
from crewai.utilities.paths import db_storage_path
class KickoffTaskOutputsSQLiteStorage:
@@ -15,26 +13,10 @@ class KickoffTaskOutputsSQLiteStorage:
An updated SQLite storage class for kickoff task outputs storage.
"""
def __init__(self, storage_path: Optional[Path] = None) -> None:
"""Initialize kickoff task outputs storage.
Args:
storage_path: Optional custom path for storage location
Raises:
PermissionError: If storage path is not writable
OSError: If storage path cannot be created
"""
self.storage_path = storage_path if storage_path else get_default_storage_path('kickoff')
# Validate storage path
try:
self.storage_path.parent.mkdir(parents=True, exist_ok=True)
if not os.access(self.storage_path.parent, os.W_OK):
raise PermissionError(f"No write permission for storage path: {self.storage_path}")
except OSError as e:
raise OSError(f"Failed to initialize storage path: {str(e)}")
def __init__(
self, db_path: str = f"{db_storage_path()}/latest_kickoff_task_outputs.db"
) -> None:
self.db_path = db_path
self._printer: Printer = Printer()
self._initialize_db()
@@ -43,7 +25,7 @@ class KickoffTaskOutputsSQLiteStorage:
Initializes the SQLite database and creates LTM table
"""
try:
with sqlite3.connect(str(self.storage_path)) as conn:
with sqlite3.connect(self.db_path) as conn:
cursor = conn.cursor()
cursor.execute(
"""
@@ -73,21 +55,9 @@ class KickoffTaskOutputsSQLiteStorage:
task_index: int,
was_replayed: bool = False,
inputs: Dict[str, Any] = {},
) -> None:
"""Add a task output to storage.
Args:
task: The task whose output is being stored
output: The output data from the task
task_index: Index of this task in the sequence
was_replayed: Whether this was from a replay
inputs: Optional input data that led to this output
Raises:
sqlite3.Error: If there is an error saving to database
"""
):
try:
with sqlite3.connect(str(self.storage_path)) as conn:
with sqlite3.connect(self.db_path) as conn:
cursor = conn.cursor()
cursor.execute(
"""
@@ -120,7 +90,7 @@ class KickoffTaskOutputsSQLiteStorage:
Updates an existing row in the latest_kickoff_task_outputs table based on task_index.
"""
try:
with sqlite3.connect(str(self.storage_path)) as conn:
with sqlite3.connect(self.db_path) as conn:
cursor = conn.cursor()
fields = []
@@ -149,7 +119,7 @@ class KickoffTaskOutputsSQLiteStorage:
def load(self) -> Optional[List[Dict[str, Any]]]:
try:
with sqlite3.connect(str(self.storage_path)) as conn:
with sqlite3.connect(self.db_path) as conn:
cursor = conn.cursor()
cursor.execute("""
SELECT *
@@ -185,7 +155,7 @@ class KickoffTaskOutputsSQLiteStorage:
Deletes all rows from the latest_kickoff_task_outputs table.
"""
try:
with sqlite3.connect(str(self.storage_path)) as conn:
with sqlite3.connect(self.db_path) as conn:
cursor = conn.cursor()
cursor.execute("DELETE FROM latest_kickoff_task_outputs")
conn.commit()

View File

@@ -1,11 +1,9 @@
import json
import os
import sqlite3
from pathlib import Path
from typing import Any, Dict, List, Optional, Union
from crewai.utilities import Printer
from crewai.utilities.paths import get_default_storage_path
from crewai.utilities.paths import db_storage_path
class LTMSQLiteStorage:
@@ -13,26 +11,10 @@ class LTMSQLiteStorage:
An updated SQLite storage class for LTM data storage.
"""
def __init__(self, storage_path: Optional[Path] = None) -> None:
"""Initialize LTM SQLite storage.
Args:
storage_path: Optional custom path for storage location
Raises:
PermissionError: If storage path is not writable
OSError: If storage path cannot be created
"""
self.storage_path = storage_path if storage_path else get_default_storage_path('ltm')
# Validate storage path
try:
self.storage_path.parent.mkdir(parents=True, exist_ok=True)
if not os.access(self.storage_path.parent, os.W_OK):
raise PermissionError(f"No write permission for storage path: {self.storage_path}")
except OSError as e:
raise OSError(f"Failed to initialize storage path: {str(e)}")
def __init__(
self, db_path: str = f"{db_storage_path()}/long_term_memory_storage.db"
) -> None:
self.db_path = db_path
self._printer: Printer = Printer()
self._initialize_db()
@@ -41,7 +23,7 @@ class LTMSQLiteStorage:
Initializes the SQLite database and creates LTM table
"""
try:
with sqlite3.connect(str(self.storage_path)) as conn:
with sqlite3.connect(self.db_path) as conn:
cursor = conn.cursor()
cursor.execute(
"""
@@ -69,20 +51,9 @@ class LTMSQLiteStorage:
datetime: str,
score: Union[int, float],
) -> None:
"""Save a memory entry to long-term memory.
Args:
task_description: Description of the task this memory relates to
metadata: Additional data to store with the memory
datetime: Timestamp for when this memory was created
score: Relevance score for this memory (higher is more relevant)
Raises:
sqlite3.Error: If there is an error saving to the database
"""
"""Saves data to the LTM table with error handling."""
try:
with sqlite3.connect(str(self.storage_path)) as conn:
with sqlite3.connect(self.db_path) as conn:
cursor = conn.cursor()
cursor.execute(
"""
@@ -103,7 +74,7 @@ class LTMSQLiteStorage:
) -> Optional[List[Dict[str, Any]]]:
"""Queries the LTM table by task description with error handling."""
try:
with sqlite3.connect(str(self.storage_path)) as conn:
with sqlite3.connect(self.db_path) as conn:
cursor = conn.cursor()
cursor.execute(
f"""
@@ -138,7 +109,7 @@ class LTMSQLiteStorage:
) -> None:
"""Resets the LTM table with error handling."""
try:
with sqlite3.connect(str(self.storage_path)) as conn:
with sqlite3.connect(self.db_path) as conn:
cursor = conn.cursor()
cursor.execute("DELETE FROM long_term_memories")
conn.commit()

View File

@@ -19,7 +19,7 @@ class Mem0Storage(Storage):
self.memory_type = type
self.crew = crew
self.memory_config = crew.memory_config if crew else None
self.memory_config = crew.memory_config
# User ID is required for user memory type "user" since it's used as a unique identifier for the user.
user_id = self._get_user_id()
@@ -27,10 +27,9 @@ class Mem0Storage(Storage):
raise ValueError("User ID is required for user memory type")
# API key in memory config overrides the environment variable
if self.memory_config and self.memory_config.get("config"):
mem0_api_key = self.memory_config.get("config").get("api_key")
else:
mem0_api_key = os.getenv("MEM0_API_KEY")
mem0_api_key = self.memory_config.get("config", {}).get("api_key") or os.getenv(
"MEM0_API_KEY"
)
self.memory = MemoryClient(api_key=mem0_api_key)
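The collapsed expression keeps the documented precedence: a key present in memory_config wins, and the environment variable is only a fallback. A standalone illustration of the same pattern (hypothetical values):

import os

memory_config = {"config": {"api_key": "from-config"}}
mem0_api_key = memory_config.get("config", {}).get("api_key") or os.getenv("MEM0_API_KEY")
assert mem0_api_key == "from-config"  # MEM0_API_KEY is consulted only when the config key is absent or falsy

Note that, unlike the removed branch, the new code reads crew.memory_config unconditionally, so it assumes a crew is always supplied.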
def _sanitize_role(self, role: str) -> str:

View File

@@ -11,6 +11,7 @@ from chromadb.api import ClientAPI
from crewai.memory.storage.base_rag_storage import BaseRAGStorage
from crewai.utilities import EmbeddingConfigurator
from crewai.utilities.constants import MAX_FILE_NAME_LENGTH
from crewai.utilities.paths import db_storage_path
@contextlib.contextmanager
@@ -39,15 +40,9 @@ class RAGStorage(BaseRAGStorage):
app: ClientAPI | None = None
def __init__(
self,
type,
storage_path=None,
allow_reset=True,
embedder_config=None,
crew=None,
path=None,
self, type, allow_reset=True, embedder_config=None, crew=None, path=None
):
super().__init__(type, storage_path, allow_reset, embedder_config, crew)
super().__init__(type, allow_reset, embedder_config, crew)
agents = crew.agents if crew else []
agents = [self._sanitize_role(agent.role) for agent in agents]
agents = "_".join(agents)
@@ -95,7 +90,7 @@ class RAGStorage(BaseRAGStorage):
"""
Ensures file name does not exceed max allowed by OS
"""
base_path = f"{self.storage_path}/{type}"
base_path = f"{db_storage_path()}/{type}"
if len(file_name) > MAX_FILE_NAME_LENGTH:
logging.warning(
@@ -157,7 +152,7 @@ class RAGStorage(BaseRAGStorage):
try:
if self.app:
self.app.reset()
shutil.rmtree(f"{self.storage_path}/{self.type}")
shutil.rmtree(f"{db_storage_path()}/{self.type}")
self.app = None
self.collection = None
except Exception as e:

View File

@@ -66,6 +66,7 @@ def cache_handler(func):
def crew(func) -> Callable[..., Crew]:
@wraps(func)
def wrapper(self, *args, **kwargs) -> Crew:
instantiated_tasks = []

View File

@@ -216,5 +216,5 @@ def CrewBase(cls: T) -> T:
# Include base class (qual)name in the wrapper class (qual)name.
WrappedClass.__name__ = CrewBase.__name__ + "(" + cls.__name__ + ")"
WrappedClass.__qualname__ = CrewBase.__qualname__ + "(" + cls.__name__ + ")"
return cast(T, WrappedClass)

View File

@@ -179,6 +179,7 @@ class Task(BaseModel):
_execution_span: Optional[Span] = PrivateAttr(default=None)
_original_description: Optional[str] = PrivateAttr(default=None)
_original_expected_output: Optional[str] = PrivateAttr(default=None)
_original_output_file: Optional[str] = PrivateAttr(default=None)
_thread: Optional[threading.Thread] = PrivateAttr(default=None)
_execution_time: Optional[float] = PrivateAttr(default=None)
@@ -213,8 +214,46 @@ class Task(BaseModel):
@field_validator("output_file")
@classmethod
def output_file_validation(cls, value: str) -> str:
"""Validate the output file path by removing the / from the beginning of the path."""
def output_file_validation(cls, value: Optional[str]) -> Optional[str]:
"""Validate the output file path.
Args:
value: The output file path to validate. Can be None or a string.
If the path contains template variables (e.g. {var}), leading slashes are preserved.
For regular paths, leading slashes are stripped.
Returns:
The validated and potentially modified path, or None if no path was provided.
Raises:
ValueError: If the path contains invalid characters, path traversal attempts,
or other security concerns.
"""
if value is None:
return None
# Basic security checks
if ".." in value:
raise ValueError("Path traversal attempts are not allowed in output_file paths")
# Check for shell expansion first
if value.startswith('~') or value.startswith('$'):
raise ValueError("Shell expansion characters are not allowed in output_file paths")
# Then check other shell special characters
if any(char in value for char in ['|', '>', '<', '&', ';']):
raise ValueError("Shell special characters are not allowed in output_file paths")
# Don't strip leading slash if it's a template path with variables
if "{" in value or "}" in value:
# Validate template variable format
template_vars = [part.split("}")[0] for part in value.split("{")[1:]]
for var in template_vars:
if not var.isidentifier():
raise ValueError(f"Invalid template variable name: {var}")
return value
# Strip leading slash for regular paths
if value.startswith("/"):
return value[1:]
return value
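Because the validator is pure string logic, its rules are easy to spot-check. A hypothetical standalone rewrite (not the Task classmethod itself; template-variable name validation elided for brevity) with a few probes:

def check_output_file(value):
    if value is None:
        return None
    if ".." in value:
        raise ValueError("Path traversal attempts are not allowed in output_file paths")
    if value.startswith("~") or value.startswith("$"):
        raise ValueError("Shell expansion characters are not allowed in output_file paths")
    if any(char in value for char in ["|", ">", "<", "&", ";"]):
        raise ValueError("Shell special characters are not allowed in output_file paths")
    if "{" in value or "}" in value:
        return value  # template paths keep their leading slash
    return value[1:] if value.startswith("/") else value

assert check_output_file("/tmp/report.md") == "tmp/report.md"
assert check_output_file("/{run_id}/out.json") == "/{run_id}/out.json"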
@@ -373,9 +412,7 @@ class Task(BaseModel):
content = (
json_output
if json_output
else pydantic_output.model_dump_json()
if pydantic_output
else result
else pydantic_output.model_dump_json() if pydantic_output else result
)
self._save_file(content)
@@ -395,27 +432,89 @@ class Task(BaseModel):
tasks_slices = [self.description, output]
return "\n".join(tasks_slices)
def interpolate_inputs(self, inputs: Dict[str, Any]) -> None:
"""Interpolate inputs into the task description and expected output."""
def interpolate_inputs(self, inputs: Dict[str, Union[str, int, float]]) -> None:
"""Interpolate inputs into the task description, expected output, and output file path.
Args:
inputs: Dictionary mapping template variables to their values.
Supported value types are strings, integers, and floats.
Raises:
ValueError: If a required template variable is missing from inputs.
"""
if self._original_description is None:
self._original_description = self.description
if self._original_expected_output is None:
self._original_expected_output = self.expected_output
if self.output_file is not None and self._original_output_file is None:
self._original_output_file = self.output_file
if inputs:
if not inputs:
return
try:
self.description = self._original_description.format(**inputs)
except KeyError as e:
raise ValueError(f"Missing required template variable '{e.args[0]}' in description") from e
except ValueError as e:
raise ValueError(f"Error interpolating description: {str(e)}") from e
try:
self.expected_output = self.interpolate_only(
input_string=self._original_expected_output, inputs=inputs
)
except (KeyError, ValueError) as e:
raise ValueError(f"Error interpolating expected_output: {str(e)}") from e
if self.output_file is not None:
try:
self.output_file = self.interpolate_only(
input_string=self._original_output_file, inputs=inputs
)
except (KeyError, ValueError) as e:
raise ValueError(f"Error interpolating output_file path: {str(e)}") from e
def interpolate_only(self, input_string: str, inputs: Dict[str, Any]) -> str:
"""Interpolate placeholders (e.g., {key}) in a string while leaving JSON untouched."""
escaped_string = input_string.replace("{", "{{").replace("}", "}}")
for key in inputs.keys():
escaped_string = escaped_string.replace(f"{{{{{key}}}}}", f"{{{key}}}")
return escaped_string.format(**inputs)
def interpolate_only(self, input_string: Optional[str], inputs: Dict[str, Union[str, int, float]]) -> str:
"""Interpolate placeholders (e.g., {key}) in a string while leaving JSON untouched.
Args:
input_string: The string containing template variables to interpolate.
Can be None or empty, in which case an empty string is returned.
inputs: Dictionary mapping template variables to their values.
Supported value types are strings, integers, and floats.
If input_string is empty or has no placeholders, inputs can be empty.
Returns:
The interpolated string with all template variables replaced with their values.
Empty string if input_string is None or empty.
Raises:
ValueError: If a required template variable is missing from inputs.
KeyError: If a template variable is not found in the inputs dictionary.
"""
if input_string is None or not input_string:
return ""
if "{" not in input_string and "}" not in input_string:
return input_string
if not inputs:
raise ValueError("Inputs dictionary cannot be empty when interpolating variables")
try:
# Validate input types
for key, value in inputs.items():
if not isinstance(value, (str, int, float)):
raise ValueError(f"Value for key '{key}' must be a string, integer, or float, got {type(value).__name__}")
escaped_string = input_string.replace("{", "{{").replace("}", "}}")
for key in inputs.keys():
escaped_string = escaped_string.replace(f"{{{{{key}}}}}", f"{{{key}}}")
return escaped_string.format(**inputs)
except KeyError as e:
raise KeyError(f"Template variable '{e.args[0]}' not found in inputs dictionary") from e
except ValueError as e:
raise ValueError(f"Error during string interpolation: {str(e)}") from e
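The escape/unescape sequence is the subtle part of interpolate_only; a worked trace shows why literal JSON braces survive while known placeholders interpolate:

input_string = 'Return JSON like {"name": "{username}"}'
inputs = {"username": "ada"}

escaped = input_string.replace("{", "{{").replace("}", "}}")
# 'Return JSON like {{"name": "{{username}}"}}'  (every brace doubled)
for key in inputs:
    escaped = escaped.replace(f"{{{{{key}}}}}", f"{{{key}}}")
# 'Return JSON like {{"name": "{username}"}}'  (known keys un-doubled)
print(escaped.format(**inputs))
# 'Return JSON like {"name": "ada"}'  (format collapses {{ }} and fills {username})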
def increment_tools_errors(self) -> None:
"""Increment the tools errors counter."""

View File

@@ -1,3 +1,4 @@
import logging
from typing import Optional, Union
from pydantic import Field
@@ -7,6 +8,8 @@ from crewai.task import Task
from crewai.tools.base_tool import BaseTool
from crewai.utilities import I18N
logger = logging.getLogger(__name__)
class BaseAgentTool(BaseTool):
"""Base class for agent-related tools"""
@@ -16,6 +19,25 @@ class BaseAgentTool(BaseTool):
default_factory=I18N, description="Internationalization settings"
)
def sanitize_agent_name(self, name: str) -> str:
"""
Sanitize agent role name by normalizing whitespace and setting to lowercase.
Converts all whitespace (including newlines) to single spaces and removes quotes.
Args:
name (str): The agent role name to sanitize
Returns:
str: The sanitized agent role name, with whitespace normalized,
converted to lowercase, and quotes removed
"""
if not name:
return ""
# Normalize all whitespace (including newlines) to single spaces
normalized = " ".join(name.split())
# Remove quotes and convert to lowercase
return normalized.replace('"', "").casefold()
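A quick sketch of the normalization in action (invoking the method as a plain function purely for illustration):

print(BaseAgentTool.sanitize_agent_name(None, '  Senior  "Researcher"\n'))  # senior researcher
print(BaseAgentTool.sanitize_agent_name(None, "Data\nAnalyst"))  # data analyst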
def _get_coworker(self, coworker: Optional[str], **kwargs) -> Optional[str]:
coworker = coworker or kwargs.get("co_worker") or kwargs.get("coworker")
if coworker:
@@ -25,11 +47,27 @@ class BaseAgentTool(BaseTool):
return coworker
def _execute(
self, agent_name: Union[str, None], task: str, context: Union[str, None]
self,
agent_name: Optional[str],
task: str,
context: Optional[str] = None
) -> str:
"""
Execute delegation to an agent with case-insensitive and whitespace-tolerant matching.
Args:
agent_name: Name/role of the agent to delegate to (case-insensitive)
task: The specific question or task to delegate
context: Optional additional context for the task execution
Returns:
str: The execution result from the delegated agent or an error message
if the agent cannot be found
"""
try:
if agent_name is None:
agent_name = ""
logger.debug("No agent name provided, using empty string")
# It is important to remove the quotes from the agent name.
# The reason we have to do this is because less-powerful LLMs
@@ -38,31 +76,49 @@ class BaseAgentTool(BaseTool):
# {"task": "....", "coworker": "....
# when it should look like this:
# {"task": "....", "coworker": "...."}
agent_name = agent_name.casefold().replace('"', "").replace("\n", "")
sanitized_name = self.sanitize_agent_name(agent_name)
logger.debug(f"Sanitized agent name from '{agent_name}' to '{sanitized_name}'")
available_agents = [agent.role for agent in self.agents]
logger.debug(f"Available agents: {available_agents}")
agent = [ # type: ignore # Incompatible types in assignment (expression has type "list[BaseAgent]", variable has type "str | None")
available_agent
for available_agent in self.agents
if available_agent.role.casefold().replace("\n", "") == agent_name
if self.sanitize_agent_name(available_agent.role) == sanitized_name
]
except Exception as _:
logger.debug(f"Found {len(agent)} matching agents for role '{sanitized_name}'")
except (AttributeError, ValueError) as e:
# Handle specific exceptions that might occur during role name processing
return self.i18n.errors("agent_tool_unexisting_coworker").format(
coworkers="\n".join(
[f"- {agent.role.casefold()}" for agent in self.agents]
)
[f"- {self.sanitize_agent_name(agent.role)}" for agent in self.agents]
),
error=str(e)
)
if not agent:
# No matching agent found after sanitization
return self.i18n.errors("agent_tool_unexisting_coworker").format(
coworkers="\n".join(
[f"- {agent.role.casefold()}" for agent in self.agents]
)
[f"- {self.sanitize_agent_name(agent.role)}" for agent in self.agents]
),
error=f"No agent found with role '{sanitized_name}'"
)
agent = agent[0]
task_with_assigned_agent = Task( # type: ignore # Incompatible types in assignment (expression has type "Task", variable has type "str")
description=task,
agent=agent,
expected_output=agent.i18n.slice("manager_request"),
i18n=agent.i18n,
)
return agent.execute_task(task_with_assigned_agent, context)
try:
task_with_assigned_agent = Task(
description=task,
agent=agent,
expected_output=agent.i18n.slice("manager_request"),
i18n=agent.i18n,
)
logger.debug(f"Created task for agent '{self.sanitize_agent_name(agent.role)}': {task}")
return agent.execute_task(task_with_assigned_agent, context)
except Exception as e:
# Handle task creation or execution errors
return self.i18n.errors("agent_tool_execution_error").format(
agent_role=self.sanitize_agent_name(agent.role),
error=str(e)
)
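
A standalone sketch of why the tolerant matching above works (sanitize here is a local stand-in for sanitize_agent_name, not the class method): both noisy and clean spellings collapse to the same key.

def sanitize(name: str) -> str:
    # Same rules as sanitize_agent_name: collapse whitespace, drop quotes, casefold
    return " ".join(name.split()).replace('"', "").casefold()

assert sanitize(' "SENIOR\nWRITER" ') == sanitize("Senior Writer")
assert sanitize("Futel Official Infopoint\n") == "futel official infopoint"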

View File

@@ -33,7 +33,8 @@
"tool_usage_error": "I encountered an error: {error}",
"tool_arguments_error": "Error: the Action Input is not a valid key, value dictionary.",
"wrong_tool_name": "You tried to use the tool {tool}, but it doesn't exist. You must use one of the following tools, use one at time: {tools}.",
"tool_usage_exception": "I encountered an error while trying to use the tool. This was the error: {error}.\n Tool {tool} accepts these inputs: {tool_inputs}"
"tool_usage_exception": "I encountered an error while trying to use the tool. This was the error: {error}.\n Tool {tool} accepts these inputs: {tool_inputs}",
"agent_tool_execution_error": "Error executing task with agent '{agent_role}'. Error: {error}"
},
"tools": {
"delegate_work": "Delegate a specific task to one of the following coworkers: {coworkers}\nThe input to this tool should be the coworker, the task you want them to do, and ALL necessary context to execute the task, they know nothing about the task, so share absolute everything you know, don't reference things but instead explain them.",

View File

@@ -27,7 +27,7 @@ class EmbeddingConfigurator:
        if embedder_config is None:
            return self._create_default_embedding_function()

-        provider = embedder_config.get("provider", "")
+        provider = embedder_config.get("provider")
        config = embedder_config.get("config", {})
        model_name = config.get("model")
@@ -38,13 +38,12 @@
            except Exception as e:
                raise ValueError(f"Invalid custom embedding function: {str(e)}")

-        embedding_function = self.embedding_functions.get(provider, None)
-        if not embedding_function:
+        if provider not in self.embedding_functions:
            raise Exception(
                f"Unsupported embedding provider: {provider}, supported providers: {list(self.embedding_functions.keys())}"
            )
-        return embedding_function(config, model_name)
+        return self.embedding_functions[provider](config, model_name)
@staticmethod
def _create_default_embedding_function():
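
The change above swaps a .get() that could return None for an explicit membership test, so unsupported providers fail fast with a clear message. A minimal sketch of the pattern (the registry contents here are hypothetical):

embedding_functions = {
    "openai": lambda config, model_name: ("openai", model_name),
}

def build_embedder(provider, config, model_name):
    # Fail fast on unknown providers instead of calling None later
    if provider not in embedding_functions:
        raise Exception(
            f"Unsupported embedding provider: {provider}, "
            f"supported providers: {list(embedding_functions.keys())}"
        )
    return embedding_functions[provider](config, model_name)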

View File

@@ -22,26 +22,3 @@ def get_project_directory_name():
    cwd = Path.cwd()
    project_directory_name = cwd.name
    return project_directory_name
-
-
-def get_default_storage_path(storage_type: str) -> Path:
-    """Returns the default storage path for a given storage type.
-
-    Args:
-        storage_type: Type of storage ('ltm', 'kickoff', 'rag')
-
-    Returns:
-        Path: Default storage path for the specified type
-
-    Raises:
-        ValueError: If storage_type is not recognized
-    """
-    base_path = db_storage_path()
-    if storage_type == 'ltm':
-        return base_path / 'latest_long_term_memories.db'
-    elif storage_type == 'kickoff':
-        return base_path / 'latest_kickoff_task_outputs.db'
-    elif storage_type == 'rag':
-        return base_path
-    else:
-        raise ValueError(f"Unknown storage type: {storage_type}")

View File

@@ -0,0 +1,243 @@
interactions:
- request:
body: !!binary |
CuIcCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkSuRwKEgoQY3Jld2FpLnRl
bGVtZXRyeRKjBwoQXK7w4+uvyEkrI9D5qyvcJxII5UmQ7hmczdIqDENyZXcgQ3JlYXRlZDABOfxQ
/hs4jBUYQUi3DBw4jBUYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoaCg5weXRob25fdmVy
c2lvbhIICgYzLjEyLjdKLgoIY3Jld19rZXkSIgogYzk3YjVmZWI1ZDFiNjZiYjU5MDA2YWFhMDFh
MjljZDZKMQoHY3Jld19pZBImCiRkZjY3NGMwYi1hOTc0LTQ3NTAtYjlkMS0yZWQxNjM3MzFiNTZK
HAoMY3Jld19wcm9jZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdf
bnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgBStECCgtjcmV3
X2FnZW50cxLBAgq+Alt7ImtleSI6ICIwN2Q5OWI2MzA0MTFkMzVmZDkwNDdhNTMyZDUzZGRhNyIs
ICJpZCI6ICI5MDYwYTQ2Zi02MDY3LTQ1N2MtOGU3ZC04NjAyN2YzY2U5ZDUiLCAicm9sZSI6ICJS
ZXNlYXJjaGVyIiwgInZlcmJvc2U/IjogZmFsc2UsICJtYXhfaXRlciI6IDIwLCAibWF4X3JwbSI6
IG51bGwsICJmdW5jdGlvbl9jYWxsaW5nX2xsbSI6ICIiLCAibGxtIjogImdwdC00by1taW5pIiwg
ImRlbGVnYXRpb25fZW5hYmxlZD8iOiBmYWxzZSwgImFsbG93X2NvZGVfZXhlY3V0aW9uPyI6IGZh
bHNlLCAibWF4X3JldHJ5X2xpbWl0IjogMiwgInRvb2xzX25hbWVzIjogW119XUr/AQoKY3Jld190
YXNrcxLwAQrtAVt7ImtleSI6ICI2Mzk5NjUxN2YzZjNmMWM5NGQ2YmI2MTdhYTBiMWM0ZiIsICJp
ZCI6ICJjYTA4ZjkyOS0yMmI0LTQyZmQtYjViMC05N2M3MjM0ZDk5OTEiLCAiYXN5bmNfZXhlY3V0
aW9uPyI6IGZhbHNlLCAiaHVtYW5faW5wdXQ/IjogZmFsc2UsICJhZ2VudF9yb2xlIjogIlJlc2Vh
cmNoZXIiLCAiYWdlbnRfa2V5IjogIjA3ZDk5YjYzMDQxMWQzNWZkOTA0N2E1MzJkNTNkZGE3Iiwg
InRvb2xzX25hbWVzIjogW119XXoCGAGFAQABAAASjgIKEOTJZh9R45IwgGVg9cinZmISCJopKRMf
bpMJKgxUYXNrIENyZWF0ZWQwATlG+zQcOIwVGEHk0zUcOIwVGEouCghjcmV3X2tleRIiCiBjOTdi
NWZlYjVkMWI2NmJiNTkwMDZhYWEwMWEyOWNkNkoxCgdjcmV3X2lkEiYKJGRmNjc0YzBiLWE5NzQt
NDc1MC1iOWQxLTJlZDE2MzczMWI1NkouCgh0YXNrX2tleRIiCiA2Mzk5NjUxN2YzZjNmMWM5NGQ2
YmI2MTdhYTBiMWM0ZkoxCgd0YXNrX2lkEiYKJGNhMDhmOTI5LTIyYjQtNDJmZC1iNWIwLTk3Yzcy
MzRkOTk5MXoCGAGFAQABAAASowcKEEvwrN8+tNMIBwtnA+ip7jASCI78Hrh2wlsBKgxDcmV3IENy
ZWF0ZWQwATkcRqYeOIwVGEE8erQeOIwVGEoaCg5jcmV3YWlfdmVyc2lvbhIICgYwLjg2LjBKGgoO
cHl0aG9uX3ZlcnNpb24SCAoGMy4xMi43Si4KCGNyZXdfa2V5EiIKIDhjMjc1MmY0OWU1YjlkMmI2
OGNiMzVjYWM4ZmNjODZkSjEKB2NyZXdfaWQSJgokZmRkYzA4ZTMtNDUyNi00N2Q2LThlNWMtNjY0
YzIyMjc4ZDgyShwKDGNyZXdfcHJvY2VzcxIMCgpzZXF1ZW50aWFsShEKC2NyZXdfbWVtb3J5EgIQ
AEoaChRjcmV3X251bWJlcl9vZl90YXNrcxICGAFKGwoVY3Jld19udW1iZXJfb2ZfYWdlbnRzEgIY
AUrRAgoLY3Jld19hZ2VudHMSwQIKvgJbeyJrZXkiOiAiOGJkMjEzOWI1OTc1MTgxNTA2ZTQxZmQ5
YzQ1NjNkNzUiLCAiaWQiOiAiY2UxNjA2YjktMjdiOS00ZDc4LWEyODctNDZiMDNlZDg3ZTA1Iiwg
InJvbGUiOiAiUmVzZWFyY2hlciIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAyMCwg
Im1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQt
NG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1
dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFtdfV1K
/wEKCmNyZXdfdGFza3MS8AEK7QFbeyJrZXkiOiAiMGQ2ODVhMjE5OTRkOTQ5MDk3YmM1YTU2ZDcz
N2U2ZDEiLCAiaWQiOiAiNDdkMzRjZjktMGYxZS00Y2JkLTgzMzItNzRjZjY0YWRlOThlIiwgImFz
eW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9s
ZSI6ICJSZXNlYXJjaGVyIiwgImFnZW50X2tleSI6ICI4YmQyMTM5YjU5NzUxODE1MDZlNDFmZDlj
NDU2M2Q3NSIsICJ0b29sc19uYW1lcyI6IFtdfV16AhgBhQEAAQAAEo4CChAf4TXS782b0PBJ4NSB
JXwsEgjXnd13GkMzlyoMVGFzayBDcmVhdGVkMAE5mb/cHjiMFRhBGRTiHjiMFRhKLgoIY3Jld19r
ZXkSIgogOGMyNzUyZjQ5ZTViOWQyYjY4Y2IzNWNhYzhmY2M4NmRKMQoHY3Jld19pZBImCiRmZGRj
MDhlMy00NTI2LTQ3ZDYtOGU1Yy02NjRjMjIyNzhkODJKLgoIdGFza19rZXkSIgogMGQ2ODVhMjE5
OTRkOTQ5MDk3YmM1YTU2ZDczN2U2ZDFKMQoHdGFza19pZBImCiQ0N2QzNGNmOS0wZjFlLTRjYmQt
ODMzMi03NGNmNjRhZGU5OGV6AhgBhQEAAQAAEqMHChAyBGKhzDhROB5pmAoXrikyEgj6SCwzj1dU
LyoMQ3JldyBDcmVhdGVkMAE5vkjTHziMFRhBRDbhHziMFRhKGgoOY3Jld2FpX3ZlcnNpb24SCAoG
MC44Ni4wShoKDnB5dGhvbl92ZXJzaW9uEggKBjMuMTIuN0ouCghjcmV3X2tleRIiCiBiNjczNjg2
ZmM4MjJjMjAzYzdlODc5YzY3NTQyNDY5OUoxCgdjcmV3X2lkEiYKJGYyYWVlYTYzLTU2OWUtNDUz
NS1iZTY0LTRiZjYzZmU5NjhjN0ocCgxjcmV3X3Byb2Nlc3MSDAoKc2VxdWVudGlhbEoRCgtjcmV3
X21lbW9yeRICEABKGgoUY3Jld19udW1iZXJfb2ZfdGFza3MSAhgBShsKFWNyZXdfbnVtYmVyX29m
X2FnZW50cxICGAFK0QIKC2NyZXdfYWdlbnRzEsECCr4CW3sia2V5IjogImI1OWNmNzdiNmU3NjU4
NDg3MGViMWMzODgyM2Q3ZTI4IiwgImlkIjogImJiZjNkM2E4LWEwMjUtNGI0ZC1hY2Q0LTFmNzcz
NTI3MWJmMCIsICJyb2xlIjogIlJlc2VhcmNoZXIiLCAidmVyYm9zZT8iOiBmYWxzZSwgIm1heF9p
dGVyIjogMjAsICJtYXhfcnBtIjogbnVsbCwgImZ1bmN0aW9uX2NhbGxpbmdfbGxtIjogIiIsICJs
bG0iOiAiZ3B0LTRvLW1pbmkiLCAiZGVsZWdhdGlvbl9lbmFibGVkPyI6IGZhbHNlLCAiYWxsb3df
Y29kZV9leGVjdXRpb24/IjogZmFsc2UsICJtYXhfcmV0cnlfbGltaXQiOiAyLCAidG9vbHNfbmFt
ZXMiOiBbXX1dSv8BCgpjcmV3X3Rhc2tzEvABCu0BW3sia2V5IjogImE1ZTVjNThjZWExYjlkMDAz
MzJlNjg0NDFkMzI3YmRmIiwgImlkIjogIjBiOTRiMTY0LTM5NTktNGFmYS05Njg4LWJjNmEwZWMy
MWYzOCIsICJhc3luY19leGVjdXRpb24/IjogZmFsc2UsICJodW1hbl9pbnB1dD8iOiBmYWxzZSwg
ImFnZW50X3JvbGUiOiAiUmVzZWFyY2hlciIsICJhZ2VudF9rZXkiOiAiYjU5Y2Y3N2I2ZTc2NTg0
ODcwZWIxYzM4ODIzZDdlMjgiLCAidG9vbHNfbmFtZXMiOiBbXX1degIYAYUBAAEAABKOAgoQyYfi
Ftim717svttBZY3p5hIIUxR5bBHzWWkqDFRhc2sgQ3JlYXRlZDABOV4OBiA4jBUYQbLjBiA4jBUY
Si4KCGNyZXdfa2V5EiIKIGI2NzM2ODZmYzgyMmMyMDNjN2U4NzljNjc1NDI0Njk5SjEKB2NyZXdf
aWQSJgokZjJhZWVhNjMtNTY5ZS00NTM1LWJlNjQtNGJmNjNmZTk2OGM3Si4KCHRhc2tfa2V5EiIK
IGE1ZTVjNThjZWExYjlkMDAzMzJlNjg0NDFkMzI3YmRmSjEKB3Rhc2tfaWQSJgokMGI5NGIxNjQt
Mzk1OS00YWZhLTk2ODgtYmM2YTBlYzIxZjM4egIYAYUBAAEAAA==
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '3685'
Content-Type:
- application/x-protobuf
User-Agent:
- OTel-OTLP-Exporter-Python/1.27.0
method: POST
uri: https://telemetry.crewai.com:4319/v1/traces
response:
body:
string: "\n\0"
headers:
Content-Length:
- '2'
Content-Type:
- application/x-protobuf
Date:
- Sun, 29 Dec 2024 04:43:27 GMT
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Researcher. You have
extensive AI research experience.\nYour personal goal is: Analyze AI topics\nTo
give my best complete final answer to the task use the exact following format:\n\nThought:
I now can give a great answer\nFinal Answer: Your final answer must be the great
and the most complete as possible, it must be outcome described.\n\nI MUST use
these formats, my job depends on it!"}, {"role": "user", "content": "\nCurrent
Task: Explain the advantages of AI.\n\nThis is the expect criteria for your
final answer: A summary of the main advantages, bullet points recommended.\nyou
MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
This is VERY important to you, use the tools available and give your best Final
Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop":
["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '922'
content-type:
- application/json
cookie:
- _cfuvid=eff7OIkJ0zWRunpA6z67LHqscmSe6XjNxXiPw1R3xCc-1733770413538-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AjfR6FDuTw7NGzy8w7sxjvOkUQlru\",\n \"object\":
\"chat.completion\",\n \"created\": 1735447404,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: \\n**Advantages of AI** \\n\\n1. **Increased Efficiency and Productivity**
\ \\n - AI systems can process large amounts of data quickly and accurately,
leading to faster decision-making and increased productivity in various sectors.\\n\\n2.
**Cost Savings** \\n - Automation of repetitive and time-consuming tasks
reduces labor costs and increases operational efficiency, allowing businesses
to allocate resources more effectively.\\n\\n3. **Enhanced Data Analysis** \\n
\ - AI excels at analyzing big data, identifying patterns, and providing insights
that support better strategic planning and business decision-making.\\n\\n4.
**24/7 Availability** \\n - AI solutions, such as chatbots and virtual assistants,
operate continuously without breaks, offering constant support and customer
service, enhancing user experience.\\n\\n5. **Personalization** \\n - AI
enables the customization of content, products, and services based on user preferences
and behaviors, leading to improved customer satisfaction and loyalty.\\n\\n6.
**Improved Accuracy** \\n - AI technologies, such as machine learning algorithms,
reduce the likelihood of human error in various processes, leading to greater
accuracy and reliability.\\n\\n7. **Enhanced Innovation** \\n - AI fosters
innovative solutions by providing new tools and approaches to problem-solving,
enabling companies to develop cutting-edge products and services.\\n\\n8. **Scalability**
\ \\n - AI can be scaled to handle varying amounts of workloads without significant
changes to infrastructure, making it easier for organizations to expand operations.\\n\\n9.
**Predictive Capabilities** \\n - Advanced analytics powered by AI can anticipate
trends and outcomes, allowing businesses to proactively adjust strategies and
improve forecasting.\\n\\n10. **Health Benefits** \\n - In healthcare, AI
assists in diagnostics, personalized treatment plans, and predictive analytics,
leading to better patient care and improved health outcomes.\\n\\n11. **Safety
and Risk Mitigation** \\n - AI can enhance safety in various industries
by taking over dangerous tasks, monitoring for hazards, and predicting maintenance
needs for critical machinery, thereby preventing accidents.\\n\\n12. **Reduced
Environmental Impact** \\n - AI can optimize resource usage in areas such
as energy consumption and supply chain logistics, contributing to sustainability
efforts and reducing overall environmental footprints.\",\n \"refusal\":
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 168,\n \"completion_tokens\":
440,\n \"total_tokens\": 608,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f9721053d1eb9f1-SEA
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Sun, 29 Dec 2024 04:43:32 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=5enubNIoQSGMYEgy8Q2FpzzhphA0y.0lXukRZrWFvMk-1735447412-1.0.1.1-FIK1sMkUl3YnW1gTC6ftDtb2mKsbosb4mwabdFAlWCfJ6pXeavYq.bPsfKNvzAb5WYq60yVGH5lHsJT05bhSgw;
path=/; expires=Sun, 29-Dec-24 05:13:32 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=63wmKMTuFamkLN8FBI4fP8JZWbjWiRxWm7wb3kz.z_A-1735447412038-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '7577'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999793'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_55b8d714656e8f10f4e23cbe9034d66b
http_version: HTTP/1.1
status_code: 200
version: 1
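
This new fixture is a VCR-style cassette: recorded HTTP interactions (an OTLP telemetry trace and an OpenAI chat completion) that the test suite replays instead of hitting the network. A minimal sketch of how such a cassette is consumed, using the decorator seen throughout this diff:

import pytest

@pytest.mark.vcr(filter_headers=["authorization"])  # scrub the auth header from recordings
def test_replays_recorded_completion():
    # HTTP calls made in here are answered from the matching cassette file,
    # so the test is deterministic and needs no live API key.
    ...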

View File

@@ -28,10 +28,9 @@ def test_create_success(mock_subprocess):
    with in_temp_dir():
        tool_command = ToolCommand()

-        with (
-            patch.object(tool_command, "login") as mock_login,
-            patch("sys.stdout", new=StringIO()) as fake_out,
-        ):
+        with patch.object(tool_command, "login") as mock_login, patch(
+            "sys.stdout", new=StringIO()
+        ) as fake_out:
            tool_command.create("test-tool")
            output = fake_out.getvalue()
@@ -83,7 +82,7 @@ def test_install_success(mock_get, mock_subprocess_run):
        capture_output=False,
        text=True,
        check=True,
-        env=unittest.mock.ANY,
+        env=unittest.mock.ANY
    )
    assert "Successfully installed sample-tool" in output

View File

@@ -391,6 +391,71 @@ def test_manager_agent_delegating_to_all_agents():
)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_manager_agent_delegates_with_varied_role_cases():
    """
    Test that the manager agent can delegate to agents regardless of case or whitespace variations in role names.
    This test verifies the fix for issue #1503 where role matching was too strict.
    """
    # Create agents with varied case and whitespace in roles
    researcher_spaced = Agent(
        role=" Researcher ",  # Extra spaces
        goal="Research with spaces in role",
        backstory="A researcher with spaces in role name",
        allow_delegation=False,
    )
    writer_caps = Agent(
        role="SENIOR WRITER",  # All caps
        goal="Write with caps in role",
        backstory="A writer with caps in role name",
        allow_delegation=False,
    )

    task = Task(
        description="Research and write about AI. The researcher should do the research, and the writer should write it up.",
        expected_output="A well-researched article about AI.",
        agent=researcher_spaced,  # Assign to researcher with spaces
    )

    crew = Crew(
        agents=[researcher_spaced, writer_caps],
        process=Process.hierarchical,
        manager_llm="gpt-4o",
        tasks=[task],
    )

    mock_task_output = TaskOutput(
        description="Mock description",
        raw="mocked output",
        agent="mocked agent"
    )
    task.output = mock_task_output

    with patch.object(Task, 'execute_sync', return_value=mock_task_output) as mock_execute_sync:
        crew.kickoff()

        # Verify execute_sync was called once
        mock_execute_sync.assert_called_once()

        # Get the tools argument from the call
        _, kwargs = mock_execute_sync.call_args
        tools = kwargs['tools']

        # Verify the delegation tools were passed correctly and can handle case/whitespace variations
        assert len(tools) == 2

        # Check delegation tool descriptions (should work despite case/whitespace differences)
        delegation_tool = tools[0]
        question_tool = tools[1]
        assert "Delegate a specific task to one of the following coworkers:" in delegation_tool.description
        assert " Researcher " in delegation_tool.description or "SENIOR WRITER" in delegation_tool.description
        assert "Ask a specific question to one of the following coworkers:" in question_tool.description
        assert " Researcher " in question_tool.description or "SENIOR WRITER" in question_tool.description
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_with_delegating_agents():
tasks = [
@@ -1941,6 +2006,90 @@ def test_crew_log_file_output(tmp_path):
assert test_file.exists()
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_output_file_end_to_end(tmp_path):
    """Test output file functionality in a full crew context."""
    # Create an agent
    agent = Agent(
        role="Researcher",
        goal="Analyze AI topics",
        backstory="You have extensive AI research experience.",
        allow_delegation=False,
    )

    # Create a task with a dynamic output file path
    dynamic_path = tmp_path / "output_{topic}.txt"
    task = Task(
        description="Explain the advantages of {topic}.",
        expected_output="A summary of the main advantages, bullet points recommended.",
        agent=agent,
        output_file=str(dynamic_path),
    )

    # Create and run the crew
    crew = Crew(
        agents=[agent],
        tasks=[task],
        process=Process.sequential,
    )
    crew.kickoff(inputs={"topic": "AI"})

    # Verify file creation
    expected_file = tmp_path / "output_AI.txt"
    assert expected_file.exists(), f"Output file {expected_file} was not created"

@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_output_file_validation_failures():
    """Test output file validation failures in a crew context."""
    agent = Agent(
        role="Researcher",
        goal="Analyze data",
        backstory="You analyze data files.",
        allow_delegation=False,
    )

    # Test path traversal
    with pytest.raises(ValueError, match="Path traversal"):
        task = Task(
            description="Analyze data",
            expected_output="Analysis results",
            agent=agent,
            output_file="../output.txt"
        )
        Crew(agents=[agent], tasks=[task]).kickoff()

    # Test shell special characters
    with pytest.raises(ValueError, match="Shell special characters"):
        task = Task(
            description="Analyze data",
            expected_output="Analysis results",
            agent=agent,
            output_file="output.txt | rm -rf /"
        )
        Crew(agents=[agent], tasks=[task]).kickoff()

    # Test shell expansion
    with pytest.raises(ValueError, match="Shell expansion"):
        task = Task(
            description="Analyze data",
            expected_output="Analysis results",
            agent=agent,
            output_file="~/output.txt"
        )
        Crew(agents=[agent], tasks=[task]).kickoff()

    # Test invalid template variable
    with pytest.raises(ValueError, match="Invalid template variable"):
        task = Task(
            description="Analyze data",
            expected_output="Analysis results",
            agent=agent,
            output_file="{invalid-name}/output.txt"
        )
        Crew(agents=[agent], tasks=[task]).kickoff()
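
The tests above imply an output_file validator with four rejection classes. A minimal sketch of such a check follows; this is an assumption about the validator's shape, not the actual implementation:

import re

def validate_output_file(path: str) -> str:
    # Hypothetical validator matching the error classes asserted above
    if ".." in path.split("/"):
        raise ValueError("Path traversal detected in output_file path")
    if any(ch in path for ch in "|&;><"):
        raise ValueError("Shell special characters detected in output_file path")
    if path.startswith("~") or "$" in path:
        raise ValueError("Shell expansion detected in output_file path")
    for var in re.findall(r"\{(.+?)\}", path):
        if not var.isidentifier():
            raise ValueError(f"Invalid template variable '{var}' in output_file path")
    return path.lstrip("/")  # tests elsewhere expect the leading slash stripped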
@pytest.mark.vcr(filter_headers=["authorization"])
def test_manager_agent():
from unittest.mock import patch
@@ -3125,4 +3274,4 @@ def test_multimodal_agent_live_image_analysis():
    # Verify we got a meaningful response
    assert isinstance(result.raw, str)
    assert len(result.raw) > 100  # Expecting a detailed analysis
-    assert "error" not in result.raw.lower()  # No error messages in response
+    assert "error" not in result.raw.lower()  # No error messages in response

View File

@@ -584,3 +584,28 @@ def test_docling_source_with_local_file():
docling_source = CrewDoclingSource(file_paths=[pdf_path])
assert docling_source.file_paths == [pdf_path]
assert docling_source.content is not None
def test_file_path_validation():
    """Test file path validation for knowledge sources."""
    current_dir = Path(__file__).parent
    pdf_path = current_dir / "crewai_quickstart.pdf"

    # Test valid single file_path
    source = PDFKnowledgeSource(file_path=pdf_path)
    assert source.safe_file_paths == [pdf_path]

    # Test valid file_paths list
    source = PDFKnowledgeSource(file_paths=[pdf_path])
    assert source.safe_file_paths == [pdf_path]

    # Test both file_path and file_paths provided (should use file_paths)
    source = PDFKnowledgeSource(file_path=pdf_path, file_paths=[pdf_path])
    assert source.safe_file_paths == [pdf_path]

    # Test neither file_path nor file_paths provided
    with pytest.raises(
        ValueError,
        match="file_path/file_paths must be a Path, str, or a list of these types"
    ):
        PDFKnowledgeSource()
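
The assertions above pin down a precedence rule for the two arguments; a standalone sketch of that rule (the helper name resolve_file_paths is hypothetical):

def resolve_file_paths(file_path=None, file_paths=None):
    # file_paths wins when both are supplied; neither supplied is an error
    if file_paths is not None:
        return list(file_paths)
    if file_path is not None:
        return [file_path]
    raise ValueError(
        "file_path/file_paths must be a Path, str, or a list of these types"
    )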

View File

@@ -1,83 +0,0 @@
import os
import tempfile
from pathlib import Path

import pytest
from unittest.mock import patch

from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
from crewai.memory.storage.kickoff_task_outputs_storage import KickoffTaskOutputsSQLiteStorage
from crewai.memory.storage.base_rag_storage import BaseRAGStorage
from crewai.utilities.paths import get_default_storage_path

class MockRAGStorage(BaseRAGStorage):
    """Mock implementation of BaseRAGStorage for testing."""

    def _sanitize_role(self, role: str) -> str:
        return role.lower()

    def save(self, value, metadata):
        pass

    def search(self, query, limit=3, filter=None, score_threshold=0.35):
        return []

    def reset(self):
        pass

    def _generate_embedding(self, text, metadata=None):
        return []

    def _initialize_app(self):
        pass

def test_default_storage_paths():
    """Test that default storage paths are created correctly."""
    ltm_path = get_default_storage_path('ltm')
    kickoff_path = get_default_storage_path('kickoff')
    rag_path = get_default_storage_path('rag')

    assert str(ltm_path).endswith('latest_long_term_memories.db')
    assert str(kickoff_path).endswith('latest_kickoff_task_outputs.db')
    assert isinstance(rag_path, Path)

def test_custom_storage_paths():
    """Test that custom storage paths are respected."""
    with tempfile.TemporaryDirectory() as temp_dir:
        custom_path = Path(temp_dir) / 'custom.db'

        ltm = LTMSQLiteStorage(storage_path=custom_path)
        assert ltm.storage_path == custom_path

        kickoff = KickoffTaskOutputsSQLiteStorage(storage_path=custom_path)
        assert kickoff.storage_path == custom_path

        rag = MockRAGStorage('test', storage_path=custom_path)
        assert rag.storage_path == custom_path

def test_directory_creation():
    """Test that storage directories are created automatically."""
    with tempfile.TemporaryDirectory() as temp_dir:
        test_dir = Path(temp_dir) / 'test_storage'
        storage_path = test_dir / 'test.db'
        assert not test_dir.exists()
        LTMSQLiteStorage(storage_path=storage_path)
        assert test_dir.exists()

def test_permission_error():
    """Test that permission errors are handled correctly."""
    with tempfile.TemporaryDirectory() as temp_dir:
        test_dir = Path(temp_dir) / 'readonly'
        test_dir.mkdir()
        os.chmod(test_dir, 0o444)  # Read-only

        storage_path = test_dir / 'test.db'
        with pytest.raises((PermissionError, OSError)) as exc_info:
            LTMSQLiteStorage(storage_path=storage_path)
        # Verify that the error message mentions permission
        assert "permission" in str(exc_info.value).lower()

def test_invalid_path():
    """Test that invalid paths raise appropriate errors."""
    with pytest.raises(OSError):
        # Try to create storage in a non-existent root directory
        LTMSQLiteStorage(storage_path=Path('/nonexistent/dir/test.db'))

View File

@@ -719,21 +719,24 @@ def test_interpolate_inputs():
    task = Task(
        description="Give me a list of 5 interesting ideas about {topic} to explore for an article, what makes them unique and interesting.",
        expected_output="Bullet point list of 5 interesting ideas about {topic}.",
+        output_file="/tmp/{topic}/output_{date}.txt"
    )

-    task.interpolate_inputs(inputs={"topic": "AI"})
+    task.interpolate_inputs(inputs={"topic": "AI", "date": "2024"})
    assert (
        task.description
        == "Give me a list of 5 interesting ideas about AI to explore for an article, what makes them unique and interesting."
    )
    assert task.expected_output == "Bullet point list of 5 interesting ideas about AI."
+    assert task.output_file == "/tmp/AI/output_2024.txt"

-    task.interpolate_inputs(inputs={"topic": "ML"})
+    task.interpolate_inputs(inputs={"topic": "ML", "date": "2025"})
    assert (
        task.description
        == "Give me a list of 5 interesting ideas about ML to explore for an article, what makes them unique and interesting."
    )
    assert task.expected_output == "Bullet point list of 5 interesting ideas about ML."
+    assert task.output_file == "/tmp/ML/output_2025.txt"
def test_interpolate_only():
@@ -872,3 +875,61 @@ def test_key():
assert (
task.key == hash
), "The key should be the hash of the non-interpolated description."
def test_output_file_validation():
    """Test output file path validation."""
    # Valid paths
    assert Task(
        description="Test task",
        expected_output="Test output",
        output_file="output.txt"
    ).output_file == "output.txt"
    assert Task(
        description="Test task",
        expected_output="Test output",
        output_file="/tmp/output.txt"
    ).output_file == "tmp/output.txt"  # leading slash is stripped by validation
    assert Task(
        description="Test task",
        expected_output="Test output",
        output_file="{dir}/output_{date}.txt"
    ).output_file == "{dir}/output_{date}.txt"

    # Invalid paths
    with pytest.raises(ValueError, match="Path traversal"):
        Task(
            description="Test task",
            expected_output="Test output",
            output_file="../output.txt"
        )
    with pytest.raises(ValueError, match="Path traversal"):
        Task(
            description="Test task",
            expected_output="Test output",
            output_file="folder/../output.txt"
        )
    with pytest.raises(ValueError, match="Shell special characters"):
        Task(
            description="Test task",
            expected_output="Test output",
            output_file="output.txt | rm -rf /"
        )
    with pytest.raises(ValueError, match="Shell expansion"):
        Task(
            description="Test task",
            expected_output="Test output",
            output_file="~/output.txt"
        )
    with pytest.raises(ValueError, match="Shell expansion"):
        Task(
            description="Test task",
            expected_output="Test output",
            output_file="$HOME/output.txt"
        )
    with pytest.raises(ValueError, match="Invalid template variable"):
        Task(
            description="Test task",
            expected_output="Test output",
            output_file="{invalid-name}/output.txt"
        )

View File

@@ -0,0 +1,55 @@
from unittest.mock import MagicMock

import pytest

from crewai import Agent, Task
from crewai.tools.agent_tools.base_agent_tools import BaseAgentTool

class TestAgentTool(BaseAgentTool):
    """Concrete implementation of BaseAgentTool for testing."""

    def _run(self, *args, **kwargs):
        """Implement required _run method."""
        return "Test response"

@pytest.mark.parametrize("role_name,should_match", [
    ('Futel Official Infopoint', True),        # exact match
    (' "Futel Official Infopoint" ', True),    # extra quotes and spaces
    ('Futel Official Infopoint\n', True),      # trailing newline
    ('"Futel Official Infopoint"', True),      # embedded quotes
    (' FUTEL\nOFFICIAL INFOPOINT ', True),     # multiple whitespace and newline
    ('futel official infopoint', True),        # lowercase
    ('FUTEL OFFICIAL INFOPOINT', True),        # uppercase
    ('Non Existent Agent', False),             # non-existent agent
    (None, False),                             # None agent name
])
def test_agent_tool_role_matching(role_name, should_match):
    """Test that agent tools can match roles regardless of case, whitespace, and special characters."""
    # Create test agent
    test_agent = Agent(
        role='Futel Official Infopoint',
        goal='Answer questions about Futel',
        backstory='Futel Football Club info',
        allow_delegation=False
    )

    # Create test agent tool
    agent_tool = TestAgentTool(
        name="test_tool",
        description="Test tool",
        agents=[test_agent]
    )

    # Test role matching
    result = agent_tool._execute(
        agent_name=role_name,
        task='Test task',
        context=None
    )

    if should_match:
        assert "coworker mentioned not found" not in result.lower(), \
            f"Should find agent with role name: {role_name}"
    else:
        assert "coworker mentioned not found" in result.lower(), \
            f"Should not find agent with role name: {role_name}"

uv.lock generated
View File

@@ -1,18 +1,10 @@
version = 1
requires-python = ">=3.10, <3.13"
resolution-markers = [
"python_full_version < '3.11' and sys_platform == 'darwin'",
"python_full_version < '3.11' and platform_machine == 'aarch64' and sys_platform == 'linux'",
"(python_full_version < '3.11' and platform_machine != 'aarch64' and sys_platform == 'linux') or (python_full_version < '3.11' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version == '3.11.*' and sys_platform == 'darwin'",
"python_full_version == '3.11.*' and platform_machine == 'aarch64' and sys_platform == 'linux'",
"(python_full_version == '3.11.*' and platform_machine != 'aarch64' and sys_platform == 'linux') or (python_full_version == '3.11.*' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and sys_platform == 'darwin'",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine == 'aarch64' and sys_platform == 'linux'",
"(python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine != 'aarch64' and sys_platform == 'linux') or (python_full_version >= '3.12' and python_full_version < '3.12.4' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12.4' and sys_platform == 'darwin'",
"python_full_version >= '3.12.4' and platform_machine == 'aarch64' and sys_platform == 'linux'",
"(python_full_version >= '3.12.4' and platform_machine != 'aarch64' and sys_platform == 'linux') or (python_full_version >= '3.12.4' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version < '3.11'",
"python_full_version == '3.11.*'",
"python_full_version >= '3.12' and python_full_version < '3.12.4'",
"python_full_version >= '3.12.4'",
]
[[package]]
@@ -308,7 +300,7 @@ name = "build"
version = "1.2.2.post1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "(os_name == 'nt' and platform_machine != 'aarch64' and sys_platform == 'linux') or (os_name == 'nt' and sys_platform != 'darwin' and sys_platform != 'linux')" },
{ name = "colorama", marker = "os_name == 'nt'" },
{ name = "importlib-metadata", marker = "python_full_version < '3.10.2'" },
{ name = "packaging" },
{ name = "pyproject-hooks" },
@@ -543,7 +535,7 @@ name = "click"
version = "8.1.7"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
{ name = "colorama", marker = "platform_system == 'Windows'" },
]
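
Most of the lockfile churn in this file is a rewrite of PEP 508 environment markers between the sys_platform and platform_system spellings. Either form can be evaluated against the running interpreter with the packaging library:

from packaging.markers import Marker

# Evaluate both marker spellings against the current environment
print(Marker("sys_platform == 'win32'").evaluate())
print(Marker("platform_system == 'Windows'").evaluate())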
sdist = { url = "https://files.pythonhosted.org/packages/96/d3/f04c7bfcf5c1862a2a5b845c6b2b360488cf47af55dfa79c98f6a6bf98b5/click-8.1.7.tar.gz", hash = "sha256:ca9853ad459e787e2192211578cc907e7594e294c7ccc834310722b41b9ca6de", size = 336121 }
wheels = [
@@ -650,6 +642,7 @@ tools = [
[package.dev-dependencies]
dev = [
{ name = "cairosvg" },
{ name = "crewai-tools" },
{ name = "mkdocs" },
{ name = "mkdocs-material" },
{ name = "mkdocs-material-extensions" },
@@ -703,6 +696,7 @@ requires-dist = [
[package.metadata.requires-dev]
dev = [
{ name = "cairosvg", specifier = ">=2.7.1" },
{ name = "crewai-tools", specifier = ">=0.17.0" },
{ name = "mkdocs", specifier = ">=1.4.3" },
{ name = "mkdocs-material", specifier = ">=9.5.7" },
{ name = "mkdocs-material-extensions", specifier = ">=1.3.1" },
@@ -2468,7 +2462,7 @@ version = "1.6.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "click" },
{ name = "colorama", marker = "sys_platform == 'win32'" },
{ name = "colorama", marker = "platform_system == 'Windows'" },
{ name = "ghp-import" },
{ name = "jinja2" },
{ name = "markdown" },
@@ -2649,7 +2643,7 @@ version = "2.10.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pygments" },
{ name = "pywin32", marker = "sys_platform == 'win32'" },
{ name = "pywin32", marker = "platform_system == 'Windows'" },
{ name = "tqdm" },
]
sdist = { url = "https://files.pythonhosted.org/packages/3a/93/80ac75c20ce54c785648b4ed363c88f148bf22637e10c9863db4fbe73e74/mpire-2.10.2.tar.gz", hash = "sha256:f66a321e93fadff34585a4bfa05e95bd946cf714b442f51c529038eb45773d97", size = 271270 }
@@ -2896,7 +2890,7 @@ name = "nvidia-cudnn-cu12"
version = "9.1.0.70"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "nvidia-cublas-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
{ name = "nvidia-cublas-cu12", marker = "(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux')" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/9f/fd/713452cd72343f682b1c7b9321e23829f00b842ceaedcda96e742ea0b0b3/nvidia_cudnn_cu12-9.1.0.70-py3-none-manylinux2014_x86_64.whl", hash = "sha256:165764f44ef8c61fcdfdfdbe769d687e06374059fbb388b6c89ecb0e28793a6f", size = 664752741 },
@@ -2923,9 +2917,9 @@ name = "nvidia-cusolver-cu12"
version = "11.4.5.107"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "nvidia-cublas-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
{ name = "nvidia-cusparse-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
{ name = "nvidia-nvjitlink-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
{ name = "nvidia-cublas-cu12", marker = "(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux')" },
{ name = "nvidia-cusparse-cu12", marker = "(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux')" },
{ name = "nvidia-nvjitlink-cu12", marker = "(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux')" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/bc/1d/8de1e5c67099015c834315e333911273a8c6aaba78923dd1d1e25fc5f217/nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl", hash = "sha256:8a7ec542f0412294b15072fa7dab71d31334014a69f953004ea7a118206fe0dd", size = 124161928 },
@@ -2936,7 +2930,7 @@ name = "nvidia-cusparse-cu12"
version = "12.1.0.106"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "nvidia-nvjitlink-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
{ name = "nvidia-nvjitlink-cu12", marker = "(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux')" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/65/5b/cfaeebf25cd9fdec14338ccb16f6b2c4c7fa9163aefcf057d86b9cc248bb/nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl", hash = "sha256:f3b50f42cf363f86ab21f720998517a659a48131e8d538dc02f8768237bd884c", size = 195958278 },
@@ -3486,7 +3480,7 @@ name = "portalocker"
version = "2.10.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pywin32", marker = "sys_platform == 'win32'" },
{ name = "pywin32", marker = "platform_system == 'Windows'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ed/d3/c6c64067759e87af98cc668c1cc75171347d0f1577fab7ca3749134e3cd4/portalocker-2.10.1.tar.gz", hash = "sha256:ef1bf844e878ab08aee7e40184156e1151f228f103aa5c6bd0724cc330960f8f", size = 40891 }
wheels = [
@@ -5028,19 +5022,19 @@ dependencies = [
{ name = "fsspec" },
{ name = "jinja2" },
{ name = "networkx" },
{ name = "nvidia-cublas-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cuda-cupti-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cuda-nvrtc-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cuda-runtime-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cudnn-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cufft-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-curand-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cusolver-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cusparse-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-nccl-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-nvtx-cu12", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "nvidia-cublas-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-cuda-cupti-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-cuda-nvrtc-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-cuda-runtime-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-cudnn-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-cufft-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-curand-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-cusolver-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-cusparse-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-nccl-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "nvidia-nvtx-cu12", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "sympy" },
{ name = "triton", marker = "platform_machine == 'x86_64' and sys_platform == 'linux'" },
{ name = "triton", marker = "platform_machine == 'x86_64' and platform_system == 'Linux'" },
{ name = "typing-extensions" },
]
wheels = [
@@ -5087,7 +5081,7 @@ name = "tqdm"
version = "4.66.5"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
{ name = "colorama", marker = "platform_system == 'Windows'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/58/83/6ba9844a41128c62e810fddddd72473201f3eacde02046066142a2d96cc5/tqdm-4.66.5.tar.gz", hash = "sha256:e1020aef2e5096702d8a025ac7d16b1577279c9d63f8375b63083e9a5f0fcbad", size = 169504 }
wheels = [
@@ -5130,7 +5124,7 @@ version = "0.27.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "attrs" },
{ name = "cffi", marker = "(implementation_name != 'pypy' and os_name == 'nt' and platform_machine != 'aarch64' and sys_platform == 'linux') or (implementation_name != 'pypy' and os_name == 'nt' and sys_platform != 'darwin' and sys_platform != 'linux')" },
{ name = "cffi", marker = "implementation_name != 'pypy' and os_name == 'nt'" },
{ name = "exceptiongroup", marker = "python_full_version < '3.11'" },
{ name = "idna" },
{ name = "outcome" },
@@ -5161,7 +5155,7 @@ name = "triton"
version = "3.0.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "filelock", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
{ name = "filelock", marker = "(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux')" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/45/27/14cc3101409b9b4b9241d2ba7deaa93535a217a211c86c4cc7151fb12181/triton-3.0.0-1-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:e1efef76935b2febc365bfadf74bcb65a6f959a9872e5bddf44cc9e0adce1e1a", size = 209376304 },