Compare commits

...

2 Commits

Author SHA1 Message Date
João Moura
82f9b26848 Merge branch 'main' into devin/1735488202-add-tool-documentation 2024-12-31 01:52:01 -03:00
Devin AI
09fd6058b0 Add comprehensive documentation for all tools
- Added documentation for file operation tools
- Added documentation for search tools
- Added documentation for web scraping tools
- Added documentation for specialized tools (RAG, code interpreter)
- Added documentation for API-based tools (SerpApi, Serply)

Link to Devin run: https://app.devin.ai/sessions/d2f72a2dfb214659aeb3e9f67ed961f7

Co-Authored-By: Joe Moura <joao@crewai.com>
2024-12-29 16:03:22 +00:00
32 changed files with 6528 additions and 0 deletions


@@ -0,0 +1,222 @@
---
title: BraveSearchTool
description: A tool for performing web searches using the Brave Search API
icon: search
---
## BraveSearchTool
The BraveSearchTool enables web searches using the Brave Search API, providing customizable result counts, country-specific searches, and rate-limited operations. It formats search results with titles, URLs, and snippets for easy consumption.
## Installation
```bash
pip install 'crewai[tools]'
```
## Authentication
Set up your Brave Search API key:
```bash
export BRAVE_API_KEY='your-brave-api-key'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import BraveSearchTool

# Basic initialization
search_tool = BraveSearchTool()

# Advanced initialization with custom parameters
search_tool = BraveSearchTool(
    country="US",     # Country-specific search
    n_results=5,      # Number of results to return
    save_file=True    # Save results to file
)

# Create an agent with the tool
researcher = Agent(
    role='Web Researcher',
    goal='Search and analyze web content',
    backstory='Expert at finding relevant information online.',
    tools=[search_tool],
    verbose=True
)
```
## Input Schema
```python
class BraveSearchToolSchema(BaseModel):
    search_query: str = Field(
        description="Mandatory search query you want to use to search the internet"
    )
```
## Function Signature
```python
def __init__(
    self,
    country: Optional[str] = "",
    n_results: int = 10,
    save_file: bool = False,
    *args,
    **kwargs
):
    """
    Initialize the Brave search tool.

    Args:
        country (Optional[str]): Country code for region-specific search
        n_results (int): Number of results to return (default: 10)
        save_file (bool): Whether to save results to file (default: False)
    """

def _run(
    self,
    **kwargs: Any
) -> str:
    """
    Execute web search using Brave Search API.

    Args:
        search_query (str): Query to search
        save_file (bool, optional): Override save_file setting
        n_results (int, optional): Override n_results setting

    Returns:
        str: Formatted search results with titles, URLs, and snippets
    """
```
## Best Practices
1. API Authentication:
- Securely store BRAVE_API_KEY
- Keep API key confidential
- Handle authentication errors
2. Rate Limiting:
- Tool automatically handles rate limiting
- Minimum 1-second interval between requests
- Consider implementing additional rate limits
3. Search Optimization:
- Use specific search queries
- Adjust result count based on needs
- Consider regional search requirements
4. Error Handling:
- Handle API request failures
- Manage parsing errors
- Monitor rate limit errors
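The rate-limiting advice above can be approximated on the client side with a small guard that spaces out successive calls. `MinIntervalLimiter` is a hypothetical stdlib-only helper, not part of crewai_tools; in real use you would call `search_tool.run(...)` after each `wait()`:

```python
import time

class MinIntervalLimiter:
    """Client-side guard enforcing a minimum delay between successive calls."""

    def __init__(self, min_interval: float = 1.0):
        self.min_interval = min_interval
        self._last_call = float("-inf")

    def wait(self) -> None:
        # Sleep just long enough so calls are at least `min_interval` apart.
        elapsed = time.monotonic() - self._last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last_call = time.monotonic()

limiter = MinIntervalLimiter(min_interval=0.1)
start = time.monotonic()
for _ in range(3):
    limiter.wait()  # in real use, call search_tool.run(...) here
elapsed = time.monotonic() - start
```

The first call passes through immediately; subsequent calls are throttled, which layers neatly on top of the tool's own built-in rate limiting.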
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import BraveSearchTool

# Initialize tool with custom configuration
search_tool = BraveSearchTool(
    country="GB",    # UK-specific search
    n_results=3,     # Limit to 3 results
    save_file=True   # Save results to file
)

# Create agent
researcher = Agent(
    role='Web Researcher',
    goal='Research latest AI developments',
    backstory='Expert at finding and analyzing tech news.',
    tools=[search_tool]
)

# Define task
research_task = Task(
    description="""Find the latest news about artificial
    intelligence developments in quantum computing.""",
    agent=researcher
)

# The tool will use:
# {
#     "search_query": "latest quantum computing AI developments"
# }

# Create crew
crew = Crew(
    agents=[researcher],
    tasks=[research_task]
)

# Execute
result = crew.kickoff()
```
## Advanced Usage
### Country-Specific Search
```python
# Initialize tools for different regions
us_search = BraveSearchTool(country="US")
uk_search = BraveSearchTool(country="GB")
jp_search = BraveSearchTool(country="JP")

# Compare results across regions
us_results = us_search.run(search_query="local news")
uk_results = uk_search.run(search_query="local news")
jp_results = jp_search.run(search_query="local news")
```
### Result Management
```python
# Save results to file
archival_search = BraveSearchTool(
    save_file=True,
    n_results=20
)

# Search and save
results = archival_search.run(
    search_query="historical events 2023"
)
# Results saved to search_results_YYYY-MM-DD_HH-MM-SS.txt
```
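When `save_file=True`, results land in timestamped files like the one named above. A small helper (our own stdlib-only sketch, assuming that filename pattern) can locate the most recent one for post-processing:

```python
import glob
import os
from typing import Optional

def latest_results_file(directory: str = ".") -> Optional[str]:
    """Return the most recently modified search_results_*.txt file, or None."""
    candidates = glob.glob(os.path.join(directory, "search_results_*.txt"))
    return max(candidates, key=os.path.getmtime) if candidates else None
```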
### Error Handling Example
```python
try:
    search_tool = BraveSearchTool()
    results = search_tool.run(
        search_query="important topic"
    )
    print(results)
except ValueError as e:  # API key missing
    print(f"Authentication error: {str(e)}")
except Exception as e:
    print(f"Search error: {str(e)}")
```
## Notes
- Requires Brave Search API key
- Implements automatic rate limiting
- Supports country-specific searches
- Customizable result count
- Optional file saving feature
- Thread-safe operations
- Efficient result formatting
- Handles API errors gracefully
- Supports parallel searches
- Maintains search context


@@ -0,0 +1,164 @@
---
title: CodeDocsSearchTool
description: A semantic search tool for code documentation websites using RAG capabilities
icon: book-open
---
## CodeDocsSearchTool
The CodeDocsSearchTool is a specialized Retrieval-Augmented Generation (RAG) tool that enables semantic search within code documentation websites. It inherits from the base RagTool class and provides both fixed and dynamic documentation URL searching capabilities.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import CodeDocsSearchTool

# Method 1: Dynamic documentation URL
docs_search = CodeDocsSearchTool()

# Method 2: Fixed documentation URL
fixed_docs_search = CodeDocsSearchTool(
    docs_url="https://docs.example.com"
)

# Create an agent with the tool
researcher = Agent(
    role='Documentation Researcher',
    goal='Search through code documentation semantically',
    backstory='Expert at finding relevant information in technical documentation.',
    tools=[docs_search],
    verbose=True
)
```
## Input Schema
The tool supports two input schemas depending on initialization:
### Dynamic URL Schema
```python
class CodeDocsSearchToolSchema(BaseModel):
    search_query: str  # The semantic search query
    docs_url: str      # URL of the documentation site to search
```
### Fixed URL Schema
```python
class FixedCodeDocsSearchToolSchema(BaseModel):
    search_query: str  # The semantic search query
```
## Function Signature
```python
def __init__(self, docs_url: Optional[str] = None, **kwargs):
    """
    Initialize the documentation search tool.

    Args:
        docs_url (Optional[str]): Fixed URL to a documentation site. If provided,
            the tool will only search this documentation.
        **kwargs: Additional arguments passed to the parent RagTool
    """

def _run(self, search_query: str, **kwargs: Any) -> Any:
    """
    Perform semantic search on the documentation site.

    Args:
        search_query (str): The semantic search query
        **kwargs: Additional arguments (including 'docs_url' for dynamic mode)

    Returns:
        str: Relevant documentation passages based on semantic search
    """
```
## Best Practices
1. Choose initialization method based on use case:
- Use fixed URL when repeatedly searching the same documentation
- Use dynamic URL when searching different documentation sites
2. Write clear, semantic search queries
3. Ensure documentation sites are accessible
4. Consider documentation structure and size
5. Handle potential URL access errors in agent prompts
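Point 3 above ("Ensure documentation sites are accessible") can be checked before indexing. The helper below is a best-effort, stdlib-only sketch of our own, not part of crewai_tools:

```python
from urllib.request import Request, urlopen

def docs_url_reachable(url: str, timeout: float = 5.0) -> bool:
    """Best-effort check that a documentation URL responds before indexing it."""
    try:
        request = Request(url, method="HEAD", headers={"User-Agent": "docs-check"})
        with urlopen(request, timeout=timeout) as response:
            return 200 <= response.status < 400
    except Exception:
        # Covers DNS failures, timeouts, HTTP errors, and malformed URLs.
        return False
```

An agent workflow could skip (or report) a `docs_url` for which this returns `False` instead of paying the embedding cost up front.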
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import CodeDocsSearchTool

# Example 1: Fixed documentation search
api_docs_search = CodeDocsSearchTool(
    docs_url="https://api.example.com/docs"
)

# Example 2: Dynamic documentation search
flexible_docs_search = CodeDocsSearchTool()

# Create agents
api_analyst = Agent(
    role='API Documentation Analyst',
    goal='Find relevant API endpoints and usage examples',
    backstory='Expert at analyzing API documentation.',
    tools=[api_docs_search]
)

docs_researcher = Agent(
    role='Documentation Researcher',
    goal='Search through various documentation sites',
    backstory='Specialist in finding information across multiple docs.',
    tools=[flexible_docs_search]
)

# Define tasks
fixed_search_task = Task(
    description="""Find all authentication-related endpoints
    in the API documentation.""",
    agent=api_analyst
)
# The agent will use:
# {
#     "search_query": "authentication endpoints and methods"
# }

dynamic_search_task = Task(
    description="""Search through the Python documentation at
    docs.python.org for information about async/await.""",
    agent=docs_researcher
)
# The agent will use:
# {
#     "search_query": "async await syntax and usage",
#     "docs_url": "https://docs.python.org"
# }

# Create crew
crew = Crew(
    agents=[api_analyst, docs_researcher],
    tasks=[fixed_search_task, dynamic_search_task]
)

# Execute
result = crew.kickoff()
```
## Notes
- Inherits from RagTool for semantic search capabilities
- Supports both fixed and dynamic documentation URLs
- Uses embeddings for semantic search
- Thread-safe operations
- Automatically handles documentation loading and embedding
- Optimized for technical documentation search


@@ -0,0 +1,224 @@
---
title: CodeInterpreterTool
description: A tool for secure Python code execution in isolated Docker environments
icon: code
---
## CodeInterpreterTool
The CodeInterpreterTool provides secure Python code execution capabilities using Docker containers. It supports dynamic library installation and offers both safe (Docker-based) and unsafe (direct) execution modes.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import CodeInterpreterTool

# Initialize the tool
code_tool = CodeInterpreterTool()

# Create an agent with the tool
programmer = Agent(
    role='Code Executor',
    goal='Execute and analyze Python code',
    backstory='Expert at writing and executing Python code.',
    tools=[code_tool],
    verbose=True
)
```
## Input Schema
```python
class CodeInterpreterSchema(BaseModel):
    code: str = Field(
        description="Python3 code used to be interpreted in the Docker container. ALWAYS PRINT the final result and the output of the code"
    )
    libraries_used: List[str] = Field(
        description="List of libraries used in the code with proper installing names separated by commas. Example: numpy,pandas,beautifulsoup4"
    )
```
## Function Signature
```python
def __init__(
    self,
    code: Optional[str] = None,
    user_dockerfile_path: Optional[str] = None,
    user_docker_base_url: Optional[str] = None,
    unsafe_mode: bool = False,
    **kwargs
):
    """
    Initialize the code interpreter tool.

    Args:
        code (Optional[str]): Default code to execute
        user_dockerfile_path (Optional[str]): Custom Dockerfile path
        user_docker_base_url (Optional[str]): Custom Docker daemon URL
        unsafe_mode (bool): Enable direct code execution
        **kwargs: Additional arguments for base tool
    """

def _run(
    self,
    code: str,
    libraries_used: List[str],
    **kwargs: Any
) -> str:
    """
    Execute Python code in Docker container or directly.

    Args:
        code (str): Python code to execute
        libraries_used (List[str]): Required libraries
        **kwargs: Additional arguments

    Returns:
        str: Execution output or error message
    """
```
## Best Practices
1. Security Considerations:
- Use Docker mode by default
- Validate input code
- Control library access
- Monitor execution time
2. Docker Configuration:
- Use custom Dockerfile when needed
- Handle container lifecycle
- Manage resource limits
- Clean up after execution
3. Library Management:
- Specify exact versions
- Use trusted packages
- Handle dependencies
- Verify installations
4. Error Handling:
- Catch execution errors
- Handle timeouts
- Manage Docker errors
- Provide clear messages
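The timeout advice above can be approximated with a generic wrapper around any callable, such as a `tool.run(...)` invocation. This is our own sketch, not crewai_tools API; note that a Python thread cannot be forcibly killed, so hard resource limits belong at the Docker level (container CPU/memory/time limits):

```python
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout
from typing import Any, Callable

def run_with_timeout(fn: Callable[[], Any], timeout: float) -> Any:
    """Run fn in a worker thread; raise TimeoutError if it takes too long."""
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        return pool.submit(fn).result(timeout=timeout)
    except FutureTimeout:
        # The worker thread keeps running; real cancellation needs Docker limits.
        raise TimeoutError(f"execution exceeded {timeout} seconds")
    finally:
        pool.shutdown(wait=False)
```

Usage would look like `run_with_timeout(lambda: code_tool.run(code=..., libraries_used=[...]), timeout=30)`.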
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import CodeInterpreterTool

# Initialize tool
code_tool = CodeInterpreterTool()

# Create agent
programmer = Agent(
    role='Code Executor',
    goal='Execute data analysis code',
    backstory='Expert Python programmer specializing in data analysis.',
    tools=[code_tool]
)

# Define task
analysis_task = Task(
    description="""Analyze the dataset using pandas and
    create a summary visualization with matplotlib.""",
    agent=programmer
)

# The tool will use:
# {
#     "code": """
# import pandas as pd
# import matplotlib.pyplot as plt
#
# # Load and analyze data
# df = pd.read_csv('data.csv')
# summary = df.describe()
#
# # Create visualization
# plt.figure(figsize=(10, 6))
# df['column'].hist()
# plt.savefig('output.png')
#
# print(summary)
# """,
#     "libraries_used": ["pandas", "matplotlib"]
# }

# Create crew
crew = Crew(
    agents=[programmer],
    tasks=[analysis_task]
)

# Execute
result = crew.kickoff()
```
## Advanced Usage
### Custom Docker Configuration
```python
# Use custom Dockerfile
tool = CodeInterpreterTool(
    user_dockerfile_path="/path/to/Dockerfile"
)

# Use custom Docker daemon
tool = CodeInterpreterTool(
    user_docker_base_url="tcp://remote-docker:2375"
)
```
### Direct Execution Mode
```python
# Enable unsafe mode (not recommended)
tool = CodeInterpreterTool(unsafe_mode=True)

# Execute code directly
result = tool.run(
    code="print('Hello, World!')",
    libraries_used=[]
)
```
### Error Handling Example
```python
try:
    code_tool = CodeInterpreterTool()
    result = code_tool.run(
        code="""
import numpy as np
arr = np.array([1, 2, 3])
print(f"Array mean: {arr.mean()}")
""",
        libraries_used=["numpy"]
    )
    print(result)
except Exception as e:
    print(f"Error executing code: {str(e)}")
```
## Notes
- Inherits from BaseTool
- Docker-based isolation
- Dynamic library installation
- Secure code execution
- Custom Docker support
- Comprehensive error handling
- Resource management
- Container cleanup
- Library dependency handling
- Execution output capture


@@ -0,0 +1,207 @@
---
title: CSVSearchTool
description: A tool for semantic search within CSV files using RAG capabilities
icon: table
---
## CSVSearchTool
The CSVSearchTool enables semantic search capabilities for CSV files using Retrieval-Augmented Generation (RAG). It can process CSV files either specified during initialization or at runtime, making it flexible for various use cases.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import CSVSearchTool

# Method 1: Initialize with specific CSV file
csv_tool = CSVSearchTool(csv="path/to/data.csv")

# Method 2: Initialize without CSV (specify at runtime)
flexible_csv_tool = CSVSearchTool()

# Create an agent with the tool
data_analyst = Agent(
    role='Data Analyst',
    goal='Search and analyze CSV data semantically',
    backstory='Expert at analyzing and extracting insights from CSV data.',
    tools=[csv_tool],
    verbose=True
)
```
## Input Schema
### Fixed CSV Schema (when CSV path provided during initialization)
```python
class FixedCSVSearchToolSchema(BaseModel):
    search_query: str = Field(
        description="Mandatory search query you want to use to search the CSV's content"
    )
```
### Flexible CSV Schema (when CSV path provided at runtime)
```python
class CSVSearchToolSchema(FixedCSVSearchToolSchema):
    csv: str = Field(
        description="Mandatory csv path you want to search"
    )
```
## Function Signature
```python
def __init__(
    self,
    csv: Optional[str] = None,
    **kwargs
):
    """
    Initialize the CSV search tool.

    Args:
        csv (Optional[str]): Path to CSV file (optional)
        **kwargs: Additional arguments for RAG tool configuration
    """

def _run(
    self,
    search_query: str,
    **kwargs: Any
) -> str:
    """
    Execute semantic search on CSV content.

    Args:
        search_query (str): Query to search in the CSV
        **kwargs: Additional arguments including csv path if not initialized

    Returns:
        str: Relevant content from the CSV matching the query
    """
```
## Best Practices
1. CSV File Handling:
- Ensure CSV files are properly formatted
- Use absolute paths for reliability
- Verify file permissions before processing
2. Search Optimization:
- Use specific, focused search queries
- Consider column names and data structure
- Test with sample queries first
3. Performance Considerations:
- Pre-initialize with CSV for repeated searches
- Handle large CSV files appropriately
- Monitor memory usage with big datasets
4. Error Handling:
- Verify CSV file existence
- Handle malformed CSV data
- Manage file access permissions
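The file-handling and error-handling checks above can be bundled into a small pre-flight validator. This is a hypothetical stdlib-only helper of our own, run before handing a path to the tool:

```python
import csv
import os

def validate_csv(path: str, sample_bytes: int = 4096) -> str:
    """Verify a CSV exists and is non-empty; return its sniffed delimiter."""
    if not os.path.isfile(path):
        raise FileNotFoundError(f"CSV not found: {path}")
    with open(path, newline="", encoding="utf-8") as handle:
        sample = handle.read(sample_bytes)
    if not sample:
        raise ValueError(f"CSV is empty: {path}")
    # Sniff the delimiter from a sample of the file.
    return csv.Sniffer().sniff(sample, delimiters=",;\t|").delimiter
```

Catching these errors before initialization gives clearer diagnostics than a failure deep inside the RAG pipeline.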
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import CSVSearchTool

# Initialize tool with specific CSV
csv_tool = CSVSearchTool(csv="/path/to/sales_data.csv")

# Create agent
analyst = Agent(
    role='Data Analyst',
    goal='Extract insights from sales data',
    backstory='Expert at analyzing sales data and trends.',
    tools=[csv_tool]
)

# Define task
analysis_task = Task(
    description="""Find all sales records from the CSV
    that relate to product returns in Q4 2023.""",
    agent=analyst
)
# The tool will use:
# {
#     "search_query": "product returns Q4 2023"
# }

# Create crew
crew = Crew(
    agents=[analyst],
    tasks=[analysis_task]
)

# Execute
result = crew.kickoff()
```
## Advanced Usage
### Dynamic CSV Selection
```python
# Initialize without CSV
flexible_tool = CSVSearchTool()

# Search different CSVs
result1 = flexible_tool.run(
    search_query="revenue 2023",
    csv="/path/to/finance.csv"
)
result2 = flexible_tool.run(
    search_query="customer feedback",
    csv="/path/to/surveys.csv"
)
```
### Multiple CSV Analysis
```python
# Create tools for different CSVs
sales_tool = CSVSearchTool(csv="/path/to/sales.csv")
inventory_tool = CSVSearchTool(csv="/path/to/inventory.csv")

# Create agent with multiple tools
analyst = Agent(
    role='Business Analyst',
    goal='Cross-reference sales and inventory data',
    tools=[sales_tool, inventory_tool]
)
```
### Error Handling Example
```python
try:
    csv_tool = CSVSearchTool(csv="/path/to/data.csv")
    result = csv_tool.run(
        search_query="important metrics"
    )
    print(result)
except Exception as e:
    print(f"Error processing CSV: {str(e)}")
```
## Notes
- Inherits from RagTool for semantic search
- Supports dynamic CSV file specification
- Uses embedchain for data processing
- Maintains search context across queries
- Thread-safe operations
- Efficient semantic search capabilities
- Supports various CSV formats
- Handles large datasets effectively
- Preserves CSV structure in search
- Enables natural language queries


@@ -0,0 +1,217 @@
---
title: Directory Read Tool
description: A tool for recursively listing directory contents
---
# Directory Read Tool
The Directory Read Tool provides functionality to recursively list all files within a directory. It supports both fixed and dynamic directory path modes, allowing you to specify the directory at initialization or runtime.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage
You can use the Directory Read Tool in two ways:
### 1. Fixed Directory Path
Initialize the tool with a specific directory path:
```python
from crewai import Agent, Task
from crewai_tools import DirectoryReadTool

# Initialize with a fixed directory
tool = DirectoryReadTool(directory="/path/to/your/directory")

# Create an agent with the tool
agent = Agent(
    role='File System Analyst',
    goal='Analyze directory contents',
    backstory='I help analyze and organize file systems',
    tools=[tool]
)

# Use in a task
task = Task(
    description="List all files in the project directory",
    agent=agent
)
```
### 2. Dynamic Directory Path
Initialize the tool without a specific directory path to provide it at runtime:
```python
from crewai import Agent, Task
from crewai_tools import DirectoryReadTool

# Initialize without a fixed directory
tool = DirectoryReadTool()

# Create an agent with the tool
agent = Agent(
    role='File System Explorer',
    goal='Explore different directories',
    backstory='I analyze various directory structures',
    tools=[tool]
)

# Use in a task; in dynamic mode the agent passes the directory
# to the tool, so name the target path in the task description
task = Task(
    description="List all files in the directory /path/to/explore",
    agent=agent
)
```
## Input Schema
### Fixed Directory Mode
```python
class FixedDirectoryReadToolSchema(BaseModel):
    pass  # No additional parameters needed when directory is fixed
```
### Dynamic Directory Mode
```python
class DirectoryReadToolSchema(BaseModel):
    directory: str  # The path to the directory to list contents
```
## Function Signatures
```python
def __init__(self, directory: Optional[str] = None, **kwargs):
    """
    Initialize the Directory Read Tool.

    Args:
        directory (Optional[str]): Path to the directory (optional)
        **kwargs: Additional arguments passed to BaseTool
    """

def _run(
    self,
    **kwargs: Any,
) -> str:
    """
    Execute the directory listing.

    Args:
        **kwargs: Arguments including 'directory' for dynamic mode

    Returns:
        str: A formatted string containing all file paths in the directory
    """
```
## Best Practices
1. **Path Handling**:
- Use absolute paths to avoid path resolution issues
- Handle trailing slashes appropriately
- Verify directory existence before listing
2. **Performance Considerations**:
- Be mindful of directory size when listing large directories
- Consider implementing pagination for large directories
- Handle symlinks appropriately
3. **Error Handling**:
- Handle directory not found errors gracefully
- Manage permission issues appropriately
- Validate input parameters before processing
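To act on the size warning above, you might count files before asking the tool to list a large tree. `count_files` is an illustrative stdlib helper of our own, not part of crewai_tools:

```python
import os

def count_files(directory: str, follow_symlinks: bool = False) -> int:
    """Recursively count files so callers can warn or paginate on huge trees."""
    if not os.path.isdir(directory):
        raise NotADirectoryError(f"Not a directory: {directory}")
    total = 0
    # os.walk skips symlinked directories unless followlinks=True.
    for _root, _dirs, files in os.walk(directory, followlinks=follow_symlinks):
        total += len(files)
    return total
```

A caller could then refuse to list (or chunk the listing of) any directory whose count exceeds some threshold.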
## Example Integration
Here's a complete example showing how to integrate the Directory Read Tool with CrewAI:
```python
from crewai import Agent, Task, Crew
from crewai_tools import DirectoryReadTool

# Initialize the tool
dir_tool = DirectoryReadTool()

# Create an agent with the tool
file_analyst = Agent(
    role='File System Analyst',
    goal='Analyze and report on directory structures',
    backstory='I am an expert at analyzing file system organization',
    tools=[dir_tool]
)

# Create tasks; in dynamic mode the agent passes the directory
# to the tool, so name the target path in the task description
analysis_task = Task(
    description="""
    Analyze the project directory structure at /path/to/project:
    1. List all files recursively
    2. Identify key file types
    3. Report on directory organization
    Provide a comprehensive analysis of the findings.
    """,
    agent=file_analyst
)

# Create and run the crew
crew = Crew(
    agents=[file_analyst],
    tasks=[analysis_task]
)
result = crew.kickoff()
```
## Error Handling
The tool handles various error scenarios:
1. **Directory Not Found**:
```python
try:
    tool = DirectoryReadTool(directory="/nonexistent/path")
except FileNotFoundError:
    print("Directory not found. Please verify the path.")
```
2. **Permission Issues**:
```python
try:
    tool = DirectoryReadTool(directory="/restricted/path")
except PermissionError:
    print("Insufficient permissions to access the directory.")
```
3. **Invalid Path**:
```python
try:
    result = tool.run(directory="invalid/path")
except ValueError:
    print("Invalid directory path provided.")
```
## Output Format
The tool returns a formatted string containing all file paths in the directory:
```
File paths:
- /path/to/directory/file1.txt
- /path/to/directory/subdirectory/file2.txt
- /path/to/directory/subdirectory/file3.py
```
Each file path is listed on a new line with a hyphen prefix, making it easy to parse and read the output.
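Given that layout, a minimal parser (our own sketch, assuming the exact `File paths:` format shown above) could be:

```python
from typing import List

def parse_file_listing(output: str) -> List[str]:
    """Split the tool's 'File paths:' output into a list of paths."""
    # Keep only the hyphen-prefixed lines and strip the '- ' marker.
    return [line[2:].strip() for line in output.splitlines() if line.startswith("- ")]

sample = "File paths:\n- /path/to/directory/file1.txt\n- /path/to/directory/subdirectory/file2.txt"
paths = parse_file_listing(sample)
```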


@@ -0,0 +1,214 @@
---
title: DirectorySearchTool
description: A tool for semantic search within directory contents using RAG capabilities
icon: folder-search
---
## DirectorySearchTool
The DirectorySearchTool enables semantic search capabilities for directory contents using Retrieval-Augmented Generation (RAG). It processes files recursively within a directory and allows searching through their contents using natural language queries.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import DirectorySearchTool

# Method 1: Initialize with specific directory
dir_tool = DirectorySearchTool(directory="/path/to/documents")

# Method 2: Initialize without directory (specify at runtime)
flexible_dir_tool = DirectorySearchTool()

# Create an agent with the tool
researcher = Agent(
    role='Directory Researcher',
    goal='Search and analyze directory contents',
    backstory='Expert at finding relevant information in document collections.',
    tools=[dir_tool],
    verbose=True
)
```
## Input Schema
### Fixed Directory Schema (when path provided during initialization)
```python
class FixedDirectorySearchToolSchema(BaseModel):
    search_query: str = Field(
        description="Mandatory search query you want to use to search the directory's content"
    )
```
### Flexible Directory Schema (when path provided at runtime)
```python
class DirectorySearchToolSchema(FixedDirectorySearchToolSchema):
    directory: str = Field(
        description="Mandatory directory you want to search"
    )
```
## Function Signature
```python
def __init__(
    self,
    directory: Optional[str] = None,
    **kwargs
):
    """
    Initialize the directory search tool.

    Args:
        directory (Optional[str]): Path to directory (optional)
        **kwargs: Additional arguments for RAG tool configuration
    """

def _run(
    self,
    search_query: str,
    **kwargs: Any
) -> str:
    """
    Execute semantic search on directory contents.

    Args:
        search_query (str): Query to search in the directory
        **kwargs: Additional arguments including directory if not initialized

    Returns:
        str: Relevant content from the directory matching the query
    """
```
## Best Practices
1. Directory Management:
- Use absolute paths
- Verify directory existence
- Handle permissions properly
2. Search Optimization:
- Use specific queries
- Consider file types
- Test with sample queries
3. Performance Considerations:
- Pre-initialize for repeated searches
- Handle large directories
- Monitor processing time
4. Error Handling:
- Verify directory access
- Handle missing files
- Manage permissions
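As a pre-flight step for the practices above, you could enumerate which files under a directory are readable and of a supported type before indexing. `searchable_files` and its extension list are illustrative assumptions of our own, not crewai_tools behavior:

```python
import os
from typing import List, Tuple

def searchable_files(
    directory: str,
    extensions: Tuple[str, ...] = (".txt", ".md", ".csv"),
) -> List[str]:
    """List readable files under `directory` whose extension is supported."""
    if not os.path.isdir(directory):
        raise NotADirectoryError(f"Not a directory: {directory}")
    matches = []
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            # Keep only supported, readable files.
            if name.lower().endswith(extensions) and os.access(path, os.R_OK):
                matches.append(path)
    return sorted(matches)
```

Running this first surfaces permission problems and empty directories as explicit errors rather than silent gaps in search results.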
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import DirectorySearchTool

# Initialize tool with specific directory
dir_tool = DirectorySearchTool(
    directory="/path/to/documents"
)

# Create agent
researcher = Agent(
    role='Directory Researcher',
    goal='Extract insights from document collections',
    backstory='Expert at analyzing document collections.',
    tools=[dir_tool]
)

# Define task
research_task = Task(
    description="""Find all mentions of machine learning
    applications from the directory contents.""",
    agent=researcher
)
# The tool will use:
# {
#     "search_query": "machine learning applications"
# }

# Create crew
crew = Crew(
    agents=[researcher],
    tasks=[research_task]
)

# Execute
result = crew.kickoff()
```
## Advanced Usage
### Dynamic Directory Selection
```python
# Initialize without directory path
flexible_tool = DirectorySearchTool()

# Search different directories
docs_results = flexible_tool.run(
    search_query="technical specifications",
    directory="/path/to/docs"
)
reports_results = flexible_tool.run(
    search_query="financial metrics",
    directory="/path/to/reports"
)
```
### Multiple Directory Analysis
```python
# Create tools for different directories
docs_tool = DirectorySearchTool(
    directory="/path/to/docs"
)
reports_tool = DirectorySearchTool(
    directory="/path/to/reports"
)

# Create agent with multiple tools
analyst = Agent(
    role='Content Analyst',
    goal='Cross-reference multiple document collections',
    tools=[docs_tool, reports_tool]
)
```
### Error Handling Example
```python
try:
    dir_tool = DirectorySearchTool()
    results = dir_tool.run(
        search_query="key concepts",
        directory="/path/to/documents"
    )
    print(results)
except Exception as e:
    print(f"Error processing directory: {str(e)}")
```
## Notes
- Inherits from RagTool
- Uses DirectoryLoader
- Supports recursive search
- Dynamic directory specification
- Efficient content retrieval
- Thread-safe operations
- Maintains search context
- Processes multiple file types
- Handles nested directories
- Memory-efficient processing


@@ -0,0 +1,224 @@
---
title: DOCXSearchTool
description: A tool for semantic search within DOCX documents using RAG capabilities
icon: file-text
---
## DOCXSearchTool
The DOCXSearchTool enables semantic search capabilities for Microsoft Word (DOCX) documents using Retrieval-Augmented Generation (RAG). It supports both fixed and dynamic document selection modes.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import DOCXSearchTool

# Method 1: Fixed document (specified at initialization)
fixed_tool = DOCXSearchTool(
    docx="path/to/document.docx"
)

# Method 2: Dynamic document (specified at runtime)
dynamic_tool = DOCXSearchTool()

# Create an agent with the tool
researcher = Agent(
    role='Document Researcher',
    goal='Search and analyze document contents',
    backstory='Expert at finding relevant information in documents.',
    tools=[fixed_tool],  # or [dynamic_tool]
    verbose=True
)
```
## Input Schema
### Fixed Document Mode
```python
class FixedDOCXSearchToolSchema(BaseModel):
    search_query: str = Field(
        description="Mandatory search query you want to use to search the DOCX's content"
    )
```
### Dynamic Document Mode
```python
class DOCXSearchToolSchema(BaseModel):
    docx: str = Field(
        description="Mandatory docx path you want to search"
    )
    search_query: str = Field(
        description="Mandatory search query you want to use to search the DOCX's content"
    )
```
## Function Signature
```python
def __init__(
    self,
    docx: Optional[str] = None,
    **kwargs
):
    """
    Initialize the DOCX search tool.

    Args:
        docx (Optional[str]): Path to DOCX file (optional for dynamic mode)
        **kwargs: Additional arguments for RAG tool configuration
    """

def _run(
    self,
    search_query: str,
    docx: Optional[str] = None,
    **kwargs: Any
) -> str:
    """
    Execute semantic search on document contents.

    Args:
        search_query (str): Query to search in the document
        docx (Optional[str]): Document path (required for dynamic mode)
        **kwargs: Additional arguments

    Returns:
        str: Relevant content from the document matching the query
    """
```
## Best Practices
1. Document Handling:
- Use absolute file paths
- Verify file existence
- Handle large documents
- Monitor memory usage
2. Query Optimization:
- Structure queries clearly
- Consider document size
- Handle formatting
- Monitor performance
3. Error Handling:
- Check file access
- Validate file format
- Handle corrupted files
- Log issues
4. Mode Selection:
- Choose fixed mode for static documents
- Use dynamic mode for runtime selection
- Consider memory implications
- Manage document lifecycle
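The format-validation advice above can exploit the fact that a DOCX file is a ZIP archive containing `word/document.xml`. `looks_like_docx` is an illustrative stdlib-only check of our own, not part of crewai_tools:

```python
import os
import zipfile

def looks_like_docx(path: str) -> bool:
    """Heuristic format check: a DOCX is a ZIP archive with word/document.xml."""
    if not os.path.isfile(path) or not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as archive:
        # The main document part is mandatory in the DOCX package layout.
        return "word/document.xml" in archive.namelist()
```

Running this before tool initialization filters out renamed or corrupted files cheaply, before any parsing or embedding work begins.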
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import DOCXSearchTool

# Initialize tool
docx_tool = DOCXSearchTool(
    docx="reports/annual_report_2023.docx"
)

# Create agent
researcher = Agent(
    role='Document Analyst',
    goal='Extract insights from annual report',
    backstory='Expert at analyzing business documents.',
    tools=[docx_tool]
)

# Define task
analysis_task = Task(
    description="""Find all mentions of revenue growth
    and market expansion.""",
    agent=researcher
)

# Create crew
crew = Crew(
    agents=[researcher],
    tasks=[analysis_task]
)

# Execute
result = crew.kickoff()
```
## Advanced Usage
### Multiple Document Analysis
```python
# Create tools for different documents
report_tool = DOCXSearchTool(
    docx="reports/annual_report.docx"
)
policy_tool = DOCXSearchTool(
    docx="policies/compliance.docx"
)

# Create agent with multiple tools
analyst = Agent(
    role='Document Analyst',
    goal='Cross-reference reports and policies',
    tools=[report_tool, policy_tool]
)
```
### Dynamic Document Loading
```python
# Initialize dynamic tool
dynamic_tool = DOCXSearchTool()

# Use with different documents
result1 = dynamic_tool.run(
    docx="document1.docx",
    search_query="project timeline"
)
result2 = dynamic_tool.run(
    docx="document2.docx",
    search_query="budget allocation"
)
```
### Error Handling Example
```python
try:
    docx_tool = DOCXSearchTool(
        docx="reports/quarterly_report.docx"
    )
    results = docx_tool.run(
        search_query="Q3 performance metrics"
    )
    print(results)
except FileNotFoundError as e:
    print(f"Document not found: {str(e)}")
except Exception as e:
    print(f"Error processing document: {str(e)}")
```
## Notes
- Inherits from RagTool
- Supports fixed/dynamic modes
- Document path validation
- Memory management
- Performance optimization
- Error handling
- Search capabilities
- Content extraction
- Format handling
- Security features


@@ -0,0 +1,193 @@
---
title: FileReadTool
description: A tool for reading file contents with flexible path specification
icon: file-text
---
## FileReadTool
The FileReadTool provides functionality to read file contents with support for both fixed and dynamic file path specification. It includes comprehensive error handling for common file operations and maintains clear descriptions of its configured state.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import FileReadTool
# Method 1: Initialize with specific file
reader = FileReadTool(file_path="/path/to/data.txt")
# Method 2: Initialize without file (specify at runtime)
flexible_reader = FileReadTool()
# Create an agent with the tool
file_processor = Agent(
role='File Processor',
goal='Read and process file contents',
backstory='Expert at handling file operations and content processing.',
tools=[reader],
verbose=True
)
```
## Input Schema
```python
class FileReadToolSchema(BaseModel):
file_path: str = Field(
        description="Mandatory full path of the file to read"
)
```
## Function Signature
```python
def __init__(
self,
file_path: Optional[str] = None,
**kwargs: Any
) -> None:
"""
Initialize the file read tool.
Args:
file_path (Optional[str]): Path to file to read (optional)
**kwargs: Additional arguments passed to BaseTool
"""
def _run(
self,
**kwargs: Any
) -> str:
"""
Read and return file contents.
Args:
file_path (str, optional): Override default file path
**kwargs: Additional arguments
Returns:
str: File contents or error message
"""
```
## Best Practices
1. File Path Management:
- Use absolute paths for reliability
- Verify file existence before operations
- Handle path resolution properly
2. Error Handling:
- Check for file existence
- Handle permission issues
- Manage encoding errors
- Process file access failures
3. Performance Considerations:
- Close files after reading
- Handle large files appropriately
- Consider memory constraints
4. Security Practices:
- Validate file paths
- Check file permissions
- Avoid path traversal issues
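The security practices above, validating paths and avoiding traversal, can be sketched with the standard library alone. `safe_read` is a hypothetical helper for illustration, not part of the tool:

```python
from pathlib import Path

def safe_read(base_dir: str, requested: str) -> str:
    """Read a file only if it resolves inside base_dir (guards path traversal)."""
    base = Path(base_dir).resolve()
    target = (base / requested).resolve()
    # A request like "../secrets.txt" resolves outside base_dir and is rejected
    if base not in target.parents and target != base:
        raise PermissionError(f"Path escapes base directory: {requested}")
    return target.read_text(encoding="utf-8")
```

An agent-facing wrapper around `FileReadTool` could apply the same check before delegating to the tool.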
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import FileReadTool
# Initialize tool with specific file
reader = FileReadTool(file_path="/path/to/config.txt")
# Create agent
processor = Agent(
role='File Processor',
goal='Process configuration files',
backstory='Expert at reading and analyzing configuration files.',
tools=[reader]
)
# Define task
read_task = Task(
description="""Read and analyze the contents of
the configuration file.""",
agent=processor
)
# The tool will use the default file path
# Create crew
crew = Crew(
agents=[processor],
tasks=[read_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Dynamic File Selection
```python
# Initialize without file path
flexible_reader = FileReadTool()
# Read different files
config_content = flexible_reader.run(
file_path="/path/to/config.txt"
)
log_content = flexible_reader.run(
file_path="/path/to/logs.txt"
)
```
### Multiple File Processing
```python
# Create tools for different files
config_reader = FileReadTool(file_path="/path/to/config.txt")
log_reader = FileReadTool(file_path="/path/to/logs.txt")
# Create agent with multiple tools
processor = Agent(
role='File Analyst',
goal='Analyze multiple file types',
tools=[config_reader, log_reader]
)
```
### Error Handling Example
```python
try:
reader = FileReadTool()
content = reader.run(
file_path="/path/to/file.txt"
)
print(content)
except Exception as e:
print(f"Error reading file: {str(e)}")
```
## Notes
- Inherits from BaseTool
- Supports fixed or dynamic file paths
- Comprehensive error handling
- Thread-safe operations
- Clear error messages
- Flexible path specification
- Maintains tool description
- Handles common file errors
- Supports various file types
- Memory-efficient operations


@@ -0,0 +1,141 @@
---
title: FileWriterTool
description: A tool for writing content to files with support for various file formats.
icon: file-pen
---
## FileWriterTool
The FileWriterTool provides agents with the capability to write content to files, supporting various file formats and ensuring proper file handling.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent, Task
from crewai_tools import FileWriterTool
# Initialize the tool
file_writer = FileWriterTool()
# Create an agent with the tool
writer_agent = Agent(
role='Content Writer',
goal='Write and save content to files',
backstory='Expert at creating and managing file content.',
tools=[file_writer],
verbose=True
)
# Use in a task
task = Task(
description='Write a report and save it to report.txt',
agent=writer_agent
)
```
## Tool Attributes
| Attribute | Type | Description |
| :-------- | :--- | :---------- |
| name | str | "File Writer Tool" |
| description | str | "A tool that writes content to a file." |
## Input Schema
```python
class FileWriterToolInput(BaseModel):
filename: str # Name of the file to write
directory: str = "./" # Optional directory path, defaults to current directory
overwrite: str = "False" # Whether to overwrite existing file ("True"/"False")
content: str # Content to write to the file
```
## Function Signature
```python
def _run(self, **kwargs: Any) -> str:
"""
Write content to a file with specified parameters.
Args:
filename (str): Name of the file to write
content (str): Content to write to the file
directory (str, optional): Directory path. Defaults to "./".
overwrite (str, optional): Whether to overwrite existing file. Defaults to "False".
Returns:
str: Success message with filepath or error message
"""
```
## Error Handling
The tool includes error handling for common file operations:
- FileExistsError: When file exists and overwrite is not allowed
- KeyError: When required parameters are missing
- Directory Creation: Automatically creates directories if they don't exist
- General Exceptions: Catches and reports any other file operation errors
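The overwrite guard described above can be sketched in a few lines, assuming the same string-typed `overwrite` flag the input schema uses. `write_file` is a hypothetical stand-in, not the tool's actual implementation:

```python
import os

def write_file(filename: str, content: str, directory: str = "./",
               overwrite: str = "False") -> str:
    """Refuse to clobber an existing file unless overwrite == 'True'."""
    os.makedirs(directory, exist_ok=True)  # directories are created automatically
    filepath = os.path.join(directory, filename)
    if os.path.exists(filepath) and overwrite != "True":
        raise FileExistsError(f"{filepath} exists and overwrite is not allowed")
    with open(filepath, "w", encoding="utf-8") as f:
        f.write(content)
    return f"Content successfully written to {filepath}"
```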
## Best Practices
1. Always provide absolute file paths
2. Ensure proper file permissions
3. Handle potential errors in your agent prompts
4. Verify file contents after writing
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import FileWriterTool
# Initialize tool
file_writer = FileWriterTool()
# Create agent
writer = Agent(
role='Technical Writer',
goal='Create and save technical documentation',
backstory='Expert technical writer with experience in documentation.',
tools=[file_writer]
)
# Define task
writing_task = Task(
description="""Write a technical guide about Python best practices and save it
to the docs directory. The file should be named 'python_guide.md'.
Include sections on code style, documentation, and testing.
If a file already exists, overwrite it.""",
agent=writer
)
# The agent can use the tool with these parameters:
# {
# "filename": "python_guide.md",
# "directory": "docs",
# "overwrite": "True",
# "content": "# Python Best Practices\n\n## Code Style\n..."
# }
# Create crew
crew = Crew(
agents=[writer],
tasks=[writing_task]
)
# Execute
result = crew.kickoff()
```
## Notes
- The tool automatically creates directories in the file path if they don't exist
- Supports various file formats (txt, md, json, etc.)
- Returns descriptive error messages for better debugging
- Thread-safe file operations


@@ -0,0 +1,181 @@
---
title: FirecrawlCrawlWebsiteTool
description: A web crawling tool powered by Firecrawl API for comprehensive website content extraction
icon: spider-web
---
## FirecrawlCrawlWebsiteTool
The FirecrawlCrawlWebsiteTool provides website crawling capabilities using the Firecrawl API. It allows for customizable crawling with options for polling intervals, idempotency, and URL parameters.
## Installation
```bash
pip install 'crewai[tools]'
pip install firecrawl-py # Required dependency
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import FirecrawlCrawlWebsiteTool
# Method 1: Using environment variable
# export FIRECRAWL_API_KEY='your-api-key'
crawler = FirecrawlCrawlWebsiteTool()
# Method 2: Providing API key directly
crawler = FirecrawlCrawlWebsiteTool(
api_key="your-firecrawl-api-key"
)
# Method 3: With custom configuration
crawler = FirecrawlCrawlWebsiteTool(
api_key="your-firecrawl-api-key",
url="https://example.com", # Base URL
poll_interval=5, # Custom polling interval
idempotency_key="unique-key"
)
# Create an agent with the tool
researcher = Agent(
role='Web Crawler',
goal='Extract and analyze website content',
backstory='Expert at crawling and analyzing web content.',
tools=[crawler],
verbose=True
)
```
## Input Schema
```python
class FirecrawlCrawlWebsiteToolSchema(BaseModel):
url: str = Field(description="Website URL")
```
## Function Signature
```python
def __init__(
self,
api_key: Optional[str] = None,
url: Optional[str] = None,
params: Optional[Dict[str, Any]] = None,
poll_interval: Optional[int] = 2,
idempotency_key: Optional[str] = None,
**kwargs
):
"""
Initialize the website crawling tool.
Args:
api_key (Optional[str]): Firecrawl API key. If not provided, checks FIRECRAWL_API_KEY env var
url (Optional[str]): Base URL to crawl. Can be overridden in _run
params (Optional[Dict[str, Any]]): Additional parameters for FirecrawlApp
poll_interval (Optional[int]): Poll interval for FirecrawlApp
idempotency_key (Optional[str]): Idempotency key for FirecrawlApp
**kwargs: Additional arguments for tool creation
"""
def _run(self, url: str) -> Any:
"""
Crawl a website using Firecrawl.
Args:
url (str): Website URL to crawl (overrides constructor URL if provided)
Returns:
Any: Crawled website content from Firecrawl API
"""
```
## Best Practices
1. Set up API authentication:
- Use environment variable: `export FIRECRAWL_API_KEY='your-api-key'`
- Or provide directly in constructor
2. Configure crawling parameters:
- Set appropriate poll intervals
- Use idempotency keys for retry safety
- Customize URL parameters as needed
3. Handle rate limits and quotas
4. Consider website robots.txt policies
5. Handle potential crawling errors in agent prompts
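Point 4 can be honored with the standard library's robots.txt parser. `allowed_by_robots` is a hypothetical helper that checks a URL against an already-fetched robots.txt body, independent of the Firecrawl API:

```python
from urllib.robotparser import RobotFileParser

def allowed_by_robots(robots_txt: str, user_agent: str, url: str) -> bool:
    """Check a URL against the rules in a robots.txt body."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)
```

Filtering candidate URLs through this check before crawling keeps the agent within a site's stated policy.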
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import FirecrawlCrawlWebsiteTool
# Initialize crawler with configuration
crawler = FirecrawlCrawlWebsiteTool(
api_key="your-firecrawl-api-key",
poll_interval=5,
params={
"max_depth": 3,
"follow_links": True
}
)
# Create agent
web_analyst = Agent(
role='Web Content Analyst',
goal='Extract and analyze website content comprehensively',
backstory='Expert at web crawling and content analysis.',
tools=[crawler]
)
# Define task
crawl_task = Task(
description="""Crawl the documentation website at docs.example.com
and extract all API-related content.""",
agent=web_analyst
)
# The agent will use:
# {
# "url": "https://docs.example.com"
# }
# Create crew
crew = Crew(
agents=[web_analyst],
tasks=[crawl_task]
)
# Execute
result = crew.kickoff()
```
## Configuration Options
### URL Parameters
```python
params = {
"max_depth": 3, # Maximum crawl depth
"follow_links": True, # Follow internal links
"exclude_patterns": [], # URL patterns to exclude
"include_patterns": [] # URL patterns to include
}
```
### Polling Configuration
```python
crawler = FirecrawlCrawlWebsiteTool(
poll_interval=5, # Poll every 5 seconds
idempotency_key="unique-key-123" # For retry safety
)
```
## Notes
- Requires valid Firecrawl API key
- Supports both environment variable and direct API key configuration
- Configurable polling intervals for crawl status
- Idempotency support for safe retries
- Thread-safe operations
- Customizable crawling parameters
- Respects robots.txt by default


@@ -0,0 +1,154 @@
---
title: FirecrawlSearchTool
description: A web search tool powered by Firecrawl API for comprehensive web search capabilities
icon: magnifying-glass-chart
---
## FirecrawlSearchTool
The FirecrawlSearchTool provides web search capabilities using the Firecrawl API. It allows for customizable search queries with options for result formatting and search parameters.
## Installation
```bash
pip install 'crewai[tools]'
pip install firecrawl-py # Required dependency
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import FirecrawlSearchTool
# Initialize the tool with your API key
search_tool = FirecrawlSearchTool(api_key="your-firecrawl-api-key")
# Create an agent with the tool
researcher = Agent(
role='Web Researcher',
goal='Find relevant information across the web',
backstory='Expert at web research and information gathering.',
tools=[search_tool],
verbose=True
)
```
## Input Schema
```python
class FirecrawlSearchToolSchema(BaseModel):
query: str = Field(description="Search query")
page_options: Optional[Dict[str, Any]] = Field(
default=None,
description="Options for result formatting"
)
search_options: Optional[Dict[str, Any]] = Field(
default=None,
description="Options for searching"
)
```
## Function Signature
```python
def __init__(self, api_key: Optional[str] = None, **kwargs):
"""
Initialize the Firecrawl search tool.
Args:
api_key (Optional[str]): Firecrawl API key
**kwargs: Additional arguments for tool creation
"""
def _run(
self,
query: str,
page_options: Optional[Dict[str, Any]] = None,
result_options: Optional[Dict[str, Any]] = None,
) -> Any:
"""
Perform a web search using Firecrawl.
Args:
query (str): Search query string
page_options (Optional[Dict[str, Any]]): Options for result formatting
result_options (Optional[Dict[str, Any]]): Options for search results
Returns:
Any: Search results from Firecrawl API
"""
```
## Best Practices
1. Always provide a valid API key
2. Use specific, focused search queries
3. Customize page and result options for better results
4. Handle potential API errors in agent prompts
5. Consider rate limits and usage quotas
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import FirecrawlSearchTool
# Initialize tool with API key
search_tool = FirecrawlSearchTool(api_key="your-firecrawl-api-key")
# Create agent
researcher = Agent(
role='Market Researcher',
goal='Research market trends and competitor analysis',
backstory='Expert market analyst with deep research skills.',
tools=[search_tool]
)
# Define task
research_task = Task(
description="""Research the latest developments in electric vehicles,
focusing on market leaders and emerging technologies. Format the results
in a structured way.""",
agent=researcher
)
# The agent will use:
# {
# "query": "electric vehicle market leaders emerging technologies",
# "page_options": {
# "format": "structured",
# "maxLength": 1000
# },
# "result_options": {
# "limit": 5,
# "sortBy": "relevance"
# }
# }
# Create crew
crew = Crew(
agents=[researcher],
tasks=[research_task]
)
# Execute
result = crew.kickoff()
```
## Error Handling
The tool includes error handling for:
- Missing API key
- Missing firecrawl-py package
- API request failures
- Invalid options parameters
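The missing-API-key case can be caught before the first request rather than surfacing as an API failure mid-task. `resolve_api_key` is a hypothetical helper sketching that early check:

```python
import os

def resolve_api_key(explicit_key=None, env_var="FIRECRAWL_API_KEY"):
    """Return the API key from the argument or environment, failing fast if absent."""
    key = explicit_key or os.getenv(env_var)
    if not key:
        raise ValueError(f"Missing API key: pass api_key or set {env_var}")
    return key
```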
## Notes
- Requires valid Firecrawl API key
- Supports customizable search parameters
- Provides structured web search results
- Thread-safe operations
- Efficient for large-scale web searches
- Handles rate limiting automatically


@@ -0,0 +1,233 @@
---
title: GithubSearchTool
description: A tool for semantic search within GitHub repositories using RAG capabilities
icon: github
---
## GithubSearchTool
The GithubSearchTool enables semantic search capabilities for GitHub repositories using Retrieval-Augmented Generation (RAG). It processes various content types including code, repository information, pull requests, and issues, allowing natural language queries across repository content.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import GithubSearchTool
# Method 1: Initialize with specific repository
github_tool = GithubSearchTool(
github_repo="owner/repo",
gh_token="your_github_token",
content_types=["code", "pr", "issue"]
)
# Method 2: Initialize without repository (specify at runtime)
flexible_github_tool = GithubSearchTool(
gh_token="your_github_token",
content_types=["code", "repo"]
)
# Create an agent with the tool
researcher = Agent(
role='GitHub Researcher',
goal='Search and analyze repository contents',
backstory='Expert at finding relevant information in GitHub repositories.',
tools=[github_tool],
verbose=True
)
```
## Input Schema
### Fixed Repository Schema (when repo provided during initialization)
```python
class FixedGithubSearchToolSchema(BaseModel):
search_query: str = Field(
description="Mandatory search query you want to use to search the github repo's content"
)
```
### Flexible Repository Schema (when repo provided at runtime)
```python
class GithubSearchToolSchema(FixedGithubSearchToolSchema):
github_repo: str = Field(
        description="Mandatory GitHub repository you want to search"
)
content_types: List[str] = Field(
        description="Mandatory content types to include in the search, options: [code, repo, pr, issue]"
)
```
## Function Signature
```python
def __init__(
    self,
    gh_token: str,
    content_types: List[str],
    github_repo: Optional[str] = None,
    **kwargs
):
    """
    Initialize the GitHub search tool.
    Args:
        gh_token (str): GitHub authentication token
        content_types (List[str]): Content types to search
        github_repo (Optional[str]): Repository to search (optional)
        **kwargs: Additional arguments for RAG tool configuration
    """
def _run(
self,
search_query: str,
**kwargs: Any
) -> str:
"""
Execute semantic search on repository contents.
Args:
search_query (str): Query to search in the repository
**kwargs: Additional arguments including github_repo and content_types if not initialized
Returns:
str: Relevant content from the repository matching the query
"""
```
## Best Practices
1. Authentication:
- Secure token management
- Use environment variables
- Handle token expiration
2. Search Optimization:
- Target specific content types
- Use focused queries
- Consider rate limits
3. Performance Considerations:
- Pre-initialize for repeated searches
- Handle large repositories
- Monitor API usage
4. Error Handling:
- Verify repository access
- Handle API limits
- Manage authentication errors
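Invalid `content_types` values can be rejected before the tool is constructed. `validate_content_types` is a hypothetical helper illustrating that check against the supported options listed in the schema:

```python
VALID_CONTENT_TYPES = {"code", "repo", "pr", "issue"}

def validate_content_types(content_types):
    """Reject unsupported content types before constructing the tool."""
    invalid = set(content_types) - VALID_CONTENT_TYPES
    if invalid:
        raise ValueError(f"Unsupported content types: {sorted(invalid)}")
    return list(content_types)
```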
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import GithubSearchTool
# Initialize tool with specific repository
github_tool = GithubSearchTool(
github_repo="owner/repo",
gh_token="your_github_token",
content_types=["code", "pr", "issue"]
)
# Create agent
researcher = Agent(
role='GitHub Researcher',
goal='Extract insights from repository content',
backstory='Expert at analyzing GitHub repositories.',
tools=[github_tool]
)
# Define task
research_task = Task(
description="""Find all implementations of
machine learning algorithms in the codebase.""",
agent=researcher
)
# The tool will use:
# {
# "search_query": "machine learning implementation"
# }
# Create crew
crew = Crew(
agents=[researcher],
tasks=[research_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Dynamic Repository Selection
```python
# Initialize without repository
flexible_tool = GithubSearchTool(
gh_token="your_github_token",
content_types=["code", "repo"]
)
# Search different repositories
backend_results = flexible_tool.run(
search_query="authentication implementation",
github_repo="owner/backend-repo"
)
frontend_results = flexible_tool.run(
search_query="component architecture",
github_repo="owner/frontend-repo"
)
```
### Multiple Content Type Analysis
```python
# Create tool with multiple content types
multi_tool = GithubSearchTool(
github_repo="owner/repo",
gh_token="your_github_token",
content_types=["code", "pr", "issue", "repo"]
)
# Search across all content types
results = multi_tool.run(
search_query="feature implementation status"
)
```
### Error Handling Example
```python
try:
github_tool = GithubSearchTool(
gh_token="your_github_token",
content_types=["code"]
)
results = github_tool.run(
search_query="api endpoints",
github_repo="owner/repo"
)
print(results)
except Exception as e:
print(f"Error searching repository: {str(e)}")
```
## Notes
- Inherits from RagTool
- Uses GithubLoader
- Requires authentication
- Supports multiple content types
- Dynamic repository specification
- Efficient content retrieval
- Thread-safe operations
- Maintains search context
- Handles API rate limits
- Memory-efficient processing


@@ -0,0 +1,220 @@
---
title: JinaScrapeWebsiteTool
description: A tool for scraping website content using Jina.ai's reader service with markdown output
icon: globe
---
## JinaScrapeWebsiteTool
The JinaScrapeWebsiteTool provides website content scraping capabilities using Jina.ai's reader service. It converts web content into clean markdown format and supports both fixed and dynamic URL modes with optional authentication.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import JinaScrapeWebsiteTool
# Method 1: Fixed URL (specified at initialization)
fixed_tool = JinaScrapeWebsiteTool(
website_url="https://example.com",
api_key="your-jina-api-key" # Optional
)
# Method 2: Dynamic URL (specified at runtime)
dynamic_tool = JinaScrapeWebsiteTool(
api_key="your-jina-api-key" # Optional
)
# Create an agent with the tool
researcher = Agent(
role='Web Content Researcher',
goal='Extract and analyze website content',
backstory='Expert at gathering and processing web information.',
tools=[fixed_tool], # or [dynamic_tool]
verbose=True
)
```
## Input Schema
```python
class JinaScrapeWebsiteToolInput(BaseModel):
website_url: str = Field(
        description="Mandatory website URL to scrape"
)
```
## Function Signature
```python
def __init__(
self,
website_url: Optional[str] = None,
api_key: Optional[str] = None,
custom_headers: Optional[dict] = None,
**kwargs
):
"""
Initialize the website scraping tool.
Args:
website_url (Optional[str]): URL to scrape (optional for dynamic mode)
api_key (Optional[str]): Jina.ai API key for authentication
custom_headers (Optional[dict]): Custom HTTP headers
**kwargs: Additional arguments for base tool
"""
def _run(
self,
website_url: Optional[str] = None
) -> str:
"""
Execute website scraping.
Args:
website_url (Optional[str]): URL to scrape (required for dynamic mode)
Returns:
str: Markdown-formatted website content
"""
```
## Best Practices
1. URL Handling:
- Use complete URLs
- Validate URL format
- Handle redirects
- Monitor timeouts
2. Authentication:
- Secure API key storage
- Use environment variables
- Manage headers properly
- Handle auth errors
3. Content Processing:
- Handle large pages
- Process markdown output
- Manage encoding
- Handle errors
4. Mode Selection:
- Choose fixed mode for static sites
- Use dynamic mode for variable URLs
- Consider caching
- Manage timeouts
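The URL-handling points above, using complete URLs and validating their format, can be enforced with a small check. `validate_url` is a hypothetical helper using only the standard library:

```python
from urllib.parse import urlparse

def validate_url(url: str) -> str:
    """Accept only complete http(s) URLs before handing them to the scraper."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        raise ValueError(f"Incomplete or unsupported URL: {url!r}")
    return url
```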
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import JinaScrapeWebsiteTool
import os
# Initialize tool with API key
scraper_tool = JinaScrapeWebsiteTool(
api_key=os.getenv('JINA_API_KEY'),
custom_headers={
'User-Agent': 'CrewAI Bot 1.0'
}
)
# Create agent
researcher = Agent(
role='Web Content Analyst',
goal='Extract and analyze website content',
backstory='Expert at processing web information.',
tools=[scraper_tool]
)
# Define task
analysis_task = Task(
description="""Analyze the content of
https://example.com/blog for key insights.""",
agent=researcher
)
# Create crew
crew = Crew(
agents=[researcher],
tasks=[analysis_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Multiple Site Analysis
```python
# Initialize tool
scraper = JinaScrapeWebsiteTool(
api_key=os.getenv('JINA_API_KEY')
)
# Analyze multiple sites
results = []
sites = [
"https://site1.com",
"https://site2.com",
"https://site3.com"
]
for site in sites:
content = scraper.run(
website_url=site
)
results.append(content)
```
### Custom Headers Configuration
```python
# Initialize with custom headers
tool = JinaScrapeWebsiteTool(
custom_headers={
'User-Agent': 'Custom Bot 1.0',
'Accept-Language': 'en-US,en;q=0.9',
'Accept': 'text/html,application/xhtml+xml'
}
)
# Use the tool
content = tool.run(
website_url="https://example.com"
)
```
### Error Handling Example
```python
try:
scraper = JinaScrapeWebsiteTool()
content = scraper.run(
website_url="https://example.com"
)
print(content)
except requests.exceptions.RequestException as e:
print(f"Error accessing website: {str(e)}")
except Exception as e:
print(f"Error processing content: {str(e)}")
```
## Notes
- Uses Jina.ai reader service
- Markdown output format
- API key authentication
- Custom headers support
- Error handling
- Timeout management
- Content processing
- URL validation
- Redirect handling
- Response formatting


@@ -0,0 +1,224 @@
---
title: JSONSearchTool
description: A tool for semantic search within JSON files using RAG capabilities
icon: braces
---
## JSONSearchTool
The JSONSearchTool enables semantic search capabilities for JSON files using Retrieval-Augmented Generation (RAG). It supports both fixed and dynamic file path modes, allowing flexible usage patterns.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import JSONSearchTool
# Method 1: Fixed path (specified at initialization)
fixed_tool = JSONSearchTool(
json_path="path/to/data.json"
)
# Method 2: Dynamic path (specified at runtime)
dynamic_tool = JSONSearchTool()
# Create an agent with the tool
researcher = Agent(
role='JSON Data Researcher',
goal='Search and analyze JSON data',
backstory='Expert at finding relevant information in JSON files.',
tools=[fixed_tool], # or [dynamic_tool]
verbose=True
)
```
## Input Schema
### Fixed Path Mode
```python
class FixedJSONSearchToolSchema(BaseModel):
search_query: str = Field(
description="Mandatory search query you want to use to search the JSON's content"
)
```
### Dynamic Path Mode
```python
class JSONSearchToolSchema(BaseModel):
json_path: str = Field(
description="Mandatory json path you want to search"
)
search_query: str = Field(
description="Mandatory search query you want to use to search the JSON's content"
)
```
## Function Signature
```python
def __init__(
self,
json_path: Optional[str] = None,
**kwargs
):
"""
Initialize the JSON search tool.
Args:
json_path (Optional[str]): Path to JSON file (optional for dynamic mode)
**kwargs: Additional arguments for RAG tool configuration
"""
def _run(
self,
search_query: str,
**kwargs: Any
) -> str:
"""
Execute semantic search on JSON contents.
Args:
search_query (str): Query to search in the JSON
**kwargs: Additional arguments
Returns:
str: Relevant content from the JSON matching the query
"""
```
## Best Practices
1. File Handling:
- Use absolute file paths
- Verify file existence
- Handle large JSON files
- Monitor memory usage
2. Query Optimization:
- Structure queries clearly
- Consider JSON structure
- Handle nested data
- Monitor performance
3. Error Handling:
- Check file access
- Validate JSON format
- Handle malformed JSON
- Log issues
4. Mode Selection:
- Choose fixed mode for static files
- Use dynamic mode for runtime selection
- Consider caching
- Manage file lifecycle
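The file-handling and error-handling points above can be combined into a fail-fast check that parses the file up front, so malformed JSON is caught before any search runs. `validate_json_file` is a hypothetical helper, not part of the tool:

```python
import json
from pathlib import Path

def validate_json_file(path_str: str):
    """Load and parse the file up front so malformed JSON fails fast."""
    path = Path(path_str)
    if not path.is_file():
        raise FileNotFoundError(f"JSON file not found: {path}")
    with path.open(encoding="utf-8") as f:
        return json.load(f)  # raises json.JSONDecodeError on malformed input
```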
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import JSONSearchTool
# Initialize tool
json_tool = JSONSearchTool(
json_path="data/config.json"
)
# Create agent
researcher = Agent(
role='JSON Data Analyst',
goal='Extract insights from JSON configuration',
backstory='Expert at analyzing JSON data structures.',
tools=[json_tool]
)
# Define task
analysis_task = Task(
description="""Find all configuration settings
related to security.""",
agent=researcher
)
# Create crew
crew = Crew(
agents=[researcher],
tasks=[analysis_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Multiple File Analysis
```python
# Create tools for different JSON files
config_tool = JSONSearchTool(
json_path="config/settings.json"
)
data_tool = JSONSearchTool(
json_path="data/records.json"
)
# Create agent with multiple tools
analyst = Agent(
role='JSON Data Analyst',
goal='Cross-reference configuration and data',
tools=[config_tool, data_tool]
)
```
### Dynamic File Loading
```python
# Initialize dynamic tool
dynamic_tool = JSONSearchTool()
# Use with different JSON files
result1 = dynamic_tool.run(
json_path="file1.json",
search_query="security settings"
)
result2 = dynamic_tool.run(
json_path="file2.json",
search_query="user preferences"
)
```
### Error Handling Example
```python
try:
json_tool = JSONSearchTool(
json_path="config/settings.json"
)
results = json_tool.run(
search_query="encryption settings"
)
print(results)
except FileNotFoundError as e:
print(f"JSON file not found: {str(e)}")
except ValueError as e:
print(f"Invalid JSON format: {str(e)}")
except Exception as e:
print(f"Error processing JSON: {str(e)}")
```
## Notes
- Inherits from RagTool
- Supports fixed/dynamic modes
- JSON path validation
- Memory management
- Performance optimization
- Error handling
- Search capabilities
- Content extraction
- Format validation
- Security features


@@ -0,0 +1,184 @@
---
title: LinkupSearchTool
description: A search tool powered by Linkup API for retrieving contextual information
icon: search
---
## LinkupSearchTool
The LinkupSearchTool provides search capabilities using the Linkup API. It allows for customizable search depth and output formatting, returning structured results with contextual information.
## Installation
```bash
pip install 'crewai[tools]'
pip install linkup # Required dependency
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import LinkupSearchTool
# Initialize the tool with your API key
search_tool = LinkupSearchTool(api_key="your-linkup-api-key")
# Create an agent with the tool
researcher = Agent(
role='Information Researcher',
goal='Find relevant contextual information',
backstory='Expert at retrieving and analyzing contextual data.',
tools=[search_tool],
verbose=True
)
```
## Function Signature
```python
def __init__(self, api_key: str):
"""
Initialize the Linkup search tool.
Args:
api_key (str): Linkup API key for authentication
"""
def _run(
self,
query: str,
depth: str = "standard",
output_type: str = "searchResults"
) -> dict:
"""
Perform a search using the Linkup API.
Args:
query (str): The search query
depth (str): Search depth ("standard" by default)
output_type (str): Desired result type ("searchResults" by default)
Returns:
dict: {
"success": bool,
"results": List[Dict] | None,
"error": str | None
}
On success, results contains list of:
{
"name": str,
"url": str,
"content": str
}
"""
```
## Best Practices
1. Always provide a valid API key
2. Use specific, focused search queries
3. Choose appropriate search depth based on needs
4. Handle potential API errors in agent prompts
5. Process structured results effectively
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import LinkupSearchTool
# Initialize tool with API key
search_tool = LinkupSearchTool(api_key="your-linkup-api-key")
# Create agent
researcher = Agent(
role='Context Researcher',
goal='Find detailed contextual information about topics',
backstory='Expert at discovering and analyzing contextual data.',
tools=[search_tool]
)
# Define task
research_task = Task(
description="""Research the latest developments in quantum computing,
focusing on recent breakthroughs and applications. Use standard depth
for comprehensive results.""",
agent=researcher
)
# The tool will use:
# query: "quantum computing recent breakthroughs applications"
# depth: "standard"
# output_type: "searchResults"
# Create crew
crew = Crew(
agents=[researcher],
tasks=[research_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Search Depth Options
```python
# Standard comprehensive search
results = search_tool._run(
query="quantum computing",
depth="standard"
)
# Deep detailed search
results = search_tool._run(
query="quantum computing",
depth="deep"
)
```
### Output Type Options
```python
# Default search results
results = search_tool._run(
query="quantum computing",
output_type="searchResults"
)
# Sourced answer with citations
results = search_tool._run(
    query="quantum computing",
    output_type="sourcedAnswer"
)
```
### Error Handling
```python
results = search_tool._run(query="quantum computing")
if results["success"]:
for result in results["results"]:
print(f"Name: {result['name']}")
print(f"URL: {result['url']}")
print(f"Content: {result['content']}")
else:
print(f"Error: {results['error']}")
```
## Notes
- Requires valid Linkup API key
- Returns structured search results
- Supports multiple search depths
- Configurable output formats
- Built-in error handling
- Thread-safe operations
- Efficient for contextual searches

---
title: LlamaIndexTool
description: A wrapper tool for integrating LlamaIndex tools and query engines with CrewAI
icon: link
---
## LlamaIndexTool
The LlamaIndexTool serves as a bridge between CrewAI and LlamaIndex, allowing you to use LlamaIndex tools and query engines within your CrewAI agents. It supports both direct tool wrapping and query engine integration.
## Installation
```bash
pip install 'crewai[tools]'
pip install llama-index # Required for LlamaIndex integration
```
## Usage Examples
### Using with LlamaIndex Tools
```python
from crewai import Agent
from crewai_tools import LlamaIndexTool
from llama_index.core.tools import FunctionTool
from pydantic import BaseModel, Field
# Define the input schema for the LlamaIndex tool
class CustomLlamaSchema(BaseModel):
    query: str = Field(..., description="Query to process")
# Create a LlamaIndex tool from a plain function
def process_query(query: str) -> str:
    return f"Processed: {query}"
llama_tool = FunctionTool.from_defaults(
    fn=process_query,
    name="custom_llama_tool",
    description="A custom LlamaIndex tool",
    fn_schema=CustomLlamaSchema,
)
# Wrap the LlamaIndex tool for CrewAI
wrapped_tool = LlamaIndexTool.from_tool(llama_tool)
# Create an agent with the tool
agent = Agent(
role='LlamaIndex Integration Agent',
goal='Process queries using LlamaIndex tools',
backstory='Specialist in integrating LlamaIndex capabilities.',
tools=[wrapped_tool]
)
```
### Using with Query Engines
```python
from crewai import Agent
from crewai_tools import LlamaIndexTool
from llama_index.core import VectorStoreIndex, Document
# Create a query engine
documents = [Document(text="Sample document content")]
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
# Create the tool
query_tool = LlamaIndexTool.from_query_engine(
query_engine,
name="Document Search",
description="Search through indexed documents"
)
# Create an agent with the tool
agent = Agent(
role='Document Researcher',
goal='Find relevant information in documents',
backstory='Expert at searching through document collections.',
tools=[query_tool]
)
```
## Tool Creation Methods
### From LlamaIndex Tool
```python
@classmethod
def from_tool(cls, tool: Any, **kwargs: Any) -> "LlamaIndexTool":
"""
Create a CrewAI tool from a LlamaIndex tool.
Args:
tool (LlamaBaseTool): A LlamaIndex tool to wrap
**kwargs: Additional arguments for tool creation
Returns:
LlamaIndexTool: A CrewAI-compatible tool wrapper
Raises:
ValueError: If tool is not a LlamaBaseTool or lacks fn_schema
"""
```
### From Query Engine
```python
@classmethod
def from_query_engine(
cls,
query_engine: Any,
name: Optional[str] = None,
description: Optional[str] = None,
return_direct: bool = False,
**kwargs: Any
) -> "LlamaIndexTool":
"""
Create a CrewAI tool from a LlamaIndex query engine.
Args:
query_engine (BaseQueryEngine): The query engine to wrap
name (Optional[str]): Custom name for the tool
description (Optional[str]): Custom description
return_direct (bool): Whether to return query engine response directly
**kwargs: Additional arguments for tool creation
Returns:
LlamaIndexTool: A CrewAI-compatible tool wrapper
Raises:
ValueError: If query_engine is not a BaseQueryEngine
"""
```
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import LlamaIndexTool
from llama_index.core import VectorStoreIndex, Document
from llama_index.core.tools import QueryEngineTool
# Create documents and index
documents = [
Document(text="AI is a technology that simulates human intelligence."),
Document(text="Machine learning is a subset of AI.")
]
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
# Create the tool
search_tool = LlamaIndexTool.from_query_engine(
query_engine,
name="AI Knowledge Base",
description="Search through AI-related documents"
)
# Create agent
researcher = Agent(
role='AI Researcher',
goal='Research AI concepts',
backstory='Expert at finding and explaining AI concepts.',
tools=[search_tool]
)
# Define task
research_task = Task(
description="""Find and explain what AI is and its relationship
with machine learning.""",
agent=researcher
)
# The agent will use:
# {
# "query": "What is AI and how does it relate to machine learning?"
# }
# Create crew
crew = Crew(
agents=[researcher],
tasks=[research_task]
)
# Execute
result = crew.kickoff()
```
## Notes
- Automatically adapts LlamaIndex tool schemas for CrewAI compatibility
- Renames 'input' parameter to 'query' for better integration
- Supports both direct tool wrapping and query engine integration
- Handles schema validation and error resolution
- Thread-safe operations
- Compatible with all LlamaIndex tool types and query engines
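The `input` to `query` rename mentioned in the notes can be pictured with a standalone sketch (a hypothetical function; the real adaptation happens inside `LlamaIndexTool.from_tool`):

```python
def rename_input_to_query(schema: dict) -> dict:
    # Sketch of the schema adaptation described above: a JSON-schema-style
    # dict exposing an "input" property is rewritten so the field is
    # presented to CrewAI agents as "query" instead.
    props = dict(schema.get("properties", {}))
    if "input" in props:
        props["query"] = props.pop("input")
    adapted = dict(schema, properties=props)
    if "required" in adapted:
        adapted["required"] = [
            "query" if field == "input" else field
            for field in adapted["required"]
        ]
    return adapted
```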

---
title: MDX Search Tool
description: A tool for semantic searching within MDX files using RAG capabilities
---
# MDX Search Tool
The MDX Search Tool enables semantic searching within MDX (Markdown with JSX) files using Retrieval-Augmented Generation (RAG) capabilities. It supports both fixed and dynamic file path modes, allowing you to specify the MDX file at initialization or runtime.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage
You can use the MDX Search Tool in two ways:
### 1. Fixed MDX File Path
Initialize the tool with a specific MDX file path:
```python
from crewai import Agent, Task
from crewai_tools import MDXSearchTool
# Initialize with a fixed MDX file
tool = MDXSearchTool(mdx="/path/to/your/document.mdx")
# Create an agent with the tool
agent = Agent(
role='Technical Writer',
goal='Search through MDX documentation',
backstory='I help find relevant information in MDX documentation',
tools=[tool]
)
# Use in a task
task = Task(
description="Find information about API endpoints in the documentation",
agent=agent
)
```
### 2. Dynamic MDX File Path
Initialize the tool without a specific file path to provide it at runtime:
```python
from crewai import Agent, Task
from crewai_tools import MDXSearchTool
# Initialize without a fixed MDX file
tool = MDXSearchTool()
# Create an agent with the tool
agent = Agent(
role='Documentation Analyst',
goal='Search through various MDX files',
backstory='I analyze different MDX documentation files',
tools=[tool]
)
# Use in a task with dynamic file path
task = Task(
description="Search for 'authentication' in the API documentation",
agent=agent,
context={
"mdx": "/path/to/api-docs.mdx",
"search_query": "authentication"
}
)
```
## Input Schema
### Fixed MDX File Mode
```python
class FixedMDXSearchToolSchema(BaseModel):
search_query: str # The search query to find content in the MDX file
```
### Dynamic MDX File Mode
```python
class MDXSearchToolSchema(BaseModel):
search_query: str # The search query to find content in the MDX file
mdx: str # The path to the MDX file to search
```
## Function Signatures
```python
def __init__(self, mdx: Optional[str] = None, **kwargs):
"""
Initialize the MDX Search Tool.
Args:
mdx (Optional[str]): Path to the MDX file (optional)
**kwargs: Additional arguments passed to RagTool
"""
def _run(
self,
search_query: str,
**kwargs: Any,
) -> str:
"""
Execute the search on the MDX file.
Args:
search_query (str): The query to search for
**kwargs: Additional arguments including 'mdx' for dynamic mode
Returns:
str: The search results from the MDX content
"""
```
## Best Practices
1. **File Path Handling**:
- Use absolute paths to avoid path resolution issues
- Verify file existence before searching
- Handle file permissions appropriately
2. **Query Optimization**:
- Use specific, focused search queries
- Consider context when formulating queries
- Break down complex searches into smaller queries
3. **Error Handling**:
- Handle file not found errors gracefully
- Manage permission issues appropriately
- Validate input parameters before processing
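For the file-path practices above, a small validation helper (hypothetical, not part of the tool) can catch bad paths before a search ever runs:

```python
from pathlib import Path

def resolve_mdx_path(path_str: str) -> str:
    # Hypothetical pre-flight check: normalize to an absolute path and
    # fail fast on wrong extensions or missing files.
    path = Path(path_str).expanduser().resolve()
    if path.suffix.lower() != ".mdx":
        raise ValueError(f"Not an MDX file: {path}")
    if not path.is_file():
        raise FileNotFoundError(str(path))
    return str(path)
```

The returned absolute path can then be used as the `mdx` value in either fixed or dynamic mode.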
## Example Integration
Here's a complete example showing how to integrate the MDX Search Tool with CrewAI:
```python
from crewai import Agent, Task, Crew
from crewai_tools import MDXSearchTool
# Initialize the tool
mdx_tool = MDXSearchTool()
# Create an agent with the tool
researcher = Agent(
role='Documentation Researcher',
goal='Find and analyze information in MDX documentation',
backstory='I am an expert at finding relevant information in documentation',
tools=[mdx_tool]
)
# Create tasks
search_task = Task(
description="""
Search through the API documentation for information about authentication methods.
Look for:
1. Authentication endpoints
2. Security best practices
3. Token handling
Provide a comprehensive summary of the findings.
""",
agent=researcher,
context={
"mdx": "/path/to/api-docs.mdx",
"search_query": "authentication security tokens"
}
)
# Create and run the crew
crew = Crew(
agents=[researcher],
tasks=[search_task]
)
result = crew.kickoff()
```
## Error Handling
The tool handles various error scenarios:
1. **File Not Found**:
```python
try:
tool = MDXSearchTool(mdx="/path/to/nonexistent.mdx")
except FileNotFoundError:
print("MDX file not found. Please verify the file path.")
```
2. **Permission Issues**:
```python
try:
tool = MDXSearchTool(mdx="/restricted/docs.mdx")
except PermissionError:
print("Insufficient permissions to access the MDX file.")
```
3. **Invalid Content**:
```python
try:
result = tool._run(search_query="query", mdx="/path/to/invalid.mdx")
except ValueError:
print("Invalid MDX content or format.")
```

---
title: MySQLSearchTool
description: A tool for semantic search within MySQL database tables using RAG capabilities
icon: database
---
## MySQLSearchTool
The MySQLSearchTool enables semantic search capabilities for MySQL database tables using Retrieval-Augmented Generation (RAG). It processes table contents and allows natural language queries to search through the data.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import MySQLSearchTool
# Initialize the tool
mysql_tool = MySQLSearchTool(
table_name="users",
db_uri="mysql://user:pass@localhost:3306/database"
)
# Create an agent with the tool
researcher = Agent(
role='Database Researcher',
goal='Search and analyze database contents',
backstory='Expert at finding relevant information in databases.',
tools=[mysql_tool],
verbose=True
)
```
## Input Schema
```python
class MySQLSearchToolSchema(BaseModel):
search_query: str = Field(
description="Mandatory semantic search query you want to use to search the database's content"
)
```
## Function Signature
```python
def __init__(
self,
table_name: str,
db_uri: str,
**kwargs
):
"""
Initialize the MySQL search tool.
Args:
table_name (str): Name of the table to search
db_uri (str): Database connection URI
**kwargs: Additional arguments for RAG tool configuration
"""
def _run(
self,
search_query: str,
**kwargs: Any
) -> str:
"""
Execute semantic search on table contents.
Args:
search_query (str): Query to search in the table
**kwargs: Additional arguments
Returns:
str: Relevant content from the table matching the query
"""
```
## Best Practices
1. Database Connection:
- Use secure connection URIs
- Handle authentication properly
- Manage connection lifecycle
- Monitor timeouts
2. Query Optimization:
- Structure queries clearly
- Consider table size
- Handle large datasets
- Monitor performance
3. Security Considerations:
- Protect credentials
- Use environment variables
- Limit table access
- Validate inputs
4. Error Handling:
- Handle connection errors
- Manage query timeouts
- Provide clear messages
- Log issues
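The credential practices above can be combined into one helper (a sketch; the environment variable names are assumptions, matching the secure-connection example later in this page):

```python
import os
from urllib.parse import quote_plus

def build_mysql_uri() -> str:
    # Sketch: read credentials from environment variables (names assumed)
    # and URL-escape the password so special characters such as "@"
    # don't break the URI.
    user = os.environ["DB_USER"]
    password = quote_plus(os.environ["DB_PASS"])
    host = os.environ.get("DB_HOST", "localhost")
    port = os.environ.get("DB_PORT", "3306")
    name = os.environ["DB_NAME"]
    return f"mysql://{user}:{password}@{host}:{port}/{name}"
```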
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import MySQLSearchTool
# Initialize tool
mysql_tool = MySQLSearchTool(
table_name="customers",
db_uri="mysql://user:pass@localhost:3306/crm"
)
# Create agent
researcher = Agent(
role='Database Analyst',
goal='Extract customer insights from database',
backstory='Expert at analyzing customer data.',
tools=[mysql_tool]
)
# Define task
analysis_task = Task(
description="""Find all premium customers
with recent purchases.""",
agent=researcher
)
# The tool will use:
# {
# "search_query": "premium customers recent purchases"
# }
# Create crew
crew = Crew(
agents=[researcher],
tasks=[analysis_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Multiple Table Analysis
```python
# Create tools for different tables
customers_tool = MySQLSearchTool(
table_name="customers",
db_uri="mysql://user:pass@localhost:3306/crm"
)
orders_tool = MySQLSearchTool(
table_name="orders",
db_uri="mysql://user:pass@localhost:3306/crm"
)
# Create agent with multiple tools
analyst = Agent(
role='Data Analyst',
goal='Cross-reference customer and order data',
tools=[customers_tool, orders_tool]
)
```
### Secure Connection Configuration
```python
import os
# Use environment variables for credentials
db_uri = (
f"mysql://{os.getenv('DB_USER')}:{os.getenv('DB_PASS')}"
f"@{os.getenv('DB_HOST')}:{os.getenv('DB_PORT')}"
f"/{os.getenv('DB_NAME')}"
)
tool = MySQLSearchTool(
table_name="sensitive_data",
db_uri=db_uri
)
```
### Error Handling Example
```python
try:
mysql_tool = MySQLSearchTool(
table_name="users",
db_uri="mysql://user:pass@localhost:3306/app"
)
results = mysql_tool.run(
search_query="active users in California"
)
print(results)
except Exception as e:
print(f"Error querying database: {str(e)}")
```
## Notes
- Inherits from RagTool
- Uses MySQLLoader
- Requires database URI
- Table-specific search
- Semantic query support
- Connection management
- Error handling
- Performance optimization
- Security features
- Memory efficiency

---
title: PDFSearchTool
description: A tool for semantic search within PDF documents using RAG capabilities
icon: file-search
---
## PDFSearchTool
The PDFSearchTool enables semantic search capabilities for PDF documents using Retrieval-Augmented Generation (RAG). It leverages embedchain's PDFEmbedchainAdapter for efficient PDF processing and supports both fixed and dynamic PDF path specification.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import PDFSearchTool
# Method 1: Initialize with specific PDF
pdf_tool = PDFSearchTool(pdf="/path/to/document.pdf")
# Method 2: Initialize without PDF (specify at runtime)
flexible_pdf_tool = PDFSearchTool()
# Create an agent with the tool
researcher = Agent(
role='PDF Researcher',
goal='Search and analyze PDF documents',
backstory='Expert at finding relevant information in PDFs.',
tools=[pdf_tool],
verbose=True
)
```
## Input Schema
### Fixed PDF Schema (when PDF path provided during initialization)
```python
class FixedPDFSearchToolSchema(BaseModel):
query: str = Field(
description="Mandatory query you want to use to search the PDF's content"
)
```
### Flexible PDF Schema (when PDF path provided at runtime)
```python
class PDFSearchToolSchema(FixedPDFSearchToolSchema):
pdf: str = Field(
description="Mandatory pdf path you want to search"
)
```
## Function Signature
```python
def __init__(
self,
pdf: Optional[str] = None,
**kwargs
):
"""
Initialize the PDF search tool.
Args:
pdf (Optional[str]): Path to PDF file (optional)
**kwargs: Additional arguments for RAG tool configuration
"""
def _run(
self,
query: str,
**kwargs: Any
) -> str:
"""
Execute semantic search on PDF content.
Args:
query (str): Search query for the PDF
**kwargs: Additional arguments including pdf path if not initialized
Returns:
str: Relevant content from the PDF matching the query
"""
```
## Best Practices
1. PDF File Handling:
- Use absolute paths for reliability
- Verify PDF file existence
- Handle large PDFs appropriately
2. Search Optimization:
- Use specific, focused queries
- Consider document structure
- Test with sample queries first
3. Performance Considerations:
- Pre-initialize with PDF for repeated searches
- Handle large documents efficiently
- Monitor memory usage
4. Error Handling:
- Verify PDF file existence
- Handle malformed PDFs
- Manage file access permissions
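The "pre-initialize with PDF for repeated searches" advice above can be sketched with a generic cache (hypothetical helper; in practice the factory would be something like `lambda p: PDFSearchTool(pdf=p)`):

```python
from functools import lru_cache

def make_tool_cache(factory, maxsize: int = 8):
    # Hypothetical helper: wrap a tool factory so repeated searches
    # against the same PDF reuse one initialized tool instead of
    # re-processing the document each time.
    @lru_cache(maxsize=maxsize)
    def get_tool(pdf_path: str):
        return factory(pdf_path)
    return get_tool
```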
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import PDFSearchTool
# Initialize tool with specific PDF
pdf_tool = PDFSearchTool(pdf="/path/to/research.pdf")
# Create agent
researcher = Agent(
role='PDF Researcher',
goal='Extract insights from research papers',
backstory='Expert at analyzing research documents.',
tools=[pdf_tool]
)
# Define task
research_task = Task(
description="""Find all mentions of machine learning
applications in healthcare from the PDF.""",
agent=researcher
)
# The tool will use:
# {
# "query": "machine learning applications healthcare"
# }
# Create crew
crew = Crew(
agents=[researcher],
tasks=[research_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Dynamic PDF Selection
```python
# Initialize without PDF
flexible_tool = PDFSearchTool()
# Search different PDFs
research_results = flexible_tool.run(
query="quantum computing",
pdf="/path/to/research.pdf"
)
report_results = flexible_tool.run(
query="financial metrics",
pdf="/path/to/report.pdf"
)
```
### Multiple PDF Analysis
```python
# Create tools for different PDFs
research_tool = PDFSearchTool(pdf="/path/to/research.pdf")
report_tool = PDFSearchTool(pdf="/path/to/report.pdf")
# Create agent with multiple tools
analyst = Agent(
role='Document Analyst',
goal='Cross-reference multiple documents',
tools=[research_tool, report_tool]
)
```
### Error Handling Example
```python
try:
pdf_tool = PDFSearchTool()
results = pdf_tool.run(
query="important findings",
pdf="/path/to/document.pdf"
)
print(results)
except Exception as e:
print(f"Error processing PDF: {str(e)}")
```
## Notes
- Inherits from RagTool
- Uses PDFEmbedchainAdapter
- Supports semantic search
- Dynamic PDF specification
- Efficient content retrieval
- Thread-safe operations
- Maintains search context
- Handles large documents
- Supports various PDF formats
- Memory-efficient processing

---
title: PDFTextWritingTool
description: A tool for adding text to specific positions in PDF documents with custom font support
icon: file-pdf
---
## PDFTextWritingTool
The PDFTextWritingTool allows you to add text to specific positions in PDF documents with support for custom fonts, colors, and positioning. It's particularly useful for adding annotations, watermarks, or any text overlay to existing PDFs.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import PDFTextWritingTool
# Basic initialization
pdf_tool = PDFTextWritingTool()
# Create an agent with the tool
document_processor = Agent(
role='Document Processor',
goal='Add text annotations to PDF documents',
backstory='Expert at PDF document processing and text manipulation.',
tools=[pdf_tool],
verbose=True
)
```
## Input Schema
```python
class PDFTextWritingToolSchema(BaseModel):
pdf_path: str = Field(
description="Path to the PDF file to modify"
)
text: str = Field(
description="Text to add to the PDF"
)
position: tuple = Field(
description="Tuple of (x, y) coordinates for text placement"
)
font_size: int = Field(
default=12,
description="Font size of the text"
)
font_color: str = Field(
default="0 0 0 rg",
description="RGB color code for the text"
)
font_name: Optional[str] = Field(
default="F1",
description="Font name for standard fonts"
)
font_file: Optional[str] = Field(
default=None,
description="Path to a .ttf font file for custom font usage"
)
page_number: int = Field(
default=0,
description="Page number to add text to"
)
```
## Function Signature
```python
def run(
self,
pdf_path: str,
text: str,
position: tuple,
font_size: int,
font_color: str,
font_name: str = "F1",
font_file: Optional[str] = None,
page_number: int = 0,
**kwargs
) -> str:
"""
Add text to a specific position in a PDF document.
Args:
pdf_path (str): Path to the PDF file to modify
text (str): Text to add to the PDF
position (tuple): (x, y) coordinates for text placement
font_size (int): Font size of the text
font_color (str): RGB color code for the text (e.g., "0 0 0 rg" for black)
font_name (str, optional): Font name for standard fonts (default: "F1")
font_file (str, optional): Path to a .ttf font file for custom font
page_number (int, optional): Page number to add text to (default: 0)
Returns:
str: Success message with output file path
"""
```
## Best Practices
1. File Handling:
- Ensure PDF files exist before processing
- Use absolute paths for reliability
- Handle file permissions appropriately
2. Text Positioning:
- Use appropriate coordinates based on PDF dimensions
- Consider page orientation and margins
- Test positioning with small changes first
3. Font Usage:
- Verify custom font files exist
- Use standard fonts when possible
- Test font rendering before production use
4. Error Handling:
- Check page numbers are valid
- Verify font file accessibility
- Handle file writing permissions
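For the positioning practices above, coordinates can be estimated rather than hard-coded. A rough sketch (the half-em average glyph width is an assumption that only approximates standard fonts):

```python
def center_position(page_width: float, page_height: float,
                    text: str, font_size: int) -> tuple:
    # Rough centering estimate: assumes an average glyph width of
    # half the font size, which is only approximate for standard fonts.
    text_width = len(text) * font_size * 0.5
    x = (page_width - text_width) / 2
    y = (page_height - font_size) / 2
    return (x, y)
```

For "CONFIDENTIAL" at 24 pt on a US Letter page (612 x 792 pt) this yields (234.0, 384.0), close to the coordinates used in the watermark example on this page.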
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import PDFTextWritingTool
# Initialize tool
pdf_tool = PDFTextWritingTool()
# Create agent
document_processor = Agent(
role='Document Processor',
goal='Process and annotate PDF documents',
backstory='Expert at PDF manipulation and text placement.',
tools=[pdf_tool]
)
# Define task
annotation_task = Task(
description="""Add a watermark saying 'CONFIDENTIAL' to
the center of the first page of the document at
'/path/to/document.pdf'.""",
agent=document_processor
)
# The tool will use:
# {
# "pdf_path": "/path/to/document.pdf",
# "text": "CONFIDENTIAL",
# "position": (300, 400),
# "font_size": 24,
# "font_color": "1 0 0 rg", # Red color
# "page_number": 0
# }
# Create crew
crew = Crew(
agents=[document_processor],
tasks=[annotation_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Custom Font Example
```python
# Using a custom font
result = pdf_tool.run(
pdf_path="/path/to/input.pdf",
text="Custom Font Text",
position=(100, 500),
font_size=16,
font_color="0 0 1 rg", # Blue color
font_file="/path/to/custom_font.ttf",
page_number=0
)
```
### Multiple Text Elements
```python
# Add multiple text elements
positions = [(100, 700), (100, 650), (100, 600)]
texts = ["Header", "Subheader", "Body Text"]
font_sizes = [18, 14, 12]
for text, position, size in zip(texts, positions, font_sizes):
pdf_tool.run(
pdf_path="/path/to/input.pdf",
text=text,
position=position,
font_size=size,
font_color="0 0 0 rg" # Black color
)
```
### Color Text Example
```python
# Add colored text
colors = {
"red": "1 0 0 rg",
"green": "0 1 0 rg",
"blue": "0 0 1 rg"
}
for y_pos, (color_name, color_code) in enumerate(colors.items()):
pdf_tool.run(
pdf_path="/path/to/input.pdf",
text=f"This text is {color_name}",
position=(100, 700 - y_pos * 50),
font_size=14,
font_color=color_code
)
```
## Notes
- Supports custom TrueType fonts (.ttf)
- Allows RGB color specifications
- Handles multi-page PDFs
- Preserves original PDF content
- Supports text positioning with x,y coordinates
- Maintains PDF structure and metadata
- Creates new output file for safety
- Thread-safe operations
- Efficient PDF manipulation
- Supports various text attributes

---
title: PGSearchTool
description: A RAG-based semantic search tool for PostgreSQL database content
icon: database-search
---
## PGSearchTool
The PGSearchTool provides semantic search capabilities for PostgreSQL database content using RAG (Retrieval-Augmented Generation). It allows for natural language queries over database table content by leveraging embeddings and semantic search.
## Installation
```bash
pip install 'crewai[tools]'
pip install embedchain # Required dependency
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import PGSearchTool
# Initialize the tool with database configuration
search_tool = PGSearchTool(
db_uri="postgresql://user:password@localhost:5432/dbname",
table_name="your_table"
)
# Create an agent with the tool
researcher = Agent(
role='Database Researcher',
goal='Find relevant information in database content',
backstory='Expert at searching and analyzing database content.',
tools=[search_tool],
verbose=True
)
```
## Input Schema
```python
class PGSearchToolSchema(BaseModel):
search_query: str = Field(
description="Mandatory semantic search query for searching the database's content"
)
```
## Function Signature
```python
def __init__(self, table_name: str, **kwargs):
"""
Initialize the PostgreSQL search tool.
Args:
table_name (str): Name of the table to search
db_uri (str): PostgreSQL database URI (required in kwargs)
**kwargs: Additional arguments for RagTool initialization
"""
def _run(
self,
search_query: str,
**kwargs: Any
) -> Any:
"""
Perform semantic search on database content.
Args:
search_query (str): Semantic search query
**kwargs: Additional search parameters
Returns:
Any: Relevant database content based on semantic search
"""
```
## Best Practices
1. Secure database credentials:
```python
# Use environment variables for sensitive data
import os
db_uri = (
f"postgresql://{os.getenv('DB_USER')}:{os.getenv('DB_PASS')}"
f"@{os.getenv('DB_HOST')}:{os.getenv('DB_PORT')}/{os.getenv('DB_NAME')}"
)
```
2. Optimize table selection
3. Use specific semantic queries
4. Handle database connection errors
5. Consider table size and query performance
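A quick sanity check on the URI (hypothetical helper) can surface configuration mistakes before the tool tries to connect:

```python
from urllib.parse import urlparse

def check_pg_uri(db_uri: str) -> None:
    # Hypothetical pre-flight validation for the PGSearchTool db_uri.
    parts = urlparse(db_uri)
    if parts.scheme not in ("postgresql", "postgres"):
        raise ValueError(f"Unexpected scheme: {parts.scheme!r}")
    if not parts.hostname or not parts.path.lstrip("/"):
        raise ValueError("URI must include a host and a database name")
```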
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import PGSearchTool
# Initialize tool with database configuration
db_search = PGSearchTool(
db_uri="postgresql://user:password@localhost:5432/dbname",
table_name="customer_feedback"
)
# Create agent
analyst = Agent(
role='Database Analyst',
goal='Analyze customer feedback data',
backstory='Expert at finding insights in customer feedback.',
tools=[db_search]
)
# Define task
analysis_task = Task(
description="""Find all customer feedback related to product usability
and ease of use. Focus on common patterns and issues.""",
agent=analyst
)
# The tool will use:
# {
# "search_query": "product usability feedback ease of use issues"
# }
# Create crew
crew = Crew(
agents=[analyst],
tasks=[analysis_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Multiple Table Search
```python
# Create tools for different tables
customer_search = PGSearchTool(
db_uri="postgresql://user:password@localhost:5432/dbname",
table_name="customers"
)
orders_search = PGSearchTool(
db_uri="postgresql://user:password@localhost:5432/dbname",
table_name="orders"
)
# Use both tools in an agent
analyst = Agent(
role='Multi-table Analyst',
goal='Analyze customer and order data',
tools=[customer_search, orders_search]
)
```
### Error Handling
```python
try:
results = search_tool._run(
search_query="customer satisfaction ratings"
)
# Process results
except Exception as e:
print(f"Database search error: {str(e)}")
```
## Notes
- Inherits from RagTool for semantic search
- Uses embedchain's PostgresLoader
- Requires valid PostgreSQL connection
- Supports semantic natural language queries
- Thread-safe operations
- Efficient for large tables
- Handles connection pooling automatically

---
title: RagTool
description: Base class for Retrieval-Augmented Generation (RAG) tools with flexible adapter support
icon: database
---
## RagTool
The RagTool serves as the base class for all Retrieval-Augmented Generation (RAG) tools in the CrewAI ecosystem. It provides a flexible adapter-based architecture for implementing knowledge base functionality with semantic search capabilities.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import RagTool
from crewai_tools.adapters import EmbedchainAdapter
from embedchain import App
# Create custom adapter
class CustomAdapter(RagTool.Adapter):
def query(self, question: str) -> str:
# Implement custom query logic
return "Answer based on knowledge base"
def add(self, *args, **kwargs) -> None:
# Implement custom add logic
pass
# Method 1: Use default EmbedchainAdapter
rag_tool = RagTool(
name="Custom Knowledge Base",
description="Specialized knowledge base for domain data",
summarize=True
)
# Method 2: Use custom adapter
custom_tool = RagTool(
name="Custom Knowledge Base",
adapter=CustomAdapter(),
summarize=False
)
# Create an agent with the tool
researcher = Agent(
role='Knowledge Base Researcher',
goal='Search and analyze knowledge base content',
backstory='Expert at finding relevant information in specialized datasets.',
tools=[rag_tool],
verbose=True
)
```
## Adapter Interface
```python
class Adapter(BaseModel, ABC):
@abstractmethod
def query(self, question: str) -> str:
"""
Query the knowledge base with a question.
Args:
question (str): Query to search in knowledge base
Returns:
str: Answer based on knowledge base content
"""
@abstractmethod
def add(self, *args: Any, **kwargs: Any) -> None:
"""
Add content to the knowledge base.
Args:
*args: Variable length argument list
**kwargs: Arbitrary keyword arguments
"""
```
## Function Signature
```python
def __init__(
self,
name: str = "Knowledge base",
description: str = "A knowledge base that can be used to answer questions.",
summarize: bool = False,
adapter: Optional[Adapter] = None,
config: Optional[dict[str, Any]] = None,
**kwargs
):
"""
Initialize the RAG tool.
Args:
name (str): Tool name
description (str): Tool description
summarize (bool): Enable answer summarization
adapter (Optional[Adapter]): Custom adapter implementation
config (Optional[dict]): Configuration for default adapter
**kwargs: Additional arguments for base tool
"""
def _run(
self,
query: str,
**kwargs: Any
) -> str:
"""
Execute query against knowledge base.
Args:
query (str): Question to ask
**kwargs: Additional arguments
Returns:
str: Answer from knowledge base
"""
```
## Best Practices
1. Adapter Implementation:
- Define clear interfaces
- Handle edge cases
- Implement error handling
- Document behavior
2. Knowledge Base Management:
- Organize content logically
- Update content regularly
- Monitor performance
- Handle large datasets
3. Query Optimization:
- Structure queries clearly
- Consider context
- Handle ambiguity
- Validate inputs
4. Error Handling:
- Handle missing data
- Manage timeouts
- Provide clear messages
- Log issues
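To make the adapter practices concrete, here is a minimal stand-in that mirrors the `query`/`add` interface with naive keyword matching instead of real embeddings (a sketch only; a production adapter would subclass `RagTool.Adapter`):

```python
class InMemoryAdapter:
    # Minimal stand-in mirroring RagTool.Adapter's query/add interface.
    # Uses naive keyword matching instead of embeddings.
    def __init__(self):
        self.docs: list = []

    def add(self, content: str, **kwargs) -> None:
        self.docs.append(content)

    def query(self, question: str) -> str:
        words = [w.lower() for w in question.split()]
        hits = [d for d in self.docs
                if any(w in d.lower() for w in words)]
        return hits[0] if hits else "No relevant content found."
```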
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import RagTool
from embedchain import App
# Initialize tool with custom configuration
rag_tool = RagTool(
name="Technical Documentation KB",
description="Knowledge base for technical documentation",
summarize=True,
config={
"collection_name": "tech_docs",
"chunking": {
"chunk_size": 500,
"chunk_overlap": 50
}
}
)
# Add content to knowledge base
rag_tool.add(
"Technical documentation content here...",
data_type="text"
)
# Create agent
researcher = Agent(
role='Documentation Expert',
goal='Extract technical information from documentation',
backstory='Expert at analyzing technical documentation.',
tools=[rag_tool]
)
# Define task
research_task = Task(
description="""Find all mentions of API endpoints
and their authentication requirements.""",
agent=researcher
)
# Create crew
crew = Crew(
agents=[researcher],
tasks=[research_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Custom Adapter Implementation
```python
from typing import Any
from pydantic import BaseModel
from abc import ABC, abstractmethod
class SpecializedAdapter(RagTool.Adapter):
def __init__(self, config: dict):
self.config = config
self.knowledge_base = {}
def query(self, question: str) -> str:
# Implement specialized query logic
return self._process_query(question)
def add(self, content: str, **kwargs: Any) -> None:
# Implement specialized content addition
self._process_content(content, **kwargs)
# Use custom adapter
specialized_tool = RagTool(
name="Specialized KB",
adapter=SpecializedAdapter(config={"mode": "advanced"})
)
```
### Configuration Management
```python
# Configure default EmbedchainAdapter
config = {
"collection_name": "custom_collection",
"embedding": {
"model": "sentence-transformers/all-mpnet-base-v2",
"dimensions": 768
},
"chunking": {
"chunk_size": 1000,
"chunk_overlap": 100
}
}
tool = RagTool(config=config)
```
### Error Handling Example
```python
try:
rag_tool = RagTool()
# Add content
rag_tool.add(
"Documentation content...",
data_type="text"
)
# Query content
result = rag_tool.run(
query="What are the system requirements?"
)
print(result)
except Exception as e:
print(f"Error using knowledge base: {str(e)}")
```
## Notes
- Base class for RAG tools
- Flexible adapter pattern
- Default EmbedchainAdapter
- Custom adapter support
- Content management
- Query processing
- Error handling
- Configuration options
- Performance optimization
- Memory management

---
title: SerpApi Google Search Tool
description: A tool for performing Google searches using the SerpApi service
---
# SerpApi Google Search Tool
The SerpApi Google Search Tool enables performing Google searches using the SerpApi service. It provides location-aware search capabilities with comprehensive result filtering.
## Installation
```bash
pip install 'crewai[tools]'
pip install serpapi
```
## Prerequisites
You need a SerpApi API key to use this tool. You can get one from [SerpApi's website](https://serpapi.com/manage-api-key).
Set your API key as an environment variable:
```bash
export SERPAPI_API_KEY="your_api_key_here"
```
## Usage
Here's how to use the SerpApi Google Search Tool:
```python
from crewai import Agent, Task
from crewai_tools import SerpApiGoogleSearchTool
# Initialize the tool
search_tool = SerpApiGoogleSearchTool()
# Create an agent with the tool
search_agent = Agent(
role='Web Researcher',
goal='Find accurate information online',
backstory='I help research and analyze online information',
tools=[search_tool]
)
# Use in a task
task = Task(
description="Research recent AI developments",
agent=search_agent,
context={
"search_query": "latest artificial intelligence breakthroughs 2024",
"location": "United States" # Optional
}
)
```
## Input Schema
```python
class SerpApiGoogleSearchToolSchema(BaseModel):
search_query: str # The search query for Google Search
location: Optional[str] = None # Optional location for localized results
```
## Function Signatures
### Base Tool Initialization
```python
def __init__(self, **kwargs):
"""
Initialize the SerpApi tool with API credentials.
Raises:
ImportError: If serpapi package is not installed
ValueError: If SERPAPI_API_KEY environment variable is not set
"""
```
### Search Execution
```python
def _run(
self,
**kwargs: Any,
) -> dict:
"""
Execute the Google search.
Args:
search_query (str): The search query
location (Optional[str]): Optional location for results
Returns:
dict: Filtered search results from Google
Raises:
HTTPError: If the API request fails
"""
```
## Best Practices
1. **API Key Management**:
- Store the API key securely in environment variables
- Never hardcode the API key in your code
- Verify API key validity before making requests
2. **Search Optimization**:
- Use specific, targeted search queries
- Include relevant keywords and time frames
- Leverage location parameter for regional results
3. **Error Handling**:
- Handle API rate limits gracefully
- Implement retry logic for failed requests
- Validate input parameters before making requests
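The retry advice above can be sketched as a small wrapper with exponential backoff. `with_retry` is a hypothetical helper, not part of crewai_tools:

```python
import time

def with_retry(fn, *args, retries=3, backoff=0.5, **kwargs):
    """Call fn; on failure, wait backoff * 2**attempt seconds and retry."""
    for attempt in range(retries):
        try:
            return fn(*args, **kwargs)
        except Exception:
            if attempt == retries - 1:
                raise  # out of attempts, surface the last error
            time.sleep(backoff * (2 ** attempt))
```

A call would then look like `with_retry(search_tool._run, search_query="quantum computing")`.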
## Example Integration
Here's a complete example showing how to integrate the SerpApi Google Search Tool with CrewAI:
```python
from crewai import Agent, Task, Crew
from crewai_tools import SerpApiGoogleSearchTool
# Initialize the tool
search_tool = SerpApiGoogleSearchTool()
# Create an agent with the tool
researcher = Agent(
role='Research Analyst',
goal='Find and analyze current information',
backstory="""I am an expert at finding and analyzing
information from various online sources.""",
tools=[search_tool]
)
# Create tasks
research_task = Task(
description="""
Research the following topic:
1. Latest developments in quantum computing
2. Focus on practical applications
3. Include major company announcements
Provide a comprehensive analysis of the findings.
""",
agent=researcher,
context={
"search_query": "quantum computing breakthroughs applications companies",
"location": "United States"
}
)
# Create and run the crew
crew = Crew(
agents=[researcher],
tasks=[research_task]
)
result = crew.kickoff()
```
## Error Handling
The tool handles various error scenarios:
1. **Missing API Key**:
```python
try:
tool = SerpApiGoogleSearchTool()
except ValueError as e:
print("API key not found. Set SERPAPI_API_KEY environment variable.")
```
2. **API Request Errors**:
```python
from requests.exceptions import HTTPError

try:
results = tool._run(
search_query="quantum computing",
location="United States"
)
except HTTPError as e:
print(f"API request failed: {str(e)}")
```
3. **Invalid Parameters**:
```python
try:
results = tool._run(
search_query="", # Empty query
location="Invalid Location"
)
except ValueError as e:
print("Invalid search parameters provided.")
```
## Response Format
The tool returns a filtered dictionary containing Google search results. Example response structure:
```python
{
"organic_results": [
{
"title": "Page Title",
"link": "https://...",
"snippet": "Page description or excerpt...",
"position": 1
}
# Additional results...
],
"knowledge_graph": {
"title": "Topic Title",
"description": "Topic description...",
"source": {
"name": "Source Name",
"link": "https://..."
}
},
"related_questions": [
{
"question": "Related question?",
"answer": "Answer to related question..."
}
# Additional related questions...
]
}
```
The response is automatically filtered to remove metadata and unnecessary fields, focusing on the most relevant search information. Fields like search metadata, parameters, and pagination are omitted for clarity.
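Given the structure above, a small helper can reduce the organic results to a compact summary for an agent. `summarize_organic` and the sample dict are illustrative, not part of the tool:

```python
def summarize_organic(results: dict, limit: int = 3) -> str:
    """Render the top organic results as 'title - link' lines."""
    return "\n".join(
        f"{r['title']} - {r['link']}"
        for r in results.get("organic_results", [])[:limit]
    )

sample = {"organic_results": [
    {"title": "Quantum News", "link": "https://example.com/a",
     "snippet": "...", "position": 1},
]}
print(summarize_organic(sample))
# → Quantum News - https://example.com/a
```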

---
title: SerpApi Google Shopping Tool
description: A tool for searching Google Shopping using the SerpApi service
---
# SerpApi Google Shopping Tool
The SerpApi Google Shopping Tool enables searching Google Shopping results using the SerpApi service. It provides location-aware shopping search capabilities with comprehensive result filtering.
## Installation
```bash
pip install 'crewai[tools]'
pip install serpapi
```
## Prerequisites
You need a SerpApi API key to use this tool. You can get one from [SerpApi's website](https://serpapi.com/manage-api-key).
Set your API key as an environment variable:
```bash
export SERPAPI_API_KEY="your_api_key_here"
```
## Usage
Here's how to use the SerpApi Google Shopping Tool:
```python
from crewai import Agent, Task
from crewai_tools import SerpApiGoogleShoppingTool
# Initialize the tool
shopping_tool = SerpApiGoogleShoppingTool()
# Create an agent with the tool
shopping_agent = Agent(
role='Shopping Researcher',
goal='Find the best shopping deals',
backstory='I help find and analyze shopping options',
tools=[shopping_tool]
)
# Use in a task
task = Task(
description="Find best deals for gaming laptops",
agent=shopping_agent,
context={
"search_query": "gaming laptop deals",
"location": "United States" # Optional
}
)
```
## Input Schema
```python
class SerpApiGoogleShoppingToolSchema(BaseModel):
search_query: str # The search query for Google Shopping
location: Optional[str] = None # Optional location for localized results
```
## Function Signatures
### Base Tool Initialization
```python
def __init__(self, **kwargs):
"""
Initialize the SerpApi tool with API credentials.
Raises:
ImportError: If serpapi package is not installed
ValueError: If SERPAPI_API_KEY environment variable is not set
"""
```
### Search Execution
```python
def _run(
self,
**kwargs: Any,
) -> dict:
"""
Execute the Google Shopping search.
Args:
search_query (str): The search query for Google Shopping
location (Optional[str]): Optional location for results
Returns:
dict: Filtered search results from Google Shopping
Raises:
HTTPError: If the API request fails
"""
```
## Best Practices
1. **API Key Management**:
- Store the API key securely in environment variables
- Never hardcode the API key in your code
- Verify API key validity before making requests
2. **Search Optimization**:
- Use specific, targeted search queries
- Include relevant product details in queries
- Leverage location parameter for regional pricing
3. **Error Handling**:
- Handle API rate limits gracefully
- Implement retry logic for failed requests
- Validate input parameters before making requests
## Example Integration
Here's a complete example showing how to integrate the SerpApi Google Shopping Tool with CrewAI:
```python
from crewai import Agent, Task, Crew
from crewai_tools import SerpApiGoogleShoppingTool
# Initialize the tool
shopping_tool = SerpApiGoogleShoppingTool()
# Create an agent with the tool
researcher = Agent(
role='Shopping Analyst',
goal='Find and analyze the best shopping deals',
backstory="""I am an expert at finding the best shopping deals
and analyzing product offerings across different regions.""",
tools=[shopping_tool]
)
# Create tasks
search_task = Task(
description="""
Research gaming laptops with the following criteria:
1. Price range: $800-$1500
2. Released in the last year
3. Compare prices across different retailers
Provide a comprehensive analysis of the findings.
""",
agent=researcher,
context={
"search_query": "gaming laptop RTX 4060 2023",
"location": "United States"
}
)
# Create and run the crew
crew = Crew(
agents=[researcher],
tasks=[search_task]
)
result = crew.kickoff()
```
## Error Handling
The tool handles various error scenarios:
1. **Missing API Key**:
```python
try:
tool = SerpApiGoogleShoppingTool()
except ValueError as e:
print("API key not found. Set SERPAPI_API_KEY environment variable.")
```
2. **API Request Errors**:
```python
from requests.exceptions import HTTPError

try:
results = tool._run(
search_query="gaming laptop",
location="United States"
)
except HTTPError as e:
print(f"API request failed: {str(e)}")
```
3. **Invalid Parameters**:
```python
try:
results = tool._run(
search_query="", # Empty query
location="Invalid Location"
)
except ValueError as e:
print("Invalid search parameters provided.")
```
## Response Format
The tool returns a filtered dictionary containing Google Shopping results. Example response structure:
```python
{
"shopping_results": [
{
"title": "Product Title",
"price": "$999.99",
"link": "https://...",
"source": "Retailer Name",
"rating": 4.5,
"reviews": 123,
"thumbnail": "https://..."
}
# Additional results...
],
"organic_results": [
{
"title": "Related Product",
"link": "https://...",
"snippet": "Product description..."
}
# Additional organic results...
]
}
```
The response is automatically filtered to remove metadata and unnecessary fields, focusing on the most relevant shopping information.
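Because `price` comes back as a display string (e.g. `"$1,299.00"`), picking the cheapest offer requires a little normalization. This helper is a sketch over the response shape shown above:

```python
def cheapest(results: dict):
    """Return the shopping result with the lowest numeric price, or None."""
    def price_value(item):
        # Strip the currency symbol and thousands separators, e.g. "$1,299.99"
        return float(item["price"].lstrip("$").replace(",", ""))
    items = results.get("shopping_results", [])
    return min(items, key=price_value) if items else None
```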

---
title: SerplyJobSearchTool
description: A tool for searching US job postings using the Serply API
icon: briefcase
---
## SerplyJobSearchTool
The SerplyJobSearchTool provides job search capabilities using the Serply API. It allows for searching job postings in the US market, returning structured information about positions, employers, locations, and remote work status.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import SerplyJobSearchTool
# Set environment variable
# export SERPLY_API_KEY='your-api-key'
# Initialize the tool
search_tool = SerplyJobSearchTool()
# Create an agent with the tool
job_researcher = Agent(
role='Job Market Researcher',
goal='Find relevant job opportunities',
backstory='Expert at analyzing job market trends and opportunities.',
tools=[search_tool],
verbose=True
)
```
## Input Schema
```python
class SerplyJobSearchToolSchema(BaseModel):
search_query: str = Field(
description="Mandatory search query for fetching job postings"
)
```
## Function Signature
```python
def __init__(self, **kwargs):
"""
Initialize the job search tool.
Args:
**kwargs: Additional arguments for RagTool initialization
Note:
Requires SERPLY_API_KEY environment variable
"""
def _run(
self,
**kwargs: Any
) -> str:
"""
Perform job search using Serply API.
Args:
search_query (str): Job search query
**kwargs: Additional search parameters
Returns:
str: Formatted string containing job listings with details:
- Position
- Employer
- Location
- Link
- Highlights
- Remote/Hybrid status
"""
```
## Best Practices
1. Set up API authentication:
```bash
export SERPLY_API_KEY='your-serply-api-key'
```
2. Use specific search queries
3. Handle potential API errors
4. Process structured results effectively
5. Consider rate limits and quotas
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import SerplyJobSearchTool
# Initialize tool
job_search = SerplyJobSearchTool()
# Create agent
recruiter = Agent(
role='Technical Recruiter',
goal='Find relevant job opportunities in tech',
backstory='Expert at identifying promising tech positions.',
tools=[job_search]
)
# Define task
search_task = Task(
description="""Search for senior software engineer positions
with remote work options in the US. Focus on positions
requiring Python expertise.""",
agent=recruiter
)
# The tool will use:
# {
# "search_query": "senior software engineer python remote"
# }
# Create crew
crew = Crew(
agents=[recruiter],
tasks=[search_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Handling Search Results
```python
# Example of processing structured results
results = search_tool._run(
search_query="machine learning engineer"
)
# Results format:
"""
Search results:
Position: Senior Machine Learning Engineer
Employer: TechCorp Inc
Location: San Francisco, CA
Link: https://example.com/job/123
Highlights: Python, TensorFlow, 5+ years experience
Is Remote: True
Is Hybrid: False
---
Position: ML Engineer
...
"""
```
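Since the tool returns one formatted string, a small parser (a sketch assuming the `Field: value` layout shown above) can turn each entry into a dict for downstream filtering:

```python
def parse_jobs(results: str):
    """Split the '---'-separated job string into one dict per posting."""
    jobs = []
    for block in results.split("---"):
        job = {}
        for line in block.strip().splitlines():
            if ": " in line:
                key, _, value = line.partition(": ")
                job[key.strip()] = value.strip()
        if "Position" in job:  # skip headers and empty blocks
            jobs.append(job)
    return jobs
```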
### Error Handling
```python
try:
results = search_tool._run(
search_query="data scientist"
)
if not results:
print("No jobs found")
else:
print(results)
except Exception as e:
print(f"Job search error: {str(e)}")
```
## Notes
- Requires valid Serply API key
- Currently supports US job market only
- Returns structured job information
- Includes remote/hybrid status
- Thread-safe operations
- Efficient job search capabilities
- Handles API rate limiting automatically
- Provides detailed job highlights

---
title: SerplyNewsSearchTool
description: A news article search tool powered by Serply API with configurable search parameters
icon: newspaper
---
## SerplyNewsSearchTool
The SerplyNewsSearchTool provides news article search capabilities using the Serply API. It allows for customizable search parameters including result limits and proxy location for region-specific news results.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import SerplyNewsSearchTool
# Set environment variable
# export SERPLY_API_KEY='your-api-key'
# Basic initialization
news_tool = SerplyNewsSearchTool()
# Advanced initialization with custom parameters
news_tool = SerplyNewsSearchTool(
limit=20, # Return 20 results
proxy_location="FR" # Search from France
)
# Create an agent with the tool
news_researcher = Agent(
role='News Researcher',
goal='Find relevant news articles',
backstory='Expert at news research and information gathering.',
tools=[news_tool],
verbose=True
)
```
## Input Schema
```python
class SerplyNewsSearchToolSchema(BaseModel):
search_query: str = Field(
description="Mandatory search query for fetching news articles"
)
```
## Function Signature
```python
def __init__(
self,
limit: Optional[int] = 10,
proxy_location: Optional[str] = "US",
**kwargs
):
"""
Initialize the news search tool.
Args:
limit (int): Maximum number of results [10-100] (default: 10)
proxy_location (str): Region for local news results (default: "US")
Options: US, CA, IE, GB, FR, DE, SE, IN, JP, KR, SG, AU, BR
**kwargs: Additional arguments for tool creation
"""
def _run(
self,
**kwargs: Any
) -> str:
"""
Perform news search using Serply API.
Args:
search_query (str): News search query
Returns:
str: Formatted string containing news results:
- Title
- Link
- Source
- Published Date
"""
```
## Best Practices
1. Set up API authentication:
```bash
export SERPLY_API_KEY='your-serply-api-key'
```
2. Configure search parameters appropriately:
- Set reasonable result limits
- Select relevant proxy location for regional news
- Consider time sensitivity of news content
3. Handle potential API errors
4. Process structured results effectively
5. Consider rate limits and quotas
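When combining results from several queries or regions, the same article often appears more than once. Assuming entries have been parsed into dicts keyed by the fields the tool returns (Title, Link, Source, Published Date), a hypothetical deduplication helper might look like:

```python
def dedupe_by_link(entries):
    """Keep only the first occurrence of each article Link."""
    seen, unique = set(), []
    for entry in entries:
        link = entry.get("Link")
        if link and link not in seen:
            seen.add(link)
            unique.append(entry)
    return unique
```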
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import SerplyNewsSearchTool
# Initialize tool with custom configuration
news_tool = SerplyNewsSearchTool(
limit=15, # 15 results
proxy_location="US" # US news sources
)
# Create agent
news_analyst = Agent(
role='News Analyst',
goal='Research breaking news and developments',
backstory='Expert at analyzing news trends and developments.',
tools=[news_tool]
)
# Define task
news_task = Task(
description="""Research the latest developments in renewable
energy technology and investments, focusing on major
announcements and industry trends.""",
agent=news_analyst
)
# The tool will use:
# {
# "search_query": "renewable energy technology investments news"
# }
# Create crew
crew = Crew(
agents=[news_analyst],
tasks=[news_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Regional News Configuration
```python
# French news sources
fr_news = SerplyNewsSearchTool(
proxy_location="FR",
limit=20
)
# Japanese news sources
jp_news = SerplyNewsSearchTool(
proxy_location="JP",
limit=20
)
```
### Result Processing
```python
# Get news results
try:
results = news_tool._run(
search_query="renewable energy investments"
)
print(results)
except Exception as e:
print(f"News search error: {str(e)}")
```
### Multiple Region Search
```python
# Search across multiple regions
regions = ["US", "GB", "DE"]
all_results = []
for region in regions:
regional_tool = SerplyNewsSearchTool(
proxy_location=region,
limit=5
)
results = regional_tool._run(
search_query="global tech innovations"
)
all_results.append(f"Results from {region}:\n{results}")
combined_results = "\n\n".join(all_results)
```
## Notes
- Requires valid Serply API key
- Supports multiple regions for news sources
- Configurable result limits (10-100)
- Returns structured news article data
- Thread-safe operations
- Efficient news search capabilities
- Handles API rate limiting automatically
- Includes source attribution and publication dates
- Follows redirects for final article URLs

---
title: SerplyScholarSearchTool
description: A scholarly literature search tool powered by Serply API with configurable search parameters
icon: book
---
## SerplyScholarSearchTool
The SerplyScholarSearchTool provides scholarly literature search capabilities using the Serply API. It allows for customizable search parameters including language and proxy location for region-specific academic results.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import SerplyScholarSearchTool
# Set environment variable
# export SERPLY_API_KEY='your-api-key'
# Basic initialization
scholar_tool = SerplyScholarSearchTool()
# Advanced initialization with custom parameters
scholar_tool = SerplyScholarSearchTool(
hl="fr", # French language results
proxy_location="FR" # Search from France
)
# Create an agent with the tool
academic_researcher = Agent(
role='Academic Researcher',
goal='Find relevant scholarly literature',
backstory='Expert at academic research and literature review.',
tools=[scholar_tool],
verbose=True
)
```
## Input Schema
```python
class SerplyScholarSearchToolSchema(BaseModel):
search_query: str = Field(
description="Mandatory search query for fetching scholarly literature"
)
```
## Function Signature
```python
def __init__(
self,
hl: str = "us",
proxy_location: Optional[str] = "US",
**kwargs
):
"""
Initialize the scholar search tool.
Args:
hl (str): Host language code for results (default: "us")
Reference: https://developers.google.com/custom-search/docs/xml_results?hl=en#wsInterfaceLanguages
proxy_location (str): Region for local results (default: "US")
Options: US, CA, IE, GB, FR, DE, SE, IN, JP, KR, SG, AU, BR
**kwargs: Additional arguments for tool creation
"""
def _run(
self,
**kwargs: Any
) -> str:
"""
Perform scholarly literature search using Serply API.
Args:
search_query (str): Academic search query
Returns:
str: Formatted string containing scholarly results:
- Title
- Link
- Description
- Citation
- Authors
"""
```
## Best Practices
1. Set up API authentication:
```bash
export SERPLY_API_KEY='your-serply-api-key'
```
2. Configure search parameters appropriately:
- Use relevant language codes
- Select appropriate proxy location
- Provide specific academic search terms
3. Handle potential API errors
4. Process structured results effectively
5. Consider rate limits and quotas
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import SerplyScholarSearchTool
# Initialize tool with custom configuration
scholar_tool = SerplyScholarSearchTool(
hl="en", # English results
proxy_location="US" # US academic sources
)
# Create agent
researcher = Agent(
role='Academic Researcher',
goal='Research recent academic publications',
backstory='Expert at analyzing academic literature and research trends.',
tools=[scholar_tool]
)
# Define task
research_task = Task(
description="""Research recent academic publications on
machine learning applications in healthcare, focusing on
peer-reviewed articles from the last two years.""",
agent=researcher
)
# The tool will use:
# {
# "search_query": "machine learning healthcare applications"
# }
# Create crew
crew = Crew(
agents=[researcher],
tasks=[research_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Language and Region Configuration
```python
# French academic sources
fr_scholar = SerplyScholarSearchTool(
hl="fr",
proxy_location="FR"
)
# German academic sources
de_scholar = SerplyScholarSearchTool(
hl="de",
proxy_location="DE"
)
```
### Result Processing
```python
try:
results = scholar_tool._run(
search_query="machine learning healthcare applications"
)
print(results)
except Exception as e:
print(f"Scholar search error: {str(e)}")
```
### Citation Analysis
```python
# Extract and analyze citations
def analyze_citations(results):
citations = []
for result in results.split("---"):
if "Cite:" in result:
citation = result.split("Cite:")[1].split("\n")[0].strip()
citations.append(citation)
return citations
results = scholar_tool._run(
search_query="artificial intelligence ethics"
)
citations = analyze_citations(results)
```
## Notes
- Requires valid Serply API key
- Supports multiple languages and regions
- Returns structured academic article data
- Includes citation information
- Lists all authors of publications
- Thread-safe operations
- Efficient scholarly search capabilities
- Handles API rate limiting automatically
- Supports both direct and document links
- Provides comprehensive article metadata

---
title: SerplyWebSearchTool
description: A Google search tool powered by Serply API with configurable search parameters
icon: search
---
## SerplyWebSearchTool
The SerplyWebSearchTool provides Google search capabilities using the Serply API. It allows for customizable search parameters including language, result limits, device type, and proxy location for region-specific results.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import SerplyWebSearchTool
# Set environment variable
# export SERPLY_API_KEY='your-api-key'
# Basic initialization
search_tool = SerplyWebSearchTool()
# Advanced initialization with custom parameters
search_tool = SerplyWebSearchTool(
hl="fr", # French language results
limit=20, # Return 20 results
device_type="mobile", # Mobile search results
proxy_location="FR" # Search from France
)
# Create an agent with the tool
researcher = Agent(
role='Web Researcher',
goal='Find relevant information online',
backstory='Expert at web research and information gathering.',
tools=[search_tool],
verbose=True
)
```
## Input Schema
```python
class SerplyWebSearchToolSchema(BaseModel):
search_query: str = Field(
description="Mandatory search query for Google search"
)
```
## Function Signature
```python
def __init__(
self,
hl: str = "us",
limit: int = 10,
device_type: str = "desktop",
proxy_location: str = "US",
**kwargs
):
"""
Initialize the Google search tool.
Args:
hl (str): Host language code for results (default: "us")
Reference: https://developers.google.com/custom-search/docs/xml_results?hl=en#wsInterfaceLanguages
limit (int): Maximum number of results [10-100] (default: 10)
device_type (str): "desktop" or "mobile" results (default: "desktop")
proxy_location (str): Region for local results (default: "US")
Options: US, CA, IE, GB, FR, DE, SE, IN, JP, KR, SG, AU, BR
**kwargs: Additional arguments for tool creation
"""
def _run(
self,
**kwargs: Any
) -> str:
"""
Perform Google search using Serply API.
Args:
search_query (str): Search query
Returns:
str: Formatted string containing search results:
- Title
- Link
- Description
"""
```
## Best Practices
1. Set up API authentication:
```bash
export SERPLY_API_KEY='your-serply-api-key'
```
2. Configure search parameters appropriately:
- Use relevant language codes
- Set reasonable result limits
- Choose appropriate device type
- Select relevant proxy location
3. Handle potential API errors
4. Process structured results effectively
5. Consider rate limits and quotas
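One way to stay under rate limits and quotas is to cache identical queries locally. `make_cached` is an illustrative wrapper, not part of crewai_tools:

```python
import functools

def make_cached(search_fn, maxsize=128):
    """Wrap a search callable so identical queries reuse the first response."""
    @functools.lru_cache(maxsize=maxsize)
    def cached(query: str) -> str:
        return search_fn(query)
    return cached
```

A call would then look like `cached = make_cached(lambda q: search_tool._run(search_query=q))`, after which repeated calls with the same query string cost no API requests.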
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import SerplyWebSearchTool
# Initialize tool with custom configuration
search_tool = SerplyWebSearchTool(
hl="en", # English results
limit=15, # 15 results
device_type="desktop",
proxy_location="US"
)
# Create agent
researcher = Agent(
role='Web Researcher',
goal='Research emerging technology trends',
backstory='Expert at finding and analyzing tech trends.',
tools=[search_tool]
)
# Define task
research_task = Task(
description="""Research the latest developments in artificial
intelligence and machine learning, focusing on practical
applications in business.""",
agent=researcher
)
# The tool will use:
# {
# "search_query": "latest AI ML developments business applications"
# }
# Create crew
crew = Crew(
agents=[researcher],
tasks=[research_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Language and Region Configuration
```python
# French search from France
fr_search = SerplyWebSearchTool(
hl="fr",
proxy_location="FR"
)
# Japanese search from Japan
jp_search = SerplyWebSearchTool(
hl="ja",
proxy_location="JP"
)
```
### Device-Specific Results
```python
# Mobile results
mobile_search = SerplyWebSearchTool(
device_type="mobile",
limit=20
)
# Desktop results
desktop_search = SerplyWebSearchTool(
device_type="desktop",
limit=20
)
```
### Error Handling
```python
try:
results = search_tool._run(
search_query="artificial intelligence trends"
)
print(results)
except Exception as e:
print(f"Search error: {str(e)}")
```
## Notes
- Requires valid Serply API key
- Supports multiple languages and regions
- Configurable result limits (10-100)
- Device-specific search results
- Thread-safe operations
- Efficient search capabilities
- Handles API rate limiting automatically
- Returns structured search results

---
title: SerplyWebpageToMarkdownTool
description: A tool for converting web pages to markdown format using Serply API
icon: markdown
---
## SerplyWebpageToMarkdownTool
The SerplyWebpageToMarkdownTool converts web pages to markdown format using the Serply API, making it easier for LLMs to process and understand web content. It supports configurable proxy locations for region-specific access.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import SerplyWebpageToMarkdownTool
# Set environment variable
# export SERPLY_API_KEY='your-api-key'
# Basic initialization
markdown_tool = SerplyWebpageToMarkdownTool()
# Advanced initialization with custom parameters
markdown_tool = SerplyWebpageToMarkdownTool(
proxy_location="FR" # Access from France
)
# Create an agent with the tool
web_processor = Agent(
role='Web Content Processor',
goal='Convert web content to markdown format',
backstory='Expert at processing and formatting web content.',
tools=[markdown_tool],
verbose=True
)
```
## Input Schema
```python
class SerplyWebpageToMarkdownToolSchema(BaseModel):
url: str = Field(
description="Mandatory URL of the webpage to convert to markdown"
)
```
## Function Signature
```python
def __init__(
self,
proxy_location: Optional[str] = "US",
**kwargs
):
"""
Initialize the webpage to markdown conversion tool.
Args:
proxy_location (str): Region for accessing the webpage (default: "US")
Options: US, CA, IE, GB, FR, DE, SE, IN, JP, KR, SG, AU, BR
**kwargs: Additional arguments for tool creation
"""
def _run(
self,
**kwargs: Any
) -> str:
"""
Convert webpage to markdown using Serply API.
Args:
url (str): URL of the webpage to convert
Returns:
str: Markdown formatted content of the webpage
"""
```
## Best Practices
1. Set up API authentication:
```bash
export SERPLY_API_KEY='your-serply-api-key'
```
2. Configure proxy location appropriately:
- Select relevant region for access
- Consider content accessibility
- Handle region-specific content
3. Handle potential API errors
4. Process markdown output effectively
5. Consider rate limits and quotas
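Converted pages can be long, so it is often useful to trim the markdown to a budget before handing it to an LLM. This is an illustrative helper that cuts at the last paragraph break under the limit:

```python
def truncate_markdown(md: str, max_chars: int = 4000) -> str:
    """Trim converted markdown at the last paragraph break under the budget."""
    if len(md) <= max_chars:
        return md
    cut = md.rfind("\n\n", 0, max_chars)
    if cut == -1:
        cut = max_chars  # no paragraph break found, hard cut
    return md[:cut].rstrip() + "\n\n*(truncated)*"
```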
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import SerplyWebpageToMarkdownTool
# Initialize tool with custom configuration
markdown_tool = SerplyWebpageToMarkdownTool(
proxy_location="US" # US access point
)
# Create agent
processor = Agent(
role='Content Processor',
goal='Convert web content to structured markdown',
backstory='Expert at processing web content into structured formats.',
tools=[markdown_tool]
)
# Define task
conversion_task = Task(
description="""Convert the documentation page at
https://example.com/docs into markdown format for
further processing.""",
agent=processor
)
# The tool will use:
# {
# "url": "https://example.com/docs"
# }
# Create crew
crew = Crew(
agents=[processor],
tasks=[conversion_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Regional Access Configuration
```python
# European access points
fr_processor = SerplyWebpageToMarkdownTool(
proxy_location="FR"
)
de_processor = SerplyWebpageToMarkdownTool(
proxy_location="DE"
)
```
### Error Handling
```python
try:
    # Use the public run() method rather than the internal _run()
    markdown_content = markdown_tool.run(
        url="https://example.com/page"
    )
    print(markdown_content)
except Exception as e:
    print(f"Conversion error: {str(e)}")
```
### Content Processing
```python
# Process multiple pages
urls = [
"https://example.com/page1",
"https://example.com/page2",
"https://example.com/page3"
]
markdown_contents = []
for url in urls:
    try:
        content = markdown_tool.run(url=url)
        markdown_contents.append(content)
    except Exception as e:
        print(f"Error processing {url}: {str(e)}")
        continue
# Combine contents
combined_markdown = "\n\n---\n\n".join(markdown_contents)
```
## Notes
- Requires valid Serply API key
- Supports multiple proxy locations
- Returns markdown-formatted content
- Simplifies web content for LLM processing
- Thread-safe operations
- Efficient content conversion
- Handles API rate limiting automatically
- Preserves content structure in markdown
- Supports various webpage formats
- Makes web content more accessible to AI agents


@@ -0,0 +1,158 @@
---
title: TXTSearchTool
description: A semantic search tool for text files using RAG capabilities
icon: magnifying-glass-document
---
## TXTSearchTool
The TXTSearchTool is a specialized Retrieval-Augmented Generation (RAG) tool that enables semantic search within text files. It inherits from the base RagTool class and provides both fixed and dynamic text file searching capabilities.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import TXTSearchTool
# Method 1: Dynamic file path
txt_search = TXTSearchTool()
# Method 2: Fixed file path
fixed_txt_search = TXTSearchTool(txt="path/to/fixed/document.txt")
# Create an agent with the tool
researcher = Agent(
role='Research Assistant',
goal='Search through text documents semantically',
backstory='Expert at finding relevant information in documents using semantic search.',
tools=[txt_search],
verbose=True
)
```
## Input Schema
The tool supports two input schemas depending on initialization:
### Dynamic File Path Schema
```python
class TXTSearchToolSchema(BaseModel):
search_query: str # The semantic search query
txt: str # Path to the text file to search
```
### Fixed File Path Schema
```python
class FixedTXTSearchToolSchema(BaseModel):
search_query: str # The semantic search query
```
## Function Signature
```python
def __init__(self, txt: Optional[str] = None, **kwargs):
"""
Initialize the TXT search tool.
Args:
txt (Optional[str]): Fixed path to a text file. If provided, the tool will only search this file.
**kwargs: Additional arguments passed to the parent RagTool
"""
def _run(self, search_query: str, **kwargs: Any) -> Any:
"""
Perform semantic search on the text file.
Args:
search_query (str): The semantic search query
**kwargs: Additional arguments (including 'txt' for dynamic file path)
Returns:
str: Relevant text passages based on semantic search
"""
```
## Best Practices
1. Choose initialization method based on use case:
- Use fixed file path when repeatedly searching the same document
- Use dynamic file path when searching different documents
2. Write clear, semantic search queries
3. Handle potential file access errors in agent prompts
4. Consider memory usage for large text files
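Practice 4 matters because the whole file is embedded for semantic search; large documents are usually split into overlapping chunks first. The RagTool base class and its embedding backend handle this internally, so the helper below is only an illustrative sketch of the idea, not the tool's actual chunking code, and the sizes are arbitrary defaults.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping fixed-size character chunks.

    Illustrative only: real RAG pipelines often chunk by tokens or
    sentences rather than raw characters.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    # Each chunk repeats the last `overlap` characters of the previous one
    return [text[start:start + chunk_size] for start in range(0, len(text), step)]
```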
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import TXTSearchTool
# Example 1: Fixed document search
documentation_search = TXTSearchTool(txt="api_documentation.txt")
# Example 2: Dynamic document search
flexible_search = TXTSearchTool()
# Create agents
doc_analyst = Agent(
role='Documentation Analyst',
goal='Find relevant API documentation sections',
backstory='Expert at analyzing technical documentation.',
tools=[documentation_search]
)
file_analyst = Agent(
role='File Analyst',
goal='Search through various text files',
backstory='Specialist in finding information across multiple documents.',
tools=[flexible_search]
)
# Define tasks
fixed_search_task = Task(
description="""Find all API endpoints related to user authentication
in the documentation.""",
agent=doc_analyst
)
# The agent will use:
# {
# "search_query": "user authentication API endpoints"
# }
dynamic_search_task = Task(
description="""Search through the logs.txt file for any database
connection errors.""",
agent=file_analyst
)
# The agent will use:
# {
# "search_query": "database connection errors",
# "txt": "logs.txt"
# }
# Create crew
crew = Crew(
agents=[doc_analyst, file_analyst],
tasks=[fixed_search_task, dynamic_search_task]
)
# Execute
result = crew.kickoff()
```
## Notes
- Inherits from RagTool for semantic search capabilities
- Supports both fixed and dynamic text file paths
- Uses embeddings for semantic search
- Optimized for text file analysis
- Thread-safe operations
- Automatically handles file loading and embedding


@@ -0,0 +1,159 @@
---
title: YoutubeChannelSearchTool
description: A semantic search tool for YouTube channel content using RAG capabilities
icon: youtube
---
## YoutubeChannelSearchTool
The YoutubeChannelSearchTool is a specialized Retrieval-Augmented Generation (RAG) tool that enables semantic search within YouTube channel content. It inherits from the base RagTool class and provides both fixed and dynamic YouTube channel searching capabilities.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import YoutubeChannelSearchTool
# Method 1: Dynamic channel handle
youtube_search = YoutubeChannelSearchTool()
# Method 2: Fixed channel handle
fixed_channel_search = YoutubeChannelSearchTool(youtube_channel_handle="@example_channel")
# Create an agent with the tool
researcher = Agent(
role='Content Researcher',
goal='Search through YouTube channel content semantically',
backstory='Expert at finding relevant information in YouTube content.',
tools=[youtube_search],
verbose=True
)
```
## Input Schema
The tool supports two input schemas depending on initialization:
### Dynamic Channel Schema
```python
class YoutubeChannelSearchToolSchema(BaseModel):
search_query: str # The semantic search query
youtube_channel_handle: str # YouTube channel handle (with or without @)
```
### Fixed Channel Schema
```python
class FixedYoutubeChannelSearchToolSchema(BaseModel):
search_query: str # The semantic search query
```
## Function Signature
```python
def __init__(self, youtube_channel_handle: Optional[str] = None, **kwargs):
"""
Initialize the YouTube channel search tool.
Args:
youtube_channel_handle (Optional[str]): Fixed channel handle. If provided,
the tool will only search this channel.
**kwargs: Additional arguments passed to the parent RagTool
"""
def _run(self, search_query: str, **kwargs: Any) -> Any:
"""
Perform semantic search on the YouTube channel content.
Args:
search_query (str): The semantic search query
**kwargs: Additional arguments (including 'youtube_channel_handle' for dynamic mode)
Returns:
str: Relevant content from the YouTube channel based on semantic search
"""
```
## Best Practices
1. Choose initialization method based on use case:
- Use fixed channel handle when repeatedly searching the same channel
- Use dynamic handle when searching different channels
2. Write clear, semantic search queries
3. Channel handles can be provided with or without the '@' prefix
4. Consider content availability and channel size
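The '@' handling in practice 3 can be pictured with a tiny normalizer. This sketch mirrors the prefix behavior described in this page; it is not the tool's actual internal code.

```python
def normalize_handle(handle: str) -> str:
    """Ensure a YouTube channel handle carries the '@' prefix.

    Mirrors the documented behavior of adding '@' when it is missing;
    the tool's own implementation may differ in detail.
    """
    handle = handle.strip()
    return handle if handle.startswith("@") else "@" + handle
```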
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import YoutubeChannelSearchTool
# Example 1: Fixed channel search
tech_channel_search = YoutubeChannelSearchTool(youtube_channel_handle="@TechChannel")
# Example 2: Dynamic channel search
flexible_search = YoutubeChannelSearchTool()
# Create agents
tech_analyst = Agent(
role='Tech Content Analyst',
goal='Find relevant tech tutorials and explanations',
backstory='Expert at analyzing technical YouTube content.',
tools=[tech_channel_search]
)
content_researcher = Agent(
role='Content Researcher',
goal='Search across multiple YouTube channels',
backstory='Specialist in finding information across various channels.',
tools=[flexible_search]
)
# Define tasks
fixed_search_task = Task(
description="""Find all tutorials related to machine learning
basics in the channel.""",
agent=tech_analyst
)
# The agent will use:
# {
# "search_query": "machine learning basics tutorial"
# }
dynamic_search_task = Task(
description="""Search through the @AIResearch channel for
content about neural networks.""",
agent=content_researcher
)
# The agent will use:
# {
# "search_query": "neural networks explanation",
# "youtube_channel_handle": "@AIResearch"
# }
# Create crew
crew = Crew(
agents=[tech_analyst, content_researcher],
tasks=[fixed_search_task, dynamic_search_task]
)
# Execute
result = crew.kickoff()
```
## Notes
- Inherits from RagTool for semantic search capabilities
- Supports both fixed and dynamic YouTube channel handles
- Automatically adds '@' prefix to channel handles if missing
- Uses embeddings for semantic search
- Thread-safe operations
- Automatically handles YouTube content loading and embedding


@@ -0,0 +1,216 @@
---
title: YoutubeVideoSearchTool
description: A tool for semantic search within YouTube video content using RAG capabilities
icon: video
---
## YoutubeVideoSearchTool
The YoutubeVideoSearchTool enables semantic search capabilities for YouTube video content using Retrieval-Augmented Generation (RAG). It processes video content and allows searching through transcripts and metadata using natural language queries.
## Installation
```bash
pip install 'crewai[tools]'
```
## Usage Example
```python
from crewai import Agent
from crewai_tools import YoutubeVideoSearchTool
# Method 1: Initialize with specific video
video_tool = YoutubeVideoSearchTool(
youtube_video_url="https://www.youtube.com/watch?v=example"
)
# Method 2: Initialize without video (specify at runtime)
flexible_video_tool = YoutubeVideoSearchTool()
# Create an agent with the tool
researcher = Agent(
role='Video Researcher',
goal='Search and analyze video content',
backstory='Expert at finding relevant information in videos.',
tools=[video_tool],
verbose=True
)
```
## Input Schema
### Fixed Video Schema (when URL provided during initialization)
```python
class FixedYoutubeVideoSearchToolSchema(BaseModel):
search_query: str = Field(
description="Mandatory search query you want to use to search the Youtube Video content"
)
```
### Flexible Video Schema (when URL provided at runtime)
```python
class YoutubeVideoSearchToolSchema(FixedYoutubeVideoSearchToolSchema):
youtube_video_url: str = Field(
description="Mandatory youtube_video_url path you want to search"
)
```
## Function Signature
```python
def __init__(
self,
youtube_video_url: Optional[str] = None,
**kwargs
):
"""
Initialize the YouTube video search tool.
Args:
youtube_video_url (Optional[str]): URL of YouTube video (optional)
**kwargs: Additional arguments for RAG tool configuration
"""
def _run(
self,
search_query: str,
**kwargs: Any
) -> str:
"""
Execute semantic search on video content.
Args:
search_query (str): Query to search in the video
**kwargs: Additional arguments including youtube_video_url if not initialized
Returns:
str: Relevant content from the video matching the query
"""
```
## Best Practices
1. Video URL Management:
- Use complete YouTube URLs
- Verify video accessibility
- Handle region restrictions
2. Search Optimization:
- Use specific, focused queries
- Consider video context
- Test with sample queries first
3. Performance Considerations:
- Pre-initialize for repeated searches
- Handle long videos appropriately
- Monitor processing time
4. Error Handling:
- Verify video availability
- Handle unavailable videos
- Manage API limitations
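A lightweight pre-check for practice 1 is to confirm a URL actually points at a YouTube video before handing it to the tool. The helper below is an illustrative sketch covering common URL shapes; it is a structural check only and does not verify availability or region restrictions.

```python
from urllib.parse import urlparse, parse_qs
from typing import Optional

def extract_video_id(url: str) -> Optional[str]:
    """Return the video ID from common YouTube URL formats, or None.

    Handles watch URLs and youtu.be short links; unavailable or
    region-locked videos still need separate handling.
    """
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    if host in ("www.youtube.com", "youtube.com", "m.youtube.com"):
        # watch URLs carry the ID in the "v" query parameter
        return parse_qs(parsed.query).get("v", [None])[0]
    if host == "youtu.be":
        # short links carry the ID in the path
        return parsed.path.lstrip("/") or None
    return None
```

Rejecting URLs for which this returns `None` before calling `video_tool.run(...)` avoids spending a search on a page that is not a video at all.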
## Integration Example
```python
from crewai import Agent, Task, Crew
from crewai_tools import YoutubeVideoSearchTool
# Initialize tool with specific video
video_tool = YoutubeVideoSearchTool(
youtube_video_url="https://www.youtube.com/watch?v=example"
)
# Create agent
researcher = Agent(
role='Video Researcher',
goal='Extract insights from video content',
backstory='Expert at analyzing video content.',
tools=[video_tool]
)
# Define task
research_task = Task(
description="""Find all mentions of machine learning
applications from the video content.""",
agent=researcher
)
# The tool will use:
# {
# "search_query": "machine learning applications"
# }
# Create crew
crew = Crew(
agents=[researcher],
tasks=[research_task]
)
# Execute
result = crew.kickoff()
```
## Advanced Usage
### Dynamic Video Selection
```python
# Initialize without video URL
flexible_tool = YoutubeVideoSearchTool()
# Search different videos
tech_results = flexible_tool.run(
search_query="quantum computing",
youtube_video_url="https://youtube.com/watch?v=tech123"
)
science_results = flexible_tool.run(
search_query="particle physics",
youtube_video_url="https://youtube.com/watch?v=science456"
)
```
### Multiple Video Analysis
```python
# Create tools for different videos
tech_tool = YoutubeVideoSearchTool(
youtube_video_url="https://youtube.com/watch?v=tech123"
)
science_tool = YoutubeVideoSearchTool(
youtube_video_url="https://youtube.com/watch?v=science456"
)
# Create agent with multiple tools
analyst = Agent(
role='Content Analyst',
goal='Cross-reference multiple videos',
tools=[tech_tool, science_tool]
)
```
### Error Handling Example
```python
try:
video_tool = YoutubeVideoSearchTool()
results = video_tool.run(
search_query="key concepts",
youtube_video_url="https://youtube.com/watch?v=example"
)
print(results)
except Exception as e:
print(f"Error processing video: {str(e)}")
```
## Notes
- Inherits from RagTool
- Uses embedchain for processing
- Supports semantic search
- Dynamic video specification
- Efficient content retrieval
- Thread-safe operations
- Maintains search context
- Handles video transcripts
- Processes video metadata
- Memory-efficient processing