Migrate docs from MkDocs to Mintlify (#1423)

* add new mintlify docs

* add favicon.svg

* minor edits

* add github stats
Author: Tony Kipkemboi
Date: 2024-10-10 18:14:28 -04:00
Committed by: GitHub
Parent: 8119b7239d
Commit: 063ad2d99d
108 changed files with 3271 additions and 3103 deletions


@@ -1,38 +0,0 @@
# BrowserbaseLoadTool
## Description
[Browserbase](https://browserbase.com) is a developer platform to reliably run, manage, and monitor headless browsers.
Power your AI data retrievals with:
- [Serverless Infrastructure](https://docs.browserbase.com/under-the-hood) providing reliable browsers to extract data from complex UIs
- [Stealth Mode](https://docs.browserbase.com/features/stealth-mode) with included fingerprinting tactics and automatic captcha solving
- [Session Debugger](https://docs.browserbase.com/features/sessions) to inspect your Browser Session with networks timeline and logs
- [Live Debug](https://docs.browserbase.com/guides/session-debug-connection/browser-remote-control) to quickly debug your automation
## Installation
- Get an API key and Project ID from [browserbase.com](https://browserbase.com) and set them in the environment variables (`BROWSERBASE_API_KEY`, `BROWSERBASE_PROJECT_ID`).
- Install the [Browserbase SDK](http://github.com/browserbase/python-sdk) along with the `crewai[tools]` package:
```shell
pip install browserbase 'crewai[tools]'
```
## Example
Utilize the BrowserbaseLoadTool as follows to allow your agent to load websites:
```python
from crewai_tools import BrowserbaseLoadTool
tool = BrowserbaseLoadTool()
```
## Arguments
- `api_key` Optional. Browserbase API key. Default is `BROWSERBASE_API_KEY` env variable.
- `project_id` Optional. Browserbase Project ID. Default is `BROWSERBASE_PROJECT_ID` env variable.
- `text_content` Retrieve only text content. Default is `False`.
- `session_id` Optional. Provide an existing Session ID.
- `proxy` Optional. Enable/Disable Proxies.


@@ -1,62 +0,0 @@
# CSVSearchTool
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
This tool is used to perform a RAG (Retrieval-Augmented Generation) search within a CSV file's content. It allows users to semantically search for queries in the content of a specified CSV file. This feature is particularly useful for extracting information from large CSV datasets where traditional search methods might be inefficient. All tools with "Search" in their name, including CSVSearchTool, are RAG tools designed for searching different sources of data.
## Installation
Install the crewai_tools package
```shell
pip install 'crewai[tools]'
```
## Example
```python
from crewai_tools import CSVSearchTool
# Initialize the tool with a specific CSV file. This setup allows the agent to only search the given CSV file.
tool = CSVSearchTool(csv='path/to/your/csvfile.csv')
# OR
# Initialize the tool without a specific CSV file. Agent will need to provide the CSV path at runtime.
tool = CSVSearchTool()
```
## Arguments
- `csv` : The path to the CSV file you want to search. This is a mandatory argument if the tool was initialized without a specific CSV file; otherwise, it is optional.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = CSVSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
)
```


@@ -1,65 +0,0 @@
# CodeDocsSearchTool
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
The CodeDocsSearchTool is a powerful RAG (Retrieval-Augmented Generation) tool designed for semantic searches within code documentation. It enables users to efficiently find specific information or topics within code documentation. By providing a `docs_url` during initialization, the tool narrows down the search to that particular documentation site. Alternatively, without a specific `docs_url`, it searches across a wide array of code documentation known or discovered throughout its execution, making it versatile for various documentation search needs.
## Installation
To start using the CodeDocsSearchTool, first, install the crewai_tools package via pip:
```shell
pip install 'crewai[tools]'
```
## Example
Utilize the CodeDocsSearchTool as follows to conduct searches within code documentation:
```python
from crewai_tools import CodeDocsSearchTool
# To search any code documentation content if the URL is known or discovered during its execution:
tool = CodeDocsSearchTool()
# OR
# To specifically focus your search on a given documentation site by providing its URL:
tool = CodeDocsSearchTool(docs_url='https://docs.example.com/reference')
```
Note: Substitute 'https://docs.example.com/reference' with your target documentation URL, and use a search query relevant to your needs.
## Arguments
- `docs_url`: Optional. Specifies the URL of the code documentation to be searched. Providing this during the tool's initialization focuses the search on the specified documentation content.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = CodeDocsSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
)
```


@@ -1,37 +0,0 @@
# DirectoryReadTool
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
The DirectoryReadTool is a powerful utility designed to provide a comprehensive listing of directory contents. It can recursively navigate through the specified directory, offering users a detailed enumeration of all files, including those within subdirectories. This tool is crucial for tasks that require a thorough inventory of directory structures or for validating the organization of files within directories.
## Installation
To utilize the DirectoryReadTool in your project, install the `crewai_tools` package. If this package is not yet part of your environment, you can install it using pip with the command below:
```shell
pip install 'crewai[tools]'
```
This command installs the latest version of the `crewai_tools` package, granting access to the DirectoryReadTool among other utilities.
## Example
Employing the DirectoryReadTool is straightforward. The following code snippet demonstrates how to set it up and use the tool to list the contents of a specified directory:
```python
from crewai_tools import DirectoryReadTool
# Initialize the tool so the agent can read any directory's content it learns about during execution
tool = DirectoryReadTool()
# OR
# Initialize the tool with a specific directory, so the agent can only read the content of the specified directory
tool = DirectoryReadTool(directory='/path/to/your/directory')
```
## Arguments
The DirectoryReadTool requires minimal configuration for use. The essential argument for this tool is as follows:
- `directory`: **Optional**. An argument that specifies the path to the directory whose contents you wish to list. It accepts both absolute and relative paths, guiding the tool to the desired directory for content listing.


@@ -1,36 +0,0 @@
# EXASearchTool Documentation
## Description
The EXASearchTool is designed to perform a semantic search for a specified query from a text's content across the internet. It utilizes the [exa.ai](https://exa.ai/) API to fetch and display the most relevant search results based on the query provided by the user.
## Installation
To incorporate this tool into your project, follow the installation instructions below:
```shell
pip install 'crewai[tools]'
```
## Example
The following example demonstrates how to initialize the tool and execute a search with a given query:
```python
from crewai_tools import EXASearchTool
# Initialize the tool for internet searching capabilities
tool = EXASearchTool()
```
## Steps to Get Started
To effectively use the EXASearchTool, follow these steps:
1. **Package Installation**: Confirm that the `crewai[tools]` package is installed in your Python environment.
2. **API Key Acquisition**: Acquire an [exa.ai](https://exa.ai/) API key by registering for a free account at [exa.ai](https://exa.ai/).
3. **Environment Configuration**: Store your obtained API key in an environment variable named `EXA_API_KEY` to facilitate its use by the tool.
## Conclusion
By integrating the EXASearchTool into Python projects, users gain the ability to conduct real-time, relevant searches across the internet directly from their applications. By adhering to the setup and usage guidelines provided, incorporating this tool into projects is streamlined and straightforward.


@@ -1,60 +0,0 @@
# PGSearchTool
!!! note "Under Development"
The PGSearchTool is currently under development. This document outlines the intended functionality and interface. As development progresses, please be aware that some features may not be available or could change.
## Description
The PGSearchTool is envisioned as a powerful tool for facilitating semantic searches within PostgreSQL database tables. By leveraging advanced Retrieval-Augmented Generation (RAG) technology, it aims to provide an efficient means for querying database table content, specifically tailored for PostgreSQL databases. The tool's goal is to simplify the process of finding relevant data through semantic search queries, offering a valuable resource for users needing to conduct advanced queries on extensive datasets within a PostgreSQL environment.
## Installation
The `crewai_tools` package, which will include the PGSearchTool upon its release, can be installed using the following command:
```shell
pip install 'crewai[tools]'
```
(Note: The PGSearchTool is not yet available in the current version of the `crewai_tools` package. This installation command will be updated once the tool is released.)
## Example Usage
Below is a proposed example showcasing how to use the PGSearchTool for conducting a semantic search on a table within a PostgreSQL database:
```python
from crewai_tools import PGSearchTool
# Initialize the tool with the database URI and the target table name
tool = PGSearchTool(db_uri='postgresql://user:password@localhost:5432/mydatabase', table_name='employees')
```
## Arguments
The PGSearchTool is designed to require the following arguments for its operation:
- `db_uri`: A string representing the URI of the PostgreSQL database to be queried. This argument will be mandatory and must include the necessary authentication details and the location of the database.
- `table_name`: A string specifying the name of the table within the database on which the semantic search will be performed. This argument will also be mandatory.
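As a hedged sketch of how the tool could be wired into an agent once released (following the proposed interface above; the agent's role, goal, and backstory are illustrative, not part of the proposal):
```python
from crewai import Agent
from crewai_tools import PGSearchTool

# Proposed usage: the agent runs semantic queries against the
# 'employees' table through the tool. Credentials are placeholders.
tool = PGSearchTool(
    db_uri='postgresql://user:password@localhost:5432/mydatabase',
    table_name='employees',
)

analyst = Agent(
    role="Database Analyst",
    goal="Answer questions about employee data",
    backstory="An analyst experienced in querying HR datasets.",
    tools=[tool],
)
```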
## Custom Model and Embeddings
The tool intends to use OpenAI for both embeddings and summarization by default. Users will have the option to customize the model using a config dictionary as follows:
```python
tool = PGSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
)
```


@@ -1,31 +0,0 @@
# ScrapeWebsiteTool
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
A tool designed to extract and read the content of a specified website. It is capable of handling various types of web pages by making HTTP requests and parsing the received HTML content. This tool can be particularly useful for web scraping tasks, data collection, or extracting specific information from websites.
## Installation
Install the crewai_tools package
```shell
pip install 'crewai[tools]'
```
## Example
```python
from crewai_tools import ScrapeWebsiteTool
# To enable scraping any website it finds during its execution
tool = ScrapeWebsiteTool()
# Initialize the tool with the website URL, so the agent can only scrape the content of the specified website
tool = ScrapeWebsiteTool(website_url='https://www.example.com')
# Extract the text from the site
text = tool.run()
print(text)
```
## Arguments
- `website_url` : **Mandatory**. The URL of the website to read. This is the primary input for the tool, specifying which website's content should be scraped and read.
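As a short hedged sketch of the typical crew wiring implied above (the agent fields are illustrative):
```python
from crewai import Agent
from crewai_tools import ScrapeWebsiteTool

# Hand the tool to an agent so it can scrape the configured site on demand.
tool = ScrapeWebsiteTool(website_url='https://www.example.com')

researcher = Agent(
    role="Content Researcher",
    goal="Summarize the content of a target website",
    backstory="A researcher who distills web pages into concise notes.",
    tools=[tool],
)
```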


@@ -1,44 +0,0 @@
# SeleniumScrapingTool
!!! note "Experimental"
This tool is currently in development. As we refine its capabilities, users may encounter unexpected behavior. Your feedback is invaluable to us for making improvements.
## Description
The SeleniumScrapingTool is crafted for high-efficiency web scraping tasks. It allows for precise extraction of content from web pages by using CSS selectors to target specific elements. Its design caters to a wide range of scraping needs, offering flexibility to work with any provided website URL.
## Installation
To get started with the SeleniumScrapingTool, install the crewai_tools package using pip:
```shell
pip install 'crewai[tools]'
```
## Usage Examples
Below are some scenarios where the SeleniumScrapingTool can be utilized:
```python
from crewai_tools import SeleniumScrapingTool
# Example 1: Initialize the tool without any parameters to scrape the current page it navigates to
tool = SeleniumScrapingTool()
# Example 2: Scrape the entire webpage of a given URL
tool = SeleniumScrapingTool(website_url='https://example.com')
# Example 3: Target and scrape a specific CSS element from a webpage
tool = SeleniumScrapingTool(website_url='https://example.com', css_element='.main-content')
# Example 4: Perform scraping with additional parameters for a customized experience
tool = SeleniumScrapingTool(website_url='https://example.com', css_element='.main-content', cookie={'name': 'user', 'value': 'John Doe'}, wait_time=10)
```
## Arguments
The following parameters can be used to customize the SeleniumScrapingTool's scraping process:
- `website_url`: **Mandatory**. Specifies the URL of the website from which content is to be scraped.
- `css_element`: **Mandatory**. The CSS selector for a specific element to target on the website. This enables focused scraping of a particular part of a webpage.
- `cookie`: **Optional**. A dictionary that contains cookie information. Useful for simulating a logged-in session, thereby providing access to content that might be restricted to non-logged-in users.
- `wait_time`: **Optional**. Specifies the delay (in seconds) before the content is scraped. This delay allows for the website and any dynamic content to fully load, ensuring a successful scrape.
!!! attention
Since the SeleniumScrapingTool is under active development, the parameters and functionality may evolve over time. Users are encouraged to keep the tool updated and report any issues or suggestions for enhancements.


@@ -1,81 +0,0 @@
# SpiderTool
## Description
[Spider](https://spider.cloud/?ref=crewai) is the [fastest](https://github.com/spider-rs/spider/blob/main/benches/BENCHMARKS.md#benchmark-results) open source scraper and crawler that returns LLM-ready data. It converts any website into pure HTML, markdown, metadata or text while enabling you to crawl with custom actions using AI.
## Installation
To use the Spider API, install the [Spider SDK](https://pypi.org/project/spider-client/) along with the `crewai[tools]` package:
```shell
pip install spider-client 'crewai[tools]'
```
## Example
This example shows you how you can use the Spider tool to enable your agent to scrape and crawl websites. The data returned from the Spider API is already LLM-ready, so no need to do any cleaning there.
```python
from crewai import Agent, Crew, Task
from crewai_tools import SpiderTool

def main():
    spider_tool = SpiderTool()

    searcher = Agent(
        role="Web Research Expert",
        goal="Find related information from specific URLs",
        backstory="An expert web researcher that uses the web extremely well",
        tools=[spider_tool],
        verbose=True,
    )

    return_metadata = Task(
        description="Scrape https://spider.cloud with a limit of 1 and enable metadata",
        expected_output="Metadata and a 10-word summary of spider.cloud",
        agent=searcher,
    )

    crew = Crew(
        agents=[searcher],
        tasks=[return_metadata],
        verbose=2,
    )

    crew.kickoff()

if __name__ == "__main__":
    main()
```
## Arguments
- `api_key` (string, optional): Specifies Spider API key. If not specified, it looks for `SPIDER_API_KEY` in environment variables.
- `params` (object, optional): Optional parameters for the request. Defaults to `{"return_format": "markdown"}` to return the website's content in a format that fits LLMs better.
- `request` (string): The request type to perform. Possible values are `http`, `chrome`, and `smart`. Use `smart` to perform an HTTP request by default until JavaScript rendering is needed for the HTML.
- `limit` (int): The maximum number of pages allowed to crawl per website. Remove the value or set it to `0` to crawl all pages.
- `depth` (int): The crawl limit for maximum depth. If `0`, no limit will be applied.
- `cache` (bool): Use HTTP caching for the crawl to speed up repeated runs. Default is `true`.
- `budget` (object): An object mapping paths to page counts, limiting the number of pages crawled per path; for example, `{"*": 1}` crawls only the root page.
- `locale` (string): The locale to use for the request, for example `en-US`.
- `cookies` (string): HTTP cookies to send with the request.
- `stealth` (bool): Use stealth mode for headless Chrome requests to help prevent being blocked. The default is `true` on Chrome.
- `headers` (object): HTTP headers to forward with all requests, as a map of key-value pairs.
- `metadata` (bool): Whether to store metadata about the pages and content found. This can help improve AI interoperability. Defaults to `false` unless the website is already stored with the configuration enabled.
- `viewport` (object): Configure the viewport for Chrome. Defaults to `800x600`.
- `encoding` (string): The encoding to use, such as `UTF-8` or `SHIFT_JIS`.
- `subdomains` (bool): Allow subdomains to be included. Default is `false`.
- `user_agent` (string): A custom HTTP user agent for the request. Defaults to a random agent.
- `store_data` (bool): Whether storage should be used. If set, this takes precedence over `storageless`. Defaults to `false`.
- `gpt_config` (object): Use AI to generate actions to perform during the crawl. You can pass an array for the `"prompt"` to chain steps.
- `fingerprint` (bool): Use advanced fingerprinting for Chrome.
- `storageless` (bool): Prevent storing any data for the request, including storage and AI vector embeddings. Defaults to `false` unless the website is already stored.
- `readability` (bool): Use [readability](https://github.com/mozilla/readability) to pre-process the content for reading. This may drastically improve the content for LLM usage.
- `return_format` (string): The format to return the data in. Possible values are `markdown`, `raw`, `text`, and `html2text`. Use `raw` to return the page's default format, such as HTML.
- `proxy_enabled` (bool): Enable high-performance premium proxies to prevent the request from being blocked at the network level.
- `query_selector` (string): The CSS query selector to use when extracting content from the markup.
- `full_resources` (bool): Crawl and download all the resources for a website.
- `request_timeout` (int): The timeout for the request, in seconds, from `5` to `60`. The default is `30`.
- `run_in_background` (bool): Run the request in the background. Useful if storing data and wanting to trigger crawls in the dashboard. This has no effect if `storageless` is set.
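As an illustrative, hedged sketch combining a few of these parameters (values are placeholders; exact acceptance by the Spider API may vary by SDK version):
```python
from crewai_tools import SpiderTool

# Pass request options via `params`; these values are illustrative.
spider_tool = SpiderTool(
    api_key="your-spider-api-key",  # placeholder; falls back to SPIDER_API_KEY
    params={
        "return_format": "markdown",  # LLM-friendly output
        "limit": 5,                   # crawl at most 5 pages
        "metadata": True,             # store page metadata
        "proxy_enabled": True,        # premium proxies to avoid blocks
    },
)
```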


@@ -1,62 +0,0 @@
# TXTSearchTool
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
This tool is used to perform a RAG (Retrieval-Augmented Generation) search within the content of a text file. It allows for semantic searching of a query within a specified text file's content, making it an invaluable resource for quickly extracting information or finding specific sections of text based on the query provided.
## Installation
To use the TXTSearchTool, you first need to install the crewai_tools package. This can be done using pip, a package manager for Python. Open your terminal or command prompt and enter the following command:
```shell
pip install 'crewai[tools]'
```
This command will download and install the TXTSearchTool along with any necessary dependencies.
## Example
The following example demonstrates how to use the TXTSearchTool to search within a text file. This example shows both the initialization of the tool with a specific text file and the subsequent search within that file's content.
```python
from crewai_tools import TXTSearchTool
# Initialize the tool to search within any text file's content the agent learns about during its execution
tool = TXTSearchTool()
# OR
# Initialize the tool with a specific text file, so the agent can search within the given text file's content
tool = TXTSearchTool(txt='path/to/text/file.txt')
```
## Arguments
- `txt` (str): **Optional**. The path to the text file you want to search. This argument is only required if the tool was not initialized with a specific text file; otherwise, the search will be conducted within the initially provided text file.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = TXTSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
)
```


@@ -1,30 +0,0 @@
# Vision Tool
## Description
This tool is used to extract text from images. When passed to the agent, it will extract the text from the image and then use it to generate a response, report, or any other output. The URL or the path of the image should be passed to the agent.
## Installation
Install the crewai_tools package
```shell
pip install 'crewai[tools]'
```
## Usage
In order to use the VisionTool, the OpenAI API key should be set in the environment variable `OPENAI_API_KEY`.
```python
from crewai import Agent
from crewai_tools import VisionTool

vision_tool = VisionTool()

# This method lives inside a crew class (e.g. one decorated with
# @CrewBase), which provides the @agent decorator and self.agents_config.
@agent
def researcher(self) -> Agent:
    return Agent(
        config=self.agents_config["researcher"],
        allow_delegation=False,
        tools=[vision_tool],
    )
```


@@ -0,0 +1,50 @@
---
title: Browserbase Web Loader
description: Browserbase is a developer platform to reliably run, manage, and monitor headless browsers.
icon: browser
---
# `BrowserbaseLoadTool`
## Description
[Browserbase](https://browserbase.com) is a developer platform to reliably run, manage, and monitor headless browsers.
Power your AI data retrievals with:
- [Serverless Infrastructure](https://docs.browserbase.com/under-the-hood) providing reliable browsers to extract data from complex UIs
- [Stealth Mode](https://docs.browserbase.com/features/stealth-mode) with included fingerprinting tactics and automatic captcha solving
- [Session Debugger](https://docs.browserbase.com/features/sessions) to inspect your Browser Session with networks timeline and logs
- [Live Debug](https://docs.browserbase.com/guides/session-debug-connection/browser-remote-control) to quickly debug your automation
## Installation
- Get an API key and Project ID from [browserbase.com](https://browserbase.com) and set them in the environment variables (`BROWSERBASE_API_KEY`, `BROWSERBASE_PROJECT_ID`).
- Install the [Browserbase SDK](http://github.com/browserbase/python-sdk) along with `crewai[tools]` package:
```shell
pip install browserbase 'crewai[tools]'
```
## Example
Utilize the BrowserbaseLoadTool as follows to allow your agent to load websites:
```python Code
from crewai_tools import BrowserbaseLoadTool
# Initialize the tool with the Browserbase API key and Project ID
tool = BrowserbaseLoadTool()
```
## Arguments
The following parameters can be used to customize the `BrowserbaseLoadTool`'s behavior:
| Argument | Type | Description |
|:---------------|:---------|:-------------------------------------------------------------------------------------------------------------------------------------|
| **api_key** | `string` | _Optional_. Browserbase API key. Default is `BROWSERBASE_API_KEY` env variable. |
| **project_id** | `string` | _Optional_. Browserbase Project ID. Default is `BROWSERBASE_PROJECT_ID` env variable. |
| **text_content** | `bool` | _Optional_. Retrieve only text content. Default is `False`. |
| **session_id** | `string` | _Optional_. Provide an existing Session ID. |
| **proxy** | `bool` | _Optional_. Enable/Disable Proxies. Default is `False`. |
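As a hedged sketch, the same arguments can also be passed explicitly instead of relying on environment variables (the credential values below are placeholders):
```python Code
from crewai_tools import BrowserbaseLoadTool

# Explicit configuration; the key and project ID are placeholders.
tool = BrowserbaseLoadTool(
    api_key="your-browserbase-api-key",
    project_id="your-project-id",
    text_content=True,  # return text content only
)
```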


@@ -0,0 +1,84 @@
---
title: Code Docs RAG Search
description: The `CodeDocsSearchTool` is a powerful RAG (Retrieval-Augmented Generation) tool designed for semantic searches within code documentation.
icon: code
---
# `CodeDocsSearchTool`
<Note>
**Experimental**: We are still working on improving tools, so there might be unexpected behavior or changes in the future.
</Note>
## Description
The CodeDocsSearchTool is a powerful RAG (Retrieval-Augmented Generation) tool designed for semantic searches within code documentation.
It enables users to efficiently find specific information or topics within code documentation. By providing a `docs_url` during initialization,
the tool narrows down the search to that particular documentation site. Alternatively, without a specific `docs_url`,
it searches across a wide array of code documentation known or discovered throughout its execution, making it versatile for various documentation search needs.
## Installation
To start using the CodeDocsSearchTool, first, install the crewai_tools package via pip:
```shell
pip install 'crewai[tools]'
```
## Example
Utilize the CodeDocsSearchTool as follows to conduct searches within code documentation:
```python Code
from crewai_tools import CodeDocsSearchTool
# To search any code documentation content
# if the URL is known or discovered during its execution:
tool = CodeDocsSearchTool()
# OR
# To specifically focus your search on a given documentation site
# by providing its URL:
tool = CodeDocsSearchTool(docs_url='https://docs.example.com/reference')
```
<Note>
Substitute 'https://docs.example.com/reference' with your target documentation URL,
and use a search query relevant to your needs.
</Note>
## Arguments
The following parameters can be used to customize the `CodeDocsSearchTool`'s behavior:
| Argument | Type | Description |
|:---------------|:---------|:-------------------------------------------------------------------------------------------------------------------------------------|
| **docs_url** | `string` | _Optional_. Specifies the URL of the code documentation to be searched. |
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python Code
tool = CodeDocsSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
)
```


@@ -1,25 +1,37 @@
---
title: Code Interpreter
description: The `CodeInterpreterTool` is a powerful tool designed for executing Python 3 code within a secure, isolated environment.
icon: code-simple
---
# `CodeInterpreterTool`
## Description
This tool enables the Agent to execute Python 3 code that it has generated autonomously. The code is run in a secure, isolated environment, ensuring safety regardless of the content.
This functionality is particularly valuable as it allows the Agent to create code, execute it within the same ecosystem,
obtain the results, and utilize that information to inform subsequent decisions and actions.
## Requirements
- Docker
## Installation
Install the `crewai_tools` package
```shell
pip install 'crewai[tools]'
```
## Example
Remember that when using this tool, the code must be generated by the Agent itself.
The code must be Python 3 code, and the first run will take some time
because it needs to build the Docker image.
```python Code
from crewai import Agent
from crewai_tools import CodeInterpreterTool
@@ -31,7 +43,7 @@ Agent(
We also provide a simple way to use it directly from the Agent.
```python Code
from crewai import Agent
agent = Agent(
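    # The diff truncates this snippet; the completion below is an
    # illustrative sketch. Setting `allow_code_execution=True` equips the
    # agent with the code interpreter directly, without adding the tool.
    role="Python Data Analyst",
    goal="Analyze data using Python",
    backstory="An analyst who writes and runs Python code.",
    allow_code_execution=True,
)
```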


@@ -1,8 +1,14 @@
---
title: Composio Tool
description: The `ComposioTool` is a wrapper around the composio set of tools and gives your agent access to a wide variety of tools from the Composio SDK.
icon: gear-code
---
# `ComposioTool`
## Description
This tool is a wrapper around the Composio set of tools and gives your agent access to a wide variety of tools from the Composio SDK.
## Installation
@@ -21,7 +27,7 @@ The following example demonstrates how to initialize the tool and execute a gith
1. Initialize Composio tools
```python Code
from composio import App
from crewai_tools import ComposioTool
from crewai import Agent, Task
@@ -32,19 +38,19 @@ tools = [ComposioTool.from_action(action=Action.GITHUB_ACTIVITY_STAR_REPO_FOR_AU
If you don't know what action you want to use, use `from_app` and `tags` filter to get relevant actions
```python Code
tools = ComposioTool.from_app(App.GITHUB, tags=["important"])
```
or use `use_case` to search relevant actions
```python Code
tools = ComposioTool.from_app(App.GITHUB, use_case="Star a github repository")
```
2. Define agent
```python Code
crewai_agent = Agent(
role="Github Agent",
goal="You take action on Github using Github APIs",
@@ -59,7 +65,7 @@ crewai_agent = Agent(
3. Execute task
```python Code
task = Task(
description="Star a repo ComposioHQ/composio on GitHub",
agent=crewai_agent,
@@ -69,4 +75,4 @@ task = Task(
task.execute()
```
* More detailed list of tools can be found [here](https://app.composio.dev)


@@ -0,0 +1,77 @@
---
title: CSV RAG Search
description: The `CSVSearchTool` is a powerful RAG (Retrieval-Augmented Generation) tool designed for semantic searches within a CSV file's content.
icon: file-csv
---
# `CSVSearchTool`
<Note>
**Experimental**: We are still working on improving tools, so there might be unexpected behavior or changes in the future.
</Note>
## Description
This tool is used to perform a RAG (Retrieval-Augmented Generation) search within a CSV file's content. It allows users to semantically search for queries in the content of a specified CSV file.
This feature is particularly useful for extracting information from large CSV datasets where traditional search methods might be inefficient. All tools with "Search" in their name, including CSVSearchTool,
are RAG tools designed for searching different sources of data.
## Installation
Install the crewai_tools package
```shell
pip install 'crewai[tools]'
```
## Example
```python Code
from crewai_tools import CSVSearchTool
# Initialize the tool with a specific CSV file.
# This setup allows the agent to only search the given CSV file.
tool = CSVSearchTool(csv='path/to/your/csvfile.csv')
# OR
# Initialize the tool without a specific CSV file.
# Agent will need to provide the CSV path at runtime.
tool = CSVSearchTool()
```
## Arguments
The following parameters can be used to customize the `CSVSearchTool`'s behavior:
| Argument | Type | Description |
|:---------------|:---------|:-------------------------------------------------------------------------------------------------------------------------------------|
| **csv** | `string` | _Optional_. The path to the CSV file you want to search. This is a mandatory argument if the tool was initialized without a specific CSV file; otherwise, it is optional. |
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python Code
tool = CSVSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
)
```


@@ -1,9 +1,18 @@
---
title: DALL-E Tool
description: The `DallETool` is a powerful tool designed for generating images from textual descriptions.
icon: image
---
# `DallETool`
## Description
This tool is used to give the Agent the ability to generate images using the DALL-E model. It is a transformer-based model that generates images from textual descriptions.
This tool allows the Agent to generate images based on the text input provided by the user.
## Installation
Install the crewai_tools package
```shell
pip install 'crewai[tools]'
@@ -13,7 +22,7 @@ pip install 'crewai[tools]'
Remember that when using this tool, the text must be generated by the Agent itself. The text must be a description of the image you want to generate.
```python Code
from crewai_tools import DallETool
Agent(
@@ -24,7 +33,7 @@ Agent(
If needed you can also tweak the parameters of the DALL-E model by passing them as arguments to the `DallETool` class. For example:
```python Code
from crewai_tools import DallETool
dalle_tool = DallETool(model="dall-e-3",
@@ -38,4 +47,5 @@ Agent(
)
```
The parameters are based on the `client.images.generate` method from the OpenAI API. For more information on the parameters,
please refer to the [OpenAI API documentation](https://platform.openai.com/docs/guides/images/introduction?lang=python).
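For reference, a hedged sketch of a fuller configuration (the diff truncates the snippet above; the keyword values below mirror OpenAI's `client.images.generate` parameters and are illustrative):
```python Code
from crewai_tools import DallETool

# Illustrative parameters forwarded to client.images.generate.
dalle_tool = DallETool(
    model="dall-e-3",
    size="1024x1024",
    quality="standard",
    n=1,
)
```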


@@ -0,0 +1,53 @@
---
title: Directory Read
description: The `DirectoryReadTool` is a powerful utility designed to provide a comprehensive listing of directory contents.
icon: folder-tree
---
# `DirectoryReadTool`
<Note>
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
</Note>
## Description
The DirectoryReadTool is a powerful utility designed to provide a comprehensive listing of directory contents.
It can recursively navigate through the specified directory, offering users a detailed enumeration of all files, including those within subdirectories.
This tool is crucial for tasks that require a thorough inventory of directory structures or for validating the organization of files within directories.
## Installation
To utilize the DirectoryReadTool in your project, install the `crewai_tools` package. If this package is not yet part of your environment, you can install it using pip with the command below:
```shell
pip install 'crewai[tools]'
```
This command installs the latest version of the `crewai_tools` package, granting access to the DirectoryReadTool among other utilities.
## Example
Employing the DirectoryReadTool is straightforward. The following code snippet demonstrates how to set it up and use the tool to list the contents of a specified directory:
```python Code
from crewai_tools import DirectoryReadTool
# Initialize the tool so the agent can read any directory's content
# it learns about during execution
tool = DirectoryReadTool()
# OR
# Initialize the tool with a specific directory,
# so the agent can only read the content of the specified directory
tool = DirectoryReadTool(directory='/path/to/your/directory')
```
## Arguments
The following parameters can be used to customize the `DirectoryReadTool`'s behavior:
| Argument | Type | Description |
|:---------------|:---------|:-------------------------------------------------------------------------------------------------------------------------------------|
| **directory** | `string` | _Optional_. An argument that specifies the path to the directory whose contents you wish to list. It accepts both absolute and relative paths, guiding the tool to the desired directory for content listing. |


@@ -1,12 +1,21 @@
---
title: Directory RAG Search
description: The `DirectorySearchTool` is a powerful RAG (Retrieval-Augmented Generation) tool designed for semantic searches within a directory's content.
icon: address-book
---
# `DirectorySearchTool`
<Note>
**Experimental**: The DirectorySearchTool is under continuous development. Features and functionalities might evolve, and unexpected behavior may occur as we refine the tool.
</Note>
## Description
The DirectorySearchTool enables semantic search within the content of specified directories, leveraging the Retrieval-Augmented Generation (RAG) methodology for efficient navigation through files. Designed for flexibility, it allows users to dynamically specify search directories at runtime or set a fixed directory during initial setup.
## Installation
To use the DirectorySearchTool, begin by installing the crewai_tools package. Execute the following command in your terminal:
```shell
@@ -14,9 +23,10 @@ pip install 'crewai[tools]'
```
## Initialization and Usage
Import the DirectorySearchTool from the `crewai_tools` package to start. You can initialize the tool without specifying a directory, enabling the setting of the search directory at runtime. Alternatively, the tool can be initialized with a predefined directory.
```python Code
from crewai_tools import DirectorySearchTool
# For dynamic directory specification at runtime
@@ -27,12 +37,14 @@ tool = DirectorySearchTool(directory='/path/to/directory')
```
## Arguments
- `directory`: A string argument that specifies the search directory. This is optional during initialization but required for searches if not set initially.
## Custom Model and Embeddings
The DirectorySearchTool uses OpenAI for embeddings and summarization by default. Customization options for these settings include changing the model provider and configuration, enhancing flexibility for advanced users.
```python Code
tool = DirectorySearchTool(
config=dict(
llm=dict(


@@ -1,12 +1,24 @@
---
title: DOCX RAG Search
description: The `DOCXSearchTool` is a RAG tool designed for semantic searching within DOCX documents.
icon: file-word
---
# `DOCXSearchTool`
<Note>
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
</Note>
## Description
The `DOCXSearchTool` is a RAG tool designed for semantic searching within DOCX documents.
It enables users to effectively search and extract relevant information from DOCX files using query-based searches.
This tool is invaluable for data analysis, information management, and research tasks,
streamlining the process of finding specific information within large document collections.
## Installation
Install the crewai_tools package by running the following command in your terminal:
```shell
@@ -14,9 +26,10 @@ pip install 'crewai[tools]'
```
## Example
The following example demonstrates initializing the DOCXSearchTool to search within any DOCX file's content or with a specific DOCX file path.
```python Code
from crewai_tools import DOCXSearchTool
# Initialize the tool to search within any DOCX file's content
@@ -24,18 +37,24 @@ tool = DOCXSearchTool()
# OR
# Initialize the tool with a specific DOCX file,
# so the agent can only search the content of the specified DOCX file
tool = DOCXSearchTool(docx='path/to/your/document.docx')
```
## Arguments
The following parameters can be used to customize the `DOCXSearchTool`'s behavior:
| Argument | Type | Description |
|:---------------|:---------|:-------------------------------------------------------------------------------------------------------------------------------------|
| **docx** | `string` | _Optional_. An argument that specifies the path to the DOCX file you want to search. If not provided during initialization, the tool allows for later specification of any DOCX file's content path for searching. |
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python Code
tool = DOCXSearchTool(
config=dict(
llm=dict(
@@ -57,4 +76,4 @@ tool = DOCXSearchTool(
),
)
)
```


@@ -0,0 +1,52 @@
---
title: EXA Search Web Loader
description: The `EXASearchTool` is designed to perform a semantic search for a specified query from a text's content across the internet.
icon: globe-pointer
---
# `EXASearchTool`
## Description
The EXASearchTool is designed to perform a semantic search for a specified query from a text's content across the internet.
It utilizes the [exa.ai](https://exa.ai/) API to fetch and display the most relevant search results based on the query provided by the user.
## Installation
To incorporate this tool into your project, follow the installation instructions below:
```shell
pip install 'crewai[tools]'
```
## Example
The following example demonstrates how to initialize the tool and execute a search with a given query:
```python Code
from crewai_tools import EXASearchTool
# Initialize the tool for internet searching capabilities
tool = EXASearchTool()
```
## Steps to Get Started
To effectively use the EXASearchTool, follow these steps:
<Steps>
<Step title="Package Installation">
Confirm that the `crewai[tools]` package is installed in your Python environment.
</Step>
<Step title="API Key Acquisition">
Acquire an [exa.ai](https://exa.ai/) API key by registering for a free account at [exa.ai](https://exa.ai/).
</Step>
<Step title="Environment Configuration">
Store your obtained API key in an environment variable named `EXA_API_KEY` to facilitate its use by the tool.
</Step>
</Steps>
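Once the key is configured, a minimal hedged sketch of handing the tool to an agent (the agent fields are illustrative):
```python Code
from crewai import Agent
from crewai_tools import EXASearchTool

# Assumes EXA_API_KEY is already set in the environment.
tool = EXASearchTool()

researcher = Agent(
    role="Internet Researcher",
    goal="Find timely, relevant information on the web",
    backstory="A researcher adept at crafting effective search queries.",
    tools=[tool],
)
```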
## Conclusion
By integrating the `EXASearchTool` into Python projects, users gain the ability to conduct real-time, relevant searches across the internet directly from their applications.
By adhering to the setup and usage guidelines provided, incorporating this tool into projects is streamlined and straightforward.


@@ -1,12 +1,24 @@
---
title: File Read
description: The `FileReadTool` is designed to read files from the local file system.
icon: folders
---
# `FileReadTool`
<Note>
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
</Note>
## Description
The FileReadTool conceptually represents a suite of functionalities within the crewai_tools package aimed at facilitating file reading and content retrieval.
This suite includes tools for processing batch text files, reading runtime configuration files, and importing data for analytics.
It supports a variety of text-based file formats such as `.txt`, `.csv`, `.json`, and more. Depending on the file type, the suite offers specialized functionality,
such as converting JSON content into a Python dictionary for ease of use.
## Installation
To utilize the functionalities previously attributed to the FileReadTool, install the crewai_tools package:
```shell
@@ -14,9 +26,10 @@ pip install 'crewai[tools]'
```
## Usage Example
To get started with the FileReadTool:
```python Code
from crewai_tools import FileReadTool
# Initialize the tool to read any files the agent knows or learns the path for
@@ -29,4 +42,5 @@ file_read_tool = FileReadTool(file_path='path/to/your/file.txt')
```
## Arguments
- `file_path`: The path to the file you want to read. It accepts both absolute and relative paths. Ensure the file exists and you have the necessary permissions to access it.


@@ -1,9 +1,19 @@
---
title: File Write
description: The `FileWriterTool` is designed to write content to files.
icon: file-pen
---
# `FileWriterTool`
## Description
The `FileWriterTool` is a component of the crewai_tools package, designed to simplify the process of writing content to files.
It is particularly useful in scenarios such as generating reports, saving logs, creating configuration files, and more.
This tool supports creating new directories if they don't exist, making it easier to organize your output.
## Installation
Install the crewai_tools package to use the `FileWriterTool` in your projects:
```shell
@@ -11,9 +21,10 @@ pip install 'crewai[tools]'
```
## Example
To get started with the `FileWriterTool`:
```python Code
from crewai_tools import FileWriterTool
# Initialize the tool
@@ -25,9 +36,13 @@ print(result)
```
## Arguments
- `filename`: The name of the file you want to create or overwrite.
- `content`: The content to write into the file.
- `directory` (optional): The path to the directory where the file will be created. Defaults to the current directory (`.`). If the directory does not exist, it will be created.
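As a hedged sketch of invoking the tool directly with these arguments (inside a crew the agent supplies them itself; the filename and content below are placeholders):
```python Code
from crewai_tools import FileWriterTool

file_writer_tool = FileWriterTool()

# Write content to output/report.txt, creating the directory if needed.
result = file_writer_tool._run(
    filename='report.txt',
    content='Quarterly results summary...',
    directory='output',
)
print(result)
```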
## Conclusion
By integrating the `FileWriterTool` into your crews, the agents can execute the process of writing content to files and creating directories. This tool is essential for tasks that require saving output data, creating structured file systems, and more. By adhering to the setup and usage guidelines provided, incorporating this tool into projects is straightforward and efficient.
By integrating the `FileWriterTool` into your crews, the agents can execute the process of writing content to files and creating directories.
This tool is essential for tasks that require saving output data, creating structured file systems, and more. By adhering to the setup and usage guidelines provided,
incorporating this tool into projects is straightforward and efficient.


@@ -1,4 +1,10 @@
---
title: Firecrawl Crawl Website
description: The `FirecrawlCrawlWebsiteTool` is designed to crawl and convert websites into clean markdown or structured data.
icon: fire-flame
---
# `FirecrawlCrawlWebsiteTool`
## Description
@@ -9,7 +15,7 @@
- Get an API key from [firecrawl.dev](https://firecrawl.dev) and set it in environment variables (`FIRECRAWL_API_KEY`).
- Install the [Firecrawl SDK](https://github.com/mendableai/firecrawl) along with `crewai[tools]` package:
```shell
pip install firecrawl-py 'crewai[tools]'
```
@@ -17,7 +23,7 @@ pip install firecrawl-py 'crewai[tools]'
Utilize the FirecrawlCrawlWebsiteTool as follows to allow your agent to load websites:
```python Code
from crewai_tools import FirecrawlCrawlWebsiteTool
tool = FirecrawlCrawlWebsiteTool(url='firecrawl.dev')
@@ -39,4 +45,3 @@ tool = FirecrawlCrawlWebsiteTool(url='firecrawl.dev')
- `mode`: Optional. The crawling mode to use. Fast mode crawls 4x faster on websites without a sitemap but may not be as accurate and shouldn't be used on heavily JavaScript-rendered websites.
- `limit`: Optional. Maximum number of pages to crawl.
- `timeout`: Optional. Timeout in milliseconds for the crawling operation.
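As an illustrative, hedged sketch of passing these optional arguments (whether the SDK accepts them directly on the constructor or inside a nested params object may vary by version):
```python Code
from crewai_tools import FirecrawlCrawlWebsiteTool

# Illustrative values; `timeout` is in milliseconds per the list above.
tool = FirecrawlCrawlWebsiteTool(
    url='https://firecrawl.dev',
    limit=10,       # crawl at most 10 pages
    timeout=30000,  # 30-second timeout
)
```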


@@ -1,4 +1,10 @@
---
title: Firecrawl Scrape Website
description: The `FirecrawlScrapeWebsiteTool` is designed to scrape websites and convert them into clean markdown or structured data.
icon: fire-flame
---
# `FirecrawlScrapeWebsiteTool`
## Description
@@ -9,7 +15,7 @@
- Get an API key from [firecrawl.dev](https://firecrawl.dev) and set it in environment variables (`FIRECRAWL_API_KEY`).
- Install the [Firecrawl SDK](https://github.com/mendableai/firecrawl) along with `crewai[tools]` package:
```shell
pip install firecrawl-py 'crewai[tools]'
```
@@ -17,7 +23,7 @@ pip install firecrawl-py 'crewai[tools]'
Utilize the FirecrawlScrapeWebsiteTool as follows to allow your agent to load websites:
```python Code
from crewai_tools import FirecrawlScrapeWebsiteTool
tool = FirecrawlScrapeWebsiteTool(url='firecrawl.dev')
@@ -35,4 +41,3 @@ tool = FirecrawlScrapeWebsiteTool(url='firecrawl.dev')
- `extractionPrompt`: Optional. A prompt describing what information to extract from the page
- `extractionSchema`: Optional. The schema for the data to be extracted
- `timeout`: Optional. Timeout in milliseconds for the request


@@ -1,4 +1,10 @@
---
title: Firecrawl Search
description: The `FirecrawlSearchTool` is designed to search websites and convert them into clean markdown or structured data.
icon: fire-flame
---
# `FirecrawlSearchTool`
## Description
@@ -9,7 +15,7 @@
- Get an API key from [firecrawl.dev](https://firecrawl.dev) and set it in environment variables (`FIRECRAWL_API_KEY`).
- Install the [Firecrawl SDK](https://github.com/mendableai/firecrawl) along with `crewai[tools]` package:
```shell
pip install firecrawl-py 'crewai[tools]'
```
@@ -17,7 +23,7 @@ pip install firecrawl-py 'crewai[tools]'
Utilize the FirecrawlSearchTool as follows to allow your agent to load websites:
```python Code
from crewai_tools import FirecrawlSearchTool
tool = FirecrawlSearchTool(query='what is firecrawl?')


@@ -1,12 +1,21 @@
---
title: Github Search
description: The `GithubSearchTool` is a Retrieval-Augmented Generation (RAG) tool designed for conducting semantic searches within GitHub repositories.
icon: github
---
# `GithubSearchTool`
<Note>
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
</Note>
## Description
The GithubSearchTool is a Retrieval-Augmented Generation (RAG) tool specifically designed for conducting semantic searches within GitHub repositories. Utilizing advanced semantic search capabilities, it sifts through code, pull requests, issues, and repositories, making it an essential tool for developers, researchers, or anyone in need of precise information from GitHub.
## Installation
To use the GithubSearchTool, first ensure the crewai_tools package is installed in your Python environment:
```shell
@@ -16,8 +25,10 @@ pip install 'crewai[tools]'
This command installs the necessary package to run the GithubSearchTool along with any other tools included in the crewai_tools package.
## Example
Here's how you can use the GithubSearchTool to perform semantic searches within a GitHub repository:
```python Code
from crewai_tools import GithubSearchTool
# Initialize the tool for semantic searches within a specific GitHub repository
@@ -35,14 +46,17 @@ tool = GithubSearchTool(
```
## Arguments
- `github_repo` : The URL of the GitHub repository where the search will be conducted. This is a mandatory field and specifies the target repository for your search.
- `content_types` : Specifies the types of content to include in your search. You must provide a list of content types from the following options: `code` for searching within the code,
`repo` for searching within the repository's general information, `pr` for searching within pull requests, and `issue` for searching within issues.
This field is mandatory and allows tailoring the search to specific content types within the GitHub repository.
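Since the diff truncates the example above, here is a hedged sketch of a complete initialization (the repository URL and content types are placeholders):
```python Code
from crewai_tools import GithubSearchTool

# Search only code and issues in the target repository.
tool = GithubSearchTool(
    github_repo='https://github.com/example/repo',
    content_types=['code', 'issue'],
)
```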
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python Code
tool = GithubSearchTool(
config=dict(
llm=dict(
@@ -64,4 +78,4 @@ tool = GithubSearchTool(
),
)
)
```


@@ -1,12 +1,22 @@
---
title: JSON RAG Search
description: The `JSONSearchTool` is designed to search JSON files and return the most relevant results.
icon: file-code
---
# `JSONSearchTool`
<Note>
The JSONSearchTool is currently in an experimental phase. This means the tool is under active development, and users might encounter unexpected behavior or changes.
We highly encourage feedback on any issues or suggestions for improvements.
</Note>
## Description
The JSONSearchTool is designed to facilitate efficient and precise searches within JSON file contents. It utilizes a Retrieval-Augmented Generation (RAG) search mechanism, allowing users to specify a JSON path for targeted searches within a particular JSON file. This capability significantly improves the accuracy and relevance of search results.
## Installation
To install the JSONSearchTool, use the following pip command:
```shell
pip install 'crewai[tools]'
```
## Usage Examples
Here are updated examples on how to utilize the JSONSearchTool effectively for searching within JSON files. These examples take into account the current implementation and usage patterns identified in the codebase.
```python Code
from crewai.json_tools import JSONSearchTool # Updated import path
# General JSON content search
tool = JSONSearchTool()

# Restricting search to a specific JSON file
# Use this initialization method when you want to limit the search scope to a specific JSON file.
tool = JSONSearchTool(json_path='./path/to/your/file.json')
```
## Arguments
- `json_path` (str, optional): Specifies the path to the JSON file to be searched. This argument is not required if the tool is initialized for a general search. When provided, it confines the search to the specified JSON file.
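To hand the tool to an agent, the wiring follows the same pattern as the other tools in this package. This is a minimal sketch; the agent's role, goal, and backstory are illustrative placeholders.
```python Code
from crewai import Agent
from crewai.json_tools import JSONSearchTool  # import path as shown above

tool = JSONSearchTool(json_path='./path/to/your/file.json')

# Hypothetical agent definition; role, goal, and backstory are placeholders
analyst = Agent(
    role="Data Analyst",
    goal="Answer questions using the contents of the configured JSON file",
    backstory="An analyst who digs through structured data for precise answers",
    tools=[tool],
    verbose=True,
)
```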
## Configuration Options
The JSONSearchTool supports extensive customization through a configuration dictionary. This allows users to select different models for embeddings and summarization based on their requirements.
```python Code
tool = JSONSearchTool(
config={
"llm": {

---
title: MDX RAG Search
description: The `MDXSearchTool` is designed to search MDX files and return the most relevant results.
icon: markdown
---
# `MDXSearchTool`
<Note>
The MDXSearchTool is in continuous development. Features may be added or removed, and functionality could change unpredictably as we refine the tool.
</Note>
## Description
The MDX Search Tool is a component of the `crewai_tools` package aimed at facilitating advanced markdown language extraction. It enables users to effectively search and extract relevant information from MDX files using query-based searches. This tool is invaluable for data analysis, information management, and research tasks, streamlining the process of finding specific information within large document collections.
## Installation
Before using the MDX Search Tool, ensure the `crewai_tools` package is installed. If it is not, you can install it with the following command:
```shell
pip install 'crewai[tools]'
```
## Usage Example
To use the MDX Search Tool, you must first set up the necessary environment variables. Then, integrate the tool into your crewAI project to begin your research. Below is a basic example of how to do this:
```python Code
from crewai_tools import MDXSearchTool
# Initialize the tool to search any MDX content it learns about during execution
tool = MDXSearchTool()

# OR

# Initialize the tool with a specific MDX file path for an exclusive search within that document
tool = MDXSearchTool(mdx='path/to/your/document.mdx')
```
## Parameters
- `mdx`: **Optional**. Specifies the MDX file path for the search. It can be provided during initialization.
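Once initialized, the tool can also be queried directly, which is handy for a quick check outside of a crew. This sketch assumes the tool shares the `run(search_query=...)` convention used by the package's other RAG tools.
```python Code
from crewai_tools import MDXSearchTool

tool = MDXSearchTool(mdx='path/to/your/document.mdx')

# Assumes the shared run(search_query=...) convention of the RAG tools
print(tool.run(search_query="Which sections cover installation?"))
```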
## Customization of Model and Embeddings
The tool defaults to using OpenAI for embeddings and summarization. For customization, utilize a configuration dictionary as shown below:
```python Code
tool = MDXSearchTool(
config=dict(
llm=dict(
            provider="ollama", # or google, openai, anthropic, llama2, ...
            config=dict(
                model="llama2",
                # temperature=0.5,
                # top_p=1,
                # stream=true,
            ),
        ),
        embedder=dict(
            provider="google", # or openai, ollama, ...
            config=dict(
                model="models/embedding-001",
                task_type="retrieval_document",
                # title="Embeddings",
            ),
),
)
)
```

---
title: MySQL RAG Search
description: The `MySQLSearchTool` is designed to search MySQL databases and return the most relevant results.
icon: database
---
# `MySQLSearchTool`
## Description
This tool is designed to facilitate semantic searches within MySQL database tables. Leveraging RAG (Retrieval-Augmented Generation) technology,
the MySQLSearchTool provides users with an efficient means of querying database table content, specifically tailored for MySQL databases.
It simplifies the process of finding relevant data through semantic search queries, making it an invaluable resource for users needing
to perform advanced queries on extensive datasets within a MySQL database.
## Installation
To install the `crewai_tools` package and utilize the MySQLSearchTool, execute the following command in your terminal:
```shell
pip install 'crewai[tools]'
```
## Example
Below is an example showcasing how to use the MySQLSearchTool to conduct a semantic search on a table within a MySQL database:
```python Code
from crewai_tools import MySQLSearchTool
# Initialize the tool with the database URI and the target table name
tool = MySQLSearchTool(
db_uri='mysql://user:password@localhost:3306/mydatabase',
table_name='employees'
)
```
## Arguments
The MySQLSearchTool requires the following arguments for its operation:
- `db_uri`: A string representing the URI of the MySQL database to be queried. This argument is mandatory and must include the necessary authentication details and the location of the database.
- `table_name`: A string specifying the name of the table within the database on which the semantic search will be performed. This argument is mandatory.

## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python Code
tool = MySQLSearchTool(
config=dict(
llm=dict(
            provider="ollama", # or google, openai, anthropic, llama2, ...
            config=dict(
                model="llama2",
                # temperature=0.5,
                # top_p=1,
                # stream=true,
            ),
        ),
        embedder=dict(
            provider="google", # or openai, ollama, ...
            config=dict(
                model="models/embedding-001",
                task_type="retrieval_document",
                # title="Embeddings",
            ),
),
)
)
```

---
title: NL2SQL Tool
description: The `NL2SQLTool` is designed to convert natural language to SQL queries.
icon: language
---
# `NL2SQLTool`
## Description
This tool is used to convert natural language to SQL queries. When passed to the agent, it will generate queries and then use them to interact with the database.
This enables multiple workflows, such as having an Agent access the database, fetch information based on its goal, and then use that information to generate a response, report, or any other output.
It also provides the ability for the Agent to update the database based on its goal.
**Attention**: Make sure that the Agent has access to a Read-Replica or that it is okay for the Agent to run insert/update queries on the database.
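For example, restricting the Agent to a read replica is purely a matter of the URI you pass; the host and credentials below are placeholders.
```python Code
from crewai_tools import NL2SQLTool

# Placeholder read-replica URI; point the tool at a read-only endpoint
# when the Agent should not be able to modify data
nl2sql = NL2SQLTool(db_uri="postgresql://readonly_user:password@replica-host:5432/mydb")
```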
## Requirements

- SqlAlchemy
- Any DB compatible library (e.g. psycopg2, mysql-connector-python)
## Installation
Install the crewai_tools package
```shell
pip install 'crewai[tools]'
```
## Usage
In order to use the NL2SQLTool, you need to pass the database URI to the tool. The URI should be in the format `dialect+driver://username:password@host:port/database`.
```python Code
from crewai_tools import NL2SQLTool
# psycopg2 was installed to run this example with PostgreSQL
nl2sql = NL2SQLTool(db_uri="postgresql://example@localhost:5432/test_db")

@agent
def researcher(self) -> Agent:
    return Agent(
        config=self.agents_config["researcher"],
        allow_delegation=False,
        tools=[nl2sql]
    )
```

## Example
The primary task goal was:
"Retrieve the average, maximum, and minimum monthly revenue for each city, but only include cities that have more than one user. Also, count the number of user in each city and sort the results by the average monthly revenue in descending order"
"Retrieve the average, maximum, and minimum monthly revenue for each city, but only include cities that have more than one user. Also, count the number of user in each city and
sort the results by the average monthly revenue in descending order"
So the Agent tried to get information from the DB; the first query was wrong, so the Agent tried again, got the correct information, and passed it to the next agent.
This is a simple example of how the NL2SQLTool can be used to interact with the database.
The Tool provides endless possibilities for the logic of the Agent and how it can interact with the database.
```md
DB -> Agent -> ... -> Agent -> DB
```

---
title: PDF RAG Search
description: The `PDFSearchTool` is designed to search PDF files and return the most relevant results.
icon: file-pdf
---
# `PDFSearchTool`
<Note>
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
</Note>
## Description
The PDFSearchTool is a RAG tool designed for semantic searches within PDF content. It allows for inputting a search query and a PDF document, leveraging advanced search techniques to find relevant content efficiently.
This capability makes it especially useful for extracting specific information from large PDF files quickly.
## Installation
To get started with the PDFSearchTool, first, ensure the crewai_tools package is installed with the following command:
```shell
pip install 'crewai[tools]'
```
## Example
Here's how to use the PDFSearchTool to search within a PDF document:
```python Code
from crewai_tools import PDFSearchTool
# Initialize the tool allowing for any PDF content search if the path is provided during execution
tool = PDFSearchTool()

# OR

# Initialize the tool with a specific PDF path for exclusive search within that document
tool = PDFSearchTool(pdf='path/to/your/document.pdf')
```
## Arguments
- `pdf`: **Optional** The PDF path for the search. Can be provided at initialization or within the `run` method's arguments. If provided at initialization, the tool confines its search to the specified document.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python Code
tool = PDFSearchTool(
config=dict(
llm=dict(
            provider="ollama", # or google, openai, anthropic, llama2, ...
            config=dict(
                model="llama2",
                # temperature=0.5,
                # top_p=1,
                # stream=true,
            ),
        ),
        embedder=dict(
            provider="google", # or openai, ollama, ...
            config=dict(
                model="models/embedding-001",
                task_type="retrieval_document",
                # title="Embeddings",
            ),
),
)
)
```

---
title: PG RAG Search
description: The `PGSearchTool` is designed to search PostgreSQL databases and return the most relevant results.
icon: database
---
# `PGSearchTool`
<Note>
The PGSearchTool is currently under development. This document outlines the intended functionality and interface.
As development progresses, please be aware that some features may not be available or could change.
</Note>
## Description
The PGSearchTool is envisioned as a powerful tool for facilitating semantic searches within PostgreSQL database tables. By leveraging Retrieval-Augmented Generation (RAG) technology,
it aims to provide an efficient means for querying database table content, specifically tailored for PostgreSQL databases.
The tool's goal is to simplify the process of finding relevant data through semantic search queries, offering a valuable resource for users needing to conduct advanced queries on
extensive datasets within a PostgreSQL environment.
## Installation
The `crewai_tools` package, which will include the PGSearchTool upon its release, can be installed using the following command:
```shell
pip install 'crewai[tools]'
```
<Note>
The PGSearchTool is not yet available in the current version of the `crewai_tools` package. This installation command will be updated once the tool is released.
</Note>
## Example Usage
Below is a proposed example showcasing how to use the PGSearchTool for conducting a semantic search on a table within a PostgreSQL database:
```python Code
from crewai_tools import PGSearchTool
# Initialize the tool with the database URI and the target table name
tool = PGSearchTool(
db_uri='postgresql://user:password@localhost:5432/mydatabase',
table_name='employees'
)
```
## Arguments
The PGSearchTool is designed to require the following arguments for its operation:
| Argument | Type | Description |
|:---------------|:---------|:-------------------------------------------------------------------------------------------------------------------------------------|
| **db_uri** | `string` | **Mandatory**. A string representing the URI of the PostgreSQL database to be queried. This argument will be mandatory and must include the necessary authentication details and the location of the database. |
| **table_name** | `string` | **Mandatory**. A string specifying the name of the table within the database on which the semantic search will be performed. This argument will also be mandatory. |
## Custom Model and Embeddings
The tool intends to use OpenAI for both embeddings and summarization by default. Users will have the option to customize the model using a config dictionary as follows:
```python Code
tool = PGSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
)
```

---
title: Scrape Website
description: The `ScrapeWebsiteTool` is designed to extract and read the content of a specified website.
icon: magnifying-glass-location
---
# `ScrapeWebsiteTool`
<Note>
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
</Note>
## Description
A tool designed to extract and read the content of a specified website. It is capable of handling various types of web pages by making HTTP requests and parsing the received HTML content.
This tool can be particularly useful for web scraping tasks, data collection, or extracting specific information from websites.
## Installation
Install the crewai_tools package
```shell
pip install 'crewai[tools]'
```
## Example
```python Code
from crewai_tools import ScrapeWebsiteTool
# To enable scraping any website it finds during its execution
tool = ScrapeWebsiteTool()
# Initialize the tool with the website URL,
# so the agent can only scrape the content of the specified website
tool = ScrapeWebsiteTool(website_url='https://www.example.com')
# Extract the text from the site
text = tool.run()
print(text)
```
## Arguments
| Argument | Type | Description |
|:---------------|:---------|:-------------------------------------------------------------------------------------------------------------------------------------|
| **website_url** | `string` | **Mandatory**. The URL of the website to read. This is the primary input for the tool, specifying which website's content should be scraped and read. |
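To use the tool inside a crew rather than standalone, attach it to an agent in the usual way. The agent fields below are illustrative placeholders.
```python Code
from crewai import Agent
from crewai_tools import ScrapeWebsiteTool

tool = ScrapeWebsiteTool(website_url='https://www.example.com')

# Hypothetical agent definition; role, goal, and backstory are placeholders
researcher = Agent(
    role="Web Content Analyst",
    goal="Summarize the content of the configured website",
    backstory="An analyst who extracts key facts from web pages",
    tools=[tool],
    verbose=True,
)
```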

---
title: Selenium Scraper
description: The `SeleniumScrapingTool` is designed to extract and read the content of a specified website using Selenium.
icon: clipboard-user
---
# `SeleniumScrapingTool`
<Note>
This tool is currently in development. As we refine its capabilities, users may encounter unexpected behavior.
Your feedback is invaluable to us for making improvements.
</Note>
## Description
The SeleniumScrapingTool is crafted for high-efficiency web scraping tasks.
It allows for precise extraction of content from web pages by using CSS selectors to target specific elements.
Its design caters to a wide range of scraping needs, offering flexibility to work with any provided website URL.
## Installation
To get started with the SeleniumScrapingTool, install the crewai_tools package using pip:
```shell
pip install 'crewai[tools]'
```
## Usage Examples
Below are some scenarios where the SeleniumScrapingTool can be utilized:
```python Code
from crewai_tools import SeleniumScrapingTool
# Example 1:
# Initialize the tool without any parameters to scrape
# the current page it navigates to
tool = SeleniumScrapingTool()
# Example 2:
# Scrape the entire webpage of a given URL
tool = SeleniumScrapingTool(website_url='https://example.com')
# Example 3:
# Target and scrape a specific CSS element from a webpage
tool = SeleniumScrapingTool(
website_url='https://example.com',
css_element='.main-content'
)
# Example 4:
# Perform scraping with additional parameters for a customized experience
tool = SeleniumScrapingTool(
website_url='https://example.com',
css_element='.main-content',
cookie={'name': 'user', 'value': 'John Doe'},
wait_time=10
)
```
## Arguments
The following parameters can be used to customize the SeleniumScrapingTool's scraping process:
| Argument | Type | Description |
|:---------------|:---------|:-------------------------------------------------------------------------------------------------------------------------------------|
| **website_url** | `string` | **Mandatory**. Specifies the URL of the website from which content is to be scraped. |
| **css_element** | `string` | **Mandatory**. The CSS selector for a specific element to target on the website, enabling focused scraping of a particular part of a webpage. |
| **cookie** | `object` | **Optional**. A dictionary containing cookie information, useful for simulating a logged-in session to access restricted content. |
| **wait_time** | `int` | **Optional**. Specifies the delay (in seconds) before scraping, allowing the website and any dynamic content to fully load. |
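For a quick standalone check, the tool can be invoked directly. This sketch assumes `run()` returns the scraped content, mirroring the `ScrapeWebsiteTool` behavior shown earlier in these docs.
```python Code
from crewai_tools import SeleniumScrapingTool

tool = SeleniumScrapingTool(
    website_url='https://example.com',
    css_element='.main-content',
    wait_time=5
)

# Assumes run() returns the scraped text, as with ScrapeWebsiteTool
content = tool.run()
print(content)
```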
<Warning>
Since the `SeleniumScrapingTool` is under active development, the parameters and functionality may evolve over time.
Users are encouraged to keep the tool updated and report any issues or suggestions for enhancements.
</Warning>

---
title: Google Serper Search
description: The `SerperDevTool` is designed to search the internet and return the most relevant results.
icon: google
---
# `SerperDevTool`
<Note>
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
</Note>
## Description
This tool is designed to perform a semantic search for a specified query from a text's content across the internet. It utilizes the [serper.dev](https://serper.dev) API
to fetch and display the most relevant search results based on the query provided by the user.
## Installation
To incorporate this tool into your project, follow the installation instructions below:
```shell
pip install 'crewai[tools]'
```
## Example
The following example demonstrates how to initialize the tool and execute a search with a given query:
```python Code
from crewai_tools import SerperDevTool
# Initialize the tool for internet searching capabilities
tool = SerperDevTool()
```
## Steps to Get Started
To effectively use the `SerperDevTool`, follow these steps:
1. **Package Installation**: Confirm that the `crewai[tools]` package is installed in your Python environment.
2. **API Key Acquisition**: Acquire a `serper.dev` API key by registering for a free account at [serper.dev](https://serper.dev).
3. **Environment Configuration**: Store your obtained API key in an environment variable named `SERPER_API_KEY` so the tool can use it.

## Parameters

The `SerperDevTool` comes with several parameters that will be passed to the API:

- **search_url**: The URL endpoint for the search API. (Default is `https://google.serper.dev/search`)
- **country**: Optional. Specify the country for the search results.
- **location**: Optional. Specify the location for the search results.
- **locale**: Optional. Specify the locale for the search results.
- **n_results**: Number of search results to return. Default is `10`.
The values for `country`, `location`, `locale` and `search_url` can be found on the [Serper Playground](https://serper.dev/playground).
## Example with Parameters
Here is an example demonstrating how to use the tool with additional parameters:
```python Code
from crewai_tools import SerperDevTool
tool = SerperDevTool(
    n_results=2,
)

print(tool.run(search_query="ChatGPT"))
```
```python Code
from crewai_tools import SerperDevTool
tool = SerperDevTool(
    country="fr",
    locale="fr",
    location="Paris, Paris, Ile-de-France, France",
    n_results=2,
)

print(tool.run(search_query="Jeux Olympiques"))

# Using Tool: Search the internet

# Search results: Title: Billetterie Officielle de Paris 2024 - Achetez vos billets
# Link: https://tickets.paris2024.org/
# Snippet: Achetez vos billets exclusivement sur le site officiel de la billetterie de Paris 2024 pour participer au plus grand événement sportif au monde.
# ---
```
## Conclusion
By integrating the `SerperDevTool` into Python projects, users gain the ability to conduct real-time, relevant searches across the internet directly from their applications.
The updated parameters allow for more customized and localized search results. By adhering to the setup and usage guidelines provided, incorporating this tool into projects is streamlined and straightforward.

---
title: Spider Scraper
description: The `SpiderTool` is designed to extract and read the content of a specified website using Spider.
icon: spider-web
---
# `SpiderTool`
## Description
[Spider](https://spider.cloud/?ref=crewai) is the [fastest](https://github.com/spider-rs/spider/blob/main/benches/BENCHMARKS.md#benchmark-results)
open source scraper and crawler that returns LLM-ready data.
It converts any website into pure HTML, markdown, metadata or text while enabling you to crawl with custom actions using AI.
## Installation
To use the `SpiderTool` you need to download the [Spider SDK](https://pypi.org/project/spider-client/)
and the `crewai[tools]` SDK too:
```shell
pip install spider-client 'crewai[tools]'
```
## Example
This example shows you how you can use the `SpiderTool` to enable your agent to scrape and crawl websites.
The data returned from the Spider API is already LLM-ready, so no need to do any cleaning there.
```python Code
from crewai import Agent, Task, Crew
from crewai_tools import SpiderTool
def main():
spider_tool = SpiderTool()
searcher = Agent(
role="Web Research Expert",
goal="Find related information from specific URL's",
backstory="An expert web researcher that uses the web extremely well",
tools=[spider_tool],
verbose=True,
)
return_metadata = Task(
description="Scrape https://spider.cloud with a limit of 1 and enable metadata",
expected_output="Metadata and 10 word summary of spider.cloud",
agent=searcher
)
crew = Crew(
agents=[searcher],
tasks=[
return_metadata,
],
verbose=2
)
crew.kickoff()
if __name__ == "__main__":
main()
```
## Arguments
| Argument | Type | Description |
|:------------------|:---------|:-----------------------------------------------------------------------------------------------------------------------------------------------------|
| **api_key** | `string` | Specifies Spider API key. If not specified, it looks for `SPIDER_API_KEY` in environment variables. |
| **params** | `object` | Optional parameters for the request. Defaults to `{"return_format": "markdown"}` to optimize content for LLMs. |
| **request** | `string` | Type of request to perform (`http`, `chrome`, `smart`). `smart` defaults to HTTP, switching to JavaScript rendering if needed. |
| **limit** | `int` | Max pages to crawl per website. Set to `0` or omit for unlimited. |
| **depth** | `int` | Max crawl depth. Set to `0` for no limit. |
| **cache** | `bool` | Enables HTTP caching to speed up repeated runs. Default is `true`. |
| **budget** | `object` | Sets path-based limits for crawled pages, e.g., `{"*":1}` for root page only. |
| **locale** | `string` | Locale for the request, e.g., `en-US`. |
| **cookies** | `string` | HTTP cookies for the request. |
| **stealth** | `bool` | Enables stealth mode for Chrome requests to avoid detection. Default is `true`. |
| **headers** | `object` | HTTP headers as a map of key-value pairs for all requests. |
| **metadata** | `bool` | Stores metadata about pages and content, aiding AI interoperability. Defaults to `false`. |
| **viewport** | `object` | Sets Chrome viewport dimensions. Default is `800x600`. |
| **encoding** | `string` | Specifies encoding type, e.g., `UTF-8`, `SHIFT_JIS`. |
| **subdomains** | `bool` | Includes subdomains in the crawl. Default is `false`. |
| **user_agent** | `string` | Custom HTTP user agent. Defaults to a random agent. |
| **store_data** | `bool` | Enables data storage for the request. Overrides `storageless` when set. Default is `false`. |
| **gpt_config** | `object` | Allows AI to generate crawl actions, with optional chaining steps via an array for `"prompt"`. |
| **fingerprint** | `bool` | Enables advanced fingerprinting for Chrome. |
| **storageless** | `bool` | Prevents all data storage, including AI embeddings. Default is `false`. |
| **readability** | `bool` | Pre-processes content for reading via [Mozilla's readability](https://github.com/mozilla/readability). Improves content for LLMs. |
| **return_format** | `string` | Format to return data: `markdown`, `raw`, `text`, `html2text`. Use `raw` for default page format. |
| **proxy_enabled** | `bool` | Enables high-performance proxies to avoid network-level blocking. |
| **query_selector** | `string` | CSS query selector for content extraction from markup. |
| **full_resources** | `bool` | Downloads all resources linked to the website. |
| **request_timeout** | `int` | Timeout in seconds for requests (5-60). Default is `30`. |
| **run_in_background** | `bool` | Runs the request in the background, useful for data storage and triggering dashboard crawls. No effect if `storageless` is set. |
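Most of the options above travel in the `params` dictionary. The sketch below assumes the constructor accepts a `params` mapping that is forwarded to the Spider API; the specific values are illustrative.
```python Code
from crewai_tools import SpiderTool

# Assumes the constructor forwards `params` to the Spider API
spider_tool = SpiderTool(
    params={
        "return_format": "markdown",  # LLM-ready output (the default)
        "metadata": True,             # store page metadata
        "limit": 1,                   # crawl a single page
    }
)
```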

---
title: TXT RAG Search
description: The `TXTSearchTool` is designed to perform a RAG (Retrieval-Augmented Generation) search within the content of a text file.
icon: file-lines
---
# `TXTSearchTool`
<Note>
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
</Note>
## Description
This tool is used to perform a RAG (Retrieval-Augmented Generation) search within the content of a text file.
It allows for semantic searching of a query within a specified text file's content,
making it an invaluable resource for quickly extracting information or finding specific sections of text based on the query provided.
## Installation
To use the `TXTSearchTool`, you first need to install the `crewai_tools` package.
This can be done using pip, a package manager for Python.
Open your terminal or command prompt and enter the following command:
```shell
pip install 'crewai[tools]'
```
This command will download and install the TXTSearchTool along with any necessary dependencies.
## Example
The following example demonstrates how to use the TXTSearchTool to search within a text file.
This example shows both the initialization of the tool with a specific text file and the subsequent search within that file's content.
```python Code
from crewai_tools import TXTSearchTool
# Initialize the tool to search within any text file's content
# the agent learns about during its execution
tool = TXTSearchTool()
# OR
# Initialize the tool with a specific text file,
# so the agent can search within the given text file's content
tool = TXTSearchTool(txt='path/to/text/file.txt')
```
## Arguments
- `txt` (str): **Optional**. The path to the text file you want to search.
This argument is only required if the tool was not initialized with a specific text file;
otherwise, the search will be conducted within the initially provided text file.
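For a direct query outside of an agent, a minimal sketch follows; it assumes the tool shares the `run(search_query=...)` convention used by the package's other RAG tools.
```python Code
from crewai_tools import TXTSearchTool

tool = TXTSearchTool(txt='path/to/text/file.txt')

# Assumes the shared run(search_query=...) convention of the RAG tools
print(tool.run(search_query="What are the key findings?"))
```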
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization.
To customize the model, you can use a config dictionary as follows:
```python Code
tool = TXTSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
)
```

---
title: Vision Tool
description: The `VisionTool` is designed to extract text from images.
icon: eye
---
# `VisionTool`
## Description
This tool is used to extract text from images. When passed to the agent, it will extract the text from the image and then use it to generate a response, report, or any other output.
The URL or the PATH of the image should be passed to the Agent.
## Installation
Install the crewai_tools package
```shell
pip install 'crewai[tools]'
```
## Usage
In order to use the VisionTool, the OpenAI API key should be set in the environment variable `OPENAI_API_KEY`.
```python Code
from crewai import Agent
from crewai.project import agent
from crewai_tools import VisionTool
vision_tool = VisionTool()
@agent
def researcher(self) -> Agent:
'''
This agent uses the VisionTool to extract text from images.
'''
return Agent(
config=self.agents_config["researcher"],
allow_delegation=False,
tools=[vision_tool]
)
```
## Arguments
The VisionTool requires the following arguments:
| Argument | Type | Description |
|:---------------|:---------|:-------------------------------------------------------------------------------------------------------------------------------------|
| **image_path** | `string` | **Mandatory**. The path to the image file from which text needs to be extracted. |

---
title: Website RAG Search
description: The `WebsiteSearchTool` is designed to perform a RAG (Retrieval-Augmented Generation) search within the content of a website.
icon: globe-stand
---
# `WebsiteSearchTool`
<Note>
The WebsiteSearchTool is currently in an experimental phase. We are actively working on incorporating this tool into our suite of offerings and will update the documentation accordingly.
</Note>
## Description
The WebsiteSearchTool is designed as a concept for conducting semantic searches within the content of websites.
It aims to leverage advanced machine learning models like Retrieval-Augmented Generation (RAG) to navigate and extract information from specified URLs efficiently.
This tool intends to offer flexibility, allowing users to perform searches across any website or focus on specific websites of interest.
Please note, the current implementation details of the WebsiteSearchTool are under development, and its functionalities as described may not yet be accessible.
## Installation
To prepare your environment for when the WebsiteSearchTool becomes available, you can install the foundational package with:
```shell
pip install 'crewai[tools]'
```
This command installs the necessary dependencies to ensure that once the tool is fully integrated, users can start using it immediately.
## Example Usage
Below are examples of how the WebsiteSearchTool could be utilized in different scenarios. Please note, these examples are illustrative and represent planned functionality:
```python Code
from crewai_tools import WebsiteSearchTool
# Example of initiating tool that agents can use
# to search across any discovered websites
tool = WebsiteSearchTool()
# Example of limiting the search to the content of a specific website,
# so now agents can only search within that website
tool = WebsiteSearchTool(website='https://example.com')
```
## Arguments
- `website`: An optional argument intended to specify the website URL for focused searches. This argument is designed to enhance the tool's flexibility by allowing targeted searches when necessary.
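Since the interface is still being finalized, the following is purely illustrative: it assumes the planned tool will expose the same `run(search_query=...)` convention as the package's other RAG tools.
```python Code
from crewai_tools import WebsiteSearchTool

tool = WebsiteSearchTool(website='https://example.com')

# Illustrative only; assumes the planned run(search_query=...) interface
print(tool.run(search_query="What services does this site describe?"))
```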
## Customization Options
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python Code
tool = WebsiteSearchTool(
config=dict(
        llm=dict(
            provider="ollama", # or google, openai, anthropic, llama2, ...
            config=dict(
                model="llama2",
                # temperature=0.5,
                # top_p=1,
                # stream=true,
            ),
        ),
        embedder=dict(
            provider="google", # or openai, ollama, ...
            config=dict(
                model="models/embedding-001",
                task_type="retrieval_document",
                # title="Embeddings",
            ),
        ),
    )
)
```

---
title: XML RAG Search
description: The `XMLSearchTool` is designed to perform a RAG (Retrieval-Augmented Generation) search within the content of a XML file.
icon: file-xml
---
# `XMLSearchTool`
<Note>
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
</Note>
## Description
The XMLSearchTool is a cutting-edge RAG tool engineered for conducting semantic searches within XML files.
Ideal for users needing to parse and extract information from XML content efficiently, this tool supports inputting a search query and an optional XML file path.
By specifying an XML path, users can target their search more precisely to the content of that file, thereby obtaining more relevant search outcomes.
## Installation
To start using the XMLSearchTool, you must first install the crewai_tools package. This can be easily done with the following command:
```shell
pip install 'crewai[tools]'
```
## Example
Here are two examples demonstrating how to use the XMLSearchTool.
The first example shows searching within a specific XML file, while the second example illustrates initiating a search without predefining an XML path, providing flexibility in search scope.
```python Code
from crewai_tools import XMLSearchTool
# Allow agents to search within any XML file's content
# as it learns about their paths during execution
tool = XMLSearchTool()
# OR
# Initialize the tool with a specific XML file path
# for exclusive search within that document
tool = XMLSearchTool(xml='path/to/your/xmlfile.xml')
```
## Arguments
- `xml`: This is the path to the XML file you wish to search.
It is an optional parameter during the tool's initialization but must be provided either at initialization or as part of the `run` method's arguments to execute a search.
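When the tool is initialized without a file, the path has to accompany the query at run time, as described above. The keyword names in this sketch are assumptions inferred from that description.
```python Code
from crewai_tools import XMLSearchTool

tool = XMLSearchTool()

# Keyword names are assumptions based on the description above
results = tool.run(search_query="entries mentioning timeouts", xml='path/to/your/xmlfile.xml')
print(results)
```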
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python Code
tool = XMLSearchTool(
config=dict(
llm=dict(
            provider="ollama", # or google, openai, anthropic, llama2, ...
            config=dict(
                model="llama2",
                # temperature=0.5,
                # top_p=1,
                # stream=true,
            ),
        ),
        embedder=dict(
            provider="google", # or openai, ollama, ...
            config=dict(
                model="models/embedding-001",
                task_type="retrieval_document",
                # title="Embeddings",
            ),
),
)
)
```

---
title: YouTube Channel RAG Search
description: The `YoutubeChannelSearchTool` is designed to perform a RAG (Retrieval-Augmented Generation) search within the content of a Youtube channel.
icon: youtube
---
# `YoutubeChannelSearchTool`
<Note>
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
</Note>
## Description
This tool is designed to perform semantic searches within a specific Youtube channel's content.
Leveraging the RAG (Retrieval-Augmented Generation) methodology, it provides relevant search results,
making it invaluable for extracting information or finding specific content without the need to manually sift through videos.
It streamlines the search process within Youtube channels, catering to researchers, content creators, and viewers seeking specific information or topics.
## Installation
To utilize the YoutubeChannelSearchTool, the `crewai_tools` package must be installed. Execute the following command in your shell to install:
```shell
pip install 'crewai[tools]'
```
## Example
To begin using the YoutubeChannelSearchTool, follow the example below.
This demonstrates initializing the tool with a specific Youtube channel handle and conducting a search within that channel's content.
```python Code
from crewai_tools import YoutubeChannelSearchTool
# Initialize the tool to search within any Youtube channel's content the agent learns about during its execution
tool = YoutubeChannelSearchTool()

# OR

# Initialize the tool with a specific Youtube channel handle to target your search
tool = YoutubeChannelSearchTool(youtube_channel_handle='@exampleChannel')
```
## Arguments
- `youtube_channel_handle` : A mandatory string representing the Youtube channel handle. This parameter is crucial for initializing the tool to specify the channel you want to search within. The tool is designed to only search within the content of the provided channel handle.
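To put the tool in an agent's hands, the wiring mirrors the other tools in this package; the agent fields below are illustrative placeholders.
```python Code
from crewai import Agent
from crewai_tools import YoutubeChannelSearchTool

tool = YoutubeChannelSearchTool(youtube_channel_handle='@exampleChannel')

# Hypothetical agent definition; role, goal, and backstory are placeholders
researcher = Agent(
    role="Video Researcher",
    goal="Find and summarize relevant videos from the configured channel",
    backstory="A researcher who mines video content for insights",
    tools=[tool],
    verbose=True,
)
```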
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python Code
tool = YoutubeChannelSearchTool(
config=dict(
llm=dict(
            provider="ollama", # or google, openai, anthropic, llama2, ...
            config=dict(
                model="llama2",
                # temperature=0.5,
                # top_p=1,
                # stream=true,
            ),
        ),
        embedder=dict(
            provider="google", # or openai, ollama, ...
            config=dict(
                model="models/embedding-001",
                task_type="retrieval_document",
                # title="Embeddings",
            ),
),
)
)
```

---
title: YouTube Video RAG Search
description: The `YoutubeVideoSearchTool` is designed to perform a RAG (Retrieval-Augmented Generation) search within the content of a Youtube video.
icon: youtube
---
# `YoutubeVideoSearchTool`
<Note>
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
</Note>
## Description
This tool is part of the `crewai_tools` package and is designed to perform semantic searches within Youtube video content, utilizing Retrieval-Augmented Generation (RAG) techniques.
It is one of several "Search" tools in the package that leverage RAG for different sources.
The YoutubeVideoSearchTool allows for flexibility in searches; users can search across any Youtube video content without specifying a video URL,
or they can target their search to a specific Youtube video by providing its URL.
## Installation
To utilize the `YoutubeVideoSearchTool`, you must first install the `crewai_tools` package.
This package contains the `YoutubeVideoSearchTool` among other utilities designed to enhance your data analysis and processing tasks.
Install the package by executing the following command in your terminal:
```shell
pip install 'crewai[tools]'
```
## Example
To integrate the YoutubeVideoSearchTool into your Python projects, follow the example below.
This demonstrates how to use the tool both for general Youtube content searches and for targeted searches within a specific video's content.
```python Code
from crewai_tools import YoutubeVideoSearchTool
# General search across Youtube content without specifying a video URL,
# so the agent can search within any Youtube video content
# it learns about during its operation
tool = YoutubeVideoSearchTool()
# Targeted search within a specific Youtube video's content
tool = YoutubeVideoSearchTool(
youtube_video_url='https://youtube.com/watch?v=example'
)
```
## Arguments
The YoutubeVideoSearchTool accepts the following initialization arguments:

- `youtube_video_url`: An optional argument at initialization, but required if targeting a specific Youtube video. It specifies the Youtube video URL path you want to search within.

## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python Code
tool = YoutubeVideoSearchTool(
config=dict(
llm=dict(
            provider="ollama", # or google, openai, anthropic, llama2, ...
            config=dict(
                model="llama2",
                # temperature=0.5,
                # top_p=1,
                # stream=true,
            ),
        ),
        embedder=dict(
            provider="google", # or openai, ollama, ...
            config=dict(
                model="models/embedding-001",
                task_type="retrieval_document",
                # title="Embeddings",
            ),
),
)
)
```