Mirror of https://github.com/crewAIInc/crewAI.git (synced 2026-01-10 00:28:31 +00:00)
WIP: docs updates (#3296)
docs/en/tools/search-research/arxivpapertool.mdx (Normal file, 112 lines)
@@ -0,0 +1,112 @@
---
title: Arxiv Paper Tool
description: The `ArxivPaperTool` searches arXiv for papers matching a query and optionally downloads PDFs.
icon: box-archive
---

# `ArxivPaperTool`

## Description

The `ArxivPaperTool` queries the arXiv API for academic papers and returns compact, readable results. It can also optionally download PDFs to disk.

## Installation

This tool requires no installation beyond `crewai-tools`.

```shell
uv add crewai-tools
```

No API key is required; the tool uses the public arXiv Atom API.

## Steps to Get Started

1. Initialize the tool.
2. Provide a `search_query` (e.g., "transformer neural network").
3. Optionally set `max_results` (1–100) and enable PDF downloads in the constructor.

## Example

```python Code
from crewai import Agent, Task, Crew
from crewai_tools import ArxivPaperTool

tool = ArxivPaperTool(
    download_pdfs=False,
    save_dir="./arxiv_pdfs",
    use_title_as_filename=True,
)

agent = Agent(
    role="Researcher",
    goal="Find relevant arXiv papers",
    backstory="Expert at literature discovery",
    tools=[tool],
    verbose=True,
)

task = Task(
    description="Search arXiv for 'transformer neural network' and list the top 5 results.",
    expected_output="A concise list of 5 relevant papers with titles, links, and summaries.",
    agent=agent,
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
```

### Direct Usage (without an Agent)

```python Code
from crewai_tools import ArxivPaperTool

tool = ArxivPaperTool(
    download_pdfs=True,
    save_dir="./arxiv_pdfs",
)
print(tool.run(search_query="mixture of experts", max_results=3))
```

## Parameters

### Initialization Parameters

- `download_pdfs` (bool, default `False`): Whether to download PDFs.
- `save_dir` (str, default `./arxiv_pdfs`): Directory to save PDFs.
- `use_title_as_filename` (bool, default `False`): Use paper titles for filenames.

### Run Parameters

- `search_query` (str, required): The arXiv search query.
- `max_results` (int, default `5`, range 1–100): Number of results.

## Output Format

The tool returns a human-readable list of papers with:

- Title
- Link (abs page)
- Snippet/summary (truncated)

When `download_pdfs=True`, PDFs are saved to disk and the summary mentions the saved files.
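
As an illustration only (the exact layout below is an assumption, not a guaranteed format), a single result entry might look like:

```text
1. Attention Is All You Need
   Link: https://arxiv.org/abs/1706.03762
   Summary: We propose a new simple network architecture, the Transformer, based solely on attention mechanisms...
```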

## Usage Notes

- The tool returns formatted text with key metadata and links.
- When `download_pdfs=True`, PDFs are stored in `save_dir`.

## Troubleshooting

- If you receive a network timeout, retry or reduce `max_results`.
- Invalid XML errors indicate an arXiv response parse issue; try a simpler query.
- File system errors (e.g., permission denied) may occur when saving PDFs; ensure `save_dir` is writable.

## Related Links

- arXiv API docs: https://info.arxiv.org/help/api/index.html

## Error Handling

- Network issues, invalid XML, and OS errors are handled with informative messages.

@@ -23,7 +23,7 @@ pip install 'crewai[tools]'
To effectively use the `BraveSearchTool`, follow these steps:

1. **Package Installation**: Confirm that the `crewai[tools]` package is installed in your Python environment.
-2. **API Key Acquisition**: Acquire a Brave Search API key by registering at [Brave Search API](https://api.search.brave.com/app/keys).
+2. **API Key Acquisition**: Acquire a Brave Search API key at https://api.search.brave.com/app/keys (sign in to generate a key).
3. **Environment Configuration**: Store your obtained API key in an environment variable named `BRAVE_API_KEY` to facilitate its use by the tool (see the sketch below).
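
For instance, the key can be exported in your shell before running your crew (the value is a placeholder):

```shell
# Replace with your actual Brave Search API key
export BRAVE_API_KEY="<your-brave-api-key>"
```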

## Example

docs/en/tools/search-research/databricks-query-tool.mdx (Normal file, 80 lines)
@@ -0,0 +1,80 @@
---
title: Databricks SQL Query Tool
description: The `DatabricksQueryTool` executes SQL queries against Databricks workspace tables.
icon: trowel-bricks
---

# `DatabricksQueryTool`

## Description

Run SQL against Databricks workspace tables using either a CLI profile or direct host/token authentication.

## Installation

```shell
uv add crewai-tools[databricks-sdk]
```

## Environment Variables

- `DATABRICKS_CONFIG_PROFILE`, or `DATABRICKS_HOST` + `DATABRICKS_TOKEN`

Create a personal access token and find host details in the Databricks workspace under User Settings → Developer.
Docs: https://docs.databricks.com/en/dev-tools/auth/pat.html
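
For example, direct host/token authentication can be configured like this (the values are placeholders):

```shell
# Direct host/token authentication (placeholders)
export DATABRICKS_HOST="https://<your-workspace>.cloud.databricks.com"
export DATABRICKS_TOKEN="<your-personal-access-token>"

# Or point at a profile defined in ~/.databrickscfg
export DATABRICKS_CONFIG_PROFILE="my-profile"
```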

## Example

```python Code
from crewai import Agent, Task, Crew
from crewai_tools import DatabricksQueryTool

tool = DatabricksQueryTool(
    default_catalog="main",
    default_schema="default",
)

agent = Agent(
    role="Data Analyst",
    goal="Query Databricks",
    backstory="Experienced analyst working with lakehouse tables",
    tools=[tool],
    verbose=True,
)

task = Task(
    description="Run `SELECT * FROM my_table LIMIT 10` and report the results.",
    expected_output="A readable table of up to 10 rows from my_table.",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,
)
result = crew.kickoff()

print(result)
```

## Parameters

- `query` (required): SQL query to execute
- `catalog` (optional): Override the default catalog
- `db_schema` (optional): Override the default schema
- `warehouse_id` (optional): Override the default SQL warehouse
- `row_limit` (optional): Maximum rows to return (default: 1000)

## Defaults on Initialization

- `default_catalog`
- `default_schema`
- `default_warehouse_id`
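
The tool can also be invoked directly. A minimal sketch, assuming your Databricks credentials are already set in the environment (`my_table` is a placeholder):

```python Code
from crewai_tools import DatabricksQueryTool

tool = DatabricksQueryTool(default_catalog="main", default_schema="default")

# Per-call override: cap the result size for this query.
print(tool.run(query="SELECT * FROM my_table LIMIT 10", row_limit=100))
```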

## Error Handling & Tips

- Authentication errors: verify `DATABRICKS_HOST` begins with `https://` and the token is valid.
- Permissions: ensure your SQL warehouse and schema are accessible with your token.
- Limits: avoid long-running queries in agent loops; add filters and limits.

@@ -24,6 +24,8 @@ pip install 'crewai[tools]'
This command installs the necessary package to run the GithubSearchTool along with any other tools included in the crewai_tools package.

Get a GitHub Personal Access Token at https://github.com/settings/tokens (Developer settings → Fine‑grained tokens or classic tokens).

## Example

Here’s how you can use the GithubSearchTool to perform semantic searches within a GitHub repository:

@@ -52,6 +52,18 @@ These tools enable your agents to search the web, research topics, and find info
<Card title="Tavily Extractor Tool" icon="file-text" href="/en/tools/search-research/tavilyextractortool">
  Extract structured content from web pages using the Tavily API.
</Card>

<Card title="Arxiv Paper Tool" icon="box-archive" href="/en/tools/search-research/arxivpapertool">
  Search arXiv and optionally download PDFs.
</Card>

<Card title="SerpApi Google Search" icon="search" href="/en/tools/search-research/serpapi-googlesearchtool">
  Google search via SerpApi with structured results.
</Card>

<Card title="SerpApi Google Shopping" icon="cart-shopping" href="/en/tools/search-research/serpapi-googleshoppingtool">
  Google Shopping queries via SerpApi.
</Card>
</CardGroup>

## **Common Use Cases**

docs/en/tools/search-research/serpapi-googlesearchtool.mdx (Normal file, 65 lines)
@@ -0,0 +1,65 @@
---
title: SerpApi Google Search Tool
description: The `SerpApiGoogleSearchTool` performs Google searches using the SerpApi service.
icon: google
---

# `SerpApiGoogleSearchTool`

## Description

Use the `SerpApiGoogleSearchTool` to run Google searches with SerpApi and retrieve structured results. Requires a SerpApi API key.

## Installation

```shell
uv add crewai-tools[serpapi]
```

## Environment Variables

- `SERPAPI_API_KEY` (required): API key for SerpApi. Create one at https://serpapi.com/ (free tier available).

## Example

```python Code
from crewai import Agent, Task, Crew
from crewai_tools import SerpApiGoogleSearchTool

tool = SerpApiGoogleSearchTool()

agent = Agent(
    role="Researcher",
    goal="Answer questions using Google search",
    backstory="Search specialist",
    tools=[tool],
    verbose=True,
)

task = Task(
    description="Search for the latest CrewAI releases",
    expected_output="A concise list of relevant results with titles and links",
    agent=agent,
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
```

## Parameters

### Run Parameters

- `search_query` (str, required): The Google query.
- `location` (str, optional): Geographic location parameter.
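
A minimal direct-call sketch (assuming `SERPAPI_API_KEY` is set; the query and location values are illustrative):

```python Code
from crewai_tools import SerpApiGoogleSearchTool

# Assumes SERPAPI_API_KEY is set in the environment.
tool = SerpApiGoogleSearchTool()
print(tool.run(search_query="CrewAI latest release", location="Austin, Texas"))
```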

## Notes

- This tool wraps SerpApi and returns structured search results.
- See also Google Shopping via SerpApi: `/en/tools/search-research/serpapi-googleshoppingtool`

docs/en/tools/search-research/serpapi-googleshoppingtool.mdx (Normal file, 61 lines)
@@ -0,0 +1,61 @@
---
title: SerpApi Google Shopping Tool
description: The `SerpApiGoogleShoppingTool` searches Google Shopping results using SerpApi.
icon: cart-shopping
---

# `SerpApiGoogleShoppingTool`

## Description

Leverage the `SerpApiGoogleShoppingTool` to query Google Shopping via SerpApi and retrieve product-oriented results. Requires a SerpApi API key.

## Installation

```shell
uv add crewai-tools[serpapi]
```

## Environment Variables

- `SERPAPI_API_KEY` (required): API key for SerpApi. Create one at https://serpapi.com/ (free tier available).

## Example

```python Code
from crewai import Agent, Task, Crew
from crewai_tools import SerpApiGoogleShoppingTool

tool = SerpApiGoogleShoppingTool()

agent = Agent(
    role="Shopping Researcher",
    goal="Find relevant products",
    backstory="Expert in product search",
    tools=[tool],
    verbose=True,
)

task = Task(
    description="Search Google Shopping for 'wireless noise-canceling headphones'",
    expected_output="Top relevant products with titles and links",
    agent=agent,
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
```

## Parameters

### Run Parameters

- `search_query` (str, required): Product search query.
- `location` (str, optional): Geographic location parameter.
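
A minimal direct-call sketch (assuming `SERPAPI_API_KEY` is set; the query and location values are illustrative):

```python Code
from crewai_tools import SerpApiGoogleShoppingTool

# Assumes SERPAPI_API_KEY is set in the environment.
tool = SerpApiGoogleShoppingTool()
print(tool.run(search_query="wireless noise-canceling headphones", location="United States"))
```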

## Notes

- See also Google Web Search via SerpApi: `/en/tools/search-research/serpapi-googlesearchtool`

@@ -16,7 +16,7 @@ to fetch and display the most relevant search results based on the query provide
To effectively use the `SerperDevTool`, follow these steps:

1. **Package Installation**: Confirm that the `crewai[tools]` package is installed in your Python environment.
-2. **API Key Acquisition**: Acquire a `serper.dev` API key by registering for a free account at `serper.dev`.
+2. **API Key Acquisition**: Acquire a `serper.dev` API key at https://serper.dev/ (free tier available).
3. **Environment Configuration**: Store your obtained API key in an environment variable named `SERPER_API_KEY` to facilitate its use by the tool (see the sketch below).
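
For instance (the value is a placeholder):

```shell
# Replace with your actual serper.dev API key
export SERPER_API_KEY="<your-serper-api-key>"
```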

To incorporate this tool into your project, follow the installation instructions below:

@@ -1,7 +1,7 @@
---
title: "Tavily Extractor Tool"
description: "Extract structured content from web pages using the Tavily API"
-icon: "file-text"
+icon: square-poll-horizontal
---

The `TavilyExtractorTool` allows CrewAI agents to extract structured content from web pages using the Tavily API. It can process single URLs or lists of URLs and provides options for controlling the extraction depth and including images.

@@ -22,6 +22,8 @@ Ensure your Tavily API key is set as an environment variable:
export TAVILY_API_KEY='your_tavily_api_key'
```

Get an API key at https://app.tavily.com/ (sign up, then create a key).

## Example Usage

Here's how to initialize and use the `TavilyExtractorTool` within a CrewAI agent:
