Mirror of https://github.com/crewAIInc/crewAI.git, synced 2026-01-11 00:58:30 +00:00
Add pt-BR docs translation (#3039)

* docs: add pt-BR translations, powered by a CrewAI Flow (https://github.com/danielfsbarreto/docs_translator)
* Update mcp/overview.mdx Brazilian docs: its en-US counterpart was updated after my first pass, so it now includes the new section about @CrewBase
docs/en/tools/web-scraping/browserbaseloadtool.mdx (new file, 50 lines)
@@ -0,0 +1,50 @@
---
title: Browserbase Web Loader
description: Browserbase is a developer platform to reliably run, manage, and monitor headless browsers.
icon: browser
---

# `BrowserbaseLoadTool`

## Description

[Browserbase](https://browserbase.com) is a developer platform to reliably run, manage, and monitor headless browsers.

Power your AI data retrievals with:

- [Serverless Infrastructure](https://docs.browserbase.com/under-the-hood) providing reliable browsers to extract data from complex UIs
- [Stealth Mode](https://docs.browserbase.com/features/stealth-mode) with included fingerprinting tactics and automatic captcha solving
- [Session Debugger](https://docs.browserbase.com/features/sessions) to inspect your Browser Session with a network timeline and logs
- [Live Debug](https://docs.browserbase.com/guides/session-debug-connection/browser-remote-control) to quickly debug your automation

## Installation

- Get an API key and Project ID from [browserbase.com](https://browserbase.com) and set them in environment variables (`BROWSERBASE_API_KEY`, `BROWSERBASE_PROJECT_ID`).
- Install the [Browserbase SDK](http://github.com/browserbase/python-sdk) along with the `crewai[tools]` package:

```shell
pip install browserbase 'crewai[tools]'
```

## Example

Utilize the `BrowserbaseLoadTool` as follows to allow your agent to load websites:

```python Code
from crewai_tools import BrowserbaseLoadTool

# Initialize the tool with the Browserbase API key and Project ID
tool = BrowserbaseLoadTool()
```

## Arguments

The following parameters can be used to customize the `BrowserbaseLoadTool`'s behavior:

| Argument | Type | Description |
|:---------------|:---------|:-------------------------------------------------------------------------------------|
| **api_key** | `string` | _Optional_. Browserbase API key. Default is the `BROWSERBASE_API_KEY` env variable. |
| **project_id** | `string` | _Optional_. Browserbase Project ID. Default is the `BROWSERBASE_PROJECT_ID` env variable. |
| **text_content** | `bool` | _Optional_. Retrieve only text content. Default is `False`. |
| **session_id** | `string` | _Optional_. Provide an existing Session ID. |
| **proxy** | `bool` | _Optional_. Enable/Disable Proxies. Default is `False`. |
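If you prefer to configure the tool explicitly instead of relying on environment variables, the arguments above can be passed at initialization. A minimal sketch, assuming the arguments in the table are accepted as constructor keyword arguments (the key, project ID, and agent values are placeholders):

```python Code
from crewai import Agent
from crewai_tools import BrowserbaseLoadTool

# Configure the tool explicitly; the values shown are placeholders
tool = BrowserbaseLoadTool(
    api_key="your_browserbase_api_key",         # otherwise read from BROWSERBASE_API_KEY
    project_id="your_browserbase_project_id",   # otherwise read from BROWSERBASE_PROJECT_ID
    text_content=True,   # return only the text content of the page
    proxy=False,         # leave Browserbase proxies disabled
)

# Attach the tool to an agent so it can load websites during task execution
researcher = Agent(
    role="Web Researcher",
    goal="Load and summarize web pages",
    backstory="An analyst who gathers information from live websites.",
    tools=[tool],
)
```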
docs/en/tools/web-scraping/firecrawlcrawlwebsitetool.mdx (new file, 47 lines)
@@ -0,0 +1,47 @@
---
title: Firecrawl Crawl Website
description: The `FirecrawlCrawlWebsiteTool` is designed to crawl and convert websites into clean markdown or structured data.
icon: fire-flame
---

# `FirecrawlCrawlWebsiteTool`

## Description

[Firecrawl](https://firecrawl.dev) is a platform for crawling and converting any website into clean markdown or structured data.

## Installation

- Get an API key from [firecrawl.dev](https://firecrawl.dev) and set it in environment variables (`FIRECRAWL_API_KEY`).
- Install the [Firecrawl SDK](https://github.com/mendableai/firecrawl) along with the `crewai[tools]` package:

```shell
pip install firecrawl-py 'crewai[tools]'
```

## Example

Utilize the `FirecrawlCrawlWebsiteTool` as follows to allow your agent to crawl websites:

```python Code
from crewai_tools import FirecrawlCrawlWebsiteTool

tool = FirecrawlCrawlWebsiteTool(url='firecrawl.dev')
```

## Arguments

- `api_key`: Optional. Specifies the Firecrawl API key. Default is the `FIRECRAWL_API_KEY` environment variable.
- `url`: The base URL to start crawling from.
- `page_options`: Optional.
  - `onlyMainContent`: Optional. Only return the main content of the page, excluding headers, navs, footers, etc.
  - `includeHtml`: Optional. Include the raw HTML content of the page. Will output an `html` key in the response.
- `crawler_options`: Optional. Options for controlling the crawling behavior.
  - `includes`: Optional. URL patterns to include in the crawl.
  - `exclude`: Optional. URL patterns to exclude from the crawl.
  - `generateImgAltText`: Optional. Generate alt text for images using LLMs (requires a paid plan).
  - `returnOnlyUrls`: Optional. If true, returns only the URLs as a list in the crawl status. Note: the response will be a list of URLs inside the data, not a list of documents.
  - `maxDepth`: Optional. Maximum depth to crawl. Depth 1 is the base URL, depth 2 includes the base URL and its direct children, and so on.
  - `mode`: Optional. The crawling mode to use. Fast mode crawls 4x faster on websites without a sitemap, but may not be as accurate and shouldn't be used on heavily JavaScript-rendered websites.
  - `limit`: Optional. Maximum number of pages to crawl.
  - `timeout`: Optional. Timeout in milliseconds for the crawling operation.
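The nested options above can be supplied when the tool is created. The following is a minimal sketch, assuming `page_options` and `crawler_options` are accepted as keyword arguments at initialization; the exact signature may differ between `crewai_tools` and `firecrawl-py` versions, so check your installed packages:

```python Code
from crewai_tools import FirecrawlCrawlWebsiteTool

# Assumed keyword arguments; option names follow the list above
tool = FirecrawlCrawlWebsiteTool(
    url="https://firecrawl.dev",
    page_options={
        "onlyMainContent": True,   # skip headers, navs, and footers
    },
    crawler_options={
        "maxDepth": 2,             # the base URL plus its direct children
        "limit": 20,               # stop after 20 pages
        "timeout": 30000,          # 30 seconds, in milliseconds
    },
)
```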
docs/en/tools/web-scraping/firecrawlscrapewebsitetool.mdx (new file, 43 lines)
@@ -0,0 +1,43 @@
---
title: Firecrawl Scrape Website
description: The `FirecrawlScrapeWebsiteTool` is designed to scrape websites and convert them into clean markdown or structured data.
icon: fire-flame
---

# `FirecrawlScrapeWebsiteTool`

## Description

[Firecrawl](https://firecrawl.dev) is a platform for crawling and converting any website into clean markdown or structured data.

## Installation

- Get an API key from [firecrawl.dev](https://firecrawl.dev) and set it in environment variables (`FIRECRAWL_API_KEY`).
- Install the [Firecrawl SDK](https://github.com/mendableai/firecrawl) along with the `crewai[tools]` package:

```shell
pip install firecrawl-py 'crewai[tools]'
```

## Example

Utilize the `FirecrawlScrapeWebsiteTool` as follows to allow your agent to load websites:

```python Code
from crewai_tools import FirecrawlScrapeWebsiteTool

tool = FirecrawlScrapeWebsiteTool(url='firecrawl.dev')
```

## Arguments

- `api_key`: Optional. Specifies the Firecrawl API key. Default is the `FIRECRAWL_API_KEY` environment variable.
- `url`: The URL to scrape.
- `page_options`: Optional.
  - `onlyMainContent`: Optional. Only return the main content of the page, excluding headers, navs, footers, etc.
  - `includeHtml`: Optional. Include the raw HTML content of the page. Will output an `html` key in the response.
- `extractor_options`: Optional. Options for LLM-based extraction of structured information from the page content.
  - `mode`: The extraction mode to use; currently supports 'llm-extraction'.
  - `extractionPrompt`: Optional. A prompt describing what information to extract from the page.
  - `extractionSchema`: Optional. The schema for the data to be extracted.
- `timeout`: Optional. Timeout in milliseconds for the request.
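To illustrate the LLM extraction options described above, here is a minimal sketch. It assumes `page_options` and `extractor_options` are accepted as keyword arguments at initialization and that the schema follows a JSON-Schema-like shape; adjust it to the signature of your installed `crewai_tools` and `firecrawl-py` versions:

```python Code
from crewai_tools import FirecrawlScrapeWebsiteTool

# Assumed keyword arguments; option names follow the list above
tool = FirecrawlScrapeWebsiteTool(
    url="https://firecrawl.dev",
    page_options={"onlyMainContent": True},
    extractor_options={
        "mode": "llm-extraction",
        "extractionPrompt": "Extract the product name and a one-sentence summary.",
        "extractionSchema": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "summary": {"type": "string"},
            },
        },
    },
    timeout=30000,  # milliseconds
)
```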
docs/en/tools/web-scraping/firecrawlsearchtool.mdx (new file, 41 lines)
@@ -0,0 +1,41 @@
---
title: Firecrawl Search
description: The `FirecrawlSearchTool` is designed to search websites and convert them into clean markdown or structured data.
icon: fire-flame
---

# `FirecrawlSearchTool`

## Description

[Firecrawl](https://firecrawl.dev) is a platform for crawling and converting any website into clean markdown or structured data.

## Installation

- Get an API key from [firecrawl.dev](https://firecrawl.dev) and set it in environment variables (`FIRECRAWL_API_KEY`).
- Install the [Firecrawl SDK](https://github.com/mendableai/firecrawl) along with the `crewai[tools]` package:

```shell
pip install firecrawl-py 'crewai[tools]'
```

## Example

Utilize the `FirecrawlSearchTool` as follows to allow your agent to search the web:

```python Code
from crewai_tools import FirecrawlSearchTool

tool = FirecrawlSearchTool(query='what is firecrawl?')
```

## Arguments

- `api_key`: Optional. Specifies the Firecrawl API key. Default is the `FIRECRAWL_API_KEY` environment variable.
- `query`: The search query string to be used for searching.
- `page_options`: Optional. Options for result formatting.
  - `onlyMainContent`: Optional. Only return the main content of the page, excluding headers, navs, footers, etc.
  - `includeHtml`: Optional. Include the raw HTML content of the page. Will output an `html` key in the response.
  - `fetchPageContent`: Optional. Fetch the full content of the page.
- `search_options`: Optional. Options for controlling the search behavior.
  - `limit`: Optional. Maximum number of pages to crawl.
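As a sketch of the options above, the following assumes `page_options` and `search_options` are accepted as keyword arguments at initialization; the exact signature may vary between `crewai_tools` and `firecrawl-py` versions:

```python Code
from crewai_tools import FirecrawlSearchTool

# Assumed keyword arguments; option names follow the list above
tool = FirecrawlSearchTool(
    query="what is firecrawl?",
    page_options={
        "fetchPageContent": True,   # also fetch the full content of each result
        "onlyMainContent": True,
    },
    search_options={"limit": 5},    # cap the number of pages processed
)
```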
docs/en/tools/web-scraping/hyperbrowserloadtool.mdx (new file, 86 lines)
@@ -0,0 +1,86 @@
---
title: Hyperbrowser Load Tool
description: The `HyperbrowserLoadTool` enables web scraping and crawling using Hyperbrowser.
icon: globe
---

# `HyperbrowserLoadTool`

## Description

The `HyperbrowserLoadTool` enables web scraping and crawling using [Hyperbrowser](https://hyperbrowser.ai), a platform for running and scaling headless browsers. This tool allows you to scrape a single page or crawl an entire site, returning the content in properly formatted markdown or HTML.

Key Features:
- Instant Scalability - Spin up hundreds of browser sessions in seconds without infrastructure headaches
- Simple Integration - Works seamlessly with popular tools like Puppeteer and Playwright
- Powerful APIs - Easy-to-use APIs for scraping/crawling any site
- Bypass Anti-Bot Measures - Built-in stealth mode, ad blocking, automatic CAPTCHA solving, and rotating proxies

## Installation

To use this tool, you need to install the Hyperbrowser SDK:

```shell
uv add hyperbrowser
```

## Steps to Get Started

To effectively use the `HyperbrowserLoadTool`, follow these steps:

1. **Sign Up**: Head to [Hyperbrowser](https://app.hyperbrowser.ai/) to sign up and generate an API key.
2. **API Key**: Set the `HYPERBROWSER_API_KEY` environment variable or pass it directly to the tool constructor.
3. **Install SDK**: Install the Hyperbrowser SDK using the command above.

## Example

The following example demonstrates how to initialize the tool and use it to scrape a website:

```python Code
from crewai_tools import HyperbrowserLoadTool
from crewai import Agent

# Initialize the tool with your API key
tool = HyperbrowserLoadTool(api_key="your_api_key")  # Or use the HYPERBROWSER_API_KEY environment variable

# Define an agent that uses the tool
# (the @agent decorator and self.agents_config assume this method lives inside a @CrewBase crew class)
@agent
def web_researcher(self) -> Agent:
    '''
    This agent uses the HyperbrowserLoadTool to scrape websites
    and extract information.
    '''
    return Agent(
        config=self.agents_config["web_researcher"],
        tools=[tool]
    )
```

## Parameters

The `HyperbrowserLoadTool` accepts the following parameters:

### Constructor Parameters
- **api_key**: Optional. Your Hyperbrowser API key. If not provided, it will be read from the `HYPERBROWSER_API_KEY` environment variable.

### Run Parameters
- **url**: Required. The website URL to scrape or crawl.
- **operation**: Optional. The operation to perform on the website. Either `'scrape'` or `'crawl'`. Default is `'scrape'`.
- **params**: Optional. Additional parameters for the scrape or crawl operation.
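To show how the run parameters fit together, here is a minimal sketch of calling the tool directly. It assumes the tool's `run` method forwards these keyword arguments; in normal use an agent supplies them for you:

```python Code
from crewai_tools import HyperbrowserLoadTool

tool = HyperbrowserLoadTool()  # reads HYPERBROWSER_API_KEY from the environment

# Scrape a single page (the default operation is 'scrape')
page_markdown = tool.run(url="https://example.com")

# Crawl a site instead; `params` is passed through to the crawl operation
# (see the Supported Parameters links below for the accepted keys)
site_content = tool.run(url="https://example.com", operation="crawl")
```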
## Supported Parameters

For detailed information on all supported parameters, visit:
- [Scrape Parameters](https://docs.hyperbrowser.ai/reference/sdks/python/scrape#start-scrape-job-and-wait)
- [Crawl Parameters](https://docs.hyperbrowser.ai/reference/sdks/python/crawl#start-crawl-job-and-wait)

## Return Format

The tool returns content in the following format:

- For **scrape** operations: The content of the page in markdown or HTML format.
- For **crawl** operations: The content of each page separated by dividers, including the URL of each page.

## Conclusion

The `HyperbrowserLoadTool` provides a powerful way to scrape and crawl websites, handling complex scenarios like anti-bot measures, CAPTCHAs, and more. By leveraging Hyperbrowser's platform, this tool enables agents to access and extract web content efficiently.
docs/en/tools/web-scraping/overview.mdx (new file, 107 lines)
@@ -0,0 +1,107 @@
---
title: "Overview"
description: "Extract data from websites and automate browser interactions with powerful scraping tools"
icon: "face-smile"
---

These tools enable your agents to interact with the web, extract data from websites, and automate browser-based tasks. From simple web scraping to complex browser automation, these tools cover all your web interaction needs.

## **Available Tools**

<CardGroup cols={2}>
  <Card title="Scrape Website Tool" icon="globe" href="/en/tools/web-scraping/scrapewebsitetool">
    General-purpose web scraping tool for extracting content from any website.
  </Card>

  <Card title="Scrape Element Tool" icon="crosshairs" href="/en/tools/web-scraping/scrapeelementfromwebsitetool">
    Target specific elements on web pages with precision scraping capabilities.
  </Card>

  <Card title="Firecrawl Crawl Tool" icon="spider" href="/en/tools/web-scraping/firecrawlcrawlwebsitetool">
    Crawl entire websites systematically with Firecrawl's powerful engine.
  </Card>

  <Card title="Firecrawl Scrape Tool" icon="fire" href="/en/tools/web-scraping/firecrawlscrapewebsitetool">
    High-performance web scraping with Firecrawl's advanced capabilities.
  </Card>

  <Card title="Firecrawl Search Tool" icon="magnifying-glass" href="/en/tools/web-scraping/firecrawlsearchtool">
    Search and extract specific content using Firecrawl's search features.
  </Card>

  <Card title="Selenium Scraping Tool" icon="robot" href="/en/tools/web-scraping/seleniumscrapingtool">
    Browser automation and scraping with Selenium WebDriver capabilities.
  </Card>

  <Card title="ScrapFly Tool" icon="plane" href="/en/tools/web-scraping/scrapflyscrapetool">
    Professional web scraping with ScrapFly's premium scraping service.
  </Card>

  <Card title="ScrapGraph Tool" icon="network-wired" href="/en/tools/web-scraping/scrapegraphscrapetool">
    Graph-based web scraping for complex data relationships.
  </Card>

  <Card title="Spider Tool" icon="spider" href="/en/tools/web-scraping/spidertool">
    Comprehensive web crawling and data extraction capabilities.
  </Card>

  <Card title="BrowserBase Tool" icon="browser" href="/en/tools/web-scraping/browserbaseloadtool">
    Cloud-based browser automation with BrowserBase infrastructure.
  </Card>

  <Card title="HyperBrowser Tool" icon="window-maximize" href="/en/tools/web-scraping/hyperbrowserloadtool">
    Fast browser interactions with HyperBrowser's optimized engine.
  </Card>

  <Card title="Stagehand Tool" icon="hand" href="/en/tools/web-scraping/stagehandtool">
    Intelligent browser automation with natural language commands.
  </Card>

  <Card title="Oxylabs Scraper Tool" icon="globe" href="/en/tools/web-scraping/oxylabsscraperstool">
    Access web data at scale with Oxylabs.
  </Card>
</CardGroup>

## **Common Use Cases**

- **Data Extraction**: Scrape product information, prices, and reviews
- **Content Monitoring**: Track changes on websites and news sources
- **Lead Generation**: Extract contact information and business data
- **Market Research**: Gather competitive intelligence and market data
- **Testing & QA**: Automate browser testing and validation workflows
- **Social Media**: Extract posts, comments, and social media analytics

## **Quick Start Example**

```python
from crewai import Agent
from crewai_tools import ScrapeWebsiteTool, FirecrawlScrapeWebsiteTool, SeleniumScrapingTool

# Create scraping tools
simple_scraper = ScrapeWebsiteTool()
advanced_scraper = FirecrawlScrapeWebsiteTool()
browser_automation = SeleniumScrapingTool()

# Add to your agent
agent = Agent(
    role="Web Research Specialist",
    tools=[simple_scraper, advanced_scraper, browser_automation],
    goal="Extract and analyze web data efficiently",
    backstory="A web research specialist who gathers accurate data from online sources."
)
```
## **Scraping Best Practices**

- **Respect robots.txt**: Always check and follow website scraping policies
- **Rate Limiting**: Implement delays between requests to avoid overwhelming servers (a minimal sketch follows this list)
- **User Agents**: Use appropriate user agent strings to identify your bot
- **Legal Compliance**: Ensure your scraping activities comply with terms of service
- **Error Handling**: Implement robust error handling for network issues and blocked requests
- **Data Quality**: Validate and clean extracted data before processing
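A minimal sketch of the rate-limiting and error-handling advice above, using direct tool calls outside of an agent (the URLs and delay are placeholders, tune them to the target site's policies):

```python
import time

from crewai_tools import ScrapeWebsiteTool

urls = ["https://example.com/page-1", "https://example.com/page-2"]

for url in urls:
    try:
        text = ScrapeWebsiteTool(website_url=url).run()
        print(f"Scraped {len(text)} characters from {url}")
    except Exception as exc:
        # Robust error handling: log and continue instead of aborting the whole run
        print(f"Failed to scrape {url}: {exc}")
    time.sleep(2)  # simple delay between requests to avoid overwhelming the server
```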
## **Tool Selection Guide**

- **Simple Tasks**: Use `ScrapeWebsiteTool` for basic content extraction
- **JavaScript-Heavy Sites**: Use `SeleniumScrapingTool` for dynamic content
- **Scale & Performance**: Use `FirecrawlScrapeWebsiteTool` for high-volume scraping
- **Cloud Infrastructure**: Use `BrowserbaseLoadTool` for scalable browser automation
- **Complex Workflows**: Use `StagehandTool` for intelligent browser interactions
docs/en/tools/web-scraping/oxylabsscraperstool.mdx (new file, 236 lines)
@@ -0,0 +1,236 @@
---
title: Oxylabs Scrapers
description: >
  Oxylabs Scrapers allow you to easily access information from the respective sources. Please see the list of available sources below:
  - `Amazon Product`
  - `Amazon Search`
  - `Google Search`
  - `Universal`
icon: globe
---

## Installation

Get your credentials by creating an Oxylabs account [here](https://oxylabs.io).

```shell
pip install 'crewai[tools]' oxylabs
```

Check the [Oxylabs Documentation](https://developers.oxylabs.io/scraping-solutions/web-scraper-api/targets) for more information about API parameters.
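The tools below read your Oxylabs credentials from the `OXYLABS_USERNAME` and `OXYLABS_PASSWORD` environment variables. If you prefer to set them from Python, for example in a local script, a minimal sketch (the values are placeholders):

```python
import os

# Placeholders; use the credentials from your Oxylabs account
os.environ["OXYLABS_USERNAME"] = "your_oxylabs_username"
os.environ["OXYLABS_PASSWORD"] = "your_oxylabs_password"
```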
# `OxylabsAmazonProductScraperTool`

### Example

```python
from crewai_tools import OxylabsAmazonProductScraperTool

# make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set
tool = OxylabsAmazonProductScraperTool()

result = tool.run(query="AAAAABBBBCC")

print(result)
```

### Parameters

- `query` - 10-character ASIN code.
- `domain` - domain localization for Amazon.
- `geo_location` - the _Deliver to_ location.
- `user_agent_type` - device type and browser.
- `render` - enables JavaScript rendering when set to `html`.
- `callback_url` - URL to your callback endpoint.
- `context` - additional advanced settings and controls for specialized requirements.
- `parse` - returns parsed data when set to `true`.
- `parsing_instructions` - define your own parsing and data transformation logic that will be executed on an HTML scraping result.

### Advanced example

```python
from crewai_tools import OxylabsAmazonProductScraperTool

# make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set
tool = OxylabsAmazonProductScraperTool(
    config={
        "domain": "com",
        "parse": True,
        "context": [
            {
                "key": "autoselect_variant",
                "value": True
            }
        ]
    }
)

result = tool.run(query="AAAAABBBBCC")

print(result)
```

# `OxylabsAmazonSearchScraperTool`

### Example

```python
from crewai_tools import OxylabsAmazonSearchScraperTool

# make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set
tool = OxylabsAmazonSearchScraperTool()

result = tool.run(query="headsets")

print(result)
```

### Parameters

- `query` - Amazon search term.
- `domain` - domain localization for Amazon.
- `start_page` - starting page number.
- `pages` - number of pages to retrieve.
- `geo_location` - the _Deliver to_ location.
- `user_agent_type` - device type and browser.
- `render` - enables JavaScript rendering when set to `html`.
- `callback_url` - URL to your callback endpoint.
- `context` - additional advanced settings and controls for specialized requirements.
- `parse` - returns parsed data when set to `true`.
- `parsing_instructions` - define your own parsing and data transformation logic that will be executed on an HTML scraping result.

### Advanced example

```python
from crewai_tools import OxylabsAmazonSearchScraperTool

# make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set
tool = OxylabsAmazonSearchScraperTool(
    config={
        "domain": "nl",
        "start_page": 2,
        "pages": 2,
        "parse": True,
        "context": [
            {"key": "category_id", "value": 16391693031}
        ],
    }
)

result = tool.run(query="nirvana tshirt")

print(result)
```

# `OxylabsGoogleSearchScraperTool`

### Example

```python
from crewai_tools import OxylabsGoogleSearchScraperTool

# make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set
tool = OxylabsGoogleSearchScraperTool()

result = tool.run(query="iPhone 16")

print(result)
```

### Parameters

- `query` - search keyword.
- `domain` - domain localization for Google.
- `start_page` - starting page number.
- `pages` - number of pages to retrieve.
- `limit` - number of results to retrieve on each page.
- `locale` - `Accept-Language` header value which changes your Google search page web interface language.
- `geo_location` - the geographical location that the result should be adapted for. Using this parameter correctly is extremely important to get the right data.
- `user_agent_type` - device type and browser.
- `render` - enables JavaScript rendering when set to `html`.
- `callback_url` - URL to your callback endpoint.
- `context` - additional advanced settings and controls for specialized requirements.
- `parse` - returns parsed data when set to `true`.
- `parsing_instructions` - define your own parsing and data transformation logic that will be executed on an HTML scraping result.

### Advanced example

```python
from crewai_tools import OxylabsGoogleSearchScraperTool

# make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set
tool = OxylabsGoogleSearchScraperTool(
    config={
        "parse": True,
        "geo_location": "Paris, France",
        "user_agent_type": "tablet",
    }
)

result = tool.run(query="iPhone 16")

print(result)
```

# `OxylabsUniversalScraperTool`

### Example

```python
from crewai_tools import OxylabsUniversalScraperTool

# make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set
tool = OxylabsUniversalScraperTool()

result = tool.run(url="https://ip.oxylabs.io")

print(result)
```

### Parameters

- `url` - website URL to scrape.
- `user_agent_type` - device type and browser.
- `geo_location` - sets the proxy's geolocation to retrieve data.
- `render` - enables JavaScript rendering when set to `html`.
- `callback_url` - URL to your callback endpoint.
- `context` - additional advanced settings and controls for specialized requirements.
- `parse` - returns parsed data when set to `true`, as long as a dedicated parser exists for the submitted URL's page type.
- `parsing_instructions` - define your own parsing and data transformation logic that will be executed on an HTML scraping result.

### Advanced example

```python
from crewai_tools import OxylabsUniversalScraperTool

# make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set
tool = OxylabsUniversalScraperTool(
    config={
        "render": "html",
        "user_agent_type": "mobile",
        "context": [
            {"key": "force_headers", "value": True},
            {"key": "force_cookies", "value": True},
            {
                "key": "headers",
                "value": {
                    "Custom-Header-Name": "custom header content",
                },
            },
            {
                "key": "cookies",
                "value": [
                    {"key": "NID", "value": "1234567890"},
                    {"key": "1P JAR", "value": "0987654321"},
                ],
            },
            {"key": "http_method", "value": "get"},
            {"key": "follow_redirects", "value": True},
            {"key": "successful_status_codes", "value": [808, 909]},
        ],
    }
)

result = tool.run(url="https://ip.oxylabs.io")

print(result)
```
docs/en/tools/web-scraping/scrapeelementfromwebsitetool.mdx (new file, 139 lines)
@@ -0,0 +1,139 @@
---
title: Scrape Element From Website Tool
description: The `ScrapeElementFromWebsiteTool` enables CrewAI agents to extract specific elements from websites using CSS selectors.
icon: code
---

# `ScrapeElementFromWebsiteTool`

## Description

The `ScrapeElementFromWebsiteTool` is designed to extract specific elements from websites using CSS selectors. This tool allows CrewAI agents to scrape targeted content from web pages, making it useful for data extraction tasks where only specific parts of a webpage are needed.

## Installation

To use this tool, you need to install the required dependencies:

```shell
uv add requests beautifulsoup4
```

## Steps to Get Started

To effectively use the `ScrapeElementFromWebsiteTool`, follow these steps:

1. **Install Dependencies**: Install the required packages using the command above.
2. **Identify CSS Selectors**: Determine the CSS selectors for the elements you want to extract from the website.
3. **Initialize the Tool**: Create an instance of the tool with the necessary parameters.

## Example

The following example demonstrates how to use the `ScrapeElementFromWebsiteTool` to extract specific elements from a website:

```python Code
from crewai import Agent, Task, Crew
from crewai_tools import ScrapeElementFromWebsiteTool

# Initialize the tool
scrape_tool = ScrapeElementFromWebsiteTool()

# Define an agent that uses the tool
web_scraper_agent = Agent(
    role="Web Scraper",
    goal="Extract specific information from websites",
    backstory="An expert in web scraping who can extract targeted content from web pages.",
    tools=[scrape_tool],
    verbose=True,
)

# Example task to extract headlines from a news website
scrape_task = Task(
    description="Extract the main headlines from the CNN homepage. Use the CSS selector '.headline' to target the headline elements.",
    expected_output="A list of the main headlines from CNN.",
    agent=web_scraper_agent,
)

# Create and run the crew
crew = Crew(agents=[web_scraper_agent], tasks=[scrape_task])
result = crew.kickoff()
```

You can also initialize the tool with predefined parameters:

```python Code
# Initialize the tool with predefined parameters
scrape_tool = ScrapeElementFromWebsiteTool(
    website_url="https://www.example.com",
    css_element=".main-content"
)
```

## Parameters

The `ScrapeElementFromWebsiteTool` accepts the following parameters during initialization:

- **website_url**: Optional. The URL of the website to scrape. If provided during initialization, the agent won't need to specify it when using the tool.
- **css_element**: Optional. The CSS selector for the elements to extract. If provided during initialization, the agent won't need to specify it when using the tool.
- **cookies**: Optional. A dictionary containing cookies to be sent with the request. This can be useful for websites that require authentication (see the sketch after this list).
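A minimal sketch of the `cookies` parameter described above. The dictionary is forwarded to `requests.get()` (see the Implementation Details section below), so plain name/value pairs are enough; the cookie name and value here are hypothetical placeholders:

```python Code
from crewai_tools import ScrapeElementFromWebsiteTool

# Send a session cookie with the request to reach content behind a login;
# the cookie name and value are placeholders.
scrape_tool = ScrapeElementFromWebsiteTool(
    website_url="https://www.example.com/account",
    css_element=".order-history",
    cookies={"sessionid": "your-session-cookie-value"},
)
```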
## Usage

When using the `ScrapeElementFromWebsiteTool` with an agent, the agent will need to provide the following parameters (unless they were specified during initialization):

- **website_url**: The URL of the website to scrape.
- **css_element**: The CSS selector for the elements to extract.

The tool will return the text content of all elements matching the CSS selector, joined by newlines.

```python Code
# Example of using the tool with an agent
web_scraper_agent = Agent(
    role="Web Scraper",
    goal="Extract specific elements from websites",
    backstory="An expert in web scraping who can extract targeted content using CSS selectors.",
    tools=[scrape_tool],
    verbose=True,
)

# Create a task for the agent to extract specific elements
extract_task = Task(
    description="""
    Extract all product titles from the featured products section on example.com.
    Use the CSS selector '.product-title' to target the title elements.
    """,
    expected_output="A list of product titles from the website",
    agent=web_scraper_agent,
)

# Run the task through a crew
crew = Crew(agents=[web_scraper_agent], tasks=[extract_task])
result = crew.kickoff()
```

## Implementation Details

The `ScrapeElementFromWebsiteTool` uses the `requests` library to fetch the web page and `BeautifulSoup` to parse the HTML and extract the specified elements:

```python Code
class ScrapeElementFromWebsiteTool(BaseTool):
    name: str = "Read a website content"
    description: str = "A tool that can be used to read a website content."

    # Implementation details...

    def _run(self, **kwargs: Any) -> Any:
        website_url = kwargs.get("website_url", self.website_url)
        css_element = kwargs.get("css_element", self.css_element)
        page = requests.get(
            website_url,
            headers=self.headers,
            cookies=self.cookies if self.cookies else {},
        )
        parsed = BeautifulSoup(page.content, "html.parser")
        elements = parsed.select(css_element)
        return "\n".join([element.get_text() for element in elements])
```

## Conclusion

The `ScrapeElementFromWebsiteTool` provides a powerful way to extract specific elements from websites using CSS selectors. By enabling agents to target only the content they need, it makes web scraping tasks more efficient and focused. This tool is particularly useful for data extraction, content monitoring, and research tasks where specific information needs to be extracted from web pages.
docs/en/tools/web-scraping/scrapegraphscrapetool.mdx (new file, 196 lines)
@@ -0,0 +1,196 @@
---
title: Scrapegraph Scrape Tool
description: The `ScrapegraphScrapeTool` leverages Scrapegraph AI's SmartScraper API to intelligently extract content from websites.
icon: chart-area
---

# `ScrapegraphScrapeTool`

## Description

The `ScrapegraphScrapeTool` is designed to leverage Scrapegraph AI's SmartScraper API to intelligently extract content from websites. This tool provides advanced web scraping capabilities with AI-powered content extraction, making it ideal for targeted data collection and content analysis tasks. Unlike traditional web scrapers, it can understand the context and structure of web pages to extract the most relevant information based on natural language prompts.

## Installation

To use this tool, you need to install the Scrapegraph Python client:

```shell
uv add scrapegraph-py
```

You'll also need to set up your Scrapegraph API key as an environment variable:

```shell
export SCRAPEGRAPH_API_KEY="your_api_key"
```

You can obtain an API key from [Scrapegraph AI](https://scrapegraphai.com).

## Steps to Get Started

To effectively use the `ScrapegraphScrapeTool`, follow these steps:

1. **Install Dependencies**: Install the required package using the command above.
2. **Set Up API Key**: Set your Scrapegraph API key as an environment variable or provide it during initialization.
3. **Initialize the Tool**: Create an instance of the tool with the necessary parameters.
4. **Define Extraction Prompts**: Create natural language prompts to guide the extraction of specific content.

## Example

The following example demonstrates how to use the `ScrapegraphScrapeTool` to extract content from a website:

```python Code
from crewai import Agent, Task, Crew
from crewai_tools import ScrapegraphScrapeTool

# Initialize the tool
scrape_tool = ScrapegraphScrapeTool(api_key="your_api_key")

# Define an agent that uses the tool
web_scraper_agent = Agent(
    role="Web Scraper",
    goal="Extract specific information from websites",
    backstory="An expert in web scraping who can extract targeted content from web pages.",
    tools=[scrape_tool],
    verbose=True,
)

# Example task to extract product information from an e-commerce site
scrape_task = Task(
    description="Extract product names, prices, and descriptions from the featured products section of example.com.",
    expected_output="A structured list of product information including names, prices, and descriptions.",
    agent=web_scraper_agent,
)

# Create and run the crew
crew = Crew(agents=[web_scraper_agent], tasks=[scrape_task])
result = crew.kickoff()
```

You can also initialize the tool with predefined parameters:

```python Code
# Initialize the tool with predefined parameters
scrape_tool = ScrapegraphScrapeTool(
    website_url="https://www.example.com",
    user_prompt="Extract all product prices and descriptions",
    api_key="your_api_key"
)
```

## Parameters

The `ScrapegraphScrapeTool` accepts the following parameters during initialization:

- **api_key**: Optional. Your Scrapegraph API key. If not provided, it will look for the `SCRAPEGRAPH_API_KEY` environment variable.
- **website_url**: Optional. The URL of the website to scrape. If provided during initialization, the agent won't need to specify it when using the tool.
- **user_prompt**: Optional. Custom instructions for content extraction. If provided during initialization, the agent won't need to specify it when using the tool.
- **enable_logging**: Optional. Whether to enable logging for the Scrapegraph client. Default is `False`.

## Usage

When using the `ScrapegraphScrapeTool` with an agent, the agent will need to provide the following parameters (unless they were specified during initialization):

- **website_url**: The URL of the website to scrape.
- **user_prompt**: Optional. Custom instructions for content extraction. Default is "Extract the main content of the webpage".

The tool will return the extracted content based on the provided prompt.

```python Code
# Example of using the tool with an agent
web_scraper_agent = Agent(
    role="Web Scraper",
    goal="Extract specific information from websites",
    backstory="An expert in web scraping who can extract targeted content from web pages.",
    tools=[scrape_tool],
    verbose=True,
)

# Create a task for the agent to extract specific content
extract_task = Task(
    description="Extract the main heading and summary from example.com",
    expected_output="The main heading and summary from the website",
    agent=web_scraper_agent,
)

# Run the task
crew = Crew(agents=[web_scraper_agent], tasks=[extract_task])
result = crew.kickoff()
```

## Error Handling

The `ScrapegraphScrapeTool` may raise the following exceptions:

- **ValueError**: When the API key is missing or the URL format is invalid.
- **RateLimitError**: When API rate limits are exceeded.
- **RuntimeError**: When the scraping operation fails (network issues, API errors).

It's recommended to instruct agents to handle potential errors gracefully:

```python Code
# Create a task that includes error handling instructions
robust_extract_task = Task(
    description="""
    Extract the main heading from example.com.
    Be aware that you might encounter errors such as:
    - Invalid URL format
    - Missing API key
    - Rate limit exceeded
    - Network or API errors

    If you encounter any errors, provide a clear explanation of what went wrong
    and suggest possible solutions.
    """,
    expected_output="Either the extracted heading or a clear error explanation",
    agent=web_scraper_agent,
)
```
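If you call the tool directly rather than through an agent, you can wrap the call yourself. A minimal sketch, assuming the tool's `run` method forwards `website_url` and `user_prompt` to `_run` as shown in the Implementation Details section below; the broad `except Exception` avoids assuming the import path of the rate-limit error class:

```python Code
from crewai_tools import ScrapegraphScrapeTool

scrape_tool = ScrapegraphScrapeTool()  # reads SCRAPEGRAPH_API_KEY from the environment

try:
    content = scrape_tool.run(
        website_url="https://www.example.com",
        user_prompt="Extract the main heading and summary",
    )
    print(content)
except ValueError as exc:
    # Missing API key or invalid URL format
    print(f"Configuration problem: {exc}")
except Exception as exc:
    # Covers rate-limit and runtime/API errors
    print(f"Scraping failed: {exc}")
```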
## Rate Limiting

The Scrapegraph API has rate limits that vary based on your subscription plan. Consider the following best practices:

- Implement appropriate delays between requests when processing multiple URLs.
- Handle rate limit errors gracefully in your application.
- Check your API plan limits on the Scrapegraph dashboard.

## Implementation Details

The `ScrapegraphScrapeTool` uses the Scrapegraph Python client to interact with the SmartScraper API:

```python Code
class ScrapegraphScrapeTool(BaseTool):
    """
    A tool that uses Scrapegraph AI to intelligently scrape website content.
    """

    # Implementation details...

    def _run(self, **kwargs: Any) -> Any:
        website_url = kwargs.get("website_url", self.website_url)
        user_prompt = (
            kwargs.get("user_prompt", self.user_prompt)
            or "Extract the main content of the webpage"
        )

        if not website_url:
            raise ValueError("website_url is required")

        # Validate URL format
        self._validate_url(website_url)

        try:
            # Make the SmartScraper request
            response = self._client.smartscraper(
                website_url=website_url,
                user_prompt=user_prompt,
            )

            return response
        # Error handling...
```

## Conclusion

The `ScrapegraphScrapeTool` provides a powerful way to extract content from websites using AI-powered understanding of web page structure. By enabling agents to target specific information using natural language prompts, it makes web scraping tasks more efficient and focused. This tool is particularly useful for data extraction, content monitoring, and research tasks where specific information needs to be extracted from web pages.
docs/en/tools/web-scraping/scrapewebsitetool.mdx (new file, 47 lines)
@@ -0,0 +1,47 @@
---
title: Scrape Website
description: The `ScrapeWebsiteTool` is designed to extract and read the content of a specified website.
icon: magnifying-glass-location
---

# `ScrapeWebsiteTool`

<Note>
    We are still working on improving tools, so there might be unexpected behavior or changes in the future.
</Note>

## Description

A tool designed to extract and read the content of a specified website. It is capable of handling various types of web pages by making HTTP requests and parsing the received HTML content.
This tool can be particularly useful for web scraping tasks, data collection, or extracting specific information from websites.

## Installation

Install the `crewai_tools` package:

```shell
pip install 'crewai[tools]'
```

## Example

```python
from crewai_tools import ScrapeWebsiteTool

# To enable scraping any website it finds during its execution
tool = ScrapeWebsiteTool()

# Initialize the tool with the website URL,
# so the agent can only scrape the content of the specified website
tool = ScrapeWebsiteTool(website_url='https://www.example.com')

# Extract the text from the site
text = tool.run()
print(text)
```

## Arguments

| Argument | Type | Description |
|:---------------|:---------|:-------------------------------------------------------------------------------------------------------------------------------------|
| **website_url** | `string` | **Mandatory** website URL to read. This is the primary input for the tool, specifying which website's content should be scraped and read. |
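A minimal sketch of wiring the tool into an agent, following the same pattern used elsewhere in these docs (the role, goal, and backstory values are placeholders):

```python
from crewai import Agent
from crewai_tools import ScrapeWebsiteTool

# Restrict the tool to a single site so the agent can only read that page
scrape_tool = ScrapeWebsiteTool(website_url="https://www.example.com")

research_agent = Agent(
    role="Web Researcher",
    goal="Summarize the content of the configured website",
    backstory="An analyst who extracts and condenses information from web pages.",
    tools=[scrape_tool],
    verbose=True,
)
```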
docs/en/tools/web-scraping/scrapflyscrapetool.mdx (new file, 220 lines)
@@ -0,0 +1,220 @@
---
title: Scrapfly Scrape Website Tool
description: The `ScrapflyScrapeWebsiteTool` leverages Scrapfly's web scraping API to extract content from websites in various formats.
icon: spider
---

# `ScrapflyScrapeWebsiteTool`

## Description

The `ScrapflyScrapeWebsiteTool` is designed to leverage [Scrapfly](https://scrapfly.io/)'s web scraping API to extract content from websites. This tool provides advanced web scraping capabilities with headless browser support, proxies, and anti-bot bypass features. It allows for extracting web page data in various formats, including raw HTML, markdown, and plain text, making it ideal for a wide range of web scraping tasks.

## Installation

To use this tool, you need to install the Scrapfly SDK:

```shell
uv add scrapfly-sdk
```

You'll also need to obtain a Scrapfly API key by registering at [scrapfly.io/register](https://www.scrapfly.io/register/).

## Steps to Get Started

To effectively use the `ScrapflyScrapeWebsiteTool`, follow these steps:

1. **Install Dependencies**: Install the Scrapfly SDK using the command above.
2. **Obtain API Key**: Register at Scrapfly to get your API key.
3. **Initialize the Tool**: Create an instance of the tool with your API key.
4. **Configure Scraping Parameters**: Customize the scraping parameters based on your needs.

## Example

The following example demonstrates how to use the `ScrapflyScrapeWebsiteTool` to extract content from a website:

```python Code
from crewai import Agent, Task, Crew
from crewai_tools import ScrapflyScrapeWebsiteTool

# Initialize the tool
scrape_tool = ScrapflyScrapeWebsiteTool(api_key="your_scrapfly_api_key")

# Define an agent that uses the tool
web_scraper_agent = Agent(
    role="Web Scraper",
    goal="Extract information from websites",
    backstory="An expert in web scraping who can extract content from any website.",
    tools=[scrape_tool],
    verbose=True,
)

# Example task to extract content from a website
scrape_task = Task(
    description="Extract the main content from the product page at https://web-scraping.dev/products and summarize the available products.",
    expected_output="A summary of the products available on the website.",
    agent=web_scraper_agent,
)

# Create and run the crew
crew = Crew(agents=[web_scraper_agent], tasks=[scrape_task])
result = crew.kickoff()
```

You can also customize the scraping parameters:

```python Code
# Example with custom scraping parameters
web_scraper_agent = Agent(
    role="Web Scraper",
    goal="Extract information from websites with custom parameters",
    backstory="An expert in web scraping who can extract content from any website.",
    tools=[scrape_tool],
    verbose=True,
)

# The agent will use the tool with parameters like:
# url="https://web-scraping.dev/products"
# scrape_format="markdown"
# ignore_scrape_failures=True
# scrape_config={
#     "asp": True,  # Bypass scraping blocking solutions, like Cloudflare
#     "render_js": True,  # Enable JavaScript rendering with a cloud headless browser
#     "proxy_pool": "public_residential_pool",  # Select a proxy pool
#     "country": "us",  # Select a proxy location
#     "auto_scroll": True,  # Auto scroll the page
# }

scrape_task = Task(
    description="Extract the main content from the product page at https://web-scraping.dev/products using advanced scraping options including JavaScript rendering and proxy settings.",
    expected_output="A detailed summary of the products with all available information.",
    agent=web_scraper_agent,
)
```

## Parameters

The `ScrapflyScrapeWebsiteTool` accepts the following parameters:

### Initialization Parameters

- **api_key**: Required. Your Scrapfly API key.

### Run Parameters

- **url**: Required. The URL of the website to scrape.
- **scrape_format**: Optional. The format in which to extract the web page content. Options are "raw" (HTML), "markdown", or "text". Default is "markdown".
- **scrape_config**: Optional. A dictionary containing additional Scrapfly scraping configuration options.
- **ignore_scrape_failures**: Optional. Whether to ignore failures during scraping. If set to `True`, the tool will return `None` instead of raising an exception when scraping fails.

## Scrapfly Configuration Options

The `scrape_config` parameter allows you to customize the scraping behavior with the following options:

- **asp**: Enable anti-scraping protection bypass.
- **render_js**: Enable JavaScript rendering with a cloud headless browser.
- **proxy_pool**: Select a proxy pool (e.g., "public_residential_pool", "datacenter").
- **country**: Select a proxy location (e.g., "us", "uk").
- **auto_scroll**: Automatically scroll the page to load lazy-loaded content.
- **js**: Execute custom JavaScript code in the headless browser.

For a complete list of configuration options, refer to the [Scrapfly API documentation](https://scrapfly.io/docs/scrape-api/getting-started).
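To see these options in one place, here is a minimal sketch of calling the tool directly with a custom `scrape_config`, assuming `run` forwards these keyword arguments to `_run` as shown in the Implementation Details section below (agents normally supply them for you):

```python Code
from crewai_tools import ScrapflyScrapeWebsiteTool

scrape_tool = ScrapflyScrapeWebsiteTool(api_key="your_scrapfly_api_key")

content = scrape_tool.run(
    url="https://web-scraping.dev/products",
    scrape_format="text",            # "raw", "markdown", or "text"
    ignore_scrape_failures=True,     # return None instead of raising on failure
    scrape_config={
        "asp": True,                 # bypass anti-scraping protection
        "render_js": True,           # render JavaScript in a cloud headless browser
        "country": "us",             # proxy location
    },
)
print(content)
```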
## Usage

When using the `ScrapflyScrapeWebsiteTool` with an agent, the agent will need to provide the URL of the website to scrape and can optionally specify the format and additional configuration options:

```python Code
# Example of using the tool with an agent
web_scraper_agent = Agent(
    role="Web Scraper",
    goal="Extract information from websites",
    backstory="An expert in web scraping who can extract content from any website.",
    tools=[scrape_tool],
    verbose=True,
)

# Create a task for the agent
scrape_task = Task(
    description="Extract the main content from example.com in markdown format.",
    expected_output="The main content of example.com in markdown format.",
    agent=web_scraper_agent,
)

# Run the task
crew = Crew(agents=[web_scraper_agent], tasks=[scrape_task])
result = crew.kickoff()
```

For more advanced usage with custom configuration:

```python Code
# Create a task with more specific instructions
advanced_scrape_task = Task(
    description="""
    Extract content from example.com with the following requirements:
    - Convert the content to plain text format
    - Enable JavaScript rendering
    - Use a US-based proxy
    - Handle any scraping failures gracefully
    """,
    expected_output="The extracted content from example.com",
    agent=web_scraper_agent,
)
```

## Error Handling

By default, the `ScrapflyScrapeWebsiteTool` will raise an exception if scraping fails. Agents can be instructed to handle failures gracefully by specifying the `ignore_scrape_failures` parameter:

```python Code
# Create a task that instructs the agent to handle errors
error_handling_task = Task(
    description="""
    Extract content from a potentially problematic website and make sure to handle any
    scraping failures gracefully by setting ignore_scrape_failures to True.
    """,
    expected_output="Either the extracted content or a graceful error message",
    agent=web_scraper_agent,
)
```

## Implementation Details

The `ScrapflyScrapeWebsiteTool` uses the Scrapfly SDK to interact with the Scrapfly API:

```python Code
class ScrapflyScrapeWebsiteTool(BaseTool):
    name: str = "Scrapfly web scraping API tool"
    description: str = (
        "Scrape a webpage url using Scrapfly and return its content as markdown or text"
    )

    # Implementation details...

    def _run(
        self,
        url: str,
        scrape_format: str = "markdown",
        scrape_config: Optional[Dict[str, Any]] = None,
        ignore_scrape_failures: Optional[bool] = None,
    ):
        from scrapfly import ScrapeApiResponse, ScrapeConfig

        scrape_config = scrape_config if scrape_config is not None else {}
        try:
            response: ScrapeApiResponse = self.scrapfly.scrape(
                ScrapeConfig(url, format=scrape_format, **scrape_config)
            )
            return response.scrape_result["content"]
        except Exception as e:
            if ignore_scrape_failures:
                logger.error(f"Error fetching data from {url}, exception: {e}")
                return None
            else:
                raise e
```

## Conclusion

The `ScrapflyScrapeWebsiteTool` provides a powerful way to extract content from websites using Scrapfly's advanced web scraping capabilities. With features like headless browser support, proxies, and anti-bot bypass, it can handle complex websites and extract content in various formats. This tool is particularly useful for data extraction, content monitoring, and research tasks where reliable web scraping is required.
docs/en/tools/web-scraping/seleniumscrapingtool.mdx (new file, 195 lines)
@@ -0,0 +1,195 @@
|
||||
---
|
||||
title: Selenium Scraper
|
||||
description: The `SeleniumScrapingTool` is designed to extract and read the content of a specified website using Selenium.
|
||||
icon: clipboard-user
|
||||
---
|
||||
|
||||
# `SeleniumScrapingTool`
|
||||
|
||||
<Note>
|
||||
This tool is currently in development. As we refine its capabilities, users may encounter unexpected behavior.
|
||||
Your feedback is invaluable to us for making improvements.
|
||||
</Note>
|
||||
|
||||
## Description
|
||||
|
||||
The `SeleniumScrapingTool` is crafted for high-efficiency web scraping tasks.
|
||||
It allows for precise extraction of content from web pages by using CSS selectors to target specific elements.
|
||||
Its design caters to a wide range of scraping needs, offering flexibility to work with any provided website URL.
|
||||
|
||||
## Installation
|
||||
|
||||
To use this tool, you need to install the CrewAI tools package and Selenium:
|
||||
|
||||
```shell
|
||||
pip install 'crewai[tools]'
|
||||
uv add selenium webdriver-manager
|
||||
```
|
||||
|
||||
You'll also need to have Chrome installed on your system, as the tool uses Chrome WebDriver for browser automation.

## Example

The following example demonstrates how to use the `SeleniumScrapingTool` with a CrewAI agent:

```python Code
from crewai import Agent, Task, Crew, Process
from crewai_tools import SeleniumScrapingTool

# Initialize the tool
selenium_tool = SeleniumScrapingTool()

# Define an agent that uses the tool
web_scraper_agent = Agent(
    role="Web Scraper",
    goal="Extract information from websites using Selenium",
    backstory="An expert web scraper who can extract content from dynamic websites.",
    tools=[selenium_tool],
    verbose=True,
)

# Example task to scrape content from a website
scrape_task = Task(
    description="Extract the main content from the homepage of example.com. Use the CSS selector 'main' to target the main content area.",
    expected_output="The main content from example.com's homepage.",
    agent=web_scraper_agent,
)

# Create and run the crew
crew = Crew(
    agents=[web_scraper_agent],
    tasks=[scrape_task],
    verbose=True,
    process=Process.sequential,
)
result = crew.kickoff()
```

You can also initialize the tool with predefined parameters:

```python Code
# Initialize the tool with predefined parameters
selenium_tool = SeleniumScrapingTool(
    website_url='https://example.com',
    css_element='.main-content',
    wait_time=5
)

# Define an agent that uses the tool
web_scraper_agent = Agent(
    role="Web Scraper",
    goal="Extract information from websites using Selenium",
    backstory="An expert web scraper who can extract content from dynamic websites.",
    tools=[selenium_tool],
    verbose=True,
)
```

## Parameters

The `SeleniumScrapingTool` accepts the following parameters during initialization:

- **website_url**: Optional. The URL of the website to scrape. If provided during initialization, the agent won't need to specify it when using the tool.
- **css_element**: Optional. The CSS selector for the elements to extract. If provided during initialization, the agent won't need to specify it when using the tool.
- **cookie**: Optional. A dictionary containing cookie information, useful for simulating a logged-in session to access restricted content.
- **wait_time**: Optional. Specifies the delay (in seconds) before scraping, allowing the website and any dynamic content to fully load. Default is `3` seconds.
- **return_html**: Optional. Whether to return the HTML content instead of just the text. Default is `False`.

When using the tool with an agent, the agent will need to provide the following parameters (unless they were specified during initialization):

- **website_url**: Required. The URL of the website to scrape.
- **css_element**: Required. The CSS selector for the elements to extract.
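
For example, a fully pre-configured tool might look like the sketch below. The URL, selector, and cookie values are placeholders, and the cookie shape shown (`name`/`value` keys, as Selenium's `add_cookie` expects) is an assumption:

```python Code
from crewai_tools import SeleniumScrapingTool

# Everything is set up-front, so the agent can call the tool without arguments
selenium_tool = SeleniumScrapingTool(
    website_url="https://example.com/dashboard",        # placeholder URL
    css_element=".report-table",                        # placeholder selector
    cookie={"name": "session_id", "value": "abc123"},   # assumed cookie format
    wait_time=10,        # give dynamic content extra time to load
    return_html=True,    # return markup rather than plain text
)
```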

## Agent Integration Example

Here's a more detailed example of how to integrate the `SeleniumScrapingTool` with a CrewAI agent:

```python Code
from crewai import Agent, Task, Crew, Process
from crewai_tools import SeleniumScrapingTool

# Initialize the tool
selenium_tool = SeleniumScrapingTool()

# Define an agent that uses the tool
web_scraper_agent = Agent(
    role="Web Scraper",
    goal="Extract and analyze information from dynamic websites",
    backstory="""You are an expert web scraper who specializes in extracting
    content from dynamic websites that require browser automation. You have
    extensive knowledge of CSS selectors and can identify the right selectors
    to target specific content on any website.""",
    tools=[selenium_tool],
    verbose=True,
)

# Create a task for the agent
scrape_task = Task(
    description="""
    Extract the following information from the news website at {website_url}:

    1. The headlines of all featured articles (CSS selector: '.headline')
    2. The publication dates of these articles (CSS selector: '.pub-date')
    3. The author names where available (CSS selector: '.author')

    Compile this information into a structured format with each article's details grouped together.
    """,
    expected_output="A structured list of articles with their headlines, publication dates, and authors.",
    agent=web_scraper_agent,
)

# Run the task
crew = Crew(
    agents=[web_scraper_agent],
    tasks=[scrape_task],
    verbose=True,
    process=Process.sequential,
)
result = crew.kickoff(inputs={"website_url": "https://news-example.com"})
```

## Implementation Details

The `SeleniumScrapingTool` uses Selenium WebDriver to automate browser interactions:

```python Code
class SeleniumScrapingTool(BaseTool):
    name: str = "Read a website content"
    description: str = "A tool that can be used to read a website content."
    args_schema: Type[BaseModel] = SeleniumScrapingToolSchema

    def _run(self, **kwargs: Any) -> Any:
        website_url = kwargs.get("website_url", self.website_url)
        css_element = kwargs.get("css_element", self.css_element)
        return_html = kwargs.get("return_html", self.return_html)
        driver = self._create_driver(website_url, self.cookie, self.wait_time)

        content = self._get_content(driver, css_element, return_html)
        driver.close()

        return "\n".join(content)
```

The tool performs the following steps (a rough stand-alone equivalent is sketched after the list):
1. Creates a headless Chrome browser instance
2. Navigates to the specified URL
3. Waits for the specified time to allow the page to load
4. Adds any cookies if provided
5. Extracts content based on the CSS selector
6. Returns the extracted content as text or HTML
7. Closes the browser instance
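
For reference, here is roughly what those steps look like in plain Selenium. This is an illustrative sketch, not the tool's actual source; the URL and selector are placeholders:

```python Code
import time

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

# 1. Create a headless Chrome instance
options = Options()
options.add_argument("--headless=new")
driver = webdriver.Chrome(options=options)

# 2. Navigate to the URL
driver.get("https://example.com")

# 3. Wait for the page (and any dynamic content) to load
time.sleep(3)

# 4. Optionally add a cookie for the current domain (shape assumed)
# driver.add_cookie({"name": "session_id", "value": "abc123"})

# 5. + 6. Extract text (or HTML) from every element matching the CSS selector
elements = driver.find_elements(By.CSS_SELECTOR, "main")
content = [element.text for element in elements]  # use element.get_attribute("outerHTML") for HTML
print("\n".join(content))

# 7. Close the browser
driver.close()
```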

## Handling Dynamic Content

The `SeleniumScrapingTool` is particularly useful for scraping websites with dynamic content that is loaded via JavaScript. By using a real browser instance, it can:

1. Execute JavaScript on the page
2. Wait for dynamic content to load
3. Interact with elements if needed
4. Extract content that would not be available with simple HTTP requests

You can adjust the `wait_time` parameter to ensure that all dynamic content has loaded before extraction.
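
For a JavaScript-heavy page, that might look like the following sketch (the URL and selector are placeholders):

```python Code
# Allow a single-page app more time to render before extraction
selenium_tool = SeleniumScrapingTool(
    website_url="https://example.com/spa",  # placeholder URL
    css_element="#app",                     # placeholder selector
    wait_time=10,                           # up from the 3-second default
)
```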

## Conclusion

The `SeleniumScrapingTool` provides a powerful way to extract content from websites using browser automation. By enabling agents to interact with websites as a real user would, it facilitates scraping of dynamic content that would be difficult or impossible to extract using simpler methods. This tool is particularly useful for research, data collection, and monitoring tasks that involve modern web applications with JavaScript-rendered content.

92
docs/en/tools/web-scraping/spidertool.mdx
Normal file
@@ -0,0 +1,92 @@
---
title: Spider Scraper
description: The `SpiderTool` is designed to extract and read the content of a specified website using Spider.
icon: spider-web
---

# `SpiderTool`

## Description

[Spider](https://spider.cloud/?ref=crewai) is the [fastest](https://github.com/spider-rs/spider/blob/main/benches/BENCHMARKS.md#benchmark-results)
open source scraper and crawler that returns LLM-ready data.
It converts any website into pure HTML, markdown, metadata or text while enabling you to crawl with custom actions using AI.

## Installation

To use the `SpiderTool`, install the [Spider SDK](https://pypi.org/project/spider-client/) along with the `crewai[tools]` package:

```shell
pip install spider-client 'crewai[tools]'
```

## Example

This example shows how you can use the `SpiderTool` to enable your agent to scrape and crawl websites.
The data returned from the Spider API is already LLM-ready, so no additional cleaning is needed.

```python Code
from crewai import Agent, Crew, Task
from crewai_tools import SpiderTool

def main():
    spider_tool = SpiderTool()

    searcher = Agent(
        role="Web Research Expert",
        goal="Find related information from specific URLs",
        backstory="An expert web researcher that uses the web extremely well",
        tools=[spider_tool],
        verbose=True,
    )

    return_metadata = Task(
        description="Scrape https://spider.cloud with a limit of 1 and enable metadata",
        expected_output="Metadata and 10 word summary of spider.cloud",
        agent=searcher,
    )

    crew = Crew(
        agents=[searcher],
        tasks=[
            return_metadata,
        ],
        verbose=True,
    )

    crew.kickoff()

if __name__ == "__main__":
    main()
```

## Arguments
| Argument | Type | Description |
|:------------------|:---------|:---|
| **api_key** | `string` | Specifies Spider API key. If not specified, it looks for `SPIDER_API_KEY` in environment variables. |
| **params** | `object` | Optional parameters for the request. Defaults to `{"return_format": "markdown"}` to optimize content for LLMs. |
| **request** | `string` | Type of request to perform (`http`, `chrome`, `smart`). `smart` defaults to HTTP, switching to JavaScript rendering if needed. |
| **limit** | `int` | Max pages to crawl per website. Set to `0` or omit for unlimited. |
| **depth** | `int` | Max crawl depth. Set to `0` for no limit. |
| **cache** | `bool` | Enables HTTP caching to speed up repeated runs. Default is `true`. |
| **budget** | `object` | Sets path-based limits for crawled pages, e.g., `{"*":1}` for root page only. |
| **locale** | `string` | Locale for the request, e.g., `en-US`. |
| **cookies** | `string` | HTTP cookies for the request. |
| **stealth** | `bool` | Enables stealth mode for Chrome requests to avoid detection. Default is `true`. |
| **headers** | `object` | HTTP headers as a map of key-value pairs for all requests. |
| **metadata** | `bool` | Stores metadata about pages and content, aiding AI interoperability. Defaults to `false`. |
| **viewport** | `object` | Sets Chrome viewport dimensions. Default is `800x600`. |
| **encoding** | `string` | Specifies encoding type, e.g., `UTF-8`, `SHIFT_JIS`. |
| **subdomains** | `bool` | Includes subdomains in the crawl. Default is `false`. |
| **user_agent** | `string` | Custom HTTP user agent. Defaults to a random agent. |
| **store_data** | `bool` | Enables data storage for the request. Overrides `storageless` when set. Default is `false`. |
| **gpt_config** | `object` | Allows AI to generate crawl actions, with optional chaining steps via an array for `"prompt"`. |
| **fingerprint** | `bool` | Enables advanced fingerprinting for Chrome. |
| **storageless** | `bool` | Prevents all data storage, including AI embeddings. Default is `false`. |
| **readability** | `bool` | Pre-processes content for reading via [Mozilla’s readability](https://github.com/mozilla/readability). Improves content for LLMs. |
| **return_format** | `string` | Format to return data: `markdown`, `raw`, `text`, `html2text`. Use `raw` for default page format. |
| **proxy_enabled** | `bool` | Enables high-performance proxies to avoid network-level blocking. |
| **query_selector** | `string` | CSS query selector for content extraction from markup. |
| **full_resources** | `bool` | Downloads all resources linked to the website. |
| **request_timeout** | `int` | Timeout in seconds for requests (5-60). Default is `30`. |
| **run_in_background** | `bool` | Runs the request in the background, useful for data storage and triggering dashboard crawls. No effect if `storageless` is set. |
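
Most of the options above are Spider request parameters, so (assuming they are forwarded to the API) they can be bundled into the `params` dictionary at initialization. A minimal sketch:

```python Code
import os

from crewai_tools import SpiderTool

spider_tool = SpiderTool(
    api_key=os.environ.get("SPIDER_API_KEY"),  # optional; the env variable is picked up automatically
    params={
        "return_format": "markdown",  # LLM-friendly output
        "request": "smart",           # plain HTTP, upgrading to Chrome rendering when needed
        "limit": 5,                   # crawl at most 5 pages per website
        "metadata": True,             # keep page metadata for downstream agents
    },
)
```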

244
docs/en/tools/web-scraping/stagehandtool.mdx
Normal file
@@ -0,0 +1,244 @@
---
title: Stagehand Tool
description: Web automation tool that integrates Stagehand with CrewAI for browser interaction and automation
icon: hand
---

# Overview

The `StagehandTool` integrates the [Stagehand](https://docs.stagehand.dev/get_started/introduction) framework with CrewAI, enabling agents to interact with websites and automate browser tasks using natural language instructions.

Stagehand is a powerful browser automation framework built by Browserbase that allows AI agents to:

- Navigate to websites
- Click buttons, links, and other elements
- Fill in forms
- Extract data from web pages
- Observe and identify elements
- Perform complex workflows

The StagehandTool wraps the Stagehand Python SDK to provide CrewAI agents with browser control capabilities through three core primitives:

1. **Act**: Perform actions like clicking, typing, or navigating
2. **Extract**: Extract structured data from web pages
3. **Observe**: Identify and analyze elements on the page

## Prerequisites

Before using this tool, ensure you have:

1. A [Browserbase](https://www.browserbase.com/) account with API key and project ID
2. An API key for an LLM (OpenAI or Anthropic Claude)
3. The Stagehand Python SDK installed

Install the required dependency:

```bash
pip install stagehand-py
```

## Usage

### Basic Implementation

The StagehandTool can be implemented in two ways:

#### 1. Using Context Manager (Recommended)
<Tip>
The context manager approach is recommended as it ensures proper cleanup of resources even if exceptions occur.
</Tip>

```python
from crewai import Agent, Task, Crew
from crewai_tools import StagehandTool
from stagehand.schemas import AvailableModel

# Initialize the tool with your API keys using a context manager
with StagehandTool(
    api_key="your-browserbase-api-key",
    project_id="your-browserbase-project-id",
    model_api_key="your-llm-api-key",  # OpenAI or Anthropic API key
    model_name=AvailableModel.CLAUDE_3_7_SONNET_LATEST,  # Optional: specify which model to use
) as stagehand_tool:
    # Create an agent with the tool
    researcher = Agent(
        role="Web Researcher",
        goal="Find and summarize information from websites",
        backstory="I'm an expert at finding information online.",
        verbose=True,
        tools=[stagehand_tool],
    )

    # Create a task that uses the tool
    research_task = Task(
        description="Go to https://www.example.com and tell me what you see on the homepage.",
        expected_output="A summary of what is on the homepage.",
        agent=researcher,
    )

    # Run the crew
    crew = Crew(
        agents=[researcher],
        tasks=[research_task],
        verbose=True,
    )

    result = crew.kickoff()
    print(result)
```

#### 2. Manual Resource Management

```python
from crewai import Agent, Task, Crew
from crewai_tools import StagehandTool
from stagehand.schemas import AvailableModel

# Initialize the tool with your API keys
stagehand_tool = StagehandTool(
    api_key="your-browserbase-api-key",
    project_id="your-browserbase-project-id",
    model_api_key="your-llm-api-key",
    model_name=AvailableModel.CLAUDE_3_7_SONNET_LATEST,
)

try:
    # Create an agent with the tool
    researcher = Agent(
        role="Web Researcher",
        goal="Find and summarize information from websites",
        backstory="I'm an expert at finding information online.",
        verbose=True,
        tools=[stagehand_tool],
    )

    # Create a task that uses the tool
    research_task = Task(
        description="Go to https://www.example.com and tell me what you see on the homepage.",
        expected_output="A summary of what is on the homepage.",
        agent=researcher,
    )

    # Run the crew
    crew = Crew(
        agents=[researcher],
        tasks=[research_task],
        verbose=True,
    )

    result = crew.kickoff()
    print(result)
finally:
    # Explicitly clean up resources
    stagehand_tool.close()
```

## Command Types

The StagehandTool supports three different command types for specific web automation tasks:

### 1. Act Command

The `act` command type (default) enables webpage interactions like clicking buttons, filling in forms, and navigating.

```python
# Perform an action (default behavior)
result = stagehand_tool.run(
    instruction="Click the login button",
    url="https://example.com",
    command_type="act"  # Default, so can be omitted
)

# Fill out a form
result = stagehand_tool.run(
    instruction="Fill the contact form with name 'John Doe', email 'john@example.com', and message 'Hello world'",
    url="https://example.com/contact"
)
```

### 2. Extract Command

The `extract` command type retrieves structured data from webpages.

```python
# Extract all product information
result = stagehand_tool.run(
    instruction="Extract all product names, prices, and descriptions",
    url="https://example.com/products",
    command_type="extract"
)

# Extract specific information with a selector
result = stagehand_tool.run(
    instruction="Extract the main article title and content",
    url="https://example.com/blog/article",
    command_type="extract",
    selector=".article-container"  # Optional CSS selector
)
```

### 3. Observe Command

The `observe` command type identifies and analyzes webpage elements.

```python
# Find interactive elements
result = stagehand_tool.run(
    instruction="Find all interactive elements in the navigation menu",
    url="https://example.com",
    command_type="observe"
)

# Identify form fields
result = stagehand_tool.run(
    instruction="Identify all the input fields in the registration form",
    url="https://example.com/register",
    command_type="observe",
    selector="#registration-form"
)
```

## Configuration Options

Customize the StagehandTool behavior with these parameters:

```python
stagehand_tool = StagehandTool(
    api_key="your-browserbase-api-key",
    project_id="your-browserbase-project-id",
    model_api_key="your-llm-api-key",
    model_name=AvailableModel.CLAUDE_3_7_SONNET_LATEST,
    dom_settle_timeout_ms=5000,  # Wait longer for DOM to settle
    headless=True,  # Run browser in headless mode
    self_heal=True,  # Attempt to recover from errors
    wait_for_captcha_solves=True,  # Wait for CAPTCHA solving
    verbose=1,  # Control logging verbosity (0-3)
)
```

## Best Practices

1. **Be Specific**: Provide detailed instructions for better results
2. **Choose Appropriate Command Type**: Select the right command type for your task
3. **Use Selectors**: Leverage CSS selectors to improve accuracy
4. **Break Down Complex Tasks**: Split complex workflows into multiple tool calls (see the sketch after this list)
5. **Implement Error Handling**: Add error handling for potential issues
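
For instance, instead of one broad instruction, a workflow can be split into an observe call, an act call, and an extract call. The sketch below reuses the `stagehand_tool` instance from the examples above; the URLs and instructions are placeholders:

```python
# Step 1: Observe — locate the element before interacting with it
fields = stagehand_tool.run(
    instruction="Identify the search box in the page header",
    url="https://example.com",
    command_type="observe",
)

# Step 2: Act — perform the interaction as its own call
stagehand_tool.run(
    instruction="Type 'browser automation' into the search box and submit the search",
    url="https://example.com",
    command_type="act",
)

# Step 3: Extract — pull out the structured result
results = stagehand_tool.run(
    instruction="Extract the titles and URLs of the first five search results",
    url="https://example.com/search?q=browser+automation",
    command_type="extract",
)
print(results)
```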

## Troubleshooting

Common issues and solutions:

- **Session Issues**: Verify API keys for both Browserbase and LLM provider
- **Element Not Found**: Increase `dom_settle_timeout_ms` for slower pages
- **Action Failures**: Use `observe` to identify correct elements first
- **Incomplete Data**: Refine instructions or provide specific selectors

## Additional Resources

For questions about the CrewAI integration:
- Join Stagehand's [Slack community](https://stagehand.dev/slack)
- Open an issue in the [Stagehand repository](https://github.com/browserbase/stagehand)
- Visit [Stagehand documentation](https://docs.stagehand.dev/)