---
title: Scrape Website
description: The `ScrapeWebsiteTool` is designed to extract and read the content of a specified website.
icon: magnifying-glass-location
mode: "wide"
---
|
# `ScrapeWebsiteTool`

<Note>
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
</Note>
|
## Description

A tool designed to extract and read the content of a specified website. It is capable of handling various types of web pages by making HTTP requests and parsing the received HTML content.
This tool can be particularly useful for web scraping tasks, data collection, or extracting specific information from websites.
|
## Installation

Install the `crewai_tools` package:

```shell
pip install 'crewai[tools]'
```
|
## Example

```python
from crewai_tools import ScrapeWebsiteTool

# To enable scraping any website it finds during its execution
tool = ScrapeWebsiteTool()

# Initialize the tool with the website URL,
# so the agent can only scrape the content of the specified website
tool = ScrapeWebsiteTool(website_url='https://www.example.com')

# Extract the text from the site
text = tool.run()
print(text)
```
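
In practice, the tool is usually handed to an agent rather than called directly. The following is a minimal sketch of that pattern using the standard `Agent`/`Task`/`Crew` setup; the role, goal, backstory, task text, and URL are illustrative placeholders, not part of the official example.

```python
from crewai import Agent, Task, Crew
from crewai_tools import ScrapeWebsiteTool

# Restrict scraping to a single site (illustrative URL)
scrape_tool = ScrapeWebsiteTool(website_url='https://www.example.com')

# Illustrative agent that relies on the tool to read the page
researcher = Agent(
    role="Web Researcher",
    goal="Summarize the content of the target website",
    backstory="You read web pages and report their key points.",
    tools=[scrape_tool],
)

summarize = Task(
    description="Scrape the website and summarize its main content.",
    expected_output="A short summary of the page.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[summarize])
result = crew.kickoff()
print(result)
```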
|
## Arguments

| Argument        | Type     | Description |
|:----------------|:---------|:------------|
| **website_url** | `string` | **Mandatory** website URL to scrape and read. This is the primary input for the tool, specifying which website's content should be extracted. |
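
When no URL is fixed in the constructor, the target can instead be supplied when the tool runs. A rough sketch, assuming `run` accepts the `website_url` keyword described in the table above:

```python
from crewai_tools import ScrapeWebsiteTool

# No URL fixed up front; the target site is chosen per call.
tool = ScrapeWebsiteTool()

# Assumption: `run` forwards the `website_url` argument from the table above.
text = tool.run(website_url='https://www.example.com')
print(text[:500])  # preview the first 500 characters of scraped text
```

Fixing the URL in the constructor, as in the example above, remains the safer choice when the agent should only ever read one specific site.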