mirror of https://github.com/crewAIInc/crewAI.git
docs: improve tool documentation and examples
- Update SerperDevTool documentation with accurate parameters and JSON response format
- Enhance XMLSearchTool and MDXSearchTool docs with RAG capabilities and required parameters
- Fix code block formatting across multiple tool documentation files
- Add clarification about environment variables and configuration
- Validate all examples against actual implementations
- Successfully tested with mkdocs build

Co-Authored-By: Joe Moura <joao@crewai.com>
@@ -26,7 +26,7 @@ pip install spider-client 'crewai[tools]'
 This example shows you how you can use the `SpiderTool` to enable your agent to scrape and crawl websites.
 The data returned from the Spider API is already LLM-ready, so no need to do any cleaning there.
 
-```python Code
+```python
 from crewai_tools import SpiderTool
 
 def main():
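The hunk above shows only the opening lines of the documentation's example. For orientation, here is a minimal sketch of how `SpiderTool` is typically handed to a crewAI agent; the agent role, task wording, target URL, and the bare `SpiderTool()` constructor are illustrative assumptions rather than the doc's actual example, and the Spider API key is assumed to be provided via the environment.

```python
from crewai import Agent, Crew, Task
from crewai_tools import SpiderTool


def main():
    # Assumption: the Spider API key is picked up from the environment;
    # check the doc / spider-client for the exact variable name.
    spider_tool = SpiderTool()

    researcher = Agent(
        role="Web Research Expert",
        goal="Scrape and summarize content from a given website",
        backstory="An analyst who relies on the Spider API for clean, LLM-ready page data.",
        tools=[spider_tool],
    )

    task = Task(
        description="Scrape https://spider.cloud and summarize what the service offers.",
        expected_output="A short summary of the scraped page.",
        agent=researcher,
    )

    crew = Crew(agents=[researcher], tasks=[task])
    print(crew.kickoff())


if __name__ == "__main__":
    main()
```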
@@ -89,4 +89,4 @@ if __name__ == "__main__":
 | **query_selector** | `string` | CSS query selector for content extraction from markup. |
 | **full_resources** | `bool` | Downloads all resources linked to the website. |
 | **request_timeout** | `int` | Timeout in seconds for requests (5-60). Default is `30`. |
-| **run_in_background** | `bool` | Runs the request in the background, useful for data storage and triggering dashboard crawls. No effect if `storageless` is set. |
+| **run_in_background** | `bool` | Runs the request in the background, useful for data storage and triggering dashboard crawls. No effect if `storageless` is set. |
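The table rows touched by this hunk describe Spider API request options. A hedged sketch of how they might be bundled into the tool is below; the `custom_params` constructor keyword is a hypothetical name used for illustration only and is not confirmed by this diff, so verify the actual argument name against the SpiderTool implementation.

```python
from crewai_tools import SpiderTool

# Values drawn from the parameter table above; how SpiderTool accepts them
# (shown here as a hypothetical `custom_params` keyword) is an assumption
# and should be checked against the tool's source.
spider_params = {
    "query_selector": "main article",  # CSS selector for content extraction
    "full_resources": False,           # do not download linked resources
    "request_timeout": 30,             # seconds, valid range 5-60
    "run_in_background": False,        # no effect if storageless is set
}

spider_tool = SpiderTool(custom_params=spider_params)
```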