docs: improve tool documentation and examples

- Update SerperDevTool documentation with accurate parameters and JSON response format
- Enhance XMLSearchTool and MDXSearchTool docs with RAG capabilities and required parameters
- Fix code block formatting across multiple tool documentation files
- Add clarification about environment variables and configuration
- Validate all examples against actual implementations
- Successfully tested with mkdocs build

Co-Authored-By: Joe Moura <joao@crewai.com>
This commit is contained in:
Devin AI
2024-12-28 04:32:08 +00:00
parent 99fe91586d
commit a499d9de42
14 changed files with 464 additions and 64 deletions

View File

@@ -26,7 +26,7 @@ pip install spider-client 'crewai[tools]'
This example shows you how you can use the `SpiderTool` to enable your agent to scrape and crawl websites.
The data returned from the Spider API is already LLM-ready, so no need to do any cleaning there.
```python Code
```python
from crewai_tools import SpiderTool
def main():
@@ -89,4 +89,4 @@ if __name__ == "__main__":
| **query_selector** | `string` | CSS query selector for content extraction from markup. |
| **full_resources** | `bool` | Downloads all resources linked to the website. |
| **request_timeout** | `int` | Timeout in seconds for requests (5-60). Default is `30`. |
| **run_in_background** | `bool` | Runs the request in the background, useful for data storage and triggering dashboard crawls. No effect if `storageless` is set. |
| **run_in_background** | `bool` | Runs the request in the background, useful for data storage and triggering dashboard crawls. No effect if `storageless` is set. |