Mirror of https://github.com/crewAIInc/crewAI.git, synced 2026-01-09 08:08:32 +00:00
chore(docs): bring AMP doc refresh from release/v1.0.0 into main (#3637)
Some checks failed
* WIP: v1 docs (#3626) (cherry picked from commit d46e20fa09bcd2f5916282f5553ddeb7183bd92c)
* docs: parity for all translations
* docs: full name of acronym AMP
* docs: fix lingering unused code
* docs: expand contextual options in docs.json
* docs: add contextual action to request feature on GitHub
* chore: tidy docs formatting

docs/pt-BR/tools/automation/zapieractionstool.mdx (new file, 57 lines)

---
title: Zapier Actions Tool
description: The `ZapierActionsAdapter` exposes Zapier actions as CrewAI tools for automation.
icon: bolt
mode: "wide"
---

# `ZapierActionsAdapter`

## Description

Use the Zapier adapter to list and call Zapier actions as CrewAI tools. This enables agents to trigger automations across thousands of apps.

## Installation

This adapter is included with `crewai-tools`; no extra install is required.

## Environment Variables

- `ZAPIER_API_KEY` (required): Zapier API key. Get one from the Zapier Actions dashboard at https://actions.zapier.com/ (create an account, then generate an API key). You can also pass the key directly when constructing the adapter, as in the example below.
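
A minimal sketch of the environment-variable path, assuming the adapter falls back to `ZAPIER_API_KEY` when no key is passed explicitly (the example below passes the key directly instead):

```python Code
import os
from crewai_tools.adapters.zapier_adapter import ZapierActionsAdapter

# Normally set outside the code (shell profile, secrets manager, etc.)
os.environ.setdefault("ZAPIER_API_KEY", "your_zapier_api_key")

adapter = ZapierActionsAdapter()
print([tool.name for tool in adapter.tools()])  # names of the generated tools
```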

## Example

```python Code
from crewai import Agent, Task, Crew
from crewai_tools.adapters.zapier_adapter import ZapierActionsAdapter

# Discover the Zapier actions available to this key and wrap each one as a tool
adapter = ZapierActionsAdapter(api_key="your_zapier_api_key")
tools = adapter.tools()

agent = Agent(
    role="Automator",
    goal="Execute Zapier actions",
    backstory="Automation specialist",
    tools=tools,
    verbose=True,
)

task = Task(
    description="Create a new Google Sheet and add a row using Zapier actions",
    expected_output="Confirmation with created resource IDs",
    agent=agent,
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
```
## Notes & limits

- The adapter fetches the actions available to your key and generates `BaseTool` wrappers dynamically.
- Handle action-specific required fields in your task instructions or tool call.
- Rate limits depend on your Zapier plan; see the Zapier Actions docs.

docs/pt-BR/tools/integration/bedrockinvokeagenttool.mdx (188 lines removed)

---
title: Bedrock Invoke Agent Tool
description: Enables CrewAI agents to invoke Amazon Bedrock Agents and leverage their capabilities within your workflows
icon: aws
mode: "wide"
---

# `BedrockInvokeAgentTool`

The `BedrockInvokeAgentTool` enables CrewAI agents to invoke Amazon Bedrock Agents and leverage their capabilities within your workflows.

## Installation

```bash
uv pip install 'crewai[tools]'
```

## Requirements

- AWS credentials configured (through environment variables or the AWS CLI)
- `boto3` and `python-dotenv` packages
- Access to Amazon Bedrock Agents

## Usage

Here's how to use the tool with a CrewAI agent:

```python {2, 4-8}
from crewai import Agent, Task, Crew
from crewai_tools.aws.bedrock.agents.invoke_agent_tool import BedrockInvokeAgentTool

# Initialize the tool
agent_tool = BedrockInvokeAgentTool(
    agent_id="your-agent-id",
    agent_alias_id="your-agent-alias-id"
)

# Create a CrewAI agent that uses the tool
aws_expert = Agent(
    role='AWS Service Expert',
    goal='Help users understand AWS services and quotas',
    backstory='I am an expert in AWS services and can provide detailed information about them.',
    tools=[agent_tool],
    verbose=True
)

# Create a task for the agent
quota_task = Task(
    description="Find out the current service quotas for EC2 in us-west-2 and explain any recent changes.",
    expected_output="A summary of current EC2 service quotas in us-west-2 and recent changes",  # expected_output is required by Task
    agent=aws_expert
)

# Create a crew with the agent
crew = Crew(
    agents=[aws_expert],
    tasks=[quota_task],
    verbose=True
)

# Run the crew
result = crew.kickoff()
print(result)
```

## Tool Arguments

| Argument | Type | Required | Default | Description |
|:---------|:-----|:---------|:--------|:------------|
| **agent_id** | `str` | Yes | None | The unique identifier of the Bedrock agent |
| **agent_alias_id** | `str` | Yes | None | The unique identifier of the agent alias |
| **session_id** | `str` | No | timestamp | The unique identifier of the session |
| **enable_trace** | `bool` | No | False | Whether to enable trace for debugging |
| **end_session** | `bool` | No | False | Whether to end the session after invocation |
| **description** | `str` | No | None | Custom description for the tool |

## Environment Variables

```bash
BEDROCK_AGENT_ID=your-agent-id                 # Alternative to passing agent_id
BEDROCK_AGENT_ALIAS_ID=your-agent-alias-id     # Alternative to passing agent_alias_id
AWS_REGION=your-aws-region                     # Defaults to us-west-2
AWS_ACCESS_KEY_ID=your-access-key              # Required for AWS authentication
AWS_SECRET_ACCESS_KEY=your-secret-key          # Required for AWS authentication
```
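
Since `python-dotenv` is listed as a requirement, a minimal sketch of loading these variables from a local `.env` file before constructing the tool; it assumes the tool falls back to the environment variables above when the IDs are not passed explicitly:

```python
from dotenv import load_dotenv
from crewai_tools.aws.bedrock.agents.invoke_agent_tool import BedrockInvokeAgentTool

load_dotenv()  # reads BEDROCK_AGENT_ID, BEDROCK_AGENT_ALIAS_ID, AWS_* from .env

# With the environment populated, the IDs no longer need to be passed explicitly
agent_tool = BedrockInvokeAgentTool()
```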

## Advanced Usage

### Multi-Agent Workflow with Session Management

```python {2, 4-22}
from crewai import Agent, Task, Crew, Process
from crewai_tools.aws.bedrock.agents.invoke_agent_tool import BedrockInvokeAgentTool

# Initialize tools with session management
initial_tool = BedrockInvokeAgentTool(
    agent_id="your-agent-id",
    agent_alias_id="your-agent-alias-id",
    session_id="custom-session-id"
)

followup_tool = BedrockInvokeAgentTool(
    agent_id="your-agent-id",
    agent_alias_id="your-agent-alias-id",
    session_id="custom-session-id"
)

final_tool = BedrockInvokeAgentTool(
    agent_id="your-agent-id",
    agent_alias_id="your-agent-alias-id",
    session_id="custom-session-id",
    end_session=True
)

# Create agents for different stages
researcher = Agent(
    role='AWS Service Researcher',
    goal='Gather information about AWS services',
    backstory='I am specialized in finding detailed AWS service information.',
    tools=[initial_tool]
)

analyst = Agent(
    role='Service Compatibility Analyst',
    goal='Analyze service compatibility and requirements',
    backstory='I analyze AWS services for compatibility and integration possibilities.',
    tools=[followup_tool]
)

summarizer = Agent(
    role='Technical Documentation Writer',
    goal='Create clear technical summaries',
    backstory='I specialize in creating clear, concise technical documentation.',
    tools=[final_tool]
)

# Create tasks
research_task = Task(
    description="Find all available AWS services in us-west-2 region.",
    expected_output="A list of available AWS services in us-west-2",
    agent=researcher
)

analysis_task = Task(
    description="Analyze which services support IPv6 and their implementation requirements.",
    expected_output="An analysis of IPv6 support across the listed services",
    agent=analyst
)

summary_task = Task(
    description="Create a summary of IPv6-compatible services and their key features.",
    expected_output="A concise summary of IPv6-compatible services and key features",
    agent=summarizer
)

# Create a crew with the agents and tasks
crew = Crew(
    agents=[researcher, analyst, summarizer],
    tasks=[research_task, analysis_task, summary_task],
    process=Process.sequential,
    verbose=True
)

# Run the crew
result = crew.kickoff()
```

## Use Cases

### Hybrid Multi-Agent Collaborations
- Create workflows where CrewAI agents collaborate with managed Bedrock agents running as services in AWS
- Enable scenarios where sensitive data processing happens within your AWS environment while other agents operate externally
- Bridge on-premises CrewAI agents with cloud-based Bedrock agents for distributed intelligence workflows

### Data Sovereignty and Compliance
- Keep data-sensitive agentic workflows within your AWS environment while allowing external CrewAI agents to orchestrate tasks
- Maintain compliance with data residency requirements by processing sensitive information only within your AWS account
- Enable secure multi-agent collaborations where some agents cannot access your organization's private data

### Seamless AWS Service Integration
- Access any AWS service through Amazon Bedrock Actions without writing complex integration code
- Enable CrewAI agents to interact with AWS services through natural language requests
- Leverage pre-built Bedrock agent capabilities to interact with AWS services like Bedrock Knowledge Bases, Lambda, and more

### Scalable Hybrid Agent Architectures
- Offload computationally intensive tasks to managed Bedrock agents while lightweight tasks run in CrewAI
- Scale agent processing by distributing workloads between local CrewAI agents and cloud-based Bedrock agents

### Cross-Organizational Agent Collaboration
- Enable secure collaboration between your organization's CrewAI agents and partner organizations' Bedrock agents
- Create workflows where external expertise from Bedrock agents can be incorporated without exposing sensitive data
- Build agent ecosystems that span organizational boundaries while maintaining security and data control

docs/pt-BR/tools/database-data/mongodbvectorsearchtool.mdx (new file, 167 lines)

---
title: MongoDB Vector Search Tool
description: The `MongoDBVectorSearchTool` performs vector search on MongoDB Atlas with optional indexing helpers.
icon: "leaf"
mode: "wide"
---

# `MongoDBVectorSearchTool`

## Description

Perform vector similarity queries on MongoDB Atlas collections. The tool also provides index-creation helpers and bulk insert of embedded texts.

MongoDB Atlas supports native vector search. Learn more:
https://www.mongodb.com/docs/atlas/atlas-vector-search/vector-search-overview/

## Installation

Install with the MongoDB extra:

```shell
pip install 'crewai-tools[mongodb]'
```

or

```shell
uv add crewai-tools --extra mongodb
```

## Parameters

### Initialization

- `connection_string` (str, required)
- `database_name` (str, required)
- `collection_name` (str, required)
- `vector_index_name` (str, default `vector_index`)
- `text_key` (str, default `text`)
- `embedding_key` (str, default `embedding`)
- `dimensions` (int, default `1536`)

### Run Parameters

- `query` (str, required): Natural language query to embed and search.

## Quick start

```python Code
from crewai_tools import MongoDBVectorSearchTool

tool = MongoDBVectorSearchTool(
    connection_string="mongodb+srv://...",
    database_name="mydb",
    collection_name="docs",
)

print(tool.run(query="how to create vector index"))
```

## Index creation helpers

Use `create_vector_search_index(...)` to provision an Atlas Vector Search index with the correct dimensions and similarity metric, as sketched below.
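
A minimal sketch, reusing `tool` from the quick start above and the same `dimensions` argument shown in the preloading example further down:

```python Code
# Provision the Atlas Vector Search index sized for your embedding model
tool.create_vector_search_index(dimensions=1536)  # must match the `dimensions` setting
```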

## Common issues

- Authentication failures: ensure your Atlas IP Access List allows your runner and the connection string includes credentials.
- Index not found: create the vector index first; its name must match `vector_index_name`.
- Dimensions mismatch: align your embedding model's output size with `dimensions`.
## More examples

### Basic initialization

```python Code
from crewai_tools import MongoDBVectorSearchTool

tool = MongoDBVectorSearchTool(
    database_name="example_database",
    collection_name="example_collection",
    connection_string="<your_mongodb_connection_string>",
)
```

### Custom query configuration

```python Code
from crewai import Agent
from crewai_tools import MongoDBVectorSearchConfig, MongoDBVectorSearchTool

query_config = MongoDBVectorSearchConfig(limit=10, oversampling_factor=2)
tool = MongoDBVectorSearchTool(
    database_name="example_database",
    collection_name="example_collection",
    connection_string="<your_mongodb_connection_string>",
    query_config=query_config,
    vector_index_name="my_vector_index",
)

rag_agent = Agent(
    name="rag_agent",
    role="You are a helpful assistant that can answer questions with the help of the MongoDBVectorSearchTool.",
    goal="...",
    backstory="...",
    tools=[tool],
)
```

### Preloading the database and creating the index

```python Code
import os

from crewai_tools import MongoDBVectorSearchTool

tool = MongoDBVectorSearchTool(
    database_name="example_database",
    collection_name="example_collection",
    connection_string="<your_mongodb_connection_string>",
)

# Load text content from a local folder and add it to MongoDB
texts = []
for fname in os.listdir("knowledge"):
    path = os.path.join("knowledge", fname)
    if os.path.isfile(path):
        with open(path, "r", encoding="utf-8") as f:
            texts.append(f.read())

tool.add_texts(texts)

# Create the Atlas Vector Search index (e.g., 3072 dims for text-embedding-3-large)
tool.create_vector_search_index(dimensions=3072)
```

## Example

```python Code
from crewai import Agent, Task, Crew
from crewai_tools import MongoDBVectorSearchTool

tool = MongoDBVectorSearchTool(
    connection_string="mongodb+srv://...",
    database_name="mydb",
    collection_name="docs",
)

agent = Agent(
    role="RAG Agent",
    goal="Answer using MongoDB vector search",
    backstory="Knowledge retrieval specialist",
    tools=[tool],
    verbose=True,
)

task = Task(
    description="Find relevant content for 'indexing guidance'",
    expected_output="A concise answer citing the most relevant matches",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,
)

result = crew.kickoff()
```

docs/pt-BR/tools/database-data/singlestoresearchtool.mdx (new file, 60 lines)

---
title: SingleStore Search Tool
description: The `SingleStoreSearchTool` safely executes SELECT/SHOW queries on SingleStore with pooling.
icon: circle
mode: "wide"
---

# `SingleStoreSearchTool`

## Description

Execute read-only queries (`SELECT`/`SHOW`) against SingleStore with connection pooling and input validation.

## Installation

```shell
uv add 'crewai-tools[singlestore]'
```

## Environment Variables

Variables like `SINGLESTOREDB_HOST`, `SINGLESTOREDB_USER`, and `SINGLESTOREDB_PASSWORD` can be used, or `SINGLESTOREDB_URL` as a single DSN.

Generate the API key from the SingleStore dashboard, [docs here](https://docs.singlestore.com/cloud/reference/management-api/#generate-an-api-key).
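
A minimal sketch of configuring the connection through the environment instead of constructor arguments; the DSN format shown is illustrative, so check the SingleStore Python client docs for the exact form:

```python Code
import os

# Individual settings...
os.environ["SINGLESTOREDB_HOST"] = "host"
os.environ["SINGLESTOREDB_USER"] = "user"
os.environ["SINGLESTOREDB_PASSWORD"] = "pass"

# ...or a single DSN (illustrative format)
os.environ["SINGLESTOREDB_URL"] = "user:pass@host:3306/db"
```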

## Example

```python Code
from crewai import Agent, Task, Crew
from crewai_tools import SingleStoreSearchTool

tool = SingleStoreSearchTool(
    tables=["products"],
    host="host",
    user="user",
    password="pass",
    database="db",
)

agent = Agent(
    role="Analyst",
    goal="Query SingleStore",
    backstory="Data analyst",  # backstory is required by Agent
    tools=[tool],
    verbose=True,
)

task = Task(
    description="List 5 products",
    expected_output="5 rows as JSON/text",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,
)

result = crew.kickoff()
```

docs/pt-BR/tools/file-document/ocrtool.mdx (new file, 88 lines)

---
title: OCR Tool
description: The `OCRTool` extracts text from local images or image URLs using an LLM with vision.
icon: image
mode: "wide"
---

# `OCRTool`

## Description

Extract text from images (local path or URL). Uses a vision-capable LLM via CrewAI's LLM interface.

## Installation

No extra install beyond `crewai-tools`. Ensure your selected LLM supports vision.

## Parameters

### Run Parameters

- `image_path_url` (str, required): Local image path or HTTP(S) URL.
## Examples

### Direct usage

```python Code
from crewai_tools import OCRTool

print(OCRTool().run(image_path_url="/tmp/receipt.png"))
```

### With an agent

```python Code
from crewai import Agent, Task, Crew
from crewai_tools import OCRTool

ocr = OCRTool()

agent = Agent(
    role="OCR",
    goal="Extract text",
    backstory="Vision-enabled analyst",  # backstory is required by Agent
    tools=[ocr],
)

task = Task(
    description="Extract text from https://example.com/invoice.jpg",
    expected_output="All detected text in plain text",
    agent=agent,
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
```

## Notes

- Ensure the selected LLM supports image inputs.
- For large images, consider downscaling to reduce token usage.
- You can pass a specific LLM instance to the tool (e.g., `LLM(model="gpt-4o")`) if needed, matching the README guidance; see the sketch below.
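
A minimal sketch of supplying a specific vision-capable model, assuming the constructor parameter is named `llm` (the note above implies this but does not spell out the parameter name):

```python Code
from crewai import LLM
from crewai_tools import OCRTool

# Any vision-capable model works; gpt-4o is just the README's example
ocr = OCRTool(llm=LLM(model="gpt-4o"))
```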

docs/pt-BR/tools/file-document/pdf-text-writing-tool.mdx (new file, 75 lines)

---
title: PDF Text Writing Tool
description: The `PDFTextWritingTool` writes text to specific positions in a PDF, supporting custom fonts.
icon: file-pdf
mode: "wide"
---

# `PDFTextWritingTool`

## Description

Write text at precise coordinates on a PDF page, optionally embedding a custom TrueType font.

## Parameters

### Run Parameters

- `pdf_path` (str, required): Path to the input PDF.
- `text` (str, required): Text to add.
- `position` (tuple[int, int], required): `(x, y)` coordinates.
- `font_size` (int, default `12`)
- `font_color` (str, default `"0 0 0 rg"`)
- `font_name` (str, default `"F1"`)
- `font_file` (str, optional): Path to a `.ttf` file.
- `page_number` (int, default `0`)

## Example

```python Code
from crewai import Agent, Task, Crew
from crewai_tools import PDFTextWritingTool

tool = PDFTextWritingTool()

agent = Agent(
    role="PDF Editor",
    goal="Annotate PDFs",
    backstory="Documentation specialist",
    tools=[tool],
    verbose=True,
)

task = Task(
    description="Write 'CONFIDENTIAL' at (72, 720) on page 1 of ./sample.pdf",
    expected_output="Confirmation message",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,
)

result = crew.kickoff()
```

### Direct usage

```python Code
from crewai_tools import PDFTextWritingTool

PDFTextWritingTool().run(
    pdf_path="./input.pdf",
    text="CONFIDENTIAL",
    position=(72, 720),
    font_size=18,
    page_number=0,
)
```

## Tips

- The coordinate origin is the bottom-left corner of the page.
- If using a custom font (`font_file`), ensure it is a valid `.ttf`; see the sketch below.
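
A minimal sketch of embedding a custom TrueType font, assuming a local `./fonts/custom.ttf` exists (the path is illustrative):

```python Code
from crewai_tools import PDFTextWritingTool

PDFTextWritingTool().run(
    pdf_path="./input.pdf",
    text="DRAFT",
    position=(72, 700),
    font_size=14,
    font_file="./fonts/custom.ttf",  # embedded and used in place of the default font
    page_number=0,
)
```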

docs/pt-BR/tools/integration/bedrockinvokeagenttool.mdx (new file, 188 lines)

---
title: Bedrock Invoke Agent Tool
description: Enables CrewAI agents to invoke Amazon Bedrock Agents and leverage their capabilities within your workflows
icon: aws
mode: "wide"
---

# `BedrockInvokeAgentTool`

The `BedrockInvokeAgentTool` enables CrewAI agents to invoke Amazon Bedrock Agents and leverage their capabilities within your workflows.

## Installation

```bash
uv pip install 'crewai[tools]'
```

## Requirements

- AWS credentials configured (either through environment variables or the AWS CLI); see the sanity check below
- `boto3` and `python-dotenv` packages
- Access to Amazon Bedrock Agents
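
Before wiring the tool into a crew, a quick hedged sanity check that AWS credentials resolve, using plain `boto3` (the region is illustrative):

```python
import boto3

# Fails fast if no credentials are configured in the environment or an AWS CLI profile
sts = boto3.client("sts", region_name="us-west-2")
print(sts.get_caller_identity()["Account"])
```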

## Usage

Here's how to use the tool with a CrewAI agent:

```python {2, 4-8}
from crewai import Agent, Task, Crew
from crewai_tools.aws.bedrock.agents.invoke_agent_tool import BedrockInvokeAgentTool

# Initialize the tool
agent_tool = BedrockInvokeAgentTool(
    agent_id="your-agent-id",
    agent_alias_id="your-agent-alias-id"
)

# Create a CrewAI agent that uses the tool
aws_expert = Agent(
    role='AWS Service Expert',
    goal='Help users understand AWS services and quotas',
    backstory='I am an expert in AWS services and can provide detailed information about them.',
    tools=[agent_tool],
    verbose=True
)

# Create a task for the agent
quota_task = Task(
    description="Find out the current service quotas for EC2 in us-west-2 and explain any recent changes.",
    expected_output="A summary of current EC2 service quotas in us-west-2 and recent changes",  # expected_output is required by Task
    agent=aws_expert
)

# Create a crew with the agent
crew = Crew(
    agents=[aws_expert],
    tasks=[quota_task],
    verbose=True
)

# Run the crew
result = crew.kickoff()
print(result)
```

## Tool Arguments

| Argument | Type | Required | Default | Description |
|:---------|:-----|:---------|:--------|:------------|
| **agent_id** | `str` | Yes | None | The unique identifier of the Bedrock agent |
| **agent_alias_id** | `str` | Yes | None | The unique identifier of the agent alias |
| **session_id** | `str` | No | timestamp | The unique identifier of the session |
| **enable_trace** | `bool` | No | False | Whether to enable trace for debugging |
| **end_session** | `bool` | No | False | Whether to end the session after invocation |
| **description** | `str` | No | None | Custom description for the tool |

## Environment Variables

```bash
BEDROCK_AGENT_ID=your-agent-id                 # Alternative to passing agent_id
BEDROCK_AGENT_ALIAS_ID=your-agent-alias-id     # Alternative to passing agent_alias_id
AWS_REGION=your-aws-region                     # Defaults to us-west-2
AWS_ACCESS_KEY_ID=your-access-key              # Required for AWS authentication
AWS_SECRET_ACCESS_KEY=your-secret-key          # Required for AWS authentication
```

## Advanced Usage

### Multi-Agent Workflow with Session Management

```python {2, 4-22}
from crewai import Agent, Task, Crew, Process
from crewai_tools.aws.bedrock.agents.invoke_agent_tool import BedrockInvokeAgentTool

# Initialize tools with session management
initial_tool = BedrockInvokeAgentTool(
    agent_id="your-agent-id",
    agent_alias_id="your-agent-alias-id",
    session_id="custom-session-id"
)

followup_tool = BedrockInvokeAgentTool(
    agent_id="your-agent-id",
    agent_alias_id="your-agent-alias-id",
    session_id="custom-session-id"
)

final_tool = BedrockInvokeAgentTool(
    agent_id="your-agent-id",
    agent_alias_id="your-agent-alias-id",
    session_id="custom-session-id",
    end_session=True
)

# Create agents for different stages
researcher = Agent(
    role='AWS Service Researcher',
    goal='Gather information about AWS services',
    backstory='I am specialized in finding detailed AWS service information.',
    tools=[initial_tool]
)

analyst = Agent(
    role='Service Compatibility Analyst',
    goal='Analyze service compatibility and requirements',
    backstory='I analyze AWS services for compatibility and integration possibilities.',
    tools=[followup_tool]
)

summarizer = Agent(
    role='Technical Documentation Writer',
    goal='Create clear technical summaries',
    backstory='I specialize in creating clear, concise technical documentation.',
    tools=[final_tool]
)

# Create tasks
research_task = Task(
    description="Find all available AWS services in us-west-2 region.",
    expected_output="A list of available AWS services in us-west-2",
    agent=researcher
)

analysis_task = Task(
    description="Analyze which services support IPv6 and their implementation requirements.",
    expected_output="An analysis of IPv6 support across the listed services",
    agent=analyst
)

summary_task = Task(
    description="Create a summary of IPv6-compatible services and their key features.",
    expected_output="A concise summary of IPv6-compatible services and key features",
    agent=summarizer
)

# Create a crew with the agents and tasks
crew = Crew(
    agents=[researcher, analyst, summarizer],
    tasks=[research_task, analysis_task, summary_task],
    process=Process.sequential,
    verbose=True
)

# Run the crew
result = crew.kickoff()
```

## Use Cases

### Hybrid Multi-Agent Collaborations
- Create workflows where CrewAI agents collaborate with managed Bedrock agents running as services in AWS
- Enable scenarios where sensitive data processing happens within your AWS environment while other agents operate externally
- Bridge on-premises CrewAI agents with cloud-based Bedrock agents for distributed intelligence workflows

### Data Sovereignty and Compliance
- Keep data-sensitive agentic workflows within your AWS environment while allowing external CrewAI agents to orchestrate tasks
- Maintain compliance with data residency requirements by processing sensitive information only within your AWS account
- Enable secure multi-agent collaborations where some agents cannot access your organization's private data

### Seamless AWS Service Integration
- Access any AWS service through Amazon Bedrock Actions without writing complex integration code
- Enable CrewAI agents to interact with AWS services through natural language requests
- Leverage pre-built Bedrock agent capabilities to interact with AWS services like Bedrock Knowledge Bases, Lambda, and more

### Scalable Hybrid Agent Architectures
- Offload computationally intensive tasks to managed Bedrock agents while lightweight tasks run in CrewAI
- Scale agent processing by distributing workloads between local CrewAI agents and cloud-based Bedrock agents

### Cross-Organizational Agent Collaboration
- Enable secure collaboration between your organization's CrewAI agents and partner organizations' Bedrock agents
- Create workflows where external expertise from Bedrock agents can be incorporated without exposing sensitive data
- Build agent ecosystems that span organizational boundaries while maintaining security and data control

docs/pt-BR/tools/integration/crewaiautomationtool.mdx (new file, 276 lines)

---
title: CrewAI Run Automation Tool
description: Enables CrewAI agents to invoke CrewAI Platform automations and leverage external crew services within your workflows.
icon: robot
---

# `InvokeCrewAIAutomationTool`

The `InvokeCrewAIAutomationTool` integrates with the CrewAI Platform API to reach external crew services. It lets you invoke CrewAI Platform automations from within your CrewAI agents, enabling seamless integration between different crew workflows.

## Installation

```bash
uv pip install 'crewai[tools]'
```

## Requirements

- CrewAI Platform API access
- A valid bearer token for authentication
- Network access to CrewAI Platform automation endpoints

## Usage

Here's how to use the tool with a CrewAI agent:

```python {2, 4-9}
from crewai import Agent, Task, Crew
from crewai_tools import InvokeCrewAIAutomationTool

# Initialize the tool
automation_tool = InvokeCrewAIAutomationTool(
    crew_api_url="https://data-analysis-crew-[...].crewai.com",
    crew_bearer_token="your_bearer_token_here",
    crew_name="Data Analysis Crew",
    crew_description="Analyzes data and generates insights"
)

# Create a CrewAI agent that uses the tool
automation_coordinator = Agent(
    role='Automation Coordinator',
    goal='Coordinate and execute automated crew tasks',
    backstory='I am an expert at leveraging automation tools to execute complex workflows.',
    tools=[automation_tool],
    verbose=True
)

# Create a task for the agent
analysis_task = Task(
    description="Execute data analysis automation and provide insights",
    agent=automation_coordinator,
    expected_output="Comprehensive data analysis report"
)

# Create a crew with the agent
crew = Crew(
    agents=[automation_coordinator],
    tasks=[analysis_task],
    verbose=True
)

# Run the crew
result = crew.kickoff()
print(result)
```

## Tool Arguments

| Argument | Type | Required | Default | Description |
|:---------|:-----|:---------|:--------|:------------|
| **crew_api_url** | `str` | Yes | None | Base URL of the CrewAI Platform automation API |
| **crew_bearer_token** | `str` | Yes | None | Bearer token for API authentication |
| **crew_name** | `str` | Yes | None | Name of the crew automation |
| **crew_description** | `str` | Yes | None | Description of what the crew automation does |
| **max_polling_time** | `int` | No | 600 | Maximum time in seconds to wait for task completion |
| **crew_inputs** | `dict` | No | None | Dictionary defining custom input schema fields |

## Environment Variables

```bash
CREWAI_API_URL=https://your-crew-automation.crewai.com   # Alternative to passing crew_api_url
CREWAI_BEARER_TOKEN=your_bearer_token_here               # Alternative to passing crew_bearer_token
```

## Advanced Usage

### Custom Input Schema with Dynamic Parameters

```python {2, 4-15}
from crewai import Agent, Task, Crew
from crewai_tools import InvokeCrewAIAutomationTool
from pydantic import Field

# Define custom input schema
custom_inputs = {
    "year": Field(..., description="Year to retrieve the report for (integer)"),
    "region": Field(default="global", description="Geographic region for analysis"),
    "format": Field(default="summary", description="Report format (summary, detailed, raw)")
}

# Create tool with custom inputs
market_research_tool = InvokeCrewAIAutomationTool(
    crew_api_url="https://state-of-ai-report-crew-[...].crewai.com",
    crew_bearer_token="your_bearer_token_here",
    crew_name="State of AI Report",
    crew_description="Retrieves a comprehensive report on the state of AI for a given year and region",
    crew_inputs=custom_inputs,
    max_polling_time=15 * 60  # 15-minute timeout
)

# Create an agent with the tool
research_agent = Agent(
    role="Research Coordinator",
    goal="Coordinate and execute market research tasks",
    backstory="You are an expert at coordinating research tasks and leveraging automation tools.",
    tools=[market_research_tool],
    verbose=True
)

# Create and execute a task with custom parameters
research_task = Task(
    description="Conduct market research on the AI tools market for 2024 in North America with detailed format",
    agent=research_agent,
    expected_output="Comprehensive market research report"
)

crew = Crew(
    agents=[research_agent],
    tasks=[research_task]
)

result = crew.kickoff()
```

### Multi-Stage Automation Workflow

```python {2, 4-35}
from crewai import Agent, Task, Crew, Process
from crewai_tools import InvokeCrewAIAutomationTool

# Initialize different automation tools
data_collection_tool = InvokeCrewAIAutomationTool(
    crew_api_url="https://data-collection-crew-[...].crewai.com",
    crew_bearer_token="your_bearer_token_here",
    crew_name="Data Collection Automation",
    crew_description="Collects and preprocesses raw data"
)

analysis_tool = InvokeCrewAIAutomationTool(
    crew_api_url="https://analysis-crew-[...].crewai.com",
    crew_bearer_token="your_bearer_token_here",
    crew_name="Analysis Automation",
    crew_description="Performs advanced data analysis and modeling"
)

reporting_tool = InvokeCrewAIAutomationTool(
    crew_api_url="https://reporting-crew-[...].crewai.com",
    crew_bearer_token="your_bearer_token_here",
    crew_name="Reporting Automation",
    crew_description="Generates comprehensive reports and visualizations"
)

# Create specialized agents
data_collector = Agent(
    role='Data Collection Specialist',
    goal='Gather and preprocess data from various sources',
    backstory='I specialize in collecting and cleaning data from multiple sources.',
    tools=[data_collection_tool]
)

data_analyst = Agent(
    role='Data Analysis Expert',
    goal='Perform advanced analysis on collected data',
    backstory='I am an expert in statistical analysis and machine learning.',
    tools=[analysis_tool]
)

report_generator = Agent(
    role='Report Generation Specialist',
    goal='Create comprehensive reports and visualizations',
    backstory='I excel at creating clear, actionable reports from complex data.',
    tools=[reporting_tool]
)

# Create sequential tasks
collection_task = Task(
    description="Collect market data for Q4 2024 analysis",
    expected_output="A cleaned Q4 2024 market dataset ready for analysis",  # expected_output is required by Task
    agent=data_collector
)

analysis_task = Task(
    description="Analyze collected data to identify trends and patterns",
    expected_output="Identified trends and patterns in the collected data",
    agent=data_analyst
)

reporting_task = Task(
    description="Generate executive summary report with key insights and recommendations",
    expected_output="An executive summary report with key insights and recommendations",
    agent=report_generator
)

# Create a crew with sequential processing
crew = Crew(
    agents=[data_collector, data_analyst, report_generator],
    tasks=[collection_task, analysis_task, reporting_task],
    process=Process.sequential,
    verbose=True
)

result = crew.kickoff()
```

## Use Cases

### Distributed Crew Orchestration
- Coordinate multiple specialized crew automations to handle complex, multi-stage workflows
- Enable seamless handoffs between different automation services for comprehensive task execution
- Scale processing by distributing workloads across multiple CrewAI Platform automations

### Cross-Platform Integration
- Bridge CrewAI agents with CrewAI Platform automations for hybrid local-cloud workflows
- Leverage specialized automations while maintaining local control and orchestration
- Enable secure collaboration between local agents and cloud-based automation services

### Enterprise Automation Pipelines
- Create enterprise-grade automation pipelines that combine local intelligence with cloud processing power
- Implement complex business workflows that span multiple automation services
- Enable scalable, repeatable processes for data analysis, reporting, and decision-making

### Dynamic Workflow Composition
- Dynamically compose workflows by chaining different automation services based on task requirements
- Enable adaptive processing where the choice of automation depends on data characteristics or business rules
- Create flexible, reusable automation components that can be combined in various ways

### Specialized Domain Processing
- Access domain-specific automations (financial analysis, legal research, technical documentation) from general-purpose agents
- Leverage pre-built, specialized crew automations without rebuilding complex domain logic
- Enable agents to access expert-level capabilities through targeted automation services

## Custom Input Schema

When defining `crew_inputs`, use Pydantic `Field` objects to specify the input parameters:

```python
from pydantic import Field

crew_inputs = {
    "required_param": Field(..., description="This parameter is required"),
    "optional_param": Field(default="default_value", description="This parameter is optional"),
    "typed_param": Field(..., description="Integer parameter", ge=1, le=100)  # with validation
}
```

## Error Handling

The tool provides comprehensive error handling for common scenarios:

- **API Connection Errors**: Network connectivity issues with the CrewAI Platform
- **Authentication Errors**: Invalid or expired bearer tokens
- **Timeout Errors**: Tasks that exceed the maximum polling time
- **Task Failures**: Crew automations that fail during execution
- **Input Validation Errors**: Invalid parameters passed to automation endpoints

## API Endpoints

The tool interacts with two main API endpoints, sketched below:

- `POST {crew_api_url}/kickoff`: Starts a new crew automation task
- `GET {crew_api_url}/status/{crew_id}`: Checks the status of a running task
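
A hedged sketch of the request/poll cycle those endpoints imply, using plain `requests`; the payload shape, response field names, and status values are illustrative assumptions, not the documented wire format:

```python
import time
import requests

base_url = "https://your-crew-automation.crewai.com"
headers = {"Authorization": "Bearer your_bearer_token_here"}

# Start the automation (input payload shape is an assumption)
kickoff = requests.post(f"{base_url}/kickoff", json={"inputs": {}}, headers=headers)
crew_id = kickoff.json()["kickoff_id"]  # field name is an assumption

# Poll once per second, as the Notes below describe
while True:
    status = requests.get(f"{base_url}/status/{crew_id}", headers=headers).json()
    if status.get("state") not in ("PENDING", "RUNNING"):  # illustrative states
        break
    time.sleep(1)

print(status)
```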

## Notes

- The tool automatically polls the status endpoint every second until completion or timeout
- Successful tasks return the result directly, while failed tasks return error information
- Bearer tokens should be kept secure and not hardcoded in production environments
- Consider using environment variables for sensitive configuration like bearer tokens
- Custom input schemas must be compatible with the target crew automation's expected parameters

docs/pt-BR/tools/integration/overview.mdx (new file, 72 lines)

---
title: "Overview"
description: "Connect CrewAI agents with external automations and managed AI services"
icon: "plug"
mode: "wide"
---

Integration tools let your agents delegate work to other automation platforms and managed AI services. Use them when a flow needs to invoke an automation already published on the CrewAI Platform, or when tasks should be routed to specialized providers such as Amazon Bedrock.

## **Available Tools**

<CardGroup cols={2}>
  <Card title="CrewAI Run Automation Tool" icon="robot" href="/pt-BR/tools/integration/crewaiautomationtool">
    Invoke live CrewAI Platform automations, send custom inputs, and track the result directly from your agent.
  </Card>

  <Card title="Bedrock Invoke Agent Tool" icon="aws" href="/pt-BR/tools/integration/bedrockinvokeagenttool">
    Reach Amazon Bedrock agents from your crews, reuse existing AWS guardrails, and bring the responses back into the current flow.
  </Card>
</CardGroup>
## **Common Use Cases**

- **Chain automations**: Kick off one CrewAI automation from another crew or flow
- **Enterprise hand-off**: Route tasks to Bedrock agents that already embed internal policies and guardrails
- **Hybrid workflows**: Combine CrewAI reasoning with external systems that expose agent APIs
- **Long-running tasks**: Poll external automations and merge the final result into the current run

## **Quick Start Example**

```python
from crewai import Agent, Task, Crew
from crewai_tools import InvokeCrewAIAutomationTool
from crewai_tools.aws.bedrock.agents.invoke_agent_tool import BedrockInvokeAgentTool

# External automation
analysis_automation = InvokeCrewAIAutomationTool(
    crew_api_url="https://analysis-crew.acme.crewai.com",
    crew_bearer_token="YOUR_BEARER_TOKEN",
    crew_name="Analysis Automation",
    crew_description="Runs the production analysis pipeline",
)

# Managed agent on Bedrock
knowledge_router = BedrockInvokeAgentTool(
    agent_id="bedrock-agent-id",
    agent_alias_id="prod",
)

automation_strategist = Agent(
    role="Automation Strategist",
    goal="Orchestrate external automations and summarize the results",
    backstory="You coordinate enterprise flows and know when to delegate tasks to specialized services.",
    tools=[analysis_automation, knowledge_router],
    verbose=True,
)

execute_playbook = Task(
    description="Run the analysis automation and ask the Bedrock agent for key talking points for the board.",
    expected_output="A board-ready summary of the analysis results",  # expected_output is required by Task
    agent=automation_strategist,
)

Crew(agents=[automation_strategist], tasks=[execute_playbook]).kickoff()
```

## **Best Practices**

- **Protect credentials**: Store tokens and API keys in environment variables or a secrets manager
- **Plan for latency**: External automations can take longer; configure appropriate intervals and timeouts
- **Reuse sessions**: Bedrock agents accept session IDs to keep context across calls
- **Validate responses**: Normalize remote output (JSON, text, status codes) before passing it to later steps
- **Monitor usage**: Watch logs in the CrewAI Platform or AWS CloudWatch to avoid quota overruns and failures

docs/pt-BR/tools/search-research/arxivpapertool.mdx (new file, 111 lines)

---
title: Arxiv Paper Tool
description: The `ArxivPaperTool` searches arXiv for papers matching a query and optionally downloads PDFs.
icon: box-archive
mode: "wide"
---

# `ArxivPaperTool`

## Description

The `ArxivPaperTool` queries the arXiv API for academic papers and returns compact, readable results. It can also optionally download PDFs to disk.

## Installation

This tool has no special installation beyond `crewai-tools`.

```shell
uv add crewai-tools
```

No API key is required. This tool uses the public arXiv Atom API.

## Steps to Get Started

1. Initialize the tool.
2. Provide a `search_query` (e.g., "transformer neural network").
3. Optionally set `max_results` (1–100) and enable PDF downloads in the constructor.

## Example

```python Code
from crewai import Agent, Task, Crew
from crewai_tools import ArxivPaperTool

tool = ArxivPaperTool(
    download_pdfs=False,
    save_dir="./arxiv_pdfs",
    use_title_as_filename=True,
)

agent = Agent(
    role="Researcher",
    goal="Find relevant arXiv papers",
    backstory="Expert at literature discovery",
    tools=[tool],
    verbose=True,
)

task = Task(
    description="Search arXiv for 'transformer neural network' and list top 5 results.",
    expected_output="A concise list of 5 relevant papers with titles, links, and summaries.",
    agent=agent,
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
```

### Direct usage (without an Agent)

```python Code
from crewai_tools import ArxivPaperTool

tool = ArxivPaperTool(
    download_pdfs=True,
    save_dir="./arxiv_pdfs",
)
print(tool.run(search_query="mixture of experts", max_results=3))
```

## Parameters

### Initialization Parameters

- `download_pdfs` (bool, default `False`): Whether to download PDFs.
- `save_dir` (str, default `./arxiv_pdfs`): Directory to save PDFs.
- `use_title_as_filename` (bool, default `False`): Use paper titles for filenames.

### Run Parameters

- `search_query` (str, required): The arXiv search query.
- `max_results` (int, default `5`, range 1–100): Number of results.

## Output format

The tool returns a human-readable list of papers with:
- Title
- Link (abs page)
- Snippet/summary (truncated)

When `download_pdfs=True`, PDFs are saved to `save_dir` and the summary mentions the saved files.

## Troubleshooting

Network issues, invalid XML, and OS errors are handled with informative messages.

- If you receive a network timeout, retry or reduce `max_results`.
- Invalid XML errors indicate an arXiv response parse issue; try a simpler query.
- File system errors (e.g., permission denied) can occur when saving PDFs; ensure `save_dir` is writable.

## Related links

- arXiv API docs: https://info.arxiv.org/help/api/index.html

docs/pt-BR/tools/search-research/databricks-query-tool.mdx (new file, 79 lines)

---
title: Databricks SQL Query Tool
description: The `DatabricksQueryTool` executes SQL queries against Databricks workspace tables.
icon: trowel-bricks
mode: "wide"
---

# `DatabricksQueryTool`

## Description

Run SQL against Databricks workspace tables, authenticating with either a CLI profile or a direct host/token pair.

## Installation

```shell
uv add 'crewai-tools[databricks-sdk]'
```

## Environment Variables

- `DATABRICKS_CONFIG_PROFILE` or (`DATABRICKS_HOST` + `DATABRICKS_TOKEN`)

Create a personal access token and find host details in the Databricks workspace under User Settings → Developer.
Docs: https://docs.databricks.com/en/dev-tools/auth/pat.html

## Example

```python Code
from crewai import Agent, Task, Crew
from crewai_tools import DatabricksQueryTool

tool = DatabricksQueryTool(
    default_catalog="main",
    default_schema="default",
)

agent = Agent(
    role="Data Analyst",
    goal="Query Databricks",
    backstory="SQL analytics specialist",  # backstory is required by Agent
    tools=[tool],
    verbose=True,
)

task = Task(
    description="SELECT * FROM my_table LIMIT 10",
    expected_output="10 rows",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,
)
result = crew.kickoff()

print(result)

## Parameters

- `query` (required): SQL query to execute
- `catalog` (optional): Override the default catalog
- `db_schema` (optional): Override the default schema
- `warehouse_id` (optional): Override the default SQL warehouse
- `row_limit` (optional): Maximum rows to return (default: 1000)

A direct-invocation sketch using these parameters follows below.
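
A minimal sketch of calling the tool directly, assuming `tool.run(...)` accepts the run parameters listed above by keyword:

```python Code
from crewai_tools import DatabricksQueryTool

tool = DatabricksQueryTool(default_catalog="main", default_schema="default")
print(tool.run(query="SELECT 1", row_limit=10))  # small sanity query
```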
## Defaults on initialization

- `default_catalog`
- `default_schema`
- `default_warehouse_id`

### Error handling & tips

- Authentication errors: verify `DATABRICKS_HOST` begins with `https://` and the token is valid.
- Permissions: ensure your SQL warehouse and schema are accessible with your token.
- Limits: avoid long-running queries in agent loops; add filters and limits.

docs/pt-BR/tools/search-research/serpapi-googlesearchtool.mdx (new file, 64 lines)

---
title: SerpApi Google Search Tool
description: The `SerpApiGoogleSearchTool` performs Google searches using the SerpApi service.
icon: google
mode: "wide"
---

# `SerpApiGoogleSearchTool`

## Description

Use the `SerpApiGoogleSearchTool` to run Google searches with SerpApi and retrieve structured results. Requires a SerpApi API key.

## Installation

```shell
uv add 'crewai-tools[serpapi]'
```

## Environment Variables

- `SERPAPI_API_KEY` (required): API key for SerpApi. Create one at https://serpapi.com/ (free tier available).

## Example

```python Code
from crewai import Agent, Task, Crew
from crewai_tools import SerpApiGoogleSearchTool

tool = SerpApiGoogleSearchTool()

agent = Agent(
    role="Researcher",
    goal="Answer questions using Google search",
    backstory="Search specialist",
    tools=[tool],
    verbose=True,
)

task = Task(
    description="Search for the latest CrewAI releases",
    expected_output="A concise list of relevant results with titles and links",
    agent=agent,
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
```

## Parameters

### Run Parameters

- `search_query` (str, required): The Google query.
- `location` (str, optional): Geographic location parameter.
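
A minimal sketch of direct invocation using the run parameters above (the location string is illustrative):

```python Code
from crewai_tools import SerpApiGoogleSearchTool

tool = SerpApiGoogleSearchTool()  # reads SERPAPI_API_KEY from the environment
print(tool.run(search_query="latest CrewAI releases", location="Austin, Texas"))
```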

## Notes

- This tool wraps SerpApi and returns structured search results.
- Set `SERPAPI_API_KEY` in the environment. Create a key at https://serpapi.com/
- See also Google Shopping via SerpApi: `/en/tools/search-research/serpapi-googleshoppingtool`

docs/pt-BR/tools/search-research/serpapi-googleshoppingtool.mdx (new file, 60 lines)

---
title: SerpApi Google Shopping Tool
description: The `SerpApiGoogleShoppingTool` searches Google Shopping results using SerpApi.
icon: cart-shopping
mode: "wide"
---

# `SerpApiGoogleShoppingTool`

## Description

Use the `SerpApiGoogleShoppingTool` to query Google Shopping via SerpApi and retrieve product-oriented results.

## Installation

```shell
uv add 'crewai-tools[serpapi]'
```

## Environment Variables

- `SERPAPI_API_KEY` (required): API key for SerpApi. Create one at https://serpapi.com/ (free tier available).

## Example

```python Code
from crewai import Agent, Task, Crew
from crewai_tools import SerpApiGoogleShoppingTool

tool = SerpApiGoogleShoppingTool()

agent = Agent(
    role="Shopping Researcher",
    goal="Find relevant products",
    backstory="Expert in product search",
    tools=[tool],
    verbose=True,
)

task = Task(
    description="Search Google Shopping for 'wireless noise-canceling headphones'",
    expected_output="Top relevant products with titles and links",
    agent=agent,
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
```

## Notes

- Set `SERPAPI_API_KEY` in the environment. Create a key at https://serpapi.com/
- See also Google Web Search via SerpApi: `/en/tools/search-research/serpapi-googlesearchtool`

## Parameters

### Run Parameters

- `search_query` (str, required): Product search query.
- `location` (str, optional): Geographic location parameter.

docs/pt-BR/tools/search-research/tavilyextractortool.mdx (new file, 140 lines)

---
title: "Tavily Extractor Tool"
description: "Extract structured content from web pages using the Tavily API"
icon: square-poll-horizontal
mode: "wide"
---

The `TavilyExtractorTool` allows CrewAI agents to extract structured content from web pages using the Tavily API. It can process single URLs or lists of URLs and provides options for controlling the extraction depth and including images.

## Installation

To use the `TavilyExtractorTool`, you need to install the `tavily-python` library:

```shell
pip install 'crewai[tools]' tavily-python
```

You also need to set your Tavily API key as an environment variable:

```bash
export TAVILY_API_KEY='your-tavily-api-key'
```

## Example Usage

Here's how to initialize and use the `TavilyExtractorTool` within a CrewAI agent:

```python
import os
from crewai import Agent, Task, Crew
from crewai_tools import TavilyExtractorTool

# Ensure TAVILY_API_KEY is set in your environment
# os.environ["TAVILY_API_KEY"] = "YOUR_API_KEY"

# Initialize the tool
tavily_tool = TavilyExtractorTool()

# Create an agent that uses the tool
extractor_agent = Agent(
    role='Web Content Extractor',
    goal='Extract key information from specified web pages',
    backstory='You are an expert at extracting relevant content from websites using the Tavily API.',
    tools=[tavily_tool],
    verbose=True
)

# Define a task for the agent
extract_task = Task(
    description='Extract the main content from the URL https://example.com using basic extraction depth.',
    expected_output='A JSON string containing the extracted content from the URL.',
    agent=extractor_agent
)

# Create and run the crew
crew = Crew(
    agents=[extractor_agent],
    tasks=[extract_task],
    verbose=True
)

result = crew.kickoff()
print(result)
```
|
||||
|
||||
## Configuration Options
|
||||
|
||||
The `TavilyExtractorTool` accepts the following arguments:
|
||||
|
||||
- `urls` (Union[List[str], str]): **Required**. A single URL string or a list of URL strings to extract data from.
|
||||
- `include_images` (Optional[bool]): Whether to include images in the extraction results. Defaults to `False`.
|
||||
- `extract_depth` (Literal["basic", "advanced"]): The depth of extraction. Use `"basic"` for faster, surface-level extraction or `"advanced"` for more comprehensive extraction. Defaults to `"basic"`.
|
||||
- `timeout` (int): The maximum time in seconds to wait for the extraction request to complete. Defaults to `60`.
|
||||
|
||||
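
The same arguments can also be passed at call time when using the tool outside an agent. A minimal sketch, assuming `run` accepts the arguments listed above as keyword arguments:

```python
from crewai_tools import TavilyExtractorTool

tool = TavilyExtractorTool()

# Single URL with fast, surface-level extraction
single = tool.run(urls="https://example.com", extract_depth="basic")

# Several URLs in one request, with images included
batch = tool.run(
    urls=["https://example.com", "https://anotherexample.org"],
    extract_depth="advanced",
    include_images=True,
)
print(batch)
```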

## Advanced Usage

### Multiple URLs with Advanced Extraction

```python
# Example with multiple URLs and advanced extraction
multi_extract_task = Task(
    description='Extract content from https://example.com and https://anotherexample.org using advanced extraction.',
    expected_output='A JSON string containing the extracted content from both URLs.',
    agent=extractor_agent
)

# Configure the tool with custom parameters
custom_extractor = TavilyExtractorTool(
    extract_depth='advanced',
    include_images=True,
    timeout=120
)

agent_with_custom_tool = Agent(
    role="Advanced Content Extractor",
    goal="Extract comprehensive content with images",
    tools=[custom_extractor]
)
```

### Tool Parameters

You can customize the tool's behavior by setting parameters during initialization:

```python
# Initialize with custom configuration
extractor_tool = TavilyExtractorTool(
    extract_depth='advanced',  # More comprehensive extraction
    include_images=True,       # Include image results
    timeout=90                 # Custom timeout
)
```

## Features

- **Single or Multiple URLs**: Extract content from one URL or process multiple URLs in a single request
- **Configurable Depth**: Choose between basic (fast) and advanced (comprehensive) extraction modes
- **Image Support**: Optionally include images in the extraction results
- **Structured Output**: Returns well-formatted JSON containing the extracted content
- **Error Handling**: Robust handling of network timeouts and extraction errors

## Response Format

The tool returns a JSON string representing the structured data extracted from the provided URL(s). The exact structure depends on the content of the pages and the `extract_depth` used.

Common response elements include:

- **Title**: The page title
- **Content**: Main text content of the page
- **Images**: Image URLs and metadata (when `include_images=True`)
- **Metadata**: Additional page information like author, description, etc.
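
Since the tool returns a JSON string, downstream code usually parses it first. A minimal sketch, assuming the response carries a top-level `results` list with `url` and `raw_content` fields (key names are assumptions based on the elements above):

```python
import json

# 'result' is the JSON string returned by the tool (or crew run).
# Keys vary with the page and extract_depth; adjust as needed.
data = json.loads(str(result))
for page in data.get("results", []):
    print(page.get("url"), "-", str(page.get("raw_content", ""))[:100])
```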

## Use Cases

- **Content Analysis**: Extract and analyze content from competitor websites
- **Research**: Gather structured data from multiple sources for analysis
- **Content Migration**: Extract content from existing websites for migration
- **Monitoring**: Regular extraction of content for change detection
- **Data Collection**: Systematic extraction of information from web sources

Refer to the [Tavily API documentation](https://docs.tavily.com/docs/tavily-api/python-sdk#extract) for detailed information about the response structure and available options.
125
docs/pt-BR/tools/search-research/tavilysearchtool.mdx
Normal file
@@ -0,0 +1,125 @@
---
title: "Tavily Search Tool"
description: "Perform comprehensive web searches using the Tavily Search API"
icon: "magnifying-glass"
mode: "wide"
---

The `TavilySearchTool` provides an interface to the Tavily Search API, enabling CrewAI agents to perform comprehensive web searches. It allows for specifying search depth, topics, time ranges, included/excluded domains, and whether to include direct answers, raw content, or images in the results.

## Installation

To use the `TavilySearchTool`, you need to install the `tavily-python` library:

```shell
pip install 'crewai[tools]' tavily-python
```

## Environment Variables

Ensure your Tavily API key is set as an environment variable:

```bash
export TAVILY_API_KEY='your_tavily_api_key'
```

Get an API key at https://app.tavily.com/ (sign up, then create a key).

## Example Usage

Here's how to initialize and use the `TavilySearchTool` within a CrewAI agent:

```python
import os
from crewai import Agent, Task, Crew
from crewai_tools import TavilySearchTool

# Ensure the TAVILY_API_KEY environment variable is set
# os.environ["TAVILY_API_KEY"] = "YOUR_TAVILY_API_KEY"

# Initialize the tool
tavily_tool = TavilySearchTool()

# Create an agent that uses the tool
researcher = Agent(
    role='Market Researcher',
    goal='Find information about the latest AI trends',
    backstory='An expert market researcher specializing in technology.',
    tools=[tavily_tool],
    verbose=True
)

# Create a task for the agent
research_task = Task(
    description='Search for the top 3 AI trends in 2024.',
    expected_output='A JSON report summarizing the top 3 AI trends found.',
    agent=researcher
)

# Form the crew and kick it off
crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    verbose=True
)

result = crew.kickoff()
print(result)
```

## Configuration Options

The `TavilySearchTool` accepts the following arguments during initialization or when calling the `run` method:

- `query` (str): **Required**. The search query string.
- `search_depth` (Literal["basic", "advanced"], optional): The depth of the search. Defaults to `"basic"`.
- `topic` (Literal["general", "news", "finance"], optional): The topic to focus the search on. Defaults to `"general"`.
- `time_range` (Literal["day", "week", "month", "year"], optional): The time range for the search. Defaults to `None`.
- `days` (int, optional): The number of days to search back. Relevant if `time_range` is not set. Defaults to `7`.
- `max_results` (int, optional): The maximum number of search results to return. Defaults to `5`.
- `include_domains` (Sequence[str], optional): A list of domains to prioritize in the search. Defaults to `None`.
- `exclude_domains` (Sequence[str], optional): A list of domains to exclude from the search. Defaults to `None`.
- `include_answer` (Union[bool, Literal["basic", "advanced"]], optional): Whether to include a direct answer synthesized from the search results. Defaults to `False`.
- `include_raw_content` (bool, optional): Whether to include the raw HTML content of the searched pages. Defaults to `False`.
- `include_images` (bool, optional): Whether to include image results. Defaults to `False`.
- `timeout` (int, optional): The request timeout in seconds. Defaults to `60`.
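
When calling the tool directly, the same arguments go to `run`. A minimal sketch, assuming they are accepted as keyword arguments (only `query` is required):

```python
from crewai_tools import TavilySearchTool

tool = TavilySearchTool()

results = tool.run(
    query="latest AI agent frameworks",
    topic="news",
    time_range="week",
    max_results=3,
    include_answer=True,
)
print(results)
```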

## Advanced Usage

You can configure the tool with custom parameters:

```python
# Example: Initialize with specific parameters
custom_tavily_tool = TavilySearchTool(
    search_depth='advanced',
    max_results=10,
    include_answer=True
)

# The agent will use these defaults
agent_with_custom_tool = Agent(
    role="Advanced Researcher",
    goal="Conduct detailed research with comprehensive results",
    tools=[custom_tavily_tool]
)
```

## Features

- **Comprehensive Search**: Access to Tavily's powerful search index
- **Configurable Depth**: Choose between basic and advanced search modes
- **Topic Filtering**: Focus searches on general, news, or finance topics
- **Time Range Control**: Limit results to specific time periods
- **Domain Control**: Include or exclude specific domains
- **Direct Answers**: Get synthesized answers from search results
- **Content Filtering**: Prevent context window issues with automatic content truncation

## Response Format

The tool returns search results as a JSON string containing:

- Search results with titles, URLs, and content snippets
- Optional direct answers to queries
- Optional image results
- Optional raw HTML content (when enabled)

Content for each result is automatically truncated to prevent context window issues while maintaining the most relevant information.
30
docs/pt-BR/tools/tool-integrations/overview.mdx
Normal file
@@ -0,0 +1,30 @@
---
title: Overview
description: Integrations for deploying and automating crews with external platforms
icon: face-smile
mode: "wide"
---

## Available Integrations

<CardGroup cols={2}>
  <Card
    title="Bedrock Invoke Agent Tool"
    icon="cloud"
    href="/en/tools/tool-integrations/bedrockinvokeagenttool"
    color="#0891B2"
  >
    Invoke Amazon Bedrock Agents from CrewAI to orchestrate actions across AWS services.
  </Card>

  <Card
    title="CrewAI Automation Tool"
    icon="bolt"
    href="/en/tools/tool-integrations/crewaiautomationtool"
    color="#7C3AED"
  >
    Automate deployment and operations by integrating CrewAI with external platforms and workflows.
  </Card>
</CardGroup>

Use these integrations to connect CrewAI with your infrastructure and workflows.
110
docs/pt-BR/tools/web-scraping/brightdata-tools.mdx
Normal file
@@ -0,0 +1,110 @@
---
title: Bright Data Tools
description: Bright Data integrations for SERP search, Web Unlocker scraping, and Dataset API.
icon: spider
mode: "wide"
---

# Bright Data Tools

This set of tools integrates Bright Data services for web extraction.

## Installation

```shell
uv add crewai-tools requests aiohttp
```

## Environment Variables

- `BRIGHT_DATA_API_KEY` (required)
- `BRIGHT_DATA_ZONE` (required for the SERP and Web Unlocker tools)

Create credentials at https://brightdata.com/ (sign up, then create an API token and zone).
See the developer docs: https://developers.brightdata.com/
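
Both are read from the environment, for example:

```shell
export BRIGHT_DATA_API_KEY='your-bright-data-api-key'
export BRIGHT_DATA_ZONE='your-zone-name'
```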

## Included Tools

- `BrightDataSearchTool`: SERP search (Google/Bing/Yandex) with geo/language/device options.
- `BrightDataWebUnlockerTool`: Scrape pages with anti-bot bypass and rendering.
- `BrightDataDatasetTool`: Run Dataset API jobs and fetch results.

## Examples

### SERP Search

```python Code
from crewai_tools import BrightDataSearchTool

tool = BrightDataSearchTool(
    query="CrewAI",
    country="us",
)

print(tool.run())
```

### Web Unlocker

```python Code
from crewai_tools import BrightDataWebUnlockerTool

tool = BrightDataWebUnlockerTool(
    url="https://example.com",
    format="markdown",
)

print(tool.run())
```

### Dataset API

```python Code
from crewai_tools import BrightDataDatasetTool

tool = BrightDataDatasetTool(
    dataset_type="ecommerce",
    url="https://example.com/product",
)

print(tool.run())
```

## Troubleshooting

- 401/403 errors: verify `BRIGHT_DATA_API_KEY` and `BRIGHT_DATA_ZONE`.
- Empty or blocked content: enable rendering or try a different zone.

## Crew Example

```python Code
from crewai import Agent, Task, Crew
from crewai_tools import BrightDataSearchTool

tool = BrightDataSearchTool(
    query="CrewAI",
    country="us",
)

agent = Agent(
    role="Web Researcher",
    goal="Search with Bright Data",
    backstory="Finds reliable results",
    tools=[tool],
    verbose=True,
)

task = Task(
    description="Search for CrewAI and summarize top results",
    expected_output="Short summary with links",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,
)

result = crew.kickoff()
```
101
docs/pt-BR/tools/web-scraping/serperscrapewebsitetool.mdx
Normal file
@@ -0,0 +1,101 @@
---
title: Serper Scrape Website
description: The `SerperScrapeWebsiteTool` is designed to scrape websites and extract clean, readable content using Serper's scraping API.
icon: globe
mode: "wide"
---

# `SerperScrapeWebsiteTool`

## Description

This tool is designed to scrape website content and extract clean, readable text from any website URL. It utilizes the [serper.dev](https://serper.dev) scraping API to fetch and process web pages, optionally including markdown formatting for better structure and readability.

## Installation

To effectively use the `SerperScrapeWebsiteTool`, follow these steps:

1. **Package Installation**: Confirm that the `crewai[tools]` package is installed in your Python environment.
2. **API Key Acquisition**: Acquire a `serper.dev` API key by registering for an account at `serper.dev`.
3. **Environment Configuration**: Store your obtained API key in an environment variable named `SERPER_API_KEY` to facilitate its use by the tool.

To incorporate this tool into your project, follow the installation instructions below:

```shell
pip install 'crewai[tools]'
```
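
Then export the key so the tool can read it:

```shell
export SERPER_API_KEY='your-serper-api-key'
```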

## Example

The following example demonstrates how to initialize the tool and scrape a website:

```python Code
from crewai_tools import SerperScrapeWebsiteTool

# Initialize the tool for website scraping capabilities
tool = SerperScrapeWebsiteTool()

# Scrape a website with markdown formatting
result = tool.run(url="https://example.com", include_markdown=True)
```

## Arguments

The `SerperScrapeWebsiteTool` accepts the following arguments:

- **url**: Required. The URL of the website to scrape.
- **include_markdown**: Optional. Whether to include markdown formatting in the scraped content. Defaults to `True`.

## Example with Parameters

Here is an example demonstrating how to use the tool with different parameters:

```python Code
from crewai_tools import SerperScrapeWebsiteTool

tool = SerperScrapeWebsiteTool()

# Scrape with markdown formatting (default)
markdown_result = tool.run(
    url="https://docs.crewai.com",
    include_markdown=True
)

# Scrape without markdown formatting for plain text
plain_result = tool.run(
    url="https://docs.crewai.com",
    include_markdown=False
)

print("Markdown formatted content:")
print(markdown_result)

print("\nPlain text content:")
print(plain_result)
```

## Use Cases

The `SerperScrapeWebsiteTool` is particularly useful for:

- **Content Analysis**: Extract and analyze website content for research purposes
- **Data Collection**: Gather structured information from web pages
- **Documentation Processing**: Convert web-based documentation into readable formats
- **Competitive Analysis**: Scrape competitor websites for market research
- **Content Migration**: Extract content from existing websites for migration purposes

## Error Handling

The tool includes comprehensive error handling for:

- **Network Issues**: Handles connection timeouts and network errors gracefully
- **API Errors**: Provides detailed error messages for API-related issues
- **Invalid URLs**: Validates and reports issues with malformed URLs
- **Authentication**: Clear error messages for missing or invalid API keys
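
A minimal handling sketch, assuming failures surface as standard Python exceptions raised by `run`:

```python Code
from crewai_tools import SerperScrapeWebsiteTool

tool = SerperScrapeWebsiteTool()

try:
    content = tool.run(url="https://docs.crewai.com")
except Exception as exc:
    # Network, API, URL, and authentication problems are reported
    # with descriptive messages, per the list above.
    print(f"Scrape failed: {exc}")
else:
    print(content[:200])
```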

## Security Considerations

- Always store your `SERPER_API_KEY` in environment variables, never hardcode it in your source code
- Be mindful of rate limits imposed by the Serper API
- Respect robots.txt and website terms of service when scraping content
- Consider implementing delays between requests for large-scale scraping operations, as in the sketch below
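
For the last point, a simple pause between calls is often enough. A minimal sketch (the one-second delay is an arbitrary placeholder, not a documented Serper limit):

```python Code
import time

from crewai_tools import SerperScrapeWebsiteTool

tool = SerperScrapeWebsiteTool()
urls = [
    "https://example.com/page-1",
    "https://example.com/page-2",
]

pages = []
for url in urls:
    pages.append(tool.run(url=url, include_markdown=False))
    time.sleep(1)  # arbitrary pause; tune to your plan's rate limits
```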