Add pt-BR docs translation (#3039)

* docs: add pt-br translations

Powered by a CrewAI Flow https://github.com/danielfsbarreto/docs_translator

* Update mcp/overview.mdx brazilian docs

Its en-US counterpart was updated after I did a pass,
so now it includes the new section about @CrewBase
Daniel Barreto
2025-06-25 12:52:33 -03:00
committed by GitHub
parent f6dfec61d6
commit a50fae3a4b
339 changed files with 33822 additions and 517 deletions


@@ -0,0 +1,118 @@
---
title: AI Mind Tool
description: The `AIMindTool` is designed to query data sources in natural language.
icon: brain
---
# `AIMindTool`
## Description
The `AIMindTool` is a wrapper around [AI-Minds](https://mindsdb.com/minds) provided by [MindsDB](https://mindsdb.com/). It allows you to query data sources in natural language by simply configuring their connection parameters. This tool is useful when you need answers to questions from your data stored in various data sources including PostgreSQL, MySQL, MariaDB, ClickHouse, Snowflake, and Google BigQuery.
Minds are AI systems that work similarly to large language models (LLMs) but go beyond by answering any question from any data. This is accomplished by:
- Selecting the most relevant data for an answer using parametric search
- Understanding the meaning and providing responses within the correct context through semantic search
- Delivering precise answers by analyzing data and using machine learning (ML) models
## Installation
To incorporate this tool into your project, you need to install the Minds SDK:
```shell
uv add minds-sdk
```
## Steps to Get Started
To effectively use the `AIMindTool`, follow these steps:
1. **Package Installation**: Confirm that the `crewai[tools]` and `minds-sdk` packages are installed in your Python environment.
2. **API Key Acquisition**: Sign up for a Minds account [here](https://mdb.ai/register), and obtain an API key.
3. **Environment Configuration**: Store your obtained API key in an environment variable named `MINDS_API_KEY` to facilitate its use by the tool.
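As a quick illustration, the key can also be set programmatically before the tool is created; the value below is a placeholder, and in most setups you would simply export `MINDS_API_KEY` in your shell or `.env` file instead:
```python Code
import os

# Placeholder value; normally you would export MINDS_API_KEY in your shell or .env file
os.environ["MINDS_API_KEY"] = "your-minds-api-key"
```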
## Example
The following example demonstrates how to initialize the tool and execute a query:
```python Code
from crewai_tools import AIMindTool
# Initialize the AIMindTool
aimind_tool = AIMindTool(
    datasources=[
        {
            "description": "house sales data",
            "engine": "postgres",
            "connection_data": {
                "user": "demo_user",
                "password": "demo_password",
                "host": "samples.mindsdb.com",
                "port": 5432,
                "database": "demo",
                "schema": "demo_data"
            },
            "tables": ["house_sales"]
        }
    ]
)
# Run a natural language query
result = aimind_tool.run("How many 3 bedroom houses were sold in 2008?")
print(result)
```
## Parameters
The `AIMindTool` accepts the following parameters:
- **api_key**: Optional. Your Minds API key. If not provided, it will be read from the `MINDS_API_KEY` environment variable.
- **datasources**: A list of dictionaries, each containing the following keys:
- **description**: A description of the data contained in the datasource.
- **engine**: The engine (or type) of the datasource.
- **connection_data**: A dictionary containing the connection parameters for the datasource.
- **tables**: A list of tables that the data source will use. This is optional and can be omitted if all tables in the data source are to be used.
A list of supported data sources and their connection parameters can be found [here](https://docs.mdb.ai/docs/data_sources).
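As a minimal sketch, the `api_key` parameter can also be passed explicitly instead of relying on the environment variable; the connection values below are the same demo placeholders used earlier, and `tables` is omitted to show the optional case:
```python Code
from crewai_tools import AIMindTool

aimind_tool = AIMindTool(
    api_key="your-minds-api-key",  # optional; falls back to the MINDS_API_KEY environment variable
    datasources=[
        {
            "description": "house sales data",
            "engine": "postgres",
            "connection_data": {
                "user": "demo_user",
                "password": "demo_password",
                "host": "samples.mindsdb.com",
                "port": 5432,
                "database": "demo",
                "schema": "demo_data"
            }
            # "tables" omitted: all tables in the data source will be used
        }
    ]
)
```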
## Agent Integration Example
Here's how to integrate the `AIMindTool` with a CrewAI agent:
```python Code
from crewai import Agent
from crewai.project import agent
from crewai_tools import AIMindTool
# Initialize the tool
aimind_tool = AIMindTool(
    datasources=[
        {
            "description": "sales data",
            "engine": "postgres",
            "connection_data": {
                "user": "your_user",
                "password": "your_password",
                "host": "your_host",
                "port": 5432,
                "database": "your_db",
                "schema": "your_schema"
            },
            "tables": ["sales"]
        }
    ]
)

# Define an agent with the AIMindTool
@agent
def data_analyst(self) -> Agent:
    return Agent(
        config=self.agents_config["data_analyst"],
        allow_delegation=False,
        tools=[aimind_tool]
    )
```
## Conclusion
The `AIMindTool` provides a powerful way to query your data sources using natural language, making it easier to extract insights without writing complex SQL queries. By connecting to various data sources and leveraging AI-Minds technology, this tool enables agents to access and analyze data efficiently.


@@ -0,0 +1,209 @@
---
title: Code Interpreter
description: The `CodeInterpreterTool` is a powerful tool designed for executing Python 3 code within a secure, isolated environment.
icon: code-simple
---
# `CodeInterpreterTool`
## Description
The `CodeInterpreterTool` enables CrewAI agents to execute Python 3 code that they generate autonomously. This functionality is particularly valuable as it allows agents to create code, execute it, obtain the results, and utilize that information to inform subsequent decisions and actions.
There are several ways to use this tool:
### Docker Container (Recommended)
This is the primary option. The code runs in a secure, isolated Docker container, ensuring safety regardless of its content.
Make sure Docker is installed and running on your system. If you don't have it, you can install it from [here](https://docs.docker.com/get-docker/).
### Sandbox Environment
If Docker is unavailable (either not installed or not accessible), the code is executed in a restricted Python environment called a sandbox.
This environment is very limited, with strict restrictions on many modules and built-in functions.
### Unsafe Execution
**NOT RECOMMENDED FOR PRODUCTION**
This mode allows execution of any Python code, including potentially dangerous calls to modules such as `sys` and `os`. [Check out](/en/tools/ai-ml/codeinterpretertool#enabling-unsafe-mode) how to enable this mode.
## Logging
The `CodeInterpreterTool` logs the selected execution strategy to STDOUT.
## Installation
To use this tool, you need to install the CrewAI tools package:
```shell
pip install 'crewai[tools]'
```
## Example
The following example demonstrates how to use the `CodeInterpreterTool` with a CrewAI agent:
```python Code
from crewai import Agent, Task, Crew, Process
from crewai_tools import CodeInterpreterTool
# Initialize the tool
code_interpreter = CodeInterpreterTool()
# Define an agent that uses the tool
programmer_agent = Agent(
    role="Python Programmer",
    goal="Write and execute Python code to solve problems",
    backstory="An expert Python programmer who can write efficient code to solve complex problems.",
    tools=[code_interpreter],
    verbose=True,
)

# Example task to generate and execute code
coding_task = Task(
    description="Write a Python function to calculate the Fibonacci sequence up to the 10th number and print the result.",
    expected_output="The Fibonacci sequence up to the 10th number.",
    agent=programmer_agent,
)

# Create and run the crew
crew = Crew(
    agents=[programmer_agent],
    tasks=[coding_task],
    verbose=True,
    process=Process.sequential,
)
result = crew.kickoff()
```
You can also enable code execution directly when creating an agent:
```python Code
from crewai import Agent
# Create an agent with code execution enabled
programmer_agent = Agent(
    role="Python Programmer",
    goal="Write and execute Python code to solve problems",
    backstory="An expert Python programmer who can write efficient code to solve complex problems.",
    allow_code_execution=True,  # This automatically adds the CodeInterpreterTool
    verbose=True,
)
```
### Enabling `unsafe_mode`
```python Code
from crewai_tools import CodeInterpreterTool
code = """
import os
os.system("ls -la")
"""
CodeInterpreterTool(unsafe_mode=True).run(code=code)
```
## Parameters
The `CodeInterpreterTool` accepts the following parameters during initialization:
- **user_dockerfile_path**: Optional. Path to a custom Dockerfile to use for the code interpreter container.
- **user_docker_base_url**: Optional. URL to the Docker daemon to use for running the container.
- **unsafe_mode**: Optional. Whether to run code directly on the host machine instead of in a Docker container or sandbox. Default is `False`. Use with caution!
- **default_image_tag**: Optional. Default Docker image tag. Default is `code-interpreter:latest`.
When using the tool with an agent, the agent will need to provide:
- **code**: Required. The Python 3 code to execute.
- **libraries_used**: Optional. A list of libraries used in the code that need to be installed. Default is `[]`.
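To illustrate these arguments outside of an agent, here is a minimal sketch of calling the tool directly; it assumes Docker is available and that `numpy` can be installed inside the container:
```python Code
from crewai_tools import CodeInterpreterTool

code_interpreter = CodeInterpreterTool()

# An agent would normally supply these arguments; they are shown explicitly here
output = code_interpreter.run(
    code="import numpy as np\nprint(np.arange(10).mean())",
    libraries_used=["numpy"],
)
print(output)
```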
## Agent Integration Example
Here's a more detailed example of how to integrate the `CodeInterpreterTool` with a CrewAI agent:
```python Code
from crewai import Agent, Task, Crew, Process
from crewai_tools import CodeInterpreterTool
# Initialize the tool
code_interpreter = CodeInterpreterTool()
# Define an agent that uses the tool
data_analyst = Agent(
    role="Data Analyst",
    goal="Analyze data using Python code",
    backstory="""You are an expert data analyst who specializes in using Python
    to analyze and visualize data. You can write efficient code to process
    large datasets and extract meaningful insights.""",
    tools=[code_interpreter],
    verbose=True,
)

# Create a task for the agent
analysis_task = Task(
    description="""
    Write Python code to:
    1. Generate a random dataset of 100 points with x and y coordinates
    2. Calculate the correlation coefficient between x and y
    3. Create a scatter plot of the data
    4. Print the correlation coefficient and save the plot as 'scatter.png'
    Make sure to handle any necessary imports and print the results.
    """,
    expected_output="The correlation coefficient and confirmation that the scatter plot has been saved.",
    agent=data_analyst,
)

# Run the task
crew = Crew(
    agents=[data_analyst],
    tasks=[analysis_task],
    verbose=True,
    process=Process.sequential,
)
result = crew.kickoff()
```
## Implementation Details
The `CodeInterpreterTool` uses Docker to create a secure environment for code execution:
```python Code
class CodeInterpreterTool(BaseTool):
    name: str = "Code Interpreter"
    description: str = "Interprets Python3 code strings with a final print statement."
    args_schema: Type[BaseModel] = CodeInterpreterSchema
    default_image_tag: str = "code-interpreter:latest"

    def _run(self, **kwargs) -> str:
        code = kwargs.get("code", self.code)
        libraries_used = kwargs.get("libraries_used", [])

        if self.unsafe_mode:
            return self.run_code_unsafe(code, libraries_used)
        else:
            return self.run_code_safety(code, libraries_used)
```
The tool performs the following steps:
1. Verifies that the Docker image exists or builds it if necessary
2. Creates a Docker container with the current working directory mounted
3. Installs any required libraries specified by the agent
4. Executes the Python code in the container
5. Returns the output of the code execution
6. Cleans up by stopping and removing the container
## Security Considerations
By default, the `CodeInterpreterTool` runs code in an isolated Docker container, which provides a layer of security. However, there are still some security considerations to keep in mind:
1. The Docker container has access to the current working directory, so sensitive files could potentially be accessed.
2. If the Docker container is unavailable and the code needs to run safely, it will be executed in a sandbox environment. For security reasons, installing arbitrary libraries is not allowed
3. The `unsafe_mode` parameter allows code to be executed directly on the host machine, which should only be used in trusted environments.
4. Be cautious when allowing agents to install arbitrary libraries, as they could potentially include malicious code.
## Conclusion
The `CodeInterpreterTool` provides a powerful way for CrewAI agents to execute Python code in a relatively secure environment. By enabling agents to write and run code, it significantly expands their problem-solving capabilities, especially for tasks involving data analysis, calculations, or other computational work. This tool is particularly useful for agents that need to perform complex operations that are more efficiently expressed in code than in natural language.


@@ -0,0 +1,51 @@
---
title: DALL-E Tool
description: The `DallETool` is a powerful tool designed for generating images from textual descriptions.
icon: image
---
# `DallETool`
## Description
This tool gives the agent the ability to generate images using the DALL-E model, a transformer-based model that generates images from textual descriptions.
It allows the agent to create images based on a text prompt.
## Installation
Install the crewai_tools package
```shell
pip install 'crewai[tools]'
```
## Example
Remember that when using this tool, the text must be generated by the Agent itself. The text must be a description of the image you want to generate.
```python Code
from crewai_tools import DallETool
Agent(
    ...
    tools=[DallETool()],
)
```
If needed you can also tweak the parameters of the DALL-E model by passing them as arguments to the `DallETool` class. For example:
```python Code
from crewai_tools import DallETool
dalle_tool = DallETool(
    model="dall-e-3",
    size="1024x1024",
    quality="standard",
    n=1,
)

Agent(
    ...
    tools=[dalle_tool]
)
```
The parameters are based on the `client.images.generate` method from the OpenAI API. For more information on the parameters,
please refer to the [OpenAI API documentation](https://platform.openai.com/docs/guides/images/introduction?lang=python).
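For a fuller picture, here is a sketch of using the tool inside a complete crew; the role, goal, backstory, and task description are illustrative only:
```python Code
from crewai import Agent, Task, Crew
from crewai_tools import DallETool

dalle_tool = DallETool()

illustrator = Agent(
    role="Illustrator",
    goal="Generate images that match a short creative brief",
    backstory="A visual artist who turns briefs into detailed image prompts.",
    tools=[dalle_tool],
    verbose=True,
)

illustration_task = Task(
    description="Generate an image of a lighthouse at sunset and report the resulting image URL.",
    expected_output="A URL pointing to the generated image.",
    agent=illustrator,
)

crew = Crew(agents=[illustrator], tasks=[illustration_task])
result = crew.kickoff()
print(result)
```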


@@ -0,0 +1,58 @@
---
title: LangChain Tool
description: The `LangChainTool` is a wrapper for LangChain tools and query engines.
icon: link
---
# `LangChainTool`
<Info>
CrewAI seamlessly integrates with LangChain's comprehensive [list of tools](https://python.langchain.com/docs/integrations/tools/), all of which can be used with CrewAI.
</Info>
```python Code
import os
from dotenv import load_dotenv
from crewai import Agent, Task, Crew
from crewai.tools import BaseTool
from pydantic import Field
from langchain_community.utilities import GoogleSerperAPIWrapper
# Set up your SERPER_API_KEY key in an .env file, eg:
# SERPER_API_KEY=<your api key>
load_dotenv()
search = GoogleSerperAPIWrapper()
class SearchTool(BaseTool):
    name: str = "Search"
    description: str = "Useful for search-based queries. Use this to find current information about markets, companies, and trends."
    search: GoogleSerperAPIWrapper = Field(default_factory=GoogleSerperAPIWrapper)

    def _run(self, query: str) -> str:
        """Execute the search query and return results"""
        try:
            return self.search.run(query)
        except Exception as e:
            return f"Error performing search: {str(e)}"
# Create Agents
researcher = Agent(
    role='Research Analyst',
    goal='Gather current market data and trends',
    backstory="""You are an expert research analyst with years of experience in
    gathering market intelligence. You're known for your ability to find
    relevant and up-to-date market information and present it in a clear,
    actionable format.""",
    tools=[SearchTool()],
    verbose=True
)
# rest of the code ...
```
## Conclusion
Tools are pivotal in extending the capabilities of CrewAI agents, enabling them to undertake a broad spectrum of tasks and collaborate effectively.
When building solutions with CrewAI, leverage both custom and existing tools to empower your agents and enhance the AI ecosystem. Consider utilizing error handling, caching mechanisms,
and the flexibility of tool arguments to optimize your agents' performance and capabilities.


@@ -0,0 +1,146 @@
---
title: LlamaIndex Tool
description: The `LlamaIndexTool` is a wrapper for LlamaIndex tools and query engines.
icon: address-book
---
# `LlamaIndexTool`
## Description
The `LlamaIndexTool` is designed as a general wrapper around LlamaIndex tools and query engines, letting you plug LlamaIndex resources (RAG and agentic pipelines) into CrewAI agents as tools. This tool allows you to seamlessly integrate LlamaIndex's powerful data processing and retrieval capabilities into your CrewAI workflows.
## Installation
To use this tool, you need to install LlamaIndex:
```shell
uv add llama-index
```
## Steps to Get Started
To effectively use the `LlamaIndexTool`, follow these steps:
1. **Install LlamaIndex**: Install the LlamaIndex package using the command above.
2. **Set Up LlamaIndex**: Follow the [LlamaIndex documentation](https://docs.llamaindex.ai/) to set up a RAG/agent pipeline.
3. **Create a Tool or Query Engine**: Create a LlamaIndex tool or query engine that you want to use with CrewAI.
## Example
The following examples demonstrate how to initialize the tool from different LlamaIndex components:
### From a LlamaIndex Tool
```python Code
from crewai_tools import LlamaIndexTool
from crewai import Agent
from crewai.project import agent
from llama_index.core.tools import FunctionTool
# Example 1: Initialize from FunctionTool
def search_data(query: str) -> str:
"""Search for information in the data."""
# Your implementation here
return f"Results for: {query}"
# Create a LlamaIndex FunctionTool
og_tool = FunctionTool.from_defaults(
    search_data,
    name="DataSearchTool",
    description="Search for information in the data"
)
# Wrap it with LlamaIndexTool
tool = LlamaIndexTool.from_tool(og_tool)
# Define an agent that uses the tool
@agent
def researcher(self) -> Agent:
    '''
    This agent uses the LlamaIndexTool to search for information.
    '''
    return Agent(
        config=self.agents_config["researcher"],
        tools=[tool]
    )
```
### From LlamaHub Tools
```python Code
from crewai_tools import LlamaIndexTool
from llama_index.tools.wolfram_alpha import WolframAlphaToolSpec
# Initialize from LlamaHub Tools
wolfram_spec = WolframAlphaToolSpec(app_id="your_app_id")
wolfram_tools = wolfram_spec.to_tool_list()
tools = [LlamaIndexTool.from_tool(t) for t in wolfram_tools]
```
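The wrapped tools can then be attached to an agent like any other CrewAI tool; the agent definition below is only illustrative:
```python Code
from crewai import Agent

math_assistant = Agent(
    role="Math Assistant",
    goal="Answer quantitative questions using Wolfram Alpha",
    backstory="An analyst who relies on Wolfram Alpha for precise calculations.",
    tools=tools,  # the wrapped LlamaHub tools created above
    verbose=True,
)
```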
### From a LlamaIndex Query Engine
```python Code
from crewai_tools import LlamaIndexTool
from llama_index.core import VectorStoreIndex
from llama_index.core.readers import SimpleDirectoryReader
# Load documents
documents = SimpleDirectoryReader("./data").load_data()
# Create an index
index = VectorStoreIndex.from_documents(documents)
# Create a query engine
query_engine = index.as_query_engine()
# Create a LlamaIndexTool from the query engine
query_tool = LlamaIndexTool.from_query_engine(
    query_engine,
    name="Company Data Query Tool",
    description="Use this tool to lookup information in company documents"
)
```
## Class Methods
The `LlamaIndexTool` provides two main class methods for creating instances:
### from_tool
Creates a `LlamaIndexTool` from a LlamaIndex tool.
```python Code
@classmethod
def from_tool(cls, tool: Any, **kwargs: Any) -> "LlamaIndexTool":
    # Implementation details
    ...
```
### from_query_engine
Creates a `LlamaIndexTool` from a LlamaIndex query engine.
```python Code
@classmethod
def from_query_engine(
    cls,
    query_engine: Any,
    name: Optional[str] = None,
    description: Optional[str] = None,
    return_direct: bool = False,
    **kwargs: Any,
) -> "LlamaIndexTool":
    # Implementation details
    ...
```
## Parameters
The `from_query_engine` method accepts the following parameters:
- **query_engine**: Required. The LlamaIndex query engine to wrap.
- **name**: Optional. The name of the tool.
- **description**: Optional. The description of the tool.
- **return_direct**: Optional. Whether to return the response directly. Default is `False`.
## Conclusion
The `LlamaIndexTool` provides a powerful way to integrate LlamaIndex's capabilities into CrewAI agents. By wrapping LlamaIndex tools and query engines, it enables agents to leverage sophisticated data retrieval and processing functionalities, enhancing their ability to work with complex information sources.


@@ -0,0 +1,64 @@
---
title: "Overview"
description: "Leverage AI services, generate images, process vision, and build intelligent systems"
icon: "face-smile"
---
These tools integrate with AI and machine learning services to enhance your agents with advanced capabilities like image generation, vision processing, and intelligent code execution.
## **Available Tools**
<CardGroup cols={2}>
<Card title="DALL-E Tool" icon="image" href="/en/tools/ai-ml/dalletool">
Generate AI images using OpenAI's DALL-E model.
</Card>
<Card title="Vision Tool" icon="eye" href="/en/tools/ai-ml/visiontool">
Process and analyze images with computer vision capabilities.
</Card>
<Card title="AI Mind Tool" icon="brain" href="/en/tools/ai-ml/aimindtool">
Advanced AI reasoning and decision-making capabilities.
</Card>
<Card title="LlamaIndex Tool" icon="llama" href="/en/tools/ai-ml/llamaindextool">
Build knowledge bases and retrieval systems with LlamaIndex.
</Card>
<Card title="LangChain Tool" icon="link" href="/en/tools/ai-ml/langchaintool">
Integrate with LangChain for complex AI workflows.
</Card>
<Card title="RAG Tool" icon="database" href="/en/tools/ai-ml/ragtool">
Implement Retrieval-Augmented Generation systems.
</Card>
<Card title="Code Interpreter Tool" icon="code" href="/en/tools/ai-ml/codeinterpretertool">
Execute Python code and perform data analysis.
</Card>
</CardGroup>
## **Common Use Cases**
- **Content Generation**: Create images, text, and multimedia content
- **Data Analysis**: Execute code and analyze complex datasets
- **Knowledge Systems**: Build RAG systems and intelligent databases
- **Computer Vision**: Process and understand visual content
- **AI Safety**: Implement content moderation and safety checks
```python
from crewai import Agent
from crewai_tools import DallETool, VisionTool, CodeInterpreterTool

# Create AI tools
image_generator = DallETool()
vision_processor = VisionTool()
code_executor = CodeInterpreterTool()

# Add to your agent
agent = Agent(
    role="AI Specialist",
    tools=[image_generator, vision_processor, code_executor],
    goal="Create and analyze content using AI capabilities"
)
```


@@ -0,0 +1,172 @@
---
title: RAG Tool
description: The `RagTool` is a dynamic knowledge base tool for answering questions using Retrieval-Augmented Generation.
icon: vector-square
---
# `RagTool`
## Description
The `RagTool` is designed to answer questions by leveraging the power of Retrieval-Augmented Generation (RAG) through EmbedChain.
It provides a dynamic knowledge base that can be queried to retrieve relevant information from various data sources.
This tool is particularly useful for applications that require access to a vast array of information and need to provide contextually relevant answers.
## Example
The following example demonstrates how to initialize the tool and use it with different data sources:
```python Code
from crewai import Agent
from crewai.project import agent
from crewai_tools import RagTool
# Create a RAG tool with default settings
rag_tool = RagTool()
# Add content from a file
rag_tool.add(data_type="file", path="path/to/your/document.pdf")
# Add content from a web page
rag_tool.add(data_type="web_page", url="https://example.com")
# Define an agent with the RagTool
@agent
def knowledge_expert(self) -> Agent:
    '''
    This agent uses the RagTool to answer questions about the knowledge base.
    '''
    return Agent(
        config=self.agents_config["knowledge_expert"],
        allow_delegation=False,
        tools=[rag_tool]
    )
```
## Supported Data Sources
The `RagTool` can be used with a wide variety of data sources, including:
- 📰 PDF files
- 📊 CSV files
- 📃 JSON files
- 📝 Text
- 📁 Directories/Folders
- 🌐 HTML Web pages
- 📽️ YouTube Channels
- 📺 YouTube Videos
- 📚 Documentation websites
- 📝 MDX files
- 📄 DOCX files
- 🧾 XML files
- 📬 Gmail
- 📝 GitHub repositories
- 🐘 PostgreSQL databases
- 🐬 MySQL databases
- 🤖 Slack conversations
- 💬 Discord messages
- 🗨️ Discourse forums
- 📝 Substack newsletters
- 🐝 Beehiiv content
- 💾 Dropbox files
- 🖼️ Images
- ⚙️ Custom data sources
## Parameters
The `RagTool` accepts the following parameters:
- **summarize**: Optional. Whether to summarize the retrieved content. Default is `False`.
- **adapter**: Optional. A custom adapter for the knowledge base. If not provided, an EmbedchainAdapter will be used.
- **config**: Optional. Configuration for the underlying EmbedChain App.
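As a quick sketch of how these parameters fit together, the tool can also be queried directly; this assumes the tool's `run` method accepts the natural-language question that agents would normally pass to it:
```python Code
from crewai_tools import RagTool

# Summarize retrieved content before returning it
rag_tool = RagTool(summarize=True)
rag_tool.add(data_type="web_page", url="https://example.com")

answer = rag_tool.run("What does the page say about pricing?")
print(answer)
```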
## Adding Content
You can add content to the knowledge base using the `add` method:
```python Code
# Add a PDF file
rag_tool.add(data_type="file", path="path/to/your/document.pdf")
# Add a web page
rag_tool.add(data_type="web_page", url="https://example.com")
# Add a YouTube video
rag_tool.add(data_type="youtube_video", url="https://www.youtube.com/watch?v=VIDEO_ID")
# Add a directory of files
rag_tool.add(data_type="directory", path="path/to/your/directory")
```
## Agent Integration Example
Here's how to integrate the `RagTool` with a CrewAI agent:
```python Code
from crewai import Agent
from crewai.project import agent
from crewai_tools import RagTool
# Initialize the tool and add content
rag_tool = RagTool()
rag_tool.add(data_type="web_page", url="https://docs.crewai.com")
rag_tool.add(data_type="file", path="company_data.pdf")
# Define an agent with the RagTool
@agent
def knowledge_expert(self) -> Agent:
    return Agent(
        config=self.agents_config["knowledge_expert"],
        allow_delegation=False,
        tools=[rag_tool]
    )
```
## Advanced Configuration
You can customize the behavior of the `RagTool` by providing a configuration dictionary:
```python Code
from crewai_tools import RagTool
# Create a RAG tool with custom configuration
config = {
"app": {
"name": "custom_app",
},
"llm": {
"provider": "openai",
"config": {
"model": "gpt-4",
}
},
"embedding_model": {
"provider": "openai",
"config": {
"model": "text-embedding-ada-002"
}
},
"vectordb": {
"provider": "elasticsearch",
"config": {
"collection_name": "my-collection",
"cloud_id": "deployment-name:xxxx",
"api_key": "your-key",
"verify_certs": False
}
},
"chunker": {
"chunk_size": 400,
"chunk_overlap": 100,
"length_function": "len",
"min_chunk_size": 0
}
}
rag_tool = RagTool(config=config, summarize=True)
```
The internal RAG tool utilizes the Embedchain adapter, allowing you to pass any configuration options that are supported by Embedchain.
You can refer to the [Embedchain documentation](https://docs.embedchain.ai/components/introduction) for details.
Make sure to review the configuration options documented there, including those available in Embedchain's YAML config file.
## Conclusion
The `RagTool` provides a powerful way to create and query knowledge bases from various data sources. By leveraging Retrieval-Augmented Generation, it enables agents to access and retrieve relevant information efficiently, enhancing their ability to provide accurate and contextually appropriate responses.


@@ -0,0 +1,49 @@
---
title: Vision Tool
description: The `VisionTool` is designed to extract text from images.
icon: eye
---
# `VisionTool`
## Description
This tool is used to extract text from images. When passed to the agent, it will extract the text from the image and then use it to generate a response, report, or any other output.
The URL or the path of the image should be passed to the agent.
## Installation
Install the crewai_tools package
```shell
pip install 'crewai[tools]'
```
## Usage
To use the VisionTool, the OpenAI API key should be set in the environment variable `OPENAI_API_KEY`.
```python Code
from crewai import Agent
from crewai.project import agent
from crewai_tools import VisionTool
vision_tool = VisionTool()
@agent
def researcher(self) -> Agent:
    '''
    This agent uses the VisionTool to extract text from images.
    '''
    return Agent(
        config=self.agents_config["researcher"],
        allow_delegation=False,
        tools=[vision_tool]
    )
```
## Arguments
The VisionTool requires the following arguments:
| Argument | Type | Description |
| :----------------- | :------- | :------------------------------------------------------------------------------- |
| **image_path_url** | `string` | **Mandatory**. The path to the image file from which text needs to be extracted. |
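The tool can also be invoked directly, which is handy for quick checks; this is a minimal sketch that assumes the `image_path_url` argument listed above and a valid `OPENAI_API_KEY`:
```python Code
from crewai_tools import VisionTool

vision_tool = VisionTool()

# Works with a local path or a URL (the path below is a placeholder)
extracted_text = vision_tool.run(image_path_url="path/to/receipt.png")
print(extracted_text)
```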