Compare commits

...

1 Commit

Author SHA1 Message Date
Devin AI
a499d9de42 docs: improve tool documentation and examples
- Update SerperDevTool documentation with accurate parameters and JSON response format
- Enhance XMLSearchTool and MDXSearchTool docs with RAG capabilities and required parameters
- Fix code block formatting across multiple tool documentation files
- Add clarification about environment variables and configuration
- Validate all examples against actual implementations
- Successfully tested with mkdocs build

Co-Authored-By: Joe Moura <joao@crewai.com>
2024-12-28 04:32:08 +00:00
14 changed files with 464 additions and 64 deletions

View File

@@ -35,10 +35,41 @@ By default, the memory system is disabled, and you can ensure it is active by se
The memory will use OpenAI embeddings by default, but you can change it by setting `embedder` to a different model.
You can also initialize the memory with your own storage instance.
Each memory type uses different storage implementations:
- **Short-Term Memory**: Uses Chroma for RAG (Retrieval-Augmented Generation) with configurable embeddings
- **Long-Term Memory**: Uses SQLite3 for persistent storage of task results and metadata
- **Entity Memory**: Uses either RAG storage (default) or Mem0 for entity information
- **User Memory**: Available through Mem0 integration for personalized experiences
The data storage files are saved in a platform-specific location using the appdirs package.
You can override the storage location using the **CREWAI_STORAGE_DIR** environment variable.
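The storage-location rule above can be sketched as follows. This is an illustrative approximation only: the real lookup goes through the appdirs package, and `crewai_storage_dir` is a hypothetical helper, not CrewAI API.

```python
import os
import platform
from pathlib import Path

def crewai_storage_dir(app_name: str = "CrewAI") -> str:
    """Hypothetical sketch of the storage-location lookup described above:
    the CREWAI_STORAGE_DIR environment variable wins; otherwise fall back
    to a platform-specific data directory in the style of appdirs."""
    override = os.environ.get("CREWAI_STORAGE_DIR")
    if override:
        return override
    home = Path.home()
    system = platform.system()
    if system == "Darwin":
        return str(home / "Library" / "Application Support" / app_name)
    if system == "Windows":
        return os.path.join(os.environ.get("LOCALAPPDATA", str(home)), app_name)
    return str(home / ".local" / "share" / app_name)

os.environ["CREWAI_STORAGE_DIR"] = "/tmp/my-crew-storage"
print(crewai_storage_dir())  # -> /tmp/my-crew-storage
```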
### Storage Implementation Details
#### Short-Term Memory
- Default: ChromaDB with RAG
- Configurable embeddings (OpenAI, Ollama, Google AI, etc.)
- Supports custom embedding functions
- Optional Mem0 integration for enhanced capabilities
#### Long-Term Memory
- SQLite3 storage with structured schema
- Stores task descriptions, metadata, timestamps, and quality scores
- Supports querying by task description with configurable limits
- Includes error handling and reset capabilities
#### Entity Memory
- Default: RAG storage with ChromaDB
- Optional Mem0 integration
- Structured entity storage (name, type, description)
- Supports metadata and relationship mapping
#### User Memory
- Requires Mem0 integration
- Stores user preferences and interaction history
- Supports personalized context building
- Configurable through memory_config
### Example: Configuring Memory for a Crew
@@ -93,11 +124,41 @@ my_crew = Crew(
)
```
## Integrating Mem0 Provider
[Mem0](https://mem0.ai/) is a self-improving memory layer for LLM applications that can enhance all memory types in CrewAI. It provides advanced features for storing and retrieving contextual information.
### Configuration
To use Mem0, you'll need:
1. An API key from [Mem0 Dashboard](https://app.mem0.ai/dashboard/api-keys)
2. The `mem0ai` package installed: `pip install mem0ai`
You can configure Mem0 in two ways:
1. **Environment Variable**:
```bash
export MEM0_API_KEY="your-api-key"
```
2. **Memory Config**:
```python
memory_config = {
    "provider": "mem0",
    "config": {
        "api_key": "your-api-key",
        "user_id": "user123"  # Required for user memory
    }
}
```
### Memory Type Support
Mem0 can be used with all memory types:
- **Short-Term Memory**: Enhanced context retention
- **Long-Term Memory**: Improved task history storage
- **Entity Memory**: Better entity relationship tracking
- **User Memory**: Personalized user preferences and history
```python Code
@@ -135,9 +196,118 @@ crew = Crew(
```
## Memory Interface Details
When implementing custom memory storage, be aware of these interface requirements:
### Base Memory Class
```python
class Memory:
    def save(
        self,
        value: Any,
        metadata: Optional[Dict[str, Any]] = None,
        agent: Optional[str] = None,
    ) -> None:
        """Save data to memory with optional metadata and agent information."""
        pass

    def search(
        self,
        query: str,
        limit: int = 3,
        score_threshold: float = 0.35,
    ) -> List[Any]:
        """Search memory with configurable limit and relevance threshold."""
        pass
```
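As a minimal illustration of this interface, here is a hypothetical in-memory store that satisfies the `save`/`search` signatures. A real implementation would use embeddings and honor `score_threshold`; this sketch uses plain substring matching.

```python
from typing import Any, Dict, List, Optional

class InMemoryStore:
    """Hypothetical minimal storage following the base Memory interface:
    save() records a value with optional metadata and agent, and search()
    returns up to `limit` items whose text contains the query."""

    def __init__(self) -> None:
        self._items: List[Dict[str, Any]] = []

    def save(
        self,
        value: Any,
        metadata: Optional[Dict[str, Any]] = None,
        agent: Optional[str] = None,
    ) -> None:
        self._items.append({"value": value, "metadata": metadata or {}, "agent": agent})

    def search(
        self,
        query: str,
        limit: int = 3,
        score_threshold: float = 0.35,
    ) -> List[Any]:
        # Substring match stands in for semantic similarity in this sketch
        hits = [item for item in self._items if query.lower() in str(item["value"]).lower()]
        return hits[:limit]

store = InMemoryStore()
store.save("Paris is the capital of France", agent="Researcher")
store.save("Tokyo is the capital of Japan")
print(len(store.search("capital")))  # -> 2
```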
### Memory Type Specifics
1. **LongTermMemory**:
```python
class LongTermMemoryItem:
    task: str                    # Task description
    expected_output: str         # Expected task output
    metadata: Dict[str, Any]     # Additional metadata
    agent: Optional[str] = None  # Associated agent
    datetime: str                # Timestamp
    quality: float               # Task quality score (0-1)
```
- Saves task results with quality scores and timestamps
- Search returns historical task data ordered by date
- Note: Implementation has type hint differences from base Memory class
2. **EntityMemory**:
```python
class EntityMemoryItem:
    name: str                    # Entity name
    type: str                    # Entity type
    description: str             # Entity description
    metadata: Dict[str, Any]     # Additional metadata
    agent: Optional[str] = None  # Associated agent
```
- Saves entity information with type and description
- Search supports entity relationship queries
- Note: Implementation has type hint differences from base Memory class
3. **ShortTermMemory**:
```python
class ShortTermMemoryItem:
    data: Any                    # Memory content
    metadata: Dict[str, Any]     # Additional metadata
    agent: Optional[str] = None  # Associated agent
```
- Saves recent interactions with metadata
- Search supports semantic similarity
- Follows base Memory class interface exactly
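The item shapes above can be exercised with plain dataclasses. The sketch below (hypothetical, not CrewAI API; default fields are reordered to satisfy Python's dataclass rules) mimics the described long-term query behavior: filter by task description, order by date, apply a limit.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Optional

@dataclass
class LongTermMemoryItem:
    task: str
    expected_output: str
    datetime: str
    quality: float
    metadata: Dict[str, Any] = field(default_factory=dict)
    agent: Optional[str] = None

items = [
    LongTermMemoryItem("summarize report", "bullet summary", "2024-12-27", 0.9),
    LongTermMemoryItem("summarize report", "bullet summary", "2024-12-28", 0.7),
]

# "Search returns historical task data ordered by date": filter by task
# description, newest first, with a configurable limit.
matches = sorted(
    (item for item in items if "summarize" in item.task),
    key=lambda item: item.datetime,
    reverse=True,
)[:3]
print(matches[0].datetime)  # -> 2024-12-28
```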
### Error Handling and Reset
Each memory type includes error handling and reset capabilities:
```python
# Reset short-term memory
try:
    crew.short_term_memory.reset()
except Exception as e:
    print(f"Error resetting short-term memory: {e}")

# Reset entity memory
try:
    crew.entity_memory.reset()
except Exception as e:
    print(f"Error resetting entity memory: {e}")

# Reset long-term memory
try:
    crew.long_term_memory.reset()
except Exception as e:
    print(f"Error resetting long-term memory: {e}")
```
Common error scenarios:
- Database connection issues
- File permission errors
- Storage initialization failures
- Embedding generation errors
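The per-store reset calls can be folded into one pass. This is a hypothetical convenience wrapper, not part of the CrewAI API: it tries each store and collects errors instead of stopping at the first failure.

```python
def reset_all(crew):
    """Attempt to reset each memory type on `crew`, collecting any errors
    (e.g. database connection, file permission, or initialization failures)
    per store rather than raising on the first one."""
    errors = {}
    for name in ("short_term_memory", "long_term_memory", "entity_memory"):
        store = getattr(crew, name, None)
        if store is None:
            continue  # memory type not configured on this crew
        try:
            store.reset()
        except Exception as exc:
            errors[name] = str(exc)
    return errors
```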
### Implementation Notes
1. **Type Hint Considerations**:
- LongTermMemory.save() expects LongTermMemoryItem
- EntityMemory.save() expects EntityMemoryItem
- ShortTermMemory.save() follows base Memory interface
2. **Storage Reset Behavior**:
- Short-term: Clears ChromaDB collection
- Long-term: Truncates SQLite table
- Entity: Clears entity storage
- Mem0: Provider-specific reset
## Embedding Providers
CrewAI supports multiple embedding providers for RAG-based memory types:
```python Code
from crewai import Crew, Agent, Task, Process

View File

@@ -0,0 +1,196 @@
---
title: Agent Monitoring with MLFlow
description: How to monitor and track CrewAI Agents using MLFlow for experiment tracking and model registry.
icon: chart-line
---
# Introduction
MLFlow is an open-source platform for managing the end-to-end machine learning lifecycle. When integrated with CrewAI, it provides powerful capabilities for tracking agent performance, logging metrics, and managing experiments. This guide demonstrates how to implement precise monitoring and tracking of your CrewAI agents using MLFlow.
## MLFlow Integration
MLFlow offers comprehensive experiment tracking and model registry capabilities that complement CrewAI's agent-based workflows:
- **Experiment Tracking**: Monitor agent performance metrics and execution patterns
- **Metric Logging**: Track costs, latency, and success rates
- **Artifact Management**: Store and version agent configurations and outputs
- **Model Registry**: Maintain different versions of agent configurations
### Features
- **Real-time Monitoring**: Track agent performance as tasks are executed
- **Metric Collection**: Gather detailed statistics on agent operations
- **Experiment Organization**: Group related agent runs for comparison
- **Resource Tracking**: Monitor computational and token usage
- **Custom Metrics**: Define and track domain-specific performance indicators
## Getting Started
<Steps>
<Step title="Install Dependencies">
Install MLFlow alongside CrewAI:
```bash
pip install mlflow crewai
```
</Step>
<Step title="Configure MLFlow">
Set up MLFlow tracking in your environment:
```python
import mlflow
from crewai import Agent, Task, Crew
# Configure MLFlow tracking
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("crewai-agents")
```
</Step>
<Step title="Create Tracking Callbacks">
Implement MLFlow callbacks for monitoring:
```python
import time  # needed for the duration metrics below

class MLFlowCallback:
    def __init__(self):
        self.start_time = time.time()

    def on_step(self, agent, task, step_number, step_input, step_output):
        mlflow.log_metrics({
            "step_number": step_number,
            "step_duration": time.time() - self.start_time,
            "output_length": len(step_output)
        })

    def on_task(self, agent, task, output):
        mlflow.log_metrics({
            "task_duration": time.time() - self.start_time,
            "final_output_length": len(output)
        })
        mlflow.log_param("task_description", task.description)
</Step>
<Step title="Integrate with CrewAI">
Apply MLFlow tracking to your CrewAI agents:
```python
# Create MLFlow callback
mlflow_callback = MLFlowCallback()

# Create agent with tracking
researcher = Agent(
    role='Researcher',
    goal='Conduct market analysis',
    backstory='Expert market researcher with deep analytical skills',
    step_callback=mlflow_callback.on_step
)

# Create crew with tracking
crew = Crew(
    agents=[researcher],
    tasks=[...],
    task_callback=mlflow_callback.on_task
)

# Execute with MLFlow tracking
with mlflow.start_run():
    result = crew.kickoff()
```
</Step>
</Steps>
## Advanced Usage
### Custom Metric Tracking
Track specific metrics relevant to your use case:
```python
class CustomMLFlowCallback:
    def __init__(self):
        self.metrics = {}

    def on_step(self, agent, task, step_number, step_input, step_output):
        # Track custom metrics
        self.metrics[f"agent_{agent.role}_steps"] = step_number

        # Log tool usage
        if hasattr(task, 'tools'):
            for tool in task.tools:
                mlflow.log_param(f"tool_used_{step_number}", tool.name)

        # Track token usage
        mlflow.log_metric(
            f"step_{step_number}_tokens",
            len(step_output)
        )
```
### Experiment Organization
Group related experiments for better analysis:
```python
def run_agent_experiment(agent_config, task_config):
    with mlflow.start_run(
        run_name=f"agent_experiment_{agent_config['role']}"
    ) as run:
        # Log configuration
        mlflow.log_params(agent_config)
        mlflow.log_params(task_config)

        # Create and run agent
        agent = Agent(**agent_config)
        task = Task(**task_config)

        # Execute and log results
        result = agent.execute(task)
        mlflow.log_metric("execution_time", task.execution_time)

        return result
```
## Best Practices
1. **Structured Logging**
- Use consistent metric names across experiments
- Group related metrics using common prefixes
- Include timestamps for temporal analysis
2. **Resource Monitoring**
- Track token usage per agent and task
- Monitor execution time for performance optimization
- Log tool usage patterns and success rates
3. **Experiment Organization**
- Use meaningful experiment names
- Group related runs under the same experiment
- Tag runs with relevant metadata
4. **Performance Optimization**
- Monitor agent efficiency metrics
- Track resource utilization
- Identify bottlenecks in task execution
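The "common prefixes" guidance above can be made concrete with a small helper. This is a hypothetical utility, not MLFlow API: it namespaces every metric by agent role so runs compare cleanly in the MLFlow UI, and the resulting dict can be passed to `mlflow.log_metrics`.

```python
def namespaced_metrics(agent_role: str, metrics: dict) -> dict:
    """Prefix each metric name with a normalized agent role,
    e.g. 'steps' -> 'researcher/steps'."""
    prefix = agent_role.lower().replace(" ", "_")
    return {f"{prefix}/{key}": value for key, value in metrics.items()}

print(namespaced_metrics("Researcher", {"steps": 4, "tokens": 1200}))
# -> {'researcher/steps': 4, 'researcher/tokens': 1200}
```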
## Viewing Results
Access your MLFlow dashboard to analyze agent performance:
1. Start the MLFlow UI:
```bash
mlflow ui --port 5000
```
2. Open your browser and navigate to `http://localhost:5000`
3. View experiment results including:
- Agent performance metrics
- Task execution times
- Resource utilization
- Custom metrics and parameters
## Security Considerations
- Ensure sensitive data is properly sanitized before logging
- Use appropriate access controls for MLFlow server
- Monitor and audit logged information regularly
## Conclusion
MLFlow integration provides comprehensive monitoring and experimentation capabilities for CrewAI agents. By following these guidelines and best practices, you can effectively track, analyze, and optimize your agent-based workflows while maintaining security and efficiency.

View File

@@ -100,7 +100,8 @@
"how-to/conditional-tasks",
"how-to/agentops-observability",
"how-to/langtrace-observability",
"how-to/openlit-observability",
"how-to/mlflow-observability"
]
},
{
@@ -163,4 +164,4 @@
"linkedin": "https://www.linkedin.com/company/crewai-inc",
"youtube": "https://youtube.com/@crewAIInc"
}
}

View File

@@ -31,7 +31,7 @@ Remember that when using this tool, the code must be generated by the Agent itse
The code must be Python 3 code. The first run will take some time
because the tool needs to build the Docker image.
```python
from crewai import Agent
from crewai_tools import CodeInterpreterTool
@@ -43,7 +43,7 @@ Agent(
We also provide a simple way to use it directly from the Agent.
```python
from crewai import Agent
agent = Agent(

View File

@@ -27,7 +27,7 @@ The following example demonstrates how to initialize the tool and execute a gith
1. Initialize Composio tools
```python
from composio import App
from crewai_tools import ComposioTool
from crewai import Agent, Task
@@ -38,19 +38,19 @@ tools = [ComposioTool.from_action(action=Action.GITHUB_ACTIVITY_STAR_REPO_FOR_AU
If you don't know what action you want to use, use `from_app` and `tags` filter to get relevant actions
```python
tools = ComposioTool.from_app(App.GITHUB, tags=["important"])
```
or use `use_case` to search relevant actions
```python
tools = ComposioTool.from_app(App.GITHUB, use_case="Star a github repository")
```
2. Define agent
```python
crewai_agent = Agent(
role="Github Agent",
goal="You take action on Github using Github APIs",
@@ -65,7 +65,7 @@ crewai_agent = Agent(
3. Execute task
```python
task = Task(
description="Star a repo ComposioHQ/composio on GitHub",
agent=crewai_agent,
@@ -75,4 +75,4 @@ task = Task(
task.execute()
```
* More detailed list of tools can be found [here](https://app.composio.dev)

View File

@@ -22,7 +22,7 @@ pip install 'crewai[tools]'
Remember that when using this tool, the text must be generated by the Agent itself. The text must be a description of the image you want to generate.
```python
from crewai_tools import DallETool
Agent(
@@ -31,9 +31,16 @@ Agent(
)
```
If needed, you can also tweak the parameters of the DALL-E model by passing them as arguments to the `DallETool` class.
## Arguments
- `model`: The DALL-E model to use (e.g., "dall-e-3")
- `size`: Image size (e.g., "1024x1024")
- `quality`: Image quality ("standard" or "hd")
- `n`: Number of images to generate
## Configuration Example
```python
from crewai_tools import DallETool
dalle_tool = DallETool(model="dall-e-3",
@@ -48,4 +55,4 @@ Agent(
```
The parameters are based on the `client.images.generate` method from the OpenAI API. For more information on the parameters,
please refer to the [OpenAI API documentation](https://platform.openai.com/docs/guides/images/introduction?lang=python).

View File

@@ -20,11 +20,11 @@ Install the crewai_tools package to use the `FileWriterTool` in your projects:
pip install 'crewai[tools]'
```
## Usage
Here's how to use the `FileWriterTool`:
```python
from crewai_tools import FileWriterTool
# Initialize the tool
@@ -45,4 +45,4 @@ print(result)
By integrating the `FileWriterTool` into your crews, the agents can execute the process of writing content to files and creating directories.
This tool is essential for tasks that require saving output data, creating structured file systems, and more. By adhering to the setup and usage guidelines provided,
incorporating this tool into projects is straightforward and efficient.

View File

@@ -23,11 +23,11 @@ To install the JSONSearchTool, use the following pip command:
pip install 'crewai[tools]'
```
## Example
Here are updated examples on how to utilize the JSONSearchTool effectively for searching within JSON files. These examples take into account the current implementation and usage patterns identified in the codebase.
```python
from crewai_tools import JSONSearchTool
# General JSON content search
@@ -47,7 +47,7 @@ tool = JSONSearchTool(json_path='./path/to/your/file.json')
The JSONSearchTool supports extensive customization through a configuration dictionary. This allows users to select different models for embeddings and summarization based on their requirements.
```python
tool = JSONSearchTool(
config={
"llm": {
@@ -70,4 +70,4 @@ tool = JSONSearchTool(
},
}
)
```

View File

@@ -22,11 +22,13 @@ Before using the MDX Search Tool, ensure the `crewai_tools` package is installed
pip install 'crewai[tools]'
```
## Example
To use the MDX Search Tool, you must first set up the necessary environment variables for your chosen LLM and embeddings providers (e.g., OpenAI API key if using the default configuration). Then, integrate the tool into your crewAI project as shown in the examples below.
The tool will perform semantic search using RAG technology to find and extract relevant content from your MDX files based on the search query:
```python
from crewai_tools import MDXSearchTool
# Initialize the tool to search any MDX content it learns about during execution
@@ -40,13 +42,16 @@ tool = MDXSearchTool(mdx='path/to/your/document.mdx')
## Parameters
- `mdx`: **Optional**. Specifies the MDX file path for the search. It can be provided during initialization or when running the search.
- `search_query`: **Required**. The query string to search for within the MDX content.
The tool inherits from `RagTool` which provides advanced RAG (Retrieval-Augmented Generation) capabilities for semantic search within MDX content.
## Customization of Model and Embeddings
The tool defaults to using OpenAI for embeddings and summarization. For customization, utilize a configuration dictionary as shown below:
```python
tool = MDXSearchTool(
config=dict(
llm=dict(
@@ -70,4 +75,4 @@ tool = MDXSearchTool(
),
)
)
```

View File

@@ -27,7 +27,7 @@ pip install 'crewai[tools]'
The following example demonstrates how to initialize the tool and execute a search with a given query:
```python
from crewai_tools import SerperDevTool
# Initialize the tool for internet searching capabilities
@@ -44,22 +44,27 @@ To effectively use the `SerperDevTool`, follow these steps:
## Parameters
The `SerperDevTool` comes with several parameters that can be configured:
- **base_url**: The base URL for the Serper API. Default is `https://google.serper.dev`.
- **n_results**: Number of search results to return. Default is `10`.
- **save_file**: Boolean flag to save search results to a file. Default is `False`.
- **search_type**: Type of search to perform. Can be either `search` (default) or `news`.
Additional parameters that can be passed during search:
- **country**: Optional. Specify the country for the search results.
- **location**: Optional. Specify the location for the search results.
- **locale**: Optional. Specify the locale for the search results.
The values for `country`, `location`, and `locale` can be found on the [Serper Playground](https://serper.dev/playground).
Note: The tool requires the `SERPER_API_KEY` environment variable to be set with your Serper API key.
## Example with Parameters
Here is an example demonstrating how to use the tool with additional parameters:
```python
from crewai_tools import SerperDevTool
tool = SerperDevTool(
@@ -71,18 +76,29 @@ print(tool.run(search_query="ChatGPT"))
# Using Tool: Search the internet
# Search results:
{
  "searchParameters": {
    "q": "ChatGPT",
    "type": "search"
  },
  "organic": [
    {
      "title": "Role of chat gpt in public health",
      "link": "https://link.springer.com/article/10.1007/s10439-023-03172-7",
      "snippet": "ChatGPT in public health. In this overview, we will examine the potential uses of ChatGPT in"
    },
    {
      "title": "Potential use of chat gpt in global warming",
      "link": "https://link.springer.com/article/10.1007/s10439-023-03171-8",
      "snippet": "as ChatGPT, have the potential to play a critical role in advancing our understanding of climate"
    }
  ]
}
```
```python
from crewai_tools import SerperDevTool
tool = SerperDevTool(

View File

@@ -26,7 +26,7 @@ pip install spider-client 'crewai[tools]'
This example shows you how you can use the `SpiderTool` to enable your agent to scrape and crawl websites.
The data returned from the Spider API is already LLM-ready, so no need to do any cleaning there.
```python
from crewai_tools import SpiderTool
def main():
@@ -89,4 +89,4 @@ if __name__ == "__main__":
| **query_selector** | `string` | CSS query selector for content extraction from markup. |
| **full_resources** | `bool` | Downloads all resources linked to the website. |
| **request_timeout** | `int` | Timeout in seconds for requests (5-60). Default is `30`. |
| **run_in_background** | `bool` | Runs the request in the background, useful for data storage and triggering dashboard crawls. No effect if `storageless` is set. |

View File

@@ -19,11 +19,11 @@ Install the crewai_tools package
pip install 'crewai[tools]'
```
## Example
To use the VisionTool, first ensure the OpenAI API key is set in the environment variable `OPENAI_API_KEY`. Here's an example:
```python
from crewai_tools import VisionTool
vision_tool = VisionTool()

View File

@@ -27,11 +27,11 @@ pip install 'crewai[tools]'
This command installs the necessary dependencies to ensure that once the tool is fully integrated, users can start using it immediately.
## Example
Below are examples of how the WebsiteSearchTool could be utilized in different scenarios. Please note, these examples are illustrative and represent planned functionality:
```python
from crewai_tools import WebsiteSearchTool
# Example of initiating tool that agents can use
@@ -52,7 +52,7 @@ tool = WebsiteSearchTool(website='https://example.com')
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = WebsiteSearchTool(
config=dict(
llm=dict(
@@ -74,4 +74,4 @@ tool = WebsiteSearchTool(
),
)
)
```

View File

@@ -29,7 +29,9 @@ pip install 'crewai[tools]'
Here are two examples demonstrating how to use the XMLSearchTool.
The first example shows searching within a specific XML file, while the second example illustrates initiating a search without predefining an XML path, providing flexibility in search scope.
Note: The tool uses RAG (Retrieval-Augmented Generation) to perform semantic search within XML content, so results will include relevant context from the XML file based on the search query.
```python
from crewai_tools import XMLSearchTool
# Allow agents to search within any XML file's content
@@ -47,12 +49,15 @@ tool = XMLSearchTool(xml='path/to/your/xmlfile.xml')
- `xml`: This is the path to the XML file you wish to search.
It is an optional parameter during the tool's initialization but must be provided either at initialization or as part of the `run` method's arguments to execute a search.
- `search_query`: The query string to search for within the XML content. This is a required parameter when running the search.
The tool inherits from `RagTool` which provides advanced RAG (Retrieval-Augmented Generation) capabilities for semantic search within XML content.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = XMLSearchTool(
config=dict(
llm=dict(
@@ -74,4 +79,4 @@ tool = XMLSearchTool(
),
)
)
```