Merge branch 'main' into devin/1738752192-fix-memory-reset-openai-dependency

This commit is contained in:
João Moura
2025-02-09 15:43:20 -03:00
committed by GitHub
168 changed files with 21016 additions and 39193 deletions

View File

@@ -43,7 +43,7 @@ Think of an agent as a specialized team member with specific skills, expertise,
| **Max Retry Limit** _(optional)_ | `max_retry_limit` | `int` | Maximum number of retries when an error occurs. Default is 2. |
| **Respect Context Window** _(optional)_ | `respect_context_window` | `bool` | Keep messages under context window size by summarizing. Default is True. |
| **Code Execution Mode** _(optional)_ | `code_execution_mode` | `Literal["safe", "unsafe"]` | Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct). Default is 'safe'. |
| **Embedder Config** _(optional)_ | `embedder_config` | `Optional[Dict[str, Any]]` | Configuration for the embedder used by the agent. |
| **Embedder** _(optional)_ | `embedder` | `Optional[Dict[str, Any]]` | Configuration for the embedder used by the agent. |
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | `Optional[List[BaseKnowledgeSource]]` | Knowledge sources available to the agent. |
| **Use System Prompt** _(optional)_ | `use_system_prompt` | `Optional[bool]` | Whether to use system prompt (for o1 model support). Default is True. |
@@ -101,6 +101,8 @@ from crewai_tools import SerperDevTool
class LatestAiDevelopmentCrew():
"""LatestAiDevelopment crew"""
agents_config = "config/agents.yaml"
@agent
def researcher(self) -> Agent:
return Agent(
@@ -150,7 +152,7 @@ agent = Agent(
use_system_prompt=True, # Default: True
tools=[SerperDevTool()], # Optional: List of tools
knowledge_sources=None, # Optional: List of knowledge sources
embedder_config=None, # Optional: Custom embedder configuration
embedder=None, # Optional: Custom embedder configuration
system_template=None, # Optional: Custom system prompt template
prompt_template=None, # Optional: Custom prompt template
response_template=None, # Optional: Custom response template

View File

@@ -12,7 +12,7 @@ The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you
To use the CrewAI CLI, make sure you have CrewAI installed:
```shell
```shell Terminal
pip install crewai
```
@@ -20,7 +20,7 @@ pip install crewai
The basic structure of a CrewAI CLI command is:
```shell
```shell Terminal
crewai [COMMAND] [OPTIONS] [ARGUMENTS]
```
@@ -30,7 +30,7 @@ crewai [COMMAND] [OPTIONS] [ARGUMENTS]
Create a new crew or flow.
```shell
```shell Terminal
crewai create [OPTIONS] TYPE NAME
```
@@ -38,7 +38,7 @@ crewai create [OPTIONS] TYPE NAME
- `NAME`: Name of the crew or flow
Example:
```shell
```shell Terminal
crewai create crew my_new_crew
crewai create flow my_new_flow
```
@@ -47,14 +47,14 @@ crewai create flow my_new_flow
Show the installed version of CrewAI.
```shell
```shell Terminal
crewai version [OPTIONS]
```
- `--tools`: (Optional) Show the installed version of CrewAI tools
Example:
```shell
```shell Terminal
crewai version
crewai version --tools
```
@@ -63,7 +63,7 @@ crewai version --tools
Train the crew for a specified number of iterations.
```shell
```shell Terminal
crewai train [OPTIONS]
```
@@ -71,7 +71,7 @@ crewai train [OPTIONS]
- `-f, --filename TEXT`: Path to a custom file for training (default: "trained_agents_data.pkl")
Example:
```shell
```shell Terminal
crewai train -n 10 -f my_training_data.pkl
```
@@ -79,14 +79,14 @@ crewai train -n 10 -f my_training_data.pkl
Replay the crew execution from a specific task.
```shell
```shell Terminal
crewai replay [OPTIONS]
```
- `-t, --task_id TEXT`: Replay the crew from this task ID, including all subsequent tasks
Example:
```shell
```shell Terminal
crewai replay -t task_123456
```
@@ -94,7 +94,7 @@ crewai replay -t task_123456
Retrieve your latest crew.kickoff() task outputs.
```shell
```shell Terminal
crewai log-tasks-outputs
```
@@ -102,7 +102,7 @@ crewai log-tasks-outputs
Reset the crew memories (long, short, entity, latest_crew_kickoff_outputs).
```shell
```shell Terminal
crewai reset-memories [OPTIONS]
```
@@ -113,7 +113,7 @@ crewai reset-memories [OPTIONS]
- `-a, --all`: Reset ALL memories
Example:
```shell
```shell Terminal
crewai reset-memories --long --short
crewai reset-memories --all
```
@@ -122,7 +122,7 @@ crewai reset-memories --all
Test the crew and evaluate the results.
```shell
```shell Terminal
crewai test [OPTIONS]
```
@@ -130,7 +130,7 @@ crewai test [OPTIONS]
- `-m, --model TEXT`: LLM Model to run the tests on the Crew (default: "gpt-4o-mini")
Example:
```shell
```shell Terminal
crewai test -n 5 -m gpt-3.5-turbo
```
@@ -138,7 +138,7 @@ crewai test -n 5 -m gpt-3.5-turbo
Run the crew.
```shell
```shell Terminal
crewai run
```
<Note>
@@ -147,7 +147,36 @@ Some commands may require additional configuration or setup within your project
</Note>
### 9. API Keys
### 9. Chat
Starting in version `0.98.0`, when you run the `crewai chat` command, you start an interactive session with your crew. The AI assistant will guide you by asking for necessary inputs to execute the crew. Once all inputs are provided, the crew will execute its tasks.
After receiving the results, you can continue interacting with the assistant for further instructions or questions.
```shell Terminal
crewai chat
```
<Note>
Ensure you execute these commands from your CrewAI project's root directory.
</Note>
<Note>
IMPORTANT: Set the `chat_llm` property in your `crew.py` file to enable this command.
```python
@crew
def crew(self) -> Crew:
return Crew(
agents=self.agents,
tasks=self.tasks,
process=Process.sequential,
verbose=True,
chat_llm="gpt-4o", # LLM for chat orchestration
)
```
</Note>
### 10. API Keys
When you run the `crewai create crew` command, the CLI will first show you the top 5 most common LLM providers and ask you to select one.
@@ -161,6 +190,7 @@ The CLI will initially prompt for API keys for the following services:
* Groq
* Anthropic
* Google Gemini
* SambaNova
When you select a provider, the CLI will prompt you to enter your API key.

View File

@@ -23,14 +23,14 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Language** _(optional)_ | `language` | Language used for the crew, defaults to English. |
| **Language File** _(optional)_ | `language_file` | Path to the language file to be used for the crew. |
| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). |
| **Memory Config** _(optional)_ | `memory_config` | Configuration for the memory provider to be used by the crew. |
| **Cache** _(optional)_ | `cache` | Specifies whether to use a cache for storing the results of tools' execution. Defaults to `True`. |
| **Embedder** _(optional)_ | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. Default is `{"provider": "openai"}`. |
| **Full Output** _(optional)_ | `full_output` | Whether the crew should return the full output with all tasks outputs or just the final output. Defaults to `False`. |
| **Step Callback** _(optional)_ | `step_callback` | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback`. |
| **Task Callback** _(optional)_ | `task_callback` | A function that is called after the completion of each task. Useful for monitoring or additional operations post-task execution. |
| **Share Crew** _(optional)_ | `share_crew` | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
| **Output Log File** _(optional)_ | `output_log_file` | Whether you want to have a file with the complete crew output and execution. You can set it using True and it will default to the folder you are currently in and it will be called logs.txt or passing a string with the full path and name of the file. |
| **Output Log File** _(optional)_ | `output_log_file` | Set to True to save logs as logs.txt in the current directory, or provide a file path. Logs will be in JSON format if the filename ends in .json, otherwise .txt. Defaults to `None`. |
| **Manager Agent** _(optional)_ | `manager_agent` | `manager` sets a custom agent that will be used as a manager. |
| **Prompt File** _(optional)_ | `prompt_file` | Path to the prompt JSON file to be used for the crew. |
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated before each Crew iteration, all Crew data is sent to an AgentPlanner that will plan the tasks and this plan will be added to each task description. |
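For illustration, a crew combining several of these attributes might look like this (a sketch; `researcher`, `writer`, and the two tasks are assumed to be defined elsewhere):
```python Code
from crewai import Crew, Process

# Sketch only: researcher, writer, research_task, and write_task are assumed
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,
    memory=True,                      # enable short-term, long-term, and entity memory
    cache=True,                       # cache tool execution results
    planning=True,                    # plan tasks before each crew iteration
    output_log_file="crew_run.json",  # JSON logs because of the .json extension
)
```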
@@ -240,6 +240,23 @@ print(f"Tasks Output: {crew_output.tasks_output}")
print(f"Token Usage: {crew_output.token_usage}")
```
## Accessing Crew Logs
You can see real-time logs of the crew execution by setting `output_log_file` to `True` (boolean) or a `file_name` (string). Events are logged to either `file_name.txt` or `file_name.json`.
If set to `True`, logs are saved as `logs.txt`.
If `output_log_file` is set to `False` or `None`, no logs are saved.
```python Code
# Save crew logs
crew = Crew(output_log_file=True)  # Logs will be saved as logs.txt
crew = Crew(output_log_file="file_name")  # Logs will be saved as file_name.txt
crew = Crew(output_log_file="file_name.txt")  # Logs will be saved as file_name.txt
crew = Crew(output_log_file="file_name.json")  # Logs will be saved as file_name.json
```
## Memory Utilization
Crews can utilize memory (short-term, long-term, and entity memory) to enhance their execution and learning over time. This feature allows crews to store and recall execution memories, aiding in decision-making and task execution strategies.
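Enabling it is a single flag (a minimal sketch; agents and tasks are assumed to be defined):
```python Code
from crewai import Crew

crew = Crew(
    agents=[...],
    tasks=[...],
    memory=True,  # turns on short-term, long-term, and entity memory
)
```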
@@ -279,9 +296,9 @@ print(result)
Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process: `kickoff()`, `kickoff_for_each()`, `kickoff_async()`, and `kickoff_for_each_async()`.
- `kickoff()`: Starts the execution process according to the defined process flow.
- `kickoff_for_each()`: Executes tasks for each agent individually.
- `kickoff_for_each()`: Executes tasks sequentially for each provided input event or item in the collection.
- `kickoff_async()`: Initiates the workflow asynchronously.
- `kickoff_for_each_async()`: Executes tasks for each agent individually in an asynchronous manner.
- `kickoff_for_each_async()`: Executes tasks concurrently for each provided input event or item, leveraging asynchronous processing.
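For example, `kickoff_for_each()` takes a list of input dictionaries and returns one output per input (a sketch, assuming `crew` is assembled as above):
```python Code
inputs_list = [{"topic": "AI in healthcare"}, {"topic": "AI in finance"}]

# Runs the full crew once per input dictionary, sequentially
results = crew.kickoff_for_each(inputs=inputs_list)

for crew_output in results:
    print(crew_output.raw)
```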
```python Code
# Start the crew's task execution

View File

@@ -35,6 +35,8 @@ class ExampleFlow(Flow):
@start()
def generate_city(self):
print("Starting flow")
# Each flow state automatically gets a unique ID
print(f"Flow State ID: {self.state['id']}")
response = completion(
model=self.model,
@@ -47,6 +49,8 @@ class ExampleFlow(Flow):
)
random_city = response["choices"][0]["message"]["content"]
# Store the city in our state
self.state["city"] = random_city
print(f"Random City: {random_city}")
return random_city
@@ -64,6 +68,8 @@ class ExampleFlow(Flow):
)
fun_fact = response["choices"][0]["message"]["content"]
# Store the fun fact in our state
self.state["fun_fact"] = fun_fact
return fun_fact
@@ -76,7 +82,15 @@ print(f"Generated fun fact: {result}")
In the above example, we have created a simple Flow that generates a random city using OpenAI and then generates a fun fact about that city. The Flow consists of two tasks: `generate_city` and `generate_fun_fact`. The `generate_city` task is the starting point of the Flow, and the `generate_fun_fact` task listens for the output of the `generate_city` task.
When you run the Flow, it will generate a random city and then generate a fun fact about that city. The output will be printed to the console.
Each Flow instance automatically receives a unique identifier (UUID) in its state, which helps track and manage flow executions. The state can also store additional data (like the generated city and fun fact) that persists throughout the flow's execution.
When you run the Flow, it will:
1. Generate a unique ID for the flow state
2. Generate a random city and store it in the state
3. Generate a fun fact about that city and store it in the state
4. Print the results to the console
The state's unique ID and stored data can be useful for tracking flow executions and maintaining context between tasks.
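For instance, after kickoff you can inspect the stored state (a sketch based on the example above):
```python Code
flow = ExampleFlow()
result = flow.kickoff()

# The unstructured state keeps the auto-generated ID plus anything we stored
print(f"State ID: {flow.state['id']}")
print(f"City: {flow.state['city']}")
print(f"Fun fact: {flow.state['fun_fact']}")
```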
**Note:** Ensure you have set up your `.env` file to store your `OPENAI_API_KEY`. This key is necessary for authenticating requests to the OpenAI API.
@@ -138,7 +152,7 @@ print("---- Final Output ----")
print(final_output)
````
``` text Output
```text Output
---- Final Output ----
Second method received: Output from first_method
````
@@ -207,34 +221,39 @@ allowing developers to choose the approach that best fits their application's ne
In unstructured state management, all state is stored in the `state` attribute of the `Flow` class.
This approach offers flexibility, enabling developers to add or modify state attributes on the fly without defining a strict schema.
Even with unstructured states, CrewAI Flows automatically generates and maintains a unique identifier (UUID) for each state instance.
```python Code
from crewai.flow.flow import Flow, listen, start
class UntructuredExampleFlow(Flow):
class UnstructuredExampleFlow(Flow):
@start()
def first_method(self):
self.state.message = "Hello from structured flow"
self.state.counter = 0
# The state automatically includes an 'id' field
print(f"State ID: {self.state['id']}")
self.state['counter'] = 0
self.state['message'] = "Hello from structured flow"
@listen(first_method)
def second_method(self):
self.state.counter += 1
self.state.message += " - updated"
self.state['counter'] += 1
self.state['message'] += " - updated"
@listen(second_method)
def third_method(self):
self.state.counter += 1
self.state.message += " - updated again"
self.state['counter'] += 1
self.state['message'] += " - updated again"
print(f"State after third_method: {self.state}")
flow = UntructuredExampleFlow()
flow = UnstructuredExampleFlow()
flow.kickoff()
```
**Note:** The `id` field is automatically generated and preserved throughout the flow's execution. You don't need to manage or set it manually, and it will be maintained even when updating the state with new data.
**Key Points:**
- **Flexibility:** You can dynamically add attributes to `self.state` without predefined constraints.
@@ -245,12 +264,15 @@ flow.kickoff()
Structured state management leverages predefined schemas to ensure consistency and type safety across the workflow.
By using models like Pydantic's `BaseModel`, developers can define the exact shape of the state, enabling better validation and auto-completion in development environments.
Each state in CrewAI Flows automatically receives a unique identifier (UUID) to help track and manage state instances. This ID is automatically generated and managed by the Flow system.
```python Code
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
class ExampleState(BaseModel):
# Note: 'id' field is automatically added to all states
counter: int = 0
message: str = ""
@@ -259,6 +281,8 @@ class StructuredExampleFlow(Flow[ExampleState]):
@start()
def first_method(self):
# Access the auto-generated ID if needed
print(f"State ID: {self.state.id}")
self.state.message = "Hello from structured flow"
@listen(first_method)
@@ -299,6 +323,91 @@ flow.kickoff()
By providing both unstructured and structured state management options, CrewAI Flows empowers developers to build AI workflows that are both flexible and robust, catering to a wide range of application requirements.
## Flow Persistence
The @persist decorator enables automatic state persistence in CrewAI Flows, allowing you to maintain flow state across restarts or different workflow executions. This decorator can be applied at either the class level or method level, providing flexibility in how you manage state persistence.
### Class-Level Persistence
When applied at the class level, the @persist decorator automatically persists all flow method states:
```python
from crewai.flow.flow import Flow, listen, start
from crewai.flow.persistence import persist
from pydantic import BaseModel

# Assumed minimal state model for this example
class MyState(BaseModel):
    counter: int = 0

@persist # Using SQLiteFlowPersistence by default
class MyFlow(Flow[MyState]):
@start()
def initialize_flow(self):
# This method will automatically have its state persisted
self.state.counter = 1
print("Initialized flow. State ID:", self.state.id)
@listen(initialize_flow)
def next_step(self):
# The state (including self.state.id) is automatically reloaded
self.state.counter += 1
print("Flow state is persisted. Counter:", self.state.counter)
```
### Method-Level Persistence
For more granular control, you can apply @persist to specific methods:
```python
from crewai.flow.flow import Flow, start
from crewai.flow.persistence import persist

class AnotherFlow(Flow[dict]):
@persist # Persists only this method's state
@start()
def begin(self):
if "runs" not in self.state:
self.state["runs"] = 0
self.state["runs"] += 1
print("Method-level persisted runs:", self.state["runs"])
```
### How It Works
1. **Unique State Identification**
- Each flow state automatically receives a unique UUID
- The ID is preserved across state updates and method calls
- Supports both structured (Pydantic BaseModel) and unstructured (dictionary) states
2. **Default SQLite Backend**
- SQLiteFlowPersistence is the default storage backend
- States are automatically saved to a local SQLite database
- Robust error handling ensures clear messages if database operations fail
3. **Error Handling**
- Comprehensive error messages for database operations
- Automatic state validation during save and load
- Clear feedback when persistence operations encounter issues
### Important Considerations
- **State Types**: Both structured (Pydantic BaseModel) and unstructured (dictionary) states are supported
- **Automatic ID**: The `id` field is automatically added if not present
- **State Recovery**: Failed or restarted flows can automatically reload their previous state
- **Custom Implementation**: You can provide your own FlowPersistence implementation for specialized storage needs
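As a sketch of that last point, you can also hand a configured backend to the decorator (the import path and the `db_path` argument are assumptions based on the default SQLite backend):
```python
from crewai.flow.flow import Flow, start
from crewai.flow.persistence import persist, SQLiteFlowPersistence

# Assumption: SQLiteFlowPersistence accepts a custom database path
persistence = SQLiteFlowPersistence(db_path="my_flows.db")

@persist(persistence)
class DurableFlow(Flow[dict]):
    @start()
    def step(self):
        # State (including its ID) is saved to my_flows.db after this method
        self.state["visits"] = self.state.get("visits", 0) + 1
```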
### Technical Advantages
1. **Precise Control Through Low-Level Access**
- Direct access to persistence operations for advanced use cases
- Fine-grained control via method-level persistence decorators
- Built-in state inspection and debugging capabilities
- Full visibility into state changes and persistence operations
2. **Enhanced Reliability**
- Automatic state recovery after system failures or restarts
- Transaction-based state updates for data integrity
- Comprehensive error handling with clear error messages
- Robust validation during state save and load operations
3. **Extensible Architecture**
- Customizable persistence backend through FlowPersistence interface
- Support for specialized storage solutions beyond SQLite
- Compatible with both structured (Pydantic) and unstructured (dict) states
- Seamless integration with existing CrewAI flow patterns
The persistence system's architecture emphasizes technical precision and customization options, allowing developers to maintain full control over state management while benefiting from built-in reliability features.
## Flow Control
### Conditional Logic: `or`
@@ -628,4 +737,4 @@ Also, check out our YouTube video on how to use flows in CrewAI below!
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
referrerpolicy="strict-origin-when-cross-origin"
allowfullscreen
></iframe>

View File

@@ -4,8 +4,6 @@ description: What is knowledge in CrewAI and how to use it.
icon: book
---
# Using Knowledge in CrewAI
## What is Knowledge?
Knowledge in CrewAI is a powerful system that allows AI agents to access and utilize external information sources during their tasks.
@@ -36,7 +34,20 @@ CrewAI supports various types of knowledge sources out of the box:
</Card>
</CardGroup>
## Quick Start
## Supported Knowledge Parameters
| Parameter | Type | Required | Description |
| :--------------------------- | :---------------------------------- | :------- | :---------------------------------------------------------------------------------------------------------------------------------------------------- |
| `sources` | **List[BaseKnowledgeSource]** | Yes | List of knowledge sources that provide content to be stored and queried. Can include PDF, CSV, Excel, JSON, text files, or string content. |
| `collection_name` | **str** | No | Name of the collection where the knowledge will be stored. Used to identify different sets of knowledge. Defaults to "knowledge" if not provided. |
| `storage` | **Optional[KnowledgeStorage]** | No | Custom storage configuration for managing how the knowledge is stored and retrieved. If not provided, a default storage will be created. |
## Quickstart Example
<Tip>
For file-based knowledge sources, make sure to place your files in a `knowledge` directory at the root of your project.
Also, use relative paths from the `knowledge` directory when creating the source.
</Tip>
Here's an example using string-based knowledge:
@@ -80,7 +91,14 @@ result = crew.kickoff(inputs={"question": "What city does John live in and how o
```
Here's another example with the `CrewDoclingSource`
Here's another example with the `CrewDoclingSource`. `CrewDoclingSource` is quite versatile and can handle multiple file formats, including TXT, PDF, DOCX, HTML, and more.
<Note>
You need to install `docling` for the following example to work: `uv add docling`
</Note>
```python Code
from crewai import LLM, Agent, Crew, Process, Task
from crewai.knowledge.source.crew_docling_source import CrewDoclingSource
@@ -128,39 +146,225 @@ result = crew.kickoff(
)
```
## More Examples
Here are examples of how to use different types of knowledge sources:
### Text File Knowledge Source
```python
from crewai.knowledge.source.crew_docling_source import CrewDoclingSource
# Create a text file knowledge source
text_source = CrewDoclingSource(
file_paths=["document.txt", "another.txt"]
)
# Create crew with text file source on agents or crew level
agent = Agent(
...
knowledge_sources=[text_source]
)
crew = Crew(
...
knowledge_sources=[text_source]
)
```
### PDF Knowledge Source
```python
from crewai.knowledge.source.pdf_knowledge_source import PDFKnowledgeSource
# Create a PDF knowledge source
pdf_source = PDFKnowledgeSource(
file_paths=["document.pdf", "another.pdf"]
)
# Create crew with PDF knowledge source on agents or crew level
agent = Agent(
...
knowledge_sources=[pdf_source]
)
crew = Crew(
...
knowledge_sources=[pdf_source]
)
```
### CSV Knowledge Source
```python
from crewai.knowledge.source.csv_knowledge_source import CSVKnowledgeSource
# Create a CSV knowledge source
csv_source = CSVKnowledgeSource(
file_paths=["data.csv"]
)
# Create crew with CSV knowledge source on agents or crew level
agent = Agent(
...
knowledge_sources=[csv_source]
)
crew = Crew(
...
knowledge_sources=[csv_source]
)
```
### Excel Knowledge Source
```python
from crewai.knowledge.source.excel_knowledge_source import ExcelKnowledgeSource
# Create an Excel knowledge source
excel_source = ExcelKnowledgeSource(
file_paths=["spreadsheet.xlsx"]
)
# Create crew with Excel knowledge source on agents or crew level
agent = Agent(
...
knowledge_sources=[excel_source]
)
crew = Crew(
...
knowledge_sources=[excel_source]
)
```
### JSON Knowledge Source
```python
from crewai.knowledge.source.json_knowledge_source import JSONKnowledgeSource
# Create a JSON knowledge source
json_source = JSONKnowledgeSource(
file_paths=["data.json"]
)
# Create crew with JSON knowledge source on agents or crew level
agent = Agent(
...
knowledge_sources=[json_source]
)
crew = Crew(
...
knowledge_sources=[json_source]
)
```
## Knowledge Configuration
### Chunking Configuration
Knowledge sources automatically chunk content for better processing.
You can control how content is split by configuring the chunk size and overlap in your knowledge sources:
```python
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

source = StringKnowledgeSource(
    content="Your content here",
    chunk_size=4000,     # Maximum size of each chunk (default: 4000)
    chunk_overlap=200    # Overlap between chunks (default: 200)
)
```
The chunking configuration helps in:
- Breaking down large documents into manageable pieces
- Maintaining context through chunk overlap
- Optimizing retrieval accuracy
### Embeddings Configuration
You can also configure the embedder for the knowledge store.
This is useful if you want to use a different embedder for the knowledge store than the one used for the agents.
The `embedder` parameter supports various embedding model providers that include:
- `openai`: OpenAI's embedding models
- `google`: Google's text embedding models
- `azure`: Azure OpenAI embeddings
- `ollama`: Local embeddings with Ollama
- `vertexai`: Google Cloud VertexAI embeddings
- `cohere`: Cohere's embedding models
- `voyageai`: VoyageAI's embedding models
- `bedrock`: AWS Bedrock embeddings
- `huggingface`: Hugging Face models
- `watson`: IBM Watson embeddings
Here's an example of how to configure the embedder for the knowledge store using Google's `text-embedding-004` model:
<CodeGroup>
```python Example
from crewai import Agent, Task, Crew, Process, LLM
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
import os
# Get the GEMINI API key
GEMINI_API_KEY = os.environ.get("GEMINI_API_KEY")
# Create a knowledge source
content = "Users name is John. He is 30 years old and lives in San Francisco."
string_source = StringKnowledgeSource(
content="Users name is John. He is 30 years old and lives in San Francisco.",
content=content,
)
# Create an LLM with a temperature of 0 to ensure deterministic outputs
gemini_llm = LLM(
model="gemini/gemini-1.5-pro-002",
api_key=GEMINI_API_KEY,
temperature=0,
)
# Create an agent with the knowledge store
agent = Agent(
role="About User",
goal="You know everything about the user.",
backstory="""You are a master at understanding people and their preferences.""",
verbose=True,
allow_delegation=False,
llm=gemini_llm,
embedder={
"provider": "google",
"config": {
"model": "models/text-embedding-004",
"api_key": GEMINI_API_KEY,
}
}
)
task = Task(
description="Answer the following questions about the user: {question}",
expected_output="An answer to the question.",
agent=agent,
)
crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,
    process=Process.sequential,
    knowledge_sources=[string_source],
    embedder={
        "provider": "google",
        "config": {
            "model": "models/text-embedding-004",
            "api_key": GEMINI_API_KEY,
        }
    }
)

result = crew.kickoff(inputs={"question": "What city does John live in and how old is he?"})
```
```text Output
# Agent: About User
## Task: Answer the following questions about the user: What city does John live in and how old is he?
# Agent: About User
## Final Answer:
John is 30 years old and lives in San Francisco.
```
</CodeGroup>
## Clearing Knowledge
If you need to clear the knowledge stored in CrewAI, you can use the `crewai reset-memories` command with the `--knowledge` option.
@@ -171,6 +375,58 @@ crewai reset-memories --knowledge
This is useful when you've updated your knowledge sources and want to ensure that the agents are using the most recent information.
## Agent-Specific Knowledge
While knowledge can be provided at the crew level using `crew.knowledge_sources`, individual agents can also have their own knowledge sources using the `knowledge_sources` parameter:
```python Code
from crewai import Agent, Task, Crew
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
# Create agent-specific knowledge about a product
product_specs = StringKnowledgeSource(
content="""The XPS 13 laptop features:
- 13.4-inch 4K display
- Intel Core i7 processor
- 16GB RAM
- 512GB SSD storage
- 12-hour battery life""",
metadata={"category": "product_specs"}
)
# Create a support agent with product knowledge
support_agent = Agent(
role="Technical Support Specialist",
goal="Provide accurate product information and support.",
backstory="You are an expert on our laptop products and specifications.",
knowledge_sources=[product_specs] # Agent-specific knowledge
)
# Create a task that requires product knowledge
support_task = Task(
    description="Answer this customer question: {question}",
    expected_output="A concise, accurate answer based on the product specifications.",
    agent=support_agent
)
# Create and run the crew
crew = Crew(
agents=[support_agent],
tasks=[support_task]
)
# Get answer about the laptop's specifications
result = crew.kickoff(
inputs={"question": "What is the storage capacity of the XPS 13?"}
)
```
<Info>
Benefits of agent-specific knowledge:
- Give agents specialized information for their roles
- Maintain separation of concerns between agents
- Combine with crew-level knowledge for layered information access
</Info>
## Custom Knowledge Sources
CrewAI allows you to create custom knowledge sources for any type of data by extending the `BaseKnowledgeSource` class. Let's create a practical example that fetches and processes space news articles.

View File

@@ -38,6 +38,7 @@ Here's a detailed breakdown of supported models and their capabilities, you can
| GPT-4 | 8,192 tokens | High-accuracy tasks, complex reasoning |
| GPT-4 Turbo | 128,000 tokens | Long-form content, document analysis |
| GPT-4o & GPT-4o-mini | 128,000 tokens | Cost-effective large context processing |
| o3-mini | 200,000 tokens | Fast reasoning on complex tasks |
<Note>
1 token ≈ 4 characters in English. For example, 8,192 tokens ≈ 32,768 characters or about 6,000 words.
@@ -146,10 +147,24 @@ Here's a detailed breakdown of supported models and their capabilities, you can
Groq is known for its fast inference speeds, making it suitable for real-time applications.
</Tip>
</Tab>
<Tab title="SambaNova">
| Model | Context Window | Best For |
|-------|---------------|-----------|
| Llama 3.1 70B/8B | Up to 131,072 tokens | High-performance, large context tasks |
| Llama 3.1 405B | 8,192 tokens | High-performance and output quality |
| Llama 3.2 Series | 8,192 tokens | General-purpose tasks, multimodal |
| Llama 3.3 70B | Up to 131,072 tokens | High-performance and output quality |
| Qwen2 family | 8,192 tokens | High-performance and output quality |
<Tip>
[SambaNova](https://cloud.sambanova.ai/) has several models with fast inference speed at full precision.
</Tip>
</Tab>
<Tab title="Others">
| Provider | Context Window | Key Features |
|----------|---------------|--------------|
| Deepseek Chat | 128,000 tokens | Specialized in technical discussions |
| Deepseek Chat | 64,000 tokens | Specialized in technical discussions |
| Deepseek R1 | 64,000 tokens | Affordable reasoning model |
| Claude 3 | Up to 200K tokens | Strong reasoning, code understanding |
| Gemma Series | 8,192 tokens | Efficient, smaller-scale tasks |
@@ -230,6 +245,9 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi
# llm: bedrock/amazon.titan-text-express-v1
# llm: bedrock/meta.llama2-70b-chat-v1
# Amazon SageMaker Models - Enterprise-grade
# llm: sagemaker/<my-endpoint>
# Mistral Models - Open source alternative
# llm: mistral/mistral-large-latest
# llm: mistral/mistral-medium-latest
@@ -280,6 +298,10 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi
# llm: sambanova/Meta-Llama-3.1-8B-Instruct
# llm: sambanova/BioMistral-7B
# llm: sambanova/Falcon-180B
# Open Router Models - Affordable reasoning
# llm: openrouter/deepseek/deepseek-r1
# llm: openrouter/deepseek/deepseek-chat
```
<Info>
@@ -441,19 +463,36 @@ Learn how to get the most out of your LLM configuration:
<Accordion title="Google">
```python Code
# Option 1. Gemini accessed with an API key.
# Option 1: Gemini accessed with an API key.
# https://ai.google.dev/gemini-api/docs/api-key
GEMINI_API_KEY=<your-api-key>
# Option 2. Vertex AI IAM credentials for Gemini, Anthropic, and anything in the Model Garden.
# Option 2: Vertex AI IAM credentials for Gemini, Anthropic, and Model Garden.
# https://cloud.google.com/vertex-ai/generative-ai/docs/overview
```
Get credentials:
```python Code
import json
file_path = 'path/to/vertex_ai_service_account.json'
# Load the JSON file
with open(file_path, 'r') as file:
vertex_credentials = json.load(file)
# Convert the credentials to a JSON string
vertex_credentials_json = json.dumps(vertex_credentials)
```
Example usage:
```python Code
from crewai import LLM
llm = LLM(
model="gemini/gemini-1.5-pro-latest",
temperature=0.7
temperature=0.7,
vertex_credentials=vertex_credentials_json
)
```
</Accordion>
@@ -493,6 +532,21 @@ Learn how to get the most out of your LLM configuration:
)
```
</Accordion>
<Accordion title="Amazon SageMaker">
```python Code
AWS_ACCESS_KEY_ID=<your-access-key>
AWS_SECRET_ACCESS_KEY=<your-secret-key>
AWS_DEFAULT_REGION=<your-region>
```
Example usage:
```python Code
llm = LLM(
model="sagemaker/<my-endpoint>"
)
```
</Accordion>
<Accordion title="Mistral">
```python Code
@@ -649,8 +703,53 @@ Learn how to get the most out of your LLM configuration:
- Support for long context windows
</Info>
</Accordion>
<Accordion title="Open Router">
```python Code
OPENROUTER_API_KEY=<your-api-key>
```
Example usage:
```python Code
llm = LLM(
model="openrouter/deepseek/deepseek-r1",
base_url="https://openrouter.ai/api/v1",
api_key=OPENROUTER_API_KEY
)
```
<Info>
Open Router models:
- openrouter/deepseek/deepseek-r1
- openrouter/deepseek/deepseek-chat
</Info>
</Accordion>
</AccordionGroup>
## Structured LLM Calls
CrewAI supports structured responses from LLM calls by allowing you to define a `response_format` using a Pydantic model. This enables the framework to automatically parse and validate the output, making it easier to integrate the response into your application without manual post-processing.
For example, you can define a Pydantic model to represent the expected response structure and pass it as the `response_format` when instantiating the LLM. The model will then be used to convert the LLM output into a structured Python object.
```python Code
from crewai import LLM
from pydantic import BaseModel

class Dog(BaseModel):
name: str
age: int
breed: str
llm = LLM(model="gpt-4o", response_format=Dog)
response = llm.call(
"Analyze the following messages and return the name, age, and breed. "
"Meet Kona! She is 3 years old and is a black german shepherd."
)
print(response)
```
## Common Issues and Solutions
<Tabs>

View File

@@ -134,6 +134,23 @@ crew = Crew(
)
```
## Memory Configuration Options
If you want to access a specific organization and project, you can set the `org_id` and `project_id` parameters in the memory configuration.
```python Code
from crewai import Crew
crew = Crew(
agents=[...],
tasks=[...],
verbose=True,
memory=True,
memory_config={
"provider": "mem0",
"config": {"user_id": "john", "org_id": "my_org_id", "project_id": "my_project_id"},
},
)
```
## Additional Embedding Providers
@@ -168,7 +185,12 @@ my_crew = Crew(
process=Process.sequential,
memory=True,
verbose=True,
embedder=OpenAIEmbeddingFunction(api_key=os.getenv("OPENAI_API_KEY"), model_name="text-embedding-3-small"),
embedder={
"provider": "openai",
"config": {
"model": 'text-embedding-3-small'
}
}
)
```
@@ -207,7 +229,7 @@ my_crew = Crew(
"provider": "google",
"config": {
"api_key": "<YOUR_API_KEY>",
"model_name": "<model_name>"
"model": "<model_name>"
}
}
)
@@ -225,13 +247,15 @@ my_crew = Crew(
process=Process.sequential,
memory=True,
verbose=True,
embedder=OpenAIEmbeddingFunction(
api_key="YOUR_API_KEY",
api_base="YOUR_API_BASE_PATH",
api_type="azure",
api_version="YOUR_API_VERSION",
model_name="text-embedding-3-small"
)
embedder={
"provider": "openai",
"config": {
"api_key": "YOUR_API_KEY",
"api_base": "YOUR_API_BASE_PATH",
"api_version": "YOUR_API_VERSION",
"model_name": 'text-embedding-3-small'
}
}
)
```
@@ -247,12 +271,15 @@ my_crew = Crew(
process=Process.sequential,
memory=True,
verbose=True,
embedder=GoogleVertexEmbeddingFunction(
project_id="YOUR_PROJECT_ID",
region="YOUR_REGION",
api_key="YOUR_API_KEY",
model_name="textembedding-gecko"
)
embedder={
"provider": "vertexai",
"config": {
"project_id"="YOUR_PROJECT_ID",
"region"="YOUR_REGION",
"api_key"="YOUR_API_KEY",
"model_name"="textembedding-gecko"
}
}
)
```
@@ -271,7 +298,27 @@ my_crew = Crew(
"provider": "cohere",
"config": {
"api_key": "YOUR_API_KEY",
"model_name": "<model_name>"
"model": "<model_name>"
}
}
)
```
### Using VoyageAI embeddings
```python Code
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "voyageai",
"config": {
"api_key": "YOUR_API_KEY",
"model": "<model_name>"
}
}
)
@@ -321,6 +368,33 @@ my_crew = Crew(
)
```
### Adding Custom Embedding Function
```python Code
from crewai import Crew, Agent, Task, Process
from chromadb import Documents, EmbeddingFunction, Embeddings
# Create a custom embedding function
class CustomEmbedder(EmbeddingFunction):
def __call__(self, input: Documents) -> Embeddings:
# generate embeddings
return [[1.0, 2.0, 3.0] for _ in input]  # dummy embedding for each input document
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "custom",
"config": {
"embedder": CustomEmbedder()
}
}
)
```
### Resetting Memory
```shell

View File

@@ -31,7 +31,7 @@ From this point on, your crew will have planning enabled, and the tasks will be
#### Planning LLM
Now you can define the LLM that will be used to plan the tasks. You can use any ChatOpenAI LLM model available.
Now you can define the LLM that will be used to plan the tasks.
When running the base case example, you will see something like the output below, which represents the output of the `AgentPlanner`
responsible for creating the step-by-step logic to add to the Agents' tasks.
@@ -39,7 +39,6 @@ responsible for creating the step-by-step logic to add to the Agents' tasks.
<CodeGroup>
```python Code
from crewai import Crew, Agent, Task, Process
from langchain_openai import ChatOpenAI
# Assemble your crew with planning capabilities and custom LLM
my_crew = Crew(
@@ -47,7 +46,7 @@ my_crew = Crew(
tasks=self.tasks,
process=Process.sequential,
planning=True,
planning_llm=ChatOpenAI(model="gpt-4o")
planning_llm="gpt-4o"
)
# Run the crew
@@ -82,8 +81,8 @@ my_crew.kickoff()
3. **Collect Data:**
- Search for the latest papers, articles, and reports published in 2023 and early 2024.
- Use keywords like "Large Language Models 2024", "AI LLM advancements", "AI ethics 2024", etc.
- Search for the latest papers, articles, and reports published in 2024 and early 2025.
- Use keywords like "Large Language Models 2025", "AI LLM advancements", "AI ethics 2025", etc.
4. **Analyze Findings:**

View File

@@ -23,9 +23,7 @@ Processes enable individual agents to operate as a cohesive unit, streamlining t
To assign a process to a crew, specify the process type upon crew creation to set the execution strategy. For a hierarchical process, ensure to define `manager_llm` or `manager_agent` for the manager agent.
```python
from crewai import Crew
from crewai.process import Process
from langchain_openai import ChatOpenAI
from crewai import Crew, Process
# Example: Creating a crew with a sequential process
crew = Crew(
@@ -40,7 +38,7 @@ crew = Crew(
agents=my_agents,
tasks=my_tasks,
process=Process.hierarchical,
manager_llm=ChatOpenAI(model="gpt-4")
manager_llm="gpt-4o"
# or
# manager_agent=my_manager_agent
)

View File

@@ -33,11 +33,12 @@ crew = Crew(
| :------------------------------- | :---------------- | :---------------------------- | :------------------------------------------------------------------------------------------------------------------- |
| **Description** | `description` | `str` | A clear, concise statement of what the task entails. |
| **Expected Output** | `expected_output` | `str` | A detailed description of what the task's completion looks like. |
| **Name** _(optional)_ | `name` | `Optional[str]` | A name identifier for the task. |
| **Agent** _(optional)_ | `agent` | `Optional[BaseAgent]` | The agent responsible for executing the task. |
| **Tools** _(optional)_ | `tools` | `List[BaseTool]` | The tools/resources the agent is limited to use for this task. |
| **Context** _(optional)_ | `context` | `Optional[List["Task"]]` | Other tasks whose outputs will be used as context for this task. |
| **Async Execution** _(optional)_ | `async_execution` | `Optional[bool]` | Whether the task should be executed asynchronously. Defaults to False. |
| **Human Input** _(optional)_ | `human_input` | `Optional[bool]` | Whether the task should have a human review the final answer of the agent. Defaults to False. |
| **Config** _(optional)_ | `config` | `Optional[Dict[str, Any]]` | Task-specific configuration parameters. |
| **Output File** _(optional)_ | `output_file` | `Optional[str]` | File path for storing the task output. |
| **Output JSON** _(optional)_ | `output_json` | `Optional[Type[BaseModel]]` | A Pydantic model to structure the JSON output. |
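Putting a few of these attributes together (a sketch; `research_agent` and `prior_task` are assumed to be defined elsewhere):
```python Code
from crewai import Task

analysis_task = Task(
    name="analysis",
    description="Analyze the research findings",
    expected_output="A short analysis memo",
    agent=research_agent,    # assumed agent
    context=[prior_task],    # use prior_task's output as context
    async_execution=False,
    output_file="analysis.md",
)
```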
@@ -68,7 +69,7 @@ research_task:
description: >
Conduct a thorough research about {topic}
Make sure you find any interesting and relevant information given
the current year is 2024.
the current year is 2025.
expected_output: >
A list with 10 bullet points of the most relevant information about {topic}
agent: researcher
@@ -154,7 +155,7 @@ research_task = Task(
description="""
Conduct a thorough research about AI Agents.
Make sure you find any interesting and relevant information given
the current year is 2024.
the current year is 2025.
""",
expected_output="""
A list with 10 bullet points of the most relevant information about AI Agents

View File

@@ -150,15 +150,20 @@ There are two main ways for one to create a CrewAI tool:
```python Code
from typing import Type

from crewai.tools import BaseTool
from pydantic import BaseModel, Field
class MyToolInput(BaseModel):
"""Input schema for MyCustomTool."""
argument: str = Field(..., description="Description of the argument.")
class MyCustomTool(BaseTool):
name: str = "Name of my tool"
description: str = "Clear description for what this tool is useful for, your agent will need this information to use it."
description: str = "What this tool does. It's vital for effective utilization."
args_schema: Type[BaseModel] = MyToolInput
def _run(self, argument: str) -> str:
# Implementation goes here
return "Result from custom tool"
# Your tool's logic here
return "Tool's result"
```
### Utilizing the `tool` Decorator

View File

@@ -73,9 +73,9 @@ result = crew.kickoff()
If you're using the hierarchical process and don't want to set a custom manager agent, you can specify the language model for the manager:
```python Code
from langchain_openai import ChatOpenAI
from crewai import LLM
manager_llm = ChatOpenAI(model_name="gpt-4")
manager_llm = LLM(model="gpt-4o")
crew = Crew(
agents=[researcher, writer],

View File

@@ -60,12 +60,12 @@ writer = Agent(
# Create tasks for your agents
task1 = Task(
description=(
"Conduct a comprehensive analysis of the latest advancements in AI in 2024. "
"Conduct a comprehensive analysis of the latest advancements in AI in 2025. "
"Identify key trends, breakthrough technologies, and potential industry impacts. "
"Compile your findings in a detailed report. "
"Make sure to check with a human if the draft is good before finalizing your answer."
),
expected_output='A comprehensive full report on the latest AI advancements in 2024, leave nothing out',
expected_output='A comprehensive full report on the latest AI advancements in 2025, leave nothing out',
agent=researcher,
human_input=True
)
@@ -76,7 +76,7 @@ task2 = Task(
"Your post should be informative yet accessible, catering to a tech-savvy audience. "
"Aim for a narrative that captures the essence of these breakthroughs and their implications for the future."
),
expected_output='A compelling 3 paragraphs blog post formatted as markdown about the latest AI advancements in 2024',
expected_output='A compelling 3 paragraphs blog post formatted as markdown about the latest AI advancements in 2025',
agent=writer,
human_input=True
)

View File

@@ -23,6 +23,7 @@ LiteLLM supports a wide range of providers, including but not limited to:
- Azure OpenAI
- AWS (Bedrock, SageMaker)
- Cohere
- VoyageAI
- Hugging Face
- Ollama
- Mistral AI
@@ -32,6 +33,7 @@ LiteLLM supports a wide range of providers, including but not limited to:
- Cloudflare Workers AI
- DeepInfra
- Groq
- SambaNova
- [NVIDIA NIMs](https://docs.api.nvidia.com/nim/reference/models-1)
- And many more!
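Because model strings follow LiteLLM's `provider/model` convention, switching providers is a one-line change (a sketch; the model names are illustrative):
```python
from crewai import LLM

groq_llm = LLM(model="groq/llama-3.1-70b-versatile")
ollama_llm = LLM(model="ollama/llama3", base_url="http://localhost:11434")
```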

View File

@@ -0,0 +1,206 @@
---
title: Agent Monitoring with MLflow
description: Quickly start monitoring your Agents with MLflow.
icon: bars-staggered
---
# MLflow Overview
[MLflow](https://mlflow.org/) is an open-source platform to assist machine learning practitioners and teams in handling the complexities of the machine learning process.
It provides a tracing feature that enhances LLM observability in your Generative AI applications by capturing detailed information about the execution of your application's services.
Tracing provides a way to record the inputs, outputs, and metadata associated with each intermediate step of a request, enabling you to easily pinpoint the source of bugs and unexpected behaviors.
![Overview of MLflow crewAI tracing usage](/images/mlflow-tracing.gif)
### Features
- **Tracing Dashboard**: Monitor activities of your crewAI agents with detailed dashboards that include inputs, outputs and metadata of spans.
- **Automated Tracing**: A fully automated integration with crewAI, which can be enabled by running `mlflow.crewai.autolog()`.
- **Manual Trace Instrumentation with minor efforts**: Customize trace instrumentation through MLflow's high-level fluent APIs such as decorators, function wrappers and context managers.
- **OpenTelemetry Compatibility**: MLflow Tracing supports exporting traces to an OpenTelemetry Collector, which can then be used to export traces to various backends such as Jaeger, Zipkin, and AWS X-Ray.
- **Package and Deploy Agents**: Package and deploy your crewAI agents to an inference server with a variety of deployment targets.
- **Securely Host LLMs**: Host multiple LLMs from various providers in one unified endpoint through the MLflow AI Gateway.
- **Evaluation**: Evaluate your crewAI agents with a wide range of metrics using a convenient API `mlflow.evaluate()`.
## Setup Instructions
<Steps>
<Step title="Install MLflow package">
```shell
# The crewAI integration is available in mlflow>=2.19.0
pip install mlflow
```
</Step>
<Step title="Start MFflow tracking server">
```shell
# This process is optional, but it is recommended to use MLflow tracking server for better visualization and broader features.
mlflow server
```
</Step>
<Step title="Initialize MLflow in Your Application">
Add the following two lines to your application code:
```python
import mlflow
mlflow.crewai.autolog()
# Optional: Set a tracking URI and an experiment name if you have a tracking server
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("CrewAI")
```
Example Usage for tracing CrewAI Agents:
```python
from crewai import Agent, Crew, Task
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
from crewai_tools import SerperDevTool, WebsiteSearchTool
from textwrap import dedent
content = "Users name is John. He is 30 years old and lives in San Francisco."
string_source = StringKnowledgeSource(
content=content, metadata={"preference": "personal"}
)
search_tool = WebsiteSearchTool()
class TripAgents:
def city_selection_agent(self):
return Agent(
role="City Selection Expert",
goal="Select the best city based on weather, season, and prices",
backstory="An expert in analyzing travel data to pick ideal destinations",
tools=[
search_tool,
],
verbose=True,
)
def local_expert(self):
return Agent(
role="Local Expert at this city",
goal="Provide the BEST insights about the selected city",
backstory="""A knowledgeable local guide with extensive information
about the city, its attractions and customs""",
tools=[search_tool],
verbose=True,
)
class TripTasks:
def identify_task(self, agent, origin, cities, interests, range):
return Task(
description=dedent(
f"""
Analyze and select the best city for the trip based
on specific criteria such as weather patterns, seasonal
events, and travel costs. This task involves comparing
multiple cities, considering factors like current weather
conditions, upcoming cultural or seasonal events, and
overall travel expenses.
Your final answer must be a detailed
report on the chosen city, and everything you found out
about it, including the actual flight costs, weather
forecast and attractions.
Traveling from: {origin}
City Options: {cities}
Trip Date: {range}
Traveler Interests: {interests}
"""
),
agent=agent,
expected_output="Detailed report on the chosen city including flight costs, weather forecast, and attractions",
)
def gather_task(self, agent, origin, interests, range):
return Task(
description=dedent(
f"""
As a local expert on this city you must compile an
in-depth guide for someone traveling there and wanting
to have THE BEST trip ever!
Gather information about key attractions, local customs,
special events, and daily activity recommendations.
Find the best spots to go to, the kind of place only a
local would know.
This guide should provide a thorough overview of what
the city has to offer, including hidden gems, cultural
hotspots, must-visit landmarks, weather forecasts, and
high level costs.
The final answer must be a comprehensive city guide,
rich in cultural insights and practical tips,
tailored to enhance the travel experience.
Trip Date: {range}
Traveling from: {origin}
Traveler Interests: {interests}
"""
),
agent=agent,
expected_output="Comprehensive city guide including hidden gems, cultural hotspots, and practical travel tips",
)
class TripCrew:
def __init__(self, origin, cities, date_range, interests):
self.cities = cities
self.origin = origin
self.interests = interests
self.date_range = date_range
def run(self):
agents = TripAgents()
tasks = TripTasks()
city_selector_agent = agents.city_selection_agent()
local_expert_agent = agents.local_expert()
identify_task = tasks.identify_task(
city_selector_agent,
self.origin,
self.cities,
self.interests,
self.date_range,
)
gather_task = tasks.gather_task(
local_expert_agent, self.origin, self.interests, self.date_range
)
crew = Crew(
agents=[city_selector_agent, local_expert_agent],
tasks=[identify_task, gather_task],
verbose=True,
memory=True,
knowledge={
"sources": [string_source],
"metadata": {"preference": "personal"},
},
)
result = crew.kickoff()
return result
trip_crew = TripCrew("California", "Tokyo", "Dec 12 - Dec 20", "sports")
result = trip_crew.run()
print(result)
```
Refer to [MLflow Tracing Documentation](https://mlflow.org/docs/latest/llms/tracing/index.html) for more configurations and use cases.
</Step>
<Step title="Visualize Activities of Agents">
Now traces for your crewAI agents are captured by MLflow.
Let's visit MLflow tracking server to view the traces and get insights into your Agents.
Open `http://127.0.0.1:5000` in your browser to visit the MLflow tracking server.
<Frame caption="MLflow Tracing Dashboard">
<img src="/images/mlflow1.png" alt="MLflow tracing example with crewai" />
</Frame>
</Step>
</Steps>

View File

@@ -1,14 +1,14 @@
---
title: Using Multimodal Agents
description: Learn how to enable and use multimodal capabilities in your agents for processing images and other non-text content within the CrewAI framework.
icon: image
icon: video
---
# Using Multimodal Agents
## Using Multimodal Agents
CrewAI supports multimodal agents that can process both text and non-text content like images. This guide will show you how to enable and use multimodal capabilities in your agents.
## Enabling Multimodal Capabilities
### Enabling Multimodal Capabilities
To create a multimodal agent, simply set the `multimodal` parameter to `True` when initializing your agent:
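A minimal sketch (the role, goal, and backstory strings are illustrative):
```python Code
from crewai import Agent

agent = Agent(
    role="Image Analyst",
    goal="Describe product images accurately",
    backstory="An expert in visual content analysis",
    multimodal=True  # automatically adds image-handling tools such as AddImageTool
)
```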
@@ -25,7 +25,7 @@ agent = Agent(
When you set `multimodal=True`, the agent is automatically configured with the necessary tools for handling non-text content, including the `AddImageTool`.
## Working with Images
### Working with Images
The multimodal agent comes pre-configured with the `AddImageTool`, which allows it to process images. You don't need to manually add this tool - it's automatically included when you enable multimodal capabilities.
@@ -45,6 +45,7 @@ image_analyst = Agent(
# Create a task for image analysis
task = Task(
description="Analyze the product image at https://example.com/product.jpg and provide a detailed description",
expected_output="A detailed description of the product image",
agent=image_analyst
)
@@ -81,6 +82,7 @@ inspection_task = Task(
3. Compliance with standards
Provide a detailed report highlighting any issues found.
""",
expected_output="A detailed report highlighting any issues found",
agent=expert_analyst
)
@@ -108,7 +110,7 @@ The multimodal agent will automatically handle the image processing through its
- Process image content with optional context or specific questions
- Provide analysis and insights based on the visual information and task requirements
## Best Practices
### Best Practices
When working with multimodal agents, keep these best practices in mind:

View File

@@ -0,0 +1,202 @@
---
title: Portkey Observability and Guardrails
description: How to use Portkey with CrewAI
icon: key
---
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-CrewAI.png" alt="Portkey CrewAI Header Image" width="70%" />
[Portkey](https://portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai) is a 2-line upgrade to make your CrewAI agents reliable, cost-efficient, and fast.
Portkey adds 4 core production capabilities to any CrewAI agent:
1. Routing to **200+ LLMs**
2. Making each LLM call more robust
3. Full-stack tracing & cost, performance analytics
4. Real-time guardrails to enforce behavior
## Getting Started
<Steps>
<Step title="Install CrewAI and Portkey">
```bash
pip install -qU crewai portkey-ai
```
</Step>
<Step title="Configure the LLM Client">
To build CrewAI Agents with Portkey, you'll need two keys:
- **Portkey API Key**: Sign up on the [Portkey app](https://app.portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai) and copy your API key
- **Virtual Key**: Virtual Keys securely manage your LLM API keys in one place. Store your LLM provider API keys securely in Portkey's vault
```python
from crewai import LLM
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL
gpt_llm = LLM(
model="gpt-4",
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy", # We are using Virtual key
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_VIRTUAL_KEY", # Enter your Virtual key from Portkey
)
)
```
</Step>
<Step title="Create and Run Your First Agent">
```python
from crewai import Agent, Task, Crew
# Define your agents with roles and goals
coder = Agent(
role='Software developer',
goal='Write clear, concise code on demand',
backstory='An expert coder with a keen eye for software trends.',
llm=gpt_llm
)
# Create tasks for your agents
task1 = Task(
description="Define the HTML for making a simple website with heading- Hello World! Portkey is working!",
expected_output="A clear and concise HTML code",
agent=coder
)
# Instantiate your crew
crew = Crew(
agents=[coder],
tasks=[task1],
)
result = crew.kickoff()
print(result)
```
</Step>
</Steps>
## Key Features
| Feature | Description |
|:--------|:------------|
| 🌐 Multi-LLM Support | Access OpenAI, Anthropic, Gemini, Azure, and 250+ providers through a unified interface |
| 🛡️ Production Reliability | Implement retries, timeouts, load balancing, and fallbacks |
| 📊 Advanced Observability | Track 40+ metrics including costs, tokens, latency, and custom metadata |
| 🔍 Comprehensive Logging | Debug with detailed execution traces and function call logs |
| 🚧 Security Controls | Set budget limits and implement role-based access control |
| 🔄 Performance Analytics | Capture and analyze feedback for continuous improvement |
| 💾 Intelligent Caching | Reduce costs and latency with semantic or simple caching |
## Production Features with Portkey Configs
All features described below are enabled through Portkey's Config system, which lets you define routing strategies as simple JSON objects in your LLM API calls. You can create and manage Configs directly in your code or through the Portkey Dashboard, and each Config has a unique ID for easy reference.
<Frame>
<img src="https://raw.githubusercontent.com/Portkey-AI/docs-core/refs/heads/main/images/libraries/libraries-3.avif"/>
</Frame>
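For example, once a Config is saved in the dashboard you can reference it by ID instead of passing the JSON inline (a sketch; `pc-xxxx` is a placeholder Config ID):

```python
# Sketch: referencing a saved Config by ID via Portkey headers.
# "pc-xxxx" is a placeholder for a real Config ID from the dashboard.
portkey_headers = createHeaders(
    api_key="YOUR_PORTKEY_API_KEY",
    virtual_key="YOUR_VIRTUAL_KEY",
    config="pc-xxxx",
)
```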
### 1. Use 250+ LLMs
Access various LLMs like Anthropic, Gemini, Mistral, Azure OpenAI, and more with minimal code changes. Switch between providers or use them together seamlessly. [Learn more about Universal API](https://portkey.ai/docs/product/ai-gateway/universal-api)
Easily switch between different LLM providers:
```python
# Anthropic Configuration
anthropic_llm = LLM(
model="claude-3-5-sonnet-latest",
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy",
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_ANTHROPIC_VIRTUAL_KEY", #You don't need provider when using Virtual keys
trace_id="anthropic_agent"
)
)
# Azure OpenAI Configuration
azure_llm = LLM(
model="gpt-4",
base_url=PORTKEY_GATEWAY_URL,
api_key="dummy",
extra_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_AZURE_VIRTUAL_KEY", #You don't need provider when using Virtual keys
trace_id="azure_agent"
)
)
```
### 2. Caching
Improve response times and reduce costs with two powerful caching modes:
- **Simple Cache**: Perfect for exact matches
- **Semantic Cache**: Matches responses for requests that are semantically similar
[Learn more about Caching](https://portkey.ai/docs/product/ai-gateway/cache-simple-and-semantic)
```py
config = {
"cache": {
"mode": "semantic", # or "simple" for exact matching
}
}
```
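To apply this, the config can be attached to your CrewAI LLM through the same `createHeaders` helper used above (a sketch, assuming `createHeaders` accepts a `config` argument as in Portkey's docs):

```python
# Sketch: attaching the caching config defined above to a CrewAI LLM.
cached_llm = LLM(
    model="gpt-4",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",  # placeholder; auth comes from the Virtual Key
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_VIRTUAL_KEY",
        config=config,  # the cache config dict from above
    ),
)
```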
### 3. Production Reliability
Portkey provides comprehensive reliability features:
- **Automatic Retries**: Handle temporary failures gracefully
- **Request Timeouts**: Prevent hanging operations
- **Conditional Routing**: Route requests based on specific conditions
- **Fallbacks**: Set up automatic provider failovers
- **Load Balancing**: Distribute requests efficiently
[Learn more about Reliability Features](https://portkey.ai/docs/product/ai-gateway/)
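As an illustration, a Config combining retries with a provider fallback might look like the following sketch (the virtual keys are placeholders; see Portkey's docs for the authoritative schema):

```py
# Sketch: retry up to 3 times, and fall back to a second provider
# if the primary target keeps failing. Keys are placeholders.
config = {
    "retry": {"attempts": 3},
    "strategy": {"mode": "fallback"},
    "targets": [
        {"virtual_key": "YOUR_OPENAI_VIRTUAL_KEY"},
        {"virtual_key": "YOUR_ANTHROPIC_VIRTUAL_KEY"},
    ],
}
```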
### 4. Metrics
Agent runs are complex. Portkey automatically logs **40+ comprehensive metrics** for your AI agents, including cost, tokens used, latency, etc. Whether you need a broad overview or granular insights into your agent runs, Portkey's customizable filters provide the metrics you need.
- Cost per agent interaction
- Response times and latency
- Token usage and efficiency
- Success/failure rates
- Cache hit rates
<img src="https://github.com/siddharthsambharia-portkey/Portkey-Product-Images/blob/main/Portkey-Dashboard.png?raw=true" width="70%" alt="Portkey Dashboard" />
### 5. Detailed Logging
Logs are essential for understanding agent behavior, diagnosing issues, and improving performance. They provide a detailed record of agent activities and tool use, which is crucial for debugging and optimizing processes.
Access a dedicated section to view records of agent executions, including parameters, outcomes, function calls, and errors. Filter logs based on multiple parameters such as trace ID, model, tokens used, and metadata.
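Tagging requests makes these logs much easier to filter. For example (a sketch; `trace_id` and `metadata` follow the pattern shown in the LLM configuration earlier):

```python
# Sketch: tagging calls so they can be filtered in Portkey's logs.
portkey_headers = createHeaders(
    api_key="YOUR_PORTKEY_API_KEY",
    virtual_key="YOUR_VIRTUAL_KEY",
    trace_id="research_crew_run",     # groups related calls into one trace
    metadata={"_user": "analyst_1"},  # custom filterable metadata
)
```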
<details>
<summary><b>Traces</b></summary>
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-Traces.png" alt="Portkey Traces" width="70%" />
</details>
<details>
<summary><b>Logs</b></summary>
<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-Logs.png" alt="Portkey Logs" width="70%" />
</details>
### 6. Enterprise Security Features
- Set budget limits and rate limits per Virtual Key (disposable API keys)
- Implement role-based access control
- Track system changes with audit logs
- Configure data retention policies
For detailed information on creating and managing Configs, visit the [Portkey documentation](https://docs.portkey.ai/product/ai-gateway/configs).
## Resources
- [📘 Portkey Documentation](https://docs.portkey.ai)
- [📊 Portkey Dashboard](https://app.portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai)
- [🐦 Twitter](https://twitter.com/portkeyai)
- [💬 Discord Community](https://discord.gg/DD7vgKK299)

BIN: new binary image, 16 MiB (file not shown)

BIN: docs/images/mlflow1.png, new file, 382 KiB (file not shown)
View File

@@ -15,10 +15,48 @@ icon: wrench
If you need to update Python, visit [python.org/downloads](https://python.org/downloads)
</Note>
# Setting Up Your Environment
Before installing CrewAI, it's recommended to set up a virtual environment. This helps isolate your project dependencies and avoid conflicts.
<Steps>
<Step title="Create a Virtual Environment">
Choose your preferred method to create a virtual environment:
**Using venv (Python's built-in tool):**
```shell Terminal
python3 -m venv .venv
```
**Using conda:**
```shell Terminal
conda create -n crewai-env python=3.12
```
</Step>
<Step title="Activate the Virtual Environment">
Activate your virtual environment based on your platform:
**On macOS/Linux (venv):**
```shell Terminal
source .venv/bin/activate
```
**On Windows (venv):**
```shell Terminal
.venv\Scripts\activate
```
**Using conda (all platforms):**
```shell Terminal
conda activate crewai-env
```
</Step>
</Steps>
# Installing CrewAI
CrewAI is a flexible and powerful AI framework that enables you to create and manage AI agents, tools, and tasks efficiently.
Let's get you set up! 🚀
Now let's get you set up! 🚀
<Steps>
<Step title="Install CrewAI">
@@ -72,9 +110,9 @@ Let's get you set up! 🚀
# Creating a New Project
<Info>
<Tip>
We recommend using the YAML Template scaffolding for a structured approach to defining agents and tasks.
</Info>
</Tip>
<Steps>
<Step title="Generate Project Structure">
@@ -104,7 +142,18 @@ Let's get you set up! 🚀
└── tasks.yaml
```
</Frame>
</Step>
</Step>
<Step title="Install Additional Tools">
You can install additional tools using UV:
```shell Terminal
uv add <tool-name>
```
<Tip>
UV is our preferred package manager as it's significantly faster than pip and provides better dependency resolution.
</Tip>
</Step>
<Step title="Customize Your Project">
Your project will contain these essential files:

View File

@@ -91,6 +91,7 @@
"how-to/custom-manager-agent",
"how-to/llm-connections",
"how-to/customizing-agents",
"how-to/multimodal-agents",
"how-to/coding-agents",
"how-to/force-tool-output-as-result",
"how-to/human-input-on-execution",
@@ -100,7 +101,9 @@
"how-to/conditional-tasks",
"how-to/agentops-observability",
"how-to/langtrace-observability",
"how-to/openlit-observability"
"how-to/mlflow-observability",
"how-to/openlit-observability",
"how-to/portkey-observability"
]
},
{

View File

@@ -58,7 +58,7 @@ Follow the steps below to get crewing! 🚣‍♂️
description: >
Conduct a thorough research about {topic}
Make sure you find any interesting and relevant information given
the current year is 2024.
the current year is 2025.
expected_output: >
A list with 10 bullet points of the most relevant information about {topic}
agent: researcher
@@ -195,10 +195,10 @@ Follow the steps below to get crewing! 🚣‍♂️
<CodeGroup>
```markdown output/report.md
# Comprehensive Report on the Rise and Impact of AI Agents in 2024
# Comprehensive Report on the Rise and Impact of AI Agents in 2025
## 1. Introduction to AI Agents
In 2024, Artificial Intelligence (AI) agents are at the forefront of innovation across various industries. As intelligent systems that can perform tasks typically requiring human cognition, AI agents are paving the way for significant advancements in operational efficiency, decision-making, and overall productivity within sectors like Human Resources (HR) and Finance. This report aims to detail the rise of AI agents, their frameworks, applications, and potential implications on the workforce.
In 2025, Artificial Intelligence (AI) agents are at the forefront of innovation across various industries. As intelligent systems that can perform tasks typically requiring human cognition, AI agents are paving the way for significant advancements in operational efficiency, decision-making, and overall productivity within sectors like Human Resources (HR) and Finance. This report aims to detail the rise of AI agents, their frameworks, applications, and potential implications on the workforce.
## 2. Benefits of AI Agents
AI agents bring numerous advantages that are transforming traditional work environments. Key benefits include:
@@ -252,7 +252,7 @@ Follow the steps below to get crewing! 🚣‍♂️
To stay competitive and harness the full potential of AI agents, organizations must remain vigilant about latest developments in AI technology and consider continuous learning and adaptation in their strategic planning.
## 8. Conclusion
The emergence of AI agents is undeniably reshaping the workplace landscape in 2024. With their ability to automate tasks, enhance efficiency, and improve decision-making, AI agents are critical in driving operational success. Organizations must embrace and adapt to AI developments to thrive in an increasingly digital business environment.
The emergence of AI agents is undeniably reshaping the workplace landscape in 2025. With their ability to automate tasks, enhance efficiency, and improve decision-making, AI agents are critical in driving operational success. Organizations must embrace and adapt to AI developments to thrive in an increasingly digital business environment.
```
</CodeGroup>
</Step>
@@ -278,7 +278,7 @@ email_summarizer:
Summarize emails into a concise and clear summary
backstory: >
You will create a 5 bullet point summary of the report
llm: mixtal_llm
llm: openai/gpt-4o
```
<Tip>
@@ -301,38 +301,166 @@ Use the annotations to properly reference the agent and task in the `crew.py` fi
### Annotations include:
* `@agent`
* `@task`
* `@crew`
* `@tool`
* `@before_kickoff`
* `@after_kickoff`
* `@callback`
* `@output_json`
* `@output_pydantic`
* `@cache_handler`
Here are examples of how to use each annotation in your CrewAI project, and when you should use them:
```python crew.py
# ...
#### @agent
Used to define an agent in your crew. Use this when:
- You need to create a specialized AI agent with a specific role
- You want the agent to be automatically collected and managed by the crew
- You need to reuse the same agent configuration across multiple tasks
```python
@agent
def email_summarizer(self) -> Agent:
def research_agent(self) -> Agent:
return Agent(
config=self.agents_config["email_summarizer"],
role="Research Analyst",
goal="Conduct thorough research on given topics",
backstory="Expert researcher with years of experience in data analysis",
tools=[SerperDevTool()],
verbose=True
)
@task
def email_summarizer_task(self) -> Task:
return Task(
config=self.tasks_config["email_summarizer_task"],
)
# ...
```
<Tip>
In addition to the [sequential process](../how-to/sequential-process), you can use the [hierarchical process](../how-to/hierarchical-process),
which automatically assigns a manager to the defined crew to properly coordinate the planning and execution of tasks through delegation and validation of results.
You can learn more about the core concepts [here](/concepts).
</Tip>
#### @task
Used to define a task that can be executed by agents. Use this when:
- You need to define a specific piece of work for an agent
- You want tasks to be automatically sequenced and managed
- You need to establish dependencies between different tasks
```python
@task
def research_task(self) -> Task:
return Task(
description="Research the latest developments in AI technology",
expected_output="A comprehensive report on AI advancements",
agent=self.research_agent(),
output_file="output/research.md"
)
```
#### @crew
Used to define your crew configuration. Use this when:
- You want to automatically collect all @agent and @task definitions
- You need to specify how tasks should be processed (sequential or hierarchical)
- You want to set up crew-wide configurations
```python
@crew
def research_crew(self) -> Crew:
return Crew(
agents=self.agents, # Automatically collected from @agent methods
tasks=self.tasks, # Automatically collected from @task methods
process=Process.sequential,
verbose=True
)
```
#### @tool
Used to create custom tools for your agents. Use this when:
- You need to give agents specific capabilities (like web search, data analysis)
- You want to encapsulate external API calls or complex operations
- You need to share functionality across multiple agents
```python
@tool
def web_search_tool(query: str, max_results: int = 5) -> list[str]:
"""
Search the web for information.
Args:
query: The search query
max_results: Maximum number of results to return
Returns:
List of search results
"""
# Implement your search logic here
return [f"Result {i} for: {query}" for i in range(max_results)]
```
#### @before_kickoff
Used to execute logic before the crew starts. Use this when:
- You need to validate or preprocess input data
- You want to set up resources or configurations before execution
- You need to perform any initialization logic
```python
@before_kickoff
def validate_inputs(self, inputs: Optional[Dict[str, Any]]) -> Optional[Dict[str, Any]]:
"""Validate and preprocess inputs before the crew starts."""
if inputs is None:
return None
if 'topic' not in inputs:
raise ValueError("Topic is required")
# Add additional context
inputs['timestamp'] = datetime.now().isoformat()
inputs['topic'] = inputs['topic'].strip().lower()
return inputs
```
#### @after_kickoff
Used to process results after the crew completes. Use this when:
- You need to format or transform the final output
- You want to perform cleanup operations
- You need to save or log the results in a specific way
```python
@after_kickoff
def process_results(self, result: CrewOutput) -> CrewOutput:
"""Process and format the results after the crew completes."""
result.raw = result.raw.strip()
result.raw = f"""
# Research Results
Generated on: {datetime.now().isoformat()}
{result.raw}
"""
return result
```
#### @callback
Used to handle events during crew execution. Use this when:
- You need to monitor task progress
- You want to log intermediate results
- You need to implement custom progress tracking or metrics
```python
@callback
def log_task_completion(self, task: Task, output: str):
"""Log task completion details for monitoring."""
print(f"Task '{task.description}' completed")
print(f"Output length: {len(output)} characters")
print(f"Agent used: {task.agent.role}")
print("-" * 50)
```
#### @cache_handler
Used to implement custom caching for task results. Use this when:
- You want to avoid redundant expensive operations
- You need to implement custom cache storage or expiration logic
- You want to persist results between runs
```python
import json
import os
from datetime import datetime, timedelta

@cache_handler
def custom_cache(self, key: str) -> Optional[str]:
"""Custom cache implementation for storing task results."""
cache_file = f"cache/{key}.json"
if os.path.exists(cache_file):
with open(cache_file, 'r') as f:
data = json.load(f)
# Check if cache is still valid (e.g., not expired)
if datetime.fromisoformat(data['timestamp']) > datetime.now() - timedelta(days=1):
return data['result']
return None
```
<Note>
These decorators are part of the CrewAI framework and help organize your crew's structure by automatically collecting agents, tasks, and handling various lifecycle events.
They should be used within a class decorated with `@CrewBase`.
</Note>
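To see how these decorators fit together, here is a minimal `@CrewBase` sketch (the agent and task definitions are illustrative):

```python
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task

@CrewBase
class ResearchCrew:
    """A small crew wired together with the decorators above."""

    @agent
    def researcher(self) -> Agent:
        return Agent(
            role="Research Analyst",
            goal="Conduct thorough research on given topics",
            backstory="Expert researcher with years of experience",
        )

    @task
    def research_task(self) -> Task:
        return Task(
            description="Research the latest developments in AI technology",
            expected_output="A comprehensive report on AI advancements",
            agent=self.researcher(),
        )

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,  # collected from @agent methods
            tasks=self.tasks,    # collected from @task methods
            process=Process.sequential,
        )
```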
### Replay Tasks from Latest Crew Kickoff

View File

@@ -1,78 +1,118 @@
---
title: Composio Tool
description: The `ComposioTool` is a wrapper around the composio set of tools and gives your agent access to a wide variety of tools from the Composio SDK.
description: Composio provides 250+ production-ready tools for AI agents with flexible authentication management.
icon: gear-code
---
# `ComposioTool`
# `ComposioToolSet`
## Description
Composio is an integration platform that allows you to connect your AI agents to 250+ tools. Key features include:
This tool is a wrapper around the composio set of tools and gives your agent access to a wide variety of tools from the Composio SDK.
- **Enterprise-Grade Authentication**: Built-in support for OAuth, API Keys, JWT with automatic token refresh
- **Full Observability**: Detailed tool usage logs, execution timestamps, and more
## Installation
To incorporate this tool into your project, follow the installation instructions below:
To incorporate Composio tools into your project, follow the instructions below:
```shell
pip install composio-core
pip install 'crewai[tools]'
pip install composio-crewai
pip install crewai
```
after the installation is complete, either run `composio login` or export your composio API key as `COMPOSIO_API_KEY`.
After the installation is complete, either run `composio login` or export your Composio API key as `COMPOSIO_API_KEY`. You can get your Composio API key from [here](https://app.composio.dev).
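For reference, both authentication options look like this in a terminal:

```shell
# Option 1: interactive login
composio login

# Option 2: export the key directly (value is a placeholder)
export COMPOSIO_API_KEY=your_composio_api_key
```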
## Example
The following example demonstrates how to initialize the tool and execute a github action:
1. Initialize Composio tools
1. Initialize Composio toolset
```python Code
from composio import App
from crewai_tools import ComposioTool
from crewai import Agent, Task
from composio_crewai import ComposioToolSet, App, Action
from crewai import Agent, Task, Crew
tools = [ComposioTool.from_action(action=Action.GITHUB_ACTIVITY_STAR_REPO_FOR_AUTHENTICATED_USER)]
toolset = ComposioToolSet()
```
If you don't know what action you want to use, use `from_app` and `tags` filter to get relevant actions
2. Connect your GitHub account
<CodeGroup>
```shell CLI
composio add github
```
```python Code
tools = ComposioTool.from_app(App.GITHUB, tags=["important"])
request = toolset.initiate_connection(app=App.GITHUB)
print(f"Open this URL to authenticate: {request.redirectUrl}")
```
</CodeGroup>
or use `use_case` to search relevant actions
3. Get Tools
- Retrieving all the tools from an app (not recommended for production):
```python Code
tools = ComposioTool.from_app(App.GITHUB, use_case="Star a github repository")
tools = toolset.get_tools(apps=[App.GITHUB])
```
2. Define agent
- Filtering tools based on tags:
```python Code
tag = "users"
filtered_action_enums = toolset.find_actions_by_tags(
App.GITHUB,
tags=[tag],
)
tools = toolset.get_tools(actions=filtered_action_enums)
```
- Filtering tools based on use case:
```python Code
use_case = "Star a repository on GitHub"
filtered_action_enums = toolset.find_actions_by_use_case(
App.GITHUB, use_case=use_case, advanced=False
)
tools = toolset.get_tools(actions=filtered_action_enums)
```
<Tip>Set `advanced` to True to get actions for complex use cases</Tip>
- Using specific tools:
In this demo, we will use the `GITHUB_STAR_A_REPOSITORY_FOR_THE_AUTHENTICATED_USER` action from the GitHub app.
```python Code
tools = toolset.get_tools(
actions=[Action.GITHUB_STAR_A_REPOSITORY_FOR_THE_AUTHENTICATED_USER]
)
```
Learn more about filtering actions [here](https://docs.composio.dev/patterns/tools/use-tools/use-specific-actions)
4. Define agent
```python Code
crewai_agent = Agent(
role="Github Agent",
goal="You take action on Github using Github APIs",
backstory=(
"You are AI agent that is responsible for taking actions on Github "
"on users behalf. You need to take action on Github using Github APIs"
),
role="GitHub Agent",
goal="You take action on GitHub using GitHub APIs",
backstory="You are AI agent that is responsible for taking actions on GitHub on behalf of users using GitHub APIs",
verbose=True,
tools=tools,
llm= # pass an llm
)
```
3. Execute task
5. Execute task
```python Code
task = Task(
description="Star a repo ComposioHQ/composio on GitHub",
description="Star a repo composiohq/composio on GitHub",
agent=crewai_agent,
expected_output="if the star happened",
expected_output="Status of the operation",
)
task.execute()
crew = Crew(agents=[crewai_agent], tasks=[task])
crew.kickoff()
```
* A more detailed list of tools can be found [here](https://app.composio.dev)