mirror of https://github.com/crewAIInc/crewAI.git
synced 2026-01-09 16:18:30 +00:00

Updating Docs
@@ -11,31 +11,34 @@ description: What are crewAI Agents and how to use them.
  <li class='leading-3'>Make decisions</li>
  <li class='leading-3'>Communicate with other agents</li>
</ul>
<br/>

Think of an agent as a member of a team, with specific skills and a particular job to do. Agents can have different roles like 'Researcher', 'Writer', or 'Customer Support', each contributing to the overall goal of the crew.

## Agent Attributes

| Attribute | Parameter | Description |
| :--- | :--- | :--- |
| **Role** | `role` | Defines the agent's function within the crew. It determines the kind of tasks the agent is best suited for. |
| **Goal** | `goal` | The individual objective that the agent aims to achieve. It guides the agent's decision-making process. |
| **Backstory** | `backstory` | Provides context to the agent's role and goal, enriching the interaction and collaboration dynamics. |
| **LLM** *(optional)* | `llm` | The language model that will run the agent. It dynamically fetches the model name from the `OPENAI_MODEL_NAME` environment variable, defaulting to "gpt-4" if not specified. |
| **Tools** *(optional)* | `tools` | Set of capabilities or functions that the agent can use to perform tasks. Expected to be instances of custom classes compatible with the agent's execution environment. Defaults to an empty list. |
| **Function Calling LLM** *(optional)* | `function_calling_llm` | Specifies the language model that will handle tool calling for this agent, overriding the crew's function-calling LLM if passed. Default is `None`. |
| **Max Iter** *(optional)* | `max_iter` | The maximum number of iterations the agent can perform before being forced to give its best answer. Default is `25`. |
| **Max RPM** *(optional)* | `max_rpm` | The maximum number of requests per minute the agent can perform, to avoid rate limits. Default is `None`. |
| **Max Execution Time** *(optional)* | `max_execution_time` | The maximum execution time for the agent to complete a task. Default is `None`, meaning no limit. |
| **Verbose** *(optional)* | `verbose` | Setting this to `True` configures the internal logger to provide detailed execution logs, aiding in debugging and monitoring. Default is `False`. |
| **Allow Delegation** *(optional)* | `allow_delegation` | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `False`. |
| **Step Callback** *(optional)* | `step_callback` | A function that is called after each step of the agent. This can be used to log the agent's actions or to perform other operations. It will overwrite the crew `step_callback`. |
| **Cache** *(optional)* | `cache` | Indicates whether the agent should use a cache for tool usage. Default is `True`. |
| **System Template** *(optional)* | `system_template` | Specifies the system format for the agent. Default is `None`. |
| **Prompt Template** *(optional)* | `prompt_template` | Specifies the prompt format for the agent. Default is `None`. |
| **Response Template** *(optional)* | `response_template` | Specifies the response format for the agent. Default is `None`. |
| **Allow Code Execution** *(optional)* | `allow_code_execution` | Enables code execution for the agent. Default is `False`. |
| **Max Retry Limit** *(optional)* | `max_retry_limit` | Maximum number of retries for the agent to execute a task when an error occurs. Default is `2`. |
| **Use Stop Words** *(optional)* | `use_stop_words` | Adds the ability to not use stop words (to support o1 models). Default is `True`. |
| **Use System Prompt** *(optional)* | `use_system_prompt` | Adds the ability to not use a system prompt (to support o1 models). Default is `True`. |
| **Respect Context Window** *(optional)* | `respect_context_window` | Summary strategy to avoid overflowing the context window. Default is `True`. |
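Of the attributes above, only `role`, `goal`, and `backstory` are required; everything else falls back to its default. A minimal sketch (the role, goal, and backstory text here is illustrative, not taken from the crewAI docs):

```python
from crewai import Agent

# Only the three required attributes; all optional parameters keep their defaults
support_agent = Agent(
    role='Customer Support',
    goal='Resolve customer questions quickly and accurately',
    backstory='You are an experienced support specialist known for clear, empathetic answers.'
)
```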

## Creating an Agent
@@ -63,7 +66,7 @@ agent = Agent(
    max_rpm=None,  # Optional
    max_execution_time=None,  # Optional
    verbose=True,  # Optional
    allow_delegation=False,  # Optional
    step_callback=my_intermediate_step_callback,  # Optional
    cache=True,  # Optional
    system_template=my_system_template,  # Optional
@@ -74,8 +77,11 @@ agent = Agent(
    tools_handler=my_tools_handler,  # Optional
    cache_handler=my_cache_handler,  # Optional
    callbacks=[callback1, callback2],  # Optional
    allow_code_execution=True,  # Optional
    max_retry_limit=2,  # Optional
    use_stop_words=True,  # Optional
    use_system_prompt=True,  # Optional
    respect_context_window=True,  # Optional
)
```
@@ -105,7 +111,7 @@ agent = Agent(

BaseAgent includes attributes and methods required to integrate with your crews to run and delegate tasks to other agents within your own crew.

CrewAI is a universal multi-agent framework that allows all agents to work together to automate tasks and solve problems.

```py
@@ -27,7 +27,7 @@ The `Crew` class has been enriched with several attributes to support advanced f
- **Memory Usage (`memory`)**: Indicates whether the crew should use memory to store memories of its execution, enhancing task execution and agent learning.
- **Embedder Configuration (`embedder`)**: Specifies the configuration for the embedder to be used by the crew for understanding and generating language. This attribute supports customization of the language model provider.
- **Cache Management (`cache`)**: Determines whether the crew should use a cache to store the results of tool executions, optimizing performance.
- **Output Logging (`output_log_file`)**: Specifies the file path for logging the output of the crew's execution.
- **Planning Mode (`planning`)**: Allows crews to plan their actions before executing tasks by setting `planning=True` when creating the `Crew` instance. This feature enhances coordination and efficiency.
- **Replay Feature**: Introduces a new CLI for listing tasks from the last run and replaying from a specific task, enhancing task management and troubleshooting.
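The last two bullets pair naturally: you opt a crew into planning at construction time, then drive replays from the CLI. A minimal sketch (the `agents`/`tasks` placeholders, the log file name, and the exact CLI command names are assumptions based on recent crewAI versions):

```python
from crewai import Crew, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    planning=True,                  # plan actions before executing tasks
    output_log_file="crew_run.log", # log the crew's execution output
)
```

From the shell, the replay feature is then exercised with commands along the lines of `crewai log-tasks-outputs` (list the task IDs from the last run) and `crewai replay -t <task_id>`.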
@@ -13,18 +13,18 @@ A crew in crewAI represents a collaborative group of agents working together to
| :--- | :--- | :--- |
| **Tasks** | `tasks` | A list of tasks assigned to the crew. |
| **Agents** | `agents` | A list of agents that are part of the crew. |
| **Process** _(optional)_ | `process` | The process flow (e.g., sequential, hierarchical) the crew follows. Default is `sequential`. |
| **Verbose** _(optional)_ | `verbose` | The verbosity level for logging during execution. Defaults to `False`. |
| **Manager LLM** _(optional)_ | `manager_llm` | The language model used by the manager agent in a hierarchical process. **Required when using a hierarchical process.** |
| **Function Calling LLM** _(optional)_ | `function_calling_llm` | If passed, the crew will use this LLM to do function calling for tools for all agents in the crew. Each agent can have its own LLM, which overrides the crew's LLM for function calling. |
| **Config** _(optional)_ | `config` | Optional configuration settings for the crew, in `Json` or `Dict[str, Any]` format. |
| **Max RPM** _(optional)_ | `max_rpm` | Maximum requests per minute the crew adheres to during execution. Defaults to `None`. |
| **Language** _(optional)_ | `language` | Language used for the crew; defaults to English. |
| **Language File** _(optional)_ | `language_file` | Path to the language file to be used for the crew. |
| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). Defaults to `False`. |
| **Cache** _(optional)_ | `cache` | Specifies whether to use a cache for storing the results of tools' execution. Defaults to `True`. |
| **Embedder** _(optional)_ | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. Default is `{"provider": "openai"}`. |
| **Full Output** _(optional)_ | `full_output` | Whether the crew should return the full output with all task outputs or just the final output. Defaults to `False`. |
| **Step Callback** _(optional)_ | `step_callback` | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback`. |
| **Task Callback** _(optional)_ | `task_callback` | A function that is called after the completion of each task. Useful for monitoring or additional operations post-task execution. |
| **Share Crew** _(optional)_ | `share_crew` | Whether you want to share the complete crew information and execution with the crewAI team to help improve the library and allow model training. |
@@ -38,65 +38,6 @@ A crew in crewAI represents a collaborative group of agents working together to
!!! note "Crew Max RPM"
    The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.

## Creating a Crew

When assembling a crew, you combine agents with complementary roles and tools, assign tasks, and select a process that dictates their execution order and interaction.

### Example: Assembling a Crew

```python
from crewai import Crew, Agent, Task, Process
from langchain_community.tools import DuckDuckGoSearchRun
from crewai_tools import tool

@tool('DuckDuckGoSearch')
def search(search_query: str):
    """Search the web for information on a given topic"""
    return DuckDuckGoSearchRun().run(search_query)

# Define agents with specific roles and tools
researcher = Agent(
    role='Senior Research Analyst',
    goal='Discover innovative AI technologies',
    backstory="""You're a senior research analyst at a large company.
    You're responsible for analyzing data and providing insights
    to the business.
    You're currently working on a project to analyze the
    trends and innovations in the space of artificial intelligence.""",
    tools=[search]
)

writer = Agent(
    role='Content Writer',
    goal='Write engaging articles on AI discoveries',
    backstory="""You're a senior writer at a large company.
    You're responsible for creating content for the business.
    You're currently working on a project to write about trends
    and innovations in the space of AI for your next meeting.""",
    verbose=True
)

# Create tasks for the agents
research_task = Task(
    description='Identify breakthrough AI technologies',
    agent=researcher,
    expected_output='A bullet list summary of the top 5 most important AI news'
)
write_article_task = Task(
    description='Draft an article on the latest AI technologies',
    agent=writer,
    expected_output='3 paragraph blog post on the latest AI technologies'
)

# Assemble the crew with a sequential process
my_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_article_task],
    process=Process.sequential,
    full_output=True,
    verbose=True,
)
```

## Crew Output
@@ -4,16 +4,17 @@ description: Leveraging memory systems in the crewAI framework to enhance agent
---

## Introduction to Memory Systems in crewAI

!!! note "Enhancing Agent Intelligence"
    The crewAI framework introduces a sophisticated memory system designed to significantly enhance the capabilities of AI agents. This system comprises short-term memory, long-term memory, entity memory, and contextual memory, each serving a unique purpose in helping agents remember, reason, and learn from past interactions.

## Memory System Components

| Component | Description |
| :--- | :--- |
| **Short-Term Memory** | Temporarily stores recent interactions and outcomes using `RAG`, enabling agents to recall and utilize information relevant to their current context during the current execution. |
| **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. |
| **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses `RAG` for storing entity information. |
| **Contextual Memory** | Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, and `EntityMemory`, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |

## How Memory Systems Empower Agents
@@ -27,12 +28,12 @@ description: Leveraging memory systems in the crewAI framework to enhance agent
## Implementing Memory in Your Crew

When configuring a crew, you can enable and customize each memory component to suit the crew's objectives and the nature of the tasks it will perform.
By default, the memory system is disabled; you can activate it by setting `memory=True` in the crew configuration. The memory will use OpenAI embeddings by default, but you can change this by setting `embedder` to a different model.

The `embedder` only applies to **Short-Term Memory**, which uses Chroma for RAG via the EmbedChain package.
The **Long-Term Memory** uses SQLite3 to store task results. Currently, there is no way to override these storage implementations.
The data storage files are saved into a platform-specific location found using the appdirs package,
and the name of the project can be overridden using the **CREWAI_STORAGE_DIR** environment variable.
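Because the storage location is resolved from the environment, you can redirect it to a custom directory before the crew starts. A minimal standard-library sketch (the directory name is just an example; the variable is assumed to be read at startup):

```python
import os
import tempfile

# Redirect crewAI's data storage to a throwaway directory before creating any crew
storage_dir = os.path.join(tempfile.gettempdir(), "my_crew_storage")
os.environ["CREWAI_STORAGE_DIR"] = storage_dir

print(os.environ["CREWAI_STORAGE_DIR"].endswith("my_crew_storage"))
```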

### Example: Configuring Memory for a Crew
@@ -56,17 +57,17 @@ my_crew = Crew(
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "openai",
        "config": {
            "model": 'text-embedding-3-small'
        }
    }
)
```
@@ -75,19 +76,19 @@ my_crew = Crew(
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "google",
        "config": {
            "model": 'models/embedding-001',
            "task_type": "retrieval_document",
            "title": "Embeddings for Embedchain"
        }
    }
)
```
@@ -96,18 +97,18 @@ my_crew = Crew(
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "azure_openai",
        "config": {
            "model": 'text-embedding-ada-002',
            "deployment_name": "your_embedding_model_deployment_name"
        }
    }
)
```
@@ -116,14 +117,14 @@ my_crew = Crew(
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "gpt4all"
    }
)
```
@@ -132,17 +133,17 @@ my_crew = Crew(
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "vertexai",
        "config": {
            "model": 'textembedding-gecko'
        }
    }
)
```
@@ -151,18 +152,18 @@ my_crew = Crew(
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "cohere",
        "config": {
            "model": "embed-english-v3.0",
            "vector_dimension": 1024
        }
    }
)
```
@@ -12,7 +12,7 @@ A pipeline in crewAI represents a structured workflow that allows for the sequen
|
|||||||
Understanding the following terms is crucial for working effectively with pipelines:
|
Understanding the following terms is crucial for working effectively with pipelines:
|
||||||
|
|
||||||
- **Stage**: A distinct part of the pipeline, which can be either sequential (a single crew) or parallel (multiple crews executing concurrently).
|
- **Stage**: A distinct part of the pipeline, which can be either sequential (a single crew) or parallel (multiple crews executing concurrently).
|
||||||
- **Run**: A specific execution of the pipeline for a given set of inputs, representing a single instance of processing through the pipeline.
|
- **Kickoff**: A specific execution of the pipeline for a given set of inputs, representing a single instance of processing through the pipeline.
|
||||||
- **Branch**: Parallel executions within a stage (e.g., concurrent crew operations).
|
- **Branch**: Parallel executions within a stage (e.g., concurrent crew operations).
|
||||||
- **Trace**: The journey of an individual input through the entire pipeline, capturing the path and transformations it undergoes.
|
- **Trace**: The journey of an individual input through the entire pipeline, capturing the path and transformations it undergoes.
|
||||||
|
|
||||||
@@ -28,13 +28,13 @@ This represents a pipeline with three stages:
|
|||||||
2. A parallel stage with two branches (crew2 and crew3 executing concurrently)
|
2. A parallel stage with two branches (crew2 and crew3 executing concurrently)
|
||||||
3. Another sequential stage (crew4)
|
3. Another sequential stage (crew4)
|
||||||
|
|
||||||
Each input creates its own run, flowing through all stages of the pipeline. Multiple runs can be processed concurrently, each following the defined pipeline structure.
|
Each input creates its own kickoff, flowing through all stages of the pipeline. Multiple kickoffs can be processed concurrently, each following the defined pipeline structure.
|
||||||
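The three-stage structure described above can be sketched as a plain Python list, where a bare crew is a sequential stage and a nested list is a parallel stage (crew names here are placeholders, not real `Crew` instances):

```python
# Placeholder stage layout for the pipeline described above;
# in real code each string would be a Crew instance.
stages = [
    "crew1",             # sequential stage
    ["crew2", "crew3"],  # parallel stage with two branches
    "crew4",             # sequential stage
]

parallel = [s for s in stages if isinstance(s, list)]
print(len(stages), len(parallel))  # 3 1
```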
|
|
||||||
## Pipeline Attributes
|
## Pipeline Attributes
|
||||||
|
|
||||||
| Attribute | Parameters | Description |
|
| Attribute | Parameters | Description |
|
||||||
| :--------- | :--------- | :---------------------------------------------------------------------------------------------- |
|
| :--------- | :---------- | :----------------------------------------------------------------------------------------------------------------- |
|
||||||
| **Stages** | `stages` | A list of crews, lists of crews, or routers representing the stages to be executed in sequence. |
|
| **Stages** | `stages` | A list of `PipelineStage` (crews, lists of crews, or routers) representing the stages to be executed in sequence. |
|
||||||
|
|
||||||
## Creating a Pipeline
|
## Creating a Pipeline
|
||||||
|
|
||||||
@@ -43,7 +43,7 @@ When creating a pipeline, you define a series of stages, each consisting of eith
|
|||||||
### Example: Assembling a Pipeline
|
### Example: Assembling a Pipeline
|
||||||
|
|
||||||
```python
|
```python
|
||||||
from crewai import Crew, Agent, Task, Pipeline
|
from crewai import Crew, Process, Pipeline
|
||||||
|
|
||||||
# Define your crews
|
# Define your crews
|
||||||
research_crew = Crew(
|
research_crew = Crew(
|
||||||
@@ -74,7 +74,8 @@ my_pipeline = Pipeline(
|
|||||||
|
|
||||||
| Method | Description |
|
| Method | Description |
|
||||||
| :--------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
| :--------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||||
| **process_runs** | Executes the pipeline, processing all stages and returning the results. This method initiates one or more runs through the pipeline, handling the flow of data between stages. |
|
| **kickoff** | Executes the pipeline, processing all stages and returning the results. This method initiates one or more kickoffs through the pipeline, handling the flow of data between stages. |
|
||||||
|
| **process_runs** | Runs the pipeline for each input provided, handling the flow and transformation of data between stages. |
|
||||||
|
|
||||||
## Pipeline Output
|
## Pipeline Output
|
||||||
|
|
||||||
@@ -99,12 +100,12 @@ The output of a pipeline in the crewAI framework is encapsulated within the `Pip
|
|||||||
| Attribute | Parameters | Type | Description |
|
| Attribute | Parameters | Type | Description |
|
||||||
| :---------------- | :-------------- | :------------------------- | :-------------------------------------------------------------------------------------------- |
|
| :---------------- | :-------------- | :------------------------- | :-------------------------------------------------------------------------------------------- |
|
||||||
| **ID** | `id` | `UUID4` | A unique identifier for the run result. |
|
| **ID** | `id` | `UUID4` | A unique identifier for the run result. |
|
||||||
| **Raw** | `raw` | `str` | The raw output of the final stage in the pipeline run. |
|
| **Raw** | `raw` | `str` | The raw output of the final stage in the pipeline kickoff. |
|
||||||
| **Pydantic** | `pydantic` | `Optional[BaseModel]` | A Pydantic model object representing the structured output of the final stage, if applicable. |
|
| **Pydantic** | `pydantic` | `Any` | A Pydantic model object representing the structured output of the final stage, if applicable. |
|
||||||
| **JSON Dict** | `json_dict` | `Optional[Dict[str, Any]]` | A dictionary representing the JSON output of the final stage, if applicable. |
|
| **JSON Dict** | `json_dict` | `Union[Dict[str, Any], None]` | A dictionary representing the JSON output of the final stage, if applicable. |
|
||||||
| **Token Usage** | `token_usage` | `Dict[str, Any]` | A summary of token usage across all stages of the pipeline run. |
|
| **Token Usage** | `token_usage` | `Dict[str, UsageMetrics]` | A summary of token usage across all stages of the pipeline kickoff. |
|
||||||
| **Trace** | `trace` | `List[Any]` | A trace of the journey of inputs through the pipeline run. |
|
| **Trace** | `trace` | `List[Any]` | A trace of the journey of inputs through the pipeline kickoff. |
|
||||||
| **Crews Outputs** | `crews_outputs` | `List[CrewOutput]` | A list of `CrewOutput` objects, representing the outputs from each crew in the pipeline run. |
|
| **Crews Outputs** | `crews_outputs` | `List[CrewOutput]` | A list of `CrewOutput` objects, representing the outputs from each crew in the pipeline kickoff. |
|
||||||
|
|
||||||
### Pipeline Run Result Methods and Properties
|
### Pipeline Run Result Methods and Properties
|
||||||
|
|
||||||
@@ -112,7 +113,7 @@ The output of a pipeline in the crewAI framework is encapsulated within the `Pip
|
|||||||
| :-------------- | :------------------------------------------------------------------------------------------------------- |
|
| :-------------- | :------------------------------------------------------------------------------------------------------- |
|
||||||
| **json** | Returns the JSON string representation of the run result if the output format of the final task is JSON. |
|
| **json** | Returns the JSON string representation of the run result if the output format of the final task is JSON. |
|
||||||
| **to_dict** | Converts the JSON and Pydantic outputs to a dictionary. |
|
| **to_dict** | Converts the JSON and Pydantic outputs to a dictionary. |
|
||||||
| \***\*str\*\*** | Returns the string representation of the run result, prioritizing Pydantic, then JSON, then raw. |
|
| **str** | Returns the string representation of the run result, prioritizing Pydantic, then JSON, then raw. |
|
||||||
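The Pydantic-then-JSON-then-raw priority described for `__str__` can be sketched as follows (an illustration of the documented behavior, not crewAI's actual implementation):

```python
import json

# Illustrative sketch of the documented priority: Pydantic, then JSON, then raw.
def result_str(pydantic=None, json_dict=None, raw=""):
    if pydantic is not None:
        return str(pydantic)
    if json_dict is not None:
        return json.dumps(json_dict)
    return raw

print(result_str(json_dict={"score": 9}, raw="fallback"))  # {"score": 9}
```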
|
|
||||||
### Accessing Pipeline Outputs
|
### Accessing Pipeline Outputs
|
||||||
|
|
||||||
@@ -247,7 +248,7 @@ main_pipeline = Pipeline(stages=[classification_crew, email_router])
|
|||||||
|
|
||||||
inputs = [{"email": "..."}, {"email": "..."}] # List of email data
|
inputs = [{"email": "..."}, {"email": "..."}] # List of email data
|
||||||
|
|
||||||
main_pipeline.kickoff(inputs=inputs)
|
main_pipeline.kickoff(inputs=inputs)
|
||||||
```
|
```
|
||||||
|
|
||||||
In this example, the router decides between an urgent pipeline and a normal pipeline based on the urgency score of the email. If the urgency score is greater than 7, it routes to the urgent pipeline; otherwise, it uses the normal pipeline. If the input doesn't include an urgency score, it defaults to just the classification crew.
|
In this example, the router decides between an urgent pipeline and a normal pipeline based on the urgency score of the email. If the urgency score is greater than 7, it routes to the urgent pipeline; otherwise, it uses the normal pipeline. If the input doesn't include an urgency score, it defaults to just the classification crew.
|
||||||
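The routing rule described above amounts to a simple three-way decision. A standalone sketch of that logic (the real behavior is configured on crewAI's router, not hand-written like this):

```python
# Illustrative routing decision described above; route names are placeholders.
def choose_route(email: dict) -> str:
    urgency = email.get("urgency_score")
    if urgency is None:
        return "classification_only"  # no score: default to the classification crew
    return "urgent" if urgency > 7 else "normal"

print(choose_route({"email": "...", "urgency_score": 9}))  # urgent
```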
@@ -261,7 +262,7 @@ In this example, the router decides between an urgent pipeline and a normal pipe
|
|||||||
|
|
||||||
### Error Handling and Validation
|
### Error Handling and Validation
|
||||||
|
|
||||||
The Pipeline class includes validation mechanisms to ensure the robustness of the pipeline structure:
|
The `Pipeline` class includes validation mechanisms to ensure the robustness of the pipeline structure:
|
||||||
|
|
||||||
- Validates that stages contain only Crew instances or lists of Crew instances.
|
- Validates that stages contain only Crew instances or lists of Crew instances.
|
||||||
- Prevents double nesting of stages to maintain a clear structure.
|
- Prevents double nesting of stages to maintain a clear structure.
|
||||||
@@ -43,7 +43,7 @@ my_crew = Crew(
|
|||||||
|
|
||||||
### Example
|
### Example
|
||||||
|
|
||||||
When running the base case example, you will see something like the following output, which represents the output of the AgentPlanner responsible for creating the step-by-step logic to add to the Agents tasks.
|
When running the base case example, you will see something like the following output, which represents the output of the AgentPlanner responsible for creating the step-by-step logic to add to the Agents' tasks.
|
||||||
|
|
||||||
```
|
```
|
||||||
[2024-07-15 16:49:11][INFO]: Planning the crew execution
|
[2024-07-15 16:49:11][INFO]: Planning the crew execution
|
||||||
@@ -96,7 +96,7 @@ A list with 10 bullet points of the most relevant information about AI LLMs.
|
|||||||
|
|
||||||
**Agent Goal:** Create detailed reports based on AI LLMs data analysis and research findings
|
**Agent Goal:** Create detailed reports based on AI LLMs data analysis and research findings
|
||||||
|
|
||||||
**Task Expected Output:** A fully fledge report with the main topics, each with a full section of information. Formatted as markdown without '```'
|
**Task Expected Output:** A fully fledged report with the main topics, each with a full section of information. Formatted as markdown without '```'
|
||||||
|
|
||||||
**Task Tools:** None specified
|
**Task Tools:** None specified
|
||||||
|
|
||||||
@@ -130,5 +130,4 @@ A list with 10 bullet points of the most relevant information about AI LLMs.
|
|||||||
- Double-check formatting and make any necessary adjustments.
|
- Double-check formatting and make any necessary adjustments.
|
||||||
|
|
||||||
**Expected Output:**
|
**Expected Output:**
|
||||||
A fully-fledged report with the main topics, each with a full section of information. Formatted as markdown without '```'.
|
A fully fledged report with the main topics, each with a full section of information. Formatted as markdown without '```'.
|
||||||
```
|
|
||||||
|
|||||||
@@ -1,3 +1,4 @@
|
|||||||
|
```markdown
|
||||||
---
|
---
|
||||||
title: crewAI Tasks
|
title: crewAI Tasks
|
||||||
description: Detailed guide on managing and creating tasks within the crewAI framework, reflecting the latest codebase updates.
|
description: Detailed guide on managing and creating tasks within the crewAI framework, reflecting the latest codebase updates.
|
||||||
@@ -12,22 +13,22 @@ Tasks within crewAI can be collaborative, requiring multiple agents to work toge
|
|||||||
|
|
||||||
## Task Attributes
|
## Task Attributes
|
||||||
|
|
||||||
| Attribute | Parameters | Description |
|
| Attribute | Parameters | Type | Description |
|
||||||
| :------------------------------- | :---------------- | :------------------------------------------------------------------------------------------------------------------- |
|
| :------------------------------- | :---------------- | :---------------------------- | :------------------------------------------------------------------------------------------------------------------- |
|
||||||
| **Description** | `description` | A clear, concise statement of what the task entails. |
|
| **Description** | `description` | `str` | A clear, concise statement of what the task entails. |
|
||||||
| **Agent** | `agent` | The agent responsible for the task, assigned either directly or by the crew's process. |
|
| **Agent** | `agent` | `Optional[BaseAgent]` | The agent responsible for the task, assigned either directly or by the crew's process. |
|
||||||
| **Expected Output** | `expected_output` | A detailed description of what the task's completion looks like. |
|
| **Expected Output** | `expected_output` | `str` | A detailed description of what the task's completion looks like. |
|
||||||
| **Tools** _(optional)_ | `tools` | The functions or capabilities the agent can utilize to perform the task. Defaults to an empty list. |
|
| **Tools** _(optional)_ | `tools` | `Optional[List[Any]]` | The functions or capabilities the agent can utilize to perform the task. Defaults to an empty list. |
|
||||||
| **Async Execution** _(optional)_ | `async_execution` | If set, the task executes asynchronously, allowing progression without waiting for completion. Defaults to False. |
|
| **Async Execution** _(optional)_ | `async_execution` | `Optional[bool]` | If set, the task executes asynchronously, allowing progression without waiting for completion. Defaults to False. |
|
||||||
| **Context** _(optional)_ | `context` | Specifies tasks whose outputs are used as context for this task. |
|
| **Context** _(optional)_ | `context` | `Optional[List["Task"]]` | Specifies tasks whose outputs are used as context for this task. |
|
||||||
| **Config** _(optional)_ | `config` | Additional configuration details for the agent executing the task, allowing further customization. Defaults to None. |
|
| **Config** _(optional)_ | `config` | `Optional[Dict[str, Any]]` | Additional configuration details for the agent executing the task, allowing further customization. Defaults to None. |
|
||||||
| **Output JSON** _(optional)_ | `output_json` | Outputs a JSON object, requiring an OpenAI client. Only one output format can be set. |
|
| **Output JSON** _(optional)_ | `output_json` | `Optional[Type[BaseModel]]` | Outputs a JSON object, requiring an OpenAI client. Only one output format can be set. |
|
||||||
| **Output Pydantic** _(optional)_ | `output_pydantic` | Outputs a Pydantic model object, requiring an OpenAI client. Only one output format can be set. |
|
| **Output Pydantic** _(optional)_ | `output_pydantic` | `Optional[Type[BaseModel]]` | Outputs a Pydantic model object, requiring an OpenAI client. Only one output format can be set. |
|
||||||
| **Output File** _(optional)_ | `output_file` | Saves the task output to a file. If used with `Output JSON` or `Output Pydantic`, specifies how the output is saved. |
|
| **Output File** _(optional)_ | `output_file` | `Optional[str]` | Saves the task output to a file. If used with `Output JSON` or `Output Pydantic`, specifies how the output is saved. |
|
||||||
| **Output** _(optional)_ | `output` | An instance of `TaskOutput`, containing the raw, JSON, and Pydantic output plus additional details. |
|
| **Output** _(optional)_ | `output` | `Optional[TaskOutput]` | An instance of `TaskOutput`, containing the raw, JSON, and Pydantic output plus additional details. |
|
||||||
| **Callback** _(optional)_ | `callback` | A callable that is executed with the task's output upon completion. |
|
| **Callback** _(optional)_ | `callback` | `Optional[Any]` | A callable that is executed with the task's output upon completion. |
|
||||||
| **Human Input** _(optional)_ | `human_input` | Indicates if the task requires human feedback at the end, useful for tasks needing human oversight. Defaults to False.|
|
| **Human Input** _(optional)_     | `human_input`     | `Optional[bool]`              | Indicates if the task should involve human review at the end, useful for tasks needing human oversight. Defaults to False. |
|
||||||
| **Converter Class** _(optional)_ | `converter_cls` | A converter class used to export structured output. Defaults to None. |
|
| **Converter Class** _(optional)_ | `converter_cls` | `Optional[Type[Converter]]` | A converter class used to export structured output. Defaults to None. |
|
||||||
|
|
||||||
## Creating a Task
|
## Creating a Task
|
||||||
|
|
||||||
@@ -49,28 +50,28 @@ Directly specify an `agent` for assignment or let the `hierarchical` CrewAI's pr
|
|||||||
## Task Output
|
## Task Output
|
||||||
|
|
||||||
!!! note "Understanding Task Outputs"
|
!!! note "Understanding Task Outputs"
|
||||||
The output of a task in the crewAI framework is encapsulated within the `TaskOutput` class. This class provides a structured way to access results of a task, including various formats such as raw strings, JSON, and Pydantic models.
|
The output of a task in the crewAI framework is encapsulated within the `TaskOutput` class. This class provides a structured way to access results of a task, including various formats such as raw output, JSON, and Pydantic models.
|
||||||
By default, the `TaskOutput` will only include the `raw` output. A `TaskOutput` will only include the `pydantic` or `json_dict` output if the original `Task` object was configured with `output_pydantic` or `output_json`, respectively.
|
By default, the `TaskOutput` will only include the `raw` output. A `TaskOutput` will only include the `pydantic` or `json_dict` output if the original `Task` object was configured with `output_pydantic` or `output_json`, respectively.
|
||||||
|
|
||||||
### Task Output Attributes
|
### Task Output Attributes
|
||||||
|
|
||||||
| Attribute | Parameters | Type | Description |
|
| Attribute | Parameters | Type | Description |
|
||||||
| :---------------- | :-------------- | :------------------------- | :------------------------------------------------------------------------------------------------- |
|
| :---------------- | :-------------- | :------------------------- | :------------------------------------------------------------------------------------------------- |
|
||||||
| **Description** | `description` | `str` | A brief description of the task. |
|
| **Description** | `description` | `str` | Description of the task. |
|
||||||
| **Summary** | `summary` | `Optional[str]` | A short summary of the task, auto-generated from the first 10 words of the description. |
|
| **Summary** | `summary` | `Optional[str]` | Summary of the task, auto-generated from the first 10 words of the description. |
|
||||||
| **Raw** | `raw` | `str` | The raw output of the task. This is the default format for the output. |
|
| **Raw** | `raw` | `str` | The raw output of the task. This is the default format for the output. |
|
||||||
| **Pydantic** | `pydantic` | `Optional[BaseModel]` | A Pydantic model object representing the structured output of the task. |
|
| **Pydantic** | `pydantic` | `Optional[BaseModel]` | A Pydantic model object representing the structured output of the task. |
|
||||||
| **JSON Dict** | `json_dict` | `Optional[Dict[str, Any]]` | A dictionary representing the JSON output of the task. |
|
| **JSON Dict** | `json_dict` | `Optional[Dict[str, Any]]` | A dictionary representing the JSON output of the task. |
|
||||||
| **Agent** | `agent` | `str` | The agent that executed the task. |
|
| **Agent** | `agent` | `str` | The agent that executed the task. |
|
||||||
| **Output Format** | `output_format` | `OutputFormat` | The format of the task output, with options including RAW, JSON, and Pydantic. The default is RAW. |
|
| **Output Format** | `output_format` | `OutputFormat` | The format of the task output, with options including RAW, JSON, and Pydantic. The default is RAW. |
|
||||||
|
|
||||||
### Task Output Methods and Properties
|
### Task Output Methods and Properties
|
||||||
|
|
||||||
| Method/Property | Description |
|
| Method/Property | Description |
|
||||||
| :-------------- | :------------------------------------------------------------------------------------------------ |
|
| :-------------- | :------------------------------------------------------------------------------------------------ |
|
||||||
| **json** | Returns the JSON string representation of the task output if the output format is JSON. |
|
| **json** | Returns the JSON string representation of the task output if the output format is JSON. |
|
||||||
| **to_dict** | Converts the JSON and Pydantic outputs to a dictionary. |
|
| **to_dict** | Converts the JSON and Pydantic outputs to a dictionary. |
|
||||||
| \***\*str\*\*** | Returns the string representation of the task output, prioritizing Pydantic, then JSON, then raw. |
|
| **str** | Returns the string representation of the task output, prioritizing Pydantic, then JSON, then raw. |
|
||||||
|
|
||||||
### Accessing Task Outputs
|
### Accessing Task Outputs
|
||||||
|
|
||||||
@@ -234,7 +235,7 @@ def callback_function(output: TaskOutput):
|
|||||||
print(f"""
|
print(f"""
|
||||||
Task completed!
|
Task completed!
|
||||||
Task: {output.description}
|
Task: {output.description}
|
||||||
Output: {output.raw_output}
|
Output: {output.raw}
|
||||||
""")
|
""")
|
||||||
|
|
||||||
research_task = Task(
|
research_task = Task(
|
||||||
@@ -275,7 +276,7 @@ result = crew.kickoff()
|
|||||||
print(f"""
|
print(f"""
|
||||||
Task completed!
|
Task completed!
|
||||||
Task: {task1.output.description}
|
Task: {task1.output.description}
|
||||||
Output: {task1.output.raw_output}
|
Output: {task1.output.raw}
|
||||||
""")
|
""")
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -313,4 +314,4 @@ save_output_task = Task(
|
|||||||
|
|
||||||
## Conclusion
|
## Conclusion
|
||||||
|
|
||||||
Tasks are the driving force behind the actions of agents in crewAI. By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit. Equipping tasks with appropriate tools, understanding the execution process, and following robust validation practices are crucial for maximizing CrewAI's potential, ensuring agents are effectively prepared for their assignments and that tasks are executed as intended.
|
Tasks are the driving force behind the actions of agents in crewAI. By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit. Equipping tasks with appropriate tools, understanding the execution process, and following robust validation practices are crucial for maximizing CrewAI's potential, ensuring agents are effectively prepared for their assignments and that tasks are executed as intended.
|
||||||
@@ -9,7 +9,7 @@ Testing is a crucial part of the development process, and it is essential to ens
|
|||||||
|
|
||||||
### Using the Testing Feature
|
### Using the Testing Feature
|
||||||
|
|
||||||
We added the CLI command `crewai test` to make it easy to test your crew. This command will run your crew for a specified number of iterations and provide detailed performance metrics. The parameters are `n_iterations` and `model` which are optional and default to 2 and `gpt-4o-mini` respectively. For now, the only provider available is OpenAI.
|
We added the CLI command `crewai test` to make it easy to test your crew. This command will run your crew for a specified number of iterations and provide detailed performance metrics. The parameters are `n_iterations` and `model`, which are optional and default to 2 and `gpt-4o-mini` respectively. For now, the only provider available is OpenAI.
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
crewai test
|
crewai test
|
||||||
@@ -21,20 +21,36 @@ If you want to run more iterations or use a different model, you can specify the
|
|||||||
crewai test --n_iterations 5 --model gpt-4o
|
crewai test --n_iterations 5 --model gpt-4o
|
||||||
```
|
```
|
||||||
|
|
||||||
|
or using the short forms:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
crewai test -n 5 -m gpt-4o
|
||||||
|
```
|
||||||
|
|
||||||
When you run the `crewai test` command, the crew will be executed for the specified number of iterations, and the performance metrics will be displayed at the end of the run.
|
When you run the `crewai test` command, the crew will be executed for the specified number of iterations, and the performance metrics will be displayed at the end of the run.
|
||||||
|
|
||||||
A table of scores at the end will show the performance of the crew in terms of the following metrics:
|
A table of scores at the end will show the performance of the crew in terms of the following metrics:
|
||||||
|
|
||||||
```
|
```
|
||||||
Task Scores
|
Tasks Scores
|
||||||
(1-10 Higher is better)
|
(1-10 Higher is better)
|
||||||
┏━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━┓
|
┏━━━━━━━━━━━━━━━━━━━━┯━━━━━━━┯━━━━━━━┯━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
|
||||||
┃ Tasks/Crew ┃ Run 1 ┃ Run 2 ┃ Avg. Total ┃
|
┃ Tasks/Crew/Agents │ Run 1 │ Run 2 │ Avg. Total │ Agents │ ┃
|
||||||
┡━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━┩
|
┠────────────────────┼───────┼───────┼────────────┼────────────────────────────────┼─────────────────────────────────┨
|
||||||
│ Task 1 │ 10.0 │ 9.0 │ 9.5 │
|
┃ Task 1 │ 9.0 │ 9.5 │ 9.2 │ - Professional Insights │ ┃
|
||||||
│ Task 2 │ 9.0 │ 9.0 │ 9.0 │
|
┃ │ │ │ │ Researcher │ ┃
|
||||||
│ Crew │ 9.5 │ 9.0 │ 9.2 │
|
┃ │ │ │ │ │ ┃
|
||||||
└────────────┴───────┴───────┴────────────┘
|
┃ Task 2 │ 9.0 │ 10.0 │ 9.5 │ - Company Profile Investigator │ ┃
|
||||||
|
┃ │ │ │ │ │ ┃
|
||||||
|
┃ Task 3 │ 9.0 │ 9.0 │ 9.0 │ - Automation Insights │ ┃
|
||||||
|
┃ │ │ │ │ Specialist │ ┃
|
||||||
|
┃ │ │ │ │ │ ┃
|
||||||
|
┃ Task 4 │ 9.0 │ 9.0 │ 9.0 │ - Final Report Compiler │ ┃
|
||||||
|
┃ │ │ │ │ │ - Automation Insights ┃
|
||||||
|
┃ │ │ │ │ │ Specialist ┃
|
||||||
|
┃ Crew │ 9.00 │ 9.38 │ 9.2 │ │ ┃
|
||||||
|
┃ Execution Time (s) │ 126 │ 145 │ 135 │ │ ┃
|
||||||
|
┗━━━━━━━━━━━━━━━━━━━━┷━━━━━━━┷━━━━━━━┷━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
|
||||||
```
|
```
|
||||||
|
|
||||||
The example above shows the test results for two runs of the crew with two tasks, with the average total score for each task and the crew as a whole.
|
The example above shows the test results for two runs of the crew with two tasks, with the average total score for each task and the crew as a whole.
|
||||||
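The "Avg. Total" column is simply the mean of the per-run scores, rounded to one decimal place; for instance, Task 1's runs of 9.0 and 9.5 average to 9.2 as shown. A quick check:

```python
# Averaging per-run scores, as in the "Avg. Total" column above.
def average_score(runs):
    return round(sum(runs) / len(runs), 1)

print(average_score([9.0, 9.5]))   # Task 1 row
print(average_score([9.0, 10.0]))  # Task 2 row
```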
|
|||||||
@@ -106,7 +106,7 @@ Here is a list of the available tools and their descriptions:
|
|||||||
| **CodeInterpreterTool** | A tool for interpreting python code. |
|
| **CodeInterpreterTool** | A tool for interpreting python code. |
|
||||||
| **ComposioTool** | Enables use of Composio tools. |
|
| **ComposioTool** | Enables use of Composio tools. |
|
||||||
| **CSVSearchTool** | A RAG tool designed for searching within CSV files, tailored to handle structured data. |
|
| **CSVSearchTool** | A RAG tool designed for searching within CSV files, tailored to handle structured data. |
|
||||||
| **DALL-E Tool** | A tool for generating images using the DALL-E API. |
|
| **DALL-E Tool** | A tool for generating images using the DALL-E API. |
|
||||||
| **DirectorySearchTool** | A RAG tool for searching within directories, useful for navigating through file systems. |
|
| **DirectorySearchTool** | A RAG tool for searching within directories, useful for navigating through file systems. |
|
||||||
| **DOCXSearchTool** | A RAG tool aimed at searching within DOCX documents, ideal for processing Word files. |
|
| **DOCXSearchTool** | A RAG tool aimed at searching within DOCX documents, ideal for processing Word files. |
|
||||||
| **DirectoryReadTool** | Facilitates reading and processing of directory structures and their contents. |
|
| **DirectoryReadTool** | Facilitates reading and processing of directory structures and their contents. |
|
||||||
@@ -114,7 +114,7 @@ Here is a list of the available tools and their descriptions:
|
|||||||
| **FileReadTool** | Enables reading and extracting data from files, supporting various file formats. |
|
| **FileReadTool** | Enables reading and extracting data from files, supporting various file formats. |
|
||||||
| **FirecrawlSearchTool** | A tool to search webpages using Firecrawl and return the results. |
|
| **FirecrawlSearchTool** | A tool to search webpages using Firecrawl and return the results. |
|
||||||
| **FirecrawlCrawlWebsiteTool** | A tool for crawling webpages using Firecrawl. |
|
| **FirecrawlCrawlWebsiteTool** | A tool for crawling webpages using Firecrawl. |
|
||||||
| **FirecrawlScrapeWebsiteTool** | A tool for scraping webpages url using Firecrawl and returning its contents. |
|
| **FirecrawlScrapeWebsiteTool** | A tool for scraping a webpage URL using Firecrawl and returning its contents. |
|
||||||
| **GithubSearchTool** | A RAG tool for searching within GitHub repositories, useful for code and documentation search.|
|
| **GithubSearchTool** | A RAG tool for searching within GitHub repositories, useful for code and documentation search.|
|
||||||
| **SerperDevTool** | A specialized tool for development purposes, with specific functionalities under development. |
|
| **SerperDevTool** | A specialized tool for development purposes, with specific functionalities under development. |
|
||||||
| **TXTSearchTool** | A RAG tool focused on searching within text (.txt) files, suitable for unstructured data. |
|
| **TXTSearchTool** | A RAG tool focused on searching within text (.txt) files, suitable for unstructured data. |
|
||||||
@@ -123,14 +123,14 @@ Here is a list of the available tools and their descriptions:
|
|||||||
| **MDXSearchTool** | A RAG tool tailored for searching within Markdown (MDX) files, useful for documentation. |
|
| **MDXSearchTool** | A RAG tool tailored for searching within Markdown (MDX) files, useful for documentation. |
| **PDFSearchTool** | A RAG tool aimed at searching within PDF documents, ideal for processing scanned documents. |
| **PGSearchTool** | A RAG tool optimized for searching within PostgreSQL databases, suitable for database queries. |
| **Vision Tool** | A tool for generating images using the DALL-E API. |
| **RagTool** | A general-purpose RAG tool capable of handling various data sources and types. |
| **ScrapeElementFromWebsiteTool** | Enables scraping specific elements from websites, useful for targeted data extraction. |
| **ScrapeWebsiteTool** | Facilitates scraping entire websites, ideal for comprehensive data collection. |
| **WebsiteSearchTool** | A RAG tool for searching website content, optimized for web data extraction. |
| **XMLSearchTool** | A RAG tool designed for searching within XML files, suitable for structured data formats. |
| **YoutubeChannelSearchTool** | A RAG tool for searching within YouTube channels, useful for video content analysis. |
| **YoutubeVideoSearchTool** | A RAG tool aimed at searching within YouTube videos, ideal for video data extraction. |
## Creating your own Tools
@@ -144,6 +144,7 @@ pip install 'crewai[tools]'

```shell
pip install 'crewai[tools]'
```

Once you do that, there are two main ways to create a crewAI tool:

### Subclassing `BaseTool`
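The `BaseTool` example itself is truncated in this diff. As a rough, dependency-free sketch of the subclassing pattern (the stand-in `BaseTool` below and its `_run` hook are illustrative stand-ins, not crewAI's real class):

```python
# Minimal stand-in for the BaseTool pattern: a base class dispatches to a
# subclass-provided _run hook (names here mimic, but are not, crewAI's API).
class BaseTool:
    name: str = ""
    description: str = ""

    def run(self, *args, **kwargs):
        return self._run(*args, **kwargs)

    def _run(self, *args, **kwargs):
        raise NotImplementedError


class WordCountTool(BaseTool):
    name = "Word counter"
    description = "Counts the words in a piece of text."

    def _run(self, text: str) -> int:
        return len(text.split())


print(WordCountTool().run("crewAI agents can use custom tools"))  # 6
```

In crewAI, the tool's name and description are what the agent reads when deciding whether to invoke it, so keep both short and specific.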
@@ -16,7 +16,7 @@ To use the training feature, follow these steps:

3. Run the following command:

```shell
crewai train -n <n_iterations> <filename> (optional)
```

!!! note "Replace `<n_iterations>` with the desired number of training iterations and `<filename>` with the appropriate filename ending with `.pkl`."
@@ -5,9 +5,10 @@ description: Learn how to integrate LangChain tools with CrewAI agents to enhanc

## Using LangChain Tools

!!! info "LangChain Integration"
    CrewAI seamlessly integrates with LangChain’s comprehensive [list of tools](https://python.langchain.com/docs/integrations/tools/), all of which can be used with crewAI.

```python
import os
from crewai import Agent
from langchain.agents import Tool
from langchain.utilities import GoogleSerperAPIWrapper
```
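The rest of that snippet is elided in this diff. The wrapping pattern the imports set up — exposing a callable as a named tool an agent can invoke — looks roughly like this without the LangChain dependency (the `Tool` shape below mimics, but is not, LangChain's class, and `search_run` stands in for the real Serper wrapper):

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Tool:
    # Mirrors the fields LangChain's Tool takes: a name, a callable, a description
    name: str
    func: Callable[[str], str]
    description: str


def search_run(query: str) -> str:
    # Stand-in for GoogleSerperAPIWrapper().run(query)
    return f"top results for {query!r}"


search_tool = Tool(
    name="Intermediate Answer",
    func=search_run,
    description="Useful for search-based queries",
)

print(search_tool.func("crewAI"))  # top results for 'crewAI'
```

An agent then receives such tools via its `tools=[...]` list, exactly as in the CrewAI examples elsewhere in these docs.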
@@ -35,10 +35,10 @@ query_tool = LlamaIndexTool.from_query_engine(

```python
# Create and assign the tools to an agent
agent = Agent(
    role='Research Analyst',
    goal='Provide up-to-date market analysis',
    backstory='An expert analyst with a keen eye for market trends.',
    tools=[tool, *tools, query_tool]
)

# rest of the code ...
```

@@ -54,4 +54,4 @@ To effectively use the LlamaIndexTool, follow these steps:

```shell
pip install 'crewai[tools]'
```

2. **Install and Use LlamaIndex**: Follow the [LlamaIndex documentation](https://docs.llamaindex.ai/) to set up a RAG/agent pipeline.
@@ -71,25 +71,59 @@ To customize your pipeline project, you can:

3. Modify `src/<project_name>/main.py` to set up and run your pipelines.
4. Add your environment variables into the `.env` file.

## Example 1: Defining a Two-Stage Sequential Pipeline

Here's an example of how to define a pipeline with sequential stages in `src/<project_name>/pipelines/pipeline.py`:

```python
from crewai import Pipeline
from crewai.project import PipelineBase
from ..crews.research_crew.research_crew import ResearchCrew
from ..crews.write_x_crew.write_x_crew import WriteXCrew


@PipelineBase
class SequentialPipeline:
    def __init__(self):
        # Initialize crews
        self.research_crew = ResearchCrew().crew()
        self.write_x_crew = WriteXCrew().crew()

    def create_pipeline(self):
        return Pipeline(
            stages=[
                self.research_crew,
                self.write_x_crew
            ]
        )

    async def kickoff(self, inputs):
        pipeline = self.create_pipeline()
        results = await pipeline.kickoff(inputs)
        return results
```

## Example 2: Defining a Two-Stage Pipeline with Parallel Execution

```python
from crewai import Pipeline
from crewai.project import PipelineBase
from ..crews.research_crew.research_crew import ResearchCrew
from ..crews.write_x_crew.write_x_crew import WriteXCrew
from ..crews.write_linkedin_crew.write_linkedin_crew import WriteLinkedInCrew


@PipelineBase
class ParallelExecutionPipeline:
    def __init__(self):
        # Initialize crews
        self.research_crew = ResearchCrew().crew()
        self.write_x_crew = WriteXCrew().crew()
        self.write_linkedin_crew = WriteLinkedInCrew().crew()

    def create_pipeline(self):
        return Pipeline(
            stages=[
                self.research_crew,
                [self.write_x_crew, self.write_linkedin_crew]  # Parallel execution
            ]
        )
```

@@ -126,4 +160,4 @@ This will initialize your pipeline and begin task execution as defined in your `

Pipelines can be deployed in the same way as regular CrewAI projects. The easiest way is through [CrewAI+](https://www.crewai.com/crewaiplus), where you can deploy your pipeline in a few clicks.

Remember, when working with pipelines, you're orchestrating multiple crews to work together in a sequence or parallel fashion. This allows for more complex workflows and information processing tasks.
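The stage semantics described above — a plain entry in `stages` runs on its own, while a nested list runs its crews concurrently — can be sketched with plain `asyncio`, independent of crewAI (the `run_stages` helper and the toy crew functions are illustrative, not part of the library):

```python
import asyncio


async def run_stages(stages, data):
    """Run each stage in order; a list-valued stage runs its crews concurrently."""
    for stage in stages:
        if isinstance(stage, list):
            # Parallel stage: every crew in the list receives the same input
            results = await asyncio.gather(*(crew(data) for crew in stage))
            data = results  # downstream stages would see all parallel outputs
        else:
            data = await stage(data)
    return data


async def research(topic):
    return f"notes on {topic}"

async def write_x(notes):
    return f"tweet from {notes}"

async def write_linkedin(notes):
    return f"post from {notes}"


out = asyncio.run(run_stages([research, [write_x, write_linkedin]], "AI"))
print(out)  # ['tweet from notes on AI', 'post from notes on AI']
```

This mirrors the `ParallelExecutionPipeline` shape: research feeds both writer crews, which execute side by side.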
@@ -1,5 +1,7 @@

---
title: Starting a New CrewAI Project - Using Template
description: A comprehensive guide to starting a new CrewAI project, including the latest updates and project setup methods.
---

@@ -21,6 +23,7 @@ $ pip install 'crewai[tools]'

```shell
$ pip install 'crewai[tools]'
```

## Creating a New Project

In this example, we will be using Poetry as our virtual environment manager.

To create a new CrewAI project, run the following CLI command:
@@ -95,10 +98,13 @@ research_candidates_task:

```

### Referencing Variables:

Your defined functions with the same name will be used. For example, you can reference the agent for specific tasks from the `tasks.yaml` file. Ensure your annotated agent and function name are the same; otherwise, your task won't recognize the reference properly.

#### Example References

`agents.yaml`

```yaml
email_summarizer:
  role: >
```

@@ -110,7 +116,8 @@ email_summarizer:

```yaml
  llm: mixtal_llm
```

`tasks.yaml`

```yaml
email_summarizer_task:
  description: >
```

@@ -123,37 +130,34 @@ email_summarizer_task:

```yaml
    - research_task
```

Use the annotations to properly reference the agent and task in the `crew.py` file.

### Annotations include:

* `@agent`
* `@task`
* `@crew`
* `@tool`
* `@callback`
* `@output_json`
* `@output_pydantic`
* `@cache_handler`

`crew.py`

```python
# ...
@agent
def email_summarizer(self) -> Agent:
    return Agent(
        config=self.agents_config["email_summarizer"],
    )

@task
def email_summarizer_task(self) -> Task:
    return Task(
        config=self.tasks_config["email_summarizer_task"],
    )
# ...
```
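A rough, library-free sketch of what such annotations do — registering the decorated methods under their function names so the crew can match tasks to agents (the registry mechanics here are an assumption for illustration, not crewAI's actual implementation):

```python
# Toy decorator registry illustrating the @agent / @task naming convention.
AGENTS, TASKS = {}, {}


def agent(fn):
    AGENTS[fn.__name__] = fn  # registered under the function's own name
    return fn


def task(fn):
    TASKS[fn.__name__] = fn
    return fn


@agent
def email_summarizer():
    return {"role": "Email Summarizer"}


@task
def email_summarizer_task():
    # The task looks up its agent by the matching function name, which is why
    # the annotated names in crew.py must match the YAML keys exactly.
    return {"agent": AGENTS["email_summarizer"]()}


print(TASKS["email_summarizer_task"]())  # {'agent': {'role': 'Email Summarizer'}}
```

If the function name and the YAML key drift apart, the lookup fails — the same failure mode the paragraph above warns about.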
@@ -204,6 +208,7 @@ To run your project, use the following command:

```shell
$ crewai run
```

This will initialize your crew of AI agents and begin task execution as defined in your configuration in the `main.py` file.

### Replay Tasks from Latest Crew Kickoff
@@ -19,7 +19,7 @@ from crewai.task import Task

from crewai_tools import SerperDevTool

# Define a condition function for the conditional task.
# If false, the task will be skipped; if true, the task executes.
def is_data_missing(output: TaskOutput) -> bool:
    return len(output.pydantic.events) < 10  # the conditional task runs only when fewer than 10 events were fetched

@@ -29,21 +29,21 @@ data_fetcher_agent = Agent(

    goal="Fetch data online using Serper tool",
    backstory="Backstory 1",
    verbose=True,
    tools=[SerperDevTool()]
)

data_processor_agent = Agent(
    role="Data Processor",
    goal="Process fetched data",
    backstory="Backstory 2",
    verbose=True
)

summary_generator_agent = Agent(
    role="Summary Generator",
    goal="Generate summary from fetched data",
    backstory="Backstory 3",
    verbose=True
)

class EventOutput(BaseModel):

@@ -69,7 +69,7 @@ conditional_task = ConditionalTask(

task3 = Task(
    description="Generate summary of events in San Francisco from fetched data",
    expected_output="A complete summary of the events in San Francisco based on the fetched data.",
    agent=summary_generator_agent,
)

@@ -78,7 +78,7 @@ crew = Crew(

    agents=[data_fetcher_agent, data_processor_agent, summary_generator_agent],
    tasks=[task1, conditional_task, task3],
    verbose=True,
    planning=True
)

# Run the crew
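The skip/execute decision of a conditional task can be sketched without crewAI: evaluate the condition against the previous task's output and run the task only when it returns `True` (the `run_conditional` helper and the simplified `TaskOutput` are illustrative, not the library's classes):

```python
from dataclasses import dataclass, field


@dataclass
class TaskOutput:
    events: list = field(default_factory=list)


def is_data_missing(output: TaskOutput) -> bool:
    # Mirrors the doc's condition: fewer than 10 events means data is missing
    return len(output.events) < 10


def run_conditional(condition, previous_output, task):
    # Execute the task only when the condition holds; otherwise skip it
    if condition(previous_output):
        return task(previous_output)
    return None  # task skipped


fetch_more = lambda out: f"fetching more, only had {len(out.events)} events"

print(run_conditional(is_data_missing, TaskOutput(events=[1, 2, 3]), fetch_more))
print(run_conditional(is_data_missing, TaskOutput(events=list(range(12))), fetch_more))
```

With three events the condition holds and the task runs; with twelve events it is skipped and the crew moves straight on to the next task.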
@@ -91,4 +91,4 @@ Custom prompt files should be structured in JSON format and include all necessar

- **Improved Usability**: Supports multiple languages, making it suitable for global projects.
- **Consistency**: Ensures uniform prompt structures across different agents and tasks.

By incorporating these updates, CrewAI provides users with the ability to fully customize and internationalize their agent prompts, making the platform more versatile and user-friendly.
@@ -14,12 +14,16 @@ Crafting an efficient CrewAI team hinges on the ability to dynamically tailor yo

- **Cache** *(Optional)*: Determines whether the agent should use a cache for tool usage.
- **Max RPM**: Sets the maximum number of requests per minute (`max_rpm`). This attribute is optional and can be set to `None` for no limit, allowing for unlimited queries to external services if needed.
- **Verbose** *(Optional)*: Enables detailed logging of an agent's actions, useful for debugging and optimization. Specifically, it provides insights into agent execution processes, aiding in the optimization of performance.
- **Allow Delegation** *(Optional)*: `allow_delegation` controls whether the agent is allowed to delegate tasks to other agents. This attribute is now set to `False` by default.
- **Max Iter** *(Optional)*: The `max_iter` attribute allows users to define the maximum number of iterations an agent can perform for a single task, preventing infinite loops or excessively long executions. The default value is set to 25, providing a balance between thoroughness and efficiency.
- **Max Execution Time** *(Optional)*: `max_execution_time` sets the maximum execution time for an agent to complete a task.
- **System Template** *(Optional)*: `system_template` defines the system format for the agent.
- **Prompt Template** *(Optional)*: `prompt_template` defines the prompt format for the agent.
- **Response Template** *(Optional)*: `response_template` defines the response format for the agent.
- **Use Stop Words** *(Optional)*: The `use_stop_words` attribute controls whether the agent will use stop words during task execution; this is now supported to aid o1 models.
- **Use System Prompt** *(Optional)*: `use_system_prompt` controls whether the agent will use a system prompt during task execution. Agents can now operate without system prompts.
- **Respect Context Window**: `respect_context_window` renames the sliding-context-window attribute and enables it by default to maintain context size.
- **Max Retry Limit**: `max_retry_limit` defines the maximum number of retries for an agent to execute a task when an error occurs.

## Advanced Customization Options
Beyond the basic attributes, CrewAI allows for deeper customization to enhance an agent's behavior and capabilities significantly.

@@ -67,12 +71,11 @@ agent = Agent(

```python
agent = Agent(
    # ...
    verbose=True,
    max_rpm=None,  # No limit on requests per minute
    max_iter=25,  # Default value for maximum iterations
)
```

## Delegation and Autonomy
Controlling an agent's ability to delegate tasks or ask questions is vital for tailoring its autonomy and collaborative dynamics within the CrewAI framework. By default, the `allow_delegation` attribute is now set to `False`, so agents do not delegate tasks or seek assistance unless explicitly enabled. This default can be changed to promote collaborative problem-solving and efficiency within the CrewAI ecosystem; enable delegation where it suits your operational requirements.

### Example: Enabling Delegation for an Agent
```python
agent = Agent(
    role='Content Writer',
    goal='Write engaging content on market trends',
    backstory='A seasoned writer with expertise in market analysis.',
    allow_delegation=True  # Enabling delegation
)
```
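The `max_iter` guard described above is essentially a bounded reasoning loop: keep stepping until an answer qualifies as final, and return the best answer so far when the cap is reached. A library-free sketch (the loop structure is an assumption for illustration, not crewAI's internals):

```python
def run_with_max_iter(step, is_final, max_iter=25):
    """Run `step` repeatedly, stopping at a final answer or at the iteration cap."""
    state = None
    for i in range(max_iter):
        state = step(state)
        if is_final(state):
            return state, i + 1
    # Cap reached: return whatever answer we have instead of looping forever
    return state, max_iter


# Toy task: increment until we reach 5, well under the default cap of 25
answer, iters = run_with_max_iter(lambda s: (s or 0) + 1, lambda s: s >= 5)
print(answer, iters)  # 5 5
```

Lowering `max_iter` trades thoroughness for predictable runtime, which is the balance the default of 25 aims at.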
@@ -1,27 +1,31 @@

---
title: Forcing Tool Output as Result
description: Learn how to force tool output as the result in an Agent's task in CrewAI.
---

## Introduction
In CrewAI, you can force the output of a tool as the result of an agent's task. This feature is useful when you want to ensure that the tool output is captured and returned as the task result, avoiding any agent modification during the task execution.

## Forcing Tool Output as Result
To force the tool output as the result of an agent's task, set the `result_as_answer` parameter to `True` when adding a tool to the agent. This parameter ensures that the tool output is captured and returned as the task result, without any modifications by the agent.

Here's an example of how to force the tool output as the result of an agent's task:

```python
# ...
from crewai.agent import Agent
from my_tool import MyCustomTool

# Create a coding agent with the custom tool
coding_agent = Agent(
    role="Data Scientist",
    goal="Produce amazing reports on AI",
    backstory="You work with data and AI",
    tools=[MyCustomTool(result_as_answer=True)],
)

# Assuming the tool's execution and result population occurs within the system
task_result = coding_agent.execute_task(task)
```

## Workflow in Action
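A library-free sketch of the idea: when a tool is flagged `result_as_answer`, its raw output short-circuits the agent's post-processing and becomes the task result as-is (the `execute_task` dispatch below is illustrative, not crewAI's implementation):

```python
class Tool:
    def __init__(self, fn, result_as_answer=False):
        self.fn = fn
        self.result_as_answer = result_as_answer


def execute_task(tool, agent_post_process, query):
    raw = tool.fn(query)
    if tool.result_as_answer:
        return raw  # tool output is the final answer, untouched
    return agent_post_process(raw)  # otherwise the agent may rewrite the output


paraphrase = lambda text: f"In summary: {text}"
lookup = lambda q: f"42 results for {q!r}"

print(execute_task(Tool(lookup), paraphrase, "AI"))
print(execute_task(Tool(lookup, result_as_answer=True), paraphrase, "AI"))
```

The first call returns the agent-rewritten text; the second returns the tool's raw output verbatim, which is exactly the guarantee `result_as_answer=True` provides.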
@@ -16,6 +16,13 @@ By default, tasks in CrewAI are managed through a sequential process. However, a

- **Task Delegation**: A manager agent allocates tasks among crew members based on their roles and capabilities.
- **Result Validation**: The manager evaluates outcomes to ensure they meet the required standards.
- **Efficient Workflow**: Emulates corporate structures, providing an organized approach to task management.
- **System Prompt Handling**: Optionally specify whether the system should use predefined prompts.
- **Stop Words Control**: Optionally specify whether stop words should be used, supporting various models, including the o1 models.
- **Context Window Respect**: Prioritize important context by enabling respect of the context window, which is now the default behavior.
- **Delegation Control**: Delegation is now disabled by default to give users explicit control.
- **Max Requests Per Minute**: Configurable option to set the maximum number of requests per minute.
- **Max Iterations**: Limit the maximum number of iterations for obtaining a final answer.

## Implementing the Hierarchical Process
To utilize the hierarchical process, it's essential to explicitly set the process attribute to `Process.hierarchical`, as the default behavior is `Process.sequential`. Define a crew with a designated manager and establish a clear chain of command.

@@ -38,6 +45,10 @@ researcher = Agent(

    cache=True,
    verbose=False,
    # tools=[]  # This can be optionally specified; defaults to an empty list
    use_system_prompt=True,  # Enable or disable system prompts for this agent
    use_stop_words=True,  # Enable or disable stop words for this agent
    max_rpm=30,  # Limit on the number of requests per minute
    max_iter=5  # Maximum number of iterations for a final answer
)
writer = Agent(
    role='Writer',

@@ -46,6 +57,10 @@ writer = Agent(

    cache=True,
    verbose=False,
    # tools=[]  # Optionally specify tools; defaults to an empty list
    use_system_prompt=True,  # Enable or disable system prompts for this agent
    use_stop_words=True,  # Enable or disable stop words for this agent
    max_rpm=30,  # Limit on the number of requests per minute
    max_iter=5  # Maximum number of iterations for a final answer
)

# Establishing the crew with a hierarchical process and additional configurations

@@ -54,6 +69,7 @@ project_crew = Crew(

    agents=[researcher, writer],
    manager_llm=ChatOpenAI(temperature=0, model="gpt-4"),  # Mandatory if manager_agent is not set
    process=Process.hierarchical,  # Specifies the hierarchical management approach
    respect_context_window=True,  # Enable respect of the context window for tasks
    memory=True,  # Enable memory usage for enhanced task execution
    manager_agent=None,  # Optional: explicitly set a specific agent as manager instead of the manager_llm
    planning=True,  # Enable planning feature for pre-execution strategy
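A toy sketch of the delegate-then-validate loop the hierarchical process describes: a manager assigns each task to the agent whose role matches, checks the result, and re-delegates once if it fails (the role-matching rule and revision step are invented for illustration, not crewAI's manager logic):

```python
def hierarchical_run(tasks, agents, validate):
    # Manager loop: delegate each task to the matching agent, then validate
    results = []
    for task in tasks:
        worker = agents[task["role"]]  # delegation by role
        output = worker(task["description"])
        if not validate(output):
            output = worker(task["description"] + " (revise)")  # re-delegate once
        results.append(output)
    return results


agents = {
    "Researcher": lambda d: f"findings: {d}",
    "Writer": lambda d: f"draft: {d}",
}
tasks = [
    {"role": "Researcher", "description": "AI trends"},
    {"role": "Writer", "description": "summary post"},
]

print(hierarchical_run(tasks, agents, validate=lambda out: len(out) > 0))
```

In the real framework this coordination is performed by the `manager_llm` or an explicit `manager_agent` rather than a hard-coded role lookup.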
@@ -74,7 +74,8 @@ task2 = Task(

        "Aim for a narrative that captures the essence of these breakthroughs and their implications for the future."
    ),
    expected_output='A compelling 3-paragraph blog post formatted as markdown about the latest AI advancements in 2024',
    agent=writer,
    human_input=True
)

# Instantiate your crew with a sequential process
@@ -72,7 +72,7 @@ asyncio.run(async_crew_execution())

## Example: Multiple Asynchronous Crew Executions

In this example, we'll show how to kick off multiple crews asynchronously and wait for all of them to complete using `asyncio.gather()`:

```python
import asyncio
```

@@ -114,4 +114,4 @@ async def async_multiple_crews():

```python
# Run the async function
asyncio.run(async_multiple_crews())
```
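The middle of that example is elided in this diff; the `asyncio.gather()` pattern it uses, stripped of crewAI specifics, looks like this (the crew names and `run_crew` stand-in are placeholders for real `kickoff_async` calls):

```python
import asyncio


async def run_crew(name, inputs):
    # Stand-in for crew.kickoff_async(inputs=...)
    await asyncio.sleep(0)  # yield control, as real network I/O would
    return f"{name} processed {inputs['topic']}"


async def async_multiple_crews():
    # Kick off both crews concurrently and wait for every result
    results = await asyncio.gather(
        run_crew("crew_1", {"topic": "AI"}),
        run_crew("crew_2", {"topic": "ML"}),
    )
    return results


print(asyncio.run(async_multiple_crews()))  # ['crew_1 processed AI', 'crew_2 processed ML']
```

`asyncio.gather` preserves the order of its arguments in the returned list, so each result lines up with the crew that produced it.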
@@ -25,13 +25,17 @@ coding_agent = Agent(
# Create a task that requires code execution
data_analysis_task = Task(
    description="Analyze the given dataset and calculate the average age of participants. Ages: {ages}",
-   agent=coding_agent
+   agent=coding_agent,
+   expected_output="The average age calculated from the dataset"
)

# Create a crew and add the task
analysis_crew = Crew(
    agents=[coding_agent],
-   tasks=[data_analysis_task]
+   tasks=[data_analysis_task],
+   verbose=True,
+   memory=False,
+   respect_context_window=True  # enabled by default
)

datasets = [
@@ -42,4 +46,4 @@ datasets = [

# Execute the crew
result = analysis_crew.kickoff_for_each(inputs=datasets)
```
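Conceptually, `kickoff_for_each` runs the same crew once per input dictionary and collects one result per dataset. A minimal sketch of that behavior, using a plain function as a hypothetical stand-in for the crew run:

```python
def kickoff(inputs: dict) -> float:
    # Stand-in for a single crew run that computes the average age.
    ages = inputs["ages"]
    return sum(ages) / len(ages)

def kickoff_for_each(datasets: list[dict]) -> list[float]:
    # One independent run per input dict, results in input order.
    return [kickoff(inputs) for inputs in datasets]

datasets = [
    {"ages": [25, 30, 35]},
    {"ages": [40, 50]},
]
results = kickoff_for_each(datasets)
print(results)  # one result per input dataset, in order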
@@ -1,196 +1,113 @@
---
title: Connect CrewAI to LLMs
-description: Comprehensive guide on integrating CrewAI with various Large Language Models (LLMs), including detailed class attributes, methods, and configuration options.
+description: Comprehensive guide on integrating CrewAI with various Large Language Models (LLMs) using LiteLLM, including supported providers and configuration options.
---

## Connect CrewAI to LLMs

+CrewAI now uses LiteLLM to connect to a wide variety of Language Models (LLMs). This integration provides extensive versatility, allowing you to use models from numerous providers through a simple, unified interface.
+
!!! note "Default LLM"
-    By default, CrewAI uses OpenAI's GPT-4o model (specifically, the model specified by the OPENAI_MODEL_NAME environment variable, defaulting to "gpt-4o") for language processing. You can configure your agents to use a different model or API as described in this guide.
-    By default, CrewAI uses OpenAI's GPT-4 model (specifically, the model specified by the OPENAI_MODEL_NAME environment variable, defaulting to "gpt-4") for language processing. You can configure your agents to use a different model or API as described in this guide.
+    By default, CrewAI uses OpenAI's GPT-4 model (specifically, the model specified by the OPENAI_MODEL_NAME environment variable, defaulting to "gpt-4") for language processing. You can easily configure your agents to use a different model or provider as described in this guide.

-CrewAI provides extensive versatility in integrating with various Language Models (LLMs), including local options through Ollama such as Llama and Mixtral to cloud-based solutions like Azure. Its compatibility extends to all [LangChain LLM components](https://python.langchain.com/v0.2/docs/integrations/llms/), offering a wide range of integration possibilities for customized AI applications.
+## Supported Providers

-The platform supports connections to an array of Generative AI models, including:
+LiteLLM supports a wide range of providers, including but not limited to:

-- OpenAI's suite of advanced language models
-- Anthropic's cutting-edge AI offerings
-- Ollama's diverse range of locally-hosted generative model & embeddings
-- LM Studio's diverse range of locally hosted generative models & embeddings
-- Groq's Super Fast LLM offerings
-- Azures' generative AI offerings
-- HuggingFace's generative AI offerings
+- OpenAI
+- Anthropic
+- Google (Vertex AI, Gemini)
+- Azure OpenAI
+- AWS (Bedrock, SageMaker)
+- Cohere
+- Hugging Face
+- Ollama
+- Mistral AI
+- Replicate
+- Together AI
+- AI21
+- Cloudflare Workers AI
+- DeepInfra
+- Groq
+- And many more!

-This broad spectrum of LLM options enables users to select the most suitable model for their specific needs, whether prioritizing local deployment, specialized capabilities, or cloud-based scalability.
+For a complete and up-to-date list of supported providers, please refer to the [LiteLLM Providers documentation](https://docs.litellm.ai/docs/providers).

+## Changing the LLM
+
+To use a different LLM with your CrewAI agents, simply pass the model name as a string when initializing the agent. Here are some examples:

-## Changing the default LLM
-
-The default LLM is provided through the `langchain openai` package, which is installed by default when you install CrewAI. You can change this default LLM to a different model or API by setting the `OPENAI_MODEL_NAME` environment variable. This straightforward process allows you to harness the power of different OpenAI models, enhancing the flexibility and capabilities of your CrewAI implementation.
-
```python
-# Required
-os.environ["OPENAI_MODEL_NAME"]="gpt-4-0125-preview"
-
-# Agent will automatically use the model defined in the environment variable
-example_agent = Agent(
-    role='Local Expert',
-    goal='Provide insights about the city',
-    backstory="A knowledgeable local guide.",
-    verbose=True
-)
-```
-
-## Ollama Local Integration
-Ollama is preferred for local LLM integration, offering customization and privacy benefits. To integrate Ollama with CrewAI, you will need the `langchain-ollama` package. You can then set the following environment variables to connect to your Ollama instance running locally on port 11434.
-
-```sh
-os.environ[OPENAI_API_BASE]='http://localhost:11434'
-os.environ[OPENAI_MODEL_NAME]='llama2'  # Adjust based on available model
-os.environ[OPENAI_API_KEY]=''  # No API Key required for Ollama
-```
-
-## Ollama Integration Step by Step (ex. for using Llama 3.1 8B locally)
-1. [Download and install Ollama](https://ollama.com/download).
-2. After setting up the Ollama, Pull the Llama3.1 8B model by typing following lines into your terminal ```ollama run llama3.1```.
-3. Llama3.1 should now be served locally on `http://localhost:11434`
-```
-from crewai import Agent, Task, Crew
-from langchain_ollama import ChatOllama
-import os
-os.environ["OPENAI_API_KEY"] = "NA"
-
-llm = ChatOllama(
-    model = "llama3.1",
-    base_url = "http://localhost:11434")
-
-general_agent = Agent(role = "Math Professor",
-    goal = """Provide the solution to the students that are asking mathematical questions and give them the answer.""",
-    backstory = """You are an excellent math professor that likes to solve math questions in a way that everyone can understand your solution""",
-    allow_delegation = False,
-    verbose = True,
-    llm = llm)
-
-task = Task(description="""what is 3 + 5""",
-    agent = general_agent,
-    expected_output="A numerical answer.")
-
-crew = Crew(
-    agents=[general_agent],
-    tasks=[task],
-    verbose=True
-)
-
-result = crew.kickoff()
-
-print(result)
-```
-
-## HuggingFace Integration
-There are a couple of different ways you can use HuggingFace to host your LLM.
-
-### Your own HuggingFace endpoint
-```python
-from langchain_huggingface import HuggingFaceEndpoint
-
-llm = HuggingFaceEndpoint(
-    repo_id="microsoft/Phi-3-mini-4k-instruct",
-    task="text-generation",
-    max_new_tokens=512,
-    do_sample=False,
-    repetition_penalty=1.03,
-)
-
-agent = Agent(
-    role="HuggingFace Agent",
-    goal="Generate text using HuggingFace",
-    backstory="A diligent explorer of GitHub docs.",
-    llm=llm
-)
-```
-
-## OpenAI Compatible API Endpoints
-Switch between APIs and models seamlessly using environment variables, supporting platforms like FastChat, LM Studio, Groq, and Mistral AI.
-
-### Configuration Examples
-#### FastChat
-```sh
-os.environ["OPENAI_API_BASE"]='http://localhost:8001/v1'
-os.environ["OPENAI_MODEL_NAME"]='oh-2.5m7b-q51'
-os.environ[OPENAI_API_KEY]='NA'
-```
-
-#### LM Studio
-Launch [LM Studio](https://lmstudio.ai) and go to the Server tab. Then select a model from the dropdown menu and wait for it to load. Once it's loaded, click the green Start Server button and use the URL, port, and API key that's shown (you can modify them). Below is an example of the default settings as of LM Studio 0.2.19:
-```sh
-os.environ["OPENAI_API_BASE"]='http://localhost:1234/v1'
-os.environ["OPENAI_API_KEY"]='lm-studio'
-```
-
-#### Groq API
-```sh
-os.environ["OPENAI_API_KEY"]='your-groq-api-key'
-os.environ["OPENAI_MODEL_NAME"]='llama3-8b-8192'
-os.environ["OPENAI_API_BASE"]='https://api.groq.com/openai/v1'
-```
-
-#### Mistral API
-```sh
-os.environ["OPENAI_API_KEY"]='your-mistral-api-key'
-os.environ["OPENAI_API_BASE"]='https://api.mistral.ai/v1'
-os.environ["OPENAI_MODEL_NAME"]='mistral-small'
-```
-
-### Solar
-```sh
-from langchain_community.chat_models.solar import SolarChat
-```
-```sh
-os.environ[SOLAR_API_BASE]="https://api.upstage.ai/v1/solar"
-os.environ[SOLAR_API_KEY]="your-solar-api-key"
-
-# Free developer API key available here: https://console.upstage.ai/services/solar
-# Langchain Example: https://github.com/langchain-ai/langchain/pull/18556
-```
-
-### Cohere
-```python
-from langchain_cohere import ChatCohere
-# Initialize language model
-os.environ["COHERE_API_KEY"]='your-cohere-api-key'
-llm = ChatCohere()
-
-# Free developer API key available here: https://cohere.com/
-# Langchain Documentation: https://python.langchain.com/docs/integrations/chat/cohere
-```
-
-### Azure Open AI Configuration
-For Azure OpenAI API integration, set the following environment variables:
-```sh
-os.environ["AZURE_OPENAI_DEPLOYMENT"]='Your deployment'
-os.environ["OPENAI_API_VERSION"]='2023-12-01-preview'
-os.environ["AZURE_OPENAI_ENDPOINT"]='Your Endpoint'
-os.environ["AZURE_OPENAI_API_KEY"]='Your API Key'
-```
-
-### Example Agent with Azure LLM
-```python
-from dotenv import load_dotenv
from crewai import Agent
-from langchain_openai import AzureChatOpenAI
-
-load_dotenv()
-
-azure_llm = AzureChatOpenAI(
-    azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
-    api_key=os.environ.get("AZURE_OPENAI_KEY")
-)
-
-azure_agent = Agent(
-    role='Example Agent',
-    goal='Demonstrate custom LLM configuration',
-    backstory='A diligent explorer of GitHub docs.',
-    llm=azure_llm
+
+# Using OpenAI's GPT-4
+openai_agent = Agent(
+    role='OpenAI Expert',
+    goal='Provide insights using GPT-4',
+    backstory="An AI assistant powered by OpenAI's latest model.",
+    llm='gpt-4'
+)
+
+# Using Anthropic's Claude
+claude_agent = Agent(
+    role='Anthropic Expert',
+    goal='Analyze data using Claude',
+    backstory="An AI assistant leveraging Anthropic's language model.",
+    llm='claude-2'
+)
+
+# Using Ollama's local Llama 2 model
+ollama_agent = Agent(
+    role='Local AI Expert',
+    goal='Process information using a local model',
+    backstory="An AI assistant running on local hardware.",
+    llm='ollama/llama2'
+)
+
+# Using Google's Gemini model
+gemini_agent = Agent(
+    role='Google AI Expert',
+    goal='Generate creative content with Gemini',
+    backstory="An AI assistant powered by Google's advanced language model.",
+    llm='gemini-pro'
)
```
+
+## Configuration
+
+For most providers, you'll need to set up your API keys as environment variables. Here's how you can do it for some common providers:
+
+```python
+import os
+
+# OpenAI
+os.environ["OPENAI_API_KEY"] = "your-openai-api-key"
+
+# Anthropic
+os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-api-key"
+
+# Google (Vertex AI)
+os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path/to/your/credentials.json"
+
+# Azure OpenAI
+os.environ["AZURE_API_KEY"] = "your-azure-api-key"
+os.environ["AZURE_API_BASE"] = "your-azure-endpoint"
+
+# AWS (Bedrock)
+os.environ["AWS_ACCESS_KEY_ID"] = "your-aws-access-key-id"
+os.environ["AWS_SECRET_ACCESS_KEY"] = "your-aws-secret-access-key"
+```
+
+For providers that require additional configuration or have specific setup requirements, please refer to the [LiteLLM documentation](https://docs.litellm.ai/docs/) for detailed instructions.
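Since a missing key only surfaces at the first model call, it can help to check the environment up front. A minimal sketch of such a pre-flight check; the `REQUIRED_VARS` mapping and `missing_vars` helper are hypothetical, only the variable names follow the convention shown above:

```python
import os

# Hypothetical mapping from provider name to the environment variables it
# needs; the names mirror the configuration examples above.
REQUIRED_VARS = {
    "openai": ["OPENAI_API_KEY"],
    "azure": ["AZURE_API_KEY", "AZURE_API_BASE"],
    "bedrock": ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"],
}

def missing_vars(provider: str) -> list[str]:
    # Return the variables that are unset or empty for this provider.
    return [v for v in REQUIRED_VARS[provider] if not os.environ.get(v)]

os.environ["OPENAI_API_KEY"] = "your-openai-api-key"
print(missing_vars("openai"))  # [] once the key is set
```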
+
+## Using Local Models
+
+For local models like those provided by Ollama, ensure you have the necessary software installed and running. For example, to use Ollama:
+
+1. [Download and install Ollama](https://ollama.com/download)
+2. Pull the desired model (e.g., `ollama pull llama2`)
+3. Use the model in your CrewAI agent by specifying `llm='ollama/llama2'`
+
## Conclusion
-Integrating CrewAI with different LLMs expands the framework's versatility, allowing for customized, efficient AI solutions across various domains and platforms.
+
+By leveraging LiteLLM, CrewAI now offers seamless integration with a vast array of LLMs. This flexibility allows you to choose the most suitable model for your specific needs, whether you prioritize performance, cost-efficiency, or local deployment. Remember to consult the [LiteLLM documentation](https://docs.litellm.ai/docs/) for the most up-to-date information on supported models and configuration options.
@@ -1,6 +1,7 @@
---
title: Replay Tasks from Latest Crew Kickoff
description: Replay tasks from the latest crew.kickoff(...)
+
---

## Introduction
@@ -16,22 +17,24 @@ To use the replay feature, follow these steps:
1. Open your terminal or command prompt.
2. Navigate to the directory where your CrewAI project is located.
-3. Run the following command:
+3. Run the following commands:

To view the latest kickoff task_ids use:
```shell
crewai log-tasks-outputs
```

-Once you have your task_id to replay from use:
+Once you have your `task_id` to replay, use:
```shell
crewai replay -t <task_id>
```

+**Note:** Ensure `crewai` is installed and configured correctly in your development environment.
+
### Replaying from a Task Programmatically
To replay from a task programmatically, use the following steps:

-1. Specify the task_id and input parameters for the replay process.
+1. Specify the `task_id` and input parameters for the replay process.
2. Execute the replay command within a try-except block to handle potential errors.

```python
@@ -49,4 +52,7 @@ To replay from a task programmatically, use the following steps:

except Exception as e:
    raise Exception(f"An unexpected error occurred: {e}")
```
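The try-except pattern the hunk above documents can be sketched as a self-contained snippet. This is a hedged illustration only: `replay` here is a stub standing in for the real CrewAI replay entry point, and `safe_replay` is a hypothetical wrapper, not part of the library.

```python
# Stand-in for the real replay call; a real implementation would re-run
# the crew starting from the given task.
def replay(task_id: str, inputs: dict) -> str:
    if not task_id:
        raise ValueError("task_id is required")
    return f"replayed {task_id} with {inputs}"

def safe_replay(task_id: str, inputs: dict) -> str:
    # Wrap the replay call so any failure surfaces with a clear message,
    # mirroring the except block shown in the documentation.
    try:
        return replay(task_id, inputs)
    except Exception as e:
        raise Exception(f"An unexpected error occurred: {e}")

print(safe_replay("task_123", {"topic": "AI"}))
```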

+## Conclusion
+
+With the above enhancements and detailed functionality, replaying specific tasks in CrewAI has been made more efficient and robust. Ensure you follow the commands and steps precisely to make the most of these features.
@@ -52,14 +52,17 @@ report_crew = Crew(
# Execute the crew
result = report_crew.kickoff()

-# Accessing the type safe output
+# Accessing the type-safe output
task_output: TaskOutput = result.tasks[0].output
crew_output: CrewOutput = result.output
```
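The value of annotating the results with `TaskOutput` and `CrewOutput` is that attribute access is checked by type checkers instead of digging through raw dicts. A minimal sketch of the idea; these dataclasses are hypothetical stand-ins, their fields do not necessarily match CrewAI's real output types:

```python
from dataclasses import dataclass

@dataclass
class TaskOutput:
    description: str
    raw_output: str

@dataclass
class CrewOutput:
    final_output: str

# Typed access: a misspelled attribute is caught by mypy/pyright
# rather than failing at runtime with a KeyError.
task_output = TaskOutput(description="Summarize findings", raw_output="Three key trends...")
crew_output = CrewOutput(final_output=task_output.raw_output)
print(crew_output.final_output)
```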

+### Note:
+Each task in a sequential process **must** have an agent assigned. Ensure that every `Task` includes an `agent` parameter.
+
### Workflow in Action
1. **Initial Task**: In a sequential process, the first agent completes their task and signals completion.
-2. **Subsequent Tasks**: Agents pick up their tasks based on the process type, with outcomes of preceding tasks or manager directives guiding their execution.
+2. **Subsequent Tasks**: Agents pick up their tasks based on the process type, with outcomes of preceding tasks or directives guiding their execution.
3. **Completion**: The process concludes once the final task is executed, leading to project completion.

## Advanced Features
@@ -87,4 +90,6 @@ CrewAI tracks token usage across all tasks and agents. You can access these metr
1. **Order Matters**: Arrange tasks in a logical sequence where each task builds upon the previous one.
2. **Clear Task Descriptions**: Provide detailed descriptions for each task to guide the agents effectively.
3. **Appropriate Agent Selection**: Match agents' skills and roles to the requirements of each task.
4. **Use Context**: Leverage the context from previous tasks to inform subsequent ones.

+This updated documentation ensures that details accurately reflect the latest changes in the codebase and clearly describes how to leverage new features and configurations. The content is kept simple and direct to ensure easy understanding.