Updating Docs
@@ -17,10 +17,10 @@ description: What are crewAI Agents and how to use them.

## Agent Attributes

| Attribute | Parameter | Description |
| :--- | :--- | :--- |
| **Role** | `role` | Defines the agent's function within the crew. It determines the kind of tasks the agent is best suited for. |
| **Goal** | `goal` | The individual objective that the agent aims to achieve. It guides the agent's decision-making process. |
| **Backstory** | `backstory` | Provides context to the agent's role and goal, enriching the interaction and collaboration dynamics. |
| **LLM** *(optional)* | `llm` | Represents the language model that will run the agent. It dynamically fetches the model name from the `OPENAI_MODEL_NAME` environment variable, defaulting to "gpt-4" if not specified. |
| **Tools** *(optional)* | `tools` | Set of capabilities or functions that the agent can use to perform tasks. Expected to be instances of custom classes compatible with the agent's execution environment. Tools are initialized with a default value of an empty list. |
| **Function Calling LLM** *(optional)* | `function_calling_llm` | Specifies the language model that will handle the tool calling for this agent, overriding the crew function calling LLM if passed. Default is `None`. |
@@ -28,14 +28,17 @@ description: What are crewAI Agents and how to use them.

| **Max RPM** *(optional)* | `max_rpm` | Max RPM is the maximum number of requests per minute the agent can perform to avoid rate limits. It's optional and can be left unspecified, with a default value of `None`. |
| **Max Execution Time** *(optional)* | `max_execution_time` | Max Execution Time is the maximum execution time for an agent to execute a task. It's optional and can be left unspecified, with a default value of `None`, meaning no max execution time. |
| **Verbose** *(optional)* | `verbose` | Setting this to `True` configures the internal logger to provide detailed execution logs, aiding in debugging and monitoring. Default is `False`. |
| **Allow Delegation** *(optional)* | `allow_delegation` | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `False`. |
| **Step Callback** *(optional)* | `step_callback` | A function that is called after each step of the agent. This can be used to log the agent's actions or to perform other operations. It will overwrite the crew `step_callback`. |
| **Cache** *(optional)* | `cache` | Indicates if the agent should use a cache for tool usage. Default is `True`. |
| **System Template** *(optional)* | `system_template` | Specifies the system format for the agent. Default is `None`. |
| **Prompt Template** *(optional)* | `prompt_template` | Specifies the prompt format for the agent. Default is `None`. |
| **Response Template** *(optional)* | `response_template` | Specifies the response format for the agent. Default is `None`. |
| **Allow Code Execution** *(optional)* | `allow_code_execution` | Enable code execution for the agent. Default is `False`. |
| **Max Retry Limit** *(optional)* | `max_retry_limit` | Maximum number of retries for an agent to execute a task when an error occurs. Default is `2`. |
| **Use Stop Words** *(optional)* | `use_stop_words` | Adds the ability to not use stop words (to support o1 models). Default is `True`. |
| **Use System Prompt** *(optional)* | `use_system_prompt` | Adds the ability to not use a system prompt (to support o1 models). Default is `True`. |
| **Respect Context Window** *(optional)* | `respect_context_window` | Summary strategy to avoid overflowing the context window. Default is `True`. |

## Creating an Agent
@@ -63,7 +66,7 @@ agent = Agent(
    max_rpm=None, # Optional
    max_execution_time=None, # Optional
    verbose=True, # Optional
    allow_delegation=False, # Optional
    step_callback=my_intermediate_step_callback, # Optional
    cache=True, # Optional
    system_template=my_system_template, # Optional

@@ -74,8 +77,11 @@ agent = Agent(
    tools_handler=my_tools_handler, # Optional
    cache_handler=my_cache_handler, # Optional
    callbacks=[callback1, callback2], # Optional
    allow_code_execution=True, # Optional
    max_retry_limit=2, # Optional
    use_stop_words=True, # Optional
    use_system_prompt=True, # Optional
    respect_context_window=True, # Optional
)
```
@@ -105,7 +111,7 @@ agent = Agent(

BaseAgent includes attributes and methods required to integrate with your crews to run and delegate tasks to other agents within your own crew.

CrewAI is a universal multi-agent framework that allows for all agents to work together to automate tasks and solve problems.
@@ -27,7 +27,7 @@ The `Crew` class has been enriched with several attributes to support advanced f

- **Memory Usage (`memory`)**: Indicates whether the crew should use memory to store memories of its execution, enhancing task execution and agent learning.
- **Embedder Configuration (`embedder`)**: Specifies the configuration for the embedder to be used by the crew for understanding and generating language. This attribute supports customization of the language model provider.
- **Cache Management (`cache`)**: Determines whether the crew should use a cache to store the results of tool executions, optimizing performance.
- **Output Logging (`output_log_file`)**: Specifies the file path for logging the output of the crew's execution.
- **Planning Mode (`planning`)**: Allows crews to plan their actions before executing tasks by setting `planning=True` when creating the `Crew` instance. This feature enhances coordination and efficiency.
- **Replay Feature**: Introduces a new CLI for listing tasks from the last run and replaying from a specific task, enhancing task management and troubleshooting (see the sketch after this list).
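A hedged sketch of how this replay CLI is typically invoked; the exact subcommand names and flags are assumptions and may differ by version:

```shell
# Assumed subcommand: list task IDs from the latest crew run
crewai log-tasks-outputs

# Assumed flag: replay from a specific task onward
crewai replay -t <task_id>
```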
@@ -13,18 +13,18 @@ A crew in crewAI represents a collaborative group of agents working together to

| :--- | :--- | :--- |
| **Tasks** | `tasks` | A list of tasks assigned to the crew. |
| **Agents** | `agents` | A list of agents that are part of the crew. |
| **Process** _(optional)_ | `process` | The process flow (e.g., sequential, hierarchical) the crew follows. Default is `sequential`. |
| **Verbose** _(optional)_ | `verbose` | The verbosity level for logging during execution. Defaults to `False`. |
| **Manager LLM** _(optional)_ | `manager_llm` | The language model used by the manager agent in a hierarchical process. **Required when using a hierarchical process.** |
| **Function Calling LLM** _(optional)_ | `function_calling_llm` | If passed, the crew will use this LLM to do function calling for tools for all agents in the crew. Each agent can have its own LLM, which overrides the crew's LLM for function calling. |
| **Config** _(optional)_ | `config` | Optional configuration settings for the crew, in `Json` or `Dict[str, Any]` format. |
| **Max RPM** _(optional)_ | `max_rpm` | Maximum requests per minute the crew adheres to during execution. Defaults to `None`. |
| **Language** _(optional)_ | `language` | Language used for the crew, defaults to English. |
| **Language File** _(optional)_ | `language_file` | Path to the language file to be used for the crew. |
| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). Defaults to `False`. |
| **Cache** _(optional)_ | `cache` | Specifies whether to use a cache for storing the results of tools' execution. Defaults to `True`. |
| **Embedder** _(optional)_ | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. Default is `{"provider": "openai"}`. |
| **Full Output** _(optional)_ | `full_output` | Whether the crew should return the full output with all tasks outputs or just the final output. Defaults to `False`. |
| **Step Callback** _(optional)_ | `step_callback` | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback` (see the sketch after this table). |
| **Task Callback** _(optional)_ | `task_callback` | A function that is called after the completion of each task. Useful for monitoring or additional operations post-task execution. |
| **Share Crew** _(optional)_ | `share_crew` | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
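Since `step_callback` and `task_callback` are plain callables, wiring them up is straightforward. A minimal sketch, reusing the agents and tasks from the example below (the logging choices are illustrative, not part of the API):

```python
from crewai import Crew

def log_step(step_output):
    # Runs after each step of every agent in the crew
    print(f"[step] {step_output}")

def log_task(task_output):
    # Runs after each task finishes
    print(f"[task] {task_output.description} completed")

my_crew = Crew(
    agents=[researcher, writer],                 # agents from the example below
    tasks=[research_task, write_article_task],   # tasks from the example below
    step_callback=log_step,
    task_callback=log_task,
)
```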
@@ -38,65 +38,6 @@ A crew in crewAI represents a collaborative group of agents working together to

!!! note "Crew Max RPM"
    The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.

## Creating a Crew

When assembling a crew, you combine agents with complementary roles and tools, assign tasks, and select a process that dictates their execution order and interaction.

### Example: Assembling a Crew

```python
from crewai import Crew, Agent, Task, Process
from langchain_community.tools import DuckDuckGoSearchRun
from crewai_tools import tool

@tool('DuckDuckGoSearch')
def search(search_query: str):
    """Search the web for information on a given topic"""
    return DuckDuckGoSearchRun().run(search_query)

# Define agents with specific roles and tools
researcher = Agent(
    role='Senior Research Analyst',
    goal='Discover innovative AI technologies',
    backstory="""You're a senior research analyst at a large company.
    You're responsible for analyzing data and providing insights
    to the business.
    You're currently working on a project to analyze the
    trends and innovations in the space of artificial intelligence.""",
    tools=[search]
)

writer = Agent(
    role='Content Writer',
    goal='Write engaging articles on AI discoveries',
    backstory="""You're a senior writer at a large company.
    You're responsible for creating content for the business.
    You're currently working on a project to write about trends
    and innovations in the space of AI for your next meeting.""",
    verbose=True
)

# Create tasks for the agents
research_task = Task(
    description='Identify breakthrough AI technologies',
    agent=researcher,
    expected_output='A bullet list summary of the top 5 most important AI news'
)
write_article_task = Task(
    description='Draft an article on the latest AI technologies',
    agent=writer,
    expected_output='3 paragraph blog post on the latest AI technologies'
)

# Assemble the crew with a sequential process
my_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_article_task],
    process=Process.sequential,
    full_output=True,
    verbose=True,
)
```
## Crew Output
@@ -4,16 +4,17 @@ description: Leveraging memory systems in the crewAI framework to enhance agent
---

## Introduction to Memory Systems in crewAI

!!! note "Enhancing Agent Intelligence"
    The crewAI framework introduces a sophisticated memory system designed to significantly enhance the capabilities of AI agents. This system comprises short-term memory, long-term memory, entity memory, and contextual memory, each serving a unique purpose in aiding agents to remember, reason, and learn from past interactions.

## Memory System Components

| Component | Description |
| :--- | :--- |
| **Short-Term Memory** | Temporarily stores recent interactions and outcomes using `RAG`, enabling agents to recall and utilize information relevant to their current context during the current execution. |
| **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. |
| **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses `RAG` for storing entity information. |
| **Contextual Memory** | Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, and `EntityMemory`, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |

## How Memory Systems Empower Agents
@@ -27,12 +28,12 @@ description: Leveraging memory systems in the crewAI framework to enhance agent

## Implementing Memory in Your Crew

When configuring a crew, you can enable and customize each memory component to suit the crew's objectives and the nature of tasks it will perform.
By default, the memory system is disabled, and you can ensure it is active by setting `memory=True` in the crew configuration. The memory will use OpenAI embeddings by default, but you can change it by setting `embedder` to a different model.

The `embedder` only applies to **Short-Term Memory**, which uses Chroma for RAG via the EmbedChain package.
The **Long-Term Memory** uses SQLite3 to store task results. Currently, there is no way to override these storage implementations.
The data storage files are saved into a platform-specific location found using the appdirs package,
and the name of the project can be overridden using the **CREWAI_STORAGE_DIR** environment variable.
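Because the storage path is derived from appdirs and the project name, pinning **CREWAI_STORAGE_DIR** makes the location explicit. A minimal sketch, assuming the variable is set before the crew is created (the path, agents, and tasks are placeholders):

```python
import os

# Must be set before crewAI resolves its storage location
os.environ["CREWAI_STORAGE_DIR"] = "./crew_storage"

from crewai import Crew

my_crew = Crew(
    agents=[...],   # placeholder: your agents
    tasks=[...],    # placeholder: your tasks
    memory=True,    # enables short-term, long-term, and entity memory
)
```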
### Example: Configuring Memory for a Crew
@@ -63,7 +64,7 @@ my_crew = Crew(
    verbose=True,
    embedder={
        "provider": "openai",
        "config": {
            "model": 'text-embedding-3-small'
        }
    }
@@ -82,7 +83,7 @@ my_crew = Crew(
    verbose=True,
    embedder={
        "provider": "google",
        "config": {
            "model": 'models/embedding-001',
            "task_type": "retrieval_document",
            "title": "Embeddings for Embedchain"
@@ -103,7 +104,7 @@ my_crew = Crew(
    verbose=True,
    embedder={
        "provider": "azure_openai",
        "config": {
            "model": 'text-embedding-ada-002',
            "deployment_name": "your_embedding_model_deployment_name"
        }
@@ -139,7 +140,7 @@ my_crew = Crew(
    verbose=True,
    embedder={
        "provider": "vertexai",
        "config": {
            "model": 'textembedding-gecko'
        }
    }
@@ -158,7 +159,7 @@ my_crew = Crew(
    verbose=True,
    embedder={
        "provider": "cohere",
        "config": {
            "model": "embed-english-v3.0",
            "vector_dimension": 1024
        }
@@ -12,7 +12,7 @@ A pipeline in crewAI represents a structured workflow that allows for the sequen

Understanding the following terms is crucial for working effectively with pipelines:

- **Stage**: A distinct part of the pipeline, which can be either sequential (a single crew) or parallel (multiple crews executing concurrently).
- **Kickoff**: A specific execution of the pipeline for a given set of inputs, representing a single instance of processing through the pipeline.
- **Branch**: Parallel executions within a stage (e.g., concurrent crew operations).
- **Trace**: The journey of an individual input through the entire pipeline, capturing the path and transformations it undergoes.
@@ -28,13 +28,13 @@ This represents a pipeline with three stages:

2. A parallel stage with two branches (crew2 and crew3 executing concurrently)
3. Another sequential stage (crew4)

Each input creates its own kickoff, flowing through all stages of the pipeline. Multiple kickoffs can be processed concurrently, each following the defined pipeline structure.
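In code, that three-stage shape maps directly onto the `stages` list; a minimal sketch, assuming `crew1` through `crew4` are `Crew` instances defined elsewhere:

```python
from crewai import Pipeline

my_pipeline = Pipeline(
    stages=[
        crew1,           # stage 1: sequential
        [crew2, crew3],  # stage 2: two parallel branches
        crew4,           # stage 3: sequential
    ]
)
```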
## Pipeline Attributes
| Attribute | Parameters | Description |
| :--- | :--- | :--- |
| **Stages** | `stages` | A list of `PipelineStage` (crews, lists of crews, or routers) representing the stages to be executed in sequence. |

## Creating a Pipeline
@@ -43,7 +43,7 @@ When creating a pipeline, you define a series of stages, each consisting of eith

### Example: Assembling a Pipeline

```python
from crewai import Crew, Process, Pipeline

# Define your crews
research_crew = Crew(
@@ -74,7 +74,8 @@ my_pipeline = Pipeline(

| Method | Description |
| :--- | :--- |
| **kickoff** | Executes the pipeline, processing all stages and returning the results. This method initiates one or more kickoffs through the pipeline, handling the flow of data between stages. |
| **process_runs** | Runs the pipeline for each input provided, handling the flow and transformation of data between stages. |

## Pipeline Output
@@ -99,12 +100,12 @@ The output of a pipeline in the crewAI framework is encapsulated within the `Pip

| Attribute | Parameters | Type | Description |
| :--- | :--- | :--- | :--- |
| **ID** | `id` | `UUID4` | A unique identifier for the run result. |
| **Raw** | `raw` | `str` | The raw output of the final stage in the pipeline kickoff. |
| **Pydantic** | `pydantic` | `Any` | A Pydantic model object representing the structured output of the final stage, if applicable. |
| **JSON Dict** | `json_dict` | `Union[Dict[str, Any], None]` | A dictionary representing the JSON output of the final stage, if applicable. |
| **Token Usage** | `token_usage` | `Dict[str, UsageMetrics]` | A summary of token usage across all stages of the pipeline kickoff. |
| **Trace** | `trace` | `List[Any]` | A trace of the journey of inputs through the pipeline kickoff. |
| **Crews Outputs** | `crews_outputs` | `List[CrewOutput]` | A list of `CrewOutput` objects, representing the outputs from each crew in the pipeline kickoff. |

### Pipeline Run Result Methods and Properties
@@ -112,7 +113,7 @@ The output of a pipeline in the crewAI framework is encapsulated within the `Pip

| :--- | :--- |
| **json** | Returns the JSON string representation of the run result if the output format of the final task is JSON. |
| **to_dict** | Converts the JSON and Pydantic outputs to a dictionary. |
| **str** | Returns the string representation of the run result, prioritizing Pydantic, then JSON, then raw. |
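Tying the attributes and properties above together, a hedged sketch of consuming these results; whether `kickoff` must be awaited varies by version, and `my_pipeline` plus the input shape are assumptions:

```python
import asyncio

async def main():
    results = await my_pipeline.kickoff(inputs=[{"topic": "AI"}])
    for result in results:
        print(result.raw)                 # raw output of the final stage
        if result.json_dict:
            print(result.json_dict)       # JSON output, if the final task produced it
        for crew_output in result.crews_outputs:
            print(crew_output.raw)        # intermediate per-crew outputs

asyncio.run(main())
```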
### Accessing Pipeline Outputs
@@ -247,7 +248,7 @@ main_pipeline = Pipeline(stages=[classification_crew, email_router])

inputs = [{"email": "..."}, {"email": "..."}]  # List of email data

main_pipeline.kickoff(inputs=inputs)
```

In this example, the router decides between an urgent pipeline and a normal pipeline based on the urgency score of the email. If the urgency score is greater than 7, it routes to the urgent pipeline; otherwise, it uses the normal pipeline. If the input doesn't include an urgency score, it defaults to just the classification crew.
@@ -261,7 +262,7 @@ In this example, the router decides between an urgent pipeline and a normal pipe

### Error Handling and Validation

The `Pipeline` class includes validation mechanisms to ensure the robustness of the pipeline structure:

- Validates that stages contain only Crew instances or lists of Crew instances.
- Prevents double nesting of stages to maintain a clear structure.
@@ -43,7 +43,7 @@ my_crew = Crew(

### Example

When running the base case example, you will see something like the following output, which represents the output of the AgentPlanner responsible for creating the step-by-step logic to add to the Agents' tasks.

```
[2024-07-15 16:49:11][INFO]: Planning the crew execution
@@ -96,7 +96,7 @@ A list with 10 bullet points of the most relevant information about AI LLMs.

**Agent Goal:** Create detailed reports based on AI LLMs data analysis and research findings

**Task Expected Output:** A fully fledged report with the main topics, each with a full section of information. Formatted as markdown without '```'

**Task Tools:** None specified
@@ -130,5 +130,4 @@ A list with 10 bullet points of the most relevant information about AI LLMs.

- Double-check formatting and make any necessary adjustments.

**Expected Output:**
A fully fledged report with the main topics, each with a full section of information. Formatted as markdown without '```'.
```
@@ -1,3 +1,4 @@
---
title: crewAI Tasks
description: Detailed guide on managing and creating tasks within the crewAI framework, reflecting the latest codebase updates.
@@ -12,22 +13,22 @@ Tasks within crewAI can be collaborative, requiring multiple agents to work toge

## Task Attributes

| Attribute | Parameters | Type | Description |
| :--- | :--- | :--- | :--- |
| **Description** | `description` | `str` | A clear, concise statement of what the task entails. |
| **Agent** | `agent` | `Optional[BaseAgent]` | The agent responsible for the task, assigned either directly or by the crew's process. |
| **Expected Output** | `expected_output` | `str` | A detailed description of what the task's completion looks like. |
| **Tools** _(optional)_ | `tools` | `Optional[List[Any]]` | The functions or capabilities the agent can utilize to perform the task. Defaults to an empty list. |
| **Async Execution** _(optional)_ | `async_execution` | `Optional[bool]` | If set, the task executes asynchronously, allowing progression without waiting for completion. Defaults to False. |
| **Context** _(optional)_ | `context` | `Optional[List["Task"]]` | Specifies tasks whose outputs are used as context for this task. |
| **Config** _(optional)_ | `config` | `Optional[Dict[str, Any]]` | Additional configuration details for the agent executing the task, allowing further customization. Defaults to None. |
| **Output JSON** _(optional)_ | `output_json` | `Optional[Type[BaseModel]]` | Outputs a JSON object, requiring an OpenAI client. Only one output format can be set. |
| **Output Pydantic** _(optional)_ | `output_pydantic` | `Optional[Type[BaseModel]]` | Outputs a Pydantic model object, requiring an OpenAI client. Only one output format can be set. |
| **Output File** _(optional)_ | `output_file` | `Optional[str]` | Saves the task output to a file. If used with `Output JSON` or `Output Pydantic`, specifies how the output is saved. |
| **Output** _(optional)_ | `output` | `Optional[TaskOutput]` | An instance of `TaskOutput`, containing the raw, JSON, and Pydantic output plus additional details. |
| **Callback** _(optional)_ | `callback` | `Optional[Any]` | A callable that is executed with the task's output upon completion. |
| **Human Input** _(optional)_ | `human_input` | `Optional[bool]` | Indicates if the task should involve human review at the end, useful for tasks needing human oversight. Defaults to False. |
| **Converter Class** _(optional)_ | `converter_cls` | `Optional[Type[Converter]]` | A converter class used to export structured output. Defaults to None. |

## Creating a Task
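A minimal sketch pulling a few of these attributes together; the descriptions here are placeholders, and `researcher` and `research_task` stand in for objects defined elsewhere:

```python
from crewai import Task

write_report_task = Task(
    description="Summarize the research findings into a short report",
    expected_output="A markdown report with three sections",
    agent=researcher,            # optional: direct assignment
    context=[research_task],     # optional: reuse another task's output as context
    output_file="report.md",     # optional: persist the result to a file
)
```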
@@ -49,28 +50,28 @@ Directly specify an `agent` for assignment or let the `hierarchical` CrewAI's pr

## Task Output

!!! note "Understanding Task Outputs"
    The output of a task in the crewAI framework is encapsulated within the `TaskOutput` class. This class provides a structured way to access results of a task, including various formats such as raw output, JSON, and Pydantic models.
    By default, the `TaskOutput` will only include the `raw` output. A `TaskOutput` will only include the `pydantic` or `json_dict` output if the original `Task` object was configured with `output_pydantic` or `output_json`, respectively.

### Task Output Attributes
| Attribute | Parameters | Type | Description |
| :--- | :--- | :--- | :--- |
| **Description** | `description` | `str` | Description of the task. |
| **Summary** | `summary` | `Optional[str]` | Summary of the task, auto-generated from the first 10 words of the description. |
| **Raw** | `raw` | `str` | The raw output of the task. This is the default format for the output. |
| **Pydantic** | `pydantic` | `Optional[BaseModel]` | A Pydantic model object representing the structured output of the task. |
| **JSON Dict** | `json_dict` | `Optional[Dict[str, Any]]` | A dictionary representing the JSON output of the task. |
| **Agent** | `agent` | `str` | The agent that executed the task. |
| **Output Format** | `output_format` | `OutputFormat` | The format of the task output, with options including RAW, JSON, and Pydantic. The default is RAW. |

### Task Output Methods and Properties
| Method/Property | Description |
| :--- | :--- |
| **json** | Returns the JSON string representation of the task output if the output format is JSON. |
| **to_dict** | Converts the JSON and Pydantic outputs to a dictionary. |
| **str** | Returns the string representation of the task output, prioritizing Pydantic, then JSON, then raw. |

### Accessing Task Outputs
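Before the full examples below, a minimal sketch of the access pattern; `crew` and `task1` stand in for objects defined elsewhere:

```python
result = crew.kickoff()

# Each executed task exposes a TaskOutput on `task.output`
print(task1.output.raw)              # always populated
if task1.output.json_dict:
    print(task1.output.json_dict)    # set only when the task used output_json
if task1.output.pydantic:
    print(task1.output.pydantic)     # set only when the task used output_pydantic
```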
@@ -234,7 +235,7 @@ def callback_function(output: TaskOutput):
    print(f"""
        Task completed!
        Task: {output.description}
        Output: {output.raw}
    """)

research_task = Task(
@@ -275,7 +276,7 @@ result = crew.kickoff()
print(f"""
    Task completed!
    Task: {task1.output.description}
    Output: {task1.output.raw}
""")
```
@@ -9,7 +9,7 @@ Testing is a crucial part of the development process, and it is essential to ens

### Using the Testing Feature

We added the CLI command `crewai test` to make it easy to test your crew. This command will run your crew for a specified number of iterations and provide detailed performance metrics. The parameters are `n_iterations` and `model`, which are optional and default to 2 and `gpt-4o-mini` respectively. For now, the only provider available is OpenAI.

```bash
crewai test
@@ -21,20 +21,36 @@ If you want to run more iterations or use a different model, you can specify the
crewai test --n_iterations 5 --model gpt-4o
```

or using the short forms:

```bash
crewai test -n 5 -m gpt-4o
```

When you run the `crewai test` command, the crew will be executed for the specified number of iterations, and the performance metrics will be displayed at the end of the run.

A table of scores at the end will show the performance of the crew in terms of the following metrics:
```
Tasks Scores
(1-10 Higher is better)
┏━━━━━━━━━━━━━━━━━━━━┯━━━━━━━┯━━━━━━━┯━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Tasks/Crew/Agents  │ Run 1 │ Run 2 │ Avg. Total │ Agents                         │                                 ┃
┠────────────────────┼───────┼───────┼────────────┼────────────────────────────────┼─────────────────────────────────┨
┃ Task 1             │ 9.0   │ 9.5   │ 9.2        │ - Professional Insights        │                                 ┃
┃                    │       │       │            │   Researcher                   │                                 ┃
┃                    │       │       │            │                                │                                 ┃
┃ Task 2             │ 9.0   │ 10.0  │ 9.5        │ - Company Profile Investigator │                                 ┃
┃                    │       │       │            │                                │                                 ┃
┃ Task 3             │ 9.0   │ 9.0   │ 9.0        │ - Automation Insights          │                                 ┃
┃                    │       │       │            │   Specialist                   │                                 ┃
┃                    │       │       │            │                                │                                 ┃
┃ Task 4             │ 9.0   │ 9.0   │ 9.0        │ - Final Report Compiler        │                                 ┃
┃                    │       │       │            │                                │ - Automation Insights           ┃
┃                    │       │       │            │                                │   Specialist                    ┃
┃ Crew               │ 9.00  │ 9.38  │ 9.2        │                                │                                 ┃
┃ Execution Time (s) │ 126   │ 145   │ 135        │                                │                                 ┃
┗━━━━━━━━━━━━━━━━━━━━┷━━━━━━━┷━━━━━━━┷━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
```
The example above shows the test results for two runs of the crew with four tasks, with the average total score for each task and the crew as a whole.
@@ -114,7 +114,7 @@ Here is a list of the available tools and their descriptions:
| **FileReadTool** | Enables reading and extracting data from files, supporting various file formats. |
| **FirecrawlSearchTool** | A tool to search webpages using Firecrawl and return the results. |
| **FirecrawlCrawlWebsiteTool** | A tool for crawling webpages using Firecrawl. |
| **FirecrawlScrapeWebsiteTool** | A tool for scraping a webpage URL using Firecrawl and returning its contents. |
| **GithubSearchTool** | A RAG tool for searching within GitHub repositories, useful for code and documentation search. |
| **SerperDevTool** | A specialized tool for development purposes, with specific functionalities under development. |
| **TXTSearchTool** | A RAG tool focused on searching within text (.txt) files, suitable for unstructured data. |
@@ -144,6 +144,7 @@ pip install 'crewai[tools]'
```

Once you do that, there are two main ways to create a crewAI tool:

### Subclassing `BaseTool`

```python
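# The original example body falls outside this hunk; what follows is a hedged
# sketch of the documented subclassing pattern (tool name, description, and
# logic are placeholders).
from crewai_tools import BaseTool

class MyCustomTool(BaseTool):
    name: str = "Name of my tool"
    description: str = "Clear description of what this tool is useful for; your agent will need this information to use it."

    def _run(self, argument: str) -> str:
        # Tool logic goes here
        return "Result from custom tool"
```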
@@ -16,7 +16,7 @@ To use the training feature, follow these steps:
3. Run the following command:

```shell
crewai train -n <n_iterations> <filename> (optional)
```

!!! note "Replace `<n_iterations>` with the desired number of training iterations and `<filename>` with the appropriate filename ending with `.pkl`."
@@ -5,9 +5,10 @@ description: Learn how to integrate LangChain tools with CrewAI agents to enhanc

## Using LangChain Tools
!!! info "LangChain Integration"
    CrewAI seamlessly integrates with LangChain’s comprehensive [list of tools](https://python.langchain.com/docs/integrations/tools/), all of which can be used with crewAI.

```python
import os
from crewai import Agent
from langchain.agents import Tool
from langchain.utilities import GoogleSerperAPIWrapper
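# The rest of this snippet is cut off by the hunk; a hedged completion of the
# usual wiring follows (the API key value and agent fields are placeholders).
os.environ["SERPER_API_KEY"] = "your-serper-api-key"

search = GoogleSerperAPIWrapper()

serper_tool = Tool(
    name="Intermediate Answer",
    func=search.run,
    description="Useful for search-based queries",
)

agent = Agent(
    role="Research Analyst",
    goal="Provide up-to-date market analysis",
    backstory="An expert analyst with a keen eye for market trends.",
    tools=[serper_tool],
)
```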
@@ -54,4 +54,4 @@ To effectively use the LlamaIndexTool, follow these steps:
pip install 'crewai[tools]'
```

2. **Install and Use LlamaIndex**: Follow the LlamaIndex documentation [LlamaIndex Documentation](https://docs.llamaindex.ai/) to set up a RAG/agent pipeline.
@@ -71,25 +71,59 @@ To customize your pipeline project, you can:
3. Modify `src/<project_name>/main.py` to set up and run your pipelines.
4. Add your environment variables into the `.env` file.

## Example 1: Defining a Two-Stage Sequential Pipeline

Here's an example of how to define a pipeline with sequential stages in `src/<project_name>/pipelines/pipeline.py`:

```python
from crewai import Pipeline
from crewai.project import PipelineBase
from ..crews.research_crew.research_crew import ResearchCrew
from ..crews.write_x_crew.write_x_crew import WriteXCrew

@PipelineBase
class SequentialPipeline:
    def __init__(self):
        # Initialize crews
        self.research_crew = ResearchCrew().crew()
        self.write_x_crew = WriteXCrew().crew()

    def create_pipeline(self):
        return Pipeline(
            stages=[
                self.research_crew,
                self.write_x_crew
            ]
        )

    async def kickoff(self, inputs):
        pipeline = self.create_pipeline()
        results = await pipeline.kickoff(inputs)
        return results
```
## Example 2: Defining a Two-Stage Pipeline with Parallel Execution

```python
from crewai import Pipeline
from crewai.project import PipelineBase
from ..crews.research_crew.research_crew import ResearchCrew
from ..crews.write_x_crew.write_x_crew import WriteXCrew
from ..crews.write_linkedin_crew.write_linkedin_crew import WriteLinkedInCrew

@PipelineBase
class ParallelExecutionPipeline:
    def __init__(self):
        # Initialize crews
        self.research_crew = ResearchCrew().crew()
        self.write_x_crew = WriteXCrew().crew()
        self.write_linkedin_crew = WriteLinkedInCrew().crew()

    def create_pipeline(self):
        return Pipeline(
            stages=[
                self.research_crew,
                [self.write_x_crew, self.write_linkedin_crew]  # Parallel execution
            ]
        )
```
@@ -1,5 +1,7 @@
---
title: Starting a New CrewAI Project - Using Template
description: A comprehensive guide to starting a new CrewAI project, including the latest updates and project setup methods.
---

@@ -21,6 +23,7 @@ $ pip install 'crewai[tools]'
```

## Creating a New Project

In this example, we will be using poetry as our virtual environment manager.

To create a new CrewAI project, run the following CLI command:
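The command itself falls outside this hunk; as a hedged sketch, the CLI's project scaffolding command typically looks like this (the subcommand shape is an assumption, and the project name is a placeholder):

```shell
crewai create crew <project_name>
```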
@@ -95,10 +98,13 @@ research_candidates_task:
```

### Referencing Variables:

Your defined functions with the same name will be used. For example, you can reference the agent for specific tasks from the `tasks.yaml` file. Ensure your annotated agent and function name are the same; otherwise, your task won't recognize the reference properly.

#### Example References

`agents.yaml`
```yaml
email_summarizer:
  role: >

@@ -110,7 +116,8 @@ email_summarizer:
  llm: mixtal_llm
```

`tasks.yaml`
```yaml
email_summarizer_task:
  description: >

@@ -123,34 +130,31 @@ email_summarizer_task:
    - research_task
```

Use the annotations to properly reference the agent and task in the `crew.py` file.
### Annotations include:

* `@agent`
* `@task`
* `@crew`
* `@llm`
* `@tool`
* `@callback`
* `@output_json`
* `@output_pydantic`
* `@cache_handler`

`crew.py`
```python
# ...
@llm
def mixtal_llm(self):
    return ChatGroq(temperature=0, model_name="mixtral-8x7b-32768")

@agent
def email_summarizer(self) -> Agent:
    return Agent(
        config=self.agents_config["email_summarizer"],
    )

# ...other tasks defined
@task
def email_summarizer_task(self) -> Task:
    return Task(
        config=self.tasks_config["email_summarizer_task"],
    )
```
@@ -204,6 +208,7 @@ To run your project, use the following command:
```shell
$ crewai run
```

This will initialize your crew of AI agents and begin task execution as defined in your configuration in the `main.py` file.

### Replay Tasks from Latest Crew Kickoff
@@ -19,7 +19,7 @@ from crewai.task import Task
from crewai_tools import SerperDevTool

# Define a condition function for the conditional task
# If false, the task will be skipped; if true, the task will execute.
def is_data_missing(output: TaskOutput) -> bool:
    return len(output.pydantic.events) < 10  # data is "missing" when fewer than 10 events were fetched
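The actual `ConditionalTask` wiring sits outside this hunk; a hedged sketch of how the condition function is typically attached (the description text is a placeholder, and the agents are defined just below):

```python
from crewai.tasks.conditional_task import ConditionalTask

conditional_task = ConditionalTask(
    description="Fetch more events if fewer than 10 were gathered",
    expected_output="A list of at least 10 events in San Francisco",
    condition=is_data_missing,   # the function defined above
    agent=data_processor_agent,
)
```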
@@ -29,21 +29,21 @@ data_fetcher_agent = Agent(
    goal="Fetch data online using Serper tool",
    backstory="Backstory 1",
    verbose=True,
    tools=[SerperDevTool()]
)

data_processor_agent = Agent(
    role="Data Processor",
    goal="Process fetched data",
    backstory="Backstory 2",
    verbose=True
)

summary_generator_agent = Agent(
    role="Summary Generator",
    goal="Generate summary from fetched data",
    backstory="Backstory 3",
    verbose=True
)

class EventOutput(BaseModel):
@@ -69,7 +69,7 @@ conditional_task = ConditionalTask(

task3 = Task(
    description="Generate summary of events in San Francisco from fetched data",
    expected_output="A complete report on the customer and their customers and competitors, including their demographics, preferences, market positioning and audience engagement.",
    agent=summary_generator_agent,
)
@@ -78,7 +78,7 @@ crew = Crew(
    agents=[data_fetcher_agent, data_processor_agent, summary_generator_agent],
    tasks=[task1, conditional_task, task3],
    verbose=True,
    planning=True
)

# Run the crew
@@ -14,12 +14,16 @@ Crafting an efficient CrewAI team hinges on the ability to dynamically tailor yo
- **Cache** *(Optional)*: Determines whether the agent should use a cache for tool usage.
- **Max RPM**: Sets the maximum number of requests per minute (`max_rpm`). This attribute is optional and can be set to `None` for no limit, allowing for unlimited queries to external services if needed.
- **Verbose** *(Optional)*: Enables detailed logging of an agent's actions, useful for debugging and optimization. Specifically, it provides insights into agent execution processes, aiding in the optimization of performance.
- **Allow Delegation** *(Optional)*: `allow_delegation` controls whether the agent is allowed to delegate tasks to other agents. This attribute is now set to `False` by default.
- **Max Iter** *(Optional)*: The `max_iter` attribute allows users to define the maximum number of iterations an agent can perform for a single task, preventing infinite loops or excessively long executions. The default value is set to 25, providing a balance between thoroughness and efficiency.
- **Max Execution Time** *(Optional)*: `max_execution_time` sets the maximum execution time for an agent to complete a task.
- **System Template** *(Optional)*: `system_template` defines the system format for the agent.
- **Prompt Template** *(Optional)*: `prompt_template` defines the prompt format for the agent.
- **Response Template** *(Optional)*: `response_template` defines the response format for the agent.
- **Use Stop Words** *(Optional)*: The `use_stop_words` attribute controls whether the agent will use stop words during task execution. This is now supported to aid o1 models.
- **Use System Prompt** *(Optional)*: `use_system_prompt` controls whether the agent will use a system prompt for task execution. Agents can now operate without system prompts.
- **Respect Context Window**: `respect_context_window` renames the sliding context window attribute and enables it by default to maintain context size.
- **Max Retry Limit**: `max_retry_limit` defines the maximum number of retries for an agent to execute a task when an error occurs.

## Advanced Customization Options
Beyond the basic attributes, CrewAI allows for deeper customization to enhance an agent's behavior and capabilities significantly.
@@ -67,12 +71,11 @@ agent = Agent(
    verbose=True,
    max_rpm=None,  # No limit on requests per minute
    max_iter=25,  # Default value for maximum iterations
    allow_delegation=False
)
```
## Delegation and Autonomy
Controlling an agent's ability to delegate tasks or ask questions is vital for tailoring its autonomy and collaborative dynamics within the CrewAI framework. By default, the `allow_delegation` attribute is now set to `False`, so agents do not seek assistance or delegate tasks unless you enable it. This default can be changed to promote collaborative problem-solving and efficiency within the CrewAI ecosystem, and delegation can be enabled to suit specific operational requirements.

### Example: Enabling Delegation for an Agent
```python
@@ -80,7 +83,7 @@ agent = Agent(
    role='Content Writer',
    goal='Write engaging content on market trends',
    backstory='A seasoned writer with expertise in market analysis.',
    allow_delegation=True  # Enabling delegation
)
```
@@ -1,27 +1,31 @@
|
||||
---
|
||||
title: Forcing Tool Output as Result
|
||||
description: Learn how to force tool output as the result in of an Agent's task in CrewAI.
|
||||
description: Learn how to force tool output as the result in an Agent's task in CrewAI.
|
||||
---
|
||||
|
||||
## Introduction

In CrewAI, you can force the output of a tool as the result of an agent's task. This feature is useful when you want to ensure that the tool output is captured and returned as the task result, avoiding any agent modification during the task execution.
## Forcing Tool Output as Result

To force the tool output as the result of an agent's task, set the `result_as_answer` parameter to `True` when adding a tool to the agent. This parameter ensures that the tool output is captured and returned as the task result, without any modifications by the agent.

Here's an example of how to force the tool output as the result of an agent's task:
```python
# ...
from crewai.agent import Agent
from my_tool import MyCustomTool

# Create a coding agent with the custom tool; result_as_answer=True makes the
# tool's raw output the final answer for the task
coding_agent = Agent(
    role="Data Scientist",
    goal="Produce amazing reports on AI",
    backstory="You work with data and AI",
    tools=[MyCustomTool(result_as_answer=True)],
)

# Assuming the tool's execution and result population occurs within the system
task_result = coding_agent.execute_task(task)
```
## Workflow in Action

By default, tasks in CrewAI are managed through a sequential process. However, a hierarchical process is also available, offering:
- **Task Delegation**: A manager agent allocates tasks among crew members based on their roles and capabilities.
- **Result Validation**: The manager evaluates outcomes to ensure they meet the required standards.
- **Efficient Workflow**: Emulates corporate structures, providing an organized approach to task management.
- **System Prompt Handling**: Optionally specify whether the system should use predefined prompts.
- **Stop Words Control**: Optionally specify whether stop words should be used, supporting various models including the o1 models.
- **Context Window Respect**: Prioritize important context by enabling respect of the context window, which is now the default behavior.
- **Delegation Control**: Delegation is now disabled by default to give users explicit control.
- **Max Requests Per Minute**: Configurable option to set the maximum number of requests per minute.
- **Max Iterations**: Limit the maximum number of iterations for obtaining a final answer.
## Implementing the Hierarchical Process

To utilize the hierarchical process, it's essential to explicitly set the process attribute to `Process.hierarchical`, as the default behavior is `Process.sequential`. Define a crew with a designated manager and establish a clear chain of command.
```python
researcher = Agent(
    # ... role, goal, and backstory for the researcher (elided in this excerpt)
    cache=True,
    verbose=False,
    # tools=[]  # This can be optionally specified; defaults to an empty list
    use_system_prompt=True,  # Enable or disable system prompts for this agent
    use_stop_words=True,  # Enable or disable stop words for this agent
    max_rpm=30,  # Limit on the number of requests per minute
    max_iter=5  # Maximum number of iterations for a final answer
)
writer = Agent(
    role='Writer',
    # ... goal and backstory for the writer (elided in this excerpt)
    cache=True,
    verbose=False,
    # tools=[]  # Optionally specify tools; defaults to an empty list
    use_system_prompt=True,  # Enable or disable system prompts for this agent
    use_stop_words=True,  # Enable or disable stop words for this agent
    max_rpm=30,  # Limit on the number of requests per minute
    max_iter=5  # Maximum number of iterations for a final answer
)
```
```python
# Establishing the crew with a hierarchical process and additional configurations
project_crew = Crew(
    # tasks=[...]  # Tasks to be delegated under the manager's supervision (elided in this excerpt)
    agents=[researcher, writer],
    manager_llm=ChatOpenAI(temperature=0, model="gpt-4"),  # Mandatory if manager_agent is not set
    process=Process.hierarchical,  # Specifies the hierarchical management approach
    respect_context_window=True,  # Enable respect of the context window for tasks
    memory=True,  # Enable memory usage for enhanced task execution
    manager_agent=None,  # Optional: explicitly set a specific agent as manager instead of the manager_llm
    planning=True,  # Enable planning feature for pre-execution strategy
)
```
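With the crew configured (including its tasks, elided above), a kickoff call runs the hierarchical workflow. This short sketch is illustrative rather than part of the original excerpt:

```python
# The manager LLM delegates tasks to the researcher and writer and validates results
result = project_crew.kickoff()
print(result)
```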
```python
task2 = Task(
    description=(
        # ... (the opening of the task description is elided in this excerpt)
        "Aim for a narrative that captures the essence of these breakthroughs and their implications for the future."
    ),
    expected_output='A compelling 3-paragraph blog post formatted as markdown about the latest AI advancements in 2024',
    agent=writer,
    human_input=True
)

# Instantiate your crew with a sequential process
```
## Example: Multiple Asynchronous Crew Executions

In this example, we'll show how to kick off multiple crews asynchronously and wait for all of them to complete using `asyncio.gather()`:
```python
import asyncio

# ... (the coding_agent definition is elided in this excerpt)

# Create a task that requires code execution
data_analysis_task = Task(
    description="Analyze the given dataset and calculate the average age of participants. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age calculated from the dataset"
)

# Create a crew and add the task
analysis_crew = Crew(
    agents=[coding_agent],
    tasks=[data_analysis_task],
    verbose=True,
    memory=False,
    respect_context_window=True  # enabled by default
)

datasets = [
    # ... (the dataset inputs are elided in this excerpt)
]
```
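Because the excerpt above is truncated, here is a minimal self-contained sketch of the gather pattern. The input values are illustrative assumptions; the key point is that `kickoff_async()` returns awaitables that `asyncio.gather()` can run concurrently:

```python
import asyncio

async def run_crews_concurrently():
    # Kick off the same crew with different inputs, concurrently
    results = await asyncio.gather(
        analysis_crew.kickoff_async(inputs={"ages": [25, 30, 35]}),
        analysis_crew.kickoff_async(inputs={"ages": [40, 45, 50]}),
    )
    for result in results:
        print(result)

asyncio.run(run_crews_concurrently())
```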
---
title: Connect CrewAI to LLMs
description: Comprehensive guide on integrating CrewAI with various Large Language Models (LLMs) using LiteLLM, including supported providers and configuration options.
---
## Connect CrewAI to LLMs

CrewAI now uses LiteLLM to connect to a wide variety of Language Models (LLMs). This integration provides extensive versatility, allowing you to use models from numerous providers through a simple, unified interface.

!!! note "Default LLM"
    By default, CrewAI uses OpenAI's GPT-4 model (specifically, the model specified by the OPENAI_MODEL_NAME environment variable, defaulting to "gpt-4") for language processing. You can easily configure your agents to use a different model or provider as described in this guide.
## Supported Providers

LiteLLM supports a wide range of providers, including but not limited to:
- OpenAI
- Anthropic
- Google (Vertex AI, Gemini)
- Azure OpenAI
- AWS (Bedrock, SageMaker)
- Cohere
- Hugging Face
- Ollama
- Mistral AI
- Replicate
- Together AI
- AI21
- Cloudflare Workers AI
- DeepInfra
- Groq
- And many more!

For a complete and up-to-date list of supported providers, please refer to the [LiteLLM Providers documentation](https://docs.litellm.ai/docs/providers).
## Changing the LLM

To use a different LLM with your CrewAI agents, you simply pass the model name as a string when initializing the agent. Here are some examples:
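The same agent pattern works across providers; only the `llm` string changes:

```python
from crewai import Agent

# Using OpenAI's GPT-4
openai_agent = Agent(
    role='OpenAI Expert',
    goal='Provide insights using GPT-4',
    backstory="An AI assistant powered by OpenAI's latest model.",
    llm='gpt-4'
)

# Using Anthropic's Claude
claude_agent = Agent(
    role='Anthropic Expert',
    goal='Analyze data using Claude',
    backstory="An AI assistant leveraging Anthropic's language model.",
    llm='claude-2'
)

# Using Ollama's local Llama 2 model
ollama_agent = Agent(
    role='Local AI Expert',
    goal='Process information using a local model',
    backstory="An AI assistant running on local hardware.",
    llm='ollama/llama2'
)

# Using Google's Gemini model
gemini_agent = Agent(
    role='Google AI Expert',
    goal='Generate creative content with Gemini',
    backstory="An AI assistant powered by Google's advanced language model.",
    llm='gemini-pro'
)
```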
## Changing the default LLM

The default LLM is provided through the `langchain-openai` package, which is installed by default when you install CrewAI. You can change this default by setting the `OPENAI_MODEL_NAME` environment variable, which lets you switch between OpenAI models without changing your agent code.
```python
import os

from crewai import Agent

# Required
os.environ["OPENAI_MODEL_NAME"] = "gpt-4-0125-preview"

# Agent will automatically use the model defined in the environment variable
example_agent = Agent(
    role='Local Expert',
    goal='Provide insights about the city',
    backstory="A knowledgeable local guide.",
    verbose=True
)
```
## Ollama Local Integration

Ollama is preferred for local LLM integration, offering customization and privacy benefits. To integrate Ollama with CrewAI, you will need the `langchain-ollama` package. You can then set the following environment variables to connect to your Ollama instance running locally on port 11434.
```python
import os

os.environ["OPENAI_API_BASE"] = 'http://localhost:11434'
os.environ["OPENAI_MODEL_NAME"] = 'llama2'  # Adjust based on available model
os.environ["OPENAI_API_KEY"] = ''  # No API key required for Ollama
```
## Ollama Integration Step by Step (ex. for using Llama 3.1 8B locally)

1. [Download and install Ollama](https://ollama.com/download).
2. After setting up Ollama, pull the Llama 3.1 8B model by running `ollama run llama3.1` in your terminal.
3. Llama 3.1 should now be served locally at `http://localhost:11434`.
```python
import os

from crewai import Agent, Task, Crew
from langchain_ollama import ChatOllama

os.environ["OPENAI_API_KEY"] = "NA"

llm = ChatOllama(
    model="llama3.1",
    base_url="http://localhost:11434"
)

general_agent = Agent(
    role="Math Professor",
    goal="Provide the solution to the students that are asking mathematical questions and give them the answer.",
    backstory="You are an excellent math professor that likes to solve math questions in a way that everyone can understand your solution",
    allow_delegation=False,
    verbose=True,
    llm=llm
)

task = Task(
    description="what is 3 + 5",
    agent=general_agent,
    expected_output="A numerical answer."
)

crew = Crew(
    agents=[general_agent],
    tasks=[task],
    verbose=True
)

result = crew.kickoff()
print(result)
```
## HuggingFace Integration

There are a couple of different ways you can use HuggingFace to host your LLM.
### Your own HuggingFace endpoint

```python
from crewai import Agent
from langchain_huggingface import HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    repo_id="microsoft/Phi-3-mini-4k-instruct",
    task="text-generation",
    max_new_tokens=512,
    do_sample=False,
    repetition_penalty=1.03,
)

agent = Agent(
    role="HuggingFace Agent",
    goal="Generate text using HuggingFace",
    backstory="A diligent explorer of GitHub docs.",
    llm=llm
)
```
## OpenAI Compatible API Endpoints

Switch between APIs and models seamlessly using environment variables, supporting platforms like FastChat, LM Studio, Groq, and Mistral AI.
### Configuration Examples

#### FastChat
```python
import os

os.environ["OPENAI_API_BASE"] = 'http://localhost:8001/v1'
os.environ["OPENAI_MODEL_NAME"] = 'oh-2.5m7b-q51'
os.environ["OPENAI_API_KEY"] = 'NA'
```
#### LM Studio
Launch [LM Studio](https://lmstudio.ai) and go to the Server tab. Select a model from the dropdown menu and wait for it to load. Once it's loaded, click the green Start Server button and use the URL, port, and API key shown (you can modify them). Below is an example of the default settings as of LM Studio 0.2.19:

```python
import os

os.environ["OPENAI_API_BASE"] = 'http://localhost:1234/v1'
os.environ["OPENAI_API_KEY"] = 'lm-studio'
```
#### Groq API
```python
import os

os.environ["OPENAI_API_KEY"] = 'your-groq-api-key'
os.environ["OPENAI_MODEL_NAME"] = 'llama3-8b-8192'
os.environ["OPENAI_API_BASE"] = 'https://api.groq.com/openai/v1'
```
#### Mistral API
```python
import os

os.environ["OPENAI_API_KEY"] = 'your-mistral-api-key'
os.environ["OPENAI_API_BASE"] = 'https://api.mistral.ai/v1'
os.environ["OPENAI_MODEL_NAME"] = 'mistral-small'
```
### Solar
```python
import os

from langchain_community.chat_models.solar import SolarChat

os.environ["SOLAR_API_BASE"] = "https://api.upstage.ai/v1/solar"
os.environ["SOLAR_API_KEY"] = "your-solar-api-key"

# Free developer API key available here: https://console.upstage.ai/services/solar
# Langchain Example: https://github.com/langchain-ai/langchain/pull/18556
```
### Cohere
```python
import os

from langchain_cohere import ChatCohere

# Initialize language model
os.environ["COHERE_API_KEY"] = 'your-cohere-api-key'
llm = ChatCohere()

# Free developer API key available here: https://cohere.com/
# Langchain Documentation: https://python.langchain.com/docs/integrations/chat/cohere
```
### Azure OpenAI Configuration
For Azure OpenAI API integration, set the following environment variables:
```python
import os

os.environ["AZURE_OPENAI_DEPLOYMENT"] = 'Your deployment'
os.environ["OPENAI_API_VERSION"] = '2023-12-01-preview'
os.environ["AZURE_OPENAI_ENDPOINT"] = 'Your Endpoint'
os.environ["AZURE_OPENAI_API_KEY"] = 'Your API Key'
```
### Example Agent with Azure LLM
```python
import os

from dotenv import load_dotenv
from crewai import Agent
from langchain_openai import AzureChatOpenAI

load_dotenv()

azure_llm = AzureChatOpenAI(
    azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
    api_key=os.environ.get("AZURE_OPENAI_KEY")
)

azure_agent = Agent(
    role='Example Agent',
    goal='Demonstrate custom LLM configuration',
    backstory='A diligent explorer of GitHub docs.',
    llm=azure_llm
)
```
## Configuration

For most providers, you'll need to set up your API keys as environment variables. Here's how you can do it for some common providers:
```python
import os

# OpenAI
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"

# Anthropic
os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-api-key"

# Google (Vertex AI)
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path/to/your/credentials.json"

# Azure OpenAI
os.environ["AZURE_API_KEY"] = "your-azure-api-key"
os.environ["AZURE_API_BASE"] = "your-azure-endpoint"

# AWS (Bedrock)
os.environ["AWS_ACCESS_KEY_ID"] = "your-aws-access-key-id"
os.environ["AWS_SECRET_ACCESS_KEY"] = "your-aws-secret-access-key"
```
For providers that require additional configuration or have specific setup requirements, please refer to the [LiteLLM documentation](https://docs.litellm.ai/docs/) for detailed instructions.
## Using Local Models

For local models like those provided by Ollama, ensure you have the necessary software installed and running. For example, to use Ollama:
1. [Download and install Ollama](https://ollama.com/download)
2. Pull the desired model (e.g., `ollama pull llama2`)
3. Use the model in your CrewAI agent by specifying `llm='ollama/llama2'`, as in the `ollama_agent` example earlier in this guide
## Conclusion

By leveraging LiteLLM, CrewAI offers seamless integration with a vast array of LLMs. This flexibility allows you to choose the most suitable model for your specific needs, whether you prioritize performance, cost-efficiency, or local deployment. Remember to consult the [LiteLLM documentation](https://docs.litellm.ai/docs/) for the most up-to-date information on supported models and configuration options.
---
title: Replay Tasks from Latest Crew Kickoff
description: Replay tasks from the latest crew.kickoff(...)
---
## Introduction

To use the replay feature, follow these steps:
1. Open your terminal or command prompt.
2. Navigate to the directory where your CrewAI project is located.
3. Run the following commands:
To view the latest kickoff task IDs, use:
```shell
crewai log-tasks-outputs
```
Once you have your `task_id` to replay, use:
```shell
crewai replay -t <task_id>
```
**Note:** Ensure `crewai` is installed and configured correctly in your development environment.
### Replaying from a Task Programmatically
To replay from a task programmatically, use the following steps:

1. Specify the `task_id` and input parameters for the replay process.
2. Execute the replay command within a try-except block to handle potential errors.
```python
# Setup of `crew`, `task_id`, and `inputs` is elided in this excerpt;
# the replay call is wrapped in a try-except block to surface errors.
try:
    crew.replay(task_id=task_id, inputs=inputs)  # assumed invocation based on the CLI flow above
except Exception as e:
    raise Exception(f"An unexpected error occurred: {e}")
```
## Conclusion
With the above enhancements and detailed functionality, replaying specific tasks in CrewAI is more efficient and robust. Follow the commands and steps precisely to make the most of these features.
```python
report_crew = Crew(
    # ... agents and tasks configured as above (elided in this excerpt)
)

# Execute the crew
result = report_crew.kickoff()

# Accessing the type-safe output
task_output: TaskOutput = result.tasks[0].output
crew_output: CrewOutput = result.output
```
### Note:
Each task in a sequential process **must** have an agent assigned. Ensure that every `Task` includes an `agent` parameter, as in the sketch below.
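A minimal sketch of that requirement; the agent and task definitions are illustrative:

```python
from crewai import Agent, Task

writer = Agent(
    role='Writer',
    goal='Write a concise summary',
    backstory='A clear and precise communicator.'
)

summary_task = Task(
    description='Summarize the research findings.',
    expected_output='A one-paragraph summary.',
    agent=writer  # required for every task in a sequential process
)
```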
### Workflow in Action
1. **Initial Task**: In a sequential process, the first agent completes their task and signals completion.
2. **Subsequent Tasks**: Agents pick up their tasks based on the process type, with outcomes of preceding tasks or directives guiding their execution.
3. **Completion**: The process concludes once the final task is executed, leading to project completion.
## Advanced Features

CrewAI tracks token usage across all tasks and agents. You can access these metrics after execution to monitor costs and performance.
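A minimal sketch of reading those metrics, assuming the `report_crew` from the earlier example and the `usage_metrics` attribute exposed by the crew after a run:

```python
result = report_crew.kickoff()

# Aggregate token usage for the whole run (prompt + completion tokens)
print(report_crew.usage_metrics)
```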
2. **Clear Task Descriptions**: Provide detailed descriptions for each task to guide the agents effectively.
3. **Appropriate Agent Selection**: Match agents' skills and roles to the requirements of each task.
4. **Use Context**: Leverage the context from previous tasks to inform subsequent ones, as shown in the sketch after this list.
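A minimal sketch of passing context between tasks; it assumes the `researcher` and `writer` agents from earlier examples and uses the Task `context` parameter:

```python
from crewai import Task

research_task = Task(
    description='Research the latest AI advancements.',
    expected_output='A list of key findings.',
    agent=researcher
)

write_task = Task(
    description='Write a blog post based on the research findings.',
    expected_output='A three-paragraph blog post.',
    agent=writer,
    context=[research_task]  # the output of research_task informs this task
)
```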
This updated documentation accurately reflects the latest changes in the codebase and clearly describes how to leverage the new features and configurations. The content is kept simple and direct to ensure easy understanding.