Updating docs

This commit is contained in:
João Moura
2024-08-11 15:04:45 -03:00
parent 7b63b6f485
commit a17fa70b1b
20 changed files with 153 additions and 118 deletions

View File

@@ -22,11 +22,13 @@ coding_agent = Agent(
)
```
**Note**: The `allow_code_execution` parameter defaults to `False`.
## Important Considerations
1. **Model Selection**: It is strongly recommended to use more capable models like Claude 3.5 Sonnet and GPT-4 when enabling code execution. These models have a better understanding of programming concepts and are more likely to generate correct and efficient code.
2. **Error Handling**: The code execution feature includes error handling. If executed code raises an exception, the agent will receive the error message and can attempt to correct the code or provide alternative solutions.
2. **Error Handling**: The code execution feature includes error handling. If executed code raises an exception, the agent will receive the error message and can attempt to correct the code or provide alternative solutions. The `max_retry_limit` parameter, which defaults to 2, controls the maximum number of retries for a task (see the sketch after this list).
3. **Dependencies**: To use the code execution feature, you need to install the `crewai_tools` package. If not installed, the agent will log an info message: "Coding tools not available. Install crewai_tools."
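The considerations above can be combined in a short sketch. The role, goal, and backstory strings are illustrative placeholders, and it assumes `crewai` and `crewai_tools` are installed:
```python
from crewai import Agent

# Hypothetical agent configured for code execution (illustrative values)
analysis_agent = Agent(
    role="Senior Python Developer",
    goal="Write and run Python code to answer data questions",
    backstory="An experienced developer who validates results by executing code.",
    allow_code_execution=True,  # defaults to False
    max_retry_limit=2,          # maximum retries when executed code raises an error (default: 2)
    verbose=True,
)
```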
@@ -73,4 +75,4 @@ result = analysis_crew.kickoff()
print(result)
```
In this example, the `coding_agent` can write and execute Python code to perform data analysis tasks.

View File

@@ -7,9 +7,10 @@ description: Learn how to use conditional tasks in a crewAI kickoff
Conditional Tasks in crewAI allow for dynamic workflow adaptation based on the outcomes of previous tasks. This powerful feature enables crews to make decisions and execute tasks selectively, enhancing the flexibility and efficiency of your AI-driven processes.
## Example Usage
```python
from typing import List
from pydantic import BaseModel
from crewai import Agent, Crew
from crewai.tasks.conditional_task import ConditionalTask
@@ -17,11 +18,10 @@ from crewai.tasks.task_output import TaskOutput
from crewai.task import Task
from crewai_tools import SerperDevTool
# Define a condition function for the conditional task
# If False, the task will be skipped; if True, the task will be executed
def is_data_missing(output: TaskOutput) -> bool:
return len(output.pydantic.events) < 10: # this will skip this task
return len(output.pydantic.events) < 10 # this will skip this task
# Define the agents
data_fetcher_agent = Agent(
@@ -46,11 +46,9 @@ summary_generator_agent = Agent(
verbose=True,
)
class EventOutput(BaseModel):
events: List[str]
task1 = Task(
description="Fetch data about events in San Francisco using Serper tool",
expected_output="List of 10 things to do in SF this week",
@@ -64,7 +62,7 @@ conditional_task = ConditionalTask(
fetch more events using Serper tool so that
we have a total of 10 events in SF this week.
""",
expected_output="List of 10 Things to do in SF this week ",
expected_output="List of 10 Things to do in SF this week",
condition=is_data_missing,
agent=data_processor_agent,
)
@@ -80,8 +78,10 @@ crew = Crew(
agents=[data_fetcher_agent, data_processor_agent, summary_generator_agent],
tasks=[task1, conditional_task, task3],
verbose=True,
planning=True # Enable planning feature
)
# Run the crew
result = crew.kickoff()
print("results", result)
```

View File

@@ -1,6 +1,6 @@
---
title: Forcing Tool Output as Result
description: Learn how to force tool output as the result in of an Agent's task in crewAI.
description: Learn how to force tool output as the result of an Agent's task in CrewAI.
---
## Introduction
@@ -13,19 +13,20 @@ Here's an example of how to force the tool output as the result of an agent's ta
```python
# ...
from crewai.agent import Agent
# Define a custom tool that returns the result as the answer
coding_agent =Agent(
coding_agent = Agent(
role="Data Scientist",
goal="Product amazing reports on AI",
goal="Produce amazing reports on AI",
backstory="You work with data and AI",
tools=[MyCustomTool(result_as_answer=True)],
)
# ...
```
### Workflow in Action
## Workflow in Action
1. **Task Execution**: The agent executes the task using the tool provided.
2. **Tool Output**: The tool generates the output, which is captured as the task result.
3. **Agent Interaction**: The agent my reflect and take learnings from the tool but the output is not modified.
3. **Agent Interaction**: The agent may reflect and take learnings from the tool but the output is not modified.
4. **Result Return**: The tool output is returned as the task result without any modifications.
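As a concrete illustration of this workflow, here is a minimal sketch of a custom tool used with `result_as_answer=True`. The tool class, its return value, and the surrounding task are illustrative assumptions rather than the exact `MyCustomTool` referenced above, and it assumes the `crewai_tools` package is installed:
```python
from crewai import Agent, Crew, Task
from crewai_tools import BaseTool

class WordCountTool(BaseTool):
    """Hypothetical tool that counts words in a text."""
    name: str = "Word Counter"
    description: str = "Counts the number of words in the provided text."

    def _run(self, text: str) -> str:
        return f"The text contains {len(text.split())} words."

# Because result_as_answer=True, the tool's raw output becomes the task result
counter_agent = Agent(
    role="Analyst",
    goal="Report word counts",
    backstory="You analyze text precisely.",
    tools=[WordCountTool(result_as_answer=True)],
)

count_task = Task(
    description="Count the words in the phrase 'forcing tool output as result'.",
    expected_output="A word count statement",
    agent=counter_agent,
)

crew = Crew(agents=[counter_agent], tasks=[count_task])
print(crew.kickoff())
```
Here the string returned by `_run` is passed through as the final task output without the agent rewriting it.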

View File

@@ -56,6 +56,7 @@ project_crew = Crew(
process=Process.hierarchical, # Specifies the hierarchical management approach
memory=True, # Enable memory usage for enhanced task execution
manager_agent=None, # Optional: explicitly set a specific agent as manager instead of the manager_llm
planning=True, # Enable planning feature for pre-execution strategy
)
```

View File

@@ -83,6 +83,7 @@ crew = Crew(
tasks=[task1, task2],
verbose=True,
memory=True,
planning=True # Enable planning feature for the crew
)
# Get your crew to work!

View File

@@ -9,6 +9,21 @@ CrewAI provides the ability to kickoff a crew asynchronously, allowing you to st
## Asynchronous Crew Execution
To kick off a crew asynchronously, use the `kickoff_async()` method. This method initiates the crew execution in a separate thread, allowing the main thread to continue executing other tasks.
### Method Signature
```python
def kickoff_async(self, inputs: dict) -> CrewOutput:
```
### Parameters
- `inputs` (dict): A dictionary containing the input data required for the tasks.
### Returns
- `CrewOutput`: An object representing the result of the crew execution.
## Example
Here's an example of how to kick off a crew asynchronously:
```python
@@ -34,7 +49,6 @@ analysis_crew = Crew(
tasks=[data_analysis_task]
)
# Execute the crew
# Execute the crew asynchronously
result = analysis_crew.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
```

View File

@@ -9,7 +9,7 @@ description: Comprehensive guide on integrating CrewAI with various Large Langua
By default, CrewAI uses OpenAI's GPT-4o model (specifically, the model specified by the OPENAI_MODEL_NAME environment variable, defaulting to "gpt-4o") for language processing. You can configure your agents to use a different model or API as described in this guide.
By default, CrewAI uses OpenAI's GPT-4 model (specifically, the model specified by the OPENAI_MODEL_NAME environment variable, defaulting to "gpt-4") for language processing. You can configure your agents to use a different model or API as described in this guide.
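For instance, the default can be overridden by setting the environment variable before any agents are constructed — a minimal sketch, with the model name chosen purely as an illustration:
```python
import os

# Must be set before agents are created so CrewAI picks it up
os.environ["OPENAI_MODEL_NAME"] = "gpt-4-turbo"  # illustrative alternative model
```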
CrewAI provides extensive versatility in integrating with various Language Models (LLMs), from local options through Ollama, such as Llama and Mixtral, to cloud-based solutions like Azure. Its compatibility extends to all [LangChain LLM components](https://python.langchain.com/v0.2/docs/integrations/llms/), offering a wide range of integration possibilities for customized AI applications.
The platform supports connections to an array of Generative AI models, including:
@@ -37,6 +37,7 @@ example_agent = Agent(
verbose=True
)
```
## Ollama Local Integration
Ollama is preferred for local LLM integration, offering customization and privacy benefits. To integrate Ollama with CrewAI, you will need the `langchain-ollama` package. You can then set the following environment variables to connect to your Ollama instance running locally on port 11434.
@@ -47,8 +48,8 @@ os.environ[OPENAI_API_KEY]='' # No API Key required for Ollama
```
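As an alternative to the environment-variable approach, a local model can also be passed to an agent explicitly. The sketch below assumes the `langchain-ollama` package mentioned above is installed and that Ollama is serving on its default port; the agent fields are illustrative:
```python
from crewai import Agent
from langchain_ollama import ChatOllama

ollama_llm = ChatOllama(
    model="llama3.1",                   # a model previously pulled with Ollama
    base_url="http://localhost:11434",  # default local Ollama endpoint
)

local_agent = Agent(
    role="Local Researcher",
    goal="Answer questions using a locally hosted model",
    backstory="Runs entirely on local hardware.",
    llm=ollama_llm,
)
```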
## Ollama Integration Step by Step (ex. for using Llama 3.1 8B locally)
1. [Download and install Ollama](https://ollama.com/download).
2. After setting up Ollama, pull the Llama 3.1 8B model by typing the following command into your terminal: ```ollama run llama3.1```.
3. Llama3.1 should now be served locally on `http://localhost:11434`
```
from crewai import Agent, Task, Crew
@@ -165,7 +166,7 @@ llm = ChatCohere()
For Azure OpenAI API integration, set the following environment variables:
```sh
os.environ[AZURE_OPENAI_DEPLOYMENT] = "You deployment"
os.environ[AZURE_OPENAI_DEPLOYMENT] = "Your deployment"
os.environ["OPENAI_API_VERSION"] = "2023-12-01-preview"
os.environ["AZURE_OPENAI_ENDPOINT"] = "Your Endpoint"
os.environ["AZURE_OPENAI_API_KEY"] = "<Your API Key>"
@@ -191,5 +192,6 @@ azure_agent = Agent(
llm=azure_llm
)
```
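One way to construct the `azure_llm` object referenced above from these environment variables is sketched below, assuming the `langchain-openai` package is installed; treat it as an illustration rather than the only supported setup:
```python
import os
from langchain_openai import AzureChatOpenAI

azure_llm = AzureChatOpenAI(
    azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version=os.environ["OPENAI_API_VERSION"],
)
```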
## Conclusion
Integrating CrewAI with different LLMs expands the framework's versatility, allowing for customized, efficient AI solutions across various domains and platforms.

View File

@@ -11,14 +11,13 @@ You must run `crew.kickoff()` before you can replay a task. Currently, only the
Here's an example of how to replay from a task:
### Replaying from specific task Using the CLI
### Replaying from Specific Task Using the CLI
To use the replay feature, follow these steps:
1. Open your terminal or command prompt.
2. Navigate to the directory where your CrewAI project is located.
3. Run the following command:
To view latest kickoff task_ids use:
To view the latest kickoff task_ids use:
```shell
crewai log-tasks-outputs
```
@@ -29,21 +28,25 @@ crewai replay -t <task_id>
```
### Replaying from a task Programmatically
### Replaying from a Task Programmatically
To replay from a task programmatically, use the following steps:
1. Specify the task_id and input parameters for the replay process.
2. Execute the replay command within a try-except block to handle potential errors.
```python
def replay():
"""
Replay the crew execution from a specific task.
"""
task_id = '<task_id>'
inputs = {"topic": "CrewAI Training"} # this is optional, you can pass in the inputs you want to replay otherwise uses the previous kickoffs inputs
inputs = {"topic": "CrewAI Training"} # This is optional; you can pass in the inputs you want to replay; otherwise, it uses the previous kickoff's inputs.
try:
YourCrewName_Crew().crew().replay(task_id=task_id, inputs=inputs)
except subprocess.CalledProcessError as e:
raise Exception(f"An error occurred while replaying the crew: {e}")
except Exception as e:
raise Exception(f"An error occurred while replaying the crew: {e}")
raise Exception(f"An unexpected error occurred: {e}")
```

View File

@@ -18,7 +18,7 @@ The sequential process ensures tasks are executed one after the other, following
To use the sequential process, assemble your crew and define tasks in the order they need to be executed.
```python
from crewai import Crew, Process, Agent, Task
from crewai import Crew, Process, Agent, Task, TaskOutput, CrewOutput
# Define your agents
researcher = Agent(
@@ -37,6 +37,7 @@ writer = Agent(
backstory='A skilled writer with a talent for crafting compelling narratives'
)
# Define your tasks
research_task = Task(description='Gather relevant data...', agent=researcher, expected_output='Raw Data')
analysis_task = Task(description='Analyze the data...', agent=analyst, expected_output='Data Insights')
writing_task = Task(description='Compose the report...', agent=writer, expected_output='Final Report')
@@ -50,6 +51,10 @@ report_crew = Crew(
# Execute the crew
result = report_crew.kickoff()
# Accessing the type safe output
task_output: TaskOutput = result.tasks[0].output
crew_output: CrewOutput = result.output
```
### Workflow in Action
@@ -82,4 +87,4 @@ CrewAI tracks token usage across all tasks and agents. You can access these metr
1. **Order Matters**: Arrange tasks in a logical sequence where each task builds upon the previous one.
2. **Clear Task Descriptions**: Provide detailed descriptions for each task to guide the agents effectively.
3. **Appropriate Agent Selection**: Match agents' skills and roles to the requirements of each task.
4. **Use Context**: Leverage the context from previous tasks to inform subsequent ones
4. **Use Context**: Leverage the context from previous tasks to inform subsequent ones (see the sketch below).
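For the last point, context can be passed explicitly when a task is defined. A minimal sketch, reusing the tasks from the example above and assuming the `context` parameter of `Task`:
```python
writing_task = Task(
    description='Compose the report...',
    agent=writer,
    expected_output='Final Report',
    context=[research_task, analysis_task],  # outputs of these tasks are fed to the writer
)
```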