---
title: crewAI Crews
description: Understanding and utilizing crews in the crewAI framework with comprehensive attributes and functionalities.
---
## What is a Crew?
A crew in crewAI represents a collaborative group of agents working together to achieve a set of tasks. Each crew defines the strategy for task execution, agent collaboration, and the overall workflow.
## Crew Attributes
| Attribute | Parameter | Description |
|---|---|---|
| Tasks | `tasks` | A list of tasks assigned to the crew. |
| Agents | `agents` | A list of agents that are part of the crew. |
| Process (optional) | `process` | The process flow (e.g., sequential, hierarchical) the crew follows. |
| Verbose (optional) | `verbose` | The verbosity level for logging during execution. |
| Manager LLM (optional) | `manager_llm` | The language model used by the manager agent in a hierarchical process. Required when using a hierarchical process. |
| Function Calling LLM (optional) | `function_calling_llm` | If passed, the crew will use this LLM to do function calling for tools for all agents in the crew. Each agent can have its own LLM, which overrides the crew's LLM for function calling. |
| Config (optional) | `config` | Optional configuration settings for the crew, in `Json` or `Dict[str, Any]` format. |
| Max RPM (optional) | `max_rpm` | Maximum requests per minute the crew adheres to during execution. |
| Language (optional) | `language` | Language used for the crew; defaults to English. |
| Language File (optional) | `language_file` | Path to the language file to be used for the crew. |
| Memory (optional) | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). |
| Cache (optional) | `cache` | Specifies whether to use a cache for storing the results of tool executions. |
| Embedder (optional) | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. |
| Full Output (optional) | `full_output` | Whether the crew should return the full output with all task outputs or just the final output. |
| Step Callback (optional) | `step_callback` | A function called after each step of every agent. Useful for logging the agent's actions or performing other operations; it does not override the agent-specific `step_callback`. |
| Task Callback (optional) | `task_callback` | A function called after the completion of each task. Useful for monitoring or additional operations post-task execution. |
| Share Crew (optional) | `share_crew` | Whether to share the complete crew information and execution with the crewAI team to make the library better and allow us to train models. |
| Output Log File (optional) | `output_log_file` | Whether to write a file with the complete crew output and execution. Set it to `True` to write `logs.txt` in the current directory, or pass a string with the full path and name of the file. |
| Manager Agent (optional) | `manager_agent` | Sets a custom agent to be used as the manager. |
| Manager Callbacks (optional) | `manager_callbacks` | A list of callback handlers to be executed by the manager agent when a hierarchical process is used. |
| Prompt File (optional) | `prompt_file` | Path to the prompt JSON file to be used for the crew. |
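The `step_callback` and `task_callback` attributes above accept plain Python callables. A minimal sketch of what such callbacks might look like; the exact objects crewAI passes into them are assumptions here, so the functions only record a string form of whatever arrives:

```python
# Hypothetical callback sketches. The precise payloads crewAI hands to
# step_callback and task_callback are assumptions; we just record them.
execution_log = []

def log_step(step_output):
    """Called after each agent step when passed as Crew(step_callback=...)."""
    execution_log.append(("step", str(step_output)))

def log_task(task_output):
    """Called after each completed task when passed as Crew(task_callback=...)."""
    execution_log.append(("task", str(task_output)))

# They would be wired up roughly like this:
# my_crew = Crew(agents=[...], tasks=[...],
#                step_callback=log_step, task_callback=log_task)
```

Because both callbacks only append to a list, they stay cheap and never interfere with the crew's own execution flow.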
!!! note "Crew Max RPM"
    The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits, and it will override individual agents' `max_rpm` settings if you set it.
## Creating a Crew
When assembling a crew, you combine agents with complementary roles and tools, assign tasks, and select a process that dictates their execution order and interaction.
### Example: Assembling a Crew

```python
from crewai import Crew, Agent, Task, Process
from langchain_community.tools import DuckDuckGoSearchRun

# Define agents with specific roles and tools
researcher = Agent(
    role='Senior Research Analyst',
    goal='Discover innovative AI technologies',
    backstory="""You're a senior research analyst at a large company.
    You're responsible for analyzing data and providing insights
    to the business.
    You're currently working on a project to analyze the
    trends and innovations in the space of artificial intelligence.""",
    tools=[DuckDuckGoSearchRun()]
)

writer = Agent(
    role='Content Writer',
    goal='Write engaging articles on AI discoveries',
    backstory="""You're a senior writer at a large company.
    You're responsible for creating content for the business.
    You're currently working on a project to write about trends
    and innovations in the space of AI for your next meeting.""",
    verbose=True
)

# Create tasks for the agents
research_task = Task(
    description='Identify breakthrough AI technologies',
    agent=researcher,
    expected_output='A bullet list summary of the top 5 most important AI news'
)

write_article_task = Task(
    description='Draft an article on the latest AI technologies',
    agent=writer,
    expected_output='3 paragraph blog post on the latest AI technologies'
)

# Assemble the crew with a sequential process
my_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_article_task],
    process=Process.sequential,
    full_output=True,
    verbose=True,
)
```
## Memory Utilization
Crews can utilize memory (short-term, long-term, and entity memory) to enhance their execution and learning over time. This feature allows crews to store and recall execution memories, aiding in decision-making and task execution strategies.
## Cache Utilization
Caches can be employed to store the results of tools' execution, making the process more efficient by reducing the need to re-execute identical tasks.
## Crew Usage Metrics

After the crew execution, you can access the `usage_metrics` attribute to view the language model (LLM) usage metrics for all tasks executed by the crew. This provides insights into operational efficiency and areas for improvement.

```python
# Access the crew's usage metrics
crew = Crew(agents=[agent1, agent2], tasks=[task1, task2])
crew.kickoff()
print(crew.usage_metrics)
```
## Crew Execution Process

- **Sequential Process**: Tasks are executed one after another, allowing for a linear flow of work.
- **Hierarchical Process**: A manager agent coordinates the crew, delegating tasks and validating outcomes before proceeding. **Note**: A `manager_llm` or `manager_agent` is required for this process, and it's essential for validating the process flow.
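As a rough sketch of the hierarchical configuration, reusing the researcher/writer crew assembled earlier (the `langchain_openai` import and model name here are illustrative assumptions, not prescribed by this document; substitute your own LLM):

```python
# Hedged sketch: the researcher/writer crew from above, run hierarchically.
# The ChatOpenAI import and model name are illustrative assumptions.
from langchain_openai import ChatOpenAI

hierarchical_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_article_task],
    process=Process.hierarchical,
    manager_llm=ChatOpenAI(model="gpt-4"),  # required for hierarchical runs
)
```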
## Kicking Off a Crew

Once your crew is assembled, initiate the workflow with the `kickoff()` method. This starts the execution process according to the defined process flow.

```python
# Start the crew's task execution
result = my_crew.kickoff()
print(result)
```
## Different Ways to Kick Off a Crew

Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process: `kickoff()`, `kickoff_for_each()`, `kickoff_async()`, and `kickoff_for_each_async()`.

- `kickoff()`: Starts the execution process according to the defined process flow.
- `kickoff_for_each()`: Executes the crew once for each item in a list of inputs.
- `kickoff_async()`: Initiates the workflow asynchronously.
- `kickoff_for_each_async()`: Executes the crew once for each item in a list of inputs, in an asynchronous manner.
```python
# Start the crew's task execution
result = my_crew.kickoff()
print(result)

# Example of using kickoff_for_each
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
results = my_crew.kickoff_for_each(inputs=inputs_array)
for result in results:
    print(result)

# Example of using kickoff_async
inputs = {'topic': 'AI in healthcare'}
async_result = my_crew.kickoff_async(inputs=inputs)
print(async_result)

# Example of using kickoff_for_each_async
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
async_results = my_crew.kickoff_for_each_async(inputs=inputs_array)
for async_result in async_results:
    print(async_result)
```
These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs.
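The async variants are typically awaited inside an event loop. A minimal, self-contained sketch of that awaiting pattern, using a stand-in coroutine in place of a real crew (assuming `kickoff_async` behaves like an awaitable; replace the stand-in with `my_crew.kickoff_async` in real use):

```python
import asyncio

# Stand-in coroutine mimicking the shape of my_crew.kickoff_async(inputs=...);
# a real crew would do LLM work here instead of yielding immediately.
async def fake_kickoff_async(inputs):
    await asyncio.sleep(0)
    return f"report on {inputs['topic']}"

async def main():
    # Run two topic-specific kickoffs concurrently, similar in spirit to
    # what kickoff_for_each_async does for a list of inputs.
    return await asyncio.gather(
        fake_kickoff_async({'topic': 'AI in healthcare'}),
        fake_kickoff_async({'topic': 'AI in finance'}),
    )

results = asyncio.run(main())
print(results)
```

`asyncio.gather` preserves input order, so the results line up with the inputs list regardless of which coroutine finishes first.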
## Replaying from a Specific Task

You can now replay from a specific task using the CLI command `replay`.

The replay feature in CrewAI allows you to replay from a specific task using the command-line interface (CLI). By running the command `crewai replay -t <task_id>`, you can specify the `task_id` for the replay process.

Kickoffs now save the task outputs returned by the latest kickoff locally, so that you can replay from them.
### Replaying from a Specific Task Using the CLI

To use the replay feature, follow these steps:

1. Open your terminal or command prompt.
2. Navigate to the directory where your CrewAI project is located.
3. Run the following commands.

To view the latest kickoff task IDs, use:

```shell
crewai log-tasks-outputs
```

Then replay from a specific task with:

```shell
crewai replay -t <task_id>
```

These commands let you replay from your latest kickoff tasks while still retaining context from previously executed tasks.