mirror of https://github.com/crewAIInc/crewAI.git (synced 2026-04-07 19:48:13 +00:00)
---
title: Flows
description: Learn how to create and manage AI workflows using CrewAI Flows.
icon: arrow-progress
mode: "wide"
---
## Overview

CrewAI Flows streamline the creation and management of AI workflows. Flows let developers combine and coordinate coding tasks and Crews efficiently, providing a robust framework for building sophisticated AI automations.

Flows allow you to create structured, event-driven workflows. They provide a seamless way to connect multiple tasks, manage state, and control the flow of execution in your AI applications. With Flows, you can design and implement multi-step processes that leverage the full potential of CrewAI's capabilities.

1. **Simplified Workflow Creation**: Easily chain together multiple Crews and tasks to create complex AI workflows.

2. **State Management**: Flows make it easy to manage and share state between different tasks in your workflow.

3. **Event-Driven Architecture**: Built on an event-driven model, allowing for dynamic and responsive workflows.

4. **Flexible Control Flow**: Implement conditional logic, loops, and branching within your workflows.
## Getting Started

Let's create a simple Flow that uses OpenAI to generate a random city in one task and then uses that city to generate a fun fact in another task.

```python Code
from crewai.flow.flow import Flow, listen, start
from dotenv import load_dotenv
from litellm import completion

# Load OPENAI_API_KEY from your .env file
load_dotenv()


class ExampleFlow(Flow):
    model = "gpt-4o-mini"

    @start()
    def generate_city(self):
        print("Starting flow")
        # Each flow state automatically gets a unique ID
        print(f"Flow State ID: {self.state['id']}")

        response = completion(
            model=self.model,
            messages=[
                {
                    "role": "user",
                    "content": "Return the name of a random city in the world.",
                },
            ],
        )

        random_city = response["choices"][0]["message"]["content"]
        # Store the city in our state
        self.state["city"] = random_city
        print(f"Random City: {random_city}")

        return random_city

    @listen(generate_city)
    def generate_fun_fact(self, random_city):
        response = completion(
            model=self.model,
            messages=[
                {
                    "role": "user",
                    "content": f"Tell me a fun fact about {random_city}",
                },
            ],
        )

        fun_fact = response["choices"][0]["message"]["content"]
        # Store the fun fact in our state
        self.state["fun_fact"] = fun_fact
        return fun_fact


flow = ExampleFlow()
flow.plot()
result = flow.kickoff()

print(f"Generated fun fact: {result}")
```

In the above example, we created a simple Flow that generates a random city using OpenAI and then generates a fun fact about that city. The Flow consists of two tasks: `generate_city` and `generate_fun_fact`. The `generate_city` task is the starting point of the Flow, and the `generate_fun_fact` task listens for its output.

Each Flow instance automatically receives a unique identifier (UUID) in its state, which helps track and manage flow executions. The state can also store additional data (like the generated city and fun fact) that persists throughout the flow's execution.

When you run the Flow, it will:

1. Generate a unique ID for the flow state
2. Generate a random city and store it in the state
3. Generate a fun fact about that city and store it in the state
4. Print the results to the console

The state's unique ID and stored data can be useful for tracking flow executions and maintaining context between tasks.

**Note:** Ensure you have set up your `.env` file to store your `OPENAI_API_KEY`. This key is necessary for authenticating requests to the OpenAI API.
### @start()

The `@start()` decorator marks entry points for a Flow. You can:

- Declare multiple unconditional starts: `@start()`
- Gate a start on a prior method or router label: `@start("method_or_label")`
- Provide a callable condition to control when a start should fire

All satisfied `@start()` methods execute (often in parallel) when the Flow begins or resumes.
### @listen()

The `@listen()` decorator marks a method as a listener for the output of another task in the Flow. The decorated method executes when the specified task emits an output, and it can access that output as an argument.

#### Usage

The `@listen()` decorator can be used in several ways:

1. **Listening to a Method by Name**: Pass the name of the method you want to listen to as a string. When that method completes, the listener method is triggered.

```python Code
@listen("generate_city")
def generate_fun_fact(self, random_city):
    # Implementation
```

2. **Listening to a Method Directly**: Pass the method itself. When that method completes, the listener method is triggered.

```python Code
@listen(generate_city)
def generate_fun_fact(self, random_city):
    # Implementation
```
### Flow Output

Accessing and handling the output of a Flow is essential for integrating your AI workflows into larger applications or systems. CrewAI Flows provide straightforward mechanisms to retrieve the final output, access intermediate results, and manage the overall state of your Flow.

#### Retrieving the Final Output

When you run a Flow, the final output is determined by the last method that completes. The `kickoff()` method returns the output of this final method.

Here's how you can access the final output:

<CodeGroup>
```python Code
from crewai.flow.flow import Flow, listen, start

class OutputExampleFlow(Flow):
    @start()
    def first_method(self):
        return "Output from first_method"

    @listen(first_method)
    def second_method(self, first_output):
        return f"Second method received: {first_output}"


flow = OutputExampleFlow()
flow.plot("my_flow_plot")
final_output = flow.kickoff()

print("---- Final Output ----")
print(final_output)
```

```text Output
---- Final Output ----
Second method received: Output from first_method
```

</CodeGroup>


In this example, `second_method` is the last method to complete, so its output is the final output of the Flow. The `kickoff()` method returns this final output, which is then printed to the console. The `plot()` method generates an HTML file that helps you visualize the flow.
#### Accessing and Updating State

In addition to retrieving the final output, you can also access and update the state within your Flow. The state can be used to store and share data between different methods in the Flow. After the Flow has run, you can access the state to retrieve any information that was added or updated during execution.

Here's an example of how to update and access the state:

<CodeGroup>

```python Code
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel

class ExampleState(BaseModel):
    counter: int = 0
    message: str = ""

class StateExampleFlow(Flow[ExampleState]):

    @start()
    def first_method(self):
        self.state.message = "Hello from first_method"
        self.state.counter += 1

    @listen(first_method)
    def second_method(self):
        self.state.message += " - updated by second_method"
        self.state.counter += 1
        return self.state.message

flow = StateExampleFlow()
flow.plot("my_flow_plot")
final_output = flow.kickoff()
print(f"Final Output: {final_output}")
print("Final State:")
print(flow.state)
```

```text Output
Final Output: Hello from first_method - updated by second_method
Final State:
counter=2 message='Hello from first_method - updated by second_method'
```

</CodeGroup>



In this example, the state is updated by both `first_method` and `second_method`. After the Flow has run, you can access the final state to see the updates made by these methods.

By ensuring that the final method's output is returned and providing access to the state, CrewAI Flows make it easy to integrate the results of your AI workflows into larger applications or systems, while also maintaining and accessing the state throughout the Flow's execution.
## Flow State Management

Managing state effectively is crucial for building reliable and maintainable AI workflows. CrewAI Flows provide robust mechanisms for both unstructured and structured state management, allowing developers to choose the approach that best fits their application's needs.
### Unstructured State Management

In unstructured state management, all state is stored in the `state` attribute of the `Flow` class. This approach offers flexibility, enabling developers to add or modify state attributes on the fly without defining a strict schema. Even with unstructured states, CrewAI Flows automatically generates and maintains a unique identifier (UUID) for each state instance.

```python Code
from crewai.flow.flow import Flow, listen, start

class UnstructuredExampleFlow(Flow):

    @start()
    def first_method(self):
        # The state automatically includes an 'id' field
        print(f"State ID: {self.state['id']}")
        self.state['counter'] = 0
        self.state['message'] = "Hello from unstructured flow"

    @listen(first_method)
    def second_method(self):
        self.state['counter'] += 1
        self.state['message'] += " - updated"

    @listen(second_method)
    def third_method(self):
        self.state['counter'] += 1
        self.state['message'] += " - updated again"

        print(f"State after third_method: {self.state}")


flow = UnstructuredExampleFlow()
flow.plot("my_flow_plot")
flow.kickoff()
```



**Note:** The `id` field is automatically generated and preserved throughout the flow's execution. You don't need to manage or set it manually, and it is maintained even when updating the state with new data.

**Key Points:**

- **Flexibility:** You can dynamically add attributes to `self.state` without predefined constraints.
- **Simplicity:** Ideal for straightforward workflows where state structure is minimal or varies significantly.
### Structured State Management

Structured state management leverages predefined schemas to ensure consistency and type safety across the workflow. By using models like Pydantic's `BaseModel`, developers can define the exact shape of the state, enabling better validation and auto-completion in development environments.

Each state in CrewAI Flows automatically receives a unique identifier (UUID) to help track and manage state instances. This ID is automatically generated and managed by the Flow system.

```python Code
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel


class ExampleState(BaseModel):
    # Note: an 'id' field is automatically added to all states
    counter: int = 0
    message: str = ""


class StructuredExampleFlow(Flow[ExampleState]):

    @start()
    def first_method(self):
        # Access the auto-generated ID if needed
        print(f"State ID: {self.state.id}")
        self.state.message = "Hello from structured flow"

    @listen(first_method)
    def second_method(self):
        self.state.counter += 1
        self.state.message += " - updated"

    @listen(second_method)
    def third_method(self):
        self.state.counter += 1
        self.state.message += " - updated again"

        print(f"State after third_method: {self.state}")


flow = StructuredExampleFlow()
flow.kickoff()
```



**Key Points:**

- **Defined Schema:** `ExampleState` clearly outlines the state structure, enhancing code readability and maintainability.
- **Type Safety:** Leveraging Pydantic ensures that state attributes adhere to the specified types, reducing runtime errors.
- **Auto-Completion:** IDEs can provide better auto-completion and error checking based on the defined state model.

### Choosing Between Unstructured and Structured State Management

- **Use Unstructured State Management when:**
  - The workflow's state is simple or highly dynamic.
  - Flexibility is prioritized over strict state definitions.
  - Rapid prototyping is required without the overhead of defining schemas.

- **Use Structured State Management when:**
  - The workflow requires a well-defined and consistent state structure.
  - Type safety and validation are important for your application's reliability.
  - You want to leverage IDE features like auto-completion and type checking for a better developer experience.

By providing both unstructured and structured state management options, CrewAI Flows empower developers to build AI workflows that are both flexible and robust, catering to a wide range of application requirements.
## Flow Persistence
|
|
|
|
The @persist decorator enables automatic state persistence in CrewAI Flows, allowing you to maintain flow state across restarts or different workflow executions. This decorator can be applied at either the class level or method level, providing flexibility in how you manage state persistence.
|
|
|
|
### Class-Level Persistence
|
|
|
|
When applied at the class level, the @persist decorator automatically persists all flow method states:
|
|
|
|
```python
|
|
@persist # Using SQLiteFlowPersistence by default
|
|
class MyFlow(Flow[MyState]):
|
|
@start()
|
|
def initialize_flow(self):
|
|
# This method will automatically have its state persisted
|
|
self.state.counter = 1
|
|
print("Initialized flow. State ID:", self.state.id)
|
|
|
|
@listen(initialize_flow)
|
|
def next_step(self):
|
|
# The state (including self.state.id) is automatically reloaded
|
|
self.state.counter += 1
|
|
print("Flow state is persisted. Counter:", self.state.counter)
|
|
```

### Method-Level Persistence

For more granular control, you can apply `@persist` to specific methods:

```python
class AnotherFlow(Flow[dict]):
    @persist  # Persists only this method's state
    @start()
    def begin(self):
        if "runs" not in self.state:
            self.state["runs"] = 0
        self.state["runs"] += 1
        print("Method-level persisted runs:", self.state["runs"])
```

### How It Works

1. **Unique State Identification**
   - Each flow state automatically receives a unique UUID
   - The ID is preserved across state updates and method calls
   - Supports both structured (Pydantic BaseModel) and unstructured (dictionary) states

2. **Default SQLite Backend**
   - `SQLiteFlowPersistence` is the default storage backend
   - States are automatically saved to a local SQLite database
   - Robust error handling ensures clear messages if database operations fail

3. **Error Handling**
   - Comprehensive error messages for database operations
   - Automatic state validation during save and load
   - Clear feedback when persistence operations encounter issues

### Important Considerations

- **State Types**: Both structured (Pydantic BaseModel) and unstructured (dictionary) states are supported
- **Automatic ID**: The `id` field is automatically added if not present
- **State Recovery**: Failed or restarted flows can automatically reload their previous state
- **Custom Implementation**: You can provide your own `FlowPersistence` implementation for specialized storage needs

### Technical Advantages

1. **Precise Control Through Low-Level Access**
   - Direct access to persistence operations for advanced use cases
   - Fine-grained control via method-level persistence decorators
   - Built-in state inspection and debugging capabilities
   - Full visibility into state changes and persistence operations

2. **Enhanced Reliability**
   - Automatic state recovery after system failures or restarts
   - Transaction-based state updates for data integrity
   - Comprehensive error handling with clear error messages
   - Robust validation during state save and load operations

3. **Extensible Architecture**
   - Customizable persistence backend through the `FlowPersistence` interface
   - Support for specialized storage solutions beyond SQLite
   - Compatible with both structured (Pydantic) and unstructured (dict) states
   - Seamless integration with existing CrewAI flow patterns

The persistence system's architecture emphasizes technical precision and customization options, allowing developers to maintain full control over state management while benefiting from built-in reliability features.

## Flow Control

### Conditional Logic: `or`

The `or_` function in Flows allows you to listen to multiple methods and trigger the listener method when any of the specified methods emits an output.

<CodeGroup>

```python Code
from crewai.flow.flow import Flow, listen, or_, start


class OrExampleFlow(Flow):

    @start()
    def start_method(self):
        return "Hello from the start method"

    @listen(start_method)
    def second_method(self):
        return "Hello from the second method"

    @listen(or_(start_method, second_method))
    def logger(self, result):
        print(f"Logger: {result}")


flow = OrExampleFlow()
flow.plot("my_flow_plot")
flow.kickoff()
```

```text Output
Logger: Hello from the start method
Logger: Hello from the second method
```

</CodeGroup>



When you run this Flow, the `logger` method is triggered by the output of either the `start_method` or the `second_method`, so it runs once for each method that emits.

### Conditional Logic: `and`

The `and_` function in Flows allows you to listen to multiple methods and trigger the listener method only when all of the specified methods emit an output.

<CodeGroup>

```python Code
from crewai.flow.flow import Flow, and_, listen, start


class AndExampleFlow(Flow):

    @start()
    def start_method(self):
        self.state["greeting"] = "Hello from the start method"

    @listen(start_method)
    def second_method(self):
        self.state["joke"] = "What do computers eat? Microchips."

    @listen(and_(start_method, second_method))
    def logger(self):
        print("---- Logger ----")
        print(self.state)


flow = AndExampleFlow()
flow.plot()
flow.kickoff()
```

```text Output
---- Logger ----
{'greeting': 'Hello from the start method', 'joke': 'What do computers eat? Microchips.'}
```

</CodeGroup>



When you run this Flow, the `logger` method is triggered only after both the `start_method` and the `second_method` have emitted an output.

### Router

The `@router()` decorator in Flows lets you define conditional routing logic based on the output of a method.
You can specify different routes depending on that output, controlling the flow of execution dynamically.

<CodeGroup>

```python Code
import random

from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel


class ExampleState(BaseModel):
    success_flag: bool = False


class RouterFlow(Flow[ExampleState]):

    @start()
    def start_method(self):
        print("Starting the structured flow")
        random_boolean = random.choice([True, False])
        self.state.success_flag = random_boolean

    @router(start_method)
    def second_method(self):
        if self.state.success_flag:
            return "success"
        else:
            return "failed"

    @listen("success")
    def third_method(self):
        print("Third method running")

    @listen("failed")
    def fourth_method(self):
        print("Fourth method running")


flow = RouterFlow()
flow.plot("my_flow_plot")
flow.kickoff()
```

```text Output
Starting the structured flow
Third method running
```

</CodeGroup>



In the above example, the `start_method` generates a random boolean value and stores it in the state.
The `second_method` uses the `@router()` decorator to define conditional routing logic based on that boolean:
if it is `True`, the method returns `"success"`; if it is `False`, it returns `"failed"`.
The `third_method` and `fourth_method` listen for the returned route label, so exactly one of them executes per run.

When you run this Flow, the output changes based on the random boolean value generated by the `start_method`: a `False` run prints "Fourth method running" instead of "Third method running".

### Human in the Loop (human feedback)

<Note>
The `@human_feedback` decorator requires **CrewAI version 1.8.0 or higher**.
</Note>

The `@human_feedback` decorator enables human-in-the-loop workflows by pausing flow execution to collect feedback from a human. This is useful for approval gates, quality review, and decision points that require human judgment.

```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult


class ReviewFlow(Flow):
    @start()
    @human_feedback(
        message="Do you approve this content?",
        emit=["approved", "rejected", "needs_revision"],
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
    )
    def generate_content(self):
        return "Content to be reviewed..."

    @listen("approved")
    def on_approval(self, result: HumanFeedbackResult):
        print(f"Approved! Feedback: {result.feedback}")

    @listen("rejected")
    def on_rejection(self, result: HumanFeedbackResult):
        print(f"Rejected. Reason: {result.feedback}")
```

When `emit` is specified, the human's free-form feedback is interpreted by an LLM and collapsed into one of the specified outcomes, which then triggers the corresponding `@listen` decorator.

You can also use `@human_feedback` without routing to simply collect feedback:

```python Code
@start()
@human_feedback(message="Any comments on this output?")
def my_method(self):
    return "Output for review"

@listen(my_method)
def next_step(self, result: HumanFeedbackResult):
    # Access feedback via result.feedback
    # Access original output via result.output
    pass
```

Access all feedback collected during a flow via `self.last_human_feedback` (most recent) or `self.human_feedback_history` (all feedback, as a list).
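
To sketch how you might consume these after a run, the snippet below uses a stand-in result object so it executes without an interactive session; the real `HumanFeedbackResult` comes from `crewai.flow.human_feedback` and exposes at least `feedback` and `output`, per the examples above.

```python
from dataclasses import dataclass


@dataclass
class FeedbackResultStandIn:
    """Stand-in mirroring the HumanFeedbackResult fields used above (illustration only)."""
    feedback: str  # what the human wrote
    output: str    # the original method output that was reviewed


# Pretend two rounds of feedback were collected during a flow run
history = [
    FeedbackResultStandIn(feedback="Looks good", output="Draft v1"),
    FeedbackResultStandIn(feedback="Tighten the intro", output="Draft v2"),
]

last = history[-1]  # analogous to self.last_human_feedback
print(last.feedback)                # Tighten the intro
print([r.output for r in history])  # ['Draft v1', 'Draft v2']
```

In a real flow you would read `self.human_feedback_history` after `kickoff()` returns, for example to log reviewer comments or feed them into a follow-up step.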

For a complete guide on human feedback in flows, including **async/non-blocking feedback** with custom providers (Slack, webhooks, etc.), see [Human Feedback in Flows](/en/learn/human-feedback-in-flows).

## Adding Agents to Flows

Agents can be seamlessly integrated into your flows, providing a lightweight alternative to full Crews when you need simpler, focused task execution. Here's an example of how to use an Agent within a flow to perform market research:

```python
import asyncio
from typing import Any, Dict, List

from crewai_tools import SerperDevTool
from pydantic import BaseModel, Field

from crewai.agent import Agent
from crewai.flow.flow import Flow, listen, start


# Define a structured output format
class MarketAnalysis(BaseModel):
    key_trends: List[str] = Field(description="List of identified market trends")
    market_size: str = Field(description="Estimated market size")
    competitors: List[str] = Field(description="Major competitors in the space")


# Define flow state
class MarketResearchState(BaseModel):
    product: str = ""
    analysis: MarketAnalysis | None = None


# Create a flow class
class MarketResearchFlow(Flow[MarketResearchState]):
    @start()
    def initialize_research(self) -> Dict[str, Any]:
        print(f"Starting market research for {self.state.product}")
        return {"product": self.state.product}

    @listen(initialize_research)
    async def analyze_market(self) -> Dict[str, Any]:
        # Create an Agent for market research
        analyst = Agent(
            role="Market Research Analyst",
            goal=f"Analyze the market for {self.state.product}",
            backstory="You are an experienced market analyst with expertise in "
            "identifying market trends and opportunities.",
            tools=[SerperDevTool()],
            verbose=True,
        )

        # Define the research query
        query = f"""
        Research the market for {self.state.product}. Include:
        1. Key market trends
        2. Market size
        3. Major competitors

        Format your response according to the specified structure.
        """

        # Execute the analysis with structured output format
        result = await analyst.kickoff_async(query, response_format=MarketAnalysis)
        if result.pydantic:
            print("result", result.pydantic)
        else:
            print("result", result)

        # Return the analysis to update the state
        return {"analysis": result.pydantic}

    @listen(analyze_market)
    def present_results(self, analysis) -> None:
        print("\nMarket Analysis Results")
        print("=====================")

        if isinstance(analysis, dict):
            # If we got a dict with an 'analysis' key, extract the actual analysis object
            market_analysis = analysis.get("analysis")
        else:
            market_analysis = analysis

        if market_analysis and isinstance(market_analysis, MarketAnalysis):
            print("\nKey Market Trends:")
            for trend in market_analysis.key_trends:
                print(f"- {trend}")

            print(f"\nMarket Size: {market_analysis.market_size}")

            print("\nMajor Competitors:")
            for competitor in market_analysis.competitors:
                print(f"- {competitor}")
        else:
            print("No structured analysis data available.")
            print("Raw analysis:", analysis)


# Usage example
async def run_flow():
    flow = MarketResearchFlow()
    flow.plot("MarketResearchFlowPlot")
    result = await flow.kickoff_async(inputs={"product": "AI-powered chatbots"})
    return result


# Run the flow
if __name__ == "__main__":
    asyncio.run(run_flow())
```



This example demonstrates several key features of using Agents in flows:

1. **Structured Output**: Using Pydantic models to define the expected output format (`MarketAnalysis`) ensures type safety and structured data throughout the flow.

2. **State Management**: The flow state (`MarketResearchState`) maintains context between steps and stores both inputs and outputs.

3. **Tool Integration**: Agents can use tools (like `SerperDevTool`) to enhance their capabilities.
## Adding Crews to Flows

Creating a flow with multiple crews in CrewAI is straightforward.

You can generate a new CrewAI project that includes all the scaffolding needed to create a flow with multiple crews by running the following command:

```bash
crewai create flow name_of_flow
```

This command will generate a new CrewAI project with the necessary folder structure. The generated project includes a prebuilt crew called `poem_crew` that is already working. You can use this crew as a template by copying, pasting, and editing it to create other crews.

### Folder Structure

After running the `crewai create flow name_of_flow` command, you will see a folder structure similar to the following:

| Directory/File         | Description                                                        |
| :--------------------- | :----------------------------------------------------------------- |
| `name_of_flow/`        | Root directory for the flow.                                       |
| ├── `crews/`           | Contains directories for specific crews.                           |
| │ └── `poem_crew/`     | Directory for the "poem_crew" with its configurations and scripts. |
| │ ├── `config/`        | Configuration files directory for the "poem_crew".                 |
| │ │ ├── `agents.yaml`  | YAML file defining the agents for "poem_crew".                     |
| │ │ └── `tasks.yaml`   | YAML file defining the tasks for "poem_crew".                      |
| │ ├── `poem_crew.py`   | Script for "poem_crew" functionality.                              |
| ├── `tools/`           | Directory for additional tools used in the flow.                   |
| │ └── `custom_tool.py` | Custom tool implementation.                                        |
| ├── `main.py`          | Main script for running the flow.                                  |
| ├── `README.md`        | Project description and instructions.                              |
| ├── `pyproject.toml`   | Configuration file for project dependencies and settings.          |
| └── `.gitignore`       | Specifies files and directories to ignore in version control.      |

### Building Your Crews

In the `crews` folder, you can define multiple crews. Each crew has its own folder containing configuration files and the crew definition file. For example, the `poem_crew` folder contains:

- `config/agents.yaml`: Defines the agents for the crew.
- `config/tasks.yaml`: Defines the tasks for the crew.
- `poem_crew.py`: Contains the crew definition, including agents, tasks, and the crew itself.

You can copy, paste, and edit the `poem_crew` to create other crews.

### Connecting Crews in `main.py`

The `main.py` file is where you create your flow and connect the crews together. You define your flow using the `Flow` class, with the `@start` and `@listen` decorators specifying the order of execution.

Here's an example of how you can connect the `poem_crew` in the `main.py` file:

```python Code
#!/usr/bin/env python
from random import randint

from pydantic import BaseModel
from crewai.flow.flow import Flow, listen, start
from .crews.poem_crew.poem_crew import PoemCrew


class PoemState(BaseModel):
    sentence_count: int = 1
    poem: str = ""


class PoemFlow(Flow[PoemState]):

    @start()
    def generate_sentence_count(self):
        print("Generating sentence count")
        self.state.sentence_count = randint(1, 5)

    @listen(generate_sentence_count)
    def generate_poem(self):
        print("Generating poem")
        result = PoemCrew().crew().kickoff(inputs={"sentence_count": self.state.sentence_count})

        print("Poem generated", result.raw)
        self.state.poem = result.raw

    @listen(generate_poem)
    def save_poem(self):
        print("Saving poem")
        with open("poem.txt", "w") as f:
            f.write(self.state.poem)


def kickoff():
    poem_flow = PoemFlow()
    poem_flow.kickoff()


def plot():
    poem_flow = PoemFlow()
    poem_flow.plot("PoemFlowPlot")


if __name__ == "__main__":
    kickoff()
    plot()
```

In this example, the `PoemFlow` class defines a flow that generates a sentence count, uses the `PoemCrew` to generate a poem, and then saves the poem to a file. The flow is kicked off by calling the `kickoff()` method, and the PoemFlowPlot is generated by the `plot()` method.



### Running the Flow

(Optional) Before running the flow, you can install the dependencies by running:

```bash
crewai install
```

Once all of the dependencies are installed, activate the virtual environment by running:

```bash
source .venv/bin/activate
```

After activating the virtual environment, you can run the flow by executing one of the following commands:

```bash
crewai flow kickoff
```

or

```bash
uv run kickoff
```

The flow will execute, and you should see the output in the console.

## Plot Flows

Visualizing your AI workflows can provide valuable insights into the structure and execution paths of your flows. CrewAI offers a visualization tool that generates interactive plots of your flows, making it easier to understand and optimize your AI workflows.

### What are Plots?

Plots in CrewAI are graphical representations of your AI workflows. They display the various tasks, their connections, and the flow of data between them. This visualization helps in understanding the sequence of operations, identifying bottlenecks, and ensuring that the workflow logic matches your expectations.

### How to Generate a Plot

CrewAI provides two convenient methods to generate plots of your flows:

#### Option 1: Using the `plot()` Method

If you are working directly with a flow instance, you can generate a plot by calling the `plot()` method on your flow object. This method creates an HTML file containing the interactive plot of your flow.

```python Code
# Assuming you have a flow instance
flow.plot("my_flow_plot")
```

This will generate a file named `my_flow_plot.html` in your current directory. You can open this file in a web browser to view the interactive plot.

#### Option 2: Using the Command Line

If you are working within a structured CrewAI project, you can generate a plot from the command line. This is particularly useful for larger projects where you want to visualize the entire flow setup.

```bash
crewai flow plot
```

This command generates an HTML file with the plot of your flow, similar to the `plot()` method. The file is saved in your project directory, and you can open it in a web browser to explore the flow.

### Understanding the Plot

The generated plot displays nodes representing the tasks in your flow, with directed edges indicating the order of execution. The plot is interactive: you can zoom in and out, and hover over nodes to see additional details.

By visualizing your flows, you gain a clearer understanding of the workflow's structure, making it easier to debug, optimize, and communicate your AI processes to others.

### Conclusion

Plotting your flows is a powerful feature of CrewAI that enhances your ability to design and manage complex AI workflows. Whether you use the `plot()` method or the command line, generating plots provides a visual representation of your workflows, aiding in both development and presentation.

## Next Steps

If you're interested in exploring additional examples of flows, we have a variety of recommendations in our examples repository. Here are four specific flow examples, each showcasing a unique use case to help you match your current problem type to a specific example:

1. **Email Auto Responder Flow**: This example demonstrates an infinite loop where a background job continually runs to automate email responses. It's a great use case for tasks that need to be performed repeatedly without manual intervention. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/email_auto_responder_flow)

2. **Lead Score Flow**: This flow showcases adding human-in-the-loop feedback and handling different conditional branches using the router. It's an excellent example of how to incorporate dynamic decision-making and human oversight into your workflows. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/lead-score-flow)

3. **Write a Book Flow**: This example excels at chaining multiple crews together, where the output of one crew is used by another. Specifically, one crew outlines an entire book, and another crew generates chapters based on the outline. Eventually, everything is connected to produce a complete book. This flow is perfect for complex, multi-step processes that require coordination between different tasks. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/write_a_book_with_flows)

4. **Meeting Assistant Flow**: This flow demonstrates how to broadcast one event to trigger multiple follow-up actions. For instance, after a meeting is completed, the flow can update a Trello board, send a Slack message, and save the results. It's a great example of handling multiple outcomes from a single event, making it ideal for comprehensive task management and notification systems. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/meeting_assistant_flow)

By exploring these examples, you can gain insights into how to leverage CrewAI Flows for various use cases, from automating repetitive tasks to managing complex, multi-step processes with dynamic decision-making and human feedback.

Also, check out our YouTube video on how to use flows in CrewAI below!

<iframe
  className="w-full aspect-video rounded-xl"
  src="https://www.youtube.com/embed/MTb5my6VOT8"
  title="CrewAI Flows overview"
  frameBorder="0"
  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
  referrerPolicy="strict-origin-when-cross-origin"
  allowFullScreen
></iframe>

## Running Flows

There are two ways to run a flow:

### Using the Flow API

You can run a flow programmatically by creating an instance of your flow class and calling the `kickoff()` method:

```python
flow = ExampleFlow()
result = flow.kickoff()
```

### Streaming Flow Execution

For real-time visibility into flow execution, you can enable streaming to receive output as it's generated:

```python
class StreamingFlow(Flow):
    stream = True  # Enable streaming

    @start()
    def research(self):
        # Your flow implementation
        pass

# Iterate over streaming output
flow = StreamingFlow()
streaming = flow.kickoff()
for chunk in streaming:
    print(chunk.content, end="", flush=True)

# Access final result
result = streaming.result
```

Learn more about streaming in the [Streaming Flow Execution](/en/learn/streaming-flow-execution) guide.

## Memory in Flows

Every Flow automatically has access to CrewAI's unified [Memory](/concepts/memory) system. You can store, recall, and extract memories directly inside any flow method using three built-in convenience methods.

### Built-in Methods

| Method | Description |
| :--- | :--- |
| `self.remember(content, **kwargs)` | Store content in memory. Accepts optional `scope`, `categories`, `metadata`, `importance`. |
| `self.recall(query, **kwargs)` | Retrieve relevant memories. Accepts optional `scope`, `categories`, `limit`, `depth`. |
| `self.extract_memories(content)` | Break raw text into discrete, self-contained memory statements. |

A default `Memory()` instance is created automatically when the Flow initializes. You can also pass a custom one:

```python
from crewai.flow.flow import Flow
from crewai import Memory

custom_memory = Memory(
    recency_weight=0.5,
    recency_half_life_days=7,
    embedder={"provider": "ollama", "config": {"model_name": "mxbai-embed-large"}},
)

flow = MyFlow(memory=custom_memory)
```

### Example: Research and Analyze Flow

```python
from crewai.flow.flow import Flow, listen, start


class ResearchAnalysisFlow(Flow):
    @start()
    def gather_data(self):
        # Simulate research findings
        findings = (
            "PostgreSQL handles 10k concurrent connections with connection pooling. "
            "MySQL caps at around 5k. MongoDB scales horizontally but adds complexity."
        )

        # Extract atomic facts and remember each one
        memories = self.extract_memories(findings)
        for mem in memories:
            self.remember(mem, scope="/research/databases")

        return findings

    @listen(gather_data)
    def analyze(self, raw_findings):
        # Recall relevant past research (from this run or previous runs)
        past = self.recall("database performance and scaling", limit=10, depth="shallow")

        context_lines = [f"- {m.record.content}" for m in past]
        context = "\n".join(context_lines) if context_lines else "No prior context."

        return {
            "new_findings": raw_findings,
            "prior_context": context,
            "total_memories": len(past),
        }


flow = ResearchAnalysisFlow()
result = flow.kickoff()
print(result)
```

Because memory persists across runs (backed by LanceDB on disk), the `analyze` step will also recall findings from previous executions, enabling flows that learn and accumulate knowledge over time.

See the [Memory documentation](/concepts/memory) for details on scopes, slices, composite scoring, embedder configuration, and more.

### Using the CLI

Starting from version 0.103.0, you can run flows using the `crewai run` command:

```shell
crewai run
```

This command automatically detects whether your project is a flow (based on the `type = "flow"` setting in your `pyproject.toml`) and runs it accordingly. This is the recommended way to run flows from the command line.
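
For reference, the setting the CLI looks for lives under the project's tool table in `pyproject.toml`. In a generated flow project it looks roughly like this (only `type = "flow"` is the value referenced above; the table name is how generated projects are typically laid out):

```toml
[tool.crewai]
type = "flow"
```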

For backward compatibility, you can also use:

```shell
crewai flow kickoff
```

However, the `crewai run` command is now the preferred method, as it works for both crews and flows.