The `third_method` and `fourth_method` listen to the output of the `second_method`.

When you run this Flow, the output will change based on the random boolean value generated by the `start_method`.

### Human in the Loop (human feedback)

The `@human_feedback` decorator enables human-in-the-loop workflows by pausing flow execution to collect feedback from a human. This is useful for approval gates, quality review, and decision points that require human judgment.

```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult


class ReviewFlow(Flow):
    @start()
    @human_feedback(
        message="Do you approve this content?",
        emit=["approved", "rejected", "needs_revision"],
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
    )
    def generate_content(self):
        return "Content to be reviewed..."

    @listen("approved")
    def on_approval(self, result: HumanFeedbackResult):
        print(f"Approved! Feedback: {result.feedback}")

    @listen("rejected")
    def on_rejection(self, result: HumanFeedbackResult):
        print(f"Rejected. Reason: {result.feedback}")
```

When `emit` is specified, the human's free-form feedback is interpreted by an LLM and collapsed into one of the specified outcomes, which then triggers the corresponding `@listen` method.

You can also use `@human_feedback` without routing to simply collect feedback:

```python Code
@start()
@human_feedback(message="Any comments on this output?")
def my_method(self):
    return "Output for review"


@listen(my_method)
def next_step(self, result: HumanFeedbackResult):
    # Access feedback via result.feedback
    # Access original output via result.output
    pass
```

Access all feedback collected during a flow via `self.last_human_feedback` (most recent) or `self.human_feedback_history` (all feedback as a list).

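For example, a later step in the flow could report everything gathered so far. A minimal sketch, assuming each entry in `self.human_feedback_history` is a `HumanFeedbackResult` (the `summarize_feedback` method name is illustrative):

```python Code
@listen(next_step)
def summarize_feedback(self):
    # Most recent piece of feedback collected in this run
    if self.last_human_feedback is not None:
        print(f"Latest feedback: {self.last_human_feedback.feedback}")

    # Every piece of feedback collected so far, in order
    for result in self.human_feedback_history:
        print(f"- {result.feedback} (on: {result.output})")
```
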
For a complete guide on human feedback in flows, including **async/non-blocking feedback** with custom providers (Slack, webhooks, etc.), see [Human Feedback in Flows](/en/learn/human-feedback-in-flows).

## Adding Agents to Flows

Agents can be seamlessly integrated into your flows, providing a lightweight alternative to full Crews when you need simpler, focused task execution. Here's an example of how to use an Agent within a flow to perform market research:
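
A minimal sketch of the pattern (the role, goal, and backstory values are placeholders, and it assumes `Agent.kickoff()` returns an output whose `raw` attribute holds the agent's text response, as in recent crewAI releases):

```python Code
from crewai import Agent
from crewai.flow.flow import Flow, listen, start


class MarketResearchFlow(Flow):
    @start()
    def research_market(self):
        # Run a standalone Agent inside the flow, without wrapping it in a Crew
        analyst = Agent(
            role="Market Research Analyst",
            goal="Summarize current trends in a given market",
            backstory="An analyst who distills market signals into short, actionable briefs.",
        )
        result = analyst.kickoff("Summarize the key trends in the AI tooling market.")
        return result.raw

    @listen(research_market)
    def present_findings(self, summary):
        print(summary)
        return summary
```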