mirror of https://github.com/crewAIInc/crewAI.git
synced 2026-01-26 08:38:15 +00:00

Merge branch 'main' into lorenze/improve-docs-flows
@@ -16,16 +16,17 @@ Welcome to the CrewAI AOP API reference. This API allows you to programmatically
Navigate to your crew's detail page in the CrewAI AOP dashboard and copy your Bearer Token from the Status tab.
</Step>

<Step title="Discover Required Inputs">
Use the `GET /inputs` endpoint to see what parameters your crew expects.
</Step>

<Step title="Start a Crew Execution">
Call `POST /kickoff` with your inputs to start the crew execution and receive a `kickoff_id`.
</Step>

<Step title="Monitor Progress">
Use `GET /{kickoff_id}/status` to check execution status and retrieve results.
</Step>
</Steps>

@@ -40,13 +41,14 @@ curl -H "Authorization: Bearer YOUR_CREW_TOKEN" \

### Token Types

| Token Type | Scope | Use Case |
| :-------------------- | :------------------------ | :----------------------------------------------------------- |
| **Bearer Token** | Organization-level access | Full crew operations, ideal for server-to-server integration |
| **User Bearer Token** | User-scoped access | Limited permissions, suitable for user-specific operations |

<Tip>
You can find both token types in the Status tab of your crew's detail page in the CrewAI AOP dashboard.
</Tip>

## Base URL

@@ -63,29 +65,33 @@ Replace `your-crew-name` with your actual crew's URL from the dashboard.

1. **Discovery**: Call `GET /inputs` to understand what your crew needs
2. **Execution**: Submit inputs via `POST /kickoff` to start processing
3. **Monitoring**: Poll `GET /{kickoff_id}/status` until completion
4. **Results**: Extract the final output from the completed response
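
A minimal Python sketch of this loop (the crew URL, token, and input names are placeholders, and the `state` field checked while polling is an assumption; see the endpoint pages for the exact response schema):

```python
import time

import requests

BASE_URL = "https://your-crew-name.crewai.com"  # placeholder: your crew's URL
HEADERS = {"Authorization": "Bearer YOUR_CREW_TOKEN"}  # placeholder token

# 1. Discovery: ask the crew which inputs it expects
required = requests.get(f"{BASE_URL}/inputs", headers=HEADERS).json()
print("Required inputs:", required)

# 2. Execution: start the crew and capture the kickoff_id
resp = requests.post(
    f"{BASE_URL}/kickoff",
    headers=HEADERS,
    json={"inputs": {"topic": "AI trends"}},  # match your crew's required inputs
)
resp.raise_for_status()
kickoff_id = resp.json()["kickoff_id"]

# 3. Monitoring: poll the status endpoint until the run finishes
while True:
    status = requests.get(f"{BASE_URL}/{kickoff_id}/status", headers=HEADERS).json()
    if status.get("state") not in ("RUNNING", "PENDING"):  # field name assumed
        break
    time.sleep(5)

# 4. Results: the final output is part of the completed status response
print(status)
```
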
## Error Handling

The API uses standard HTTP status codes:

| Code  | Meaning                                    |
| ----- | :----------------------------------------- |
| `200` | Success                                    |
| `400` | Bad Request - Invalid input format         |
| `401` | Unauthorized - Invalid bearer token        |
| `404` | Not Found - Resource doesn't exist         |
| `422` | Validation Error - Missing required inputs |
| `500` | Server Error - Contact support             |
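
In client code, you can branch on these codes before reading the body. A short sketch, reusing the `requests` setup from the workflow example above (the printed response bodies are illustrative):

```python
resp = requests.post(f"{BASE_URL}/kickoff", headers=HEADERS, json={"inputs": {}})

if resp.status_code == 422:
    # Validation error: the body explains which required inputs are missing
    print("Missing inputs:", resp.json())
elif resp.status_code == 401:
    print("Unauthorized: check your bearer token in the dashboard")
else:
    resp.raise_for_status()  # surfaces any remaining 4xx/5xx as exceptions
    print("Kickoff accepted:", resp.json())
```
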
## Interactive Testing

<Info>
**Why no "Send" button?** Since each CrewAI AOP user has their own unique crew URL, we use **reference mode** instead of an interactive playground to avoid confusion. This shows you exactly what the requests should look like without non-functional send buttons.
</Info>

Each endpoint page shows you:

- ✅ **Exact request format** with all parameters
- ✅ **Response examples** for success and error cases
- ✅ **Code samples** in multiple languages (cURL, Python, JavaScript, etc.)

@@ -103,6 +109,7 @@ Each endpoint page shows you:
</CardGroup>

**Example workflow:**

1. **Copy this cURL example** from any endpoint page
2. **Replace `your-actual-crew-name.crewai.com`** with your real crew URL
3. **Replace the Bearer token** with your real token from the dashboard
@@ -111,10 +118,18 @@ Each endpoint page shows you:

## Need Help?

<CardGroup cols={2}>
  <Card
    title="Enterprise Support"
    icon="headset"
    href="mailto:support@crewai.com"
  >
    Get help with API integration and troubleshooting
  </Card>
  <Card
    title="Enterprise Dashboard"
    icon="chart-line"
    href="https://app.crewai.com"
  >
    Manage your crews and view execution logs
  </Card>
</CardGroup>

@@ -1,8 +1,6 @@
---
title: "GET /{kickoff_id}/status"
description: "Get execution status"
openapi: "/enterprise-api.en.yaml GET /{kickoff_id}/status"
mode: "wide"
---

@@ -572,6 +572,55 @@ The `third_method` and `fourth_method` listen to the output of the `second_metho

When you run this Flow, the output will change based on the random boolean value generated by the `start_method`.

### Human in the Loop (human feedback)

The `@human_feedback` decorator enables human-in-the-loop workflows by pausing flow execution to collect feedback from a human. This is useful for approval gates, quality review, and decision points that require human judgment.

```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult

class ReviewFlow(Flow):
    @start()
    @human_feedback(
        message="Do you approve this content?",
        emit=["approved", "rejected", "needs_revision"],
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
    )
    def generate_content(self):
        return "Content to be reviewed..."

    @listen("approved")
    def on_approval(self, result: HumanFeedbackResult):
        print(f"Approved! Feedback: {result.feedback}")

    @listen("rejected")
    def on_rejection(self, result: HumanFeedbackResult):
        print(f"Rejected. Reason: {result.feedback}")
```

When `emit` is specified, the human's free-form feedback is interpreted by an LLM and collapsed into one of the specified outcomes, which then triggers the corresponding `@listen` decorator.

You can also use `@human_feedback` without routing to simply collect feedback:

```python Code
@start()
@human_feedback(message="Any comments on this output?")
def my_method(self):
    return "Output for review"

@listen(my_method)
def next_step(self, result: HumanFeedbackResult):
    # Access feedback via result.feedback
    # Access original output via result.output
    pass
```

Access all feedback collected during a flow via `self.last_human_feedback` (most recent) or `self.human_feedback_history` (all feedback as a list).

For a complete guide on human feedback in flows, including **async/non-blocking feedback** with custom providers (Slack, webhooks, etc.), see [Human Feedback in Flows](/en/learn/human-feedback-in-flows).

## Adding Agents to Flows

Agents can be seamlessly integrated into your flows, providing a lightweight alternative to full Crews when you need simpler, focused task execution. Here's an example of how to use an Agent within a flow to perform market research:

@@ -187,6 +187,97 @@ You can also deploy your crews directly through the CrewAI AOP web interface by

</Steps>

## Option 3: Redeploy Using API (CI/CD Integration)

For automated deployments in CI/CD pipelines, you can use the CrewAI API to trigger redeployments of existing crews. This is particularly useful for GitHub Actions, Jenkins, or other automation workflows.

<Steps>
<Step title="Get Your Personal Access Token">

Navigate to your CrewAI AOP account settings to generate an API token:

1. Go to [app.crewai.com](https://app.crewai.com)
2. Click on **Settings** → **Account** → **Personal Access Token**
3. Generate a new token and copy it securely
4. Store this token as a secret in your CI/CD system

</Step>

<Step title="Find Your Automation UUID">

Locate the unique identifier for your deployed crew:

1. Go to **Automations** in your CrewAI AOP dashboard
2. Select your existing automation/crew
3. Click on **Additional Details**
4. Copy the **UUID** - this identifies your specific crew deployment

</Step>

<Step title="Trigger Redeployment via API">

Use the Deploy API endpoint to trigger a redeployment:

```bash
curl -i -X POST \
  -H "Authorization: Bearer YOUR_PERSONAL_ACCESS_TOKEN" \
  https://app.crewai.com/crewai_plus/api/v1/crews/YOUR-AUTOMATION-UUID/deploy

# HTTP/2 200
# content-type: application/json
#
# {
#   "uuid": "your-automation-uuid",
#   "status": "Deploy Enqueued",
#   "public_url": "https://your-crew-deployment.crewai.com",
#   "token": "your-bearer-token"
# }
```

<Info>
If your automation was first created connected to Git, the API will automatically pull the latest changes from your repository before redeploying.
</Info>

</Step>

<Step title="GitHub Actions Integration Example">

Here's a GitHub Actions workflow with more complex deployment triggers:

```yaml
name: Deploy CrewAI Automation

on:
  push:
    branches: [ main ]
  pull_request:
    types: [ labeled ]
  release:
    types: [ published ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    if: |
      (github.event_name == 'push' && github.ref == 'refs/heads/main') ||
      (github.event_name == 'pull_request' && contains(github.event.pull_request.labels.*.name, 'deploy')) ||
      (github.event_name == 'release')
    steps:
      - name: Trigger CrewAI Redeployment
        run: |
          curl -X POST \
            -H "Authorization: Bearer ${{ secrets.CREWAI_PAT }}" \
            https://app.crewai.com/crewai_plus/api/v1/crews/${{ secrets.CREWAI_AUTOMATION_UUID }}/deploy
```

<Tip>
Add `CREWAI_PAT` and `CREWAI_AUTOMATION_UUID` as repository secrets. For PR deployments, add a "deploy" label to trigger the workflow.
</Tip>

</Step>
</Steps>

## ⚠️ Environment Variable Security Requirements

<Warning>

@@ -62,13 +62,13 @@ Test your Gmail trigger integration locally using the CrewAI CLI:
crewai triggers list

# Simulate a Gmail trigger with realistic payload
crewai triggers run gmail/new_email_received
```

The `crewai triggers run` command will execute your crew with a complete Gmail payload, allowing you to test your parsing logic before deployment.
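
One hedged way to consume that payload inside your crew is to interpolate it into a task description. This sketch assumes your task templates receive kickoff inputs as `{placeholder}` variables; the agent and task names are illustrative, and the payload's actual fields are whatever you observe from `crewai triggers run`:

```python
from crewai import Agent, Task

triage_agent = Agent(
    role="Email triage assistant",
    goal="Summarize and classify incoming email",
    backstory="You turn raw Gmail trigger payloads into actionable summaries.",
)

# {crewai_trigger_payload} is filled from the kickoff inputs supplied by the
# trigger (or by `crewai triggers run` during local testing).
triage_task = Task(
    description=(
        "Parse this Gmail trigger payload and summarize the email:\n"
        "{crewai_trigger_payload}"
    ),
    expected_output="Sender, subject, a one-line summary, and a suggested action.",
    agent=triage_agent,
)
```
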
<Warning>
Use `crewai triggers run gmail/new_email_received` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
</Warning>

## Monitoring Executions

@@ -83,6 +83,6 @@ Track history and performance of triggered runs:

- Ensure Gmail is connected in Tools & Integrations
- Verify the Gmail Trigger is enabled on the Triggers tab
- Test locally with `crewai triggers run gmail/new_email_received` to see the exact payload structure
- Check the execution logs and confirm the payload is passed as `crewai_trigger_payload`
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution

docs/en/learn/human-feedback-in-flows.mdx (new file, 581 lines)
@@ -0,0 +1,581 @@
---
title: Human Feedback in Flows
description: Learn how to integrate human feedback directly into your CrewAI Flows using the @human_feedback decorator
icon: user-check
mode: "wide"
---

## Overview

The `@human_feedback` decorator enables human-in-the-loop (HITL) workflows directly within CrewAI Flows. It allows you to pause flow execution, present output to a human for review, collect their feedback, and optionally route to different listeners based on the feedback outcome.

This is particularly valuable for:

- **Quality assurance**: Review AI-generated content before it's used downstream
- **Decision gates**: Let humans make critical decisions in automated workflows
- **Approval workflows**: Implement approve/reject/revise patterns
- **Interactive refinement**: Collect feedback to improve outputs iteratively

```mermaid
flowchart LR
    A[Flow Method] --> B[Output Generated]
    B --> C[Human Reviews]
    C --> D{Feedback}
    D -->|emit specified| E[LLM Collapses to Outcome]
    D -->|no emit| F[HumanFeedbackResult]
    E --> G["@listen('approved')"]
    E --> H["@listen('rejected')"]
    F --> I[Next Listener]
```

## Quick Start

Here's the simplest way to add human feedback to a flow:

```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback

class SimpleReviewFlow(Flow):
    @start()
    @human_feedback(message="Please review this content:")
    def generate_content(self):
        return "This is AI-generated content that needs review."

    @listen(generate_content)
    def process_feedback(self, result):
        print(f"Content: {result.output}")
        print(f"Human said: {result.feedback}")

flow = SimpleReviewFlow()
flow.kickoff()
```

When this flow runs, it will:

1. Execute `generate_content` and return the string
2. Display the output to the user with the request message
3. Wait for the user to type feedback (or press Enter to skip)
4. Pass a `HumanFeedbackResult` object to `process_feedback`

## The @human_feedback Decorator

### Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `message` | `str` | Yes | The message shown to the human alongside the method output |
| `emit` | `Sequence[str]` | No | List of possible outcomes. Feedback is collapsed to one of these, which triggers `@listen` decorators |
| `llm` | `str \| BaseLLM` | When `emit` is specified | LLM used to interpret feedback and map it to an outcome |
| `default_outcome` | `str` | No | Outcome to use if no feedback is provided. Must be in `emit` |
| `metadata` | `dict` | No | Additional data for enterprise integrations |
| `provider` | `HumanFeedbackProvider` | No | Custom provider for async/non-blocking feedback. See [Async Human Feedback](#async-human-feedback-non-blocking) |

### Basic Usage (No Routing)

When you don't specify `emit`, the decorator simply collects feedback and passes a `HumanFeedbackResult` to the next listener:

```python Code
@start()
@human_feedback(message="What do you think of this analysis?")
def analyze_data(self):
    return "Analysis results: Revenue up 15%, costs down 8%"

@listen(analyze_data)
def handle_feedback(self, result):
    # result is a HumanFeedbackResult
    print(f"Analysis: {result.output}")
    print(f"Feedback: {result.feedback}")
```

### Routing with emit

When you specify `emit`, the decorator becomes a router. The human's free-form feedback is interpreted by an LLM and collapsed into one of the specified outcomes:

```python Code
@start()
@human_feedback(
    message="Do you approve this content for publication?",
    emit=["approved", "rejected", "needs_revision"],
    llm="gpt-4o-mini",
    default_outcome="needs_revision",
)
def review_content(self):
    return "Draft blog post content here..."

@listen("approved")
def publish(self, result):
    print(f"Publishing! User said: {result.feedback}")

@listen("rejected")
def discard(self, result):
    print(f"Discarding. Reason: {result.feedback}")

@listen("needs_revision")
def revise(self, result):
    print(f"Revising based on: {result.feedback}")
```

<Tip>
The LLM uses structured outputs (function calling) when available to guarantee the response is one of your specified outcomes. This makes routing reliable and predictable.
</Tip>

## HumanFeedbackResult

The `HumanFeedbackResult` dataclass contains all information about a human feedback interaction:

```python Code
from crewai.flow.human_feedback import HumanFeedbackResult

@dataclass
class HumanFeedbackResult:
    output: Any          # The original method output shown to the human
    feedback: str        # The raw feedback text from the human
    outcome: str | None  # The collapsed outcome (if emit was specified)
    timestamp: datetime  # When the feedback was received
    method_name: str     # Name of the decorated method
    metadata: dict       # Any metadata passed to the decorator
```

### Accessing in Listeners

When a listener is triggered by a `@human_feedback` method with `emit`, it receives the `HumanFeedbackResult`:

```python Code
@listen("approved")
def on_approval(self, result: HumanFeedbackResult):
    print(f"Original output: {result.output}")
    print(f"User feedback: {result.feedback}")
    print(f"Outcome: {result.outcome}")  # "approved"
    print(f"Received at: {result.timestamp}")
```

## Accessing Feedback History

The `Flow` class provides two attributes for accessing human feedback:

### last_human_feedback

Returns the most recent `HumanFeedbackResult`:

```python Code
@listen(some_method)
def check_feedback(self):
    if self.last_human_feedback:
        print(f"Last feedback: {self.last_human_feedback.feedback}")
```

### human_feedback_history

A list of all `HumanFeedbackResult` objects collected during the flow:

```python Code
@listen(final_step)
def summarize(self):
    print(f"Total feedback collected: {len(self.human_feedback_history)}")
    for i, fb in enumerate(self.human_feedback_history):
        print(f"{i+1}. {fb.method_name}: {fb.outcome or 'no routing'}")
```

<Warning>
Each `HumanFeedbackResult` is appended to `human_feedback_history`, so multiple feedback steps won't overwrite each other. Use this list to access all feedback collected during the flow.
</Warning>

## Complete Example: Content Approval Workflow

Here's a full example implementing a content review and approval workflow:

<CodeGroup>

```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult
from pydantic import BaseModel


class ContentState(BaseModel):
    topic: str = ""
    draft: str = ""
    final_content: str = ""
    revision_count: int = 0


class ContentApprovalFlow(Flow[ContentState]):
    """A flow that generates content and gets human approval."""

    @start()
    def get_topic(self):
        self.state.topic = input("What topic should I write about? ")
        return self.state.topic

    @listen(get_topic)
    def generate_draft(self, topic):
        # In real use, this would call an LLM
        self.state.draft = f"# {topic}\n\nThis is a draft about {topic}..."
        return self.state.draft

    @listen(generate_draft)
    @human_feedback(
        message="Please review this draft. Reply 'approved', 'rejected', or provide revision feedback:",
        emit=["approved", "rejected", "needs_revision"],
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
    )
    def review_draft(self, draft):
        return draft

    @listen("approved")
    def publish_content(self, result: HumanFeedbackResult):
        self.state.final_content = result.output
        print("\n✅ Content approved and published!")
        print(f"Reviewer comment: {result.feedback}")
        return "published"

    @listen("rejected")
    def handle_rejection(self, result: HumanFeedbackResult):
        print("\n❌ Content rejected")
        print(f"Reason: {result.feedback}")
        return "rejected"

    @listen("needs_revision")
    def revise_content(self, result: HumanFeedbackResult):
        self.state.revision_count += 1
        print(f"\n📝 Revision #{self.state.revision_count} requested")
        print(f"Feedback: {result.feedback}")

        # In a real flow, you might loop back to generate_draft
        # For this example, we just acknowledge
        return "revision_requested"


# Run the flow
flow = ContentApprovalFlow()
result = flow.kickoff()
print(f"\nFlow completed. Revisions requested: {flow.state.revision_count}")
```

```text Output
What topic should I write about? AI Safety

==================================================
OUTPUT FOR REVIEW:
==================================================
# AI Safety

This is a draft about AI Safety...
==================================================

Please review this draft. Reply 'approved', 'rejected', or provide revision feedback:
(Press Enter to skip, or type your feedback)

Your feedback: Looks good, approved!

✅ Content approved and published!
Reviewer comment: Looks good, approved!

Flow completed. Revisions requested: 0
```

</CodeGroup>

## Combining with Other Decorators

The `@human_feedback` decorator works with other flow decorators. Place it as the innermost decorator (closest to the function):

```python Code
# Correct: @human_feedback is innermost (closest to the function)
@start()
@human_feedback(message="Review this:")
def my_start_method(self):
    return "content"

@listen(other_method)
@human_feedback(message="Review this too:")
def my_listener(self, data):
    return f"processed: {data}"
```

<Tip>
Place `@human_feedback` as the innermost decorator (last/closest to the function) so it wraps the method directly and can capture the return value before passing it to the flow system.
</Tip>

## Best Practices

### 1. Write Clear Request Messages

The `message` parameter is what the human sees. Make it actionable:

```python Code
# ✅ Good - clear and actionable
@human_feedback(message="Does this summary accurately capture the key points? Reply 'yes' or explain what's missing:")

# ❌ Bad - vague
@human_feedback(message="Review this:")
```

### 2. Choose Meaningful Outcomes

When using `emit`, pick outcomes that map naturally to human responses:

```python Code
# ✅ Good - natural language outcomes
emit=["approved", "rejected", "needs_more_detail"]

# ❌ Bad - technical or unclear
emit=["state_1", "state_2", "state_3"]
```

### 3. Always Provide a Default Outcome

Use `default_outcome` to handle cases where users press Enter without typing:

```python Code
@human_feedback(
    message="Approve? (press Enter to request revision)",
    emit=["approved", "needs_revision"],
    llm="gpt-4o-mini",
    default_outcome="needs_revision",  # Safe default
)
```

### 4. Use Feedback History for Audit Trails

Access `human_feedback_history` to create audit logs:

```python Code
@listen(final_step)
def create_audit_log(self):
    log = []
    for fb in self.human_feedback_history:
        log.append({
            "step": fb.method_name,
            "outcome": fb.outcome,
            "feedback": fb.feedback,
            "timestamp": fb.timestamp.isoformat(),
        })
    return log
```

### 5. Handle Both Routed and Non-Routed Feedback

When designing flows, consider whether you need routing:

| Scenario | Use |
|----------|-----|
| Simple review, just need the feedback text | No `emit` |
| Need to branch to different paths based on response | Use `emit` |
| Approval gates with approve/reject/revise | Use `emit` |
| Collecting comments for logging only | No `emit` |

## Async Human Feedback (Non-Blocking)

By default, `@human_feedback` blocks execution waiting for console input. For production applications, you may need **async/non-blocking** feedback that integrates with external systems like Slack, email, webhooks, or APIs.

### The Provider Abstraction

Use the `provider` parameter to specify a custom feedback collection strategy:

```python Code
from crewai.flow import (
    Flow, start, listen, human_feedback,
    HumanFeedbackProvider, HumanFeedbackPending, PendingFeedbackContext,
)

class WebhookProvider(HumanFeedbackProvider):
    """Provider that pauses the flow and waits for a webhook callback."""

    def __init__(self, webhook_url: str):
        self.webhook_url = webhook_url

    def request_feedback(self, context: PendingFeedbackContext, flow: Flow) -> str:
        # Notify the external system (e.g., send a Slack message, create a ticket)
        self.send_notification(context)  # implement your own

        # Pause execution - the framework handles persistence automatically
        raise HumanFeedbackPending(
            context=context,
            callback_info={"webhook_url": f"{self.webhook_url}/{context.flow_id}"},
        )

class ReviewFlow(Flow):
    @start()
    @human_feedback(
        message="Review this content:",
        emit=["approved", "rejected"],
        llm="gpt-4o-mini",
        provider=WebhookProvider("https://myapp.com/api"),
    )
    def generate_content(self):
        return "AI-generated content..."

    @listen("approved")
    def publish(self, result):
        return "Published!"
```

<Tip>
The flow framework **automatically persists state** when `HumanFeedbackPending` is raised. Your provider only needs to notify the external system and raise the exception—no manual persistence calls required.
</Tip>

### Handling Paused Flows

When using an async provider, `kickoff()` returns a `HumanFeedbackPending` object instead of raising an exception:

```python Code
flow = ReviewFlow()
result = flow.kickoff()

if isinstance(result, HumanFeedbackPending):
    # Flow is paused, state is automatically persisted
    print(f"Waiting for feedback at: {result.callback_info['webhook_url']}")
    print(f"Flow ID: {result.context.flow_id}")
else:
    # Normal completion
    print(f"Flow completed: {result}")
```

### Resuming a Paused Flow

When feedback arrives (e.g., via webhook), resume the flow:

```python Code
# Sync handler:
def handle_feedback_webhook(flow_id: str, feedback: str):
    flow = ReviewFlow.from_pending(flow_id)
    result = flow.resume(feedback)
    return result

# Async handler (FastAPI, aiohttp, etc.):
async def handle_feedback_webhook(flow_id: str, feedback: str):
    flow = ReviewFlow.from_pending(flow_id)
    result = await flow.resume_async(feedback)
    return result
```

### Key Types

| Type | Description |
|------|-------------|
| `HumanFeedbackProvider` | Protocol for custom feedback providers |
| `PendingFeedbackContext` | Contains all info needed to resume a paused flow |
| `HumanFeedbackPending` | Returned by `kickoff()` when the flow is paused for feedback |
| `ConsoleProvider` | Default blocking console input provider |

### PendingFeedbackContext

The context contains everything needed to resume:

```python Code
@dataclass
class PendingFeedbackContext:
    flow_id: str            # Unique identifier for this flow execution
    flow_class: str         # Fully qualified class name
    method_name: str        # Method that triggered feedback
    method_output: Any      # Output shown to the human
    message: str            # The request message
    emit: list[str] | None  # Possible outcomes for routing
    default_outcome: str | None
    metadata: dict          # Custom metadata
    llm: str | None         # LLM for outcome collapsing
    requested_at: datetime
```

### Complete Async Flow Example

```python Code
from crewai.flow import (
    Flow, start, listen, human_feedback,
    HumanFeedbackProvider, HumanFeedbackPending, PendingFeedbackContext
)

class SlackNotificationProvider(HumanFeedbackProvider):
    """Provider that sends Slack notifications and pauses for async feedback."""

    def __init__(self, channel: str):
        self.channel = channel

    def request_feedback(self, context: PendingFeedbackContext, flow: Flow) -> str:
        # Send Slack notification (implement your own)
        slack_thread_id = self.post_to_slack(
            channel=self.channel,
            message=f"Review needed:\n\n{context.method_output}\n\n{context.message}",
        )

        # Pause execution - framework handles persistence automatically
        raise HumanFeedbackPending(
            context=context,
            callback_info={
                "slack_channel": self.channel,
                "thread_id": slack_thread_id,
            }
        )

class ContentPipeline(Flow):
    @start()
    @human_feedback(
        message="Approve this content for publication?",
        emit=["approved", "rejected", "needs_revision"],
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
        provider=SlackNotificationProvider("#content-reviews"),
    )
    def generate_content(self):
        return "AI-generated blog post content..."

    @listen("approved")
    def publish(self, result):
        print(f"Publishing! Reviewer said: {result.feedback}")
        return {"status": "published"}

    @listen("rejected")
    def archive(self, result):
        print(f"Archived. Reason: {result.feedback}")
        return {"status": "archived"}

    @listen("needs_revision")
    def queue_revision(self, result):
        print(f"Queued for revision: {result.feedback}")
        return {"status": "revision_needed"}


# Starting the flow (will pause and wait for Slack response)
def start_content_pipeline():
    flow = ContentPipeline()
    result = flow.kickoff()

    if isinstance(result, HumanFeedbackPending):
        return {"status": "pending", "flow_id": result.context.flow_id}

    return result


# Resuming when Slack webhook fires (sync handler)
def on_slack_feedback(flow_id: str, slack_message: str):
    flow = ContentPipeline.from_pending(flow_id)
    result = flow.resume(slack_message)
    return result


# If your handler is async (FastAPI, aiohttp, Slack Bolt async, etc.)
async def on_slack_feedback_async(flow_id: str, slack_message: str):
    flow = ContentPipeline.from_pending(flow_id)
    result = await flow.resume_async(slack_message)
    return result
```

<Warning>
If you're using an async web framework (FastAPI, aiohttp, Slack Bolt async mode), use `await flow.resume_async()` instead of `flow.resume()`. Calling `resume()` from within a running event loop will raise a `RuntimeError`.
</Warning>

### Best Practices for Async Feedback

1. **Check the return type**: `kickoff()` returns `HumanFeedbackPending` when paused—no try/except needed
2. **Use the right resume method**: Use `resume()` in sync code, `await resume_async()` in async code
3. **Store callback info**: Use `callback_info` to store webhook URLs, ticket IDs, etc.
4. **Implement idempotency**: Your resume handler should be idempotent for safety (see the sketch after this list)
5. **Automatic persistence**: State is automatically saved when `HumanFeedbackPending` is raised, using `SQLiteFlowPersistence` by default
6. **Custom persistence**: Pass a custom persistence instance to `from_pending()` if needed
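
To make point 4 concrete, here is a minimal sketch of an idempotent resume handler. The `processed` set stands in for a durable store such as Redis or a database table (an assumption of this sketch, not part of the API):

```python Code
processed: set[str] = set()  # stand-in for a durable store (Redis, a DB table, ...)

def on_feedback(flow_id: str, feedback: str):
    if flow_id in processed:
        # Duplicate webhook delivery: already resumed, so ignore it
        return {"status": "already_resumed"}
    flow = ReviewFlow.from_pending(flow_id)
    result = flow.resume(feedback)
    processed.add(flow_id)
    return result
```
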
## Related Documentation

- [Flows Overview](/en/concepts/flows) - Learn about CrewAI Flows
- [Flow State Management](/en/guides/flows/mastering-flow-state) - Managing state in flows
- [Flow Persistence](/en/concepts/flows#persistence) - Persisting flow state
- [Routing with @router](/en/concepts/flows#router) - More about conditional routing
- [Human Input on Execution](/en/learn/human-input-on-execution) - Task-level human input

@@ -5,9 +5,22 @@ icon: "user-check"

mode: "wide"
---

Human-in-the-Loop (HITL) is a powerful approach that combines artificial intelligence with human expertise to enhance decision-making and improve task outcomes. CrewAI provides multiple ways to implement HITL depending on your needs.

## Choosing Your HITL Approach

CrewAI offers two main approaches for implementing human-in-the-loop workflows:

| Approach | Best For | Integration |
|----------|----------|-------------|
| **Flow-based** (`@human_feedback` decorator) | Local development, console-based review, synchronous workflows | [Human Feedback in Flows](/en/learn/human-feedback-in-flows) |
| **Webhook-based** (Enterprise) | Production deployments, async workflows, external integrations (Slack, Teams, etc.) | This guide |

<Tip>
If you're building flows and want to add human review steps with routing based on feedback, check out the [Human Feedback in Flows](/en/learn/human-feedback-in-flows) guide for the `@human_feedback` decorator.
</Tip>

## Setting Up Webhook-Based HITL Workflows

<Steps>
<Step title="Configure Your Task">