Merge branch 'main' into lorenze/improve-docs-flows

This commit is contained in:
Lorenze Jay
2025-12-31 08:32:50 -08:00
committed by GitHub
411 changed files with 17592 additions and 36636 deletions

View File

@@ -309,6 +309,7 @@
"en/learn/hierarchical-process",
"en/learn/human-input-on-execution",
"en/learn/human-in-the-loop",
"en/learn/human-feedback-in-flows",
"en/learn/kickoff-async",
"en/learn/kickoff-for-each",
"en/learn/llm-connections",
@@ -738,6 +739,7 @@
"pt-BR/learn/hierarchical-process",
"pt-BR/learn/human-input-on-execution",
"pt-BR/learn/human-in-the-loop",
"pt-BR/learn/human-feedback-in-flows",
"pt-BR/learn/kickoff-async",
"pt-BR/learn/kickoff-for-each",
"pt-BR/learn/llm-connections",
@@ -1176,6 +1178,7 @@
"ko/learn/hierarchical-process",
"ko/learn/human-input-on-execution",
"ko/learn/human-in-the-loop",
"ko/learn/human-feedback-in-flows",
"ko/learn/kickoff-async",
"ko/learn/kickoff-for-each",
"ko/learn/llm-connections",

View File

@@ -16,16 +16,17 @@ Welcome to the CrewAI AOP API reference. This API allows you to programmatically
Navigate to your crew's detail page in the CrewAI AOP dashboard and copy your Bearer Token from the Status tab.
</Step>
<Step title="Discover Required Inputs">
Use the `GET /inputs` endpoint to see what parameters your crew expects.
</Step>
<Step title="Discover Required Inputs">
Use the `GET /inputs` endpoint to see what parameters your crew expects.
</Step>
<Step title="Start a Crew Execution">
Call `POST /kickoff` with your inputs to start the crew execution and receive a `kickoff_id`.
</Step>
<Step title="Start a Crew Execution">
Call `POST /kickoff` with your inputs to start the crew execution and receive
a `kickoff_id`.
</Step>
<Step title="Monitor Progress">
    Use `GET /{kickoff_id}/status` to check execution status and retrieve results.
</Step>
</Steps>
@@ -40,13 +41,14 @@ curl -H "Authorization: Bearer YOUR_CREW_TOKEN" \
### Token Types
| Token Type            | Scope                      | Use Case                                                      |
| :-------------------- | :------------------------- | :------------------------------------------------------------ |
| **Bearer Token**      | Organization-level access  | Full crew operations, ideal for server-to-server integration  |
| **User Bearer Token** | User-scoped access         | Limited permissions, suitable for user-specific operations    |
<Tip>
You can find both token types in the Status tab of your crew's detail page in the CrewAI AOP dashboard.
</Tip>
## Base URL
@@ -63,29 +65,33 @@ Replace `your-crew-name` with your actual crew's URL from the dashboard.
1. **Discovery**: Call `GET /inputs` to understand what your crew needs
2. **Execution**: Submit inputs via `POST /kickoff` to start processing
3. **Monitoring**: Poll `GET /{kickoff_id}/status` until completion
4. **Results**: Extract the final output from the completed response
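As a rough end-to-end sketch of these four steps in Python (using the `requests` library; the crew URL, token, and input values are placeholders to replace with your own):
```python
import time

import requests

BASE_URL = "https://your-crew-name.crewai.com"  # your crew's URL from the dashboard
HEADERS = {"Authorization": "Bearer YOUR_CREW_TOKEN"}

# 1. Discovery: see which inputs this crew expects
required = requests.get(f"{BASE_URL}/inputs", headers=HEADERS).json()
print(required["inputs"])

# 2. Execution: start the crew with the inputs it asked for
kickoff = requests.post(
    f"{BASE_URL}/kickoff",
    headers=HEADERS,
    json={"inputs": {"topic": "AI"}},  # placeholder inputs
)
kickoff_id = kickoff.json()["kickoff_id"]

# 3. Monitoring: poll until the execution is no longer running
while True:
    status = requests.get(f"{BASE_URL}/{kickoff_id}/status", headers=HEADERS).json()
    if status.get("status") != "running":
        break
    time.sleep(5)

# 4. Results: the final output is in the completed response
print(status)
```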
## Error Handling
The API uses standard HTTP status codes:
| Code  | Meaning                                    |
| ----- | :----------------------------------------- |
| `200` | Success                                    |
| `400` | Bad Request - Invalid input format         |
| `401` | Unauthorized - Invalid bearer token        |
| `404` | Not Found - Resource doesn't exist         |
| `422` | Validation Error - Missing required inputs |
| `500` | Server Error - Contact support             |
## Interactive Testing
<Info>
**Why no "Send" button?** Since each CrewAI AOP user has their own unique crew URL, we use **reference mode** instead of an interactive playground to avoid confusion. This shows you exactly what the requests should look like without non-functional send buttons.
**Why no "Send" button?** Since each CrewAI AOP user has their own unique crew
URL, we use **reference mode** instead of an interactive playground to avoid
confusion. This shows you exactly what the requests should look like without
non-functional send buttons.
</Info>
Each endpoint page shows you:
- ✅ **Exact request format** with all parameters
- ✅ **Response examples** for success and error cases
- ✅ **Code samples** in multiple languages (cURL, Python, JavaScript, etc.)
@@ -103,6 +109,7 @@ Each endpoint page shows you:
</CardGroup>
**Example workflow:**
1. **Copy this cURL example** from any endpoint page
2. **Replace `your-actual-crew-name.crewai.com`** with your real crew URL
3. **Replace the Bearer token** with your real token from the dashboard
@@ -111,10 +118,18 @@ Each endpoint page shows you:
## Need Help?
<CardGroup cols={2}>
<Card title="Enterprise Support" icon="headset" href="mailto:support@crewai.com">
<Card
title="Enterprise Support"
icon="headset"
href="mailto:support@crewai.com"
>
Get help with API integration and troubleshooting
</Card>
<Card title="Enterprise Dashboard" icon="chart-line" href="https://app.crewai.com">
<Card
title="Enterprise Dashboard"
icon="chart-line"
href="https://app.crewai.com"
>
Manage your crews and view execution logs
</Card>
</CardGroup>

View File

@@ -1,8 +1,6 @@
---
title: "GET /status/{kickoff_id}"
title: "GET /{kickoff_id}/status"
description: "Get execution status"
openapi: "/enterprise-api.en.yaml GET /status/{kickoff_id}"
openapi: "/enterprise-api.en.yaml GET /{kickoff_id}/status"
mode: "wide"
---

View File

@@ -572,6 +572,55 @@ The `third_method` and `fourth_method` listen to the output of the `second_metho
When you run this Flow, the output will change based on the random boolean value generated by the `start_method`.
### Human in the Loop (human feedback)
The `@human_feedback` decorator enables human-in-the-loop workflows by pausing flow execution to collect feedback from a human. This is useful for approval gates, quality review, and decision points that require human judgment.
```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult
class ReviewFlow(Flow):
@start()
@human_feedback(
message="Do you approve this content?",
emit=["approved", "rejected", "needs_revision"],
llm="gpt-4o-mini",
default_outcome="needs_revision",
)
def generate_content(self):
return "Content to be reviewed..."
@listen("approved")
def on_approval(self, result: HumanFeedbackResult):
print(f"Approved! Feedback: {result.feedback}")
@listen("rejected")
def on_rejection(self, result: HumanFeedbackResult):
print(f"Rejected. Reason: {result.feedback}")
```
When `emit` is specified, the human's free-form feedback is interpreted by an LLM and collapsed into one of the specified outcomes, which then triggers the corresponding `@listen` decorator.
You can also use `@human_feedback` without routing to simply collect feedback:
```python Code
@start()
@human_feedback(message="Any comments on this output?")
def my_method(self):
return "Output for review"
@listen(my_method)
def next_step(self, result: HumanFeedbackResult):
# Access feedback via result.feedback
# Access original output via result.output
pass
```
Access all feedback collected during a flow via `self.last_human_feedback` (most recent) or `self.human_feedback_history` (all feedback as a list).
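For example, a listener can read both attributes (a small sketch that assumes it runs after the feedback step above):
```python Code
@listen("approved")
def log_feedback(self, result: HumanFeedbackResult):
    # Most recent feedback collected in this flow
    print(self.last_human_feedback.feedback)
    # All feedback gathered so far, in order
    for fb in self.human_feedback_history:
        print(fb.method_name, fb.outcome)
```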
For a complete guide on human feedback in flows, including **async/non-blocking feedback** with custom providers (Slack, webhooks, etc.), see [Human Feedback in Flows](/en/learn/human-feedback-in-flows).
## Adding Agents to Flows
Agents can be seamlessly integrated into your flows, providing a lightweight alternative to full Crews when you need simpler, focused task execution. Here's an example of how to use an Agent within a flow to perform market research:

View File

@@ -187,6 +187,97 @@ You can also deploy your crews directly through the CrewAI AOP web interface by
</Steps>
## Option 3: Redeploy Using API (CI/CD Integration)
For automated deployments in CI/CD pipelines, you can use the CrewAI API to trigger redeployments of existing crews. This is particularly useful for GitHub Actions, Jenkins, or other automation workflows.
<Steps>
<Step title="Get Your Personal Access Token">
Navigate to your CrewAI AOP account settings to generate an API token:
1. Go to [app.crewai.com](https://app.crewai.com)
2. Click on **Settings** → **Account** → **Personal Access Token**
3. Generate a new token and copy it securely
4. Store this token as a secret in your CI/CD system
</Step>
<Step title="Find Your Automation UUID">
Locate the unique identifier for your deployed crew:
1. Go to **Automations** in your CrewAI AOP dashboard
2. Select your existing automation/crew
3. Click on **Additional Details**
4. Copy the **UUID** - this identifies your specific crew deployment
</Step>
<Step title="Trigger Redeployment via API">
Use the Deploy API endpoint to trigger a redeployment:
```bash
curl -i -X POST \
-H "Authorization: Bearer YOUR_PERSONAL_ACCESS_TOKEN" \
https://app.crewai.com/crewai_plus/api/v1/crews/YOUR-AUTOMATION-UUID/deploy
# HTTP/2 200
# content-type: application/json
#
# {
# "uuid": "your-automation-uuid",
# "status": "Deploy Enqueued",
# "public_url": "https://your-crew-deployment.crewai.com",
# "token": "your-bearer-token"
# }
```
<Info>
If your automation was first created connected to Git, the API will automatically pull the latest changes from your repository before redeploying.
</Info>
</Step>
<Step title="GitHub Actions Integration Example">
Here's a GitHub Actions workflow with more complex deployment triggers:
```yaml
name: Deploy CrewAI Automation
on:
push:
branches: [ main ]
pull_request:
types: [ labeled ]
release:
types: [ published ]
jobs:
deploy:
runs-on: ubuntu-latest
if: |
(github.event_name == 'push' && github.ref == 'refs/heads/main') ||
(github.event_name == 'pull_request' && contains(github.event.pull_request.labels.*.name, 'deploy')) ||
(github.event_name == 'release')
steps:
- name: Trigger CrewAI Redeployment
run: |
curl -X POST \
-H "Authorization: Bearer ${{ secrets.CREWAI_PAT }}" \
https://app.crewai.com/crewai_plus/api/v1/crews/${{ secrets.CREWAI_AUTOMATION_UUID }}/deploy
```
<Tip>
Add `CREWAI_PAT` and `CREWAI_AUTOMATION_UUID` as repository secrets. For PR deployments, add a "deploy" label to trigger the workflow.
</Tip>
</Step>
</Steps>
## ⚠️ Environment Variable Security Requirements
<Warning>

View File

@@ -62,13 +62,13 @@ Test your Gmail trigger integration locally using the CrewAI CLI:
crewai triggers list
# Simulate a Gmail trigger with realistic payload
crewai triggers run gmail/new_email_received
```
The `crewai triggers run` command will execute your crew with a complete Gmail payload, allowing you to test your parsing logic before deployment.
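If you want to exercise the same parsing logic from a plain script as well, you can pass a payload to your crew directly. A minimal sketch (the crew class and the payload fields below are hypothetical; run the trigger simulation above to see the real structure):
```python
from my_project.crew import MyProjectCrew  # hypothetical project module

# Illustrative payload shape only, not the exact Gmail schema
payload = {"subject": "Quarterly report", "from": "alice@example.com"}

# Triggers deliver their payload to the crew as `crewai_trigger_payload`
result = MyProjectCrew().crew().kickoff(
    inputs={"crewai_trigger_payload": payload}
)
print(result.raw)
```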
<Warning>
Use `crewai triggers run gmail/new_email_received` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
</Warning>
## Monitoring Executions
@@ -83,6 +83,6 @@ Track history and performance of triggered runs:
- Ensure Gmail is connected in Tools & Integrations
- Verify the Gmail Trigger is enabled on the Triggers tab
- Test locally with `crewai triggers run gmail/new_email_received` to see the exact payload structure
- Check the execution logs and confirm the payload is passed as `crewai_trigger_payload`
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution

View File

@@ -0,0 +1,581 @@
---
title: Human Feedback in Flows
description: Learn how to integrate human feedback directly into your CrewAI Flows using the @human_feedback decorator
icon: user-check
mode: "wide"
---
## Overview
The `@human_feedback` decorator enables human-in-the-loop (HITL) workflows directly within CrewAI Flows. It allows you to pause flow execution, present output to a human for review, collect their feedback, and optionally route to different listeners based on the feedback outcome.
This is particularly valuable for:
- **Quality assurance**: Review AI-generated content before it's used downstream
- **Decision gates**: Let humans make critical decisions in automated workflows
- **Approval workflows**: Implement approve/reject/revise patterns
- **Interactive refinement**: Collect feedback to improve outputs iteratively
```mermaid
flowchart LR
A[Flow Method] --> B[Output Generated]
B --> C[Human Reviews]
C --> D{Feedback}
D -->|emit specified| E[LLM Collapses to Outcome]
D -->|no emit| F[HumanFeedbackResult]
E --> G["@listen('approved')"]
E --> H["@listen('rejected')"]
F --> I[Next Listener]
```
## Quick Start
Here's the simplest way to add human feedback to a flow:
```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback
class SimpleReviewFlow(Flow):
@start()
@human_feedback(message="Please review this content:")
def generate_content(self):
return "This is AI-generated content that needs review."
@listen(generate_content)
def process_feedback(self, result):
print(f"Content: {result.output}")
print(f"Human said: {result.feedback}")
flow = SimpleReviewFlow()
flow.kickoff()
```
When this flow runs, it will:
1. Execute `generate_content` and return the string
2. Display the output to the user with the request message
3. Wait for the user to type feedback (or press Enter to skip)
4. Pass a `HumanFeedbackResult` object to `process_feedback`
## The @human_feedback Decorator
### Parameters
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `message` | `str` | Yes | The message shown to the human alongside the method output |
| `emit` | `Sequence[str]` | No | List of possible outcomes. Feedback is collapsed to one of these, which triggers `@listen` decorators |
| `llm` | `str \| BaseLLM` | When `emit` specified | LLM used to interpret feedback and map to an outcome |
| `default_outcome` | `str` | No | Outcome to use if no feedback provided. Must be in `emit` |
| `metadata` | `dict` | No | Additional data for enterprise integrations |
| `provider` | `HumanFeedbackProvider` | No | Custom provider for async/non-blocking feedback. See [Async Human Feedback](#async-human-feedback-non-blocking) |
### Basic Usage (No Routing)
When you don't specify `emit`, the decorator simply collects feedback and passes a `HumanFeedbackResult` to the next listener:
```python Code
@start()
@human_feedback(message="What do you think of this analysis?")
def analyze_data(self):
return "Analysis results: Revenue up 15%, costs down 8%"
@listen(analyze_data)
def handle_feedback(self, result):
# result is a HumanFeedbackResult
print(f"Analysis: {result.output}")
print(f"Feedback: {result.feedback}")
```
### Routing with emit
When you specify `emit`, the decorator becomes a router. The human's free-form feedback is interpreted by an LLM and collapsed into one of the specified outcomes:
```python Code
@start()
@human_feedback(
message="Do you approve this content for publication?",
emit=["approved", "rejected", "needs_revision"],
llm="gpt-4o-mini",
default_outcome="needs_revision",
)
def review_content(self):
return "Draft blog post content here..."
@listen("approved")
def publish(self, result):
print(f"Publishing! User said: {result.feedback}")
@listen("rejected")
def discard(self, result):
print(f"Discarding. Reason: {result.feedback}")
@listen("needs_revision")
def revise(self, result):
print(f"Revising based on: {result.feedback}")
```
<Tip>
The LLM uses structured outputs (function calling) when available to guarantee the response is one of your specified outcomes. This makes routing reliable and predictable.
</Tip>
## HumanFeedbackResult
The `HumanFeedbackResult` dataclass contains all information about a human feedback interaction:
```python Code
from crewai.flow.human_feedback import HumanFeedbackResult
@dataclass
class HumanFeedbackResult:
output: Any # The original method output shown to the human
feedback: str # The raw feedback text from the human
outcome: str | None # The collapsed outcome (if emit was specified)
timestamp: datetime # When the feedback was received
method_name: str # Name of the decorated method
metadata: dict # Any metadata passed to the decorator
```
### Accessing in Listeners
When a listener is triggered by a `@human_feedback` method with `emit`, it receives the `HumanFeedbackResult`:
```python Code
@listen("approved")
def on_approval(self, result: HumanFeedbackResult):
print(f"Original output: {result.output}")
print(f"User feedback: {result.feedback}")
print(f"Outcome: {result.outcome}") # "approved"
print(f"Received at: {result.timestamp}")
```
## Accessing Feedback History
The `Flow` class provides two attributes for accessing human feedback:
### last_human_feedback
Returns the most recent `HumanFeedbackResult`:
```python Code
@listen(some_method)
def check_feedback(self):
if self.last_human_feedback:
print(f"Last feedback: {self.last_human_feedback.feedback}")
```
### human_feedback_history
A list of all `HumanFeedbackResult` objects collected during the flow:
```python Code
@listen(final_step)
def summarize(self):
print(f"Total feedback collected: {len(self.human_feedback_history)}")
for i, fb in enumerate(self.human_feedback_history):
print(f"{i+1}. {fb.method_name}: {fb.outcome or 'no routing'}")
```
<Warning>
Each `HumanFeedbackResult` is appended to `human_feedback_history`, so multiple feedback steps won't overwrite each other. Use this list to access all feedback collected during the flow.
</Warning>
## Complete Example: Content Approval Workflow
Here's a full example implementing a content review and approval workflow:
<CodeGroup>
```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult
from pydantic import BaseModel
class ContentState(BaseModel):
topic: str = ""
draft: str = ""
final_content: str = ""
revision_count: int = 0
class ContentApprovalFlow(Flow[ContentState]):
"""A flow that generates content and gets human approval."""
@start()
def get_topic(self):
self.state.topic = input("What topic should I write about? ")
return self.state.topic
@listen(get_topic)
def generate_draft(self, topic):
# In real use, this would call an LLM
self.state.draft = f"# {topic}\n\nThis is a draft about {topic}..."
return self.state.draft
@listen(generate_draft)
@human_feedback(
message="Please review this draft. Reply 'approved', 'rejected', or provide revision feedback:",
emit=["approved", "rejected", "needs_revision"],
llm="gpt-4o-mini",
default_outcome="needs_revision",
)
def review_draft(self, draft):
return draft
@listen("approved")
def publish_content(self, result: HumanFeedbackResult):
self.state.final_content = result.output
print("\n✅ Content approved and published!")
print(f"Reviewer comment: {result.feedback}")
return "published"
@listen("rejected")
def handle_rejection(self, result: HumanFeedbackResult):
print("\n❌ Content rejected")
print(f"Reason: {result.feedback}")
return "rejected"
@listen("needs_revision")
def revise_content(self, result: HumanFeedbackResult):
self.state.revision_count += 1
print(f"\n📝 Revision #{self.state.revision_count} requested")
print(f"Feedback: {result.feedback}")
# In a real flow, you might loop back to generate_draft
# For this example, we just acknowledge
return "revision_requested"
# Run the flow
flow = ContentApprovalFlow()
result = flow.kickoff()
print(f"\nFlow completed. Revisions requested: {flow.state.revision_count}")
```
```text Output
What topic should I write about? AI Safety
==================================================
OUTPUT FOR REVIEW:
==================================================
# AI Safety
This is a draft about AI Safety...
==================================================
Please review this draft. Reply 'approved', 'rejected', or provide revision feedback:
(Press Enter to skip, or type your feedback)
Your feedback: Looks good, approved!
✅ Content approved and published!
Reviewer comment: Looks good, approved!
Flow completed. Revisions requested: 0
```
</CodeGroup>
## Combining with Other Decorators
The `@human_feedback` decorator works with other flow decorators. Place it as the innermost decorator (closest to the function):
```python Code
# Correct: @human_feedback is innermost (closest to the function)
@start()
@human_feedback(message="Review this:")
def my_start_method(self):
return "content"
@listen(other_method)
@human_feedback(message="Review this too:")
def my_listener(self, data):
return f"processed: {data}"
```
<Tip>
Place `@human_feedback` as the innermost decorator (last/closest to the function) so it wraps the method directly and can capture the return value before passing to the flow system.
</Tip>
## Best Practices
### 1. Write Clear Request Messages
The `message` parameter is what the human sees. Make it actionable:
```python Code
# ✅ Good - clear and actionable
@human_feedback(message="Does this summary accurately capture the key points? Reply 'yes' or explain what's missing:")
# ❌ Bad - vague
@human_feedback(message="Review this:")
```
### 2. Choose Meaningful Outcomes
When using `emit`, pick outcomes that map naturally to human responses:
```python Code
# ✅ Good - natural language outcomes
emit=["approved", "rejected", "needs_more_detail"]
# ❌ Bad - technical or unclear
emit=["state_1", "state_2", "state_3"]
```
### 3. Always Provide a Default Outcome
Use `default_outcome` to handle cases where users press Enter without typing:
```python Code
@human_feedback(
message="Approve? (press Enter to request revision)",
emit=["approved", "needs_revision"],
llm="gpt-4o-mini",
default_outcome="needs_revision", # Safe default
)
```
### 4. Use Feedback History for Audit Trails
Access `human_feedback_history` to create audit logs:
```python Code
@listen(final_step)
def create_audit_log(self):
log = []
for fb in self.human_feedback_history:
log.append({
"step": fb.method_name,
"outcome": fb.outcome,
"feedback": fb.feedback,
"timestamp": fb.timestamp.isoformat(),
})
return log
```
### 5. Handle Both Routed and Non-Routed Feedback
When designing flows, consider whether you need routing:
| Scenario | Use |
|----------|-----|
| Simple review, just need the feedback text | No `emit` |
| Need to branch to different paths based on response | Use `emit` |
| Approval gates with approve/reject/revise | Use `emit` |
| Collecting comments for logging only | No `emit` |
## Async Human Feedback (Non-Blocking)
By default, `@human_feedback` blocks execution waiting for console input. For production applications, you may need **async/non-blocking** feedback that integrates with external systems like Slack, email, webhooks, or APIs.
### The Provider Abstraction
Use the `provider` parameter to specify a custom feedback collection strategy:
```python Code
from crewai.flow import Flow, start, human_feedback, HumanFeedbackProvider, HumanFeedbackPending, PendingFeedbackContext
class WebhookProvider(HumanFeedbackProvider):
"""Provider that pauses flow and waits for webhook callback."""
def __init__(self, webhook_url: str):
self.webhook_url = webhook_url
def request_feedback(self, context: PendingFeedbackContext, flow: Flow) -> str:
# Notify external system (e.g., send Slack message, create ticket)
self.send_notification(context)
# Pause execution - framework handles persistence automatically
raise HumanFeedbackPending(
context=context,
callback_info={"webhook_url": f"{self.webhook_url}/{context.flow_id}"}
)
class ReviewFlow(Flow):
@start()
@human_feedback(
message="Review this content:",
emit=["approved", "rejected"],
llm="gpt-4o-mini",
provider=WebhookProvider("https://myapp.com/api"),
)
def generate_content(self):
return "AI-generated content..."
@listen("approved")
def publish(self, result):
return "Published!"
```
<Tip>
The flow framework **automatically persists state** when `HumanFeedbackPending` is raised. Your provider only needs to notify the external system and raise the exception—no manual persistence calls required.
</Tip>
### Handling Paused Flows
When using an async provider, `kickoff()` returns a `HumanFeedbackPending` object instead of raising an exception:
```python Code
flow = ReviewFlow()
result = flow.kickoff()
if isinstance(result, HumanFeedbackPending):
# Flow is paused, state is automatically persisted
print(f"Waiting for feedback at: {result.callback_info['webhook_url']}")
print(f"Flow ID: {result.context.flow_id}")
else:
# Normal completion
print(f"Flow completed: {result}")
```
### Resuming a Paused Flow
When feedback arrives (e.g., via webhook), resume the flow:
```python Code
# Sync handler:
def handle_feedback_webhook(flow_id: str, feedback: str):
flow = ReviewFlow.from_pending(flow_id)
result = flow.resume(feedback)
return result
# Async handler (FastAPI, aiohttp, etc.):
async def handle_feedback_webhook(flow_id: str, feedback: str):
flow = ReviewFlow.from_pending(flow_id)
result = await flow.resume_async(feedback)
return result
```
### Key Types
| Type | Description |
|------|-------------|
| `HumanFeedbackProvider` | Protocol for custom feedback providers |
| `PendingFeedbackContext` | Contains all info needed to resume a paused flow |
| `HumanFeedbackPending` | Returned by `kickoff()` when flow is paused for feedback |
| `ConsoleProvider` | Default blocking console input provider |
### PendingFeedbackContext
The context contains everything needed to resume:
```python Code
@dataclass
class PendingFeedbackContext:
flow_id: str # Unique identifier for this flow execution
flow_class: str # Fully qualified class name
method_name: str # Method that triggered feedback
method_output: Any # Output shown to the human
message: str # The request message
emit: list[str] | None # Possible outcomes for routing
default_outcome: str | None
metadata: dict # Custom metadata
llm: str | None # LLM for outcome collapsing
requested_at: datetime
```
### Complete Async Flow Example
```python Code
from crewai.flow import (
Flow, start, listen, human_feedback,
HumanFeedbackProvider, HumanFeedbackPending, PendingFeedbackContext
)
class SlackNotificationProvider(HumanFeedbackProvider):
"""Provider that sends Slack notifications and pauses for async feedback."""
def __init__(self, channel: str):
self.channel = channel
def request_feedback(self, context: PendingFeedbackContext, flow: Flow) -> str:
# Send Slack notification (implement your own)
slack_thread_id = self.post_to_slack(
channel=self.channel,
message=f"Review needed:\n\n{context.method_output}\n\n{context.message}",
)
# Pause execution - framework handles persistence automatically
raise HumanFeedbackPending(
context=context,
callback_info={
"slack_channel": self.channel,
"thread_id": slack_thread_id,
}
)
class ContentPipeline(Flow):
@start()
@human_feedback(
message="Approve this content for publication?",
emit=["approved", "rejected", "needs_revision"],
llm="gpt-4o-mini",
default_outcome="needs_revision",
provider=SlackNotificationProvider("#content-reviews"),
)
def generate_content(self):
return "AI-generated blog post content..."
@listen("approved")
def publish(self, result):
print(f"Publishing! Reviewer said: {result.feedback}")
return {"status": "published"}
@listen("rejected")
def archive(self, result):
print(f"Archived. Reason: {result.feedback}")
return {"status": "archived"}
@listen("needs_revision")
def queue_revision(self, result):
print(f"Queued for revision: {result.feedback}")
return {"status": "revision_needed"}
# Starting the flow (will pause and wait for Slack response)
def start_content_pipeline():
flow = ContentPipeline()
result = flow.kickoff()
if isinstance(result, HumanFeedbackPending):
return {"status": "pending", "flow_id": result.context.flow_id}
return result
# Resuming when Slack webhook fires (sync handler)
def on_slack_feedback(flow_id: str, slack_message: str):
flow = ContentPipeline.from_pending(flow_id)
result = flow.resume(slack_message)
return result
# If your handler is async (FastAPI, aiohttp, Slack Bolt async, etc.)
async def on_slack_feedback_async(flow_id: str, slack_message: str):
flow = ContentPipeline.from_pending(flow_id)
result = await flow.resume_async(slack_message)
return result
```
<Warning>
If you're using an async web framework (FastAPI, aiohttp, Slack Bolt async mode), use `await flow.resume_async()` instead of `flow.resume()`. Calling `resume()` from within a running event loop will raise a `RuntimeError`.
</Warning>
### Best Practices for Async Feedback
1. **Check the return type**: `kickoff()` returns `HumanFeedbackPending` when paused—no try/except needed
2. **Use the right resume method**: Use `resume()` in sync code, `await resume_async()` in async code
3. **Store callback info**: Use `callback_info` to store webhook URLs, ticket IDs, etc.
4. **Implement idempotency**: Your resume handler should be idempotent for safety (see the sketch after this list)
5. **Automatic persistence**: State is automatically saved when `HumanFeedbackPending` is raised and uses `SQLiteFlowPersistence` by default
6. **Custom persistence**: Pass a custom persistence instance to `from_pending()` if needed
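As an illustration of points 2 and 4, a webhook handler might guard against duplicate deliveries before resuming. A minimal sketch (the `already_resumed` and `mark_resumed` helpers are hypothetical placeholders for your own bookkeeping):
```python Code
def handle_feedback_webhook(flow_id: str, feedback: str):
    # Idempotency guard: webhook providers often deliver the same event twice
    if already_resumed(flow_id):  # hypothetical lookup in your own store
        return {"status": "ignored", "reason": "already resumed"}

    flow = ReviewFlow.from_pending(flow_id)
    result = flow.resume(feedback)

    mark_resumed(flow_id)  # hypothetical bookkeeping call
    return result
```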
## Related Documentation
- [Flows Overview](/en/concepts/flows) - Learn about CrewAI Flows
- [Flow State Management](/en/guides/flows/mastering-flow-state) - Managing state in flows
- [Flow Persistence](/en/concepts/flows#persistence) - Persisting flow state
- [Routing with @router](/en/concepts/flows#router) - More about conditional routing
- [Human Input on Execution](/en/learn/human-input-on-execution) - Task-level human input

View File

@@ -5,9 +5,22 @@ icon: "user-check"
mode: "wide"
---
Human-in-the-Loop (HITL) is a powerful approach that combines artificial intelligence with human expertise to enhance decision-making and improve task outcomes. CrewAI provides multiple ways to implement HITL depending on your needs.
## Choosing Your HITL Approach
CrewAI offers two main approaches for implementing human-in-the-loop workflows:
| Approach | Best For | Integration |
|----------|----------|-------------|
| **Flow-based** (`@human_feedback` decorator) | Local development, console-based review, synchronous workflows | [Human Feedback in Flows](/en/learn/human-feedback-in-flows) |
| **Webhook-based** (Enterprise) | Production deployments, async workflows, external integrations (Slack, Teams, etc.) | This guide |
<Tip>
If you're building flows and want to add human review steps with routing based on feedback, check out the [Human Feedback in Flows](/en/learn/human-feedback-in-flows) guide for the `@human_feedback` decorator.
</Tip>
## Setting Up Webhook-Based HITL Workflows
<Steps>
<Step title="Configure Your Task">

View File

@@ -35,7 +35,7 @@ info:
1. **Discover inputs** using `GET /inputs`
2. **Start execution** using `POST /kickoff`
      3. **Monitor progress** using `GET /{kickoff_id}/status`
version: 1.0.0
contact:
name: CrewAI Support
@@ -63,7 +63,7 @@ paths:
Use this endpoint to discover what inputs you need to provide when starting a crew execution.
operationId: getRequiredInputs
responses:
        "200":
description: Successfully retrieved required inputs
content:
application/json:
@@ -84,13 +84,21 @@ paths:
outreach_crew:
summary: Outreach crew inputs
value:
inputs: ["name", "title", "company", "industry", "our_product", "linkedin_url"]
'401':
$ref: '#/components/responses/UnauthorizedError'
'404':
$ref: '#/components/responses/NotFoundError'
'500':
$ref: '#/components/responses/ServerError'
inputs:
[
"name",
"title",
"company",
"industry",
"our_product",
"linkedin_url",
]
"401":
$ref: "#/components/responses/UnauthorizedError"
"404":
$ref: "#/components/responses/NotFoundError"
"500":
$ref: "#/components/responses/ServerError"
/kickoff:
post:
@@ -170,7 +178,7 @@ paths:
taskWebhookUrl: "https://api.example.com/webhooks/task"
crewWebhookUrl: "https://api.example.com/webhooks/crew"
responses:
        "200":
description: Crew execution started successfully
content:
application/json:
@@ -182,24 +190,24 @@ paths:
format: uuid
description: Unique identifier for tracking this execution
example: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
        "400":
          description: Invalid request body or missing required inputs
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
        "401":
          $ref: "#/components/responses/UnauthorizedError"
        "422":
          description: Validation error - ensure all required inputs are provided
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/ValidationError"
        "500":
          $ref: "#/components/responses/ServerError"
  /{kickoff_id}/status:
get:
summary: Get Execution Status
description: |
@@ -222,15 +230,15 @@ paths:
format: uuid
example: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
responses:
        "200":
description: Successfully retrieved execution status
content:
application/json:
schema:
oneOf:
                  - $ref: "#/components/schemas/ExecutionRunning"
                  - $ref: "#/components/schemas/ExecutionCompleted"
                  - $ref: "#/components/schemas/ExecutionError"
examples:
running:
summary: Execution in progress
@@ -262,19 +270,19 @@ paths:
status: "error"
error: "Task execution failed: Invalid API key for external service"
execution_time: 23.1
        "401":
          $ref: "#/components/responses/UnauthorizedError"
        "404":
description: Kickoff ID not found
content:
application/json:
schema:
                $ref: "#/components/schemas/Error"
              example:
                error: "Execution not found"
                message: "No execution found with ID: abcd1234-5678-90ef-ghij-klmnopqrstuv"
        "500":
          $ref: "#/components/responses/ServerError"
/resume:
post:
@@ -354,7 +362,7 @@ paths:
taskWebhookUrl: "https://api.example.com/webhooks/task"
crewWebhookUrl: "https://api.example.com/webhooks/crew"
responses:
        "200":
description: Execution resumed successfully
content:
application/json:
@@ -381,28 +389,28 @@ paths:
value:
status: "retrying"
message: "Task will be retried with your feedback"
        "400":
          description: Invalid request body or execution not in pending state
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
              example:
                error: "Invalid Request"
                message: "Execution is not in pending human input state"
        "401":
          $ref: "#/components/responses/UnauthorizedError"
        "404":
          description: Execution ID or Task ID not found
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
              example:
                error: "Not Found"
                message: "Execution ID not found"
        "500":
          $ref: "#/components/responses/ServerError"
components:
securitySchemes:
@@ -458,7 +466,7 @@ components:
tasks:
type: array
items:
            $ref: "#/components/schemas/TaskResult"
execution_time:
type: number
description: Total execution time in seconds
@@ -536,7 +544,7 @@ components:
content:
application/json:
schema:
            $ref: "#/components/schemas/Error"
example:
error: "Unauthorized"
message: "Invalid or missing bearer token"
@@ -546,7 +554,7 @@ components:
content:
application/json:
schema:
            $ref: "#/components/schemas/Error"
example:
error: "Not Found"
message: "The requested resource was not found"
@@ -556,7 +564,7 @@ components:
content:
application/json:
schema:
            $ref: "#/components/schemas/Error"
example:
error: "Internal Server Error"
message: "An unexpected error occurred"

View File

@@ -35,7 +35,7 @@ info:
1. **Discover inputs** using `GET /inputs`
2. **Start execution** using `POST /kickoff`
      3. **Monitor progress** using `GET /{kickoff_id}/status`
version: 1.0.0
contact:
name: CrewAI Support
@@ -63,7 +63,7 @@ paths:
Use this endpoint to discover what inputs you need to provide when starting a crew execution.
operationId: getRequiredInputs
responses:
        "200":
description: Successfully retrieved required inputs
content:
application/json:
@@ -84,13 +84,21 @@ paths:
outreach_crew:
summary: Outreach crew inputs
value:
inputs: ["name", "title", "company", "industry", "our_product", "linkedin_url"]
'401':
$ref: '#/components/responses/UnauthorizedError'
'404':
$ref: '#/components/responses/NotFoundError'
'500':
$ref: '#/components/responses/ServerError'
inputs:
[
"name",
"title",
"company",
"industry",
"our_product",
"linkedin_url",
]
"401":
$ref: "#/components/responses/UnauthorizedError"
"404":
$ref: "#/components/responses/NotFoundError"
"500":
$ref: "#/components/responses/ServerError"
/kickoff:
post:
@@ -170,7 +178,7 @@ paths:
taskWebhookUrl: "https://api.example.com/webhooks/task"
crewWebhookUrl: "https://api.example.com/webhooks/crew"
responses:
        "200":
description: Crew execution started successfully
content:
application/json:
@@ -182,24 +190,24 @@ paths:
format: uuid
description: Unique identifier for tracking this execution
example: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
        "400":
          description: Invalid request body or missing required inputs
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
        "401":
          $ref: "#/components/responses/UnauthorizedError"
        "422":
          description: Validation error - ensure all required inputs are provided
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/ValidationError"
        "500":
          $ref: "#/components/responses/ServerError"
  /{kickoff_id}/status:
get:
summary: Get Execution Status
description: |
@@ -222,15 +230,15 @@ paths:
format: uuid
example: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
responses:
        "200":
description: Successfully retrieved execution status
content:
application/json:
schema:
oneOf:
                  - $ref: "#/components/schemas/ExecutionRunning"
                  - $ref: "#/components/schemas/ExecutionCompleted"
                  - $ref: "#/components/schemas/ExecutionError"
examples:
running:
summary: Execution in progress
@@ -262,19 +270,19 @@ paths:
status: "error"
error: "Task execution failed: Invalid API key for external service"
execution_time: 23.1
        "401":
          $ref: "#/components/responses/UnauthorizedError"
        "404":
description: Kickoff ID not found
content:
application/json:
schema:
                $ref: "#/components/schemas/Error"
              example:
                error: "Execution not found"
                message: "No execution found with ID: abcd1234-5678-90ef-ghij-klmnopqrstuv"
        "500":
          $ref: "#/components/responses/ServerError"
/resume:
post:
@@ -354,7 +362,7 @@ paths:
taskWebhookUrl: "https://api.example.com/webhooks/task"
crewWebhookUrl: "https://api.example.com/webhooks/crew"
responses:
        "200":
description: Execution resumed successfully
content:
application/json:
@@ -381,28 +389,28 @@ paths:
value:
status: "retrying"
message: "Task will be retried with your feedback"
        "400":
          description: Invalid request body or execution not in pending state
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
              example:
                error: "Invalid Request"
                message: "Execution is not in pending human input state"
        "401":
          $ref: "#/components/responses/UnauthorizedError"
        "404":
          description: Execution ID or Task ID not found
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
              example:
                error: "Not Found"
                message: "Execution ID not found"
        "500":
          $ref: "#/components/responses/ServerError"
components:
securitySchemes:
@@ -458,7 +466,7 @@ components:
tasks:
type: array
items:
            $ref: "#/components/schemas/TaskResult"
execution_time:
type: number
description: Total execution time in seconds
@@ -536,7 +544,7 @@ components:
content:
application/json:
schema:
            $ref: "#/components/schemas/Error"
example:
error: "Unauthorized"
message: "Invalid or missing bearer token"
@@ -546,7 +554,7 @@ components:
content:
application/json:
schema:
            $ref: "#/components/schemas/Error"
example:
error: "Not Found"
message: "The requested resource was not found"
@@ -556,7 +564,7 @@ components:
content:
application/json:
schema:
            $ref: "#/components/schemas/Error"
example:
error: "Internal Server Error"
message: "An unexpected error occurred"

View File

@@ -84,7 +84,7 @@ paths:
'500':
$ref: '#/components/responses/ServerError'
  /{kickoff_id}/status:
get:
      summary: Get Execution Status
description: |

View File

@@ -35,7 +35,7 @@ info:
      1. **Discover inputs** using `GET /inputs`
      2. **Start execution** using `POST /kickoff`
      3. **Monitor progress** using `GET /{kickoff_id}/status`
version: 1.0.0
contact:
      name: CrewAI Support
@@ -56,7 +56,7 @@ paths:
        Returns the list of input parameters your crew expects.
operationId: getRequiredInputs
responses:
        "200":
          description: Successfully retrieved required inputs
content:
application/json:
@@ -69,12 +69,12 @@ paths:
type: string
                  description: Names of the input parameters
example: ["budget", "interests", "duration", "age"]
        "401":
          $ref: "#/components/responses/UnauthorizedError"
        "404":
          $ref: "#/components/responses/NotFoundError"
        "500":
          $ref: "#/components/responses/ServerError"
/kickoff:
post:
@@ -104,7 +104,7 @@ paths:
age: "35"
responses:
        "200":
          description: Execution started successfully
content:
application/json:
@@ -115,12 +115,12 @@ paths:
type: string
format: uuid
example: "abcd1234-5678-90ef-ghij-klmnopqrstuv"
        "401":
          $ref: "#/components/responses/UnauthorizedError"
        "500":
          $ref: "#/components/responses/ServerError"
  /{kickoff_id}/status:
get:
      summary: Get Execution Status
description: |
@@ -136,25 +136,25 @@ paths:
type: string
format: uuid
responses:
        "200":
          description: Status retrieved successfully
content:
application/json:
schema:
oneOf:
                  - $ref: "#/components/schemas/ExecutionRunning"
                  - $ref: "#/components/schemas/ExecutionCompleted"
                  - $ref: "#/components/schemas/ExecutionError"
        "401":
          $ref: "#/components/responses/UnauthorizedError"
        "404":
          description: Kickoff ID not found
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
        "500":
          $ref: "#/components/responses/ServerError"
/resume:
post:
@@ -234,7 +234,7 @@ paths:
taskWebhookUrl: "https://api.example.com/webhooks/task"
crewWebhookUrl: "https://api.example.com/webhooks/crew"
responses:
        "200":
description: Execution resumed successfully
content:
application/json:
@@ -261,28 +261,28 @@ paths:
value:
status: "retrying"
message: "Task will be retried with your feedback"
        "400":
description: Invalid request body or execution not in pending state
content:
application/json:
schema:
                $ref: "#/components/schemas/Error"
example:
error: "Invalid Request"
message: "Execution is not in pending human input state"
        "401":
          $ref: "#/components/responses/UnauthorizedError"
        "404":
description: Execution ID or Task ID not found
content:
application/json:
schema:
                $ref: "#/components/schemas/Error"
example:
error: "Not Found"
message: "Execution ID not found"
        "500":
          $ref: "#/components/responses/ServerError"
components:
securitySchemes:
@@ -324,7 +324,7 @@ components:
tasks:
type: array
items:
            $ref: "#/components/schemas/TaskResult"
execution_time:
type: number
@@ -380,16 +380,16 @@ components:
content:
application/json:
schema:
            $ref: "#/components/schemas/Error"
NotFoundError:
      description: Resource not found
content:
application/json:
schema:
            $ref: "#/components/schemas/Error"
ServerError:
      description: Internal server error
content:
application/json:
schema:
            $ref: "#/components/schemas/Error"

View File

@@ -16,16 +16,17 @@ Welcome to the CrewAI Enterprise API reference.
CrewAI AOP 대시보드에서 자신의 crew 상세 페이지로 이동하여 Status 탭에서 Bearer Token을 복사하세요.
</Step>
<Step title="Discover Required Inputs">
  Use the `GET /inputs` endpoint to see what parameters your crew expects.
</Step>
<Step title="Start a Crew Execution">
  Call `POST /kickoff` with your inputs to start the crew execution and receive a `kickoff_id`.
</Step>
<Step title="Monitor Progress">
  Use `GET /{kickoff_id}/status` to check execution status and retrieve results.
</Step>
</Steps>
@@ -40,13 +41,14 @@ curl -H "Authorization: Bearer YOUR_CREW_TOKEN" \
### Token Types
| Token Type            | Scope                      | Use Case                                                      |
| :-------------------- | :------------------------- | :------------------------------------------------------------ |
| **Bearer Token**      | Organization-level access  | Full crew operations, ideal for server-to-server integration  |
| **User Bearer Token** | User-scoped access         | Limited permissions, suitable for user-specific operations    |
<Tip>
You can find both token types in the Status tab of your crew's detail page in the CrewAI AOP dashboard.
</Tip>
## Base URL
@@ -63,29 +65,33 @@ https://your-crew-name.crewai.com
1. **Discovery**: Call `GET /inputs` to understand what your crew needs
2. **Execution**: Submit inputs via `POST /kickoff` to start processing
3. **Monitoring**: Poll `GET /{kickoff_id}/status` until completion
4. **Results**: Extract the final output from the completed response
## Error Handling
The API uses standard HTTP status codes:
| Code  | Meaning                                    |
| ----- | :----------------------------------------- |
| `200` | Success                                    |
| `400` | Bad Request - Invalid input format         |
| `401` | Unauthorized - Invalid bearer token        |
| `404` | Not Found - Resource doesn't exist         |
| `422` | Validation Error - Missing required inputs |
| `500` | Server Error - Contact support             |
## Interactive Testing
<Info>
**Why no "Send" button?** Since each CrewAI AOP user has their own unique crew URL, we use **reference mode** instead of an interactive playground to avoid confusion. This shows you exactly what the requests should look like without non-functional send buttons.
</Info>
Each endpoint page shows you:
- ✅ **Exact request format** with all parameters
- ✅ **Response examples** for success and error cases
- ✅ **Code samples** in multiple languages (cURL, Python, JavaScript, etc.)
@@ -103,6 +109,7 @@ The API uses standard HTTP status codes:
</CardGroup>
**Example workflow:**
1. **Copy this cURL example** from any endpoint page
2. **Replace `your-actual-crew-name.crewai.com`** with your real crew URL
3. **Replace the Bearer token** with your real token from the dashboard
@@ -111,10 +118,18 @@ The API uses standard HTTP status codes:
## Need Help?
<CardGroup cols={2}>
  <Card
    title="Enterprise Support"
    icon="headset"
    href="mailto:support@crewai.com"
  >
    Get help with API integration and troubleshooting
  </Card>
  <Card
    title="Enterprise Dashboard"
    icon="chart-line"
    href="https://app.crewai.com"
  >
    Manage your crews and view execution logs
  </Card>
</CardGroup>

View File

@@ -1,8 +1,6 @@
---
title: "GET /status/{kickoff_id}"
title: "GET /{kickoff_id}/status"
description: "실행 상태 조회"
openapi: "/enterprise-api.ko.yaml GET /status/{kickoff_id}"
openapi: "/enterprise-api.ko.yaml GET /{kickoff_id}/status"
mode: "wide"
---

View File

@@ -565,6 +565,55 @@ Fourth method running
When you run this Flow, the output changes based on the random boolean value generated by the `start_method`.
### Human in the Loop (human feedback)
The `@human_feedback` decorator enables human-in-the-loop workflows by pausing flow execution to collect feedback from a human. This is useful for approval gates, quality review, and decision points that require human judgment.
```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult

class ReviewFlow(Flow):
    @start()
    @human_feedback(
        message="Do you approve this content?",
        emit=["approved", "rejected", "needs_revision"],
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
    )
    def generate_content(self):
        return "Content to be reviewed..."

    @listen("approved")
    def on_approval(self, result: HumanFeedbackResult):
        print(f"Approved! Feedback: {result.feedback}")

    @listen("rejected")
    def on_rejection(self, result: HumanFeedbackResult):
        print(f"Rejected. Reason: {result.feedback}")
```
When `emit` is specified, the human's free-form feedback is interpreted by an LLM and collapsed into one of the specified outcomes, which then triggers the corresponding `@listen` decorator.
You can also use `@human_feedback` without routing to simply collect feedback:
```python Code
@start()
@human_feedback(message="Any comments on this output?")
def my_method(self):
    return "Output for review"

@listen(my_method)
def next_step(self, result: HumanFeedbackResult):
    # Access feedback via result.feedback
    # Access original output via result.output
    pass
```
Access all feedback collected during a flow via `self.last_human_feedback` (most recent) or `self.human_feedback_history` (all feedback as a list).
For a complete guide on human feedback in flows, including **async/non-blocking feedback** with custom providers (Slack, webhooks, etc.), see [Human Feedback in Flows](/ko/learn/human-feedback-in-flows).
## Adding Agents to Flows
Agents can be seamlessly integrated into your flows, providing a lightweight alternative to full Crews when you need simpler, focused task execution. Here's an example of how to use an Agent within a flow to perform market research:

View File

@@ -62,13 +62,13 @@ Test your Gmail trigger integration locally using the CrewAI CLI:
crewai triggers list
# Simulate a Gmail trigger with a realistic payload
crewai triggers run gmail/new_email_received
```
The `crewai triggers run` command executes your crew with a complete Gmail payload, allowing you to test your parsing logic before deployment.
<Warning>
Use `crewai triggers run gmail/new_email_received` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
</Warning>
## Monitoring Executions
@@ -83,6 +83,6 @@ Track history and performance of triggered runs:
- Ensure Gmail is connected in Tools & Integrations
- Verify the Gmail Trigger is enabled on the Triggers tab
- Test locally with `crewai triggers run gmail/new_email_received` to see the exact payload structure
- Check the execution logs and confirm the payload is passed as `crewai_trigger_payload`
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution

View File

@@ -0,0 +1,581 @@
---
title: Human Feedback in Flows
description: Learn how to integrate human feedback directly into your CrewAI Flows using the @human_feedback decorator
icon: user-check
mode: "wide"
---
## Overview
The `@human_feedback` decorator enables human-in-the-loop (HITL) workflows directly within CrewAI Flows. It allows you to pause flow execution, present output to a human for review, collect their feedback, and optionally route to different listeners based on the feedback outcome.
This is particularly valuable for:
- **Quality assurance**: Review AI-generated content before it's used downstream
- **Decision gates**: Let humans make critical decisions in automated workflows
- **Approval workflows**: Implement approve/reject/revise patterns
- **Interactive refinement**: Collect feedback to improve outputs iteratively
```mermaid
flowchart LR
    A[Flow Method] --> B[Output Generated]
    B --> C[Human Reviews]
    C --> D{Feedback}
    D -->|emit specified| E[LLM Collapses to Outcome]
    D -->|no emit| F[HumanFeedbackResult]
    E --> G["@listen('approved')"]
    E --> H["@listen('rejected')"]
    F --> I[Next Listener]
```
## Quick Start
Here's the simplest way to add human feedback to a flow:
```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback

class SimpleReviewFlow(Flow):
    @start()
    @human_feedback(message="Please review this content:")
    def generate_content(self):
        return "This is AI-generated content that needs review."

    @listen(generate_content)
    def process_feedback(self, result):
        print(f"Content: {result.output}")
        print(f"Human said: {result.feedback}")

flow = SimpleReviewFlow()
flow.kickoff()
```
When this flow runs, it will:
1. Execute `generate_content` and return the string
2. Display the output to the user with the request message
3. Wait for the user to type feedback (or press Enter to skip)
4. Pass a `HumanFeedbackResult` object to `process_feedback`
## The @human_feedback Decorator
### Parameters
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `message` | `str` | Yes | The message shown to the human alongside the method output |
| `emit` | `Sequence[str]` | No | List of possible outcomes. Feedback is collapsed to one of these, which triggers `@listen` decorators |
| `llm` | `str \| BaseLLM` | When `emit` specified | LLM used to interpret feedback and map to an outcome |
| `default_outcome` | `str` | No | Outcome to use if no feedback provided. Must be in `emit` |
| `metadata` | `dict` | No | Additional data for enterprise integrations |
| `provider` | `HumanFeedbackProvider` | No | Custom provider for async/non-blocking feedback. See [Async Human Feedback](#async-human-feedback-non-blocking) |
### Basic Usage (No Routing)
When you don't specify `emit`, the decorator simply collects feedback and passes a `HumanFeedbackResult` to the next listener:
```python Code
@start()
@human_feedback(message="What do you think of this analysis?")
def analyze_data(self):
    return "Analysis results: Revenue up 15%, costs down 8%"

@listen(analyze_data)
def handle_feedback(self, result):
    # result is a HumanFeedbackResult
    print(f"Analysis: {result.output}")
    print(f"Feedback: {result.feedback}")
```
### Routing with emit
When you specify `emit`, the decorator becomes a router. The human's free-form feedback is interpreted by an LLM and collapsed into one of the specified outcomes:
```python Code
@start()
@human_feedback(
    message="Do you approve this content for publication?",
    emit=["approved", "rejected", "needs_revision"],
    llm="gpt-4o-mini",
    default_outcome="needs_revision",
)
def review_content(self):
    return "Draft blog post content here..."

@listen("approved")
def publish(self, result):
    print(f"Publishing! User said: {result.feedback}")

@listen("rejected")
def discard(self, result):
    print(f"Discarding. Reason: {result.feedback}")

@listen("needs_revision")
def revise(self, result):
    print(f"Revising based on: {result.feedback}")
```
<Tip>
The LLM uses structured outputs (function calling) when available to guarantee the response is one of your specified outcomes. This makes routing reliable and predictable.
</Tip>
## HumanFeedbackResult
The `HumanFeedbackResult` dataclass contains all information about a human feedback interaction:
```python Code
from crewai.flow.human_feedback import HumanFeedbackResult

@dataclass
class HumanFeedbackResult:
    output: Any          # The original method output shown to the human
    feedback: str        # The raw feedback text from the human
    outcome: str | None  # The collapsed outcome (if emit was specified)
    timestamp: datetime  # When the feedback was received
    method_name: str     # Name of the decorated method
    metadata: dict       # Any metadata passed to the decorator
```
### Accessing in Listeners
When a listener is triggered by a `@human_feedback` method with `emit`, it receives the `HumanFeedbackResult`:
```python Code
@listen("approved")
def on_approval(self, result: HumanFeedbackResult):
    print(f"Original output: {result.output}")
    print(f"User feedback: {result.feedback}")
    print(f"Outcome: {result.outcome}")  # "approved"
    print(f"Received at: {result.timestamp}")
```
## Accessing Feedback History
The `Flow` class provides two attributes for accessing human feedback:
### last_human_feedback
Returns the most recent `HumanFeedbackResult`:
```python Code
@listen(some_method)
def check_feedback(self):
    if self.last_human_feedback:
        print(f"Last feedback: {self.last_human_feedback.feedback}")
```
### human_feedback_history
A list of all `HumanFeedbackResult` objects collected during the flow:
```python Code
@listen(final_step)
def summarize(self):
    print(f"Total feedback collected: {len(self.human_feedback_history)}")
    for i, fb in enumerate(self.human_feedback_history):
        print(f"{i+1}. {fb.method_name}: {fb.outcome or 'no routing'}")
```
<Warning>
Each `HumanFeedbackResult` is appended to `human_feedback_history`, so multiple feedback steps won't overwrite each other. Use this list to access all feedback collected during the flow.
</Warning>
## Complete Example: Content Approval Workflow

Here is a full example implementing a content review and approval workflow:

<CodeGroup>

```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult
from pydantic import BaseModel


class ContentState(BaseModel):
    topic: str = ""
    draft: str = ""
    final_content: str = ""
    revision_count: int = 0


class ContentApprovalFlow(Flow[ContentState]):
    """A flow that generates content and gets human approval."""

    @start()
    def get_topic(self):
        self.state.topic = input("What topic should I write about? ")
        return self.state.topic

    @listen(get_topic)
    def generate_draft(self, topic):
        # In real usage, this would call an LLM
        self.state.draft = f"# {topic}\n\nThis is a draft about {topic}..."
        return self.state.draft

    @listen(generate_draft)
    @human_feedback(
        message="Please review this draft. Reply 'approved', 'rejected', or provide revision feedback:",
        emit=["approved", "rejected", "needs_revision"],
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
    )
    def review_draft(self, draft):
        return draft

    @listen("approved")
    def publish_content(self, result: HumanFeedbackResult):
        self.state.final_content = result.output
        print("\n✅ Content approved and published!")
        print(f"Reviewer comment: {result.feedback}")
        return "published"

    @listen("rejected")
    def handle_rejection(self, result: HumanFeedbackResult):
        print("\n❌ Content rejected")
        print(f"Reason: {result.feedback}")
        return "rejected"

    @listen("needs_revision")
    def revise_content(self, result: HumanFeedbackResult):
        self.state.revision_count += 1
        print(f"\n📝 Revision #{self.state.revision_count} requested")
        print(f"Feedback: {result.feedback}")
        # In a real flow, you might loop back to generate_draft
        # For this example, we simply acknowledge the request
        return "revision_requested"


# Run the flow
flow = ContentApprovalFlow()
result = flow.kickoff()
print(f"\nFlow finished. Revisions requested: {flow.state.revision_count}")
```

```text Output
What topic should I write about? AI safety

==================================================
OUTPUT FOR REVIEW:
==================================================
# AI safety

This is a draft about AI safety...
==================================================
Please review this draft. Reply 'approved', 'rejected', or provide revision feedback:
(Press Enter to skip, or type your feedback)
Your feedback: Looks good, approved!

✅ Content approved and published!
Reviewer comment: Looks good, approved!

Flow finished. Revisions requested: 0
```

</CodeGroup>
## Combining with Other Decorators

The `@human_feedback` decorator works together with the other flow decorators. Place it as the innermost decorator (closest to the function):

```python Code
# Correct: @human_feedback is innermost (closest to the function)
@start()
@human_feedback(message="Review this:")
def my_start_method(self):
    return "content"

@listen(other_method)
@human_feedback(message="Review this too:")
def my_listener(self, data):
    return f"processed: {data}"
```

<Tip>
Place `@human_feedback` as the innermost decorator (last, closest to the function) so that it wraps the method directly and can capture the return value before handing it to the flow system.
</Tip>
## Best Practices

### 1. Write Clear Prompt Messages

The `message` parameter is what the human sees. Make it actionable:

```python Code
# ✅ Good - clear and actionable
@human_feedback(message="Does this summary accurately capture the key points? Reply 'yes' or explain what's missing:")

# ❌ Bad - vague
@human_feedback(message="Review this:")
```

### 2. Choose Meaningful Outcomes

When using `emit`, pick outcomes that map naturally to human responses:

```python Code
# ✅ Good - natural-language outcomes
emit=["approved", "rejected", "needs_more_detail"]

# ❌ Bad - technical or unclear
emit=["state_1", "state_2", "state_3"]
```

### 3. Always Provide a Default Outcome

Use `default_outcome` to handle the case where users press Enter without typing anything:

```python Code
@human_feedback(
    message="Approve? (press Enter to request revision)",
    emit=["approved", "needs_revision"],
    llm="gpt-4o-mini",
    default_outcome="needs_revision",  # Safe default
)
```

### 4. Use the Feedback History for Audit Trails

Access `human_feedback_history` to build audit logs:

```python Code
@listen(final_step)
def create_audit_log(self):
    log = []
    for fb in self.human_feedback_history:
        log.append({
            "step": fb.method_name,
            "outcome": fb.outcome,
            "feedback": fb.feedback,
            "timestamp": fb.timestamp.isoformat(),
        })
    return log
```

### 5. Handle Both Routed and Non-Routed Feedback

When designing flows, consider whether you need routing:

| Scenario | Use |
|----------|-----|
| Simple review, only the feedback text is needed | No `emit` |
| Need to branch to different paths based on the response | Use `emit` |
| Approval gates with approve/reject/revise | Use `emit` |
| Collecting comments for logging only | No `emit` |
## Async Human Feedback (Non-Blocking)

By default, `@human_feedback` blocks execution while waiting for console input. For production applications you may need **async/non-blocking** feedback that integrates with external systems such as Slack, email, webhooks, or APIs.

### The Provider Abstraction

Use the `provider` parameter to specify a custom feedback collection strategy:

```python Code
from crewai.flow import Flow, start, human_feedback, HumanFeedbackProvider, HumanFeedbackPending, PendingFeedbackContext


class WebhookProvider(HumanFeedbackProvider):
    """Provider that pauses the flow and waits for a webhook callback."""

    def __init__(self, webhook_url: str):
        self.webhook_url = webhook_url

    def request_feedback(self, context: PendingFeedbackContext, flow: Flow) -> str:
        # Notify an external system (e.g., send a Slack message, create a ticket)
        self.send_notification(context)
        # Pause execution - the framework handles persistence automatically
        raise HumanFeedbackPending(
            context=context,
            callback_info={"webhook_url": f"{self.webhook_url}/{context.flow_id}"}
        )


class ReviewFlow(Flow):
    @start()
    @human_feedback(
        message="Review this content:",
        emit=["approved", "rejected"],
        llm="gpt-4o-mini",
        provider=WebhookProvider("https://myapp.com/api"),
    )
    def generate_content(self):
        return "AI-generated content..."

    @listen("approved")
    def publish(self, result):
        return "Published!"
```

<Tip>
The flow framework **automatically persists state** when `HumanFeedbackPending` is raised. Your provider only needs to notify the external system and raise the exception—no manual persistence calls are required.
</Tip>
### Handling Paused Flows

When using an async provider, `kickoff()` returns a `HumanFeedbackPending` object instead of raising an exception:

```python Code
flow = ReviewFlow()
result = flow.kickoff()

if isinstance(result, HumanFeedbackPending):
    # The flow is paused; its state has been persisted automatically
    print(f"Awaiting feedback at: {result.callback_info['webhook_url']}")
    print(f"Flow ID: {result.context.flow_id}")
else:
    # Normal completion
    print(f"Flow completed: {result}")
```
### Resuming a Paused Flow

When the feedback arrives (e.g., via a webhook), resume the flow:

```python Code
# Synchronous handler:
def handle_feedback_webhook(flow_id: str, feedback: str):
    flow = ReviewFlow.from_pending(flow_id)
    result = flow.resume(feedback)
    return result

# Asynchronous handler (FastAPI, aiohttp, etc.):
async def handle_feedback_webhook(flow_id: str, feedback: str):
    flow = ReviewFlow.from_pending(flow_id)
    result = await flow.resume_async(feedback)
    return result
```
### Key Types

| Type | Description |
|------|-------------|
| `HumanFeedbackProvider` | Protocol for custom feedback providers |
| `PendingFeedbackContext` | Holds everything needed to resume a paused flow |
| `HumanFeedbackPending` | Returned by `kickoff()` when the flow is paused for feedback |
| `ConsoleProvider` | The default blocking console input provider |

### PendingFeedbackContext

The context contains everything needed to resume:

```python Code
@dataclass
class PendingFeedbackContext:
    flow_id: str             # Unique identifier for this flow execution
    flow_class: str          # Fully qualified class name
    method_name: str         # The method that triggered the feedback
    method_output: Any       # The output shown to the human
    message: str             # The prompt message
    emit: list[str] | None   # Possible outcomes for routing
    default_outcome: str | None
    metadata: dict           # Custom metadata
    llm: str | None          # LLM for outcome mapping
    requested_at: datetime
```
### Complete Async Flow Example

```python Code
from crewai.flow import (
    Flow, start, listen, human_feedback,
    HumanFeedbackProvider, HumanFeedbackPending, PendingFeedbackContext
)


class SlackNotificationProvider(HumanFeedbackProvider):
    """Provider that sends Slack notifications and pauses for async feedback."""

    def __init__(self, channel: str):
        self.channel = channel

    def request_feedback(self, context: PendingFeedbackContext, flow: Flow) -> str:
        # Send the Slack notification (implement this yourself)
        slack_thread_id = self.post_to_slack(
            channel=self.channel,
            message=f"Review needed:\n\n{context.method_output}\n\n{context.message}",
        )
        # Pause execution - the framework handles persistence automatically
        raise HumanFeedbackPending(
            context=context,
            callback_info={
                "slack_channel": self.channel,
                "thread_id": slack_thread_id,
            }
        )


class ContentPipeline(Flow):
    @start()
    @human_feedback(
        message="Do you approve this content for publication?",
        emit=["approved", "rejected", "needs_revision"],
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
        provider=SlackNotificationProvider("#content-reviews"),
    )
    def generate_content(self):
        return "AI-generated blog post content..."

    @listen("approved")
    def publish(self, result):
        print(f"Publishing! Reviewer said: {result.feedback}")
        return {"status": "published"}

    @listen("rejected")
    def archive(self, result):
        print(f"Archived. Reason: {result.feedback}")
        return {"status": "archived"}

    @listen("needs_revision")
    def queue_revision(self, result):
        print(f"Queued for revision: {result.feedback}")
        return {"status": "revision_needed"}


# Start the flow (it pauses while waiting for the Slack response)
def start_content_pipeline():
    flow = ContentPipeline()
    result = flow.kickoff()
    if isinstance(result, HumanFeedbackPending):
        return {"status": "pending", "flow_id": result.context.flow_id}
    return result

# Resume when the Slack webhook fires (synchronous handler)
def on_slack_feedback(flow_id: str, slack_message: str):
    flow = ContentPipeline.from_pending(flow_id)
    result = flow.resume(slack_message)
    return result

# If your handler is async (FastAPI, aiohttp, Slack Bolt async, etc.)
async def on_slack_feedback_async(flow_id: str, slack_message: str):
    flow = ContentPipeline.from_pending(flow_id)
    result = await flow.resume_async(slack_message)
    return result
```
<Warning>
If you are using an async web framework (FastAPI, aiohttp, Slack Bolt in async mode), use `await flow.resume_async()` instead of `flow.resume()`. Calling `resume()` from inside a running event loop raises a `RuntimeError`.
</Warning>

### Best Practices for Async Feedback

1. **Check the return type**: `kickoff()` returns `HumanFeedbackPending` when paused—no try/except needed
2. **Use the right resume method**: Use `resume()` in synchronous code and `await resume_async()` in async code
3. **Store the callback info**: Use `callback_info` to hold webhook URLs, ticket IDs, and so on
4. **Make resumption idempotent**: Your resume handler should be idempotent for safety
5. **Automatic persistence**: State is saved automatically when `HumanFeedbackPending` is raised, using `SQLiteFlowPersistence` by default
6. **Custom persistence**: Pass a custom persistence instance to `from_pending()` if needed (see the sketch below)
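A minimal sketch combining practices 4 and 6: an idempotent webhook handler that hands a custom persistence instance to `from_pending()`. The `SQLiteFlowPersistence` import follows CrewAI's flow persistence module, but the `persistence` keyword argument, database path, and in-memory guard are illustrative assumptions—use a durable store in production:

```python Code
from crewai.flow.persistence import SQLiteFlowPersistence

processed_flow_ids: set[str] = set()  # illustrative; back this with a durable store

def handle_feedback_webhook(flow_id: str, feedback: str):
    # Idempotency guard: webhooks may be delivered more than once
    if flow_id in processed_flow_ids:
        return {"status": "already_processed"}
    processed_flow_ids.add(flow_id)

    # Custom persistence instance (path is illustrative)
    persistence = SQLiteFlowPersistence("flows.db")
    flow = ContentPipeline.from_pending(flow_id, persistence=persistence)
    return flow.resume(feedback)
```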
## Related Documentation

- [Flows Overview](/ko/concepts/flows) - Learn about CrewAI Flows
- [Flow State Management](/ko/guides/flows/mastering-flow-state) - Managing state in flows
- [Flow Persistence](/ko/concepts/flows#persistence) - Persisting flow state
- [Routing with @router](/ko/concepts/flows#router) - More on conditional routing
- [Human Input on Execution](/ko/learn/human-input-on-execution) - Task-level human input
 

View File

@@ -16,16 +16,17 @@ Welcome to the CrewAI AOP API reference. This API allows you to interact
Navigate to your crew's detail page in the CrewAI AOP dashboard and copy your Bearer Token from the Status tab.
</Step>
<Step title="Discover Required Inputs">
Use the `GET /inputs` endpoint to see what parameters your crew expects.
</Step>
<Step title="Discover Required Inputs">
Use the `GET /inputs` endpoint to see what parameters your crew expects.
</Step>
<Step title="Start a Crew Execution">
Call `POST /kickoff` with your inputs to start the crew execution and receive a `kickoff_id`.
</Step>
<Step title="Start a Crew Execution">
Call `POST /kickoff` with your inputs to start the crew execution and
receive a `kickoff_id`.
</Step>
<Step title="Monitor Progress">
Use `GET /status/{kickoff_id}` to check the execution status and retrieve results.
Use `GET /{kickoff_id}/status` to check the execution status and retrieve results.
</Step>
</Steps>
@@ -40,13 +41,14 @@ curl -H "Authorization: Bearer YOUR_CREW_TOKEN" \
### Token Types

| Token Type | Scope | Use Case |
|:--------------------|:------------------------|:---------------------------------------------------------|
| **Bearer Token** | Organization-level access | Full crew operations, ideal for server-to-server integration |
| **User Bearer Token** | User-scoped access | Limited permissions, suitable for user-specific operations |
| Token Type | Scope | Use Case |
| :-------------------- | :------------------------ | :----------------------------------------------------------- |
| **Bearer Token** | Organization-level access | Full crew operations, ideal for server-to-server integration |
| **User Bearer Token** | User-scoped access | Limited permissions, suitable for user-specific operations |
<Tip>
You can find both token types in the Status tab of your crew's detail page in the CrewAI AOP dashboard.
You can find both token types in the Status tab of your crew's detail page in
the CrewAI AOP dashboard.
</Tip>
## Base URL

@@ -63,29 +65,33 @@ Replace `your-crew-name` with your actual crew's URL from the dashboard.
1. **Discovery**: Call `GET /inputs` to understand what your crew needs
2. **Execution**: Submit inputs via `POST /kickoff` to start processing
3. **Monitoring**: Poll `GET /status/{kickoff_id}` until completion
3. **Monitoring**: Poll `GET /{kickoff_id}/status` until completion
4. **Results**: Extract the final output from the completed response
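As a rough sketch of this workflow with cURL (the crew URL, token, kickoff ID, and input fields are placeholders, and the `{"inputs": ...}` body shape is an assumption—use `GET /inputs` to discover your crew's real input names):

```bash
# Start an execution and note the kickoff_id in the response
curl -X POST "https://your-crew-name.crewai.com/kickoff" \
  -H "Authorization: Bearer YOUR_CREW_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"inputs": {"topic": "AI"}}'

# Poll until the execution completes
curl "https://your-crew-name.crewai.com/YOUR_KICKOFF_ID/status" \
  -H "Authorization: Bearer YOUR_CREW_TOKEN"
```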
## Error Handling

The API uses standard HTTP status codes:

| Code | Meaning |
|--------|:--------------------------------------|
| `200` | Success |
| `400` | Bad Request - Invalid input format |
| `401` | Unauthorized - Invalid bearer token |
| `404` | Not Found - Resource doesn't exist |
| Code | Meaning |
| ------ | :----------------------------------------------- |
| `200` | Success |
| `400` | Bad Request - Invalid input format |
| `401` | Unauthorized - Invalid bearer token |
| `404` | Not Found - Resource doesn't exist |
| `422` | Validation Error - Required inputs missing |
| `500` | Server Error - Contact support |
| `500` | Server Error - Contact support |
## Interactive Testing

<Info>
**Why is there no "Send" button?** Since every CrewAI AOP user has their own crew URL, we use **reference mode** instead of an interactive playground to avoid confusion. This shows exactly how requests should be made, without non-functional send buttons.
**Why is there no "Send" button?** Since every CrewAI AOP user has their own
crew URL, we use **reference mode** instead of an interactive playground to
avoid confusion. This shows exactly how requests should be made, without
non-functional send buttons.
</Info>
Each endpoint page shows you:

- ✅ **Exact request format** with all parameters
- ✅ **Response examples** for success and error cases
- ✅ **Code samples** in multiple languages (cURL, Python, JavaScript, etc.)
@@ -103,6 +109,7 @@ Each endpoint page shows you:
</CardGroup>

**Example workflow:**

1. **Copy this cURL example** from any endpoint page
2. **Replace `your-actual-crew-name.crewai.com`** with your crew's actual URL
3. **Replace the Bearer token** with your real token from the dashboard
@@ -111,10 +118,18 @@ Each endpoint page shows you:
## Need Help?

<CardGroup cols={2}>
<Card title="Enterprise Support" icon="headset" href="mailto:support@crewai.com">
<Card
title="Enterprise Support"
icon="headset"
href="mailto:support@crewai.com"
>
Get help with API integration and troubleshooting
</Card>
<Card title="Enterprise Dashboard" icon="chart-line" href="https://app.crewai.com">
<Card
title="Enterprise Dashboard"
icon="chart-line"
href="https://app.crewai.com"
>
Manage your crews and view execution logs
</Card>
</CardGroup>

View File

@@ -1,8 +1,6 @@
---
title: "GET /status/{kickoff_id}"
title: "GET /{kickoff_id}/status"
description: "Get the execution status"
openapi: "/enterprise-api.pt-BR.yaml GET /status/{kickoff_id}"
openapi: "/enterprise-api.pt-BR.yaml GET /{kickoff_id}/status"
mode: "wide"
---

View File

@@ -307,6 +307,55 @@ The `third_method` and `fourth_method` methods listen for the output of `second_method`

When you run this flow, the output will differ depending on the random boolean generated by `start_method`.

### Human in the Loop (human feedback)

The `@human_feedback` decorator enables human-in-the-loop workflows by pausing flow execution to collect feedback from a human. This is useful for approval gates, quality review, and decision points that require human judgment.
```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult


class ReviewFlow(Flow):
    @start()
    @human_feedback(
        message="Do you approve this content?",
        emit=["approved", "rejected", "needs_revision"],
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
    )
    def generate_content(self):
        return "Content for review..."

    @listen("approved")
    def on_approval(self, result: HumanFeedbackResult):
        print(f"Approved! Feedback: {result.feedback}")

    @listen("rejected")
    def on_rejection(self, result: HumanFeedbackResult):
        print(f"Rejected. Reason: {result.feedback}")
```
When `emit` is specified, the human's free-form feedback is interpreted by an LLM and mapped to one of the specified outcomes, which then triggers the matching `@listen` decorator.

You can also use `@human_feedback` without routing to simply collect feedback:

```python Code
@start()
@human_feedback(message="Any comments on this output?")
def my_method(self):
    return "Output for review"

@listen(my_method)
def next_step(self, result: HumanFeedbackResult):
    # Access the feedback via result.feedback
    # Access the original output via result.output
    pass
```

Access all the feedback collected during a flow via `self.last_human_feedback` (most recent) or `self.human_feedback_history` (every entry in a list), as in the sketch below.
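A minimal sketch of both accessors (`final_step` is a placeholder method name):

```python Code
@listen(final_step)
def wrap_up(self):
    if self.last_human_feedback:
        print(self.last_human_feedback.feedback)   # most recent feedback
    print(len(self.human_feedback_history))        # every HumanFeedbackResult collected
```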
For a complete guide to human feedback in flows, including async/non-blocking feedback with custom providers (Slack, webhooks, etc.), see [Human Feedback in Flows](/pt-BR/learn/human-feedback-in-flows).

## Adding Agents to Flows

Agents can be integrated into your flows with ease, offering a lightweight alternative to full crews when you need to run simple, focused tasks. Here is an example of how to use an agent in a flow to perform market research:

View File

@@ -62,13 +62,13 @@ Test your Gmail trigger integration locally using the CrewAI CLI:

crewai triggers list

# Simulate a Gmail trigger with a realistic payload
crewai triggers run gmail/new_email
crewai triggers run gmail/new_email_received
```

The `crewai triggers run` command will execute your crew with a complete Gmail payload, letting you test your parsing logic before deployment.

<Warning>
Use `crewai triggers run gmail/new_email` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
Use `crewai triggers run gmail/new_email_received` (not `crewai run`) to simulate trigger execution during development. After deployment, your crew will automatically receive the trigger payload.
</Warning>

## Monitoring Executions

@@ -83,6 +83,6 @@ Track history and performance of triggered runs:

- Ensure Gmail is connected in Tools & Integrations
- Verify the Gmail Trigger is enabled on the Triggers tab
- Test locally with `crewai triggers run gmail/new_email` to see the exact payload structure
- Test locally with `crewai triggers run gmail/new_email_received` to see the exact payload structure
- Check the execution logs and confirm the payload is passed as `crewai_trigger_payload`
- Remember: use `crewai triggers run` (not `crewai run`) to simulate trigger execution

View File

@@ -0,0 +1,581 @@
---
title: Human Feedback in Flows
description: Learn how to integrate human feedback directly into your CrewAI Flows using the @human_feedback decorator
icon: user-check
mode: "wide"
---
## Overview

The `@human_feedback` decorator enables human-in-the-loop (HITL) workflows directly in CrewAI Flows. It lets you pause flow execution, present output to a human for review, collect their feedback, and optionally route to different listeners based on the feedback outcome.

This is particularly valuable for:

- **Quality assurance**: Review AI-generated content before it is used downstream
- **Decision gates**: Let humans make critical decisions in automated flows
- **Approval workflows**: Implement approve/reject/revise patterns
- **Interactive refinement**: Collect feedback to iteratively improve outputs

```mermaid
flowchart LR
    A[Flow Method] --> B[Output Generated]
    B --> C[Human Reviews]
    C --> D{Feedback}
    D -->|emit specified| E[LLM Maps to Outcome]
    D -->|no emit| F[HumanFeedbackResult]
    E --> G["@listen('approved')"]
    E --> H["@listen('rejected')"]
    F --> I[Next Listener]
```
## Quick Start

Here is the simplest way to add human feedback to a flow:

```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback


class SimpleReviewFlow(Flow):
    @start()
    @human_feedback(message="Please review this content:")
    def generate_content(self):
        return "This is AI-generated content that needs review."

    @listen(generate_content)
    def process_feedback(self, result):
        print(f"Content: {result.output}")
        print(f"Human said: {result.feedback}")


flow = SimpleReviewFlow()
flow.kickoff()
```
When this flow runs, it will:

1. Execute `generate_content` and return the string
2. Display the output to the user with the prompt message
3. Wait for the user to type feedback (or press Enter to skip)
4. Pass a `HumanFeedbackResult` object to `process_feedback`
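A console session for this flow looks roughly like the following (the exact framing comes from the default console provider and may vary between versions; the typed feedback is illustrative):

```text Output
==================================================
OUTPUT FOR REVIEW:
==================================================
This is AI-generated content that needs review.
==================================================
Please review this content:
(Press Enter to skip, or type your feedback)
Your feedback: Looks good, just tighten the intro.

Content: This is AI-generated content that needs review.
Human said: Looks good, just tighten the intro.
```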
## The @human_feedback Decorator

### Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `message` | `str` | Yes | The message shown to the human along with the method output |
| `emit` | `Sequence[str]` | No | List of possible outcomes. The feedback is mapped to one of these, which triggers the matching `@listen` decorators |
| `llm` | `str \| BaseLLM` | When `emit` is specified | LLM used to interpret the feedback and map it to an outcome |
| `default_outcome` | `str` | No | Outcome to use when no feedback is provided. Must be one of the values in `emit` |
| `metadata` | `dict` | No | Additional data for enterprise integrations |
| `provider` | `HumanFeedbackProvider` | No | Custom provider for async/non-blocking feedback. See [Async Human Feedback](#async-human-feedback-non-blocking) |
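As a rough sketch of how these parameters fit together, the snippet below passes custom `metadata`, which is carried through to the resulting `HumanFeedbackResult` (per the dataclass reference below); the method names and metadata values are illustrative assumptions:

```python Code
@start()
@human_feedback(
    message="Please review this summary:",
    metadata={"team": "content", "priority": "high"},  # illustrative extra data
)
def summarize(self):
    return "Summary text..."

@listen(summarize)
def log_review(self, result):
    # The metadata passed to the decorator is echoed back on the result
    print(result.metadata)  # {'team': 'content', 'priority': 'high'}
```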
### Basic Usage (No Routing)

When you don't specify `emit`, the decorator simply collects the feedback and passes a `HumanFeedbackResult` to the next listener:

```python Code
@start()
@human_feedback(message="What do you think of this analysis?")
def analyze_data(self):
    return "Analysis results: revenue up 15%, costs down 8%"

@listen(analyze_data)
def handle_feedback(self, result):
    # result is a HumanFeedbackResult
    print(f"Analysis: {result.output}")
    print(f"Feedback: {result.feedback}")
```
### Routing with emit

When you specify `emit`, the decorator becomes a router. The human's free-form feedback is interpreted by an LLM and mapped to one of the specified outcomes:

```python Code
@start()
@human_feedback(
    message="Do you approve this content for publication?",
    emit=["approved", "rejected", "needs_revision"],
    llm="gpt-4o-mini",
    default_outcome="needs_revision",
)
def review_content(self):
    return "Draft blog post content..."

@listen("approved")
def publish(self, result):
    print(f"Publishing! User said: {result.feedback}")

@listen("rejected")
def discard(self, result):
    print(f"Discarded. Reason: {result.feedback}")

@listen("needs_revision")
def revise(self, result):
    print(f"Revising based on: {result.feedback}")
```
<Tip>
The LLM uses structured outputs (function calling) where available to guarantee that the response is one of your specified outcomes. This makes the routing reliable and predictable.
</Tip>
## HumanFeedbackResult

The `HumanFeedbackResult` dataclass contains all the information about a human feedback interaction:

```python Code
from crewai.flow.human_feedback import HumanFeedbackResult

@dataclass
class HumanFeedbackResult:
    output: Any              # The original method output shown to the human
    feedback: str            # The human's raw feedback text
    outcome: str | None      # The mapped outcome (if emit was specified)
    timestamp: datetime      # When the feedback was received
    method_name: str         # Name of the decorated method
    metadata: dict           # Any metadata passed to the decorator
```
### Accessing It in Listeners

When a listener is triggered by a `@human_feedback` method with `emit`, it receives the `HumanFeedbackResult`:

```python Code
@listen("approved")
def on_approval(self, result: HumanFeedbackResult):
    print(f"Original output: {result.output}")
    print(f"User feedback: {result.feedback}")
    print(f"Outcome: {result.outcome}")  # "approved"
    print(f"Received at: {result.timestamp}")
```
## Accessing the Feedback History

The `Flow` class provides two attributes for accessing human feedback:

### last_human_feedback

Returns the most recent `HumanFeedbackResult`:

```python Code
@listen(some_method)
def check_feedback(self):
    if self.last_human_feedback:
        print(f"Last feedback: {self.last_human_feedback.feedback}")
```

### human_feedback_history

A list of every `HumanFeedbackResult` object collected during the flow:

```python Code
@listen(final_step)
def summarize(self):
    print(f"Total feedback collected: {len(self.human_feedback_history)}")
    for i, fb in enumerate(self.human_feedback_history):
        print(f"{i+1}. {fb.method_name}: {fb.outcome or 'no routing'}")
```

<Warning>
Each `HumanFeedbackResult` is appended to `human_feedback_history`, so multiple feedback steps never overwrite one another. Use this list to access all the feedback collected during the flow.
</Warning>
## Complete Example: Content Approval Workflow

Here is a full example implementing a content review and approval workflow:

<CodeGroup>

```python Code
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult
from pydantic import BaseModel


class ContentState(BaseModel):
    topic: str = ""
    draft: str = ""
    final_content: str = ""
    revision_count: int = 0


class ContentApprovalFlow(Flow[ContentState]):
    """A flow that generates content and gets human approval."""

    @start()
    def get_topic(self):
        self.state.topic = input("What topic should I write about? ")
        return self.state.topic

    @listen(get_topic)
    def generate_draft(self, topic):
        # In real usage, this would call an LLM
        self.state.draft = f"# {topic}\n\nThis is a draft about {topic}..."
        return self.state.draft

    @listen(generate_draft)
    @human_feedback(
        message="Please review this draft. Reply 'approved', 'rejected', or provide revision feedback:",
        emit=["approved", "rejected", "needs_revision"],
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
    )
    def review_draft(self, draft):
        return draft

    @listen("approved")
    def publish_content(self, result: HumanFeedbackResult):
        self.state.final_content = result.output
        print("\n✅ Content approved and published!")
        print(f"Reviewer comment: {result.feedback}")
        return "published"

    @listen("rejected")
    def handle_rejection(self, result: HumanFeedbackResult):
        print("\n❌ Content rejected")
        print(f"Reason: {result.feedback}")
        return "rejected"

    @listen("needs_revision")
    def revise_content(self, result: HumanFeedbackResult):
        self.state.revision_count += 1
        print(f"\n📝 Revision #{self.state.revision_count} requested")
        print(f"Feedback: {result.feedback}")
        # In a real flow, you might loop back to generate_draft
        # For this example, we simply acknowledge the request
        return "revision_requested"


# Run the flow
flow = ContentApprovalFlow()
result = flow.kickoff()
print(f"\nFlow finished. Revisions requested: {flow.state.revision_count}")
```

```text Output
What topic should I write about? AI safety

==================================================
OUTPUT FOR REVIEW:
==================================================
# AI safety

This is a draft about AI safety...
==================================================
Please review this draft. Reply 'approved', 'rejected', or provide revision feedback:
(Press Enter to skip, or type your feedback)
Your feedback: Looks good, approved!

✅ Content approved and published!
Reviewer comment: Looks good, approved!

Flow finished. Revisions requested: 0
```

</CodeGroup>
## Combining with Other Decorators

The `@human_feedback` decorator works together with the other flow decorators. Place it as the innermost decorator (closest to the function):

```python Code
# Correct: @human_feedback is innermost (closest to the function)
@start()
@human_feedback(message="Review this:")
def my_start_method(self):
    return "content"

@listen(other_method)
@human_feedback(message="Review this too:")
def my_listener(self, data):
    return f"processed: {data}"
```

<Tip>
Place `@human_feedback` as the innermost decorator (last, closest to the function) so that it wraps the method directly and can capture the return value before handing it to the flow system.
</Tip>
## Best Practices

### 1. Write Clear Prompt Messages

The `message` parameter is what the human sees. Make it actionable:

```python Code
# ✅ Good - clear and actionable
@human_feedback(message="Does this summary accurately capture the key points? Reply 'yes' or explain what's missing:")

# ❌ Bad - vague
@human_feedback(message="Review this:")
```

### 2. Choose Meaningful Outcomes

When using `emit`, pick outcomes that map naturally to human responses:

```python Code
# ✅ Good - natural-language outcomes
emit=["approved", "rejected", "needs_more_detail"]

# ❌ Bad - technical or unclear
emit=["state_1", "state_2", "state_3"]
```

### 3. Always Provide a Default Outcome

Use `default_outcome` to handle the case where users press Enter without typing anything:

```python Code
@human_feedback(
    message="Approve? (press Enter to request revision)",
    emit=["approved", "needs_revision"],
    llm="gpt-4o-mini",
    default_outcome="needs_revision",  # Safe default
)
```

### 4. Use the Feedback History for Audit Trails

Access `human_feedback_history` to build audit logs:

```python Code
@listen(final_step)
def create_audit_log(self):
    log = []
    for fb in self.human_feedback_history:
        log.append({
            "step": fb.method_name,
            "outcome": fb.outcome,
            "feedback": fb.feedback,
            "timestamp": fb.timestamp.isoformat(),
        })
    return log
```

### 5. Handle Both Routed and Non-Routed Feedback

When designing flows, consider whether you need routing:

| Scenario | Use |
|----------|-----|
| Simple review, only the feedback text is needed | No `emit` |
| Need to branch to different paths based on the response | Use `emit` |
| Approval gates with approve/reject/revise | Use `emit` |
| Collecting comments for logging only | No `emit` |
## Async Human Feedback (Non-Blocking)

By default, `@human_feedback` blocks execution while waiting for console input. For production applications you may need **async/non-blocking** feedback that integrates with external systems such as Slack, email, webhooks, or APIs.

### The Provider Abstraction

Use the `provider` parameter to specify a custom feedback collection strategy:

```python Code
from crewai.flow import Flow, start, human_feedback, HumanFeedbackProvider, HumanFeedbackPending, PendingFeedbackContext


class WebhookProvider(HumanFeedbackProvider):
    """Provider that pauses the flow and waits for a webhook callback."""

    def __init__(self, webhook_url: str):
        self.webhook_url = webhook_url

    def request_feedback(self, context: PendingFeedbackContext, flow: Flow) -> str:
        # Notify an external system (e.g., send a Slack message, create a ticket)
        self.send_notification(context)
        # Pause execution - the framework handles persistence automatically
        raise HumanFeedbackPending(
            context=context,
            callback_info={"webhook_url": f"{self.webhook_url}/{context.flow_id}"}
        )


class ReviewFlow(Flow):
    @start()
    @human_feedback(
        message="Review this content:",
        emit=["approved", "rejected"],
        llm="gpt-4o-mini",
        provider=WebhookProvider("https://myapp.com/api"),
    )
    def generate_content(self):
        return "AI-generated content..."

    @listen("approved")
    def publish(self, result):
        return "Published!"
```

<Tip>
The flow framework **automatically persists state** when `HumanFeedbackPending` is raised. Your provider only needs to notify the external system and raise the exception—no manual persistence calls are required.
</Tip>
### Handling Paused Flows

When using an async provider, `kickoff()` returns a `HumanFeedbackPending` object instead of raising an exception:

```python Code
flow = ReviewFlow()
result = flow.kickoff()

if isinstance(result, HumanFeedbackPending):
    # The flow is paused; its state has been persisted automatically
    print(f"Awaiting feedback at: {result.callback_info['webhook_url']}")
    print(f"Flow ID: {result.context.flow_id}")
else:
    # Normal completion
    print(f"Flow completed: {result}")
```
### Resuming a Paused Flow

When the feedback arrives (e.g., via a webhook), resume the flow:

```python Code
# Synchronous handler:
def handle_feedback_webhook(flow_id: str, feedback: str):
    flow = ReviewFlow.from_pending(flow_id)
    result = flow.resume(feedback)
    return result

# Asynchronous handler (FastAPI, aiohttp, etc.):
async def handle_feedback_webhook(flow_id: str, feedback: str):
    flow = ReviewFlow.from_pending(flow_id)
    result = await flow.resume_async(feedback)
    return result
```
### Key Types

| Type | Description |
|------|-------------|
| `HumanFeedbackProvider` | Protocol for custom feedback providers |
| `PendingFeedbackContext` | Holds everything needed to resume a paused flow |
| `HumanFeedbackPending` | Returned by `kickoff()` when the flow is paused for feedback |
| `ConsoleProvider` | The default blocking console input provider |

### PendingFeedbackContext

The context contains everything needed to resume:

```python Code
@dataclass
class PendingFeedbackContext:
    flow_id: str             # Unique identifier for this flow execution
    flow_class: str          # Fully qualified class name
    method_name: str         # The method that triggered the feedback
    method_output: Any       # The output shown to the human
    message: str             # The prompt message
    emit: list[str] | None   # Possible outcomes for routing
    default_outcome: str | None
    metadata: dict           # Custom metadata
    llm: str | None          # LLM for outcome mapping
    requested_at: datetime
```
### Complete Async Flow Example

```python Code
from crewai.flow import (
    Flow, start, listen, human_feedback,
    HumanFeedbackProvider, HumanFeedbackPending, PendingFeedbackContext
)


class SlackNotificationProvider(HumanFeedbackProvider):
    """Provider that sends Slack notifications and pauses for async feedback."""

    def __init__(self, channel: str):
        self.channel = channel

    def request_feedback(self, context: PendingFeedbackContext, flow: Flow) -> str:
        # Send the Slack notification (implement this yourself)
        slack_thread_id = self.post_to_slack(
            channel=self.channel,
            message=f"Review needed:\n\n{context.method_output}\n\n{context.message}",
        )
        # Pause execution - the framework handles persistence automatically
        raise HumanFeedbackPending(
            context=context,
            callback_info={
                "slack_channel": self.channel,
                "thread_id": slack_thread_id,
            }
        )


class ContentPipeline(Flow):
    @start()
    @human_feedback(
        message="Do you approve this content for publication?",
        emit=["approved", "rejected", "needs_revision"],
        llm="gpt-4o-mini",
        default_outcome="needs_revision",
        provider=SlackNotificationProvider("#content-reviews"),
    )
    def generate_content(self):
        return "AI-generated blog post content..."

    @listen("approved")
    def publish(self, result):
        print(f"Publishing! Reviewer said: {result.feedback}")
        return {"status": "published"}

    @listen("rejected")
    def archive(self, result):
        print(f"Archived. Reason: {result.feedback}")
        return {"status": "archived"}

    @listen("needs_revision")
    def queue_revision(self, result):
        print(f"Queued for revision: {result.feedback}")
        return {"status": "revision_needed"}


# Start the flow (it pauses while waiting for the Slack response)
def start_content_pipeline():
    flow = ContentPipeline()
    result = flow.kickoff()
    if isinstance(result, HumanFeedbackPending):
        return {"status": "pending", "flow_id": result.context.flow_id}
    return result

# Resume when the Slack webhook fires (synchronous handler)
def on_slack_feedback(flow_id: str, slack_message: str):
    flow = ContentPipeline.from_pending(flow_id)
    result = flow.resume(slack_message)
    return result

# If your handler is async (FastAPI, aiohttp, Slack Bolt async, etc.)
async def on_slack_feedback_async(flow_id: str, slack_message: str):
    flow = ContentPipeline.from_pending(flow_id)
    result = await flow.resume_async(slack_message)
    return result
```
<Warning>
If you are using an async web framework (FastAPI, aiohttp, Slack Bolt in async mode), use `await flow.resume_async()` instead of `flow.resume()`. Calling `resume()` from inside a running event loop raises a `RuntimeError`.
</Warning>

### Best Practices for Async Feedback

1. **Check the return type**: `kickoff()` returns `HumanFeedbackPending` when paused—no try/except needed
2. **Use the right resume method**: Use `resume()` in synchronous code and `await resume_async()` in async code
3. **Store the callback info**: Use `callback_info` to hold webhook URLs, ticket IDs, and so on
4. **Make resumption idempotent**: Your resume handler should be idempotent for safety
5. **Automatic persistence**: State is saved automatically when `HumanFeedbackPending` is raised, using `SQLiteFlowPersistence` by default
6. **Custom persistence**: Pass a custom persistence instance to `from_pending()` if needed (see the sketch below)
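A minimal sketch combining practices 4 and 6: an idempotent webhook handler that hands a custom persistence instance to `from_pending()`. The `SQLiteFlowPersistence` import follows CrewAI's flow persistence module, but the `persistence` keyword argument, database path, and in-memory guard are illustrative assumptions—use a durable store in production:

```python Code
from crewai.flow.persistence import SQLiteFlowPersistence

processed_flow_ids: set[str] = set()  # illustrative; back this with a durable store

def handle_feedback_webhook(flow_id: str, feedback: str):
    # Idempotency guard: webhooks may be delivered more than once
    if flow_id in processed_flow_ids:
        return {"status": "already_processed"}
    processed_flow_ids.add(flow_id)

    # Custom persistence instance (path is illustrative)
    persistence = SQLiteFlowPersistence("flows.db")
    flow = ContentPipeline.from_pending(flow_id, persistence=persistence)
    return flow.resume(feedback)
```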
## Related Documentation

- [Flows Overview](/pt-BR/concepts/flows) - Learn about CrewAI Flows
- [Flow State Management](/pt-BR/guides/flows/mastering-flow-state) - Managing state in flows
- [Flow Persistence](/pt-BR/concepts/flows#persistence) - Persisting flow state
- [Routing with @router](/pt-BR/concepts/flows#router) - More on conditional routing
- [Human Input on Execution](/pt-BR/learn/human-input-on-execution) - Task-level human input