---
title: "HITL Workflows"
description: "Learn how to implement Human-In-The-Loop workflows in CrewAI for enhanced decision-making"
icon: "user-check"
---

Human-In-The-Loop (HITL) combines artificial intelligence with human expertise to improve decision-making and task outcomes. This guide shows you how to implement HITL workflows within CrewAI.
## Setting Up HITL Workflows

<Steps>

<Step title="Configure Your Task">

Set up your task with human input enabled:

<Frame>
<img src="/images/enterprise/crew-human-input.png" alt="Crew Human Input" />
</Frame>
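If you define your crew in code rather than in the UI, the equivalent switch is the `human_input` flag on `Task`. A minimal sketch (the agent and task details are placeholders):

```python
from crewai import Agent, Crew, Task

analyst = Agent(
    role="Data Analyst",
    goal="Produce accurate, well-sourced analysis",
    backstory="An experienced analyst whose work benefits from human review.",
)

analysis_task = Task(
    description="Analyze the quarterly sales data and summarize key trends.",
    expected_output="A short report highlighting the three most important trends.",
    agent=analyst,
    human_input=True,  # pause for human review before the output is accepted
)

crew = Crew(agents=[analyst], tasks=[analysis_task])
```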
</Step>
<Step title="Provide Webhook URL">

When kicking off your crew, include a webhook URL for human input:

<Frame>
<img src="/images/enterprise/crew-webhook-url.png" alt="Crew Webhook URL" />
</Frame>
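As a rough sketch of that kickoff call in Python (the endpoint path, header, and field names below, such as `/kickoff` and `humanInputWebhook`, are illustrative assumptions, so check the API reference for your deployment):

```python
import requests

CREW_API_URL = "https://your-crew.crewai.com"  # hypothetical deployment URL
API_TOKEN = "your-bearer-token"                # hypothetical API token

response = requests.post(
    f"{CREW_API_URL}/kickoff",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "inputs": {"topic": "Q3 sales"},
        # Hypothetical field: where CrewAI should POST when human input is needed
        "humanInputWebhook": {"url": "https://example.com/webhooks/human-input"},
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())  # the response typically includes an execution ID
```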
</Step>
<Step title="Receive Webhook Notification">

Once the crew completes the task requiring human input, you'll receive a webhook notification containing:

- **Execution ID**
- **Task ID**
- **Task output**
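A minimal receiver sketch, assuming FastAPI on your side; the payload field names are illustrative, so inspect a real notification from your deployment before relying on them:

```python
from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/webhooks/human-input")
async def human_input_webhook(request: Request):
    payload = await request.json()
    # Illustrative field names -- verify against a real payload.
    execution_id = payload.get("execution_id")
    task_id = payload.get("task_id")
    task_output = payload.get("task_output")
    # Hand the output to a human reviewer (notify, store in a DB, etc.).
    print(f"Execution {execution_id}, task {task_id} awaiting review:\n{task_output}")
    return {"status": "received"}
```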
</Step>
<Step title="Review Task Output">

The system will pause in the `Pending Human Input` state. Review the task output carefully.

</Step>
<Step title="Submit Human Feedback">

Call the resume endpoint of your crew with the following information:

<Frame>
<img src="/images/enterprise/crew-resume-endpoint.png" alt="Crew Resume Endpoint" />
</Frame>
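Continuing the kickoff sketch above, a hedged example of that resume call; the `/resume` path and the payload fields (`human_feedback`, `is_approve`) are assumptions, so confirm them against the resume endpoint shown for your crew:

```python
import requests

CREW_API_URL = "https://your-crew.crewai.com"  # same hypothetical values as the kickoff sketch
API_TOKEN = "your-bearer-token"

response = requests.post(
    f"{CREW_API_URL}/resume",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "execution_id": "exec-123",  # from the webhook notification
        "task_id": "task-456",       # from the webhook notification
        "human_feedback": "Looks good -- approved as-is.",
        "is_approve": True,  # hypothetical flag: False asks the crew to retry with your feedback
    },
    timeout=30,
)
response.raise_for_status()
```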
<Warning>
**Feedback Impact on Task Execution**:
Exercise care when providing feedback: the entire feedback content is incorporated as additional context for subsequent task executions.
</Warning>

This means:

- All information in your feedback becomes part of the task's context.
- Irrelevant details may negatively influence the result.
- Concise, relevant feedback helps maintain task focus and efficiency.
- Review your feedback carefully before submission to ensure it contains only pertinent information that will positively guide the task's execution.

</Step>
<Step title="Handle Negative Feedback">

If you provide negative feedback:

- The crew will retry the task with added context from your feedback.
- You'll receive another webhook notification for further review.
- Repeat steps 4-6 until satisfied.

</Step>
<Step title="Execution Continuation">

When you submit positive feedback, the execution will proceed to the next steps.

</Step>

</Steps>
## Best Practices

- **Be Specific**: Provide clear, actionable feedback that directly addresses the task at hand
- **Stay Relevant**: Only include information that will help improve the task execution
- **Be Timely**: Respond to HITL prompts promptly to avoid workflow delays
- **Review Carefully**: Double-check your feedback before submitting to ensure accuracy
## Common Use Cases

HITL workflows are particularly valuable for:

- Quality assurance and validation
- Complex decision-making scenarios
- Sensitive or high-stakes operations
- Creative tasks requiring human judgment
- Compliance and regulatory reviews