---
title: Execution Hooks Overview
description: Understanding and using execution hooks in CrewAI for fine-grained control over agent operations
mode: "wide"
---

Execution Hooks provide fine-grained control over the runtime behavior of your CrewAI agents. Unlike kickoff hooks, which run before and after crew execution, execution hooks intercept specific operations during agent execution, allowing you to modify behavior, implement safety checks, and add comprehensive monitoring.

## Types of Execution Hooks

CrewAI provides two main categories of execution hooks:

### 1. [LLM Call Hooks](/learn/llm-hooks)

Control and monitor language model interactions:

- **Before LLM Call**: Modify prompts, validate inputs, implement approval gates
- **After LLM Call**: Transform responses, sanitize outputs, update conversation history

**Use Cases:**

- Iteration limiting
- Cost tracking and token usage monitoring
- Response sanitization and content filtering
- Human-in-the-loop approval for LLM calls
- Adding safety guidelines or context
- Debug logging and request/response inspection

[View LLM Hooks Documentation →](/learn/llm-hooks)

### 2. [Tool Call Hooks](/learn/tool-hooks)

Control and monitor tool execution:

- **Before Tool Call**: Modify inputs, validate parameters, block dangerous operations
- **After Tool Call**: Transform results, sanitize outputs, log execution details

**Use Cases:**

- Safety guardrails for destructive operations
- Human approval for sensitive actions
- Input validation and sanitization
- Result caching and rate limiting
- Tool usage analytics
- Debug logging and monitoring

[View Tool Hooks Documentation →](/learn/tool-hooks)

## Hook Registration Methods

### 1. Decorator-Based Hooks (Recommended)

The cleanest and most Pythonic way to register hooks:

```python
from crewai.hooks import before_llm_call, after_llm_call, before_tool_call, after_tool_call

@before_llm_call
def limit_iterations(context):
    """Prevent infinite loops by limiting iterations."""
    if context.iterations > 10:
        return False  # Block execution
    return None

@after_llm_call
def sanitize_response(context):
    """Remove sensitive data from LLM responses."""
    if "API_KEY" in context.response:
        return context.response.replace("API_KEY", "[REDACTED]")
    return None

@before_tool_call
def block_dangerous_tools(context):
    """Block destructive operations."""
    if context.tool_name == "delete_database":
        return False  # Block execution
    return None

@after_tool_call
def log_tool_result(context):
    """Log tool execution."""
    print(f"Tool {context.tool_name} completed")
    return None
```

### 2. Crew-Scoped Hooks

Apply hooks only to specific crew instances:

```python
from crewai import Crew, Process
from crewai.project import CrewBase, crew
from crewai.hooks import before_llm_call_crew, after_tool_call_crew

@CrewBase
class MyProjCrew:
    @before_llm_call_crew
    def validate_inputs(self, context):
        # Only applies to this crew
        print(f"LLM call in {self.__class__.__name__}")
        return None

    @after_tool_call_crew
    def log_results(self, context):
        # Crew-specific logging
        print(f"Tool result: {context.tool_result[:50]}...")
        return None

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential
        )
```

## Hook Execution Flow

### LLM Call Flow

```
Agent needs to call LLM
        ↓
[Before LLM Call Hooks Execute]
    ├→ Hook 1: Validate iteration count
    ├→ Hook 2: Add safety context
    └→ Hook 3: Log request
        ↓
If any hook returns False:
    ├→ Block LLM call
    └→ Raise ValueError
        ↓
If all hooks return True/None:
    ├→ LLM call proceeds
    └→ Response generated
        ↓
[After LLM Call Hooks Execute]
    ├→ Hook 1: Sanitize response
    ├→ Hook 2: Log response
    └→ Hook 3: Update metrics
        ↓
Final response returned
```
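
Blocking in the "before" phase looks like this in practice. A minimal sketch, assuming the `ValueError` behavior shown in the diagram:

```python
from crewai.hooks import before_llm_call

@before_llm_call
def hard_stop(context):
    """Block the call outright once an iteration budget is exhausted."""
    if context.iterations > 5:
        return False  # Executor blocks the call and raises ValueError
    return None

# Downstream, the block surfaces as an exception (per the flow above);
# `my_crew` stands in for any configured crew:
#
#   try:
#       my_crew.kickoff()
#   except ValueError as e:
#       print(f"LLM call blocked: {e}")
```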

### Tool Call Flow

```
Agent needs to execute tool
        ↓
[Before Tool Call Hooks Execute]
    ├→ Hook 1: Check if tool is allowed
    ├→ Hook 2: Validate inputs
    └→ Hook 3: Request approval if needed
        ↓
If any hook returns False:
    ├→ Block tool execution
    └→ Return error message
        ↓
If all hooks return True/None:
    ├→ Tool execution proceeds
    └→ Result generated
        ↓
[After Tool Call Hooks Execute]
    ├→ Hook 1: Sanitize result
    ├→ Hook 2: Cache result
    └→ Hook 3: Log metrics
        ↓
Final result returned
```
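
Unlike a blocked LLM call, a blocked tool call does not raise: per the flow above, an error message is returned as the tool result, which the agent sees on its next turn. A minimal sketch (the tool names are illustrative):

```python
from crewai.hooks import before_tool_call

@before_tool_call
def read_only_mode(context):
    """Block anything that writes; the agent receives an error string instead."""
    if context.tool_name in {"write_file", "delete_file"}:
        return False  # Blocked: an error message becomes the tool result
    return None
```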

## Hook Context Objects

### LLMCallHookContext

Provides access to LLM execution state:

```python
class LLMCallHookContext:
    executor: CrewAgentExecutor  # Full executor access
    messages: list               # Mutable message list
    agent: Agent                 # Current agent
    task: Task                   # Current task
    crew: Crew                   # Crew instance
    llm: BaseLLM                 # LLM instance
    iterations: int              # Current iteration
    response: str | None         # LLM response (after hooks)
```
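
A minimal sketch of a before-hook that reads and mutates this context, using only the fields listed above:

```python
from crewai.hooks import before_llm_call, LLMCallHookContext

@before_llm_call
def annotate_request(context: LLMCallHookContext) -> bool | None:
    """Inspect executor state and mutate the message list in place."""
    print(f"{context.agent.role}: iteration {context.iterations}")
    # `messages` is mutable: append rather than reassign (see Best Practices)
    context.messages.append({"role": "system", "content": "Cite your sources."})
    return None
```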

### ToolCallHookContext

Provides access to tool execution state:

```python
class ToolCallHookContext:
    tool_name: str               # Tool being called
    tool_input: dict             # Mutable input parameters
    tool: CrewStructuredTool     # Tool instance
    agent: Agent | None          # Agent executing
    task: Task | None            # Current task
    crew: Crew | None            # Crew instance
    tool_result: str | None     # Tool result (after hooks)
```
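
For example, a before-hook can normalize parameters by mutating `tool_input` in place. A sketch using the fields above; the tool name and `path` parameter are illustrative:

```python
from crewai.hooks import before_tool_call, ToolCallHookContext

@before_tool_call
def normalize_inputs(context: ToolCallHookContext) -> bool | None:
    """Trim whitespace on a path argument before the tool runs."""
    if context.tool_name == "read_file" and "path" in context.tool_input:
        context.tool_input["path"] = context.tool_input["path"].strip()
    return None
```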

## Common Patterns

### Safety and Validation

```python
from crewai.hooks import before_llm_call, before_tool_call

@before_tool_call
def safety_check(context):
    """Block destructive operations."""
    dangerous = ['delete_file', 'drop_table', 'system_shutdown']
    if context.tool_name in dangerous:
        print(f"🛑 Blocked: {context.tool_name}")
        return False
    return None

@before_llm_call
def iteration_limit(context):
    """Prevent infinite loops."""
    if context.iterations > 15:
        print("⛔ Maximum iterations exceeded")
        return False
    return None
```

### Human-in-the-Loop

```python
from crewai.hooks import before_tool_call

@before_tool_call
def require_approval(context):
    """Require approval for sensitive operations."""
    sensitive = ['send_email', 'make_payment', 'post_message']

    if context.tool_name in sensitive:
        response = context.request_human_input(
            prompt=f"Approve {context.tool_name}?",
            default_message="Type 'yes' to approve:"
        )

        if response.lower() != 'yes':
            return False

    return None
```

### Monitoring and Analytics

```python
from collections import defaultdict
import time

from crewai.hooks import before_tool_call, after_tool_call

metrics = defaultdict(lambda: {'count': 0, 'total_time': 0})

@before_tool_call
def start_timer(context):
    context.tool_input['_start'] = time.time()
    return None

@after_tool_call
def track_metrics(context):
    start = context.tool_input.get('_start', time.time())
    duration = time.time() - start

    metrics[context.tool_name]['count'] += 1
    metrics[context.tool_name]['total_time'] += duration

    return None

# View metrics
def print_metrics():
    for tool, data in metrics.items():
        avg = data['total_time'] / data['count']
        print(f"{tool}: {data['count']} calls, {avg:.2f}s avg")
```

### Response Sanitization

```python
import re

from crewai.hooks import after_llm_call, after_tool_call

@after_llm_call
def sanitize_llm_response(context):
    """Remove sensitive data from LLM responses."""
    if not context.response:
        return None

    result = context.response
    result = re.sub(r'(api[_-]?key)["\']?\s*[:=]\s*["\']?[\w-]+',
                    r'\1: [REDACTED]', result, flags=re.IGNORECASE)
    return result

@after_tool_call
def sanitize_tool_result(context):
    """Remove sensitive data from tool results."""
    if not context.tool_result:
        return None

    result = context.tool_result
    result = re.sub(r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b',
                    '[EMAIL-REDACTED]', result)
    return result
```

## Hook Management

### Clearing All Hooks

```python
from crewai.hooks import clear_all_global_hooks

# Clear all hooks at once
result = clear_all_global_hooks()
print(f"Cleared {result['total']} hooks")
# result: {'llm_hooks': (2, 1), 'tool_hooks': (1, 2), 'total': (3, 3)}
```

### Clearing Specific Hook Types

```python
from crewai.hooks import (
    clear_before_llm_call_hooks,
    clear_after_llm_call_hooks,
    clear_before_tool_call_hooks,
    clear_after_tool_call_hooks
)

# Clear specific types
llm_before_count = clear_before_llm_call_hooks()
tool_after_count = clear_after_tool_call_hooks()
```

### Unregistering Individual Hooks

```python
from crewai.hooks import (
    register_before_llm_call_hook,
    unregister_before_llm_call_hook,
    unregister_after_tool_call_hook
)

def my_hook(context):
    ...

# Register
register_before_llm_call_hook(my_hook)

# Later, unregister
success = unregister_before_llm_call_hook(my_hook)
print(f"Unregistered: {success}")
```

## Best Practices

### 1. Keep Hooks Focused

Each hook should have a single, clear responsibility:

```python
# ✅ Good - focused responsibility
@before_tool_call
def validate_file_path(context):
    if context.tool_name == 'read_file':
        if '..' in context.tool_input.get('path', ''):
            return False
    return None

# ❌ Bad - too many responsibilities
@before_tool_call
def do_everything(context):
    # Validation + logging + metrics + approval...
    ...
```

### 2. Handle Errors Gracefully

```python
@before_llm_call
def safe_hook(context):
    try:
        # Your logic
        if some_condition:
            return False
    except Exception as e:
        print(f"Hook error: {e}")
    return None  # Allow execution despite error
```

### 3. Modify Context In-Place

```python
# ✅ Correct - modify in-place
@before_llm_call
def add_context(context):
    context.messages.append({"role": "system", "content": "Be concise"})

# ❌ Wrong - replaces reference
@before_llm_call
def wrong_approach(context):
    context.messages = [{"role": "system", "content": "Be concise"}]
```

### 4. Use Type Hints

```python
from crewai.hooks import LLMCallHookContext, ToolCallHookContext

def my_llm_hook(context: LLMCallHookContext) -> bool | None:
    # IDE autocomplete and type checking
    return None

def my_tool_hook(context: ToolCallHookContext) -> str | None:
    return None
```

### 5. Clean Up in Tests

```python
import pytest
from crewai.hooks import clear_all_global_hooks

@pytest.fixture(autouse=True)
def clean_hooks():
    """Clear all hooks after each test."""
    yield
    clear_all_global_hooks()
```

## When to Use Which Hook

### Use LLM Hooks When:

- Implementing iteration limits
- Adding context or safety guidelines to prompts
- Tracking token usage and costs
- Sanitizing or transforming responses
- Implementing approval gates for LLM calls
- Debugging prompt/response interactions

### Use Tool Hooks When:

- Blocking dangerous or destructive operations
- Validating tool inputs before execution
- Implementing approval gates for sensitive actions
- Caching tool results
- Tracking tool usage and performance
- Sanitizing tool outputs
- Rate limiting tool calls

### Use Both When:

Building comprehensive observability, safety, or approval systems that need to monitor all agent operations.
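
For example, a minimal sketch of a unified audit trail spanning both operation types (the `audit_log` list is illustrative):

```python
from crewai.hooks import after_llm_call, after_tool_call

audit_log: list[dict] = []

@after_llm_call
def audit_llm(context):
    audit_log.append({"kind": "llm", "agent": context.agent.role,
                      "iteration": context.iterations})
    return None

@after_tool_call
def audit_tool(context):
    audit_log.append({"kind": "tool", "name": context.tool_name})
    return None
```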

## Alternative Registration Methods

### Programmatic Registration (Advanced)

For dynamic hook registration, or when you need to register hooks programmatically:

```python
from crewai.hooks import (
    register_before_llm_call_hook,
    register_after_tool_call_hook
)

def my_hook(context):
    return None

# Register programmatically
register_before_llm_call_hook(my_hook)

# Useful for:
# - Loading hooks from configuration
# - Conditional hook registration
# - Plugin systems
```
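
For instance, a sketch of conditional registration driven by an environment variable (the variable name is illustrative):

```python
import os

from crewai.hooks import register_before_llm_call_hook

def debug_logger(context):
    print(f"[debug] LLM call at iteration {context.iterations}")
    return None

# Register the debug hook only when a (hypothetical) env flag is set
if os.getenv("CREWAI_DEBUG_HOOKS") == "1":
    register_before_llm_call_hook(debug_logger)
```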

**Note:** For most use cases, decorators are cleaner and more maintainable.

## Performance Considerations

1. **Keep Hooks Fast**: Hooks execute on every call, so avoid heavy computation
2. **Cache When Possible**: Store expensive validations or lookups (see the sketch below)
3. **Be Selective**: Use crew-scoped hooks when global hooks aren't needed
4. **Monitor Hook Overhead**: Profile hook execution time in production
5. **Lazy Import**: Import heavy dependencies only when needed
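
A minimal sketch of point 2, caching an expensive validation with `functools.lru_cache` (the policy check and `path` parameter are illustrative):

```python
from functools import lru_cache

from crewai.hooks import before_tool_call

@lru_cache(maxsize=256)
def is_path_allowed(path: str) -> bool:
    """Stand-in for an expensive policy lookup; the verdict is cached."""
    return not path.startswith("/etc")

@before_tool_call
def cached_validation(context):
    path = context.tool_input.get("path")
    if path is not None and not is_path_allowed(path):
        return False  # Block disallowed paths
    return None
```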

## Debugging Hooks

### Enable Debug Logging

```python
import logging

from crewai.hooks import before_llm_call

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

@before_llm_call
def debug_hook(context):
    logger.debug(f"LLM call: {context.agent.role}, iteration {context.iterations}")
    return None
```

### Hook Execution Order

Hooks execute in registration order. If a before hook returns `False`, subsequent hooks don't execute:

```python
# Registration order matters!
register_before_tool_call_hook(hook1)  # Executes first
register_before_tool_call_hook(hook2)  # Executes second
register_before_tool_call_hook(hook3)  # Executes third

# If hook2 returns False:
# - hook1 executed
# - hook2 executed and returned False
# - hook3 NOT executed
# - Tool call blocked
```

## Related Documentation

- [LLM Call Hooks →](/learn/llm-hooks) - Detailed LLM hook documentation
- [Tool Call Hooks →](/learn/tool-hooks) - Detailed tool hook documentation
- [Before and After Kickoff Hooks →](/learn/before-and-after-kickoff-hooks) - Crew lifecycle hooks
- [Human-in-the-Loop →](/learn/human-in-the-loop) - Human input patterns

## Conclusion

Execution hooks provide powerful control over agent runtime behavior. Use them to implement safety guardrails, approval workflows, comprehensive monitoring, and custom business logic. Combined with proper error handling, type safety, and performance considerations, hooks enable production-ready, secure, and observable agent systems.