---
title: LLM Call Hooks
description: Learn how to use LLM call hooks to intercept, modify, and control language model interactions in CrewAI
mode: "wide"
---

LLM Call Hooks provide fine-grained control over language model interactions during agent execution. These hooks allow you to intercept LLM calls, modify prompts, transform responses, implement approval gates, and add custom logging or monitoring.

## Overview

LLM hooks are executed at two critical points:

- **Before LLM Call**: Modify messages, validate inputs, or block execution
- **After LLM Call**: Transform responses, sanitize outputs, or modify conversation history

## Hook Types

### Before LLM Call Hooks

Executed before every LLM call, these hooks can:

- Inspect and modify messages sent to the LLM
- Block LLM execution based on conditions
- Implement rate limiting or approval gates
- Add context or system messages
- Log request details

**Signature:**

```python
def before_hook(context: LLMCallHookContext) -> bool | None:
    # Return False to block execution
    # Return True or None to allow execution
    ...
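
The list above mentions rate limiting, which is not covered in the use cases below. Here is a minimal sketch of a pacing hook; the `MIN_INTERVAL` value and the module-level timer are illustrative, not part of the CrewAI API:

```python
import time

from crewai.hooks import LLMCallHookContext, before_llm_call

_last_call = 0.0
MIN_INTERVAL = 1.0  # illustrative: minimum seconds between LLM calls


@before_llm_call
def throttle_llm_calls(context: LLMCallHookContext) -> bool | None:
    global _last_call
    elapsed = time.monotonic() - _last_call
    if elapsed < MIN_INTERVAL:
        time.sleep(MIN_INTERVAL - elapsed)  # pace the call instead of blocking it
    _last_call = time.monotonic()
    return None  # always allow execution
```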

### After LLM Call Hooks

Executed after every LLM call, these hooks can:

- Modify or sanitize LLM responses
- Add metadata or formatting
- Log response details
- Update conversation history
- Implement content filtering

**Signature:**

```python
def after_hook(context: LLMCallHookContext) -> str | None:
    # Return a modified response string
    # Return None to keep the original response
    ...
```

## LLM Hook Context

The `LLMCallHookContext` object provides comprehensive access to execution state:

```python
class LLMCallHookContext:
    executor: CrewAgentExecutor  # Full executor reference
    messages: list               # Mutable message list
    agent: Agent                 # Current agent
    task: Task                   # Current task
    crew: Crew                   # Crew instance
    llm: BaseLLM                 # LLM instance
    iterations: int              # Current iteration count
    response: str | None         # LLM response (after hooks only)
```
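
All of these attributes can be read inside a hook. A small sketch that uses only the attributes listed above (`crew.agents` is the standard CrewAI attribute; the hook itself is illustrative):

```python
from crewai.hooks import LLMCallHookContext


def inspect_state(context: LLMCallHookContext) -> None:
    # Read-only view of the execution state exposed by the context
    print(f"Agent '{context.agent.role}' on task: {context.task.description[:40]}...")
    print(f"Crew size: {len(context.crew.agents)}, iteration: {context.iterations}")
    return None
```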

### Modifying Messages

**Important:** Always modify the message list in-place. The context holds a reference to the executor's live message list, so replacing the list severs that reference and your changes are silently ignored:

```python
# ✅ Correct - modify in-place
def add_context(context: LLMCallHookContext) -> None:
    context.messages.append({"role": "system", "content": "Be concise"})


# ❌ Wrong - replaces the list reference, so the executor never sees the change
def wrong_approach(context: LLMCallHookContext) -> None:
    context.messages = [{"role": "system", "content": "Be concise"}]
```
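
In-place mutation also applies to individual message dicts. For example, a sketch that truncates an oversized final user message; the 4000-character cap is an illustrative value:

```python
from crewai.hooks import LLMCallHookContext, before_llm_call


@before_llm_call
def trim_last_message(context: LLMCallHookContext) -> None:
    # Mutating an element of context.messages persists, just like append()
    if context.messages and context.messages[-1].get("role") == "user":
        content = context.messages[-1].get("content")
        if isinstance(content, str) and len(content) > 4000:  # illustrative cap
            context.messages[-1]["content"] = content[:4000] + " [truncated]"
    return None
```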

## Registration Methods

### 1. Global Hook Registration

Register hooks that apply to all LLM calls across all crews:

```python
from crewai.hooks import register_before_llm_call_hook, register_after_llm_call_hook


def log_llm_call(context):
    print(f"LLM call by {context.agent.role} at iteration {context.iterations}")
    return None  # Allow execution


register_before_llm_call_hook(log_llm_call)
```
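
The `register_after_llm_call_hook` import above works the same way. A minimal sketch; the disclaimer text is illustrative:

```python
def append_disclaimer(context):
    if context.response:
        return context.response + "\n\n[Generated by AI]"
    return None  # Keep the original response


register_after_llm_call_hook(append_disclaimer)
```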

### 2. Decorator-Based Registration

Use decorators for cleaner syntax:

```python
from crewai.hooks import before_llm_call, after_llm_call


@before_llm_call
def validate_iteration_count(context):
    if context.iterations > 10:
        print("⚠️ Exceeded maximum iterations")
        return False  # Block execution
    return None


@after_llm_call
def sanitize_response(context):
    if context.response and "API_KEY" in context.response:
        return context.response.replace("API_KEY", "[REDACTED]")
    return None
```

### 3. Crew-Scoped Hooks

Register hooks for a specific crew instance (the imports below assume the standard `crewai.project` and `crewai.hooks` module layout):

```python
from crewai import Crew, Process
from crewai.hooks import after_llm_call_crew, before_llm_call_crew
from crewai.project import CrewBase, crew


@CrewBase
class MyProjCrew:
    @before_llm_call_crew
    def validate_inputs(self, context):
        # Only applies to this crew
        if context.iterations == 0:
            print(f"Starting task: {context.task.description}")
        return None

    @after_llm_call_crew
    def log_responses(self, context):
        # Crew-specific response logging
        print(f"Response length: {len(context.response)}")
        return None

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True
        )
```
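
Crew-scoped hooks fire only for LLM calls made by this crew. Hypothetical usage, following the standard `CrewBase` kickoff pattern:

```python
# Instantiate the crew class, build the crew, and run it;
# both crew-scoped hooks above apply to every LLM call it makes.
result = MyProjCrew().crew().kickoff()
print(result)
```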

## Common Use Cases

### 1. Iteration Limiting

```python
@before_llm_call
def limit_iterations(context: LLMCallHookContext) -> bool | None:
    max_iterations = 15
    if context.iterations > max_iterations:
        print(f"⛔ Blocked: Exceeded {max_iterations} iterations")
        return False  # Block execution
    return None
```

### 2. Human Approval Gate

```python
@before_llm_call
def require_approval(context: LLMCallHookContext) -> bool | None:
    if context.iterations > 5:
        response = context.request_human_input(
            prompt=f"Iteration {context.iterations}: Approve LLM call?",
            default_message="Press Enter to approve, or type 'no' to block:"
        )
        if response.lower() == "no":
            print("🚫 LLM call blocked by user")
            return False
    return None
```

### 3. Adding System Context

```python
@before_llm_call
def add_guardrails(context: LLMCallHookContext) -> None:
    # Add safety guidelines to every LLM call
    context.messages.append({
        "role": "system",
        "content": "Ensure responses are factual and cite sources when possible."
    })
    return None
```

### 4. Response Sanitization

```python
import re


@after_llm_call
def sanitize_sensitive_data(context: LLMCallHookContext) -> str | None:
    if not context.response:
        return None

    # Remove sensitive patterns
    sanitized = context.response
    sanitized = re.sub(r'\b\d{3}-\d{2}-\d{4}\b', '[SSN-REDACTED]', sanitized)
    sanitized = re.sub(r'\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b', '[CARD-REDACTED]', sanitized)

    return sanitized
```

### 5. Cost Tracking

```python
import tiktoken


@before_llm_call
def track_token_usage(context: LLMCallHookContext) -> None:
    encoding = tiktoken.get_encoding("cl100k_base")
    total_tokens = sum(
        len(encoding.encode(msg.get("content", "")))
        for msg in context.messages
    )
    print(f"📊 Input tokens: ~{total_tokens}")
    return None


@after_llm_call
def track_response_tokens(context: LLMCallHookContext) -> None:
    if context.response:
        encoding = tiktoken.get_encoding("cl100k_base")
        tokens = len(encoding.encode(context.response))
        print(f"📊 Response tokens: ~{tokens}")
    return None
```
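
To turn token counts into a dollar estimate, multiply them by your model's per-token price. A sketch with illustrative prices only; check your provider's current pricing:

```python
PRICE_PER_1K_INPUT = 0.0025   # USD per 1K input tokens, illustrative only
PRICE_PER_1K_OUTPUT = 0.0100  # USD per 1K output tokens, illustrative only


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Rough cost estimate: tokens / 1000 * price per 1K tokens."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT \
        + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT


print(f"💲 ~${estimate_cost(1200, 350):.4f}")  # e.g. 1200 input, 350 output tokens
```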

### 6. Debug Logging

```python
@before_llm_call
def debug_request(context: LLMCallHookContext) -> None:
    print(f"""
🔍 LLM Call Debug:
- Agent: {context.agent.role}
- Task: {context.task.description[:50]}...
- Iteration: {context.iterations}
- Message Count: {len(context.messages)}
- Last Message: {context.messages[-1] if context.messages else 'None'}
""")
    return None


@after_llm_call
def debug_response(context: LLMCallHookContext) -> None:
    if context.response:
        print(f"✅ Response Preview: {context.response[:100]}...")
    return None
```

## Hook Management

### Unregistering Hooks

```python
from crewai.hooks import (
    register_before_llm_call_hook,
    unregister_before_llm_call_hook,
    unregister_after_llm_call_hook
)


# Unregister a specific hook
def my_hook(context):
    ...


register_before_llm_call_hook(my_hook)
# Later...
unregister_before_llm_call_hook(my_hook)  # Returns True if the hook was found
```

### Clearing Hooks

```python
from crewai.hooks import (
    clear_before_llm_call_hooks,
    clear_after_llm_call_hooks,
    clear_all_llm_call_hooks
)


# Clear a specific hook type
count = clear_before_llm_call_hooks()
print(f"Cleared {count} before hooks")

# Clear all LLM hooks
before_count, after_count = clear_all_llm_call_hooks()
print(f"Cleared {before_count} before and {after_count} after hooks")
```

### Listing Registered Hooks

```python
from crewai.hooks import (
    get_before_llm_call_hooks,
    get_after_llm_call_hooks
)


# Get current hooks
before_hooks = get_before_llm_call_hooks()
after_hooks = get_after_llm_call_hooks()

print(f"Registered: {len(before_hooks)} before, {len(after_hooks)} after")
```
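
Because hooks are plain callables, their names can be printed when debugging registration order; a small sketch continuing the example above:

```python
# Hooks execute in this order; fall back to repr() for unnamed callables
for hook in get_before_llm_call_hooks():
    print(getattr(hook, "__name__", repr(hook)))
```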

## Advanced Patterns

### Conditional Hook Execution

```python
@before_llm_call
def conditional_blocking(context: LLMCallHookContext) -> bool | None:
    # Only block for specific agents
    if context.agent.role == "researcher" and context.iterations > 10:
        return False

    # Only block for specific tasks
    if "sensitive" in context.task.description.lower() and context.iterations > 5:
        return False

    return None
```

### Context-Aware Modifications

```python
@before_llm_call
def adaptive_prompting(context: LLMCallHookContext) -> None:
    # Add different context based on the iteration
    if context.iterations == 0:
        context.messages.append({
            "role": "system",
            "content": "Start with a high-level overview."
        })
    elif context.iterations > 3:
        context.messages.append({
            "role": "system",
            "content": "Focus on specific details and provide examples."
        })
    return None
```

### Chaining Hooks

Multiple hooks execute in registration order:

```python
@before_llm_call
def first_hook(context):
    print("1. First hook executed")
    return None


@before_llm_call
def second_hook(context):
    print("2. Second hook executed")
    return None


@before_llm_call
def blocking_hook(context):
    if context.iterations > 10:
        print("3. Blocking hook - execution stopped")
        return False  # Subsequent hooks won't execute
    print("3. Blocking hook - execution allowed")
    return None
```

## Best Practices

1. **Keep Hooks Focused**: Each hook should have a single responsibility
2. **Avoid Heavy Computation**: Hooks execute on every LLM call
3. **Handle Errors Gracefully**: Use try-except to prevent hook failures from breaking execution
4. **Use Type Hints**: Leverage `LLMCallHookContext` for better IDE support
5. **Document Hook Behavior**: Especially for blocking conditions
6. **Test Hooks Independently**: Unit test hooks before using them in production
7. **Clear Hooks in Tests**: Use `clear_all_llm_call_hooks()` between test runs (see the fixture sketch after this list)
8. **Modify In-Place**: Always modify `context.messages` in-place, never replace it
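
A sketch of practice 7 as a pytest fixture; the fixture name and the `autouse` choice are assumptions, not CrewAI conventions:

```python
import pytest

from crewai.hooks import clear_all_llm_call_hooks


@pytest.fixture(autouse=True)
def reset_llm_hooks():
    """Start and end every test with an empty hook registry."""
    clear_all_llm_call_hooks()
    yield
    clear_all_llm_call_hooks()
```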

## Error Handling

```python
@before_llm_call
def safe_hook(context: LLMCallHookContext) -> bool | None:
    try:
        # Your hook logic
        if some_condition:
            return False
    except Exception as e:
        print(f"⚠️ Hook error: {e}")
        # Decide: allow or block on error
    return None  # Allow execution despite error
```
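
If many hooks need the same guard, the try/except can be factored into a wrapper. A sketch, not part of the CrewAI API:

```python
import functools

from crewai.hooks import before_llm_call


def fail_open(hook):
    """Wrap a hook so unexpected errors allow execution instead of crashing."""
    @functools.wraps(hook)
    def wrapper(context):
        try:
            return hook(context)
        except Exception as e:
            print(f"⚠️ Hook '{hook.__name__}' failed: {e}")
            return None  # Fail open: allow the LLM call
    return wrapper


@before_llm_call
@fail_open
def risky_hook(context):
    ...
```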

## Type Safety

```python
from crewai.hooks import (
    BeforeLLMCallHookType,
    AfterLLMCallHookType,
    LLMCallHookContext,
    register_before_llm_call_hook,
    register_after_llm_call_hook
)


# Explicit type annotations
def my_before_hook(context: LLMCallHookContext) -> bool | None:
    return None


def my_after_hook(context: LLMCallHookContext) -> str | None:
    return None


# The exported hook types can annotate variables that hold hooks
before: BeforeLLMCallHookType = my_before_hook
after: AfterLLMCallHookType = my_after_hook

# Type-safe registration
register_before_llm_call_hook(my_before_hook)
register_after_llm_call_hook(my_after_hook)
```

## Troubleshooting

### Hook Not Executing

- Verify the hook is registered before crew execution
- Check whether a previous hook returned `False` (this blocks subsequent hooks)
- Ensure the hook signature matches the expected type

### Message Modifications Not Persisting

- Use in-place modifications: `context.messages.append(...)`
- Don't replace the list: `context.messages = [...]`

### Response Modifications Not Working

- Return the modified string from after hooks
- Returning `None` keeps the original response

## Conclusion

LLM Call Hooks provide powerful capabilities for controlling and monitoring language model interactions in CrewAI. Use them to implement safety guardrails, approval gates, logging, cost tracking, and response sanitization. Combined with proper error handling and type safety, hooks enable robust, production-ready agent systems.