mirror of
https://github.com/crewAIInc/crewAI.git
synced 2025-12-16 04:18:35 +00:00
fix llm guardrail import and docs
@@ -369,7 +369,7 @@ blog_task = Task(
- Type hints are recommended but optional
2. **Return Values**:
- On success: it returns a tuple of `(bool, Any)`. For example: `(True, validated_result)`
- On failure: it returns a tuple of `(bool, str)`. For example: `(False, "Error message explaining the failure")`
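To make the return-value contract above concrete, here is a minimal, self-contained sketch of a function-style guardrail (the function name and validation rule are illustrative, not from the commit):

```python
from typing import Any, Tuple

def word_count_guardrail(text: str) -> Tuple[bool, Any]:
    """Illustrative guardrail: accept outputs of at least 5 words."""
    if len(text.split()) >= 5:
        # Success path: (True, validated_result)
        return (True, text)
    # Failure path: (False, "error message explaining the failure")
    return (False, "Output must contain at least 5 words")
```

The boolean tells the framework whether validation passed; the second element is either the validated value to pass along or the message used to retry the task.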
### LLMGuardrail
@@ -380,7 +380,7 @@ The `LLMGuardrail` class offers a robust mechanism for validating task outputs.
1. **Structured Error Responses**:
```python Code
-from crewai import TaskOutput
+from crewai import TaskOutput, LLMGuardrail

def validate_with_context(result: TaskOutput) -> Tuple[bool, Any]:
    try:
```
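The diff excerpt above is truncated after `try:`. A complete guardrail in the same style might look like the following sketch; the `TaskOutput` stand-in and the specific validation checks are assumptions for a self-contained example, not the commit's actual code (in real code you would import `TaskOutput` from `crewai`):

```python
from typing import Any, Tuple

class TaskOutput:
    """Minimal stand-in for crewai's TaskOutput, for a runnable sketch."""
    def __init__(self, raw: str):
        self.raw = raw

def validate_with_context(result: TaskOutput) -> Tuple[bool, Any]:
    """Guardrail: succeed with the validated value, or fail with an error message."""
    try:
        text = result.raw.strip()
        if not text:
            return (False, "Output is empty")
        if len(text) > 5000:
            return (False, "Output exceeds 5000 characters")
        return (True, text)
    except Exception as e:
        # Structured error response: (False, reason) tells the agent what to fix
        return (False, f"Unexpected error during validation: {e}")
```

Wrapping the checks in `try`/`except` ensures the guardrail itself never raises; any unexpected error is converted into a structured `(False, reason)` response.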