fix llm guardrail import and docs

João Moura
2025-05-22 21:47:23 -07:00
parent 2d6deee753
commit beddc72189
2 changed files with 4 additions and 2 deletions


@@ -369,7 +369,7 @@ blog_task = Task(
- Type hints are recommended but optional
2. **Return Values**:
- On success: it returns a tuple of `(bool, Any)`. For example: `(True, validated_result)`
- On failure: it returns a tuple of `(bool, str)`. For example: `(False, "Error message explaining the failure")`
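A minimal sketch of a guardrail function following this `(bool, Any)` contract. To keep it self-contained it validates a plain string rather than a crewai `TaskOutput`, and the word-count rule is a hypothetical example, not something from the docs above:

```python
from typing import Any, Tuple

def validate_word_count(result: str, max_words: int = 200) -> Tuple[bool, Any]:
    """Hypothetical guardrail: reject outputs longer than max_words."""
    try:
        word_count = len(result.split())
        if word_count > max_words:
            # On failure: (False, "Error message explaining the failure")
            return (False, f"Output exceeds {max_words} words (got {word_count})")
        # On success: (True, validated_result)
        return (True, result.strip())
    except Exception as e:
        # The guardrail reports errors through the tuple, it never raises.
        return (False, f"Unexpected error during validation: {e}")
```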
### LLMGuardrail
@@ -380,7 +380,7 @@ The `LLMGuardrail` class offers a robust mechanism for validating task outputs.
1. **Structured Error Responses**:
```python Code
-from crewai import TaskOutput
+from crewai import TaskOutput, LLMGuardrail
def validate_with_context(result: TaskOutput) -> Tuple[bool, Any]:
try:
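The diff is truncated after the `try:` line. A runnable sketch of how such a function might continue, using a stand-in `TaskOutput` dataclass so the example runs without crewai installed (the non-empty-output rule and the `except` branch are assumptions, not part of the truncated diff):

```python
from dataclasses import dataclass
from typing import Any, Tuple

# Stand-in for crewai's TaskOutput; the real class also exposes the
# raw LLM output as a `.raw` string attribute.
@dataclass
class TaskOutput:
    raw: str

def validate_with_context(result: TaskOutput) -> Tuple[bool, Any]:
    try:
        # Hypothetical validation rule: require non-empty output.
        text = result.raw.strip()
        if not text:
            return (False, "Output is empty")
        return (True, text)
    except Exception as e:
        # Structured error response: failures are reported through the
        # (bool, str) contract rather than by raising.
        return (False, f"Validation error: {e}")
```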