fix llm guardrail import and docs

João Moura
2025-05-22 21:47:23 -07:00
parent 2d6deee753
commit beddc72189
2 changed files with 4 additions and 2 deletions


@@ -369,7 +369,7 @@ blog_task = Task(
    - Type hints are recommended but optional
 2. **Return Values**:
    - On success: it returns a tuple of `(bool, Any)`. For example: `(True, validated_result)`
    - On Failure: it returns a tuple of `(bool, str)`. For example: `(False, "Error message explain the failure")`

 ### LLMGuardrail
@@ -380,7 +380,7 @@ The `LLMGuardrail` class offers a robust mechanism for validating task outputs.
 1. **Structured Error Responses**:
 ```python Code
-from crewai import TaskOutput
+from crewai import TaskOutput, LLMGuardrail

 def validate_with_context(result: TaskOutput) -> Tuple[bool, Any]:
     try:
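The documentation hunk above spells out the guardrail contract: a guardrail callable receives a `TaskOutput` and returns `(True, validated_result)` on success or `(False, "error message")` on failure, and is attached to a task via its `guardrail` argument. A minimal sketch of such a callable, not part of this commit (the function name, the 200-word limit, and the use of `result.raw` as the text to validate are illustrative assumptions):

```python
from typing import Any, Tuple

from crewai import TaskOutput


def validate_blog_length(result: TaskOutput) -> Tuple[bool, Any]:
    """Hypothetical guardrail: accept the output only if it stays under 200 words."""
    try:
        word_count = len(result.raw.split())
        if word_count > 200:
            # On failure: (False, str) explaining what went wrong
            return (False, f"Blog post has {word_count} words, expected at most 200")
        # On success: (True, Any) carrying the validated result forward
        return (True, result.raw)
    except Exception as e:
        return (False, f"Guardrail raised an unexpected error: {e}")
```
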

@@ -9,6 +9,7 @@ from crewai.llm import LLM
 from crewai.llms.base_llm import BaseLLM
 from crewai.process import Process
 from crewai.task import Task
+from crewai.tasks.llm_guardrail import LLMGuardrail
 from crewai.tasks.task_output import TaskOutput

 warnings.filterwarnings(
@@ -29,4 +30,5 @@ __all__ = [
     "Flow",
     "Knowledge",
     "TaskOutput",
+    "LLMGuardrail",
 ]
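The `__init__.py` hunks re-export `LLMGuardrail` from the package root, which is what makes the `from crewai import TaskOutput, LLMGuardrail` line in the docs example resolve. A quick sanity check of the re-export, assuming an installed crewai build that includes this commit:

```python
# Both import paths should yield the same class once LLMGuardrail is imported
# in crewai/__init__.py and listed in __all__ (as in this commit).
from crewai import LLMGuardrail
from crewai.tasks.llm_guardrail import LLMGuardrail as direct_import

assert LLMGuardrail is direct_import
print("LLMGuardrail is re-exported from the crewai package root")
```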