Files
crewAI/src/crewai/utilities/guardrail.py
Lorenze Jay 6f2e39c0dd
feat: enhance knowledge and guardrail event handling in Agent class (#3672)
* feat: enhance knowledge event handling in Agent class

- Updated the Agent class to include task context in knowledge retrieval events.
- Emitted new events for knowledge retrieval and query processes, capturing task and agent details.
- Refactored knowledge event classes to inherit from a base class for better structure and maintainability.
- Added tracing for knowledge events in the TraceCollectionListener to improve observability.

This change improves the tracking and management of knowledge queries and retrievals, facilitating better debugging and performance monitoring.
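
As a rough illustration of the structure described above (not the actual crewAI event definitions), the base class and its subclasses might look like the following; every name and field here is an assumption:

from __future__ import annotations

from typing import Any

from pydantic import BaseModel


class KnowledgeEventBase(BaseModel):
    """Shared task/agent context attached to every knowledge event (illustrative only)."""

    agent_role: str | None = None        # which agent triggered the retrieval or query
    agent_id: str | None = None
    task_description: str | None = None  # task context surfaced with the event


class KnowledgeRetrievalStartedEvent(KnowledgeEventBase):
    type: str = "knowledge_retrieval_started"


class KnowledgeQueryCompletedEvent(KnowledgeEventBase):
    type: str = "knowledge_query_completed"
    query: str = ""
    retrieved_knowledge: Any | None = None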

* refactor: remove task_id from knowledge event emissions in Agent class

- Removed the task_id parameter from various knowledge event emissions in the Agent class to streamline event handling.
- This change simplifies the event structure and focuses on the essential context of knowledge retrieval and query processes.

This refactor enhances the clarity of knowledge events and aligns with the recent improvements in event handling.

* surface association for guardrail events

* fix: improve LLM selection logic in converter

- Updated the logic for selecting the LLM in the convert_with_instructions function to handle cases where the agent may not have a function_calling_llm attribute.
- This change ensures that the converter can still function correctly by falling back to the standard LLM if necessary, enhancing robustness and preventing potential errors.

This fix improves the reliability of the conversion process when working with different agent configurations.

* fix test

* fix: enforce valid LLM instance requirement in converter

- Updated the convert_with_instructions function to ensure that a valid LLM instance is provided by the agent.
- If neither function_calling_llm nor the standard llm is available, a ValueError is raised, enhancing error handling and robustness.
- Improved error messaging for conversion failures to provide clearer feedback on issues encountered during the conversion process.

This change strengthens the reliability of the conversion process by ensuring that agents are properly configured with a valid LLM.
2025-10-08 11:53:13 -07:00
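
Taken together, the two converter fixes above describe a selection rule that can be sketched as follows; the helper name is hypothetical, and only the attribute names (function_calling_llm, llm) come from the commit message, so the actual crewAI implementation may differ:

def _select_converter_llm(agent):
    """Prefer the agent's function-calling LLM, fall back to its standard LLM (sketch only)."""
    llm = getattr(agent, "function_calling_llm", None) or getattr(agent, "llm", None)
    if llm is None:
        # Neither LLM is configured: fail fast rather than attempting conversion.
        raise ValueError(
            "Agent must provide a valid LLM (function_calling_llm or llm) "
            "to convert output with instructions."
        )
    return llm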

134 lines
4.3 KiB
Python

from __future__ import annotations

from collections.abc import Callable
from typing import TYPE_CHECKING, Any

from pydantic import BaseModel, Field, field_validator
from typing_extensions import Self

if TYPE_CHECKING:
    from crewai.agents.agent_builder.base_agent import BaseAgent
    from crewai.lite_agent import LiteAgent, LiteAgentOutput
    from crewai.task import Task
    from crewai.tasks.task_output import TaskOutput


class GuardrailResult(BaseModel):
    """Result from a task guardrail execution.

    This class standardizes the return format of task guardrails,
    converting tuple responses into a structured format that can
    be easily handled by the task execution system.

    Attributes:
        success: Whether the guardrail validation passed
        result: The validated/transformed result if successful
        error: Error message if validation failed
    """

    success: bool = Field(description="Whether the guardrail validation passed")
    result: Any | None = Field(
        default=None, description="The validated/transformed result if successful"
    )
    error: str | None = Field(
        default=None, description="Error message if validation failed"
    )

    @field_validator("result", "error")
    @classmethod
    def validate_result_error_exclusivity(cls, v: Any, info) -> Any:
        """Ensure that result and error are mutually exclusive based on success.

        Args:
            v: The value being validated (either result or error)
            info: Validation info containing the entire model data

        Returns:
            The original value if validation passes
        """
        values = info.data
        if "success" in values:
            if values["success"] and v and "error" in values and values["error"]:
                raise ValueError(
                    "Cannot have both result and error when success is True"
                )
            if not values["success"] and v and "result" in values and values["result"]:
                raise ValueError(
                    "Cannot have both result and error when success is False"
                )
        return v

    @classmethod
    def from_tuple(cls, result: tuple[bool, Any | str]) -> Self:
        """Create a GuardrailResult from a validation tuple.

        Args:
            result: A tuple of (success, data) where data is either the validated result or error message.

        Returns:
            A new instance with the tuple data.
        """
        success, data = result
        return cls(
            success=success,
            result=data if success else None,
            error=data if not success else None,
        )


def process_guardrail(
    output: TaskOutput | LiteAgentOutput,
    guardrail: Callable[[Any], tuple[bool, Any | str]],
    retry_count: int,
    event_source: Any | None = None,
    from_agent: BaseAgent | LiteAgent | None = None,
    from_task: Task | None = None,
) -> GuardrailResult:
    """Process the guardrail for the agent output.

    Args:
        output: The output to validate with the guardrail
        guardrail: The guardrail to validate the output with
        retry_count: The number of times the guardrail has been retried
        event_source: The source of the guardrail to be sent in events

    Returns:
        GuardrailResult: The result of the guardrail validation

    Raises:
        TypeError: If output is not a TaskOutput or LiteAgentOutput
        ValueError: If guardrail is None
    """
    from crewai.events.event_bus import crewai_event_bus
    from crewai.events.types.llm_guardrail_events import (
        LLMGuardrailCompletedEvent,
        LLMGuardrailStartedEvent,
    )

    crewai_event_bus.emit(
        event_source,
        LLMGuardrailStartedEvent(
            guardrail=guardrail,
            retry_count=retry_count,
            from_agent=from_agent,
            from_task=from_task,
        ),
    )

    result = guardrail(output)
    guardrail_result = GuardrailResult.from_tuple(result)

    crewai_event_bus.emit(
        event_source,
        LLMGuardrailCompletedEvent(
            success=guardrail_result.success,
            result=guardrail_result.result,
            error=guardrail_result.error,
            retry_count=retry_count,
            from_agent=from_agent,
            from_task=from_task,
        ),
    )

    return guardrail_result
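
For reference, a short usage sketch (not part of the file above). The word-limit guardrail, the task_output variable, and the .raw attribute access are illustrative assumptions about the calling code:

def word_limit_guardrail(output):
    # Guardrails return (success, data): data is the validated result on
    # success, or an error message on failure.
    text = output.raw
    if len(text.split()) <= 200:
        return True, text
    return False, "Output exceeds the 200-word limit"


# Assuming `task_output` is a TaskOutput produced by a task execution:
result = process_guardrail(
    output=task_output,
    guardrail=word_limit_guardrail,
    retry_count=0,
)
if not result.success:
    feedback = result.error  # feed back to the agent before retrying

# from_tuple on its own maps a guardrail's (success, data) tuple onto the
# structured result:
ok = GuardrailResult.from_tuple((True, "validated text"))
assert ok.success and ok.result == "validated text" and ok.error is None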