mirror of
https://github.com/crewAIInc/crewAI.git
synced 2026-05-07 18:19:00 +00:00
Closes #5725.

In _pre_review_with_lessons (lib/crewai/src/crewai/flow/human_feedback.py), a broad "except Exception: return method_output" silently swallowed any LLM, network, auth, or structured-output failure during the pre-review step. Callers could not distinguish pre-reviewed output from raw output, and no log or event was surfaced.

Changes:
- Add a module logger.
- Narrow the try/except in _pre_review_with_lessons so the "mem is None" and no-matches short-circuits stay outside the failure path (those are not error cases).
- On recall or LLM pre-review failure, log a WARNING with exc_info=True so the silent fallback is observable. Continue to fall back to the raw method_output so the flow does not break.
- Add an opt-in learn_strict=True parameter on the human_feedback decorator and HumanFeedbackConfig that re-raises pre-review failures instead of falling back, for callers that need fail-closed behavior.
- Update the "Graceful degradation" docs section to reflect the new observable-by-default behavior and document learn_strict.

Tests (lib/crewai/tests/test_human_feedback_decorator.py):
- New TestHumanFeedbackPreReviewFailure class with 7 tests covering WARNING logging on LLM and recall failures, learn_strict propagation in both sync and async paths, and config introspection of the new learn_strict flag.

Co-Authored-By: João <joao@crewai.com>