Devin AI 0452c0af47 fix: prioritize task output_json over LLM response_format
This commit fixes issue #3639 by ensuring task-level output settings
(output_json and output_pydantic) take precedence over agent-level
LLM response_format when both are set with Pydantic models.

Changes:
- Modified LLM._prepare_completion_params() to accept from_task parameter
  and check if task has output_json or output_pydantic set
- If task has output settings, LLM's response_format is ignored
- Updated LLM.call() to pass from_task to _prepare_completion_params()
- Added comprehensive test to verify the priority behavior

The fix ensures predictable behavior following the standard configuration
hierarchy where more specific (task-level) settings override general
(agent-level) defaults.
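
The precedence rule described above can be sketched as follows. This is a hypothetical minimal illustration, not crewAI's actual implementation: the names `Task`, `LLM`, and `_prepare_completion_params` mirror the commit description, but the class bodies here are simplified stand-ins.

```python
# Illustrative sketch of the fix: task-level output settings
# (output_json / output_pydantic) suppress the LLM's response_format.
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class Task:
    # Stand-ins for crewAI's task output settings (hypothetical fields).
    output_json: Optional[type] = None
    output_pydantic: Optional[type] = None


class LLM:
    def __init__(self, response_format: Any = None):
        self.response_format = response_format

    def _prepare_completion_params(self, from_task: Optional[Task] = None) -> dict:
        params: dict = {}
        # If the task defines output_json or output_pydantic, the more
        # specific task-level setting wins and response_format is ignored.
        task_has_output = from_task is not None and (
            from_task.output_json is not None
            or from_task.output_pydantic is not None
        )
        if not task_has_output and self.response_format is not None:
            params["response_format"] = self.response_format
        return params


llm = LLM(response_format={"type": "json_object"})
# No task output settings: the LLM's response_format is used.
print(llm._prepare_completion_params())
# Task sets output_json: response_format is dropped.
print(llm._prepare_completion_params(from_task=Task(output_json=dict)))
```

The first call returns `{'response_format': {'type': 'json_object'}}`; the second returns `{}`, showing the task-level setting taking precedence.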

Co-Authored-By: João <joao@crewai.com>
2025-10-03 06:04:24 +00:00