This commit fixes issue #3639 by ensuring that task-level output settings
(output_json and output_pydantic) take precedence over the agent-level
LLM response_format when both are set with Pydantic models.
Changes:
- Modified LLM._prepare_completion_params() to accept a from_task parameter
  and check whether the task has output_json or output_pydantic set
- If the task has output settings, the LLM's response_format is ignored
  (see the sketch after this list)
- Updated LLM.call() to pass from_task to _prepare_completion_params()
- Added a comprehensive test to verify the priority behavior (see the test
  outline further below)
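A minimal sketch of the intended precedence check. Attribute and parameter
names follow the description above; the class structure and constructor are
simplified assumptions for illustration, not the actual crewai implementation:

    # Simplified sketch -- not the actual crewai implementation.
    # Assumes a task object that exposes output_json / output_pydantic.
    from typing import Any, Optional


    class LLM:
        def __init__(self, model: str, response_format: Optional[type] = None):
            self.model = model
            self.response_format = response_format  # agent-level default

        def _prepare_completion_params(
            self,
            messages: list,
            from_task: Optional[Any] = None,
            **kwargs,
        ) -> dict:
            params = {"model": self.model, "messages": messages, **kwargs}

            # Does the task define its own structured-output schema?
            task_has_output = from_task is not None and (
                getattr(from_task, "output_json", None) is not None
                or getattr(from_task, "output_pydantic", None) is not None
            )

            # Task-level output settings win: only fall back to the LLM's
            # response_format when the task defines no output schema.
            if not task_has_output and self.response_format is not None:
                params["response_format"] = self.response_format

            return params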
The fix ensures predictable behavior by following the standard configuration
hierarchy, where more specific (task-level) settings override more general
(agent-level) defaults.
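A rough outline of the kind of priority test added, building on the
simplified LLM sketch above; the schema classes, model name, and constructor
arguments are illustrative assumptions rather than the exact test code:

    # Illustrative test outline; reuses the simplified LLM sketch above.
    from pydantic import BaseModel


    class AgentSchema(BaseModel):
        summary: str


    class TaskSchema(BaseModel):
        answer: str


    class FakeTask:
        # Stand-in for a crewai Task with a task-level output schema set.
        output_json = None
        output_pydantic = TaskSchema


    def test_task_output_overrides_llm_response_format():
        llm = LLM(model="gpt-4o", response_format=AgentSchema)

        params = llm._prepare_completion_params(
            messages=[{"role": "user", "content": "hi"}],
            from_task=FakeTask(),
        )

        # The agent-level response_format must not leak into the request
        # when the task defines its own output schema.
        assert "response_format" not in params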
Co-Authored-By: João <joao@crewai.com>