Mirror of https://github.com/crewAIInc/crewAI.git, synced 2026-05-04 08:42:38 +00:00
Fix: don't forward response_model to the LLM during tool-calling loop iterations

When output_pydantic / response_model is set on a task, the response_model was forwarded to the LLM on every iteration of the tool-calling loop. This causes many non-OpenAI LLMs to skip tool calls entirely, because the tools and response_format parameters conflict.

Fix: don't pass response_model to the LLM during tool-calling loop iterations when the agent has tools. Structured output conversion is already handled as post-processing via Task._export_output().

Changes:
- crew_agent_executor.py: _invoke_loop_react, _invoke_loop_native_tools, _ainvoke_loop_react, and _ainvoke_loop_native_tools now pass response_model=None when tools are present
- experimental/agent_executor.py: call_llm_and_parse and call_llm_native_tools updated similarly
- Added regression tests in TestResponseModelNotLeakedDuringToolLoop

Co-Authored-By: João <joao@crewai.com>
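A minimal sketch of the pattern this fix applies. The names here (FakeLLM, invoke_loop) are hypothetical stand-ins, not the real crewAI internals; the point is that when tools are present, response_model is suppressed for the loop call so tools and a response format never reach the LLM together, leaving structured-output conversion to post-processing.

```python
from dataclasses import dataclass, field


@dataclass
class FakeLLM:
    """Hypothetical LLM stub that records each call's kwargs for inspection."""
    calls: list = field(default_factory=list)

    def call(self, messages, tools=None, response_model=None):
        self.calls.append({"tools": tools, "response_model": response_model})
        return "final answer"


def invoke_loop(llm, messages, tools, response_model):
    # The fix in sketch form: when the agent has tools, don't forward
    # response_model to the LLM during the tool-calling loop. Structured
    # output is converted later (in crewAI, via Task._export_output()).
    effective_model = None if tools else response_model
    return llm.call(messages, tools=tools, response_model=effective_model)


llm = FakeLLM()
messages = [{"role": "user", "content": "hi"}]
# With tools: response_model must not leak into the call.
invoke_loop(llm, messages, tools=[{"name": "search"}], response_model=dict)
# Without tools: response_model is forwarded as usual.
invoke_loop(llm, messages, tools=None, response_model=dict)
```

The regression tests in the PR assert the same property: that response_model is None on loop calls whenever tools are present.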