When an LLM does not support function calling (supports_function_calling() returns False), the executor falls back to the ReAct text-based pattern. Previously, response_model (set from task.output_pydantic) was still passed to get_llm_response in the ReAct path, which caused InternalInstructor to force structured output via instructor's TOOLS mode before the agent could reason through Action/Observation cycles.

This fix sets response_model=None in both _invoke_loop_react and _ainvoke_loop_react, allowing the ReAct loop to work normally. The output schema is already embedded in the prompt text for guidance, and the final conversion to pydantic/json happens in task._export_output() after the agent finishes.

Fixes #4695

Co-Authored-By: João <joao@crewai.com>
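For context, a minimal sketch of the dispatch this change implies. The class, the invoke signature, and the _invoke_loop helper here are hypothetical simplifications for illustration, not the actual crewAI executor code; only supports_function_calling(), _invoke_loop_react, and the response_model handling come from the description above.

```python
from typing import Any, Optional, Type

from pydantic import BaseModel


class ReActFallbackExecutor:
    """Hypothetical executor illustrating how response_model is handled."""

    def __init__(self, llm: Any) -> None:
        self.llm = llm

    def invoke(
        self,
        messages: list[dict],
        response_model: Optional[Type[BaseModel]] = None,
    ) -> Any:
        if self.llm.supports_function_calling():
            # Native tool-calling path: structured output can be requested
            # directly, so response_model is passed through unchanged.
            return self._invoke_loop(messages, response_model=response_model)
        # ReAct text-based fallback: drop response_model so instructor never
        # forces structured output mid-loop. The expected schema is already
        # described in the prompt, and the final answer is converted to
        # pydantic/json afterwards (task._export_output() in crewAI).
        return self._invoke_loop_react(messages, response_model=None)

    def _invoke_loop(self, messages, response_model=None):
        ...  # function-calling loop (out of scope for this sketch)

    def _invoke_loop_react(self, messages, response_model=None):
        ...  # Thought/Action/Observation loop over plain text completions
```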