Devin AI 20be4ae62b fix: prevent response_model from being passed in ReAct flow when LLM lacks function calling
When an LLM does not support function calling (supports_function_calling()
returns False), the executor falls back to the ReAct text-based pattern.
Previously, response_model (set from task.output_pydantic) was still passed
to get_llm_response in the ReAct path, which caused InternalInstructor to
force structured output via instructor's TOOLS mode before the agent could
reason through Action/Observation cycles.

This fix sets response_model=None in both _invoke_loop_react and
_ainvoke_loop_react, allowing the ReAct loop to work normally. The output
schema is already embedded in the prompt text for guidance, and the final
conversion to pydantic/json happens in task._export_output() after the
agent finishes.
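A minimal sketch of the branching described above, using hypothetical stand-in names (`FakeLLM`, `get_llm_response`, `invoke_loop` here only mirror the commit description, not the actual crewAI source):

```python
from typing import Any, Optional


class FakeLLM:
    """Stand-in LLM that reports whether it supports function calling."""

    def __init__(self, supports_fc: bool) -> None:
        self._supports_fc = supports_fc

    def supports_function_calling(self) -> bool:
        return self._supports_fc


def get_llm_response(llm: FakeLLM, prompt: str,
                     response_model: Optional[Any]) -> str:
    # In the real executor this calls the LLM; here we only report whether
    # structured output would be forced via instructor's TOOLS mode.
    return "structured" if response_model is not None else "react-text"


def invoke_loop(llm: FakeLLM, prompt: str,
                response_model: Optional[Any]) -> str:
    if llm.supports_function_calling():
        # Function-calling path: structured output via the model is fine.
        return get_llm_response(llm, prompt, response_model)
    # ReAct fallback: drop response_model so forced structured output does
    # not pre-empt the Action/Observation cycle. The output schema stays in
    # the prompt text, and pydantic/json conversion happens later, after
    # the agent finishes.
    return get_llm_response(llm, prompt, response_model=None)
```

With this shape, a model lacking function calling reasons through the ReAct loop in plain text, while capable models keep the structured-output path unchanged.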

Fixes #4695

Co-Authored-By: João <joao@crewai.com>
2026-03-04 12:24:12 +00:00