83b07b9d23 fix: prevent LLM observation hallucination by properly attributing tool results
Author: Devin AI
Date: 2026-01-06 06:57:16 +00:00

Fixes #4181

The issue was that tool observations were being appended to the assistant
message in the conversation history, so the model saw observations as part
of its own output and learned to hallucinate fake observations during tool
calls.
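
For illustration, a minimal sketch of the attribution difference (not the
actual crewAI executor code; the role/content dicts are just the common
chat-message format, and the strings are made-up ReAct-style output):

    llm_response = 'Action: search\nAction Input: {"q": "crewai"}'
    observation = "Observation: 3 results found."

    # Before the fix: the tool result was folded into the assistant turn,
    # so the model saw observations as something the assistant writes and
    # began inventing them on its own.
    messages_before = [
        {"role": "assistant", "content": llm_response + "\n" + observation},
    ]

    # After the fix: the LLM output and the tool result are attributed
    # separately, so observations only ever arrive from outside the model.
    messages_after = [
        {"role": "assistant", "content": llm_response},
        {"role": "user", "content": observation},
    ]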

Changes:
- Add llm_response field to AgentAction to store the original LLM response
  before the observation is appended
- Modify handle_agent_action_core to store llm_response before appending
  the observation to text (text still contains the observation for logging)
- Update CrewAgentExecutor._invoke_loop and _ainvoke_loop to:
  - Append LLM response as assistant message
  - Append observation as user message (not assistant)
- Apply same fix to LiteAgent._invoke_loop
- Apply same fix to CrewAgentExecutorFlow.execute_tool_action
- Fix the add_image_tool special case in both executors to use the same
  pattern
- Add comprehensive tests for proper message attribution (see the test
  sketch after this list)
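
A hypothetical, self-contained sketch of what those tests check;
append_tool_result is an illustrative helper written for this example, not
a crewAI API:

    def append_tool_result(messages, llm_response, observation):
        # Record one tool-use step with the fixed attribution: the raw LLM
        # output goes into an assistant message, the tool result into a
        # separate user message.
        messages.append({"role": "assistant", "content": llm_response})
        messages.append({"role": "user", "content": observation})
        return messages

    def test_observation_is_attributed_to_user_not_assistant():
        messages = append_tool_result(
            [],
            'Action: search\nAction Input: {"q": "crewai"}',
            "Observation: 3 results found.",
        )
        assert messages[-2]["role"] == "assistant"
        assert messages[-1]["role"] == "user"
        assert "Observation:" not in messages[-2]["content"]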

Co-Authored-By: João <joao@crewai.com>