- Add reasoning parameter to LLM.__init__() method
- Implement logic in _prepare_completion_params to handle reasoning=False
- Add comprehensive tests for reasoning parameter functionality
- Ensure reasoning=False overrides reasoning_effort parameter
- Add integration test with Agent class
The fix ensures that when reasoning=False is set, the reasoning_effort
parameter is not included in the LLM completion call, effectively
disabling reasoning mode for models like Qwen and Cogito with Ollama.
Co-Authored-By: João <joao@crewai.com>
* Add reasoning attribute to Agent class
Co-Authored-By: Joe Moura <joao@crewai.com>
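Illustrative usage of the new attribute (role, goal, and backstory are existing Agent fields; treat the snippet as an example rather than the final API):

    from crewai import Agent

    analyst = Agent(
        role="Data Analyst",
        goal="Summarize quarterly sales trends",
        backstory="An analyst who explains findings step by step.",
        reasoning=True,  # opt the agent into the reasoning flow
    )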
* Address PR feedback: improve type hints and error handling, refactor the reasoning handler, and enhance tests and docs
Co-Authored-By: Joe Moura <joao@crewai.com>
* Implement function calling for reasoning and move prompts to translations
Co-Authored-By: Joe Moura <joao@crewai.com>
* Simplify function calling implementation with better error handling
Co-Authored-By: Joe Moura <joao@crewai.com>
* Enhance system prompts to leverage agent context (role, goal, backstory)
Co-Authored-By: Joe Moura <joao@crewai.com>
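A rough illustration of what "leveraging agent context" means here (the actual prompt templates live in the translations files and differ in wording):

    def reasoning_system_prompt(role: str, goal: str, backstory: str) -> str:
        # Illustrative only: folds the agent's persona into the system prompt
        # so the model reasons with role, goal, and backstory in mind.
        return (
            f"You are {role}. {backstory}\n"
            f"Your goal: {goal}\n"
            "Think through a plan before answering."
        )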
* Fix lint and type-checker issues
Co-Authored-By: Joe Moura <joao@crewai.com>
* Enhance system prompts to better leverage agent context
Co-Authored-By: Joe Moura <joao@crewai.com>
* Fix backstory access in reasoning handler for Python 3.12 compatibility
Co-Authored-By: Joe Moura <joao@crewai.com>
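A hedged guess at the shape of that compatibility fix (the real handler may access the field differently; only the defensive-access idea is illustrated):

    from types import SimpleNamespace

    def backstory_of(agent: object) -> str:
        # Defensive access: works whether backstory is unset, None, or a string.
        return getattr(agent, "backstory", "") or ""

    assert backstory_of(SimpleNamespace(backstory=None)) == ""
    assert backstory_of(SimpleNamespace()) == ""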
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>