Commit 3bfa1c6559 (Devin AI)
Fix issue #3454: Add proactive context length checking to prevent empty LLM responses
- Add _check_context_length_before_call() method to CrewAgentExecutor
- Proactively check estimated token count before LLM calls in _invoke_loop
- Use character-based estimation (chars / 4) to approximate the token count (see the sketch below)
- Call existing _handle_context_length() when context window would be exceeded
- Add comprehensive tests covering proactive handling and token estimation
- Prevents empty responses from providers like DeepInfra that don't throw exceptions

Co-Authored-By: João <joao@crewai.com>
Committed: 2025-09-05 16:05:35 +00:00
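
The change amounts to a guard that runs before each LLM call rather than reacting to an exception afterwards: estimate the prompt size from its character length (roughly four characters per token) and, if that estimate would exceed the model's context window, run the existing context-length handler first. A minimal sketch of that flow follows. The class and method names (CrewAgentExecutor, _invoke_loop, _check_context_length_before_call, _handle_context_length) are taken from the commit message; everything else, including self.messages, the llm.get_context_window_size() / llm.call() helpers, FakeLLM, and the naive truncation fallback, is an illustrative assumption rather than CrewAI's actual implementation.

```python
# Illustrative sketch only; not CrewAI's real executor.
class CrewAgentExecutor:
    def __init__(self, llm, messages=None):
        self.llm = llm                    # assumed to report its context window in tokens
        self.messages = messages or []    # assumed list of {"role": ..., "content": ...} dicts

    def _check_context_length_before_call(self) -> bool:
        """Return True if the estimated prompt would exceed the context window."""
        total_chars = sum(len(m.get("content", "")) for m in self.messages)
        estimated_tokens = total_chars // 4          # ~4 characters per token heuristic
        return estimated_tokens > self.llm.get_context_window_size()

    def _handle_context_length(self) -> None:
        # Stand-in for the existing handler; the real one summarizes or trims
        # the conversation so it fits back inside the window.
        self.messages = self.messages[-2:]

    def _invoke_loop(self):
        # Guard *before* the call, so providers that silently return an empty
        # response on overflow (instead of raising) never see an oversized prompt.
        if self._check_context_length_before_call():
            self._handle_context_length()
        return self.llm.call(self.messages)


class FakeLLM:
    """Tiny stand-in LLM used to exercise the guard."""

    def get_context_window_size(self) -> int:
        return 8  # deliberately small so the proactive check fires

    def call(self, messages):
        return "ok"


executor = CrewAgentExecutor(
    FakeLLM(),
    [{"role": "user", "content": "x" * 40} for _ in range(5)],
)
print(executor._invoke_loop())  # history is trimmed before the call; prints "ok"
```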