feat: Add graceful quota limit handling for LLM APIs

- Create LLMQuotaLimitExceededException following CrewAI's existing pattern
- Add quota limit error handling in both streaming and non-streaming LLM calls
- Update error handling in agent execution and crew agent executor
- Add comprehensive tests for quota limit scenarios
- Fixes issue #3434: Handle RateLimitError gracefully instead of crashing

The implementation catches litellm.exceptions.RateLimitError and converts it
to a CrewAI-specific exception, allowing tasks to detect quota limits and
shut down gracefully instead of crashing with unhandled exceptions.
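The conversion described above can be sketched roughly as follows. This is an illustrative sketch, not CrewAI's actual implementation: the real code catches `litellm.exceptions.RateLimitError`, but a stand-in `RateLimitError` class is defined here so the example is self-contained, and the `call_llm` wrapper and exception constructor signature are assumptions.

```python
# Sketch of the exception-conversion pattern described in the commit
# message. Names beyond LLMQuotaLimitExceededException and
# RateLimitError are hypothetical.

class RateLimitError(Exception):
    """Stand-in for litellm.exceptions.RateLimitError (for self-containment)."""


class LLMQuotaLimitExceededException(Exception):
    """CrewAI-specific exception raised when an LLM provider reports a quota limit."""

    def __init__(self, error_message: str):
        super().__init__(f"LLM quota limit exceeded: {error_message}")


def call_llm(completion_fn, *args, **kwargs):
    """Invoke an LLM call, converting provider rate-limit errors.

    Callers can catch LLMQuotaLimitExceededException to detect quota
    limits and shut down gracefully instead of crashing.
    """
    try:
        return completion_fn(*args, **kwargs)
    except RateLimitError as e:
        # Re-raise as the CrewAI-specific exception, preserving the cause.
        raise LLMQuotaLimitExceededException(str(e)) from e
```

The same try/except wrapping would apply to both the streaming and non-streaming call paths mentioned above.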

Co-Authored-By: João <joao@crewai.com>
Author: Devin AI
Date: 2025-09-02 16:45:35 +00:00
parent 92d71f7f06
commit c763457e8d

6 changed files with 263 additions and 1 deletion


@@ -1 +1,4 @@
"""Exceptions for crewAI."""
from crewai.utilities.exceptions.context_window_exceeding_exception import LLMContextLengthExceededException
from crewai.utilities.exceptions.quota_limit_exception import LLMQuotaLimitExceededException