mirror of
https://github.com/crewAIInc/crewAI.git
synced 2026-04-07 11:38:16 +00:00
- Token counting: make TokenCalcHandler a standalone class that conditionally inherits from litellm.CustomLogger when litellm is available, and works as a plain object when litellm is not installed
- Callbacks: guard set_callbacks() and set_env_callbacks() behind LITELLM_AVAILABLE checks; these only affect the litellm fallback path, since native providers emit events via base_llm.py
- Feature detection: guard supports_function_calling(), supports_stop_words(), and _validate_call_params() behind LITELLM_AVAILABLE checks with sensible defaults (True for function calling and stop words, since all modern models support them)
- Error types: replace litellm.exceptions.ContextWindowExceededError catches with pattern-based detection via LLMContextLengthExceededError._is_context_limit_error()

This decouples crewAI's internal infrastructure from litellm, allowing the native providers (OpenAI, Anthropic, Azure, Bedrock, Gemini) to work without litellm installed. The litellm fallback for niche providers still works when litellm is installed.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
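The conditional-inheritance trick for the token-counting handler can be sketched as follows. This is an illustrative sketch, not the actual crewAI implementation: the class name TokenCalcHandler and the litellm.CustomLogger base come from the commit text, while the attribute names and body are assumptions.

```python
# Sketch: conditionally inherit from litellm's CustomLogger when litellm is
# importable, otherwise fall back to a plain object base class.
try:
    from litellm.integrations.custom_logger import CustomLogger

    LITELLM_AVAILABLE = True
except ImportError:
    CustomLogger = object  # plain base class when litellm is not installed
    LITELLM_AVAILABLE = False


class TokenCalcHandler(CustomLogger):
    """Accumulates token counts; usable as a litellm callback only when
    litellm is installed, and as an ordinary object otherwise."""

    def __init__(self) -> None:
        # Hypothetical counters for illustration.
        self.prompt_tokens = 0
        self.completion_tokens = 0

    def add_usage(self, prompt: int, completion: int) -> None:
        self.prompt_tokens += prompt
        self.completion_tokens += completion
```

Because Python resolves the base class at import time, the same class definition serves both configurations without any runtime branching inside the handler itself.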
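The guarded feature-detection pattern might look like the sketch below. The method name supports_function_calling() and the True default come from the commit text; the LLM class shape, the LITELLM_AVAILABLE flag, and the delegation to litellm.supports_function_calling() are assumptions for illustration.

```python
# Sketch: feature detection guarded behind a LITELLM_AVAILABLE check,
# defaulting to True when litellm cannot be consulted.
try:
    import litellm

    LITELLM_AVAILABLE = True
except ImportError:
    LITELLM_AVAILABLE = False


class LLM:
    def __init__(self, model: str) -> None:
        self.model = model

    def supports_function_calling(self) -> bool:
        if not LITELLM_AVAILABLE:
            # Sensible default: all modern models support function calling.
            return True
        try:
            return litellm.supports_function_calling(self.model)
        except Exception:
            # Unknown model name, or litellm lookup failed: fall back.
            return True
```

supports_stop_words() and _validate_call_params() would follow the same shape: consult litellm when it is present, otherwise return a permissive default so native providers keep working.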
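Pattern-based context-limit detection, as a rough sketch: the class and method names LLMContextLengthExceededError._is_context_limit_error() come from the commit text, but the specific substrings matched here are assumptions, not the project's actual pattern list.

```python
# Sketch: detect context-window errors from any provider by matching common
# phrases in the error message, instead of catching a litellm-specific
# exception type.
class LLMContextLengthExceededError(Exception):
    # Hypothetical patterns; real providers vary in wording.
    CONTEXT_LIMIT_PATTERNS = (
        "context window",
        "context length",
        "maximum context",
        "token limit",
    )

    @classmethod
    def _is_context_limit_error(cls, error_message: str) -> bool:
        msg = error_message.lower()
        return any(pattern in msg for pattern in cls.CONTEXT_LIMIT_PATTERNS)
```

A caller would catch a broad exception from the provider call, test its message with _is_context_limit_error(), and re-raise as LLMContextLengthExceededError on a match; anything else propagates unchanged.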