mirror of
https://github.com/crewAIInc/crewAI.git
synced 2026-04-08 12:08:15 +00:00
* feat: add native OpenAI-compatible providers (OpenRouter, DeepSeek, Ollama, vLLM, Cerebras, Dashscope)

  Add a data-driven, OpenAI-compatible provider system that enables native
  support for third-party APIs implementing the OpenAI API specification.

  New providers:
  - OpenRouter: 500+ models via openrouter.ai
  - DeepSeek: deepseek-chat, deepseek-coder, deepseek-reasoner
  - Ollama: local models (llama3, mistral, codellama, etc.)
  - hosted_vllm: self-hosted vLLM servers
  - Cerebras: ultra-fast inference
  - Dashscope: Alibaba Qwen models (qwen-turbo, qwen-max, etc.)

  Architecture:
  - A single OpenAICompatibleCompletion class extends OpenAICompletion
  - A ProviderConfig dataclass stores per-provider settings
  - A registry dict makes adding a new provider a single config entry
  - Handles provider-specific quirks (OpenRouter headers, Ollama base-URL
    normalization, optional API keys)

  Usage:
    LLM(model="deepseek/deepseek-chat")
    LLM(model="ollama/llama3")
    LLM(model="openrouter/anthropic/claude-3-opus")
    LLM(model="llama3", provider="ollama")

  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix: add is_litellm=True to tests that exercise litellm-specific methods

  Tests for _get_custom_llm_provider and _validate_call_params used the
  openrouter/ model prefix, which now routes to the native provider. Added
  is_litellm=True to force the litellm path, since these tests exercise
  litellm-specific internals.

---------

Co-authored-by: Joao Moura <joao@crewai.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>