diff --git a/docs/en/concepts/llms.mdx b/docs/en/concepts/llms.mdx
index 80b69ce11..332d48576 100644
--- a/docs/en/concepts/llms.mdx
+++ b/docs/en/concepts/llms.mdx
@@ -684,6 +684,28 @@ In this section, you'll find detailed examples that help you select, configure,
         - openrouter/deepseek/deepseek-chat
+
+
+        Set the following environment variables in your `.env` file:
+        ```toml Code
+        NEBIUS_API_KEY=
+        ```
+
+        Example usage in your CrewAI project:
+        ```python Code
+        llm = LLM(
+            model="nebius/Qwen/Qwen3-30B-A3B"
+        )
+        ```
+
+
+        Nebius AI Studio features:
+        - Large collection of open source models
+        - Higher rate limits
+        - Competitive pricing
+        - Good balance of speed and quality
+
+
 ## Streaming Responses

diff --git a/docs/en/learn/llm-connections.mdx b/docs/en/learn/llm-connections.mdx
index fcc264f09..c70577887 100644
--- a/docs/en/learn/llm-connections.mdx
+++ b/docs/en/learn/llm-connections.mdx
@@ -34,6 +34,7 @@ LiteLLM supports a wide range of providers, including but not limited to:
 - DeepInfra
 - Groq
 - SambaNova
+- Nebius AI Studio
 - [NVIDIA NIMs](https://docs.api.nvidia.com/nim/reference/models-1)
 - And many more!
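
The `NEBIUS_API_KEY=` line added in the first hunk only takes effect once the `.env` file is loaded into the process environment. A minimal stdlib-only sketch of that loading step (an assumption for illustration — real CrewAI projects typically use a loader such as `python-dotenv`, and the key value below is a placeholder, not a real credential):

```python
import os


def load_env(text: str) -> dict:
    """Parse simple KEY=value lines, as found in a .env file."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip blanks, comments, and malformed lines without "=".
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env


# Hypothetical .env contents; the provider reads the key from the environment.
dotenv_text = "# Nebius AI Studio\nNEBIUS_API_KEY=nb-example-key\n"
os.environ.update(load_env(dotenv_text))
```

With the variable exported, the `LLM(model="nebius/Qwen/Qwen3-30B-A3B")` call shown in the diff can authenticate against Nebius AI Studio via LiteLLM's provider-prefixed model naming.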