Mirror of https://github.com/crewAIInc/crewAI.git, synced 2025-12-16 04:18:35 +00:00
Adding Nebius to docs (#3070)
* Adding Nebius to docs

  Submitting this PR on behalf of Nebius AI Studio to add Nebius models to the CrewAI documentation. I tested with the latest CrewAI + Nebius setup to ensure compatibility. cc @tonykipkemboi

* updated LiteLLM page

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
@@ -684,6 +684,28 @@ In this section, you'll find detailed examples that help you select, configure,
 - openrouter/deepseek/deepseek-chat
 </Info>
 </Accordion>
+
+<Accordion title="Nebius AI Studio">
+Set the following environment variables in your `.env` file:
+```toml Code
+NEBIUS_API_KEY=<your-api-key>
+```
+
+Example usage in your CrewAI project:
+```python Code
+llm = LLM(
+    model="nebius/Qwen/Qwen3-30B-A3B"
+)
+```
+
+<Info>
+Nebius AI Studio features:
+- Large collection of open source models
+- Higher rate limits
+- Competitive pricing
+- Good balance of speed and quality
+</Info>
+</Accordion>
 </AccordionGroup>

 ## Streaming Responses
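For context, a minimal sketch of how the Nebius-backed `LLM` from the hunk above might be wired into a crew. The agent, task, and goal text here are illustrative and are not part of the diff; the sketch assumes `NEBIUS_API_KEY` is already set in the environment (for example via the `.env` file shown above).

```python
# Illustrative sketch only (not part of the diff): using the Nebius-backed LLM
# with a minimal agent and task. Assumes NEBIUS_API_KEY is set in the environment.
from crewai import Agent, Crew, Task, LLM

llm = LLM(model="nebius/Qwen/Qwen3-30B-A3B")

researcher = Agent(
    role="Researcher",
    goal="Summarize the latest open source LLM releases",
    backstory="An analyst who tracks open model announcements.",
    llm=llm,  # route this agent's calls through Nebius AI Studio
)

task = Task(
    description="Write a short summary of notable open source LLM releases.",
    expected_output="A concise bullet-point summary.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()
print(result)
```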
@@ -34,6 +34,7 @@ LiteLLM supports a wide range of providers, including but not limited to:
 - DeepInfra
 - Groq
 - SambaNova
+- Nebius AI Studio
 - [NVIDIA NIMs](https://docs.api.nvidia.com/nim/reference/models-1)
 - And many more!
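As a side note on the provider list above: with LiteLLM-style routing, the prefix before the first slash in the model string selects the provider, which is why the Nebius example uses the `nebius/` prefix. The sketch below only restates that convention; the placeholder `<model-name>` strings are not real model identifiers.

```python
# Illustrative only: the provider prefix in the model string selects the backend.
from crewai import LLM

nebius_llm = LLM(model="nebius/Qwen/Qwen3-30B-A3B")  # served by Nebius AI Studio
# Other providers from the list follow the same pattern, e.g.
# LLM(model="groq/<model-name>") or LLM(model="sambanova/<model-name>").
```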