feat sambanova models (#1858)
Co-authored-by: jorgep_snova <jorge.piedrahita@sambanovasystems.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
Commit 0e94236735 (parent 673a38c5d9), committed via GitHub.
@@ -161,6 +161,7 @@ The CLI will initially prompt for API keys for the following services:

* Groq
* Anthropic
* Google Gemini
* SambaNova

When you select a provider, the CLI will prompt you to enter your API key.
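As a side note beyond the diff itself, the key you enter is typically written to the project's `.env` file and read back as an environment variable before any LLM call is made. Below is a minimal sketch of that check, assuming the provider-specific variable name `SAMBANOVA_API_KEY` (an illustrative assumption, not taken from this commit):

```python
# Minimal sketch (not part of the diff): verifying that the API key the CLI
# collected is available at runtime. The variable name SAMBANOVA_API_KEY is
# an assumption used for illustration.
import os

api_key = os.environ.get("SAMBANOVA_API_KEY")
if not api_key:
    raise RuntimeError(
        "SAMBANOVA_API_KEY is not set; enter it via the CLI prompt or add it to .env"
    )
```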
@@ -146,6 +146,19 @@ Here's a detailed breakdown of supported models and their capabilities, you can

Groq is known for its fast inference speeds, making it suitable for real-time applications.
</Tip>
</Tab>
<Tab title="SambaNova">
| Model | Context Window | Best For |
|-------|----------------|----------|
| Llama 3.1 70B/8B | Up to 131,072 tokens | High-performance, large-context tasks |
| Llama 3.1 405B | 8,192 tokens | High performance and output quality |
| Llama 3.2 Series | 8,192 tokens | General-purpose, multimodal tasks |
| Llama 3.3 70B | Up to 131,072 tokens | High performance and output quality |
| Qwen2 family | 8,192 tokens | High performance and output quality |

<Tip>
[SambaNova](https://cloud.sambanova.ai/) offers several models with fast inference speeds at full precision.
</Tip>
</Tab>
<Tab title="Others">
| Provider | Context Window | Key Features |
|----------|----------------|--------------|
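As a usage note outside the diff, here is a minimal sketch of pointing an agent's LLM at one of the SambaNova models listed above. It assumes the `LLM` class exported by `crewai`, a litellm-style `sambanova/` model prefix, and an illustrative model identifier; check the provider documentation for exact model names.

```python
# Minimal sketch, not from the diff: configuring a SambaNova-hosted model.
# Assumptions: crewai exposes an LLM class, model strings use a litellm-style
# "sambanova/" prefix, and the model identifier below is illustrative.
import os

from crewai import LLM

# Normally set via the CLI prompt / .env rather than hard-coded.
os.environ.setdefault("SAMBANOVA_API_KEY", "<your-key-here>")

llm = LLM(
    model="sambanova/Meta-Llama-3.1-70B-Instruct",  # illustrative model name
    temperature=0.2,
)
```

The resulting `llm` object could then be passed to an agent, e.g. `Agent(role=..., llm=llm)`.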