mirror of
https://github.com/crewAIInc/crewAI.git
synced 2026-05-03 00:02:36 +00:00
Fix issue #2984: Add support for watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8 model
- Added watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8 to the watsonx models list in constants.py
- Created comprehensive tests to verify CLI model selection and LLM instantiation
- All existing tests continue to pass with no regressions
- Fixes CLI validation error when users try to select this model for the watsonx provider

Resolves #2984

Co-Authored-By: João <joao@crewai.com>
@@ -237,6 +237,7 @@ MODELS = {
     "watsonx/meta-llama/llama-3-2-1b-instruct",
     "watsonx/meta-llama/llama-3-2-90b-vision-instruct",
     "watsonx/meta-llama/llama-3-405b-instruct",
+    "watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8",
     "watsonx/mistral/mistral-large",
     "watsonx/ibm/granite-3-8b-instruct",
 ],
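The commit message mentions tests that verify the model is selectable. A minimal sketch of such a check, assuming a MODELS dict shaped like the one in crewai's constants.py (here inlined with only the watsonx entries from the diff; the real dict covers many providers, and the test name is hypothetical):

```python
# Minimal copy of the watsonx entry from the diffed MODELS dict in
# constants.py; the real dict in crewAI contains many more providers.
MODELS = {
    "watsonx": [
        "watsonx/meta-llama/llama-3-2-1b-instruct",
        "watsonx/meta-llama/llama-3-2-90b-vision-instruct",
        "watsonx/meta-llama/llama-3-405b-instruct",
        "watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8",
        "watsonx/mistral/mistral-large",
        "watsonx/ibm/granite-3-8b-instruct",
    ],
}


def test_llama_4_maverick_listed_for_watsonx():
    # CLI validation only accepts models present in the provider's list,
    # so the fix is verified by the new entry being present here.
    assert (
        "watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8"
        in MODELS["watsonx"]
    )
```

Before this commit, the assertion above would fail, which is why selecting the model in the CLI raised a validation error.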