Fix issue #2984: Add support for watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8 model

- Added watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8 to the watsonx models list in constants.py
- Created tests that verify CLI model selection and LLM instantiation for the new model (see the sketch after this message)
- All existing tests continue to pass with no regressions
- Fixes the CLI validation error raised when users select this model for the watsonx provider

Resolves #2984
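
Not the commit's actual test file, but a minimal pytest-style sketch of the kind of checks described above. It assumes MODELS is importable from crewai.cli.constants and LLM from crewai.llm, as in the crewai codebase; adjust the import paths if they differ.

```python
# Sketch of the two checks the commit message describes: the model id is
# offered for the watsonx provider, and LLM accepts it at construction time.
from crewai.cli.constants import MODELS
from crewai.llm import LLM

NEW_MODEL = "watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8"

def test_model_listed_for_watsonx():
    # The CLI only offers models present in MODELS["watsonx"], so membership
    # here is what resolves the validation error from issue #2984.
    assert NEW_MODEL in MODELS["watsonx"]

def test_llm_instantiation():
    # Constructing an LLM performs no API call, so this only verifies that
    # the model string is accepted and stored.
    llm = LLM(model=NEW_MODEL)
    assert llm.model == NEW_MODEL
```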

Co-Authored-By: João <joao@crewai.com>
commit 048f05c755 (parent 5b740467cb)
Author: Devin AI
Date:   2025-06-10 10:13:05 +00:00

2 files changed, 49 insertions(+), 0 deletions(-)

constants.py

@@ -237,6 +237,7 @@ MODELS = {
         "watsonx/meta-llama/llama-3-2-1b-instruct",
         "watsonx/meta-llama/llama-3-2-90b-vision-instruct",
         "watsonx/meta-llama/llama-3-405b-instruct",
+        "watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8",
         "watsonx/mistral/mistral-large",
         "watsonx/ibm/granite-3-8b-instruct",
     ],