crewAI/src/crewai
Devin AI 048f05c755 Fix issue #2984: Add support for watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8 model
- Added watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8 to the watsonx models list in constants.py (a hedged sketch of the change and its usage follows after this commit entry)
- Created tests covering CLI model selection and LLM instantiation for the new model
- All existing tests continue to pass, with no regressions
- Fixes the CLI validation error raised when users select this model with the watsonx provider

Resolves #2984

Co-Authored-By: João <joao@crewai.com>
2025-06-10 10:13:05 +00:00
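
As a rough illustration of the change described in this commit, the sketch below is not the actual crewAI source. It mimics a provider-to-model-id list like the one described for constants.py and shows how the new model string would pass the CLI's membership check and be handed to crewAI's LLM class. The WATSONX_MODELS name, the neighboring model entry, and the environment variable names are assumptions for illustration only.

```python
# Illustrative sketch only, not the actual crewAI source. It assumes a simple
# provider -> model-id list like the one described for constants.py and uses
# the public crewai.LLM interface. WATSONX_MODELS and the first list entry are
# hypothetical names for illustration.
from crewai import LLM

WATSONX_MODELS = [
    "watsonx/ibm/granite-3-8b-instruct",  # illustrative pre-existing entry
    "watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8",  # newly added id
]

model_id = "watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8"

# The CLI validation that previously rejected this model amounts to a
# membership check against the provider's model list.
assert model_id in WATSONX_MODELS

# Instantiating the LLM with the new model string; watsonx credentials are
# expected to come from the environment (for example WATSONX_APIKEY,
# WATSONX_URL, and WATSONX_PROJECT_ID in litellm-style setups).
llm = LLM(model=model_id, temperature=0.7)
print(llm.model)
```

With the id present in the list, the CLI's provider/model selection accepts it and the resulting LLM object carries the watsonx-prefixed model string through to the underlying completion backend.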