feat: improvements on import native sdk support (#3725)

* feat: add support for Anthropic provider and enhance logging

- Introduced the `anthropic` package with version `0.69.0` in `pyproject.toml` and `uv.lock`, allowing for integration with the Anthropic API.
- Updated logging in the LLM class to provide clearer error messages when importing native providers, enhancing debugging capabilities.
- Improved error handling in the AnthropicCompletion class to guide users on installation via the updated error message format.
- Refactored import error handling in other provider classes to maintain consistency in error messaging and installation instructions.
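The consistent import-error pattern described above can be sketched as a small helper: try to import the optional native SDK, and re-raise with an install instruction on failure. The function name, message wording, and `uv add` hint below are illustrative assumptions, not the exact crewAI implementation.

```python
import importlib


def import_native_sdk(package_name: str, provider: str):
    """Import an optional native-provider SDK, re-raising a clearer
    ImportError (with install guidance) if the package is missing."""
    try:
        return importlib.import_module(package_name)
    except ImportError as e:
        raise ImportError(
            f"Error importing native provider '{provider}': the "
            f"'{package_name}' package is not installed. "
            f"Install it with: uv add {package_name}"
        ) from e
```

Chaining with `from e` keeps the original traceback, so the user sees both the guidance and the underlying import failure.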

* feat: enhance LLM support with Bedrock provider and update dependencies

- Added support for the `bedrock` provider in the LLM class, allowing integration with AWS Bedrock APIs.
- Updated `uv.lock` to replace `boto3` with `bedrock` in the dependencies, reflecting the new provider structure.
- Introduced `SUPPORTED_NATIVE_PROVIDERS` to include `bedrock` and ensure proper error handling when instantiating native providers.
- Enhanced error handling in the LLM class to raise informative errors when native provider instantiation fails.
- Added tests to validate the behavior of the new Bedrock provider and ensure fallback mechanisms work correctly for unsupported providers.
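The provider-resolution logic these bullets describe can be sketched as follows: a model string's prefix is checked against `SUPPORTED_NATIVE_PROVIDERS`, routing supported prefixes to the native path and everything else to the LiteLLM fallback. The membership of the set and the function below are assumptions for illustration; only the `SUPPORTED_NATIVE_PROVIDERS` name comes from the commit.

```python
# Hypothetical contents; the commit only says the set includes "bedrock".
SUPPORTED_NATIVE_PROVIDERS = {"openai", "anthropic", "azure", "bedrock"}


def resolve_provider(model: str) -> tuple[str, bool]:
    """Return (provider_prefix, is_native).

    Supported prefixes take the native path; unknown prefixes fall back
    to LiteLLM. Per the tests in this PR, a *supported* provider whose
    SDK fails to import should raise ImportError rather than silently
    falling back."""
    prefix = model.split("/", 1)[0]
    return prefix, prefix in SUPPORTED_NATIVE_PROVIDERS
```

Under this sketch, `bedrock/...` models are handled natively, while an unrecognized prefix such as `groq/...` still routes through LiteLLM.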

* test: update native provider fallback tests to expect ImportError

* adjust the test to match the expected behavior: raising ImportError

* this test expects the LiteLLM format; all Gemini native tests are in test_google.py

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Author: Lorenze Jay
Date: 2025-10-17 14:23:50 -07:00
Committed by: GitHub
Parent: c35a84de82
Commit: fa53a995e4
13 changed files with 128 additions and 91 deletions


@@ -123,9 +123,10 @@ def test_azure_completion_module_is_imported():
     assert hasattr(completion_mod, 'AzureCompletion')


-def test_fallback_to_litellm_when_native_azure_fails():
+def test_native_azure_raises_error_when_initialization_fails():
     """
-    Test that LLM falls back to LiteLLM when native Azure completion fails
+    Test that LLM raises ImportError when native Azure completion fails to initialize.
+    This ensures we don't silently fall back when there's a configuration issue.
     """
     # Mock the _get_native_provider to return a failing class
     with patch('crewai.llm.LLM._get_native_provider') as mock_get_provider:
@@ -136,12 +137,12 @@ def test_fallback_to_litellm_when_native_azure_fails():
         mock_get_provider.return_value = FailingCompletion

-        # This should fall back to LiteLLM
-        llm = LLM(model="azure/gpt-4")
+        # This should raise ImportError, not fall back to LiteLLM
+        with pytest.raises(ImportError) as excinfo:
+            LLM(model="azure/gpt-4")

-        # Check that it's using LiteLLM
-        assert hasattr(llm, 'is_litellm')
-        assert llm.is_litellm == True
+        assert "Error importing native provider" in str(excinfo.value)
+        assert "Native Azure AI Inference SDK failed" in str(excinfo.value)


 def test_azure_completion_initialization_parameters():