mirror of
https://github.com/crewAIInc/crewAI.git
synced 2026-05-03 16:22:49 +00:00
feat: improvements on import native sdk support (#3725)
* feat: add support for Anthropic provider and enhance logging
  - Introduced the `anthropic` package (version `0.69.0`) in `pyproject.toml` and `uv.lock`, enabling integration with the Anthropic API.
  - Updated logging in the LLM class to give clearer error messages when importing native providers, improving debuggability.
  - Improved error handling in the AnthropicCompletion class so the error message guides users through installation.
  - Refactored import-error handling in the other provider classes for consistent error messaging and installation instructions.
* feat: enhance LLM support with Bedrock provider and update dependencies
  - Added support for the `bedrock` provider in the LLM class, allowing integration with AWS Bedrock APIs.
  - Updated `uv.lock` to replace `boto3` with `bedrock`, reflecting the new provider structure.
  - Introduced `SUPPORTED_NATIVE_PROVIDERS`, which includes `bedrock`, and ensured proper error handling when instantiating native providers.
  - Enhanced error handling in the LLM class to raise informative errors when native provider instantiation fails.
  - Added tests to validate the new Bedrock provider and confirm fallback behavior for unsupported providers.
* test: update native provider fallback tests to expect ImportError
* adjust the test to match the expected behavior: raising ImportError
* this is expecting the litellm format; all Gemini native tests are in test_google.py

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
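The core behavior change can be sketched independently of crewAI's internals: when a native provider class fails to instantiate, the constructor re-raises as `ImportError` rather than silently falling back to LiteLLM. The names below (`build_client`, `FailingCompletion`) are hypothetical stand-ins, not the actual crewAI API; only the error-message fragments mirror the strings asserted in the diffed test.

```python
class FailingCompletion:
    """Stands in for a native provider whose constructor blows up."""

    def __init__(self, *args, **kwargs):
        raise RuntimeError("Native Anthropic SDK failed")


def build_client(provider_cls, model: str):
    """Instantiate a native provider; wrap any failure in ImportError.

    Surfacing configuration problems immediately is preferred over a
    silent fallback that would mask them.
    """
    try:
        return provider_cls(model=model)
    except Exception as exc:
        raise ImportError(f"Error importing native provider: {exc}") from exc


# Usage: callers see one clear error type for provider setup failures.
try:
    build_client(FailingCompletion, "anthropic/claude-3-5-sonnet-20241022")
except ImportError as e:
    assert "Error importing native provider" in str(e)
    assert "Native Anthropic SDK failed" in str(e)
```

Chaining with `from exc` preserves the original traceback, so the underlying SDK error stays visible while tests can still match on a single exception type.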
@@ -141,9 +141,10 @@ def test_anthropic_completion_module_is_imported():
     assert hasattr(completion_mod, 'AnthropicCompletion')
 
 
-def test_fallback_to_litellm_when_native_anthropic_fails():
+def test_native_anthropic_raises_error_when_initialization_fails():
     """
-    Test that LLM falls back to LiteLLM when native Anthropic completion fails
+    Test that LLM raises ImportError when native Anthropic completion fails to initialize.
+    This ensures we don't silently fall back when there's a configuration issue.
     """
     # Mock the _get_native_provider to return a failing class
     with patch('crewai.llm.LLM._get_native_provider') as mock_get_provider:
@@ -154,12 +155,12 @@ def test_fallback_to_litellm_when_native_anthropic_fails():
 
         mock_get_provider.return_value = FailingCompletion
 
-        # This should fall back to LiteLLM
-        llm = LLM(model="anthropic/claude-3-5-sonnet-20241022")
+        # This should raise ImportError, not fall back to LiteLLM
+        with pytest.raises(ImportError) as excinfo:
+            LLM(model="anthropic/claude-3-5-sonnet-20241022")
 
-        # Check that it's using LiteLLM
-        assert hasattr(llm, 'is_litellm')
-        assert llm.is_litellm == True
+        assert "Error importing native provider" in str(excinfo.value)
+        assert "Native Anthropic SDK failed" in str(excinfo.value)
 
 
 def test_anthropic_completion_initialization_parameters():
||||