Lorenze/native inference sdks (#3619)

* ruff linted

* using native sdks with litellm fallback

* drop exa

* drop print on completion

* Refactor LLM and utility functions for type consistency

- Updated `max_tokens` parameter in `LLM` class to accept `float` in addition to `int`.
- Modified `create_llm` function to ensure consistent type hints and return types, now returning `LLM | BaseLLM | None`.
- Adjusted type hints for various parameters in `create_llm` and `_llm_via_environment_or_fallback` functions for improved clarity and type safety.
- Enhanced test cases to reflect changes in type handling and ensure proper instantiation of LLM instances.
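
For context, a minimal sketch of what the tightened `create_llm` typing could look like — the class stubs and branching below are illustrative assumptions, not the PR's actual code:

```python
from __future__ import annotations

from typing import Any


class BaseLLM:  # stand-in for crewai's base class
    ...


class LLM(BaseLLM):
    def __init__(self, model: str, max_tokens: int | float | None = None, **kwargs: Any) -> None:
        self.model = model
        self.max_tokens = max_tokens  # now accepts int or float


def create_llm(llm_value: str | BaseLLM | None = None) -> LLM | BaseLLM | None:
    """Return an LLM instance (or None) with a consistent return type."""
    if llm_value is None:
        return None
    if isinstance(llm_value, BaseLLM):
        return llm_value  # already constructed, pass it through
    return LLM(model=llm_value)
```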

* fix agent_tests

* fix litellm tests and usage metrics

* drop print

* Refactor LLM event handling and improve test coverage

- Removed commented-out event emission for LLM call failures in `llm.py`.
- Added `from_agent` parameter to `CrewAgentExecutor` for better context in LLM responses.
- Enhanced test for LLM call failure to simulate OpenAI API failure and updated assertions for clarity.
- Updated agent and task ID assertions in tests to ensure they are consistently treated as strings.
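
A minimal sketch of the idea behind `from_agent`, using stand-in classes rather than the real `CrewAgentExecutor`: the executor holds a reference to the agent and passes it along so LLM responses and emitted events can carry agent context.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Agent:  # stand-in
    id: str
    role: str


class LLM:  # stand-in
    def call(self, prompt: str, from_agent: Agent | None = None) -> str:
        suffix = f" (agent={from_agent.role})" if from_agent else ""
        return f"response to {prompt!r}{suffix}"


class CrewAgentExecutor:  # stand-in
    def __init__(self, llm: LLM, from_agent: Agent | None = None) -> None:
        self.llm = llm
        self.from_agent = from_agent  # extra context for LLM responses and events

    def invoke(self, prompt: str) -> str:
        return self.llm.call(prompt, from_agent=self.from_agent)


print(CrewAgentExecutor(LLM(), from_agent=Agent(id="1", role="Researcher")).invoke("hello"))
```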

* fix test_converter

* fixed tests/agents/test_agent.py

* Refactor LLM context length exception handling and improve provider integration

- Renamed `LLMContextLengthExceededException` to `LLMContextLengthExceededExceptionError` for clarity and consistency.
- Updated LLM class to pass the provider parameter correctly during initialization.
- Enhanced error handling in various LLM provider implementations to raise the new exception type.
- Adjusted tests to reflect the updated exception name and ensure proper error handling in context length scenarios.
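
A hedged sketch of the renamed error in use; only the exception name comes from the commit, the helper around it is illustrative:

```python
class LLMContextLengthExceededExceptionError(Exception):
    """Raised when a prompt exceeds the model's context window."""


def check_context(prompt_tokens: int, context_window: int) -> None:
    # Illustrative helper: each provider implementation would raise the same error type.
    if prompt_tokens > context_window:
        raise LLMContextLengthExceededExceptionError(
            f"Prompt uses {prompt_tokens} tokens but the model allows {context_window}."
        )


check_context(prompt_tokens=1_200, context_window=128_000)  # ok, no exception
```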

* Enhance LLM context window handling across providers

- Introduced CONTEXT_WINDOW_USAGE_RATIO to adjust context window sizes dynamically for Anthropic, Azure, Gemini, and OpenAI LLMs.
- Added validation for context window sizes in Azure and Gemini providers to ensure they fall within acceptable limits.
- Updated context window size calculations to use the new ratio, improving consistency and adaptability across different models.
- Removed hardcoded context window sizes in favor of ratio-based calculations for better flexibility.
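
A sketch of ratio-based sizing under assumed values — the actual ratio, bounds, and per-model window sizes in the repository may differ:

```python
# Illustrative only: the ratio value, bounds, and model sizes below are assumptions.
CONTEXT_WINDOW_USAGE_RATIO = 0.75  # leave headroom for the response

MIN_CONTEXT_WINDOW = 1_024
MAX_CONTEXT_WINDOW = 2_000_000

MODEL_CONTEXT_WINDOWS = {
    "gpt-4o-mini": 128_000,
    "claude-3-5-sonnet": 200_000,
    "gemini-1.5-pro": 1_000_000,
}


def usable_context_window(model: str) -> int:
    raw = MODEL_CONTEXT_WINDOWS.get(model, 8_192)
    if not (MIN_CONTEXT_WINDOW <= raw <= MAX_CONTEXT_WINDOW):
        raise ValueError(f"Context window {raw} for {model!r} is out of range")
    return int(raw * CONTEXT_WINDOW_USAGE_RATIO)


print(usable_context_window("gpt-4o-mini"))  # 96000
```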

* fix test agent again

* fix test agent

* feat: add native LLM providers for Anthropic, Azure, and Gemini

- Introduced new completion implementations for Anthropic, Azure, and Gemini, integrating their respective SDKs.
- Added utility functions for tool validation and extraction to support function calling across LLM providers.
- Enhanced context window management and token usage extraction for each provider.
- Created a common utility module for shared functionality among LLM providers.
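
A rough, assumed shape for such a provider class; the real implementations live under `crewai.llms.providers.*` and call the vendor SDKs, which this stub does not:

```python
from typing import Any


class BaseCompletion:  # assumed common base, not the repository's exact class
    def __init__(self, model: str, **client_kwargs: Any) -> None:
        self.model = model
        self.client_kwargs = client_kwargs
        self.last_prompt_tokens = 0
        self.last_completion_tokens = 0

    def call(self, messages: list[dict[str, str]]) -> str:
        response = self._handle_completion(messages)
        self._track_usage(response)
        return response["text"]

    def _handle_completion(self, messages: list[dict[str, str]]) -> dict[str, Any]:
        raise NotImplementedError

    def _track_usage(self, response: dict[str, Any]) -> None:
        usage = response.get("usage", {})
        self.last_prompt_tokens = usage.get("prompt_tokens", 0)
        self.last_completion_tokens = usage.get("completion_tokens", 0)


class OpenAICompletion(BaseCompletion):
    def _handle_completion(self, messages: list[dict[str, str]]) -> dict[str, Any]:
        # The real implementation would invoke the OpenAI SDK here.
        return {"text": "stubbed response", "usage": {"prompt_tokens": 12, "completion_tokens": 3}}
```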

* chore: update dependencies and improve context management

- Removed direct dependency on `litellm` from the main dependencies and added it under extras for better modularity.
- Updated the `litellm` dependency specification to allow for greater flexibility in versioning.
- Refactored context length exception handling across various LLM providers to use a consistent error class.
- Enhanced platform-specific dependency markers for NVIDIA packages to ensure compatibility across different systems.
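
With `litellm` optional, imports have to degrade gracefully. A sketch of a guarded import, assuming the extra is exposed as `crewai[litellm]` — the error wording is illustrative:

```python
def get_litellm():
    """Import litellm lazily and fail with an actionable message if the extra is missing."""
    try:
        import litellm
    except ImportError as exc:  # litellm is now an optional extra, not a hard dependency
        raise ImportError(
            "litellm is not installed. Install it with: pip install 'crewai[litellm]'"
        ) from exc
    return litellm
```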

* refactor(tests): update LLM instantiation to include is_litellm flag in test cases

- Modified multiple test cases in test_llm.py to set the is_litellm parameter to True when instantiating the LLM class.
- This change ensures that the tests are aligned with the latest LLM configuration requirements and improves consistency across test scenarios.
- Adjusted relevant assertions and comments to reflect the updated LLM behavior.
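
For illustration, a test along these lines, assuming the constructor accepts `is_litellm` as the commit describes:

```python
from crewai.llm import LLM


def test_llm_uses_litellm_route():
    # Force the LiteLLM code path explicitly so existing cassettes and behavior keep matching.
    llm = LLM(model="gpt-4o-mini", is_litellm=True)
    assert llm.model == "gpt-4o-mini"
```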

* linter

* linted

* revert constants

* fix(tests): correct type hint in expected model description

- Updated the expected description in the test_generate_model_description_dict_field function to use 'Dict' instead of 'dict' for consistency with type hinting conventions.
- This change ensures that the test accurately reflects the expected output format for model descriptions.

* refactor(llm): enhance LLM instantiation and error handling

- Updated the LLM class to include validation for the model parameter, ensuring it is a non-empty string.
- Improved error handling by logging warnings when the native SDK fails, allowing for a fallback to LiteLLM.
- Adjusted the instantiation of LLM in test cases to consistently include the is_litellm flag, aligning with recent changes in LLM configuration.
- Modified relevant tests to reflect these updates, ensuring better coverage and accuracy in testing scenarios.
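
A hedged sketch of that validate-then-fallback flow; the helper functions are hypothetical stand-ins for the native SDK and LiteLLM paths:

```python
import logging

logger = logging.getLogger(__name__)


def complete(model: str, prompt: str) -> str:
    # Assumed control flow, not the PR's literal code: validate `model`,
    # try the native SDK first, then fall back to LiteLLM with a logged warning.
    if not isinstance(model, str) or not model.strip():
        raise ValueError("model must be a non-empty string")
    try:
        return _native_sdk_call(model, prompt)  # hypothetical helper
    except Exception as exc:
        logger.warning("Native SDK failed for %s (%s); falling back to LiteLLM", model, exc)
        return _litellm_call(model, prompt)  # hypothetical helper


def _native_sdk_call(model: str, prompt: str) -> str:
    raise RuntimeError("native SDK unavailable in this sketch")


def _litellm_call(model: str, prompt: str) -> str:
    return f"[litellm fallback] {prompt}"


print(complete("gpt-4o-mini", "hello"))
```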

* fixed test

* refactor(llm): enhance token usage tracking and add copy methods

- Updated the LLM class to track token usage and log callbacks in streaming mode, improving monitoring capabilities.
- Introduced shallow and deep copy methods for the LLM instance, allowing for better management of LLM configurations and parameters.
- Adjusted test cases to instantiate LLM with the is_litellm flag, ensuring alignment with recent changes in LLM configuration.
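
A sketch of what shallow and deep copy support on a config-style object typically looks like; attribute names here are assumptions, not the LLM class's real fields:

```python
import copy
from typing import Any


class LLMConfig:  # stand-in for an LLM-like configuration object
    def __init__(self, model: str, additional_params: dict[str, Any] | None = None) -> None:
        self.model = model
        self.additional_params = additional_params or {}

    def __copy__(self) -> "LLMConfig":
        return LLMConfig(self.model, self.additional_params)  # shares the params dict

    def __deepcopy__(self, memo: dict[int, Any]) -> "LLMConfig":
        return LLMConfig(self.model, copy.deepcopy(self.additional_params, memo))


original = LLMConfig("gpt-4o-mini", {"temperature": 0.2})
assert copy.copy(original).additional_params is original.additional_params
assert copy.deepcopy(original).additional_params is not original.additional_params
```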

* refactor(tests): reorganize imports and enhance error messages in test cases

- Cleaned up import statements in test_crew.py for better organization and readability.
- Enhanced error messages in test cases to use `re.escape` for improved regex matching, ensuring more robust error handling.
- Adjusted comments for clarity and consistency across test scenarios.
- Ensured that all necessary modules are imported correctly to avoid potential runtime issues.
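
As a reminder of why `re.escape` matters here: `pytest.raises(match=...)` interprets the string as a regular expression, so literal parentheses or brackets in an error message must be escaped for the match to succeed.

```python
import re

import pytest


def test_error_message_matching():
    message = "Invalid value (expected [1, 2, 3])"
    # Without re.escape, the parentheses and brackets would be parsed as regex syntax.
    with pytest.raises(ValueError, match=re.escape(message)):
        raise ValueError(message)
```
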
Author: Lorenze Jay
Date: 2025-10-03 14:32:35 -07:00 (committed via GitHub)
Parent: 428810bd6f
Commit: 126b91eab3
77 files changed, 25,026 insertions(+), 493 deletions(-)


@@ -1,17 +1,12 @@
-import os
from datetime import datetime
+import os
from unittest.mock import Mock, patch
-import pytest
-from pydantic import Field
from crewai.agent import Agent
from crewai.agents.crew_agent_executor import CrewAgentExecutor
from crewai.crew import Crew
-from crewai.flow.flow import Flow, listen, start
-from crewai.llm import LLM
-from crewai.task import Task
-from crewai.tools.base_tool import BaseTool
+from crewai.events.event_bus import crewai_event_bus
+from crewai.events.event_listener import EventListener
from crewai.events.types.agent_events import (
AgentExecutionCompletedEvent,
AgentExecutionErrorEvent,
@@ -25,9 +20,6 @@ from crewai.events.types.crew_events import (
CrewTestResultEvent,
CrewTestStartedEvent,
)
-from crewai.events.event_bus import crewai_event_bus
-from crewai.events.event_listener import EventListener
-from crewai.events.types.tool_usage_events import ToolUsageFinishedEvent
from crewai.events.types.flow_events import (
FlowCreatedEvent,
FlowFinishedEvent,
@@ -48,7 +40,14 @@ from crewai.events.types.task_events import (
)
from crewai.events.types.tool_usage_events import (
ToolUsageErrorEvent,
+ToolUsageFinishedEvent,
)
+from crewai.flow.flow import Flow, listen, start
+from crewai.llm import LLM
+from crewai.task import Task
+from crewai.tools.base_tool import BaseTool
+from pydantic import Field
+import pytest
@pytest.fixture(scope="module")
@@ -195,7 +194,7 @@ def test_crew_emits_kickoff_failed_event(base_agent, base_task):
error_message = "Simulated crew kickoff failure"
mock_execute.side_effect = Exception(error_message)
-with pytest.raises(Exception):
+with pytest.raises(Exception):  # noqa: B017
crew.kickoff()
assert len(received_events) == 1
@@ -279,7 +278,7 @@ def test_task_emits_failed_event_on_execution_error(base_agent, base_task):
agent=agent,
)
-with pytest.raises(Exception):
+with pytest.raises(Exception):  # noqa: B017
agent.execute_task(task=task)
assert len(received_events) == 1
@@ -333,7 +332,7 @@ def test_agent_emits_execution_error_event(base_agent, base_task):
) as invoke_mock:
invoke_mock.side_effect = Exception(error_message)
-with pytest.raises(Exception):
+with pytest.raises(Exception):  # noqa: B017
base_agent.execute_task(
task=base_task,
)
@@ -517,7 +516,6 @@ def test_flow_emits_method_execution_started_event():
@crewai_event_bus.on(MethodExecutionStartedEvent)
def handle_method_start(source, event):
-print("event in method name", event.method_name)
received_events.append(event)
class TestFlow(Flow[dict]):
@@ -619,7 +617,7 @@ def test_flow_emits_method_execution_failed_event():
raise error
flow = TestFlow()
-with pytest.raises(Exception):
+with pytest.raises(Exception):  # noqa: B017
flow.kickoff()
assert len(received_events) == 1
@@ -655,6 +653,7 @@ def test_llm_emits_call_started_event():
@pytest.mark.vcr(filter_headers=["authorization"])
@pytest.mark.isolated
def test_llm_emits_call_failed_event():
received_events = []
@@ -662,13 +661,18 @@ def test_llm_emits_call_failed_event():
def handle_llm_call_failed(source, event):
received_events.append(event)
-error_message = "Simulated LLM call failure"
-with patch("crewai.llm.litellm.completion", side_effect=Exception(error_message)):
+error_message = "OpenAI API call failed: Simulated API failure"
+with patch(
+    "crewai.llms.providers.openai.completion.OpenAICompletion._handle_completion"
+) as mock_handle_completion:
+    mock_handle_completion.side_effect = Exception("Simulated API failure")
llm = LLM(model="gpt-4o-mini")
with pytest.raises(Exception) as exc_info:
llm.call("Hello, how are you?")
-assert str(exc_info.value) == error_message
+assert str(exc_info.value) == "Simulated API failure"
assert len(received_events) == 1
assert received_events[0].type == "llm_call_failed"
assert received_events[0].error == error_message
@@ -884,8 +888,8 @@ def test_stream_llm_emits_event_with_task_and_agent_info():
assert len(all_task_name) == 14
assert set(all_agent_roles) == {agent.role}
-assert set(all_agent_id) == {agent.id}
-assert set(all_task_id) == {task.id}
+assert set(all_agent_id) == {str(agent.id)}
+assert set(all_task_id) == {str(task.id)}
assert set(all_task_name) == {task.name or task.description}
@@ -935,8 +939,8 @@ def test_llm_emits_event_with_task_and_agent_info(base_agent, base_task):
assert len(all_task_name) == 2
assert set(all_agent_roles) == {base_agent.role}
-assert set(all_agent_id) == {base_agent.id}
-assert set(all_task_id) == {base_task.id}
+assert set(all_agent_id) == {str(base_agent.id)}
+assert set(all_task_id) == {str(base_task.id)}
assert set(all_task_name) == {base_task.name or base_task.description}
@@ -991,4 +995,4 @@ def test_llm_emits_event_with_lite_agent():
assert len(all_task_name) == 0
assert set(all_agent_roles) == {agent.role}
-assert set(all_agent_id) == {agent.id}
+assert set(all_agent_id) == {str(agent.id)}