Mirror of https://github.com/crewAIInc/crewAI.git (synced 2025-12-16 04:18:35 +00:00)
feat: async llm support
Some checks failed
CodeQL Advanced / Analyze (actions) (push) Has been cancelled
CodeQL Advanced / Analyze (python) (push) Has been cancelled
Check Documentation Broken Links / Check broken links (push) Has been cancelled
Notify Downstream / notify-downstream (push) Has been cancelled
Build uv cache / build-cache (3.10) (push) Has been cancelled
Build uv cache / build-cache (3.11) (push) Has been cancelled
Build uv cache / build-cache (3.12) (push) Has been cancelled
Build uv cache / build-cache (3.13) (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled
feat: introduce async contract to BaseLLM
feat: add async call support for the Azure, Anthropic, OpenAI, Gemini, Bedrock, and LiteLLM providers
chore: expand scrubbed header fields (conftest, anthropic, bedrock)
chore: update docs to cover async functionality
chore: update and harden tests to support acall; re-add uri for cassette compatibility
chore: generate missing cassette
fix: ensure acall is non-abstract and set supports_tools = true for supported Anthropic models
chore: improve Bedrock async docstring and general test robustness
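Per the commit message, providers gain an asynchronous acall entry point alongside the synchronous call. Below is a minimal usage sketch; the import path, constructor arguments, and acall signature shown here are assumptions for illustration, not taken from this commit.

import asyncio

from crewai import LLM  # assumed public import path


async def main() -> None:
    # Model name reused from the test diff below; per the commit message any
    # supported provider (Azure, Anthropic, OpenAI, Gemini, Bedrock, LiteLLM)
    # should offer the same async entry point.
    llm = LLM(model="gpt-4o")
    # acall is the async counterpart of call; passing a plain prompt string
    # here is an assumption about the accepted message format.
    response = await llm.acall("Summarize the benefits of async LLM calls.")
    print(response)


if __name__ == "__main__":
    asyncio.run(main())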
@@ -475,10 +475,14 @@ def test_openai_get_client_params_priority_order():
     params3 = llm3._get_client_params()
     assert params3["base_url"] == "https://env.openai.com/v1"
 
-def test_openai_get_client_params_no_base_url():
+def test_openai_get_client_params_no_base_url(monkeypatch):
     """
     Test that _get_client_params works correctly when no base_url is specified
     """
+    # Clear env vars that could set base_url
+    monkeypatch.delenv("OPENAI_BASE_URL", raising=False)
+    monkeypatch.delenv("OPENAI_API_BASE", raising=False)
+
     llm = OpenAICompletion(model="gpt-4o")
     client_params = llm._get_client_params()
     # When no base_url is provided, it should not be in the params (filtered out as None)
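The trailing context line describes the behavior under test: client parameters that resolve to None are filtered out rather than passed to the client. A small sketch of that filtering pattern follows, assuming a plain dict of candidate parameters; this is illustrative only and not the actual _get_client_params implementation.

from typing import Any


def build_client_params(api_key: str | None, base_url: str | None) -> dict[str, Any]:
    # Hypothetical helper mirroring the tested behavior: collect candidate
    # parameters, then drop any whose value is None so that an unset
    # base_url never reaches the underlying client constructor.
    candidate = {"api_key": api_key, "base_url": base_url}
    return {key: value for key, value in candidate.items() if value is not None}


params = build_client_params(api_key="sk-test", base_url=None)
assert "base_url" not in params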