Gl/feat/a2a refactor (#3793)

* feat: agent metaclass, refactor a2a to wrappers

* feat: a2a schemas and utils

* chore: move agent class, update imports

* refactor: organize imports to avoid circularity, add a2a to console

* feat: pass response_model through call chain

* feat: add standard openapi spec serialization to tools and structured output

* feat: a2a events

* chore: add a2a to pyproject

* docs: minimal base for learn docs

* fix: adjust a2a conversation flow, allow llm to decide exit until max_retries

* fix: inject agent skills into initial prompt

* fix: format agent card as json in prompt

* refactor: simplify A2A agent prompt formatting and improve skill display

* chore: wide cleanup

* chore: cleanup logic, add auth cache, use json for messages in prompt

* chore: update docs

* fix: doc snippets formatting

* feat: optimize A2A agent card fetching and improve error reporting

* chore: move imports to top of file

* chore: refactor hasattr check

* chore: add httpx-auth, update lockfile

* feat: create base public api

* chore: cleanup modules, add docstrings, types

* fix: exclude extra fields in prompt

* chore: update docs

* tests: update to correct import

* chore: lint for ruff, add missing import

* fix: tweak openai streaming logic for response model

* tests: add reimport for test

* tests: add reimport for test

* fix: don't set a2a attr if not set

* fix: don't set a2a attr if not set

* chore: update cassettes

* tests: fix tests

* fix: use instructor and don't pass response_format for litellm

* chore: consolidate event listeners, add typing

* fix: address race condition in test, update cassettes

* tests: add correct mocks, rerun cassette for json

* tests: update cassette

* chore: regenerate cassette after new run

* fix: make token manager access-safe

* fix: make token manager access-safe

* merge

* chore: update test and cassette for output pydantic

* fix: tweak to disallow deadlock

* chore: linter

* fix: adjust event ordering for threading

* fix: use conditional for batch check

* tests: tweak for emission

* tests: simplify api + event check

* fix: ensure non-function calling llms see json formatted string

* tests: tweak message comparison

* fix: use internal instructor for litellm structure responses
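
Several of the commits above harden shared state against concurrent access ("make token manager access-safe", "tweak to disallow deadlock"). The actual crewai implementation is not shown in this diff; as a rough sketch of the general pattern, a token cache can guard its state with a re-entrant lock so a refresh triggered from inside a locked read never deadlocks, and readers never observe a half-updated token. All names below (`TokenManager`, `fetch_token`) are hypothetical stand-ins:

```python
import threading
import time


class TokenManager:
    """Hypothetical token cache guarded by an RLock.

    RLock (not Lock) lets a method that already holds the lock call
    another locked method without deadlocking itself.
    """

    def __init__(self, fetch_token, ttl: float = 3600.0):
        self._fetch_token = fetch_token  # callable returning a fresh token string
        self._ttl = ttl
        self._lock = threading.RLock()
        self._token = None
        self._expires_at = 0.0

    def get_token(self) -> str:
        with self._lock:
            if self._token is None or time.monotonic() >= self._expires_at:
                self.refresh()  # re-enters the lock we already hold
            return self._token

    def refresh(self) -> None:
        # Safe to call both standalone and from get_token() under the lock.
        with self._lock:
            self._token = self._fetch_token()
            self._expires_at = time.monotonic() + self._ttl


manager = TokenManager(lambda: "tok-1")
print(manager.get_token())  # fetches once, then serves the cached value
```

With a plain `threading.Lock`, the `get_token() -> refresh()` call chain above would block on itself, which is the kind of self-deadlock the "disallow deadlock" commit refers to in spirit.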

---------

Co-authored-by: Mike Plachta <mike@crewai.com>
Author: Greyson LaLonde
Committed: 2025-11-01 02:42:03 +01:00 (by GitHub)
Co-authored-by: Mike Plachta &lt;mike@crewai.com&gt;
Parent: e229ef4e19
Commit: e134e5305b
71 changed files with 9790 additions and 4592 deletions


@@ -78,6 +78,17 @@ def auto_mock_telemetry(request):
     mock_instance = create_mock_telemetry_instance()
     mock_telemetry_class.return_value = mock_instance
 
+    # Create mock for TraceBatchManager
+    mock_trace_manager = Mock()
+    mock_trace_manager.add_trace = Mock()
+    mock_trace_manager.send_batch = Mock()
+    mock_trace_manager.stop = Mock()
+
+    # Create mock for BatchSpanProcessor to prevent OpenTelemetry background threads
+    mock_batch_processor = Mock()
+    mock_batch_processor.shutdown = Mock()
+    mock_batch_processor.force_flush = Mock()
+
     with (
         patch(
             "crewai.events.event_listener.Telemetry",
@@ -86,6 +97,22 @@ def auto_mock_telemetry(request):
         patch("crewai.tools.tool_usage.Telemetry", mock_telemetry_class),
         patch("crewai.cli.command.Telemetry", mock_telemetry_class),
         patch("crewai.cli.create_flow.Telemetry", mock_telemetry_class),
+        patch(
+            "crewai.events.listeners.tracing.trace_batch_manager.TraceBatchManager",
+            return_value=mock_trace_manager,
+        ),
+        patch(
+            "crewai.events.listeners.tracing.trace_listener.TraceBatchManager",
+            return_value=mock_trace_manager,
+        ),
+        patch(
+            "crewai.events.listeners.tracing.first_time_trace_handler.TraceBatchManager",
+            return_value=mock_trace_manager,
+        ),
+        patch(
+            "opentelemetry.sdk.trace.export.BatchSpanProcessor",
+            return_value=mock_batch_processor,
+        ),
     ):
         yield mock_instance
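
The hunk above wires every `TraceBatchManager` import site to `patch(..., return_value=mock_trace_manager)`, so any code that instantiates the class during a test gets the shared premade mock instead of a real manager that would spawn background threads. A minimal standalone sketch of that `unittest.mock` pattern, here patching `collections.OrderedDict` purely as a convenient real target:

```python
import collections
from unittest.mock import Mock, patch

mock_manager = Mock()
mock_manager.send_batch = Mock()

# patch(..., return_value=mock_manager) swaps the class itself for a Mock
# whose call (i.e. instantiation) returns our premade instance.
with patch("collections.OrderedDict", return_value=mock_manager):
    instance = collections.OrderedDict()  # would normally build a real object
    instance.send_batch()                 # every call lands on the shared mock

assert instance is mock_manager
mock_manager.send_batch.assert_called_once()
```

Because all three patched import paths share one `mock_trace_manager`, assertions in a test see every interaction regardless of which module constructed the manager.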
@@ -175,8 +202,8 @@ def clear_event_bus_handlers(setup_test_environment):
     yield
-    # Shutdown event bus and wait for all handlers to complete
-    crewai_event_bus.shutdown(wait=True)
+    # Shutdown event bus without waiting to avoid hanging on blocked threads
+    crewai_event_bus.shutdown(wait=False)
     crewai_event_bus._initialize()
     callback = EvaluationTraceCallback()
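
The last hunk changes the fixture teardown from `shutdown(wait=True)` to `shutdown(wait=False)`: if any handler thread is blocked, joining it would hang the entire test session. The sketch below illustrates the difference with a deliberately stuck daemon worker; the `EventBus` class is a hypothetical stand-in, not the crewai implementation:

```python
import threading
import time


class EventBus:
    """Illustrative event bus whose worker thread is blocked forever."""

    def __init__(self):
        self._blocker = threading.Event()  # never set, so the worker never wakes
        self._worker = threading.Thread(target=self._blocker.wait, daemon=True)
        self._worker.start()

    def shutdown(self, wait: bool) -> None:
        if wait:
            self._worker.join()  # would block until the worker exits -- forever here
        # wait=False: skip the join; the daemon thread dies with the process


bus = EventBus()
start = time.monotonic()
bus.shutdown(wait=False)  # returns immediately despite the stuck worker
elapsed = time.monotonic() - start
```

The trade-off is that skipping the join can leave in-flight handlers unfinished at teardown, which is acceptable in a test fixture that re-initializes the bus immediately afterwards.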