---
title: Agent-to-Agent (A2A) Protocol
description: Agents delegate tasks to remote A2A agents and/or operate as A2A-compliant server agents.
icon: network-wired
mode: "wide"
---
## A2A Agent Delegation
CrewAI treats the [A2A protocol](https://a2a-protocol.org/latest/) as a first-class delegation primitive, enabling agents to delegate tasks, request information, and collaborate with remote agents, as well as act as A2A-compliant server agents.
In client mode, agents autonomously choose between local execution and remote delegation based on task requirements.
## How It Works
When an agent is configured with A2A capabilities:
1. The agent analyzes each task
2. It decides to either:
- Handle the task directly using its own capabilities
- Delegate to a remote A2A agent for specialized handling
3. If delegating, the agent communicates with the remote A2A agent through the protocol
4. Results are returned to the CrewAI workflow
A2A delegation requires the `a2a-sdk` package. Install with: `uv add 'crewai[a2a]'` or `pip install 'crewai[a2a]'`
## Basic Configuration
`crewai.a2a.config.A2AConfig` is deprecated and will be removed in v2.0.0. Use `A2AClientConfig` for connecting to remote agents and/or `A2AServerConfig` for exposing agents as servers.
Configure an agent for A2A delegation by setting the `a2a` parameter:
```python Code
from crewai import Agent, Crew, Task
from crewai.a2a import A2AClientConfig

agent = Agent(
    role="Research Coordinator",
    goal="Coordinate research tasks efficiently",
    backstory="Expert at delegating to specialized research agents",
    llm="gpt-4o",
    a2a=A2AClientConfig(
        endpoint="https://example.com/.well-known/agent-card.json",
        timeout=120,
        max_turns=10
    )
)

task = Task(
    description="Research the latest developments in quantum computing",
    expected_output="A comprehensive research report",
    agent=agent
)

crew = Crew(agents=[agent], tasks=[task], verbose=True)
result = crew.kickoff()
```
## Client Configuration Options
The `A2AClientConfig` class accepts the following parameters:
- `endpoint`: The A2A agent endpoint URL (typically points to `.well-known/agent-card.json`).
- `auth`: Authentication scheme for the A2A agent. Supports Bearer tokens, OAuth2, API keys, and HTTP authentication.
- `timeout`: Request timeout in seconds.
- `max_turns`: Maximum number of conversation turns with the A2A agent.
- Optional Pydantic model for requesting structured output from an A2A agent. The A2A protocol does not enforce this, so the remote agent is not required to honor the request.
- `fail_fast`: Whether to raise an error immediately if an agent connection fails. When `False`, the agent continues with available agents and informs the LLM about unavailable ones.
- Whether to return the A2A agent's result directly when it signals completion (`True`), or to allow the server agent to review the result and potentially continue the conversation (`False`).
- `updates`: Update mechanism for receiving task status. Options: `StreamingConfig`, `PollingConfig`, or `PushNotificationConfig`.
- Transport protocol for A2A communication. Options: `JSONRPC` (default), `GRPC`, or `HTTP+JSON`.
- Media types the client can accept in responses.
- Ordered list of transport protocols the client supports.
- Whether to prioritize client transport preferences over the server's.
- Extension URIs the client supports.
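As a consolidated sketch, the example below combines several of the options demonstrated on this page (`endpoint`, `timeout`, `max_turns`, `fail_fast`, and `updates`); the endpoint and all values are illustrative placeholders:
```python Code
from crewai import Agent
from crewai.a2a import A2AClientConfig
from crewai.a2a.updates import PollingConfig

agent = Agent(
    role="Research Coordinator",
    goal="Coordinate research tasks efficiently",
    backstory="Expert at delegating to specialized research agents",
    llm="gpt-4o",
    a2a=A2AClientConfig(
        endpoint="https://research.example.com/.well-known/agent-card.json",  # placeholder endpoint
        timeout=120,       # request timeout in seconds
        max_turns=10,      # cap on conversation turns with the remote agent
        fail_fast=False,   # continue with other available agents on connection errors
        updates=PollingConfig(interval=2.0, timeout=300.0)  # poll for task status updates
    )
)
```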
## Authentication
For A2A agents that require authentication, use one of the provided auth schemes:
```python bearer_token_auth.py lines
from crewai import Agent
from crewai.a2a import A2AClientConfig
from crewai.a2a.auth import BearerTokenAuth

agent = Agent(
    role="Secure Coordinator",
    goal="Coordinate tasks with secured agents",
    backstory="Manages secure agent communications",
    llm="gpt-4o",
    a2a=A2AClientConfig(
        endpoint="https://secure-agent.example.com/.well-known/agent-card.json",
        auth=BearerTokenAuth(token="your-bearer-token"),
        timeout=120
    )
)
```
```python api_key_auth.py lines
from crewai import Agent
from crewai.a2a import A2AClientConfig
from crewai.a2a.auth import APIKeyAuth

agent = Agent(
    role="API Coordinator",
    goal="Coordinate with API-based agents",
    backstory="Manages API-authenticated communications",
    llm="gpt-4o",
    a2a=A2AClientConfig(
        endpoint="https://api-agent.example.com/.well-known/agent-card.json",
        auth=APIKeyAuth(
            api_key="your-api-key",
            location="header",  # or "query" or "cookie"
            name="X-API-Key"
        ),
        timeout=120
    )
)
```
```python oauth2_auth.py lines
from crewai import Agent
from crewai.a2a import A2AClientConfig
from crewai.a2a.auth import OAuth2ClientCredentials

agent = Agent(
    role="OAuth Coordinator",
    goal="Coordinate with OAuth-secured agents",
    backstory="Manages OAuth-authenticated communications",
    llm="gpt-4o",
    a2a=A2AClientConfig(
        endpoint="https://oauth-agent.example.com/.well-known/agent-card.json",
        auth=OAuth2ClientCredentials(
            token_url="https://auth.example.com/oauth/token",
            client_id="your-client-id",
            client_secret="your-client-secret",
            scopes=["read", "write"]
        ),
        timeout=120
    )
)
```
```python http_basic_auth.py lines
from crewai import Agent
from crewai.a2a import A2AClientConfig
from crewai.a2a.auth import HTTPBasicAuth

agent = Agent(
    role="Basic Auth Coordinator",
    goal="Coordinate with basic auth agents",
    backstory="Manages basic authentication communications",
    llm="gpt-4o",
    a2a=A2AClientConfig(
        endpoint="https://basic-agent.example.com/.well-known/agent-card.json",
        auth=HTTPBasicAuth(
            username="your-username",
            password="your-password"
        ),
        timeout=120
    )
)
```
## Multiple A2A Agents
Configure multiple A2A agents for delegation by passing a list:
```python Code
from crewai import Agent
from crewai.a2a import A2AClientConfig
from crewai.a2a.auth import BearerTokenAuth

agent = Agent(
    role="Multi-Agent Coordinator",
    goal="Coordinate with multiple specialized agents",
    backstory="Expert at delegating to the right specialist",
    llm="gpt-4o",
    a2a=[
        A2AClientConfig(
            endpoint="https://research.example.com/.well-known/agent-card.json",
            timeout=120
        ),
        A2AClientConfig(
            endpoint="https://data.example.com/.well-known/agent-card.json",
            auth=BearerTokenAuth(token="data-token"),
            timeout=90
        )
    ]
)
```
The LLM will automatically choose which A2A agent to delegate to based on the task requirements.
## Error Handling
Control how agent connection failures are handled using the `fail_fast` parameter:
```python Code
from crewai import Agent
from crewai.a2a import A2AClientConfig

# Fail immediately on connection errors (default)
agent = Agent(
    role="Research Coordinator",
    goal="Coordinate research tasks",
    backstory="Expert at delegation",
    llm="gpt-4o",
    a2a=A2AClientConfig(
        endpoint="https://research.example.com/.well-known/agent-card.json",
        fail_fast=True
    )
)

# Continue with available agents
agent = Agent(
    role="Multi-Agent Coordinator",
    goal="Coordinate with multiple agents",
    backstory="Expert at working with available resources",
    llm="gpt-4o",
    a2a=[
        A2AClientConfig(
            endpoint="https://primary.example.com/.well-known/agent-card.json",
            fail_fast=False
        ),
        A2AClientConfig(
            endpoint="https://backup.example.com/.well-known/agent-card.json",
            fail_fast=False
        )
    ]
)
```
When `fail_fast=False`:
- If some agents fail, the LLM is informed which agents are unavailable and can delegate to working agents
- If all agents fail, the LLM receives a notice about unavailable agents and handles the task directly
- Connection errors are captured and included in the context for better decision-making
## Update Mechanisms
Control how your agent receives task status updates from remote A2A agents:
```python streaming_config.py lines
from crewai import Agent
from crewai.a2a import A2AClientConfig
from crewai.a2a.updates import StreamingConfig

agent = Agent(
    role="Research Coordinator",
    goal="Coordinate research tasks",
    backstory="Expert at delegation",
    llm="gpt-4o",
    a2a=A2AClientConfig(
        endpoint="https://research.example.com/.well-known/agent-card.json",
        updates=StreamingConfig()
    )
)
```
```python polling_config.py lines
from crewai import Agent
from crewai.a2a import A2AClientConfig
from crewai.a2a.updates import PollingConfig

agent = Agent(
    role="Research Coordinator",
    goal="Coordinate research tasks",
    backstory="Expert at delegation",
    llm="gpt-4o",
    a2a=A2AClientConfig(
        endpoint="https://research.example.com/.well-known/agent-card.json",
        updates=PollingConfig(
            interval=2.0,
            timeout=300.0,
            max_polls=100
        )
    )
)
```
```python push_notifications_config.py lines
from crewai import Agent
from crewai.a2a import A2AClientConfig
from crewai.a2a.updates import PushNotificationConfig

agent = Agent(
    role="Research Coordinator",
    goal="Coordinate research tasks",
    backstory="Expert at delegation",
    llm="gpt-4o",
    a2a=A2AClientConfig(
        endpoint="https://research.example.com/.well-known/agent-card.json",
        updates=PushNotificationConfig(
            url="{base_url}/a2a/callback",
            token="your-validation-token",
            timeout=300.0
        )
    )
)
```
## Exposing Agents as A2A Servers
You can expose your CrewAI agents as A2A-compliant servers, allowing other A2A clients to delegate tasks to them.
### Server Configuration
Add an `A2AServerConfig` to your agent to enable server capabilities:
```python a2a_server_agent.py lines
from crewai import Agent
from crewai.a2a import A2AServerConfig

agent = Agent(
    role="Data Analyst",
    goal="Analyze datasets and provide insights",
    backstory="Expert data scientist with statistical analysis skills",
    llm="gpt-4o",
    a2a=A2AServerConfig(url="https://your-server.com")
)
```
### Server Configuration Options
The `A2AServerConfig` class accepts the following parameters:
- Human-readable name for the agent. Defaults to the agent's role if not provided.
- Human-readable description. Defaults to the agent's goal and backstory if not provided.
- Version string for the agent card.
- List of agent skills. Auto-generated from the agent's tools if not provided.
- Declaration of optional capabilities supported by the agent.
- Supported input MIME types.
- Supported output MIME types.
- `url`: Preferred endpoint URL. If set, overrides the URL passed to `to_agent_card()`.
- Transport protocol for the preferred endpoint.
- A2A protocol version this agent supports.
- Information about the agent's service provider.
- URL to the agent's documentation.
- URL to an icon for the agent.
- Additional supported interfaces (transport and URL combinations).
- Security requirement objects for all agent interactions.
- Security schemes available to authorize requests.
- Whether the agent provides an extended card to authenticated users.
- JSON Web Signatures for the AgentCard.
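As a sketch only, the example below fills in a few of these fields. Aside from `url`, the keyword names (`name`, `description`, `version`) are assumptions modeled on the A2A agent card fields and may differ from the actual `A2AServerConfig` signature; consult the reference before relying on them:
```python Code
from crewai import Agent
from crewai.a2a import A2AServerConfig

agent = Agent(
    role="Data Analyst",
    goal="Analyze datasets and provide insights",
    backstory="Expert data scientist with statistical analysis skills",
    llm="gpt-4o",
    a2a=A2AServerConfig(
        url="https://your-server.com",
        # The keyword names below are assumptions based on the A2A agent card
        # spec; check the A2AServerConfig reference for the exact field names.
        name="Data Analyst",
        description="Analyzes datasets and provides statistical insights",
        version="1.0.0"
    )
)
```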
### Combined Client and Server
An agent can act as both client and server by providing both configurations:
```python Code
from crewai import Agent
from crewai.a2a import A2AClientConfig, A2AServerConfig

agent = Agent(
    role="Research Coordinator",
    goal="Coordinate research and serve analysis requests",
    backstory="Expert at delegation and analysis",
    llm="gpt-4o",
    a2a=[
        A2AClientConfig(
            endpoint="https://specialist.example.com/.well-known/agent-card.json",
            timeout=120
        ),
        A2AServerConfig(url="https://your-server.com")
    ]
)
```
## Best Practices
- Configure timeouts based on expected A2A agent response times; longer-running tasks may need higher timeout values.
- Use `max_turns` to prevent excessive back-and-forth. The agent will automatically conclude conversations before hitting the limit.
- Set `fail_fast=False` in production environments with multiple agents to gracefully handle connection failures and maintain workflow continuity.
- Store authentication tokens and credentials as environment variables, not in code (see the sketch below).
- Use verbose mode to observe when the LLM chooses to delegate versus handle tasks directly.
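For example, a minimal sketch of loading a bearer token from an environment variable instead of hard-coding it (the `A2A_BEARER_TOKEN` variable name and the endpoint are illustrative):
```python Code
import os

from crewai import Agent
from crewai.a2a import A2AClientConfig
from crewai.a2a.auth import BearerTokenAuth

# Read the token from the environment; A2A_BEARER_TOKEN is an illustrative name.
token = os.environ["A2A_BEARER_TOKEN"]

agent = Agent(
    role="Secure Coordinator",
    goal="Coordinate tasks with secured agents",
    backstory="Manages secure agent communications",
    llm="gpt-4o",
    a2a=A2AClientConfig(
        endpoint="https://secure-agent.example.com/.well-known/agent-card.json",
        auth=BearerTokenAuth(token=token),
        timeout=120
    )
)
```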
## Supported Authentication Methods
- **Bearer Token** - Simple token-based authentication
- **OAuth2 Client Credentials** - OAuth2 flow for machine-to-machine communication
- **OAuth2 Authorization Code** - OAuth2 flow requiring user authorization
- **API Key** - Key-based authentication (header, query param, or cookie)
- **HTTP Basic** - Username/password authentication
- **HTTP Digest** - Digest authentication (requires `httpx-auth` package)
## Learn More
For more information about the A2A protocol and reference implementations:
- [A2A Protocol Documentation](https://a2a-protocol.org)
- [A2A Sample Implementations](https://github.com/a2aproject/a2a-samples)
- [A2A Python SDK](https://github.com/a2aproject/a2a-python)