Mirror of https://github.com/crewAIInc/crewAI.git (synced 2026-01-30 18:48:14 +00:00)

Compare commits: gl/feat/a2...lg-support (1 commit)

Commit: 0dda6641b2

```diff
@@ -370,8 +370,7 @@
         "pages": [
           "en/enterprise/features/traces",
           "en/enterprise/features/webhook-streaming",
-          "en/enterprise/features/hallucination-guardrail",
-          "en/enterprise/features/flow-hitl-management"
+          "en/enterprise/features/hallucination-guardrail"
         ]
       },
       {
@@ -824,8 +823,7 @@
         "pages": [
           "pt-BR/enterprise/features/traces",
           "pt-BR/enterprise/features/webhook-streaming",
-          "pt-BR/enterprise/features/hallucination-guardrail",
-          "pt-BR/enterprise/features/flow-hitl-management"
+          "pt-BR/enterprise/features/hallucination-guardrail"
        ]
      },
      {
@@ -1289,8 +1287,7 @@
         "pages": [
           "ko/enterprise/features/traces",
           "ko/enterprise/features/webhook-streaming",
-          "ko/enterprise/features/hallucination-guardrail",
-          "ko/enterprise/features/flow-hitl-management"
+          "ko/enterprise/features/hallucination-guardrail"
        ]
      },
      {
```

Deleted file (563 lines; the "Flow HITL Management" page, en locale):

@@ -1,563 +0,0 @@

---
title: "Flow HITL Management"
description: "Enterprise-grade human review for Flows with email-first notifications, routing rules, and auto-response capabilities"
icon: "users-gear"
mode: "wide"
---

<Note>
Flow HITL Management features require the `@human_feedback` decorator, available in **CrewAI version 1.8.0 or higher**. These features apply specifically to **Flows**, not Crews.
</Note>

CrewAI Enterprise provides a comprehensive Human-in-the-Loop (HITL) management system for Flows that transforms AI workflows into collaborative human-AI processes. The platform uses an **email-first architecture** that enables anyone with an email address to respond to review requests—no platform account required.

## Overview

<CardGroup cols={3}>
<Card title="Email-First Design" icon="envelope">
Responders can reply directly to notification emails to provide feedback
</Card>
<Card title="Flexible Routing" icon="route">
Route requests to specific emails based on method patterns or flow state
</Card>
<Card title="Auto-Response" icon="clock">
Configure automatic fallback responses when no human replies in time
</Card>
</CardGroup>

### Key Benefits

- **Simple mental model**: Email addresses are universal; no need to manage platform users or roles
- **External responders**: Anyone with an email can respond, even non-platform users
- **Dynamic assignment**: Pull assignee email directly from flow state (e.g., `sales_rep_email`)
- **Reduced configuration**: Fewer settings to configure, faster time to value
- **Email as primary channel**: Most users prefer responding via email over logging into a dashboard

## Setting Up Human Review Points in Flows

Configure human review checkpoints within your Flows using the `@human_feedback` decorator. When execution reaches a review point, the system pauses, notifies the assignee via email, and waits for a response.

```python
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult

class ContentApprovalFlow(Flow):
    @start()
    def generate_content(self):
        # AI generates content
        return "Generated marketing copy for Q1 campaign..."

    @listen(generate_content)
    @human_feedback(
        message="Please review this content for brand compliance:",
        emit=["approved", "rejected", "needs_revision"],
    )
    def review_content(self, content):
        return content

    @listen("approved")
    def publish_content(self, result: HumanFeedbackResult):
        print(f"Publishing approved content. Reviewer notes: {result.feedback}")

    @listen("rejected")
    def archive_content(self, result: HumanFeedbackResult):
        print(f"Content rejected. Reason: {result.feedback}")

    @listen("needs_revision")
    def revise_content(self, result: HumanFeedbackResult):
        print(f"Revision requested: {result.feedback}")
```

For complete implementation details, see the [Human Feedback in Flows](/en/learn/human-feedback-in-flows) guide.

### Decorator Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| `message` | `str` | The message displayed to the human reviewer |
| `emit` | `list[str]` | Valid response options (displayed as buttons in UI) |

## Platform Configuration

Access HITL configuration from: **Deployment → Settings → Human in the Loop Configuration**

<Frame>
<img src="/images/enterprise/hitl-settings-overview.png" alt="HITL Configuration Settings" />
</Frame>

### Email Notifications

Toggle to enable or disable email notifications for HITL requests.

| Setting | Default | Description |
|---------|---------|-------------|
| Email Notifications | Enabled | Send emails when feedback is requested |

<Note>
When disabled, responders must use the dashboard UI or you must configure webhooks for custom notification systems.
</Note>

### SLA Target

Set a target response time for tracking and metrics purposes.

| Setting | Description |
|---------|-------------|
| SLA Target (minutes) | Target response time. Used for dashboard metrics and SLA tracking |

Leave empty to disable SLA tracking.

## Email Notifications & Responses

The HITL system uses an email-first architecture where responders can reply directly to notification emails.

### How Email Responses Work

<Steps>
<Step title="Notification Sent">
When a HITL request is created, an email is sent to the assigned responder with the review content and context.
</Step>
<Step title="Reply-To Address">
The email includes a special reply-to address with a signed token for authentication.
</Step>
<Step title="User Replies">
The responder simply replies to the email with their feedback—no login required.
</Step>
<Step title="Token Validation">
The platform receives the reply, verifies the signed token, and matches the sender email.
</Step>
<Step title="Flow Resumes">
The feedback is recorded and the flow continues with the human's input.
</Step>
</Steps>

### Response Format

Responders can reply with:

- **Emit option**: If the reply matches an `emit` option (e.g., "approved"), it's used directly
- **Free-form text**: Any text response is passed to the flow as feedback
- **Plain text**: The first line of the reply body is used as feedback
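
For example, a hypothetical reply whose first line is an emit option would resolve directly to that decision:

```text
approved

Looks good overall. Minor tone tweaks can wait for the next revision.
```

Here the first line matches the `approved` emit option, so the flow resumes down the `approved` path.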

### Confirmation Emails

After processing a reply, the responder receives a confirmation email indicating whether the feedback was successfully submitted or if an error occurred.

### Email Token Security

- Tokens are cryptographically signed for security
- Tokens expire after 7 days
- Sender email must match the token's authorized email
- Confirmation/error emails are sent after processing

## Routing Rules

Route HITL requests to specific email addresses based on method patterns.

<Frame>
<img src="/images/enterprise/hitl-settings-routing-rules.png" alt="HITL Routing Rules Configuration" />
</Frame>

### Rule Structure

```json
{
  "name": "Approvals to Finance",
  "match": {
    "method_name": "approve_*"
  },
  "assign_to_email": "finance@company.com",
  "assign_from_input": "manager_email"
}
```

### Matching Patterns

| Pattern | Description | Example Match |
|---------|-------------|---------------|
| `approve_*` | Wildcard (any chars) | `approve_payment`, `approve_vendor` |
| `review_?` | Single char | `review_a`, `review_1` |
| `validate_payment` | Exact match | `validate_payment` only |
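
The wildcard semantics above are standard shell-style globbing. As a minimal sketch (using Python's `fnmatch`, which is an assumption for illustration, not necessarily the platform's matcher), the table's examples can be checked like this:

```python
from fnmatch import fnmatch

# Shell-style globbing: "*" matches any run of characters, "?" exactly one.
rules = ["approve_*", "review_?", "validate_payment"]

for method in ["approve_payment", "approve_vendor", "review_a", "validate_payment"]:
    matched = [pattern for pattern in rules if fnmatch(method, pattern)]
    print(method, "->", matched)
```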

### Assignment Priority

1. **Dynamic assignment** (`assign_from_input`): If configured, pulls email from flow state
2. **Static email** (`assign_to_email`): Falls back to configured email
3. **Deployment creator**: If no rule matches, the deployment creator's email is used

### Dynamic Assignment Example

If your flow state contains `{"sales_rep_email": "alice@company.com"}`, configure:

```json
{
  "name": "Route to Sales Rep",
  "match": {
    "method_name": "review_*"
  },
  "assign_from_input": "sales_rep_email"
}
```

The request will be assigned to `alice@company.com` automatically.

<Tip>
**Use Case**: Pull the assignee from your CRM, database, or previous flow step to dynamically route reviews to the right person.
</Tip>
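
As a sketch of how that state value can be produced, an earlier flow step can write the assignee's email into state before execution reaches the review point. This assumes dict-style flow state; `lookup_account_owner` is a hypothetical stand-in for your CRM or database call:

```python
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback

class DealReviewFlow(Flow):
    @start()
    def load_deal(self):
        # Hypothetical CRM lookup; replace with your own data source.
        owner = lookup_account_owner("ACME-1234")  # e.g. "alice@company.com"
        self.state["sales_rep_email"] = owner  # picked up by assign_from_input
        return "Deal summary for ACME-1234..."

    @listen(load_deal)
    @human_feedback(
        message="Please review this deal summary:",
        emit=["approved", "rejected"],
    )
    def review_deal(self, summary):
        return summary
```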

## Auto-Response

Automatically respond to HITL requests if no human responds within a timeout. This ensures flows don't hang indefinitely.

### Configuration

| Setting | Description |
|---------|-------------|
| Enabled | Toggle to enable auto-response |
| Timeout (minutes) | Time to wait before auto-responding |
| Default Outcome | The response value (must match an `emit` option) |

<Frame>
<img src="/images/enterprise/hitl-settings-auto-respond.png" alt="HITL Auto-Response Configuration" />
</Frame>

### Use Cases

- **SLA compliance**: Ensure flows don't hang indefinitely
- **Default approval**: Auto-approve low-risk requests after timeout
- **Graceful degradation**: Continue with a safe default when reviewers are unavailable

<Warning>
Use auto-response carefully. Only enable it for non-critical reviews where a default response is acceptable.
</Warning>

## Review Process

### Dashboard Interface

The HITL review interface provides a clean, focused experience for reviewers:

- **Markdown Rendering**: Rich formatting for review content with syntax highlighting
- **Context Panel**: View flow state, execution history, and related information
- **Feedback Input**: Provide detailed feedback and comments with your decision
- **Quick Actions**: One-click emit option buttons with optional comments

<Frame>
<img src="/images/enterprise/hitl-list-pending-feedbacks.png" alt="HITL Pending Requests List" />
</Frame>

### Response Methods

Reviewers can respond via three channels:

| Method | Description |
|--------|-------------|
| **Email Reply** | Reply directly to the notification email |
| **Dashboard** | Use the Enterprise dashboard UI |
| **API/Webhook** | Programmatic response via API |

### History & Audit Trail

Every HITL interaction is tracked with a complete timeline:

- Decision history (approve/reject/revise)
- Reviewer identity and timestamp
- Feedback and comments provided
- Response method (email/dashboard/API)
- Response time metrics

## Analytics & Monitoring

Track HITL performance with comprehensive analytics.

### Performance Dashboard

<Frame>
<img src="/images/enterprise/hitl-metrics.png" alt="HITL Metrics Dashboard" />
</Frame>

<CardGroup cols={2}>
<Card title="Response Times" icon="stopwatch">
Monitor average and median response times by reviewer or flow.
</Card>
<Card title="Volume Trends" icon="chart-bar">
Analyze review volume patterns to optimize team capacity.
</Card>
<Card title="Decision Distribution" icon="chart-pie">
View approval/rejection rates across different review types.
</Card>
<Card title="SLA Tracking" icon="chart-line">
Track the percentage of reviews completed within SLA targets.
</Card>
</CardGroup>

### Audit & Compliance

Enterprise-ready audit capabilities for regulatory requirements:

- Complete decision history with timestamps
- Reviewer identity verification
- Immutable audit logs
- Export capabilities for compliance reporting

## Common Use Cases

<AccordionGroup>
<Accordion title="Security Reviews" icon="shield-halved">
**Use Case**: Internal security questionnaire automation with human validation

- AI generates responses to security questionnaires
- Security team reviews and validates accuracy via email
- Approved responses are compiled into the final submission
- Full audit trail for compliance
</Accordion>

<Accordion title="Content Approval" icon="file-lines">
**Use Case**: Marketing content requiring legal/brand review

- AI generates marketing copy or social media content
- Route to the brand team's email for voice/tone review
- Automatic publishing upon approval
</Accordion>

<Accordion title="Financial Approvals" icon="money-bill">
**Use Case**: Expense reports, contract terms, budget allocations

- AI pre-processes and categorizes financial requests
- Route based on amount thresholds using dynamic assignment
- Maintain a complete audit trail for financial compliance
</Accordion>

<Accordion title="Dynamic Assignment from CRM" icon="database">
**Use Case**: Route reviews to account owners from your CRM

- Flow fetches the account owner's email from the CRM
- Store the email in flow state (e.g., `account_owner_email`)
- Use `assign_from_input` to route to the right person automatically
</Accordion>

<Accordion title="Quality Assurance" icon="magnifying-glass">
**Use Case**: AI output validation before customer delivery

- AI generates customer-facing content or responses
- QA team reviews via email notification
- Feedback loops improve AI performance over time
</Accordion>
</AccordionGroup>

## Webhooks API

When your Flows pause for human feedback, you can configure webhooks to send request data to your own application. This enables:

- Building custom approval UIs
- Integrating with internal tools (Jira, ServiceNow, custom dashboards)
- Routing approvals to third-party systems
- Mobile app notifications
- Automated decision systems

<Frame>
<img src="/images/enterprise/hitl-settings-webhook.png" alt="HITL Webhook Configuration" />
</Frame>

### Configuring Webhooks

<Steps>
<Step title="Navigate to Settings">
Go to your **Deployment** → **Settings** → **Human in the Loop**
</Step>
<Step title="Expand Webhooks Section">
Click to expand the **Webhooks** configuration
</Step>
<Step title="Add Your Webhook URL">
Enter your webhook URL (must be HTTPS in production)
</Step>
<Step title="Save Configuration">
Click **Save Configuration** to activate
</Step>
</Steps>

You can configure multiple webhooks. Each active webhook receives all HITL events.

### Webhook Events

Your endpoint will receive HTTP POST requests for these events:

| Event Type | When Triggered |
|------------|----------------|
| `new_request` | A flow pauses and requests human feedback |

### Webhook Payload

All webhooks receive a JSON payload with this structure:

```json
{
  "event": "new_request",
  "request": {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "flow_id": "flow_abc123",
    "method_name": "review_article",
    "message": "Please review this article for publication.",
    "emit_options": ["approved", "rejected", "request_changes"],
    "state": {
      "article_id": 12345,
      "author": "john@example.com",
      "category": "technology"
    },
    "metadata": {},
    "created_at": "2026-01-14T12:00:00Z"
  },
  "deployment": {
    "id": 456,
    "name": "Content Review Flow",
    "organization_id": 789
  },
  "callback_url": "https://api.crewai.com/...",
  "assigned_to_email": "reviewer@company.com"
}
```

### Responding to Requests

To submit feedback, **POST to the `callback_url`** included in the webhook payload.

```http
POST {callback_url}
Content-Type: application/json

{
  "feedback": "Approved. Great article!",
  "source": "my_custom_app"
}
```
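
For example, a minimal Python client for this callback, sketched with the `requests` package (the callback's response body is not documented above, so only the status code is checked):

```python
import requests

def submit_feedback(callback_url: str, feedback: str, source: str = "my_custom_app") -> None:
    """POST a decision back to the callback_url from a webhook payload."""
    response = requests.post(
        callback_url,
        json={"feedback": feedback, "source": source},
        timeout=10,
    )
    response.raise_for_status()  # a 2xx status is expected on success

# Example: approve the request received in your webhook handler.
# submit_feedback(payload["callback_url"], "Approved. Great article!")
```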

### Security

<Info>
All webhook requests are cryptographically signed using HMAC-SHA256 to ensure authenticity and prevent tampering.
</Info>

#### Webhook Security

- **HMAC-SHA256 signatures**: Every webhook includes a cryptographic signature
- **Per-webhook secrets**: Each webhook has its own unique signing secret
- **Encrypted at rest**: Signing secrets are encrypted in our database
- **Timestamp verification**: Prevents replay attacks

#### Signature Headers

Each webhook request includes these headers:

| Header | Description |
|--------|-------------|
| `X-Signature` | HMAC-SHA256 signature: `sha256=<hex_digest>` |
| `X-Timestamp` | Unix timestamp when the request was signed |

#### Verification

Verify a request by recomputing the signature over `"{timestamp}.{payload}"` and comparing it, in constant time, against the `X-Signature` header:

```python
import hmac
import hashlib

def verify_signature(signing_secret: str, timestamp: str, payload: str, signature: str) -> bool:
    """Check an X-Signature header of the form 'sha256=<hex_digest>'."""
    expected = hmac.new(
        signing_secret.encode(),
        f"{timestamp}.{payload}".encode(),
        hashlib.sha256,
    ).hexdigest()
    # Constant-time comparison prevents timing attacks.
    return hmac.compare_digest(expected, signature.removeprefix("sha256="))
```

### Error Handling

Your webhook endpoint should return a 2xx status code to acknowledge receipt:

| Your Response | Our Behavior |
|---------------|--------------|
| 2xx | Webhook delivered successfully |
| 4xx/5xx | Logged as failed, no retry |
| Timeout (30s) | Logged as failed, no retry |
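
Tying the signature check and the response contract together, here is a minimal sketch of a receiving endpoint. Flask and the `HITL_SIGNING_SECRET` environment variable are assumptions for illustration; the secret would come from your webhook configuration:

```python
import hashlib
import hmac
import os

from flask import Flask, request

app = Flask(__name__)
SIGNING_SECRET = os.environ["HITL_SIGNING_SECRET"]  # your webhook's signing secret

@app.post("/hitl-webhook")
def hitl_webhook():
    timestamp = request.headers.get("X-Timestamp", "")
    signature = request.headers.get("X-Signature", "")
    body = request.get_data(as_text=True)

    # Recompute the HMAC over "<timestamp>.<raw body>" and compare in constant time.
    expected = hmac.new(
        SIGNING_SECRET.encode(),
        f"{timestamp}.{body}".encode(),
        hashlib.sha256,
    ).hexdigest()
    if not hmac.compare_digest(f"sha256={expected}", signature):
        return {"error": "invalid signature"}, 401

    event = request.get_json()
    # Hand off for review quickly; respond within 30s to avoid a logged failure.
    print(event["request"]["id"], event["callback_url"])
    return {"ok": True}, 200
```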

## Security & RBAC

### Dashboard Access

HITL access is controlled at the deployment level:

| Permission | Capability |
|------------|------------|
| `manage_human_feedback` | Configure HITL settings, view all requests |
| `respond_to_human_feedback` | Respond to requests, view assigned requests |

### Email Response Authorization

For email replies:
1. The reply-to token encodes the authorized email
2. The sender email must match the token's email
3. The token must not be expired (7-day default)
4. The request must still be pending

### Audit Trail

All HITL actions are logged:
- Request creation
- Assignment changes
- Response submission (with source: dashboard/email/API)
- Flow resume status

## Troubleshooting

### Emails Not Sending

1. Check that "Email Notifications" is enabled in the configuration
2. Verify routing rules match the method name
3. Verify the assignee email is valid
4. Check the deployment-creator fallback if no routing rules match

### Email Replies Not Processing

1. Check that the token hasn't expired (7-day default)
2. Verify the sender email matches the assigned email
3. Ensure the request is still pending (not already responded to)

### Flow Not Resuming

1. Check the request status in the dashboard
2. Verify the callback URL is accessible
3. Ensure the deployment is still running

## Best Practices

<Tip>
**Start Simple**: Begin with email notifications to the deployment creator, then add routing rules as your workflows mature.
</Tip>

1. **Use Dynamic Assignment**: Pull assignee emails from your flow state for flexible routing.
2. **Configure Auto-Response**: Set up a fallback for non-critical reviews to prevent flows from hanging.
3. **Monitor Response Times**: Use analytics to identify bottlenecks and optimize your review process.
4. **Keep Review Messages Clear**: Write clear, actionable messages in the `@human_feedback` decorator.
5. **Test Email Flow**: Send test requests to verify email delivery before going to production.

## Related Resources

<CardGroup cols={2}>
<Card title="Human Feedback in Flows" icon="code" href="/en/learn/human-feedback-in-flows">
Implementation guide for the `@human_feedback` decorator
</Card>
<Card title="Flow HITL Workflow Guide" icon="route" href="/en/enterprise/guides/human-in-the-loop">
Step-by-step guide for setting up HITL workflows
</Card>
<Card title="RBAC Configuration" icon="shield-check" href="/en/enterprise/features/rbac">
Configure role-based access control for your organization
</Card>
<Card title="Webhook Streaming" icon="bolt" href="/en/enterprise/features/webhook-streaming">
Set up real-time event notifications
</Card>
</CardGroup>

```diff
@@ -5,54 +5,9 @@ icon: "user-check"
 mode: "wide"
 ---
 
-Human-In-The-Loop (HITL) is a powerful approach that combines artificial intelligence with human expertise to enhance decision-making and improve task outcomes. This guide shows you how to implement HITL within CrewAI Enterprise.
+Human-In-The-Loop (HITL) is a powerful approach that combines artificial intelligence with human expertise to enhance decision-making and improve task outcomes. This guide shows you how to implement HITL within CrewAI.
 
-## HITL Approaches in CrewAI
-
-CrewAI offers two approaches for implementing human-in-the-loop workflows:
-
-| Approach | Best For | Version |
-|----------|----------|---------|
-| **Flow-based** (`@human_feedback` decorator) | Production with Enterprise UI, email-first workflows, full platform features | **1.8.0+** |
-| **Webhook-based** | Custom integrations, external systems (Slack, Teams, etc.), legacy setups | All versions |
-
-## Flow-Based HITL with Enterprise Platform
-
-<Note>
-The `@human_feedback` decorator requires **CrewAI version 1.8.0 or higher**.
-</Note>
-
-When using the `@human_feedback` decorator in your Flows, CrewAI Enterprise provides an **email-first HITL system** that enables anyone with an email address to respond to review requests:
-
-<CardGroup cols={2}>
-<Card title="Email-First Design" icon="envelope">
-Responders receive email notifications and can reply directly—no login required.
-</Card>
-<Card title="Dashboard Review" icon="desktop">
-Review and respond to HITL requests in the Enterprise dashboard when preferred.
-</Card>
-<Card title="Flexible Routing" icon="route">
-Route requests to specific emails based on method patterns or pull from flow state.
-</Card>
-<Card title="Auto-Response" icon="clock">
-Configure automatic fallback responses when no human replies within the timeout.
-</Card>
-</CardGroup>
-
-### Key Benefits
-
-- **External responders**: Anyone with an email can respond, even non-platform users
-- **Dynamic assignment**: Pull assignee email from flow state (e.g., `account_owner_email`)
-- **Simple configuration**: Email-based routing is easier to set up than user/role management
-- **Deployment creator fallback**: If no routing rule matches, the deployment creator is notified
-
-<Tip>
-For implementation details on the `@human_feedback` decorator, see the [Human Feedback in Flows](/en/learn/human-feedback-in-flows) guide.
-</Tip>
-
-## Setting Up Webhook-Based HITL Workflows
-
-For custom integrations with external systems like Slack, Microsoft Teams, or your own applications, you can use the webhook-based approach:
+## Setting Up HITL Workflows
 
 <Steps>
 <Step title="Configure Your Task">
```

```diff
@@ -144,14 +99,3 @@ HITL workflows are particularly valuable for:
 - Sensitive or high-stakes operations
 - Creative tasks requiring human judgment
 - Compliance and regulatory reviews
-
-## Learn More
-
-<CardGroup cols={2}>
-<Card title="Flow HITL Management" icon="users-gear" href="/en/enterprise/features/flow-hitl-management">
-Explore the full Enterprise Flow HITL platform capabilities including email notifications, routing rules, auto-response, and analytics.
-</Card>
-<Card title="Human Feedback in Flows" icon="code" href="/en/learn/human-feedback-in-flows">
-Implementation guide for the `@human_feedback` decorator in your Flows.
-</Card>
-</CardGroup>
```

```diff
@@ -151,9 +151,3 @@ HITL workflows are particularly valuable for:
 - Sensitive or high-stakes operations
 - Creative tasks requiring human judgment
 - Compliance and regulatory reviews
-
-## Enterprise Features
-
-<Card title="Flow HITL Management Platform" icon="users-gear" href="/en/enterprise/features/flow-hitl-management">
-CrewAI Enterprise provides a comprehensive HITL management system for Flows with in-platform review, responder assignment, permissions, escalation policies, SLA management, dynamic routing, and full analytics. [Learn more →](/en/enterprise/features/flow-hitl-management)
-</Card>
```

Six deleted binary image files are not shown (previous sizes: 251 KiB, 263 KiB, 55 KiB, 405 KiB, 156 KiB, 83 KiB).

Deleted file (563 lines; the "Flow HITL Management" page, ko locale):

@@ -1,563 +0,0 @@

---
title: "Flow HITL Management"
description: "Enterprise-grade human review for Flows with email-first notifications, routing rules, and auto-response capabilities"
icon: "users-gear"
mode: "wide"
---

<Note>
Flow HITL Management features require the `@human_feedback` decorator, available in **CrewAI version 1.8.0 or higher**. These features apply specifically to **Flows**, not Crews.
</Note>

CrewAI Enterprise provides a comprehensive Human-in-the-Loop (HITL) management system for Flows that transforms AI workflows into collaborative human-AI processes. The platform uses an **email-first architecture** that enables anyone with an email address to respond to review requests, with no platform account required.

## Overview

<CardGroup cols={3}>
<Card title="Email-First Design" icon="envelope">
Responders can reply directly to notification emails to provide feedback
</Card>
<Card title="Flexible Routing" icon="route">
Route requests to specific emails based on method patterns or flow state
</Card>
<Card title="Auto-Response" icon="clock">
Configure automatic fallback responses when no human replies in time
</Card>
</CardGroup>

### Key Benefits

- **Simple mental model**: Email addresses are universal; no need to manage platform users or roles
- **External responders**: Anyone with an email can respond, even non-platform users
- **Dynamic assignment**: Pull the assignee email directly from flow state (e.g., `sales_rep_email`)
- **Reduced configuration**: Fewer settings to configure, faster time to value
- **Email as primary channel**: Most users prefer responding via email over logging into a dashboard

## Setting Up Human Review Points in Flows

Configure human review checkpoints within your Flows using the `@human_feedback` decorator. When execution reaches a review point, the system pauses, notifies the assignee via email, and waits for a response.

```python
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult

class ContentApprovalFlow(Flow):
    @start()
    def generate_content(self):
        # AI generates content
        return "Generated marketing copy for Q1 campaign..."

    @listen(generate_content)
    @human_feedback(
        message="Please review this content for brand compliance:",
        emit=["approved", "rejected", "needs_revision"],
    )
    def review_content(self, content):
        return content

    @listen("approved")
    def publish_content(self, result: HumanFeedbackResult):
        print(f"Publishing approved content. Reviewer notes: {result.feedback}")

    @listen("rejected")
    def archive_content(self, result: HumanFeedbackResult):
        print(f"Content rejected. Reason: {result.feedback}")

    @listen("needs_revision")
    def revise_content(self, result: HumanFeedbackResult):
        print(f"Revision requested: {result.feedback}")
```

For complete implementation details, see the [Human Feedback in Flows](/ko/learn/human-feedback-in-flows) guide.

### Decorator Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| `message` | `str` | The message displayed to the human reviewer |
| `emit` | `list[str]` | Valid response options (displayed as buttons in the UI) |

## Platform Configuration

Access HITL configuration from: **Deployment** → **Settings** → **Human in the Loop Configuration**

<Frame>
<img src="/images/enterprise/hitl-settings-overview.png" alt="HITL Configuration Settings" />
</Frame>

### Email Notifications

Toggle to enable or disable email notifications for HITL requests.

| Setting | Default | Description |
|---------|---------|-------------|
| Email Notifications | Enabled | Send emails when feedback is requested |

<Note>
When disabled, responders must use the dashboard UI, or you must configure webhooks for custom notification systems.
</Note>

### SLA Target

Set a target response time for tracking and metrics purposes.

| Setting | Description |
|---------|-------------|
| SLA Target (minutes) | Target response time. Used for dashboard metrics and SLA tracking |

Leave empty to disable SLA tracking.

## Email Notifications & Responses

The HITL system uses an email-first architecture where responders can reply directly to notification emails.

### How Email Responses Work

<Steps>
<Step title="Notification Sent">
When a HITL request is created, an email is sent to the assigned responder with the review content and context.
</Step>
<Step title="Reply-To Address">
The email includes a special reply-to address with a signed token for authentication.
</Step>
<Step title="User Replies">
The responder simply replies to the email with their feedback—no login required.
</Step>
<Step title="Token Validation">
The platform receives the reply, verifies the signed token, and matches the sender email.
</Step>
<Step title="Flow Resumes">
The feedback is recorded and the flow continues with the human's input.
</Step>
</Steps>

### Response Format

Responders can reply with:

- **Emit option**: If the reply matches an `emit` option (e.g., "approved"), it is used directly
- **Free-form text**: Any text response is passed to the flow as feedback
- **Plain text**: The first line of the reply body is used as feedback

### Confirmation Emails

After processing a reply, the responder receives a confirmation email indicating whether the feedback was successfully submitted or if an error occurred.

### Email Token Security

- Tokens are cryptographically signed for security
- Tokens expire after 7 days
- The sender email must match the token's authorized email
- Confirmation/error emails are sent after processing

## Routing Rules

Route HITL requests to specific email addresses based on method patterns.

<Frame>
<img src="/images/enterprise/hitl-settings-routing-rules.png" alt="HITL Routing Rules Configuration" />
</Frame>

### Rule Structure

```json
{
  "name": "Approvals to Finance",
  "match": {
    "method_name": "approve_*"
  },
  "assign_to_email": "finance@company.com",
  "assign_from_input": "manager_email"
}
```

### Matching Patterns

| Pattern | Description | Example Match |
|---------|-------------|---------------|
| `approve_*` | Wildcard (any chars) | `approve_payment`, `approve_vendor` |
| `review_?` | Single char | `review_a`, `review_1` |
| `validate_payment` | Exact match | `validate_payment` only |
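
The wildcard semantics above are standard shell-style globbing. As a minimal sketch (using Python's `fnmatch`, an assumption for illustration rather than the platform's actual matcher), the table's examples can be checked like this:

```python
from fnmatch import fnmatch

# Shell-style globbing: "*" matches any run of characters, "?" exactly one.
rules = ["approve_*", "review_?", "validate_payment"]

for method in ["approve_payment", "approve_vendor", "review_a", "validate_payment"]:
    matched = [pattern for pattern in rules if fnmatch(method, pattern)]
    print(method, "->", matched)
```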

### Assignment Priority

1. **Dynamic assignment** (`assign_from_input`): If configured, pulls the email from flow state
2. **Static email** (`assign_to_email`): Falls back to the configured email
3. **Deployment creator**: If no rule matches, the deployment creator's email is used

### Dynamic Assignment Example

If your flow state contains `{"sales_rep_email": "alice@company.com"}`, configure:

```json
{
  "name": "Route to Sales Rep",
  "match": {
    "method_name": "review_*"
  },
  "assign_from_input": "sales_rep_email"
}
```

The request will be assigned to `alice@company.com` automatically.

<Tip>
**Use Case**: Pull the assignee from your CRM, database, or previous flow step to dynamically route reviews to the right person.
</Tip>

## Auto-Response

Automatically respond to HITL requests if no human responds within a timeout. This ensures flows don't hang indefinitely.

### Configuration

| Setting | Description |
|---------|-------------|
| Enabled | Toggle to enable auto-response |
| Timeout (minutes) | Time to wait before auto-responding |
| Default Outcome | The response value (must match an `emit` option) |

<Frame>
<img src="/images/enterprise/hitl-settings-auto-respond.png" alt="HITL Auto-Response Configuration" />
</Frame>

### Use Cases

- **SLA compliance**: Ensure flows don't hang indefinitely
- **Default approval**: Auto-approve low-risk requests after timeout
- **Graceful degradation**: Continue with a safe default when reviewers are unavailable

<Warning>
Use auto-response carefully. Only enable it for non-critical reviews where a default response is acceptable.
</Warning>

## Review Process

### Dashboard Interface

The HITL review interface provides a clean, focused experience for reviewers:

- **Markdown Rendering**: Rich formatting for review content with syntax highlighting
- **Context Panel**: View flow state, execution history, and related information
- **Feedback Input**: Provide detailed feedback and comments with your decision
- **Quick Actions**: One-click emit option buttons with optional comments

<Frame>
<img src="/images/enterprise/hitl-list-pending-feedbacks.png" alt="HITL Pending Requests List" />
</Frame>

### Response Methods

Reviewers can respond via three channels:

| Method | Description |
|--------|-------------|
| **Email Reply** | Reply directly to the notification email |
| **Dashboard** | Use the Enterprise dashboard UI |
| **API/Webhook** | Programmatic response via API |

### History & Audit Trail

Every HITL interaction is tracked with a complete timeline:

- Decision history (approve/reject/revise)
- Reviewer identity and timestamp
- Feedback and comments provided
- Response method (email/dashboard/API)
- Response time metrics

## Analytics & Monitoring

Track HITL performance with comprehensive analytics.

### Performance Dashboard

<Frame>
<img src="/images/enterprise/hitl-metrics.png" alt="HITL Metrics Dashboard" />
</Frame>

<CardGroup cols={2}>
<Card title="Response Times" icon="stopwatch">
Monitor average and median response times by reviewer or flow.
</Card>
<Card title="Volume Trends" icon="chart-bar">
Analyze review volume patterns to optimize team capacity.
</Card>
<Card title="Decision Distribution" icon="chart-pie">
View approval/rejection rates across different review types.
</Card>
<Card title="SLA Tracking" icon="chart-line">
Track the percentage of reviews completed within SLA targets.
</Card>
</CardGroup>

### Audit & Compliance

Enterprise-ready audit capabilities for regulatory requirements:

- Complete decision history with timestamps
- Reviewer identity verification
- Immutable audit logs
- Export capabilities for compliance reporting

## Common Use Cases

<AccordionGroup>
<Accordion title="Security Reviews" icon="shield-halved">
**Use Case**: Internal security questionnaire automation with human validation

- AI generates responses to security questionnaires
- The security team reviews and validates accuracy via email
- Approved responses are compiled into the final submission
- Full audit trail for compliance
</Accordion>

<Accordion title="Content Approval" icon="file-lines">
**Use Case**: Marketing content requiring legal/brand review

- AI generates marketing copy or social media content
- Route to the brand team's email for voice/tone review
- Automatic publishing upon approval
</Accordion>

<Accordion title="Financial Approvals" icon="money-bill">
**Use Case**: Expense reports, contract terms, budget allocations

- AI pre-processes and categorizes financial requests
- Route based on amount thresholds using dynamic assignment
- Maintain a complete audit trail for financial compliance
</Accordion>

<Accordion title="Dynamic Assignment from CRM" icon="database">
**Use Case**: Route reviews to account owners from your CRM

- The flow fetches the account owner's email from the CRM
- Store the email in flow state (e.g., `account_owner_email`)
- Use `assign_from_input` to route to the right person automatically
</Accordion>

<Accordion title="Quality Assurance" icon="magnifying-glass">
**Use Case**: AI output validation before customer delivery

- AI generates customer-facing content or responses
- The QA team reviews via email notification
- Feedback loops improve AI performance over time
</Accordion>
</AccordionGroup>

## Webhooks API

When your Flows pause for human feedback, you can configure webhooks to send request data to your own application. This enables:

- Building custom approval UIs
- Integrating with internal tools (Jira, ServiceNow, custom dashboards)
- Routing approvals to third-party systems
- Mobile app notifications
- Automated decision systems

<Frame>
<img src="/images/enterprise/hitl-settings-webhook.png" alt="HITL Webhook Configuration" />
</Frame>

### Configuring Webhooks

<Steps>
<Step title="Navigate to Settings">
Go to your **Deployment** → **Settings** → **Human in the Loop**
</Step>
<Step title="Expand Webhooks Section">
Click to expand the **Webhooks** configuration
</Step>
<Step title="Add Your Webhook URL">
Enter your webhook URL (must be HTTPS in production)
</Step>
<Step title="Save Configuration">
Click **Save Configuration** to activate
</Step>
</Steps>

You can configure multiple webhooks. Each active webhook receives all HITL events.

### Webhook Events

Your endpoint will receive HTTP POST requests for these events:

| Event Type | When Triggered |
|------------|----------------|
| `new_request` | A flow pauses and requests human feedback |

### Webhook Payload

All webhooks receive a JSON payload with this structure:

```json
{
  "event": "new_request",
  "request": {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "flow_id": "flow_abc123",
    "method_name": "review_article",
    "message": "Please review this article for publication.",
    "emit_options": ["approved", "rejected", "request_changes"],
    "state": {
      "article_id": 12345,
      "author": "john@example.com",
      "category": "technology"
    },
    "metadata": {},
    "created_at": "2026-01-14T12:00:00Z"
  },
  "deployment": {
    "id": 456,
    "name": "Content Review Flow",
    "organization_id": 789
  },
  "callback_url": "https://api.crewai.com/...",
  "assigned_to_email": "reviewer@company.com"
}
```

### Responding to Requests

To submit feedback, **POST to the `callback_url`** included in the webhook payload.

```http
POST {callback_url}
Content-Type: application/json

{
  "feedback": "Approved. Great article!",
  "source": "my_custom_app"
}
```
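
For example, a minimal Python client for this callback, sketched with the `requests` package (the callback's response body is not documented above, so only the status code is checked):

```python
import requests

def submit_feedback(callback_url: str, feedback: str, source: str = "my_custom_app") -> None:
    """POST a decision back to the callback_url from a webhook payload."""
    response = requests.post(
        callback_url,
        json={"feedback": feedback, "source": source},
        timeout=10,
    )
    response.raise_for_status()  # a 2xx status is expected on success
```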

### Security

<Info>
All webhook requests are cryptographically signed using HMAC-SHA256 to ensure authenticity and prevent tampering.
</Info>

#### Webhook Security

- **HMAC-SHA256 signatures**: Every webhook includes a cryptographic signature
- **Per-webhook secrets**: Each webhook has its own unique signing secret
- **Encrypted at rest**: Signing secrets are encrypted in our database
- **Timestamp verification**: Prevents replay attacks

#### Signature Headers

Each webhook request includes these headers:

| Header | Description |
|--------|-------------|
| `X-Signature` | HMAC-SHA256 signature: `sha256=<hex_digest>` |
| `X-Timestamp` | Unix timestamp when the request was signed |

#### Verification

Verify a request by recomputing the signature over `"{timestamp}.{payload}"` and comparing it, in constant time, against the `X-Signature` header:

```python
import hmac
import hashlib

def verify_signature(signing_secret: str, timestamp: str, payload: str, signature: str) -> bool:
    """Check an X-Signature header of the form 'sha256=<hex_digest>'."""
    expected = hmac.new(
        signing_secret.encode(),
        f"{timestamp}.{payload}".encode(),
        hashlib.sha256,
    ).hexdigest()
    # Constant-time comparison prevents timing attacks.
    return hmac.compare_digest(expected, signature.removeprefix("sha256="))
```

### Error Handling

Your webhook endpoint should return a 2xx status code to acknowledge receipt:

| Your Response | Our Behavior |
|---------------|--------------|
| 2xx | Webhook delivered successfully |
| 4xx/5xx | Logged as failed, no retry |
| Timeout (30s) | Logged as failed, no retry |

## Security & RBAC

### Dashboard Access

HITL access is controlled at the deployment level:

| Permission | Capability |
|------------|------------|
| `manage_human_feedback` | Configure HITL settings, view all requests |
| `respond_to_human_feedback` | Respond to requests, view assigned requests |

### Email Response Authorization

For email replies:
1. The reply-to token encodes the authorized email
2. The sender email must match the token's email
3. The token must not be expired (7-day default)
4. The request must still be pending

### Audit Trail

All HITL actions are logged:
- Request creation
- Assignment changes
- Response submission (with source: dashboard/email/API)
- Flow resume status

## Troubleshooting

### Emails Not Sending

1. Check that "Email Notifications" is enabled in the configuration
2. Verify routing rules match the method name
3. Verify the assignee email is valid
4. Check the deployment-creator fallback if no routing rules match

### Email Replies Not Processing

1. Check that the token hasn't expired (7-day default)
2. Verify the sender email matches the assigned email
3. Ensure the request is still pending (not already responded to)

### Flow Not Resuming

1. Check the request status in the dashboard
2. Verify the callback URL is accessible
3. Ensure the deployment is still running

## Best Practices

<Tip>
**Start Simple**: Begin with email notifications to the deployment creator, then add routing rules as your workflows mature.
</Tip>

1. **Use Dynamic Assignment**: Pull assignee emails from your flow state for flexible routing.
2. **Configure Auto-Response**: Set up a fallback for non-critical reviews to prevent flows from hanging.
3. **Monitor Response Times**: Use analytics to identify bottlenecks and optimize your review process.
4. **Keep Review Messages Clear**: Write clear, actionable messages in the `@human_feedback` decorator.
5. **Test Email Flow**: Send test requests to verify email delivery before going to production.

## Related Resources

<CardGroup cols={2}>
<Card title="Human Feedback in Flows" icon="code" href="/ko/learn/human-feedback-in-flows">
Implementation guide for the `@human_feedback` decorator
</Card>
<Card title="Flow HITL Workflow Guide" icon="route" href="/ko/enterprise/guides/human-in-the-loop">
Step-by-step guide for setting up HITL workflows
</Card>
<Card title="RBAC Configuration" icon="shield-check" href="/ko/enterprise/features/rbac">
Configure role-based access control for your organization
</Card>
<Card title="Webhook Streaming" icon="bolt" href="/ko/enterprise/features/webhook-streaming">
Set up real-time event notifications
</Card>
</CardGroup>

```diff
@@ -5,54 +5,9 @@ icon: "user-check"
 mode: "wide"
 ---
 
-Human-In-The-Loop (HITL) is a powerful approach that combines artificial intelligence with human expertise to enhance decision-making and improve task outcomes. This guide shows you how to implement HITL within CrewAI Enterprise.
+Human-In-The-Loop (HITL) is a powerful approach that combines artificial intelligence with human expertise to enhance decision-making and improve task outcomes. This guide shows you how to implement HITL within CrewAI.
 
-## HITL Approaches in CrewAI
-
-CrewAI offers two approaches for implementing human-in-the-loop workflows:
-
-| Approach | Best For | Version |
-|----------|----------|---------|
-| **Flow-based** (`@human_feedback` decorator) | Production with Enterprise UI, email-first workflows, full platform features | **1.8.0+** |
-| **Webhook-based** | Custom integrations, external systems (Slack, Teams, etc.), legacy setups | All versions |
-
-## Flow-Based HITL with Enterprise Platform
-
-<Note>
-The `@human_feedback` decorator requires **CrewAI version 1.8.0 or higher**.
-</Note>
-
-When using the `@human_feedback` decorator in your Flows, CrewAI Enterprise provides an **email-first HITL system** that enables anyone with an email address to respond to review requests:
-
-<CardGroup cols={2}>
-<Card title="Email-First Design" icon="envelope">
-Responders receive email notifications and can reply directly—no login required.
-</Card>
-<Card title="Dashboard Review" icon="desktop">
-Review and respond to HITL requests in the Enterprise dashboard when preferred.
-</Card>
-<Card title="Flexible Routing" icon="route">
-Route requests to specific emails based on method patterns, or pull the address from flow state.
-</Card>
-<Card title="Auto-Response" icon="clock">
-Configure automatic fallback responses when no human replies within the timeout.
-</Card>
-</CardGroup>
-
-### Key Benefits
-
-- **External responders**: Anyone with an email can respond, even non-platform users
-- **Dynamic assignment**: Pull the assignee email from flow state (e.g., `account_owner_email`)
-- **Simple configuration**: Email-based routing is easier to set up than user/role management
-- **Deployment creator fallback**: If no routing rule matches, the deployment creator is notified
-
-<Tip>
-For implementation details on the `@human_feedback` decorator, see the [Human Feedback in Flows](/ko/learn/human-feedback-in-flows) guide.
-</Tip>
-
-## Setting Up Webhook-Based HITL Workflows
-
-For custom integrations with external systems such as Slack, Microsoft Teams, or your own applications, you can use the webhook-based approach:
+## Setting Up HITL Workflows
 
 <Steps>
 <Step title="Configure Your Task">
```

```diff
@@ -144,14 +99,3 @@ HITL workflows are particularly valuable for:
 - Sensitive or high-stakes operations
 - Creative tasks requiring human judgment
 - Compliance and regulatory reviews
-
-## Learn More
-
-<CardGroup cols={2}>
-<Card title="Flow HITL Management" icon="users-gear" href="/ko/enterprise/features/flow-hitl-management">
-Explore the full Enterprise Flow HITL platform capabilities including email notifications, routing rules, auto-response, and analytics.
-</Card>
-<Card title="Human Feedback in Flows" icon="code" href="/ko/learn/human-feedback-in-flows">
-Implementation guide for the `@human_feedback` decorator in your Flows.
-</Card>
-</CardGroup>
```

```diff
@@ -112,9 +112,3 @@ HITL workflows are particularly valuable for:
 - Sensitive or high-stakes operations
 - Creative tasks requiring human judgment
 - Compliance and regulatory reviews
-
-## Enterprise Features
-
-<Card title="Flow HITL Management Platform" icon="users-gear" href="/ko/enterprise/features/flow-hitl-management">
-CrewAI Enterprise provides a comprehensive HITL management system for Flows with in-platform review, responder assignment, permissions, escalation policies, SLA management, dynamic routing, and full analytics. [Learn more →](/ko/enterprise/features/flow-hitl-management)
-</Card>
```

Deleted file (563 lines; the "Flow HITL Management" page, pt-BR locale — the diff view is truncated partway through this file):

@@ -1,563 +0,0 @@

---
title: "Flow HITL Management"
description: "Enterprise-grade human review for Flows with email notifications, routing rules, and auto-response capabilities"
icon: "users-gear"
mode: "wide"
---

<Note>
Flow HITL Management features require the `@human_feedback` decorator, available in **CrewAI version 1.8.0 or higher**. These features apply specifically to **Flows**, not Crews.
</Note>

CrewAI Enterprise provides a comprehensive Human-in-the-Loop (HITL) management system for Flows that transforms AI workflows into collaborative human-AI processes. The platform uses an **email-first architecture** that enables anyone with an email address to respond to review requests, with no platform account required.

## Overview

<CardGroup cols={3}>
<Card title="Email-First Design" icon="envelope">
Responders can reply directly to notification emails to provide feedback
</Card>
<Card title="Flexible Routing" icon="route">
Route requests to specific emails based on method patterns or flow state
</Card>
<Card title="Auto-Response" icon="clock">
Configure automatic fallback responses when no human replies in time
</Card>
</CardGroup>

### Key Benefits

- **Simple mental model**: Email addresses are universal; no need to manage platform users or roles
- **External responders**: Anyone with an email can respond, even non-platform users
- **Dynamic assignment**: Pull the assignee email directly from flow state (e.g., `sales_rep_email`)
- **Reduced configuration**: Fewer settings to configure, faster time to value
- **Email as primary channel**: Most users prefer responding via email over logging into a dashboard

## Setting Up Human Review Points in Flows

Configure human review checkpoints within your Flows using the `@human_feedback` decorator. When execution reaches a review point, the system pauses, notifies the assignee via email, and waits for a response.

```python
from crewai.flow.flow import Flow, start, listen
from crewai.flow.human_feedback import human_feedback, HumanFeedbackResult

class ContentApprovalFlow(Flow):
    @start()
    def generate_content(self):
        # AI generates content
        return "Generated marketing copy for Q1 campaign..."

    @listen(generate_content)
    @human_feedback(
        message="Please review this content for brand compliance:",
        emit=["approved", "rejected", "needs_revision"],
    )
    def review_content(self, content):
        return content

    @listen("approved")
    def publish_content(self, result: HumanFeedbackResult):
        print(f"Publishing approved content. Reviewer notes: {result.feedback}")

    @listen("rejected")
    def archive_content(self, result: HumanFeedbackResult):
        print(f"Content rejected. Reason: {result.feedback}")

    @listen("needs_revision")
    def revise_content(self, result: HumanFeedbackResult):
        print(f"Revision requested: {result.feedback}")
```

For complete implementation details, see the [Human Feedback in Flows](/pt-BR/learn/human-feedback-in-flows) guide.

### Decorator Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| `message` | `str` | The message displayed to the human reviewer |
| `emit` | `list[str]` | Valid response options (displayed as buttons in the UI) |

## Platform Configuration

Access HITL configuration from: **Deployment** → **Settings** → **Human in the Loop Configuration**

<Frame>
<img src="/images/enterprise/hitl-settings-overview.png" alt="HITL Configuration Settings" />
</Frame>

### Email Notifications

Toggle to enable or disable email notifications for HITL requests.

| Setting | Default | Description |
|---------|---------|-------------|
| Email Notifications | Enabled | Send emails when feedback is requested |

<Note>
When disabled, responders must use the dashboard UI, or you must configure webhooks for custom notification systems.
</Note>

### SLA Target

Set a target response time for tracking and metrics purposes.

| Setting | Description |
|---------|-------------|
| SLA Target (minutes) | Target response time. Used for dashboard metrics and SLA tracking |

Leave empty to disable SLA tracking.

## Email Notifications & Responses

The HITL system uses an email-first architecture where responders can reply directly to notification emails.

### How Email Responses Work

<Steps>
<Step title="Notification Sent">
When a HITL request is created, an email is sent to the assigned responder with the review content and context.
</Step>
<Step title="Reply-To Address">
The email includes a special reply-to address with a signed token for authentication.
</Step>
<Step title="User Replies">
The responder simply replies to the email with their feedback—no login required.
</Step>
<Step title="Token Validation">
The platform receives the reply, verifies the signed token, and matches the sender email.
</Step>
<Step title="Flow Resumes">
The feedback is recorded and the flow continues with the human's input.
</Step>
</Steps>

### Response Format

Responders can reply with:

- **Emit option**: If the reply matches an `emit` option (e.g., "approved"), it is used directly
- **Free-form text**: Any text response is passed to the flow as feedback
- **Plain text**: The first line of the reply body is used as feedback

### Confirmation Emails

After processing a reply, the responder receives a confirmation email indicating whether the feedback was successfully submitted or if an error occurred.

### Email Token Security

- Tokens are cryptographically signed for security
- Tokens expire after 7 days
- The sender email must match the token's authorized email
- Confirmation/error emails are sent after processing

## Routing Rules

Route HITL requests to specific email addresses based on method patterns.

<Frame>
<img src="/images/enterprise/hitl-settings-routing-rules.png" alt="HITL Routing Rules Configuration" />
</Frame>

### Rule Structure

```json
{
  "name": "Approvals to Finance",
  "match": {
    "method_name": "approve_*"
  },
  "assign_to_email": "financeiro@empresa.com",
  "assign_from_input": "manager_email"
}
```

### Matching Patterns

| Pattern | Description | Example Match |
|---------|-------------|---------------|
| `approve_*` | Wildcard (any chars) | `approve_payment`, `approve_vendor` |
| `review_?` | Single char | `review_a`, `review_1` |
| `validate_payment` | Exact match | `validate_payment` only |
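
The wildcard semantics above are standard shell-style globbing. As a minimal sketch (using Python's `fnmatch`, an assumption for illustration rather than the platform's actual matcher), the table's examples can be checked like this:

```python
from fnmatch import fnmatch

# Shell-style globbing: "*" matches any run of characters, "?" exactly one.
rules = ["approve_*", "review_?", "validate_payment"]

for method in ["approve_payment", "approve_vendor", "review_a", "validate_payment"]:
    matched = [pattern for pattern in rules if fnmatch(method, pattern)]
    print(method, "->", matched)
```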
### Prioridade de Atribuição
|
||||
|
||||
1. **Atribuição dinâmica** (`assign_from_input`): Se configurado, obtém email do estado do flow
|
||||
2. **Email estático** (`assign_to_email`): Fallback para email configurado
|
||||
3. **Criador do deployment**: Se nenhuma regra corresponder, o email do criador do deployment é usado
|
||||
|
||||
### Exemplo de Atribuição Dinâmica
|
||||
|
||||
Se seu estado do flow contém `{"sales_rep_email": "alice@empresa.com"}`, configure:
|
||||
|
||||
```json
|
||||
{
|
||||
"name": "Direcionar para Representante de Vendas",
|
||||
"match": {
|
||||
"method_name": "review_*"
|
||||
},
|
||||
"assign_from_input": "sales_rep_email"
|
||||
}
|
||||
```
|
||||
|
||||
A solicitação será atribuída automaticamente para `alice@empresa.com`.
|
||||
|
||||
<Tip>
|
||||
**Caso de Uso**: Obtenha o responsável do seu CRM, banco de dados ou etapa anterior do flow para direcionar revisões dinamicamente para a pessoa certa.
|
||||
</Tip>
|
||||
|
||||
## Resposta Automática
|
||||
|
||||
Responda automaticamente a solicitações HITL se nenhum humano responder dentro do timeout. Isso garante que os flows não fiquem travados indefinidamente.
|
||||
|
||||
### Configuração
|
||||
|
||||
| Configuração | Descrição |
|
||||
|--------------|-----------|
|
||||
| Ativado | Toggle para ativar resposta automática |
|
||||
| Timeout (minutos) | Tempo de espera antes de responder automaticamente |
|
||||
| Resultado Padrão | O valor da resposta (deve corresponder a uma opção `emit`) |
|
||||
|
||||
<Frame>
|
||||
<img src="/images/enterprise/hitl-settings-auto-respond.png" alt="Configuração de Resposta Automática HITL" />
|
||||
</Frame>
|
||||
|
||||
### Casos de Uso
|
||||
|
||||
- **Conformidade com SLA**: Garante que flows não fiquem travados indefinidamente
|
||||
- **Aprovação padrão**: Aprove automaticamente solicitações de baixo risco após timeout
|
||||
- **Degradação graciosa**: Continue com um padrão seguro quando revisores não estiverem disponíveis
|
||||
|
||||
<Warning>
|
||||
Use resposta automática com cuidado. Ative apenas para revisões não críticas onde uma resposta padrão é aceitável.
|
||||
</Warning>
|
||||
|
||||
## Review Process

### Dashboard Interface

The HITL review interface offers a clean, focused experience for reviewers:

- **Markdown rendering**: Rich formatting for review content with syntax highlighting
- **Context panel**: View flow state, execution history, and related information
- **Feedback input**: Provide detailed feedback and comments with your decision
- **Quick actions**: One-click emit option buttons with optional comments

<Frame>
  <img src="/images/enterprise/hitl-list-pending-feedbacks.png" alt="Pending HITL Requests List" />
</Frame>

### Response Methods

Reviewers can respond through three channels:

| Method | Description |
|--------|-------------|
| **Email reply** | Reply directly to the notification email |
| **Dashboard** | Use the Enterprise dashboard UI |
| **API/Webhook** | Respond programmatically via the API |

### History and Audit Trail

Every HITL interaction is tracked with a complete timeline:

- Decision history (approve/reject/revise)
- Reviewer identity and timestamp
- Feedback and comments provided
- Response method (email/dashboard/API)
- Response time metrics
## Analytics and Monitoring

Track HITL performance with comprehensive analytics.

### Performance Dashboard

<Frame>
  <img src="/images/enterprise/hitl-metrics.png" alt="HITL Metrics Dashboard" />
</Frame>

<CardGroup cols={2}>
  <Card title="Response Times" icon="stopwatch">
    Monitor average and median response times per reviewer or flow.
  </Card>
  <Card title="Volume Trends" icon="chart-bar">
    Analyze review volume patterns to optimize team capacity.
  </Card>
  <Card title="Decision Distribution" icon="chart-pie">
    Visualize approval/rejection rates across review types.
  </Card>
  <Card title="SLA Tracking" icon="chart-line">
    Track the percentage of reviews completed within SLA targets.
  </Card>
</CardGroup>

### Audit and Compliance

Enterprise-ready audit capabilities for regulatory requirements:

- Complete decision history with timestamps
- Reviewer identity verification
- Immutable audit logs
- Export capabilities for compliance reporting
## Common Use Cases

<AccordionGroup>
  <Accordion title="Security Reviews" icon="shield-halved">
    **Use Case**: Automating internal security questionnaires with human validation

    - AI generates responses to security questionnaires
    - The security team reviews and validates accuracy via email
    - Approved responses are compiled into the final submission
    - Complete audit trail for compliance
  </Accordion>

  <Accordion title="Content Approval" icon="file-lines">
    **Use Case**: Marketing content requiring legal/brand review

    - AI generates marketing copy or social media content
    - Route to the brand team's email for voice/tone review
    - Automatic publication after approval
  </Accordion>

  <Accordion title="Financial Approvals" icon="money-bill">
    **Use Case**: Expense reports, contract terms, budget allocations

    - AI pre-processes and categorizes financial requests
    - Route based on amount thresholds using dynamic assignment
    - Maintain a complete audit trail for financial compliance
  </Accordion>

  <Accordion title="Dynamic Assignment from Your CRM" icon="database">
    **Use Case**: Route reviews to account owners from your CRM

    - The flow fetches the account owner's email from the CRM
    - Store the email in flow state (e.g., `account_owner_email`)
    - Use `assign_from_input` to route automatically to the right person
  </Accordion>

  <Accordion title="Quality Assurance" icon="magnifying-glass">
    **Use Case**: Validating AI output before customer delivery

    - AI generates customer-facing content or responses
    - The QA team reviews via email notification
    - Feedback loops improve AI performance over time
  </Accordion>
</AccordionGroup>
## Webhooks API

When your Flows pause for human feedback, you can configure webhooks to send the request data to your own application. This lets you:

- Build custom approval UIs
- Integrate with internal tools (Jira, ServiceNow, custom dashboards)
- Route approvals to third-party systems
- Send mobile app notifications
- Drive automated decision systems

<Frame>
  <img src="/images/enterprise/hitl-settings-webhook.png" alt="HITL Webhook Configuration" />
</Frame>

### Configuring Webhooks

<Steps>
  <Step title="Navigate to Settings">
    Go to **Deployment** → **Settings** → **Human in the Loop**
  </Step>
  <Step title="Expand the Webhooks Section">
    Click to expand the **Webhooks** configuration
  </Step>
  <Step title="Add Your Webhook URL">
    Enter your webhook URL (must be HTTPS in production)
  </Step>
  <Step title="Save the Configuration">
    Click **Save Configuration** to activate
  </Step>
</Steps>

You can configure multiple webhooks. Every active webhook receives all HITL events.
### Webhook Events

Your endpoint will receive HTTP POST requests for these events:

| Event Type | When It Fires |
|------------|---------------|
| `new_request` | A flow pauses and requests human feedback |

### Webhook Payload

All webhooks receive a JSON payload with this structure:

```json
{
  "event": "new_request",
  "request": {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "flow_id": "flow_abc123",
    "method_name": "review_article",
    "message": "Please review this article for publication.",
    "emit_options": ["approved", "rejected", "request_changes"],
    "state": {
      "article_id": 12345,
      "author": "john@example.com",
      "category": "technology"
    },
    "metadata": {},
    "created_at": "2026-01-14T12:00:00Z"
  },
  "deployment": {
    "id": 456,
    "name": "Content Review Flow",
    "organization_id": 789
  },
  "callback_url": "https://api.crewai.com/...",
  "assigned_to_email": "reviewer@company.com"
}
```
### Responding to Requests

To submit feedback, **POST to the `callback_url`** included in the webhook payload.

```http
POST {callback_url}
Content-Type: application/json

{
  "feedback": "Approved. Great article!",
  "source": "my_custom_app"
}
```
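As a minimal sketch of that call from Python (using the `requests` library; the function name and error handling are ours, not a platform API):

```python
import requests

def submit_feedback(callback_url: str, feedback: str, source: str = "my_custom_app") -> None:
    """POST a reviewer decision back to the callback_url from the webhook payload."""
    response = requests.post(
        callback_url,
        json={"feedback": feedback, "source": source},
        timeout=30,
    )
    response.raise_for_status()  # surface 4xx/5xx responses as exceptions

# Example: submit_feedback(payload["callback_url"], "Approved. Great article!")
```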
### Security

<Info>
All webhook requests are cryptographically signed using HMAC-SHA256 to guarantee authenticity and prevent tampering.
</Info>

#### Webhook Security

- **HMAC-SHA256 signatures**: Every webhook includes a cryptographic signature
- **Per-webhook secrets**: Each webhook has its own unique signing secret
- **Encrypted at rest**: Signing secrets are encrypted in our database
- **Timestamp verification**: Prevents replay attacks

#### Signature Headers

Every webhook request includes these headers:

| Header | Description |
|--------|-------------|
| `X-Signature` | HMAC-SHA256 signature: `sha256=<hex_digest>` |
| `X-Timestamp` | Unix timestamp of when the request was signed |

#### Verification

Verify by computing:
```python
import hashlib
import hmac

def verify_signature(signing_secret: str, timestamp: str, payload: str, signature: str) -> bool:
    # Recompute the HMAC-SHA256 digest over "<X-Timestamp>.<raw request body>".
    expected = hmac.new(
        signing_secret.encode(),
        f"{timestamp}.{payload}".encode(),
        hashlib.sha256,
    ).hexdigest()
    # Compare in constant time against the hex digest from the
    # X-Signature header (the part after the "sha256=" prefix).
    return hmac.compare_digest(expected, signature)
```
### Error Handling

Your webhook endpoint must return a 2xx status code to acknowledge receipt:

| Your Response | Our Behavior |
|---------------|--------------|
| 2xx | Webhook delivered successfully |
| 4xx/5xx | Logged as a failure, no retry |
| Timeout (30s) | Logged as a failure, no retry |
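Putting the pieces together, here is a minimal receiver sketch using Flask (an illustrative framework choice, not a requirement), reusing the `verify_signature` helper sketched in the Verification section. It validates the signature and acknowledges within the 30-second window:

```python
from flask import Flask, request

app = Flask(__name__)
SIGNING_SECRET = "..."  # the signing secret configured for this webhook

@app.post("/hitl-webhook")
def hitl_webhook():
    # Verify authenticity before trusting the payload.
    timestamp = request.headers.get("X-Timestamp", "")
    signature = request.headers.get("X-Signature", "").removeprefix("sha256=")
    body = request.get_data(as_text=True)
    if not verify_signature(SIGNING_SECRET, timestamp, body, signature):
        return {"error": "invalid signature"}, 401

    event = request.get_json()
    if event["event"] == "new_request":
        # Hand off to your own approval UI; respond later by POSTing
        # to event["callback_url"] as shown above.
        ...
    return {"ok": True}, 200  # acknowledge quickly; there are no retries
```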
## Security and RBAC

### Dashboard Access

HITL access is controlled at the deployment level:

| Permission | Capability |
|------------|------------|
| `manage_human_feedback` | Configure HITL settings, view all requests |
| `respond_to_human_feedback` | Respond to requests, view assigned requests |

### Email Response Authorization

For email responses:

1. The reply-to token encodes the authorized email
2. The sender's email must match the token's email
3. The token must not be expired (default 7 days)
4. The request must still be pending

### Audit Trail

All HITL actions are logged:

- Request creation
- Assignment changes
- Response submission (with source: dashboard/email/API)
- Flow resumption status
## Troubleshooting

### Emails Not Sending

1. Verify that "Email Notifications" is enabled in the configuration
2. Check that the routing rules match the method name
3. Check that the assignee's email is valid
4. Check the deployment-creator fallback if no routing rule matches

### Email Replies Not Processing

1. Check that the token has not expired (default 7 days)
2. Check that the sender's email matches the assigned email
3. Make sure the request is still pending (not yet answered)

### Flow Not Resuming

1. Check the request status in the dashboard
2. Check that the callback URL is reachable
3. Make sure the deployment is still running
## Best Practices

<Tip>
**Start Simple**: Begin with email notifications to the deployment creator, then add routing rules as your workflows mature.
</Tip>

1. **Use Dynamic Assignment**: Pull assignee emails from your flow state for flexible routing.

2. **Configure Auto-Response**: Set a fallback for non-critical reviews so flows don't get stuck.

3. **Monitor Response Times**: Use analytics to spot bottlenecks and optimize your review process.

4. **Keep Review Messages Clear**: Write clear, actionable messages in the `@human_feedback` decorator.

5. **Test the Email Flow**: Send test requests to verify email delivery before going to production.
## Related Resources

<CardGroup cols={2}>
  <Card title="Human Feedback in Flows" icon="code" href="/pt-BR/learn/human-feedback-in-flows">
    Implementation guide for the `@human_feedback` decorator
  </Card>
  <Card title="HITL Workflow Guide for Flows" icon="route" href="/pt-BR/enterprise/guides/human-in-the-loop">
    Step-by-step guide to setting up HITL workflows
  </Card>
  <Card title="RBAC Configuration" icon="shield-check" href="/pt-BR/enterprise/features/rbac">
    Configure role-based access control for your organization
  </Card>
  <Card title="Webhook Streaming" icon="bolt" href="/pt-BR/enterprise/features/webhook-streaming">
    Set up real-time event notifications
  </Card>
</CardGroup>
@@ -5,54 +5,9 @@ icon: "user-check"
mode: "wide"
---

Human-In-The-Loop (HITL) is a powerful approach that combines artificial intelligence with human expertise to enhance decision-making and improve task outcomes. This guide shows you how to implement HITL within CrewAI Enterprise.
Human-In-The-Loop (HITL) is a powerful approach that combines artificial intelligence with human expertise to enhance decision-making and improve task outcomes. This guide shows you how to implement HITL within CrewAI.

## HITL Approaches in CrewAI

CrewAI offers two approaches for implementing human-in-the-loop workflows:

| Approach | Best For | Version |
|----------|----------|---------|
| **Flow-based** (`@human_feedback` decorator) | Production with the Enterprise UI, email-first workflows, full platform features | **1.8.0+** |
| **Webhook-based** | Custom integrations, external systems (Slack, Teams, etc.), legacy setups | All versions |

## Flow-Based HITL with the Enterprise Platform

<Note>
The `@human_feedback` decorator requires **CrewAI version 1.8.0 or higher**.
</Note>

When you use the `@human_feedback` decorator in your Flows, CrewAI Enterprise provides an **email-first HITL system** that lets anyone with an email address respond to review requests:

<CardGroup cols={2}>
  <Card title="Email-First Design" icon="envelope">
    Responders receive email notifications and can reply directly—no login required.
  </Card>
  <Card title="Dashboard Review" icon="desktop">
    Review and respond to HITL requests in the Enterprise dashboard when you prefer.
  </Card>
  <Card title="Flexible Routing" icon="route">
    Route requests to specific emails based on method patterns, or pull them from flow state.
  </Card>
  <Card title="Auto-Response" icon="clock">
    Configure automatic fallback responses when no human replies within the timeout.
  </Card>
</CardGroup>

### Key Benefits

- **External responders**: Anyone with an email can respond, even non-platform users
- **Dynamic assignment**: Pull the assignee's email from flow state (e.g., `account_owner_email`)
- **Simple configuration**: Email-based routing is easier to set up than user/role management
- **Deployment-creator fallback**: If no routing rule matches, the deployment creator is notified

<Tip>
For implementation details on the `@human_feedback` decorator, see the [Human Feedback in Flows](/pt-BR/learn/human-feedback-in-flows) guide.
</Tip>

## Setting Up Webhook-Based HITL Workflows

For custom integrations with external systems such as Slack, Microsoft Teams, or your own applications, you can use the webhook-based approach:
## Setting Up HITL Workflows

<Steps>
  <Step title="Configure Your Task">
@@ -144,14 +99,3 @@ HITL workflows are particularly valuable for:
- Sensitive or high-stakes operations
- Creative tasks that require human judgment
- Compliance and regulatory reviews

## Learn More

<CardGroup cols={2}>
  <Card title="HITL Management for Flows" icon="users-gear" href="/pt-BR/enterprise/features/flow-hitl-management">
    Explore the full HITL platform features for Flows, including email notifications, routing rules, auto-response, and analytics.
  </Card>
  <Card title="Human Feedback in Flows" icon="code" href="/pt-BR/learn/human-feedback-in-flows">
    Implementation guide for the `@human_feedback` decorator in your Flows.
  </Card>
</CardGroup>

@@ -112,9 +112,3 @@ HITL workflows are particularly valuable for:
- Sensitive or high-stakes operations
- Creative tasks that require human judgment
- Compliance and regulatory reviews

## Enterprise Features

<Card title="HITL Management Platform for Flows" icon="users-gear" href="/pt-BR/enterprise/features/flow-hitl-management">
  CrewAI Enterprise provides a comprehensive HITL management system for Flows with in-platform review, responder assignment, permissions, escalation policies, SLA management, dynamic routing, and full analytics. [Learn more →](/pt-BR/enterprise/features/flow-hitl-management)
</Card>

@@ -13,10 +13,18 @@ from crewai_tools.tools.crewai_platform_tools.crewai_platform_tool_builder impor
from crewai_tools.tools.crewai_platform_tools.crewai_platform_tools import (
    CrewaiPlatformTools,
)
from crewai_tools.tools.crewai_platform_tools.file_hook import (
    process_file_markers,
    register_file_processing_hook,
    unregister_file_processing_hook,
)


__all__ = [
    "CrewAIPlatformActionTool",
    "CrewaiPlatformToolBuilder",
    "CrewaiPlatformTools",
    "process_file_markers",
    "register_file_processing_hook",
    "unregister_file_processing_hook",
]

@@ -2,6 +2,8 @@

import json
import os
import re
import tempfile
from typing import Any

from crewai.tools import BaseTool
@@ -14,6 +16,26 @@ from crewai_tools.tools.crewai_platform_tools.misc import (
    get_platform_integration_token,
)

_FILE_MARKER_PREFIX = "__CREWAI_FILE__"

_MIME_TO_EXTENSION = {
    "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet": ".xlsx",
    "application/vnd.ms-excel": ".xls",
    "application/vnd.openxmlformats-officedocument.wordprocessingml.document": ".docx",
    "application/msword": ".doc",
    "application/vnd.openxmlformats-officedocument.presentationml.presentation": ".pptx",
    "application/vnd.ms-powerpoint": ".ppt",
    "application/pdf": ".pdf",
    "image/png": ".png",
    "image/jpeg": ".jpg",
    "image/gif": ".gif",
    "image/webp": ".webp",
    "text/plain": ".txt",
    "text/csv": ".csv",
    "application/json": ".json",
    "application/zip": ".zip",
}


class CrewAIPlatformActionTool(BaseTool):
    action_name: str = Field(default="", description="The name of the action")
@@ -71,10 +93,18 @@ class CrewAIPlatformActionTool(BaseTool):
                url=api_url,
                headers=headers,
                json=payload,
                timeout=60,
                timeout=300,
                stream=True,
                verify=os.environ.get("CREWAI_FACTORY", "false").lower() != "true",
            )

            content_type = response.headers.get("Content-Type", "")

            # Check if response is binary (non-JSON)
            if "application/json" not in content_type:
                return self._handle_binary_response(response)

            # Normal JSON response
            data = response.json()
            if not response.ok:
                if isinstance(data, dict):
@@ -91,3 +121,49 @@ class CrewAIPlatformActionTool(BaseTool):

        except Exception as e:
            return f"Error executing action {self.action_name}: {e!s}"

    def _handle_binary_response(self, response: requests.Response) -> str:
        """Handle binary streaming response from the API.

        Streams the binary content to a temporary file and returns a marker
        that can be processed by the file hook to inject the file into the
        LLM context.

        Args:
            response: The streaming HTTP response with binary content.

        Returns:
            A file marker string in the format:
            __CREWAI_FILE__:filename:content_type:file_path
        """
        content_type = response.headers.get("Content-Type", "application/octet-stream")

        filename = self._extract_filename_from_headers(response.headers)

        extension = self._get_file_extension(content_type, filename)

        with tempfile.NamedTemporaryFile(
            delete=False, suffix=extension, prefix="crewai_"
        ) as tmp_file:
            for chunk in response.iter_content(chunk_size=8192):
                tmp_file.write(chunk)
            tmp_path = tmp_file.name

        return f"{_FILE_MARKER_PREFIX}:{filename}:{content_type}:{tmp_path}"

    def _extract_filename_from_headers(
        self, headers: requests.structures.CaseInsensitiveDict
    ) -> str:
        content_disposition = headers.get("Content-Disposition", "")
        if content_disposition:
            match = re.search(r'filename="?([^";\s]+)"?', content_disposition)
            if match:
                return match.group(1)
        return "downloaded_file"

    def _get_file_extension(self, content_type: str, filename: str) -> str:
        if "." in filename:
            return "." + filename.rsplit(".", 1)[-1]

        base_content_type = content_type.split(";")[0].strip()
        return _MIME_TO_EXTENSION.get(base_content_type, "")

@@ -0,0 +1,158 @@
"""File processing hook for CrewAI Platform Tools.

This module provides a hook that processes file markers returned by platform tools
and injects the files into the LLM context for native file handling.
"""

from __future__ import annotations

import logging
import os
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from crewai.hooks.tool_hooks import ToolCallHookContext

logger = logging.getLogger(__name__)

_FILE_MARKER_PREFIX = "__CREWAI_FILE__"

_hook_registered = False


def process_file_markers(context: ToolCallHookContext) -> str | None:
    """Process file markers in tool results and inject files into context.

    This hook detects file markers returned by platform tools (e.g., download_file)
    and converts them into FileInput objects that are attached to the hook context.
    The agent executor will then inject these files into the tool message for
    native LLM file handling.

    The marker format is:
        __CREWAI_FILE__:filename:content_type:file_path

    Args:
        context: The tool call hook context containing the tool result.

    Returns:
        A human-readable message if a file was processed, None otherwise.
    """
    result = context.tool_result

    if not result or not result.startswith(_FILE_MARKER_PREFIX):
        return None

    try:
        parts = result.split(":", 3)
        if len(parts) < 4:
            logger.warning(f"Invalid file marker format: {result[:100]}")
            return None

        _, filename, content_type, file_path = parts

        if not os.path.isfile(file_path):
            logger.error(f"File not found: {file_path}")
            return f"Error: Downloaded file not found at {file_path}"

        try:
            from crewai_files import File
        except ImportError:
            logger.warning(
                "crewai_files not installed. File will not be attached to LLM context."
            )
            return (
                f"Downloaded file: {filename} ({content_type}). "
                f"File saved at: {file_path}. "
                "Note: Install crewai_files for native LLM file handling."
            )

        file = File(source=file_path, content_type=content_type, filename=filename)

        context.files = {filename: file}

        file_size = os.path.getsize(file_path)
        size_str = _format_file_size(file_size)

        return f"Downloaded file: {filename} ({content_type}, {size_str}). File is attached for LLM analysis."

    except Exception as e:
        logger.exception(f"Error processing file marker: {e}")
        return f"Error processing downloaded file: {e}"


def _format_file_size(size_bytes: int) -> str:
    """Format file size in human-readable format.

    Args:
        size_bytes: Size in bytes.

    Returns:
        Human-readable size string.
    """
    if size_bytes < 1024:
        return f"{size_bytes} bytes"
    elif size_bytes < 1024 * 1024:
        return f"{size_bytes / 1024:.1f} KB"
    elif size_bytes < 1024 * 1024 * 1024:
        return f"{size_bytes / (1024 * 1024):.1f} MB"
    else:
        return f"{size_bytes / (1024 * 1024 * 1024):.1f} GB"


def register_file_processing_hook() -> bool:
    """Register the file processing hook globally.

    This function should be called once during application initialization
    to enable automatic file injection for platform tools.

    Returns:
        True if the hook was registered, False if it was already registered
        or if registration failed.
    """
    global _hook_registered

    if _hook_registered:
        logger.debug("File processing hook already registered")
        return False

    try:
        from crewai.hooks import register_after_tool_call_hook

        register_after_tool_call_hook(process_file_markers)
        _hook_registered = True
        logger.info("File processing hook registered successfully")
        return True
    except ImportError:
        logger.warning(
            "crewai.hooks not available. File processing hook not registered."
        )
        return False
    except Exception as e:
        logger.exception(f"Failed to register file processing hook: {e}")
        return False


def unregister_file_processing_hook() -> bool:
    """Unregister the file processing hook.

    Returns:
        True if the hook was unregistered, False if it wasn't registered
        or if unregistration failed.
    """
    global _hook_registered

    if not _hook_registered:
        return False

    try:
        from crewai.hooks import unregister_after_tool_call_hook

        unregister_after_tool_call_hook(process_file_markers)
        _hook_registered = False
        logger.info("File processing hook unregistered")
        return True
    except ImportError:
        return False
    except Exception as e:
        logger.exception(f"Failed to unregister file processing hook: {e}")
        return False
@@ -1,36 +1,20 @@
"""A2A authentication schemas."""

from crewai.a2a.auth.client_schemes import (
from crewai.a2a.auth.schemas import (
    APIKeyAuth,
    AuthScheme,
    BearerTokenAuth,
    ClientAuthScheme,
    HTTPBasicAuth,
    HTTPDigestAuth,
    OAuth2AuthorizationCode,
    OAuth2ClientCredentials,
    TLSConfig,
)
from crewai.a2a.auth.server_schemes import (
    AuthenticatedUser,
    OIDCAuth,
    ServerAuthScheme,
    SimpleTokenAuth,
)


__all__ = [
    "APIKeyAuth",
    "AuthScheme",
    "AuthenticatedUser",
    "BearerTokenAuth",
    "ClientAuthScheme",
    "HTTPBasicAuth",
    "HTTPDigestAuth",
    "OAuth2AuthorizationCode",
    "OAuth2ClientCredentials",
    "OIDCAuth",
    "ServerAuthScheme",
    "SimpleTokenAuth",
    "TLSConfig",
]

@@ -1,550 +0,0 @@
"""Authentication schemes for A2A protocol clients.

Supported authentication methods:
- Bearer tokens
- OAuth2 (Client Credentials, Authorization Code)
- API Keys (header, query, cookie)
- HTTP Basic authentication
- HTTP Digest authentication
- mTLS (mutual TLS) client certificate authentication
"""

from __future__ import annotations

from abc import ABC, abstractmethod
import asyncio
import base64
from collections.abc import Awaitable, Callable, MutableMapping
from pathlib import Path
import ssl
import time
from typing import TYPE_CHECKING, ClassVar, Literal
import urllib.parse

import httpx
from httpx import DigestAuth
from pydantic import BaseModel, ConfigDict, Field, FilePath, PrivateAttr
from typing_extensions import deprecated


if TYPE_CHECKING:
    import grpc  # type: ignore[import-untyped]


class TLSConfig(BaseModel):
    """TLS/mTLS configuration for secure client connections.

    Supports mutual TLS (mTLS) where the client presents a certificate to the server,
    and standard TLS with custom CA verification.

    Attributes:
        client_cert_path: Path to client certificate file (PEM format) for mTLS.
        client_key_path: Path to client private key file (PEM format) for mTLS.
        ca_cert_path: Path to CA certificate bundle for server verification.
        verify: Whether to verify server certificates. Set False only for development.
    """

    model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")

    client_cert_path: FilePath | None = Field(
        default=None,
        description="Path to client certificate file (PEM format) for mTLS",
    )
    client_key_path: FilePath | None = Field(
        default=None,
        description="Path to client private key file (PEM format) for mTLS",
    )
    ca_cert_path: FilePath | None = Field(
        default=None,
        description="Path to CA certificate bundle for server verification",
    )
    verify: bool = Field(
        default=True,
        description="Whether to verify server certificates. Set False only for development.",
    )

    def get_httpx_ssl_context(self) -> ssl.SSLContext | bool | str:
        """Build SSL context for httpx client.

        Returns:
            SSL context if certificates configured, True for default verification,
            False if verification disabled, or path to CA bundle.
        """
        if not self.verify:
            return False

        if self.client_cert_path and self.client_key_path:
            context = ssl.create_default_context()

            if self.ca_cert_path:
                context.load_verify_locations(cafile=str(self.ca_cert_path))

            context.load_cert_chain(
                certfile=str(self.client_cert_path),
                keyfile=str(self.client_key_path),
            )
            return context

        if self.ca_cert_path:
            return str(self.ca_cert_path)

        return True

    def get_grpc_credentials(self) -> grpc.ChannelCredentials | None:  # type: ignore[no-any-unimported]
        """Build gRPC channel credentials for secure connections.

        Returns:
            gRPC SSL credentials if certificates configured, None otherwise.
        """
        try:
            import grpc
        except ImportError:
            return None

        if not self.verify and not self.client_cert_path:
            return None

        root_certs: bytes | None = None
        private_key: bytes | None = None
        certificate_chain: bytes | None = None

        if self.ca_cert_path:
            root_certs = Path(self.ca_cert_path).read_bytes()

        if self.client_cert_path and self.client_key_path:
            private_key = Path(self.client_key_path).read_bytes()
            certificate_chain = Path(self.client_cert_path).read_bytes()

        return grpc.ssl_channel_credentials(
            root_certificates=root_certs,
            private_key=private_key,
            certificate_chain=certificate_chain,
        )

class ClientAuthScheme(ABC, BaseModel):
    """Base class for client-side authentication schemes.

    Client auth schemes apply credentials to outgoing requests.

    Attributes:
        tls: Optional TLS/mTLS configuration for secure connections.
    """

    tls: TLSConfig | None = Field(
        default=None,
        description="TLS/mTLS configuration for secure connections",
    )

    @abstractmethod
    async def apply_auth(
        self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
    ) -> MutableMapping[str, str]:
        """Apply authentication to request headers.

        Args:
            client: HTTP client for making auth requests.
            headers: Current request headers.

        Returns:
            Updated headers with authentication applied.
        """
        ...


@deprecated("Use ClientAuthScheme instead", category=FutureWarning)
class AuthScheme(ClientAuthScheme):
    """Deprecated: Use ClientAuthScheme instead."""


class BearerTokenAuth(ClientAuthScheme):
    """Bearer token authentication (Authorization: Bearer <token>).

    Attributes:
        token: Bearer token for authentication.
    """

    token: str = Field(description="Bearer token")

    async def apply_auth(
        self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
    ) -> MutableMapping[str, str]:
        """Apply Bearer token to Authorization header.

        Args:
            client: HTTP client for making auth requests.
            headers: Current request headers.

        Returns:
            Updated headers with Bearer token in Authorization header.
        """
        headers["Authorization"] = f"Bearer {self.token}"
        return headers


class HTTPBasicAuth(ClientAuthScheme):
    """HTTP Basic authentication.

    Attributes:
        username: Username for Basic authentication.
        password: Password for Basic authentication.
    """

    username: str = Field(description="Username")
    password: str = Field(description="Password")

    async def apply_auth(
        self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
    ) -> MutableMapping[str, str]:
        """Apply HTTP Basic authentication.

        Args:
            client: HTTP client for making auth requests.
            headers: Current request headers.

        Returns:
            Updated headers with Basic auth in Authorization header.
        """
        credentials = f"{self.username}:{self.password}"
        encoded = base64.b64encode(credentials.encode()).decode()
        headers["Authorization"] = f"Basic {encoded}"
        return headers


class HTTPDigestAuth(ClientAuthScheme):
    """HTTP Digest authentication.

    Note: Uses httpx-auth library for digest implementation.

    Attributes:
        username: Username for Digest authentication.
        password: Password for Digest authentication.
    """

    username: str = Field(description="Username")
    password: str = Field(description="Password")

    _configured_client_id: int | None = PrivateAttr(default=None)

    async def apply_auth(
        self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
    ) -> MutableMapping[str, str]:
        """Digest auth is handled by httpx auth flow, not headers.

        Args:
            client: HTTP client for making auth requests.
            headers: Current request headers.

        Returns:
            Unchanged headers (Digest auth handled by httpx auth flow).
        """
        return headers

    def configure_client(self, client: httpx.AsyncClient) -> None:
        """Configure client with Digest auth.

        Idempotent: Only configures the client once. Subsequent calls on the same
        client instance are no-ops to prevent overwriting auth configuration.

        Args:
            client: HTTP client to configure with Digest authentication.
        """
        client_id = id(client)
        if self._configured_client_id == client_id:
            return

        client.auth = DigestAuth(self.username, self.password)
        self._configured_client_id = client_id


class APIKeyAuth(ClientAuthScheme):
    """API Key authentication (header, query, or cookie).

    Attributes:
        api_key: API key value for authentication.
        location: Where to send the API key (header, query, or cookie).
        name: Parameter name for the API key (default: X-API-Key).
    """

    api_key: str = Field(description="API key value")
    location: Literal["header", "query", "cookie"] = Field(
        default="header", description="Where to send the API key"
    )
    name: str = Field(default="X-API-Key", description="Parameter name for the API key")

    _configured_client_ids: set[int] = PrivateAttr(default_factory=set)

    async def apply_auth(
        self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
    ) -> MutableMapping[str, str]:
        """Apply API key authentication.

        Args:
            client: HTTP client for making auth requests.
            headers: Current request headers.

        Returns:
            Updated headers with API key (for header/cookie locations).
        """
        if self.location == "header":
            headers[self.name] = self.api_key
        elif self.location == "cookie":
            headers["Cookie"] = f"{self.name}={self.api_key}"
        return headers

    def configure_client(self, client: httpx.AsyncClient) -> None:
        """Configure client for query param API keys.

        Idempotent: Only adds the request hook once per client instance.
        Subsequent calls on the same client are no-ops to prevent hook accumulation.

        Args:
            client: HTTP client to configure with query param API key hook.
        """
        if self.location == "query":
            client_id = id(client)
            if client_id in self._configured_client_ids:
                return

            async def _add_api_key_param(request: httpx.Request) -> None:
                url = httpx.URL(request.url)
                request.url = url.copy_add_param(self.name, self.api_key)

            client.event_hooks["request"].append(_add_api_key_param)
            self._configured_client_ids.add(client_id)

class OAuth2ClientCredentials(ClientAuthScheme):
    """OAuth2 Client Credentials flow authentication.

    Thread-safe implementation with asyncio.Lock to prevent concurrent token fetches
    when multiple requests share the same auth instance.

    Attributes:
        token_url: OAuth2 token endpoint URL.
        client_id: OAuth2 client identifier.
        client_secret: OAuth2 client secret.
        scopes: List of required OAuth2 scopes.
    """

    token_url: str = Field(description="OAuth2 token endpoint")
    client_id: str = Field(description="OAuth2 client ID")
    client_secret: str = Field(description="OAuth2 client secret")
    scopes: list[str] = Field(
        default_factory=list, description="Required OAuth2 scopes"
    )

    _access_token: str | None = PrivateAttr(default=None)
    _token_expires_at: float | None = PrivateAttr(default=None)
    _lock: asyncio.Lock = PrivateAttr(default_factory=asyncio.Lock)

    async def apply_auth(
        self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
    ) -> MutableMapping[str, str]:
        """Apply OAuth2 access token to Authorization header.

        Uses asyncio.Lock to ensure only one coroutine fetches tokens at a time,
        preventing race conditions when multiple concurrent requests use the same
        auth instance.

        Args:
            client: HTTP client for making token requests.
            headers: Current request headers.

        Returns:
            Updated headers with OAuth2 access token in Authorization header.
        """
        if (
            self._access_token is None
            or self._token_expires_at is None
            or time.time() >= self._token_expires_at
        ):
            async with self._lock:
                if (
                    self._access_token is None
                    or self._token_expires_at is None
                    or time.time() >= self._token_expires_at
                ):
                    await self._fetch_token(client)

        if self._access_token:
            headers["Authorization"] = f"Bearer {self._access_token}"

        return headers

    async def _fetch_token(self, client: httpx.AsyncClient) -> None:
        """Fetch OAuth2 access token using client credentials flow.

        Args:
            client: HTTP client for making token request.

        Raises:
            httpx.HTTPStatusError: If token request fails.
        """
        data = {
            "grant_type": "client_credentials",
            "client_id": self.client_id,
            "client_secret": self.client_secret,
        }

        if self.scopes:
            data["scope"] = " ".join(self.scopes)

        response = await client.post(self.token_url, data=data)
        response.raise_for_status()

        token_data = response.json()
        self._access_token = token_data["access_token"]
        expires_in = token_data.get("expires_in", 3600)
        self._token_expires_at = time.time() + expires_in - 60


class OAuth2AuthorizationCode(ClientAuthScheme):
    """OAuth2 Authorization Code flow authentication.

    Thread-safe implementation with asyncio.Lock to prevent concurrent token operations.

    Note: Requires interactive authorization.

    Attributes:
        authorization_url: OAuth2 authorization endpoint URL.
        token_url: OAuth2 token endpoint URL.
        client_id: OAuth2 client identifier.
        client_secret: OAuth2 client secret.
        redirect_uri: OAuth2 redirect URI for callback.
        scopes: List of required OAuth2 scopes.
    """

    authorization_url: str = Field(description="OAuth2 authorization endpoint")
    token_url: str = Field(description="OAuth2 token endpoint")
    client_id: str = Field(description="OAuth2 client ID")
    client_secret: str = Field(description="OAuth2 client secret")
    redirect_uri: str = Field(description="OAuth2 redirect URI")
    scopes: list[str] = Field(
        default_factory=list, description="Required OAuth2 scopes"
    )

    _access_token: str | None = PrivateAttr(default=None)
    _refresh_token: str | None = PrivateAttr(default=None)
    _token_expires_at: float | None = PrivateAttr(default=None)
    _authorization_callback: Callable[[str], Awaitable[str]] | None = PrivateAttr(
        default=None
    )
    _lock: asyncio.Lock = PrivateAttr(default_factory=asyncio.Lock)

    def set_authorization_callback(
        self, callback: Callable[[str], Awaitable[str]] | None
    ) -> None:
        """Set callback to handle authorization URL.

        Args:
            callback: Async function that receives authorization URL and returns auth code.
        """
        self._authorization_callback = callback

    async def apply_auth(
        self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
    ) -> MutableMapping[str, str]:
        """Apply OAuth2 access token to Authorization header.

        Uses asyncio.Lock to ensure only one coroutine handles token operations
        (initial fetch or refresh) at a time.

        Args:
            client: HTTP client for making token requests.
            headers: Current request headers.

        Returns:
            Updated headers with OAuth2 access token in Authorization header.

        Raises:
            ValueError: If authorization callback is not set.
        """
        if self._access_token is None:
            if self._authorization_callback is None:
                msg = "Authorization callback not set. Use set_authorization_callback()"
                raise ValueError(msg)
            async with self._lock:
                if self._access_token is None:
                    await self._fetch_initial_token(client)
        elif self._token_expires_at and time.time() >= self._token_expires_at:
            async with self._lock:
                if self._token_expires_at and time.time() >= self._token_expires_at:
                    await self._refresh_access_token(client)

        if self._access_token:
            headers["Authorization"] = f"Bearer {self._access_token}"

        return headers

    async def _fetch_initial_token(self, client: httpx.AsyncClient) -> None:
        """Fetch initial access token using authorization code flow.

        Args:
            client: HTTP client for making token request.

        Raises:
            ValueError: If authorization callback is not set.
            httpx.HTTPStatusError: If token request fails.
        """
        params = {
            "response_type": "code",
            "client_id": self.client_id,
            "redirect_uri": self.redirect_uri,
            "scope": " ".join(self.scopes),
        }
        auth_url = f"{self.authorization_url}?{urllib.parse.urlencode(params)}"

        if self._authorization_callback is None:
            msg = "Authorization callback not set"
            raise ValueError(msg)
        auth_code = await self._authorization_callback(auth_url)

        data = {
            "grant_type": "authorization_code",
            "code": auth_code,
            "client_id": self.client_id,
            "client_secret": self.client_secret,
            "redirect_uri": self.redirect_uri,
        }

        response = await client.post(self.token_url, data=data)
        response.raise_for_status()

        token_data = response.json()
        self._access_token = token_data["access_token"]
        self._refresh_token = token_data.get("refresh_token")

        expires_in = token_data.get("expires_in", 3600)
        self._token_expires_at = time.time() + expires_in - 60

    async def _refresh_access_token(self, client: httpx.AsyncClient) -> None:
        """Refresh the access token using refresh token.

        Args:
            client: HTTP client for making token request.

        Raises:
            httpx.HTTPStatusError: If token refresh request fails.
        """
        if not self._refresh_token:
            await self._fetch_initial_token(client)
            return

        data = {
            "grant_type": "refresh_token",
            "refresh_token": self._refresh_token,
            "client_id": self.client_id,
            "client_secret": self.client_secret,
        }

        response = await client.post(self.token_url, data=data)
        response.raise_for_status()

        token_data = response.json()
        self._access_token = token_data["access_token"]
        if "refresh_token" in token_data:
            self._refresh_token = token_data["refresh_token"]

        expires_in = token_data.get("expires_in", 3600)
        self._token_expires_at = time.time() + expires_in - 60
@@ -1,71 +1,392 @@
"""Deprecated: Authentication schemes for A2A protocol agents.
"""Authentication schemes for A2A protocol agents.

This module is deprecated. Import from crewai.a2a.auth instead:
- crewai.a2a.auth.ClientAuthScheme (replaces AuthScheme)
- crewai.a2a.auth.BearerTokenAuth
- crewai.a2a.auth.HTTPBasicAuth
- crewai.a2a.auth.HTTPDigestAuth
- crewai.a2a.auth.APIKeyAuth
- crewai.a2a.auth.OAuth2ClientCredentials
- crewai.a2a.auth.OAuth2AuthorizationCode
Supported authentication methods:
- Bearer tokens
- OAuth2 (Client Credentials, Authorization Code)
- API Keys (header, query, cookie)
- HTTP Basic authentication
- HTTP Digest authentication
"""

from __future__ import annotations

from typing_extensions import deprecated
from abc import ABC, abstractmethod
import base64
from collections.abc import Awaitable, Callable, MutableMapping
import time
from typing import Literal
import urllib.parse

from crewai.a2a.auth.client_schemes import (
    APIKeyAuth as _APIKeyAuth,
    BearerTokenAuth as _BearerTokenAuth,
    ClientAuthScheme as _ClientAuthScheme,
    HTTPBasicAuth as _HTTPBasicAuth,
    HTTPDigestAuth as _HTTPDigestAuth,
    OAuth2AuthorizationCode as _OAuth2AuthorizationCode,
    OAuth2ClientCredentials as _OAuth2ClientCredentials,
)
import httpx
from httpx import DigestAuth
from pydantic import BaseModel, Field, PrivateAttr

@deprecated("Use ClientAuthScheme from crewai.a2a.auth instead", category=FutureWarning)
|
||||
class AuthScheme(_ClientAuthScheme):
|
||||
"""Deprecated: Use ClientAuthScheme from crewai.a2a.auth instead."""
|
||||
class AuthScheme(ABC, BaseModel):
|
||||
"""Base class for authentication schemes."""
|
||||
|
||||
@abstractmethod
|
||||
async def apply_auth(
|
||||
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
|
||||
) -> MutableMapping[str, str]:
|
||||
"""Apply authentication to request headers.
|
||||
|
||||
Args:
|
||||
client: HTTP client for making auth requests.
|
||||
headers: Current request headers.
|
||||
|
||||
Returns:
|
||||
Updated headers with authentication applied.
|
||||
"""
|
||||
...
|
||||
|
||||
|
||||
@deprecated("Import from crewai.a2a.auth instead", category=FutureWarning)
|
||||
class BearerTokenAuth(_BearerTokenAuth):
|
||||
"""Deprecated: Import from crewai.a2a.auth instead."""
|
||||
class BearerTokenAuth(AuthScheme):
|
||||
"""Bearer token authentication (Authorization: Bearer <token>).
|
||||
|
||||
Attributes:
|
||||
token: Bearer token for authentication.
|
||||
"""
|
||||
|
||||
token: str = Field(description="Bearer token")
|
||||
|
||||
async def apply_auth(
|
||||
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
|
||||
) -> MutableMapping[str, str]:
|
||||
"""Apply Bearer token to Authorization header.
|
||||
|
||||
Args:
|
||||
client: HTTP client for making auth requests.
|
||||
headers: Current request headers.
|
||||
|
||||
Returns:
|
||||
Updated headers with Bearer token in Authorization header.
|
||||
"""
|
||||
headers["Authorization"] = f"Bearer {self.token}"
|
||||
return headers
|
||||
|
||||
|
||||
@deprecated("Import from crewai.a2a.auth instead", category=FutureWarning)
|
||||
class HTTPBasicAuth(_HTTPBasicAuth):
|
||||
"""Deprecated: Import from crewai.a2a.auth instead."""
|
||||
class HTTPBasicAuth(AuthScheme):
|
||||
"""HTTP Basic authentication.
|
||||
|
||||
Attributes:
|
||||
username: Username for Basic authentication.
|
||||
password: Password for Basic authentication.
|
||||
"""
|
||||
|
||||
username: str = Field(description="Username")
|
||||
password: str = Field(description="Password")
|
||||
|
||||
async def apply_auth(
|
||||
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
|
||||
) -> MutableMapping[str, str]:
|
||||
"""Apply HTTP Basic authentication.
|
||||
|
||||
Args:
|
||||
client: HTTP client for making auth requests.
|
||||
headers: Current request headers.
|
||||
|
||||
Returns:
|
||||
Updated headers with Basic auth in Authorization header.
|
||||
"""
|
||||
credentials = f"{self.username}:{self.password}"
|
||||
encoded = base64.b64encode(credentials.encode()).decode()
|
||||
headers["Authorization"] = f"Basic {encoded}"
|
||||
return headers
|
||||
|
||||
|
||||
@deprecated("Import from crewai.a2a.auth instead", category=FutureWarning)
|
||||
class HTTPDigestAuth(_HTTPDigestAuth):
|
||||
"""Deprecated: Import from crewai.a2a.auth instead."""
|
||||
class HTTPDigestAuth(AuthScheme):
|
||||
"""HTTP Digest authentication.
|
||||
|
||||
Note: Uses httpx-auth library for digest implementation.
|
||||
|
||||
Attributes:
|
||||
username: Username for Digest authentication.
|
||||
password: Password for Digest authentication.
|
||||
"""
|
||||
|
||||
username: str = Field(description="Username")
|
||||
password: str = Field(description="Password")
|
||||
|
||||
async def apply_auth(
|
||||
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
|
||||
) -> MutableMapping[str, str]:
|
||||
"""Digest auth is handled by httpx auth flow, not headers.
|
||||
|
||||
Args:
|
||||
client: HTTP client for making auth requests.
|
||||
headers: Current request headers.
|
||||
|
||||
Returns:
|
||||
Unchanged headers (Digest auth handled by httpx auth flow).
|
||||
"""
|
||||
return headers
|
||||
|
||||
def configure_client(self, client: httpx.AsyncClient) -> None:
|
||||
"""Configure client with Digest auth.
|
||||
|
||||
Args:
|
||||
client: HTTP client to configure with Digest authentication.
|
||||
"""
|
||||
client.auth = DigestAuth(self.username, self.password)
|
||||
|
||||
|
||||
@deprecated("Import from crewai.a2a.auth instead", category=FutureWarning)
|
||||
class APIKeyAuth(_APIKeyAuth):
|
||||
"""Deprecated: Import from crewai.a2a.auth instead."""
|
||||
class APIKeyAuth(AuthScheme):
|
||||
"""API Key authentication (header, query, or cookie).
|
||||
|
||||
Attributes:
|
||||
api_key: API key value for authentication.
|
||||
location: Where to send the API key (header, query, or cookie).
|
||||
name: Parameter name for the API key (default: X-API-Key).
|
||||
"""
|
||||
|
||||
api_key: str = Field(description="API key value")
|
||||
location: Literal["header", "query", "cookie"] = Field(
|
||||
default="header", description="Where to send the API key"
|
||||
)
|
||||
name: str = Field(default="X-API-Key", description="Parameter name for the API key")
|
||||
|
||||
async def apply_auth(
|
||||
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
|
||||
) -> MutableMapping[str, str]:
|
||||
"""Apply API key authentication.
|
||||
|
||||
Args:
|
||||
client: HTTP client for making auth requests.
|
||||
headers: Current request headers.
|
||||
|
||||
Returns:
|
||||
Updated headers with API key (for header/cookie locations).
|
||||
"""
|
||||
if self.location == "header":
|
||||
headers[self.name] = self.api_key
|
||||
elif self.location == "cookie":
|
||||
headers["Cookie"] = f"{self.name}={self.api_key}"
|
||||
return headers
|
||||
|
||||
def configure_client(self, client: httpx.AsyncClient) -> None:
|
||||
"""Configure client for query param API keys.
|
||||
|
||||
Args:
|
||||
client: HTTP client to configure with query param API key hook.
|
||||
"""
|
||||
if self.location == "query":
|
||||
|
||||
async def _add_api_key_param(request: httpx.Request) -> None:
|
||||
url = httpx.URL(request.url)
|
||||
request.url = url.copy_add_param(self.name, self.api_key)
|
||||
|
||||
client.event_hooks["request"].append(_add_api_key_param)


@deprecated("Import from crewai.a2a.auth instead", category=FutureWarning)
class OAuth2ClientCredentials(_OAuth2ClientCredentials):
    """Deprecated: Import from crewai.a2a.auth instead."""
class OAuth2ClientCredentials(AuthScheme):
    """OAuth2 Client Credentials flow authentication.

    Attributes:
        token_url: OAuth2 token endpoint URL.
        client_id: OAuth2 client identifier.
        client_secret: OAuth2 client secret.
        scopes: List of required OAuth2 scopes.
    """

    token_url: str = Field(description="OAuth2 token endpoint")
    client_id: str = Field(description="OAuth2 client ID")
    client_secret: str = Field(description="OAuth2 client secret")
    scopes: list[str] = Field(
        default_factory=list, description="Required OAuth2 scopes"
    )

    _access_token: str | None = PrivateAttr(default=None)
    _token_expires_at: float | None = PrivateAttr(default=None)

    async def apply_auth(
        self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
    ) -> MutableMapping[str, str]:
        """Apply OAuth2 access token to Authorization header.

        Args:
            client: HTTP client for making token requests.
            headers: Current request headers.

        Returns:
            Updated headers with OAuth2 access token in Authorization header.
        """
        if (
            self._access_token is None
            or self._token_expires_at is None
            or time.time() >= self._token_expires_at
        ):
            await self._fetch_token(client)

        if self._access_token:
            headers["Authorization"] = f"Bearer {self._access_token}"

        return headers

    async def _fetch_token(self, client: httpx.AsyncClient) -> None:
        """Fetch OAuth2 access token using client credentials flow.

        Args:
            client: HTTP client for making token request.

        Raises:
            httpx.HTTPStatusError: If token request fails.
        """
        data = {
            "grant_type": "client_credentials",
            "client_id": self.client_id,
            "client_secret": self.client_secret,
        }

        if self.scopes:
            data["scope"] = " ".join(self.scopes)

        response = await client.post(self.token_url, data=data)
        response.raise_for_status()

        token_data = response.json()
        self._access_token = token_data["access_token"]
        expires_in = token_data.get("expires_in", 3600)
        self._token_expires_at = time.time() + expires_in - 60
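
Taken together, `apply_auth` gives lazy token acquisition with a 60-second expiry margin. A minimal sketch of driving it by hand, assuming the IdP and agent URLs are placeholders:

```python
import asyncio
import httpx

async def main() -> None:
    # Hypothetical credentials; field names come from the class above.
    auth = OAuth2ClientCredentials(
        token_url="https://idp.example.com/oauth/token",
        client_id="my-client",
        client_secret="my-secret",
        scopes=["a2a.read"],
    )
    async with httpx.AsyncClient() as client:
        headers = await auth.apply_auth(client, {})  # fetches a token on first use
        await client.get("https://agent.example.com/a2a", headers=headers)

asyncio.run(main())
```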


@deprecated("Import from crewai.a2a.auth instead", category=FutureWarning)
class OAuth2AuthorizationCode(_OAuth2AuthorizationCode):
    """Deprecated: Import from crewai.a2a.auth instead."""
class OAuth2AuthorizationCode(AuthScheme):
    """OAuth2 Authorization Code flow authentication.

    Note: Requires interactive authorization.

__all__ = [
    "APIKeyAuth",
    "AuthScheme",
    "BearerTokenAuth",
    "HTTPBasicAuth",
    "HTTPDigestAuth",
    "OAuth2AuthorizationCode",
    "OAuth2ClientCredentials",
]
    Attributes:
        authorization_url: OAuth2 authorization endpoint URL.
        token_url: OAuth2 token endpoint URL.
        client_id: OAuth2 client identifier.
        client_secret: OAuth2 client secret.
        redirect_uri: OAuth2 redirect URI for callback.
        scopes: List of required OAuth2 scopes.
    """

    authorization_url: str = Field(description="OAuth2 authorization endpoint")
    token_url: str = Field(description="OAuth2 token endpoint")
    client_id: str = Field(description="OAuth2 client ID")
    client_secret: str = Field(description="OAuth2 client secret")
    redirect_uri: str = Field(description="OAuth2 redirect URI")
    scopes: list[str] = Field(
        default_factory=list, description="Required OAuth2 scopes"
    )

    _access_token: str | None = PrivateAttr(default=None)
    _refresh_token: str | None = PrivateAttr(default=None)
    _token_expires_at: float | None = PrivateAttr(default=None)
    _authorization_callback: Callable[[str], Awaitable[str]] | None = PrivateAttr(
        default=None
    )

    def set_authorization_callback(
        self, callback: Callable[[str], Awaitable[str]] | None
    ) -> None:
        """Set callback to handle authorization URL.

        Args:
            callback: Async function that receives authorization URL and returns auth code.
        """
        self._authorization_callback = callback

    async def apply_auth(
        self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
    ) -> MutableMapping[str, str]:
        """Apply OAuth2 access token to Authorization header.

        Args:
            client: HTTP client for making token requests.
            headers: Current request headers.

        Returns:
            Updated headers with OAuth2 access token in Authorization header.

        Raises:
            ValueError: If authorization callback is not set.
        """
        if self._access_token is None:
            if self._authorization_callback is None:
                msg = "Authorization callback not set. Use set_authorization_callback()"
                raise ValueError(msg)
            await self._fetch_initial_token(client)
        elif self._token_expires_at and time.time() >= self._token_expires_at:
            await self._refresh_access_token(client)

        if self._access_token:
            headers["Authorization"] = f"Bearer {self._access_token}"

        return headers

    async def _fetch_initial_token(self, client: httpx.AsyncClient) -> None:
        """Fetch initial access token using authorization code flow.

        Args:
            client: HTTP client for making token request.

        Raises:
            ValueError: If authorization callback is not set.
            httpx.HTTPStatusError: If token request fails.
        """
        params = {
            "response_type": "code",
            "client_id": self.client_id,
            "redirect_uri": self.redirect_uri,
            "scope": " ".join(self.scopes),
        }
        auth_url = f"{self.authorization_url}?{urllib.parse.urlencode(params)}"

        if self._authorization_callback is None:
            msg = "Authorization callback not set"
            raise ValueError(msg)
        auth_code = await self._authorization_callback(auth_url)

        data = {
            "grant_type": "authorization_code",
            "code": auth_code,
            "client_id": self.client_id,
            "client_secret": self.client_secret,
            "redirect_uri": self.redirect_uri,
        }

        response = await client.post(self.token_url, data=data)
        response.raise_for_status()

        token_data = response.json()
        self._access_token = token_data["access_token"]
        self._refresh_token = token_data.get("refresh_token")

        expires_in = token_data.get("expires_in", 3600)
        self._token_expires_at = time.time() + expires_in - 60

    async def _refresh_access_token(self, client: httpx.AsyncClient) -> None:
        """Refresh the access token using refresh token.

        Args:
            client: HTTP client for making token request.

        Raises:
            httpx.HTTPStatusError: If token refresh request fails.
        """
        if not self._refresh_token:
            await self._fetch_initial_token(client)
            return

        data = {
            "grant_type": "refresh_token",
            "refresh_token": self._refresh_token,
            "client_id": self.client_id,
            "client_secret": self.client_secret,
        }

        response = await client.post(self.token_url, data=data)
        response.raise_for_status()

        token_data = response.json()
        self._access_token = token_data["access_token"]
        if "refresh_token" in token_data:
            self._refresh_token = token_data["refresh_token"]

        expires_in = token_data.get("expires_in", 3600)
        self._token_expires_at = time.time() + expires_in - 60
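
Because the authorization-code grant needs a human in the browser, the scheme delegates that step to an async callback. A hedged sketch, assuming the URLs are placeholders and the operator pastes the code back:

```python
async def authorize_in_browser(auth_url: str) -> str:
    # Hypothetical callback: show the consent URL and read the code back.
    print(f"Open in a browser and approve: {auth_url}")
    return input("Paste the authorization code: ")

auth = OAuth2AuthorizationCode(
    authorization_url="https://idp.example.com/oauth/authorize",
    token_url="https://idp.example.com/oauth/token",
    client_id="my-client",
    client_secret="my-secret",
    redirect_uri="http://localhost:8080/callback",
)
auth.set_authorization_callback(authorize_in_browser)
# apply_auth(...) now runs the interactive flow on first use,
# then refreshes silently via the refresh token.
```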

@@ -1,739 +0,0 @@
"""Server-side authentication schemes for A2A protocol.

These schemes validate incoming requests to A2A server endpoints.

Supported authentication methods:
- Simple token validation with static bearer tokens
- OpenID Connect with JWT validation using JWKS
- OAuth2 with JWT validation or token introspection
"""

from __future__ import annotations

from abc import ABC, abstractmethod
from dataclasses import dataclass
import logging
import os
from typing import TYPE_CHECKING, Annotated, Any, ClassVar, Literal

import jwt
from jwt import PyJWKClient
from pydantic import (
    BaseModel,
    BeforeValidator,
    ConfigDict,
    Field,
    HttpUrl,
    PrivateAttr,
    SecretStr,
    model_validator,
)
from typing_extensions import Self


if TYPE_CHECKING:
    from a2a.types import OAuth2SecurityScheme


logger = logging.getLogger(__name__)


try:
    from fastapi import HTTPException, status as http_status

    HTTP_401_UNAUTHORIZED = http_status.HTTP_401_UNAUTHORIZED
    HTTP_500_INTERNAL_SERVER_ERROR = http_status.HTTP_500_INTERNAL_SERVER_ERROR
    HTTP_503_SERVICE_UNAVAILABLE = http_status.HTTP_503_SERVICE_UNAVAILABLE
except ImportError:

    class HTTPException(Exception):  # type: ignore[no-redef]  # noqa: N818
        """Fallback HTTPException when FastAPI is not installed."""

        def __init__(
            self,
            status_code: int,
            detail: str | None = None,
            headers: dict[str, str] | None = None,
        ) -> None:
            self.status_code = status_code
            self.detail = detail
            self.headers = headers
            super().__init__(detail)

    HTTP_401_UNAUTHORIZED = 401
    HTTP_500_INTERNAL_SERVER_ERROR = 500
    HTTP_503_SERVICE_UNAVAILABLE = 503


def _coerce_secret_str(v: str | SecretStr | None) -> SecretStr | None:
    """Coerce string to SecretStr."""
    if v is None or isinstance(v, SecretStr):
        return v
    return SecretStr(v)


CoercedSecretStr = Annotated[SecretStr, BeforeValidator(_coerce_secret_str)]

JWTAlgorithm = Literal[
    "RS256",
    "RS384",
    "RS512",
    "ES256",
    "ES384",
    "ES512",
    "PS256",
    "PS384",
    "PS512",
]


@dataclass
class AuthenticatedUser:
    """Result of successful authentication.

    Attributes:
        token: The original token that was validated.
        scheme: Name of the authentication scheme used.
        claims: JWT claims from OIDC or OAuth2 authentication.
    """

    token: str
    scheme: str
    claims: dict[str, Any] | None = None


class ServerAuthScheme(ABC, BaseModel):
    """Base class for server-side authentication schemes.

    Each scheme validates incoming requests and returns an AuthenticatedUser
    on success, or raises HTTPException on failure.
    """

    model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")

    @abstractmethod
    async def authenticate(self, token: str) -> AuthenticatedUser:
        """Authenticate the provided token.

        Args:
            token: The bearer token to authenticate.

        Returns:
            AuthenticatedUser on successful authentication.

        Raises:
            HTTPException: If authentication fails.
        """
        ...


class SimpleTokenAuth(ServerAuthScheme):
    """Simple bearer token authentication.

    Validates tokens against a configured static token or AUTH_TOKEN env var.

    Attributes:
        token: Expected token value. Falls back to AUTH_TOKEN env var if not set.
    """

    token: CoercedSecretStr | None = Field(
        default=None,
        description="Expected token. Falls back to AUTH_TOKEN env var.",
    )

    def _get_expected_token(self) -> str | None:
        """Get the expected token value."""
        if self.token:
            return self.token.get_secret_value()
        return os.environ.get("AUTH_TOKEN")

    async def authenticate(self, token: str) -> AuthenticatedUser:
        """Authenticate using simple token comparison.

        Args:
            token: The bearer token to authenticate.

        Returns:
            AuthenticatedUser on successful authentication.

        Raises:
            HTTPException: If authentication fails.
        """
        expected = self._get_expected_token()

        if expected is None:
            logger.warning(
                "Simple token authentication failed",
                extra={"reason": "no_token_configured"},
            )
            raise HTTPException(
                status_code=HTTP_401_UNAUTHORIZED,
                detail="Authentication not configured",
            )

        if token != expected:
            raise HTTPException(
                status_code=HTTP_401_UNAUTHORIZED,
                detail="Invalid or missing authentication credentials",
            )

        return AuthenticatedUser(
            token=token,
            scheme="simple_token",
        )
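
A quick sketch of the happy and failure paths, assuming `AUTH_TOKEN` is unset and the token is configured inline (HTTPException here is the one defined in this module):

```python
import asyncio

async def demo() -> None:
    auth = SimpleTokenAuth(token="expected-token")  # str is coerced to SecretStr
    user = await auth.authenticate("expected-token")
    print(user.scheme)  # "simple_token"
    try:
        await auth.authenticate("wrong-token")
    except HTTPException as exc:  # 401 from the comparison above
        print(exc.status_code)

asyncio.run(demo())
```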


class OIDCAuth(ServerAuthScheme):
    """OpenID Connect authentication.

    Validates JWTs using JWKS with caching support via PyJWT.

    Attributes:
        issuer: The OpenID Connect issuer URL.
        audience: The expected audience claim.
        jwks_url: Optional explicit JWKS URL. Derived from issuer if not set.
        algorithms: List of allowed signing algorithms.
        required_claims: List of claims that must be present in the token.
        jwks_cache_ttl: TTL for JWKS cache in seconds.
        clock_skew_seconds: Allowed clock skew for token validation.
    """

    issuer: HttpUrl = Field(
        description="OpenID Connect issuer URL (e.g., https://auth.example.com)"
    )
    audience: str = Field(description="Expected audience claim (e.g., api://my-agent)")
    jwks_url: HttpUrl | None = Field(
        default=None,
        description="Explicit JWKS URL. Derived from issuer if not set.",
    )
    algorithms: list[str] = Field(
        default_factory=lambda: ["RS256"],
        description="List of allowed signing algorithms (RS256, ES256, etc.)",
    )
    required_claims: list[str] = Field(
        default_factory=lambda: ["exp", "iat", "iss", "aud", "sub"],
        description="List of claims that must be present in the token",
    )
    jwks_cache_ttl: int = Field(
        default=3600,
        description="TTL for JWKS cache in seconds",
        ge=60,
    )
    clock_skew_seconds: float = Field(
        default=30.0,
        description="Allowed clock skew for token validation",
        ge=0.0,
    )

    _jwk_client: PyJWKClient | None = PrivateAttr(default=None)

    @model_validator(mode="after")
    def _init_jwk_client(self) -> Self:
        """Initialize the JWK client after model creation."""
        jwks_url = (
            str(self.jwks_url)
            if self.jwks_url
            else f"{str(self.issuer).rstrip('/')}/.well-known/jwks.json"
        )
        self._jwk_client = PyJWKClient(jwks_url, lifespan=self.jwks_cache_ttl)
        return self

    async def authenticate(self, token: str) -> AuthenticatedUser:
        """Authenticate using OIDC JWT validation.

        Args:
            token: The JWT to authenticate.

        Returns:
            AuthenticatedUser on successful authentication.

        Raises:
            HTTPException: If authentication fails.
        """
        if self._jwk_client is None:
            raise HTTPException(
                status_code=HTTP_500_INTERNAL_SERVER_ERROR,
                detail="OIDC not initialized",
            )

        try:
            signing_key = self._jwk_client.get_signing_key_from_jwt(token)

            claims = jwt.decode(
                token,
                signing_key.key,
                algorithms=self.algorithms,
                audience=self.audience,
                issuer=str(self.issuer).rstrip("/"),
                leeway=self.clock_skew_seconds,
                options={
                    "require": self.required_claims,
                },
            )

            return AuthenticatedUser(
                token=token,
                scheme="oidc",
                claims=claims,
            )

        except jwt.ExpiredSignatureError:
            logger.debug(
                "OIDC authentication failed",
                extra={"reason": "token_expired", "scheme": "oidc"},
            )
            raise HTTPException(
                status_code=HTTP_401_UNAUTHORIZED,
                detail="Token has expired",
            ) from None
        except jwt.InvalidAudienceError:
            logger.debug(
                "OIDC authentication failed",
                extra={"reason": "invalid_audience", "scheme": "oidc"},
            )
            raise HTTPException(
                status_code=HTTP_401_UNAUTHORIZED,
                detail="Invalid token audience",
            ) from None
        except jwt.InvalidIssuerError:
            logger.debug(
                "OIDC authentication failed",
                extra={"reason": "invalid_issuer", "scheme": "oidc"},
            )
            raise HTTPException(
                status_code=HTTP_401_UNAUTHORIZED,
                detail="Invalid token issuer",
            ) from None
        except jwt.MissingRequiredClaimError as e:
            logger.debug(
                "OIDC authentication failed",
                extra={"reason": "missing_claim", "claim": e.claim, "scheme": "oidc"},
            )
            raise HTTPException(
                status_code=HTTP_401_UNAUTHORIZED,
                detail=f"Missing required claim: {e.claim}",
            ) from None
        except jwt.PyJWKClientError as e:
            logger.error(
                "OIDC authentication failed",
                extra={
                    "reason": "jwks_client_error",
                    "error": str(e),
                    "scheme": "oidc",
                },
            )
            raise HTTPException(
                status_code=HTTP_503_SERVICE_UNAVAILABLE,
                detail="Unable to fetch signing keys",
            ) from None
        except jwt.InvalidTokenError as e:
            logger.debug(
                "OIDC authentication failed",
                extra={"reason": "invalid_token", "error": str(e), "scheme": "oidc"},
            )
            raise HTTPException(
                status_code=HTTP_401_UNAUTHORIZED,
                detail="Invalid or missing authentication credentials",
            ) from None
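
Wiring this up needs only the issuer and audience; the JWKS URL is derived from the issuer unless given explicitly. A sketch with placeholder values:

```python
auth = OIDCAuth(
    issuer="https://auth.example.com",  # placeholder issuer; JWKS URL is derived
    audience="api://my-agent",          # expected `aud` claim
    algorithms=["RS256"],
)
# Later, inside a request handler:
#   user = await auth.authenticate(bearer_token)
#   user.claims["sub"]  # validated subject claim
```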


class OAuth2ServerAuth(ServerAuthScheme):
    """OAuth2 authentication for A2A server.

    Declares OAuth2 security scheme in AgentCard and validates tokens using
    either JWKS for JWT tokens or token introspection for opaque tokens.

    This is distinct from OIDCAuth in that it declares an explicit OAuth2SecurityScheme
    with flows, rather than an OpenIdConnectSecurityScheme with discovery URL.

    Attributes:
        token_url: OAuth2 token endpoint URL for client_credentials flow.
        authorization_url: OAuth2 authorization endpoint for authorization_code flow.
        refresh_url: Optional refresh token endpoint URL.
        scopes: Available OAuth2 scopes with descriptions.
        jwks_url: JWKS URL for JWT validation. Required if not using introspection.
        introspection_url: Token introspection endpoint (RFC 7662). Alternative to JWKS.
        introspection_client_id: Client ID for introspection endpoint authentication.
        introspection_client_secret: Client secret for introspection endpoint.
        audience: Expected audience claim for JWT validation.
        issuer: Expected issuer claim for JWT validation.
        algorithms: Allowed JWT signing algorithms.
        required_claims: Claims that must be present in the token.
        jwks_cache_ttl: TTL for JWKS cache in seconds.
        clock_skew_seconds: Allowed clock skew for token validation.
    """

    token_url: HttpUrl = Field(
        description="OAuth2 token endpoint URL",
    )
    authorization_url: HttpUrl | None = Field(
        default=None,
        description="OAuth2 authorization endpoint URL for authorization_code flow",
    )
    refresh_url: HttpUrl | None = Field(
        default=None,
        description="OAuth2 refresh token endpoint URL",
    )
    scopes: dict[str, str] = Field(
        default_factory=dict,
        description="Available OAuth2 scopes with descriptions",
    )
    jwks_url: HttpUrl | None = Field(
        default=None,
        description="JWKS URL for JWT validation. Required if not using introspection.",
    )
    introspection_url: HttpUrl | None = Field(
        default=None,
        description="Token introspection endpoint (RFC 7662). Alternative to JWKS.",
    )
    introspection_client_id: str | None = Field(
        default=None,
        description="Client ID for introspection endpoint authentication",
    )
    introspection_client_secret: CoercedSecretStr | None = Field(
        default=None,
        description="Client secret for introspection endpoint authentication",
    )
    audience: str | None = Field(
        default=None,
        description="Expected audience claim for JWT validation",
    )
    issuer: str | None = Field(
        default=None,
        description="Expected issuer claim for JWT validation",
    )
    algorithms: list[str] = Field(
        default_factory=lambda: ["RS256"],
        description="Allowed JWT signing algorithms",
    )
    required_claims: list[str] = Field(
        default_factory=lambda: ["exp", "iat"],
        description="Claims that must be present in the token",
    )
    jwks_cache_ttl: int = Field(
        default=3600,
        description="TTL for JWKS cache in seconds",
        ge=60,
    )
    clock_skew_seconds: float = Field(
        default=30.0,
        description="Allowed clock skew for token validation",
        ge=0.0,
    )

    _jwk_client: PyJWKClient | None = PrivateAttr(default=None)

    @model_validator(mode="after")
    def _validate_and_init(self) -> Self:
        """Validate configuration and initialize JWKS client if needed."""
        if not self.jwks_url and not self.introspection_url:
            raise ValueError(
                "Either jwks_url or introspection_url must be provided for token validation"
            )

        if self.introspection_url:
            if not self.introspection_client_id or not self.introspection_client_secret:
                raise ValueError(
                    "introspection_client_id and introspection_client_secret are required "
                    "when using token introspection"
                )

        if self.jwks_url:
            self._jwk_client = PyJWKClient(
                str(self.jwks_url), lifespan=self.jwks_cache_ttl
            )

        return self

    async def authenticate(self, token: str) -> AuthenticatedUser:
        """Authenticate using OAuth2 token validation.

        Uses JWKS validation if jwks_url is configured, otherwise falls back
        to token introspection.

        Args:
            token: The OAuth2 access token to authenticate.

        Returns:
            AuthenticatedUser on successful authentication.

        Raises:
            HTTPException: If authentication fails.
        """
        if self._jwk_client:
            return await self._authenticate_jwt(token)
        return await self._authenticate_introspection(token)

    async def _authenticate_jwt(self, token: str) -> AuthenticatedUser:
        """Authenticate using JWKS JWT validation."""
        if self._jwk_client is None:
            raise HTTPException(
                status_code=HTTP_500_INTERNAL_SERVER_ERROR,
                detail="OAuth2 JWKS not initialized",
            )

        try:
            signing_key = self._jwk_client.get_signing_key_from_jwt(token)

            decode_options: dict[str, Any] = {
                "require": self.required_claims,
            }

            claims = jwt.decode(
                token,
                signing_key.key,
                algorithms=self.algorithms,
                audience=self.audience,
                issuer=self.issuer,
                leeway=self.clock_skew_seconds,
                options=decode_options,
            )

            return AuthenticatedUser(
                token=token,
                scheme="oauth2",
                claims=claims,
            )

        except jwt.ExpiredSignatureError:
            logger.debug(
                "OAuth2 authentication failed",
                extra={"reason": "token_expired", "scheme": "oauth2"},
            )
            raise HTTPException(
                status_code=HTTP_401_UNAUTHORIZED,
                detail="Token has expired",
            ) from None
        except jwt.InvalidAudienceError:
            logger.debug(
                "OAuth2 authentication failed",
                extra={"reason": "invalid_audience", "scheme": "oauth2"},
            )
            raise HTTPException(
                status_code=HTTP_401_UNAUTHORIZED,
                detail="Invalid token audience",
            ) from None
        except jwt.InvalidIssuerError:
            logger.debug(
                "OAuth2 authentication failed",
                extra={"reason": "invalid_issuer", "scheme": "oauth2"},
            )
            raise HTTPException(
                status_code=HTTP_401_UNAUTHORIZED,
                detail="Invalid token issuer",
            ) from None
        except jwt.MissingRequiredClaimError as e:
            logger.debug(
                "OAuth2 authentication failed",
                extra={"reason": "missing_claim", "claim": e.claim, "scheme": "oauth2"},
            )
            raise HTTPException(
                status_code=HTTP_401_UNAUTHORIZED,
                detail=f"Missing required claim: {e.claim}",
            ) from None
        except jwt.PyJWKClientError as e:
            logger.error(
                "OAuth2 authentication failed",
                extra={
                    "reason": "jwks_client_error",
                    "error": str(e),
                    "scheme": "oauth2",
                },
            )
            raise HTTPException(
                status_code=HTTP_503_SERVICE_UNAVAILABLE,
                detail="Unable to fetch signing keys",
            ) from None
        except jwt.InvalidTokenError as e:
            logger.debug(
                "OAuth2 authentication failed",
                extra={"reason": "invalid_token", "error": str(e), "scheme": "oauth2"},
            )
            raise HTTPException(
                status_code=HTTP_401_UNAUTHORIZED,
                detail="Invalid or missing authentication credentials",
            ) from None

    async def _authenticate_introspection(self, token: str) -> AuthenticatedUser:
        """Authenticate using OAuth2 token introspection (RFC 7662)."""
        import httpx

        if not self.introspection_url:
            raise HTTPException(
                status_code=HTTP_500_INTERNAL_SERVER_ERROR,
                detail="OAuth2 introspection not configured",
            )

        try:
            async with httpx.AsyncClient() as client:
                response = await client.post(
                    str(self.introspection_url),
                    data={"token": token},
                    auth=(
                        self.introspection_client_id or "",
                        self.introspection_client_secret.get_secret_value()
                        if self.introspection_client_secret
                        else "",
                    ),
                )
                response.raise_for_status()
                introspection_result = response.json()

        except httpx.HTTPStatusError as e:
            logger.error(
                "OAuth2 introspection failed",
                extra={"reason": "http_error", "status_code": e.response.status_code},
            )
            raise HTTPException(
                status_code=HTTP_503_SERVICE_UNAVAILABLE,
                detail="Token introspection service unavailable",
            ) from None
        except Exception as e:
            logger.error(
                "OAuth2 introspection failed",
                extra={"reason": "unexpected_error", "error": str(e)},
            )
            raise HTTPException(
                status_code=HTTP_503_SERVICE_UNAVAILABLE,
                detail="Token introspection failed",
            ) from None

        if not introspection_result.get("active", False):
            logger.debug(
                "OAuth2 authentication failed",
                extra={"reason": "token_not_active", "scheme": "oauth2"},
            )
            raise HTTPException(
                status_code=HTTP_401_UNAUTHORIZED,
                detail="Token is not active",
            )

        return AuthenticatedUser(
            token=token,
            scheme="oauth2",
            claims=introspection_result,
        )

    def to_security_scheme(self) -> OAuth2SecurityScheme:
        """Generate OAuth2SecurityScheme for AgentCard declaration.

        Creates an OAuth2SecurityScheme with appropriate flows based on
        the configured URLs. Includes client_credentials flow if token_url
        is set, and authorization_code flow if authorization_url is set.

        Returns:
            OAuth2SecurityScheme suitable for use in AgentCard security_schemes.
        """
        from a2a.types import (
            AuthorizationCodeOAuthFlow,
            ClientCredentialsOAuthFlow,
            OAuth2SecurityScheme,
            OAuthFlows,
        )

        client_credentials = None
        authorization_code = None

        if self.token_url:
            client_credentials = ClientCredentialsOAuthFlow(
                token_url=str(self.token_url),
                refresh_url=str(self.refresh_url) if self.refresh_url else None,
                scopes=self.scopes,
            )

        if self.authorization_url:
            authorization_code = AuthorizationCodeOAuthFlow(
                authorization_url=str(self.authorization_url),
                token_url=str(self.token_url),
                refresh_url=str(self.refresh_url) if self.refresh_url else None,
                scopes=self.scopes,
            )

        return OAuth2SecurityScheme(
            flows=OAuthFlows(
                client_credentials=client_credentials,
                authorization_code=authorization_code,
            ),
            description="OAuth2 authentication",
        )
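
Which validation path runs is decided purely by configuration: JWKS when `jwks_url` is set, otherwise RFC 7662 introspection (the validator above enforces that one of the two is present). Two hedged sketches with placeholder endpoints:

```python
# JWT validation via JWKS (placeholder URLs):
jwt_auth = OAuth2ServerAuth(
    token_url="https://idp.example.com/oauth/token",
    jwks_url="https://idp.example.com/.well-known/jwks.json",
    audience="api://my-agent",
    issuer="https://idp.example.com",
)

# Opaque-token introspection instead; the resource server authenticates
# to the introspection endpoint with its own client credentials:
introspect_auth = OAuth2ServerAuth(
    token_url="https://idp.example.com/oauth/token",
    introspection_url="https://idp.example.com/oauth/introspect",
    introspection_client_id="resource-server",
    introspection_client_secret="resource-secret",
)
```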


class APIKeyServerAuth(ServerAuthScheme):
    """API Key authentication for A2A server.

    Validates requests using an API key in a header, query parameter, or cookie.

    Attributes:
        name: The name of the API key parameter (default: X-API-Key).
        location: Where to look for the API key (header, query, or cookie).
        api_key: The expected API key value.
    """

    name: str = Field(
        default="X-API-Key",
        description="Name of the API key parameter",
    )
    location: Literal["header", "query", "cookie"] = Field(
        default="header",
        description="Where to look for the API key",
    )
    api_key: CoercedSecretStr = Field(
        description="Expected API key value",
    )

    async def authenticate(self, token: str) -> AuthenticatedUser:
        """Authenticate using API key comparison.

        Args:
            token: The API key to authenticate.

        Returns:
            AuthenticatedUser on successful authentication.

        Raises:
            HTTPException: If authentication fails.
        """
        if token != self.api_key.get_secret_value():
            raise HTTPException(
                status_code=HTTP_401_UNAUTHORIZED,
                detail="Invalid API key",
            )

        return AuthenticatedUser(
            token=token,
            scheme="api_key",
        )
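
A short sketch of the server side of the API-key pair, with placeholder key material (`request` is a hypothetical incoming request object):

```python
server_auth = APIKeyServerAuth(api_key="secret-key")  # expects X-API-Key header by default
# In a handler:
#   user = await server_auth.authenticate(request.headers["X-API-Key"])
```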


class MTLSServerAuth(ServerAuthScheme):
    """Mutual TLS authentication marker for AgentCard declaration.

    This scheme is primarily for AgentCard security_schemes declaration.
    Actual mTLS verification happens at the TLS/transport layer, not
    at the application layer via token validation.

    When configured, this signals to clients that the server requires
    client certificates for authentication.
    """

    description: str = Field(
        default="Mutual TLS certificate authentication",
        description="Description for the security scheme",
    )

    async def authenticate(self, token: str) -> AuthenticatedUser:
        """Return authenticated user for mTLS.

        mTLS verification happens at the transport layer before this is called.
        If we reach this point, the TLS handshake with client cert succeeded.

        Args:
            token: Certificate subject or identifier (from TLS layer).

        Returns:
            AuthenticatedUser indicating mTLS authentication.
        """
        return AuthenticatedUser(
            token=token or "mtls-verified",
            scheme="mtls",
        )

@@ -6,10 +6,8 @@ OAuth2, API keys, and HTTP authentication methods.

import asyncio
from collections.abc import Awaitable, Callable, MutableMapping
import hashlib
import re
import threading
from typing import Final, Literal, cast
from typing import Final

from a2a.client.errors import A2AClientHTTPError
from a2a.types import (
@@ -20,10 +18,10 @@ from a2a.types import (
)
from httpx import AsyncClient, Response

from crewai.a2a.auth.client_schemes import (
from crewai.a2a.auth.schemas import (
    APIKeyAuth,
    AuthScheme,
    BearerTokenAuth,
    ClientAuthScheme,
    HTTPBasicAuth,
    HTTPDigestAuth,
    OAuth2AuthorizationCode,
@@ -31,44 +29,12 @@ from crewai.a2a.auth.client_schemes import (
)


class _AuthStore:
    """Store for authentication schemes with safe concurrent access."""

    def __init__(self) -> None:
        self._store: dict[str, ClientAuthScheme | None] = {}
        self._lock = threading.RLock()

    @staticmethod
    def compute_key(auth_type: str, auth_data: str) -> str:
        """Compute a collision-resistant key using SHA-256."""
        content = f"{auth_type}:{auth_data}"
        return hashlib.sha256(content.encode()).hexdigest()

    def set(self, key: str, auth: ClientAuthScheme | None) -> None:
        """Store an auth scheme."""
        with self._lock:
            self._store[key] = auth

    def get(self, key: str) -> ClientAuthScheme | None:
        """Retrieve an auth scheme by key."""
        with self._lock:
            return self._store.get(key)

    def __setitem__(self, key: str, value: ClientAuthScheme | None) -> None:
        with self._lock:
            self._store[key] = value

    def __getitem__(self, key: str) -> ClientAuthScheme | None:
        with self._lock:
            return self._store[key]


_auth_store = _AuthStore()
_auth_store: dict[int, AuthScheme | None] = {}

_SCHEME_PATTERN: Final[re.Pattern[str]] = re.compile(r"(\w+)\s+(.+?)(?=,\s*\w+\s+|$)")
_PARAM_PATTERN: Final[re.Pattern[str]] = re.compile(r'(\w+)=(?:"([^"]*)"|([^\s,]+))')

_SCHEME_AUTH_MAPPING: Final[dict[type, tuple[type[ClientAuthScheme], ...]]] = {
_SCHEME_AUTH_MAPPING: Final[dict[type, tuple[type[AuthScheme], ...]]] = {
    OAuth2SecurityScheme: (
        OAuth2ClientCredentials,
        OAuth2AuthorizationCode,
@@ -77,9 +43,7 @@ _SCHEME_AUTH_MAPPING: Final[dict[type, tuple[type[ClientAuthScheme], ...]]] = {
    APIKeySecurityScheme: (APIKeyAuth,),
}

_HTTPSchemeType = Literal["basic", "digest", "bearer"]

_HTTP_SCHEME_MAPPING: Final[dict[_HTTPSchemeType, type[ClientAuthScheme]]] = {
_HTTP_SCHEME_MAPPING: Final[dict[str, type[AuthScheme]]] = {
    "basic": HTTPBasicAuth,
    "digest": HTTPDigestAuth,
    "bearer": BearerTokenAuth,
@@ -87,8 +51,8 @@ _HTTP_SCHEME_MAPPING: Final[dict[_HTTPSchemeType, type[ClientAuthScheme]]] = {


def _raise_auth_mismatch(
    expected_classes: type[ClientAuthScheme] | tuple[type[ClientAuthScheme], ...],
    provided_auth: ClientAuthScheme,
    expected_classes: type[AuthScheme] | tuple[type[AuthScheme], ...],
    provided_auth: AuthScheme,
) -> None:
    """Raise authentication mismatch error.

@@ -147,7 +111,7 @@ def parse_www_authenticate(header_value: str) -> dict[str, dict[str, str]]:

def validate_auth_against_agent_card(
    agent_card: AgentCard, auth: ClientAuthScheme | None
    agent_card: AgentCard, auth: AuthScheme | None
) -> None:
    """Validate that provided auth matches AgentCard security requirements.

@@ -181,8 +145,7 @@ def validate_auth_against_agent_card(
        return

    if isinstance(scheme, HTTPAuthSecurityScheme):
        scheme_key = cast(_HTTPSchemeType, scheme.scheme.lower())
        if required_class := _HTTP_SCHEME_MAPPING.get(scheme_key):
        if required_class := _HTTP_SCHEME_MAPPING.get(scheme.scheme.lower()):
            if not isinstance(auth, required_class):
                _raise_auth_mismatch(required_class, auth)
        return
@@ -193,7 +156,7 @@ def validate_auth_against_agent_card(

async def retry_on_401(
    request_func: Callable[[], Awaitable[Response]],
    auth_scheme: ClientAuthScheme | None,
    auth_scheme: AuthScheme | None,
    client: AsyncClient,
    headers: MutableMapping[str, str],
    max_retries: int = 3,
@@ -5,25 +5,14 @@ This module is separate from experimental.a2a to avoid circular imports.

from __future__ import annotations

from pathlib import Path
from typing import Any, ClassVar, Literal, cast
import warnings
from importlib.metadata import version
from typing import Any, ClassVar, Literal

from pydantic import (
    BaseModel,
    ConfigDict,
    Field,
    FilePath,
    PrivateAttr,
    SecretStr,
    model_validator,
)
from typing_extensions import Self, deprecated
from pydantic import BaseModel, ConfigDict, Field
from typing_extensions import deprecated

from crewai.a2a.auth.client_schemes import ClientAuthScheme
from crewai.a2a.auth.server_schemes import ServerAuthScheme
from crewai.a2a.extensions.base import ValidatedA2AExtension
from crewai.a2a.types import ProtocolVersion, TransportType, Url
from crewai.a2a.auth.schemas import AuthScheme
from crewai.a2a.types import TransportType, Url


try:
@@ -36,17 +25,16 @@ try:
        SecurityScheme,
    )

    from crewai.a2a.extensions.server import ServerExtension
    from crewai.a2a.updates import UpdateConfig
except ImportError:
    UpdateConfig: Any = Any  # type: ignore[no-redef]
    AgentCapabilities: Any = Any  # type: ignore[no-redef]
    AgentCardSignature: Any = Any  # type: ignore[no-redef]
    AgentInterface: Any = Any  # type: ignore[no-redef]
    AgentProvider: Any = Any  # type: ignore[no-redef]
    SecurityScheme: Any = Any  # type: ignore[no-redef]
    AgentSkill: Any = Any  # type: ignore[no-redef]
    ServerExtension: Any = Any  # type: ignore[no-redef]
    UpdateConfig = Any
    AgentCapabilities = Any
    AgentCardSignature = Any
    AgentInterface = Any
    AgentProvider = Any
    SecurityScheme = Any
    AgentSkill = Any
    UpdateConfig = Any  # type: ignore[misc,assignment]


def _get_default_update_config() -> UpdateConfig:
@@ -55,309 +43,6 @@ def _get_default_update_config() -> UpdateConfig:
    return StreamingConfig()


SigningAlgorithm = Literal[
    "RS256",
    "RS384",
    "RS512",
    "ES256",
    "ES384",
    "ES512",
    "PS256",
    "PS384",
    "PS512",
]


class AgentCardSigningConfig(BaseModel):
    """Configuration for AgentCard JWS signing.

    Provides the private key and algorithm settings for signing AgentCards.
    Either private_key_path or private_key_pem must be provided, but not both.

    Attributes:
        private_key_path: Path to a PEM-encoded private key file.
        private_key_pem: PEM-encoded private key as a secret string.
        key_id: Optional key identifier for the JWS header (kid claim).
        algorithm: Signing algorithm (RS256, ES256, PS256, etc.).
    """

    model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")

    private_key_path: FilePath | None = Field(
        default=None,
        description="Path to PEM-encoded private key file",
    )
    private_key_pem: SecretStr | None = Field(
        default=None,
        description="PEM-encoded private key",
    )
    key_id: str | None = Field(
        default=None,
        description="Key identifier for JWS header (kid claim)",
    )
    algorithm: SigningAlgorithm = Field(
        default="RS256",
        description="Signing algorithm (RS256, ES256, PS256, etc.)",
    )

    @model_validator(mode="after")
    def _validate_key_source(self) -> Self:
        """Ensure exactly one key source is provided."""
        has_path = self.private_key_path is not None
        has_pem = self.private_key_pem is not None

        if not has_path and not has_pem:
            raise ValueError(
                "Either private_key_path or private_key_pem must be provided"
            )
        if has_path and has_pem:
            raise ValueError(
                "Only one of private_key_path or private_key_pem should be provided"
            )
        return self

    def get_private_key(self) -> str:
        """Get the private key content.

        Returns:
            The PEM-encoded private key as a string.
        """
        if self.private_key_pem:
            return self.private_key_pem.get_secret_value()
        if self.private_key_path:
            return Path(self.private_key_path).read_text()
        raise ValueError("No private key configured")


class GRPCServerConfig(BaseModel):
    """gRPC server transport configuration.

    Presence of this config in ServerTransportConfig.grpc enables gRPC transport.

    Attributes:
        host: Hostname to advertise in agent cards (default: localhost).
            Use docker service name (e.g., 'web') for docker-compose setups.
        port: Port for the gRPC server.
        tls_cert_path: Path to TLS certificate file for gRPC.
        tls_key_path: Path to TLS private key file for gRPC.
        max_workers: Maximum number of workers for the gRPC thread pool.
        reflection_enabled: Whether to enable gRPC reflection for debugging.
    """

    model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")

    host: str = Field(
        default="localhost",
        description="Hostname to advertise in agent cards for gRPC connections",
    )
    port: int = Field(
        default=50051,
        description="Port for the gRPC server",
    )
    tls_cert_path: str | None = Field(
        default=None,
        description="Path to TLS certificate file for gRPC",
    )
    tls_key_path: str | None = Field(
        default=None,
        description="Path to TLS private key file for gRPC",
    )
    max_workers: int = Field(
        default=10,
        description="Maximum number of workers for the gRPC thread pool",
    )
    reflection_enabled: bool = Field(
        default=False,
        description="Whether to enable gRPC reflection for debugging",
    )


class GRPCClientConfig(BaseModel):
    """gRPC client transport configuration.

    Attributes:
        max_send_message_length: Maximum size for outgoing messages in bytes.
        max_receive_message_length: Maximum size for incoming messages in bytes.
        keepalive_time_ms: Time between keepalive pings in milliseconds.
        keepalive_timeout_ms: Timeout for keepalive ping response in milliseconds.
    """

    model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")

    max_send_message_length: int | None = Field(
        default=None,
        description="Maximum size for outgoing messages in bytes",
    )
    max_receive_message_length: int | None = Field(
        default=None,
        description="Maximum size for incoming messages in bytes",
    )
    keepalive_time_ms: int | None = Field(
        default=None,
        description="Time between keepalive pings in milliseconds",
    )
    keepalive_timeout_ms: int | None = Field(
        default=None,
        description="Timeout for keepalive ping response in milliseconds",
    )


class JSONRPCServerConfig(BaseModel):
    """JSON-RPC server transport configuration.

    Presence of this config in ServerTransportConfig.jsonrpc enables JSON-RPC transport.

    Attributes:
        rpc_path: URL path for the JSON-RPC endpoint.
        agent_card_path: URL path for the agent card endpoint.
    """

    model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")

    rpc_path: str = Field(
        default="/a2a",
        description="URL path for the JSON-RPC endpoint",
    )
    agent_card_path: str = Field(
        default="/.well-known/agent-card.json",
        description="URL path for the agent card endpoint",
    )


class JSONRPCClientConfig(BaseModel):
    """JSON-RPC client transport configuration.

    Attributes:
        max_request_size: Maximum request body size in bytes.
    """

    model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")

    max_request_size: int | None = Field(
        default=None,
        description="Maximum request body size in bytes",
    )


class HTTPJSONConfig(BaseModel):
    """HTTP+JSON transport configuration.

    Presence of this config in ServerTransportConfig.http_json enables HTTP+JSON transport.
    """

    model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")


class ServerPushNotificationConfig(BaseModel):
    """Configuration for outgoing webhook push notifications.

    Controls how the server signs and delivers push notifications to clients.

    Attributes:
        signature_secret: Shared secret for HMAC-SHA256 signing of outgoing webhooks.
    """

    model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")

    signature_secret: SecretStr | None = Field(
        default=None,
        description="Shared secret for HMAC-SHA256 signing of outgoing push notifications",
    )


class ServerTransportConfig(BaseModel):
    """Transport configuration for A2A server.

    Groups all transport-related settings including preferred transport
    and protocol-specific configurations.

    Attributes:
        preferred: Transport protocol for the preferred endpoint.
        jsonrpc: JSON-RPC server transport configuration.
        grpc: gRPC server transport configuration.
        http_json: HTTP+JSON transport configuration.
    """

    model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")

    preferred: TransportType = Field(
        default="JSONRPC",
        description="Transport protocol for the preferred endpoint",
    )
    jsonrpc: JSONRPCServerConfig = Field(
        default_factory=JSONRPCServerConfig,
        description="JSON-RPC server transport configuration",
    )
    grpc: GRPCServerConfig | None = Field(
        default=None,
        description="gRPC server transport configuration",
    )
    http_json: HTTPJSONConfig | None = Field(
        default=None,
        description="HTTP+JSON transport configuration",
    )


def _migrate_client_transport_fields(
    transport: ClientTransportConfig,
    transport_protocol: TransportType | None,
    supported_transports: list[TransportType] | None,
) -> None:
    """Migrate deprecated transport fields to new config."""
    if transport_protocol is not None:
        warnings.warn(
            "transport_protocol is deprecated, use transport=ClientTransportConfig(preferred=...) instead",
            FutureWarning,
            stacklevel=5,
        )
        object.__setattr__(transport, "preferred", transport_protocol)
    if supported_transports is not None:
        warnings.warn(
            "supported_transports is deprecated, use transport=ClientTransportConfig(supported=...) instead",
            FutureWarning,
            stacklevel=5,
        )
        object.__setattr__(transport, "supported", supported_transports)


class ClientTransportConfig(BaseModel):
    """Transport configuration for A2A client.

    Groups all client transport-related settings including preferred transport,
    supported transports for negotiation, and protocol-specific configurations.

    Transport negotiation logic (see the sketch after this class):
    1. If `preferred` is set and server supports it → use client's preferred
    2. Otherwise, if server's preferred is in client's `supported` → use server's preferred
    3. Otherwise, find first match from client's `supported` in server's interfaces

    Attributes:
        preferred: Client's preferred transport. If set, client preference takes priority.
        supported: Transports the client can use, in order of preference.
        jsonrpc: JSON-RPC client transport configuration.
        grpc: gRPC client transport configuration.
    """

    model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")

    preferred: TransportType | None = Field(
        default=None,
        description="Client's preferred transport. If set, takes priority over server preference.",
    )
    supported: list[TransportType] = Field(
        default_factory=lambda: cast(list[TransportType], ["JSONRPC"]),
        description="Transports the client can use, in order of preference",
    )
    jsonrpc: JSONRPCClientConfig = Field(
        default_factory=JSONRPCClientConfig,
        description="JSON-RPC client transport configuration",
    )
    grpc: GRPCClientConfig = Field(
        default_factory=GRPCClientConfig,
        description="gRPC client transport configuration",
    )
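
The three-step order in ClientTransportConfig's docstring reduces to a small pure function. A sketch of the logic only, where `negotiate_transport` is a hypothetical helper rather than an API from this diff:

```python
def negotiate_transport(
    client_preferred: str | None,
    client_supported: list[str],
    server_preferred: str,
    server_interfaces: list[str],
) -> str | None:
    # 1. Client preference wins when the server can actually serve it.
    if client_preferred and client_preferred in server_interfaces:
        return client_preferred
    # 2. Otherwise defer to the server's preferred transport if the client supports it.
    if server_preferred in client_supported:
        return server_preferred
    # 3. Otherwise take the first client-supported transport the server offers.
    return next((t for t in client_supported if t in server_interfaces), None)

# e.g. negotiate_transport(None, ["JSONRPC"], "GRPC", ["GRPC", "JSONRPC"]) -> "JSONRPC"
```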
|
||||
|
||||
|
||||
@deprecated(
|
||||
"""
|
||||
`crewai.a2a.config.A2AConfig` is deprecated and will be removed in v2.0.0,
|
||||
@@ -380,14 +65,13 @@ class A2AConfig(BaseModel):
|
||||
fail_fast: If True, raise error when agent unreachable; if False, skip and continue.
|
||||
trust_remote_completion_status: If True, return A2A agent's result directly when completed.
|
||||
updates: Update mechanism config.
|
||||
client_extensions: Client-side processing hooks for tool injection and prompt augmentation.
|
||||
transport: Transport configuration (preferred, supported transports, gRPC settings).
|
||||
transport_protocol: A2A transport protocol (grpc, jsonrpc, http+json).
|
||||
"""
|
||||
|
||||
model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")
|
||||
|
||||
endpoint: Url = Field(description="A2A agent endpoint URL")
|
||||
auth: ClientAuthScheme | None = Field(
|
||||
auth: AuthScheme | None = Field(
|
||||
default=None,
|
||||
description="Authentication scheme",
|
||||
)
|
||||
@@ -411,48 +95,10 @@ class A2AConfig(BaseModel):
|
||||
default_factory=_get_default_update_config,
|
||||
description="Update mechanism config",
|
||||
)
|
||||
client_extensions: list[ValidatedA2AExtension] = Field(
|
||||
default_factory=list,
|
||||
description="Client-side processing hooks for tool injection and prompt augmentation",
|
||||
transport_protocol: Literal["JSONRPC", "GRPC", "HTTP+JSON"] = Field(
|
||||
default="JSONRPC",
|
||||
description="Specified mode of A2A transport protocol",
|
||||
)
|
||||
transport: ClientTransportConfig = Field(
|
||||
default_factory=ClientTransportConfig,
|
||||
description="Transport configuration (preferred, supported transports, gRPC settings)",
|
||||
)
|
||||
transport_protocol: TransportType | None = Field(
|
||||
default=None,
|
||||
description="Deprecated: Use transport.preferred instead",
|
||||
exclude=True,
|
||||
)
|
||||
supported_transports: list[TransportType] | None = Field(
|
||||
default=None,
|
||||
description="Deprecated: Use transport.supported instead",
|
||||
exclude=True,
|
||||
)
|
||||
use_client_preference: bool | None = Field(
|
||||
default=None,
|
||||
description="Deprecated: Set transport.preferred to enable client preference",
|
||||
exclude=True,
|
||||
)
|
||||
_parallel_delegation: bool = PrivateAttr(default=False)
|
||||
|
||||
@model_validator(mode="after")
|
||||
def _migrate_deprecated_transport_fields(self) -> Self:
|
||||
"""Migrate deprecated transport fields to new config."""
|
||||
_migrate_client_transport_fields(
|
||||
self.transport, self.transport_protocol, self.supported_transports
|
||||
)
|
||||
if self.use_client_preference is not None:
|
||||
warnings.warn(
|
||||
"use_client_preference is deprecated, set transport.preferred to enable client preference",
|
||||
FutureWarning,
|
||||
stacklevel=4,
|
||||
)
|
||||
if self.use_client_preference and self.transport.supported:
|
||||
object.__setattr__(
|
||||
self.transport, "preferred", self.transport.supported[0]
|
||||
)
|
||||
return self
|
||||
|
||||
|
||||
class A2AClientConfig(BaseModel):
|
||||
@@ -468,15 +114,15 @@ class A2AClientConfig(BaseModel):
|
||||
trust_remote_completion_status: If True, return A2A agent's result directly when completed.
|
||||
updates: Update mechanism config.
|
||||
accepted_output_modes: Media types the client can accept in responses.
|
||||
extensions: Extension URIs the client supports (A2A protocol extensions).
|
||||
client_extensions: Client-side processing hooks for tool injection and prompt augmentation.
|
||||
transport: Transport configuration (preferred, supported transports, gRPC settings).
|
||||
supported_transports: Ordered list of transport protocols the client supports.
|
||||
use_client_preference: Whether to prioritize client transport preferences over server.
|
||||
extensions: Extension URIs the client supports.
|
||||
"""
|
||||
|
||||
model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")
|
||||
|
||||
endpoint: Url = Field(description="A2A agent endpoint URL")
|
||||
auth: ClientAuthScheme | None = Field(
|
||||
auth: AuthScheme | None = Field(
|
||||
default=None,
|
||||
description="Authentication scheme",
|
||||
)
|
||||
@@ -504,37 +150,22 @@ class A2AClientConfig(BaseModel):
|
||||
default_factory=lambda: ["application/json"],
|
||||
description="Media types the client can accept in responses",
|
||||
)
|
||||
supported_transports: list[str] = Field(
|
||||
default_factory=lambda: ["JSONRPC"],
|
||||
description="Ordered list of transport protocols the client supports",
|
||||
)
|
||||
use_client_preference: bool = Field(
|
||||
default=False,
|
||||
description="Whether to prioritize client transport preferences over server",
|
||||
)
|
||||
extensions: list[str] = Field(
|
||||
default_factory=list,
|
||||
description="Extension URIs the client supports",
|
||||
)
|
||||
client_extensions: list[ValidatedA2AExtension] = Field(
|
||||
default_factory=list,
|
||||
description="Client-side processing hooks for tool injection and prompt augmentation",
|
||||
transport_protocol: Literal["JSONRPC", "GRPC", "HTTP+JSON"] = Field(
|
||||
default="JSONRPC",
|
||||
description="Specified mode of A2A transport protocol",
|
||||
)
|
||||
transport: ClientTransportConfig = Field(
|
||||
default_factory=ClientTransportConfig,
|
||||
description="Transport configuration (preferred, supported transports, gRPC settings)",
|
||||
)
|
||||
transport_protocol: TransportType | None = Field(
|
||||
default=None,
|
||||
description="Deprecated: Use transport.preferred instead",
|
||||
exclude=True,
|
||||
)
|
||||
supported_transports: list[TransportType] | None = Field(
|
||||
default=None,
|
||||
description="Deprecated: Use transport.supported instead",
|
||||
exclude=True,
|
||||
)
|
||||
_parallel_delegation: bool = PrivateAttr(default=False)
|
||||
|
||||
@model_validator(mode="after")
|
||||
def _migrate_deprecated_transport_fields(self) -> Self:
|
||||
"""Migrate deprecated transport fields to new config."""
|
||||
_migrate_client_transport_fields(
|
||||
self.transport, self.transport_protocol, self.supported_transports
|
||||
)
|
||||
return self
|
||||
|
||||
|
class A2AServerConfig(BaseModel):

@@ -551,6 +182,7 @@ class A2AServerConfig(BaseModel):
        default_input_modes: Default supported input MIME types.
        default_output_modes: Default supported output MIME types.
        capabilities: Declaration of optional capabilities.
        preferred_transport: Transport protocol for the preferred endpoint.
        protocol_version: A2A protocol version this agent supports.
        provider: Information about the agent's service provider.
        documentation_url: URL to the agent's documentation.
@@ -560,12 +192,7 @@ class A2AServerConfig(BaseModel):
        security_schemes: Security schemes available to authorize requests.
        supports_authenticated_extended_card: Whether agent provides extended card to authenticated users.
        url: Preferred endpoint URL for the agent.
        signing_config: Configuration for signing the AgentCard with JWS.
        signatures: Deprecated. Pre-computed JWS signatures. Use signing_config instead.
        server_extensions: Server-side A2A protocol extensions with on_request/on_response hooks.
        push_notifications: Configuration for outgoing push notifications.
        transport: Transport configuration (preferred transport, gRPC, REST settings).
        auth: Authentication scheme for A2A endpoints.
        signatures: JSON Web Signatures for the AgentCard.
    """

    model_config: ClassVar[ConfigDict] = ConfigDict(extra="forbid")

@@ -601,8 +228,12 @@ class A2AServerConfig(BaseModel):
        ),
        description="Declaration of optional capabilities supported by the agent",
    )
    protocol_version: ProtocolVersion = Field(
        default="0.3.0",
    preferred_transport: TransportType = Field(
        default="JSONRPC",
        description="Transport protocol for the preferred endpoint",
    )
    protocol_version: str = Field(
        default_factory=lambda: version("a2a-sdk"),
        description="A2A protocol version this agent supports",
    )
    provider: AgentProvider | None = Field(
@@ -619,7 +250,7 @@ class A2AServerConfig(BaseModel):
    )
    additional_interfaces: list[AgentInterface] = Field(
        default_factory=list,
        description="Additional supported interfaces.",
        description="Additional supported interfaces (transport and URL combinations)",
    )
    security: list[dict[str, list[str]]] = Field(
        default_factory=list,
@@ -637,54 +268,7 @@ class A2AServerConfig(BaseModel):
        default=None,
        description="Preferred endpoint URL for the agent. Set at runtime if not provided.",
    )
    signing_config: AgentCardSigningConfig | None = Field(
        default=None,
        description="Configuration for signing the AgentCard with JWS",
    )
    signatures: list[AgentCardSignature] | None = Field(
        default=None,
        description="Deprecated: Use signing_config instead. Pre-computed JWS signatures for the AgentCard.",
        exclude=True,
        deprecated=True,
    )
    server_extensions: list[ServerExtension] = Field(
    signatures: list[AgentCardSignature] = Field(
        default_factory=list,
        description="Server-side A2A protocol extensions that modify agent behavior",
        description="JSON Web Signatures for the AgentCard",
    )
    push_notifications: ServerPushNotificationConfig | None = Field(
        default=None,
        description="Configuration for outgoing push notifications",
    )
    transport: ServerTransportConfig = Field(
        default_factory=ServerTransportConfig,
        description="Transport configuration (preferred transport, gRPC, REST settings)",
    )
    preferred_transport: TransportType | None = Field(
        default=None,
        description="Deprecated: Use transport.preferred instead",
        exclude=True,
        deprecated=True,
    )
    auth: ServerAuthScheme | None = Field(
        default=None,
        description="Authentication scheme for A2A endpoints. Defaults to SimpleTokenAuth using AUTH_TOKEN env var.",
    )

    @model_validator(mode="after")
    def _migrate_deprecated_fields(self) -> Self:
        """Migrate deprecated fields to new config."""
        if self.preferred_transport is not None:
            warnings.warn(
                "preferred_transport is deprecated, use transport=ServerTransportConfig(preferred=...) instead",
                FutureWarning,
                stacklevel=4,
            )
            object.__setattr__(self.transport, "preferred", self.preferred_transport)
        if self.signatures is not None:
            warnings.warn(
                "signatures is deprecated, use signing_config=AgentCardSigningConfig(...) instead. "
                "The signatures field will be removed in v2.0.0.",
                FutureWarning,
                stacklevel=4,
            )
        return self
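
A minimal sketch of the deprecation path defined by the validator above. The import path is an assumption; the diff shows only the class bodies, not the module layout:

```python
import warnings

from crewai.a2a.config import A2AServerConfig  # import path assumed

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # Deprecated field: the after-validator copies it onto transport.preferred
    # and emits a FutureWarning.
    config = A2AServerConfig(preferred_transport="GRPC")

assert config.transport.preferred == "GRPC"
assert any(issubclass(w.category, FutureWarning) for w in caught)
```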

@@ -1,491 +1,7 @@
"""A2A error codes and error response utilities.

This module provides a centralized mapping of all A2A protocol error codes
as defined in the A2A specification, plus custom CrewAI extensions.

Error codes follow JSON-RPC 2.0 conventions:
- -32700 to -32600: Standard JSON-RPC errors
- -32099 to -32000: Server errors (A2A-specific)
- -32768 to -32100: Reserved for implementation-defined errors
"""

from __future__ import annotations

from dataclasses import dataclass, field
from enum import IntEnum
from typing import Any
"""A2A protocol error types."""

from a2a.client.errors import A2AClientTimeoutError


class A2APollingTimeoutError(A2AClientTimeoutError):
    """Raised when polling exceeds the configured timeout."""


class A2AErrorCode(IntEnum):
    """A2A protocol error codes.

    Codes follow JSON-RPC 2.0 specification with A2A-specific extensions.
    """

    # JSON-RPC 2.0 Standard Errors (-32700 to -32600)
    JSON_PARSE_ERROR = -32700
    """Invalid JSON was received by the server."""

    INVALID_REQUEST = -32600
    """The JSON sent is not a valid Request object."""

    METHOD_NOT_FOUND = -32601
    """The method does not exist / is not available."""

    INVALID_PARAMS = -32602
    """Invalid method parameter(s)."""

    INTERNAL_ERROR = -32603
    """Internal JSON-RPC error."""

    # A2A-Specific Errors (-32099 to -32000)
    TASK_NOT_FOUND = -32001
    """The specified task was not found."""

    TASK_NOT_CANCELABLE = -32002
    """The task cannot be canceled (already completed/failed)."""

    PUSH_NOTIFICATION_NOT_SUPPORTED = -32003
    """Push notifications are not supported by this agent."""

    UNSUPPORTED_OPERATION = -32004
    """The requested operation is not supported."""

    CONTENT_TYPE_NOT_SUPPORTED = -32005
    """Incompatible content types between client and server."""

    INVALID_AGENT_RESPONSE = -32006
    """The agent produced an invalid response."""

    # CrewAI Custom Extensions (unassigned codes in the -32099 to -32000 server range)
    UNSUPPORTED_VERSION = -32009
    """The requested A2A protocol version is not supported."""

    UNSUPPORTED_EXTENSION = -32010
    """Client does not support required protocol extensions."""

    AUTHENTICATION_REQUIRED = -32011
    """Authentication is required for this operation."""

    AUTHORIZATION_FAILED = -32012
    """Authorization check failed (insufficient permissions)."""

    RATE_LIMIT_EXCEEDED = -32013
    """Rate limit exceeded for this client/operation."""

    TASK_TIMEOUT = -32014
    """Task execution timed out."""

    TRANSPORT_NEGOTIATION_FAILED = -32015
    """Failed to negotiate a compatible transport protocol."""

    CONTEXT_NOT_FOUND = -32016
    """The specified context was not found."""

    SKILL_NOT_FOUND = -32017
    """The specified skill was not found."""

    ARTIFACT_NOT_FOUND = -32018
    """The specified artifact was not found."""


# Error code to default message mapping
ERROR_MESSAGES: dict[int, str] = {
    A2AErrorCode.JSON_PARSE_ERROR: "Parse error",
    A2AErrorCode.INVALID_REQUEST: "Invalid Request",
    A2AErrorCode.METHOD_NOT_FOUND: "Method not found",
    A2AErrorCode.INVALID_PARAMS: "Invalid params",
    A2AErrorCode.INTERNAL_ERROR: "Internal error",
    A2AErrorCode.TASK_NOT_FOUND: "Task not found",
    A2AErrorCode.TASK_NOT_CANCELABLE: "Task not cancelable",
    A2AErrorCode.PUSH_NOTIFICATION_NOT_SUPPORTED: "Push Notification is not supported",
    A2AErrorCode.UNSUPPORTED_OPERATION: "This operation is not supported",
    A2AErrorCode.CONTENT_TYPE_NOT_SUPPORTED: "Incompatible content types",
    A2AErrorCode.INVALID_AGENT_RESPONSE: "Invalid agent response",
    A2AErrorCode.UNSUPPORTED_VERSION: "Unsupported A2A version",
    A2AErrorCode.UNSUPPORTED_EXTENSION: "Client does not support required extensions",
    A2AErrorCode.AUTHENTICATION_REQUIRED: "Authentication required",
    A2AErrorCode.AUTHORIZATION_FAILED: "Authorization failed",
    A2AErrorCode.RATE_LIMIT_EXCEEDED: "Rate limit exceeded",
    A2AErrorCode.TASK_TIMEOUT: "Task execution timed out",
    A2AErrorCode.TRANSPORT_NEGOTIATION_FAILED: "Transport negotiation failed",
    A2AErrorCode.CONTEXT_NOT_FOUND: "Context not found",
    A2AErrorCode.SKILL_NOT_FOUND: "Skill not found",
    A2AErrorCode.ARTIFACT_NOT_FOUND: "Artifact not found",
}


@dataclass
class A2AError(Exception):
    """Base exception for A2A protocol errors.

    Attributes:
        code: The A2A/JSON-RPC error code.
        message: Human-readable error message.
        data: Optional additional error data.
    """

    code: int
    message: str | None = None
    data: Any = None

    def __post_init__(self) -> None:
        if self.message is None:
            self.message = ERROR_MESSAGES.get(self.code, "Unknown error")
        super().__init__(self.message)

    def to_dict(self) -> dict[str, Any]:
        """Convert to JSON-RPC error object format."""
        error: dict[str, Any] = {
            "code": self.code,
            "message": self.message,
        }
        if self.data is not None:
            error["data"] = self.data
        return error

    def to_response(self, request_id: str | int | None = None) -> dict[str, Any]:
        """Convert to full JSON-RPC error response."""
        return {
            "jsonrpc": "2.0",
            "error": self.to_dict(),
            "id": request_id,
        }
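
This hunk removes the error utilities, so the following is only a sketch of the pre-diff behavior, taken directly from the definitions above:

```python
# Sketch of the removed A2AError behavior, per the definitions above.
err = A2AError(code=A2AErrorCode.TASK_NOT_FOUND)
assert err.message == "Task not found"  # default filled in from ERROR_MESSAGES

response = err.to_response(request_id="req-1")
# -> JSON-RPC error response: code -32001, message "Task not found", id "req-1"
```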


@dataclass
class JSONParseError(A2AError):
    """Invalid JSON was received."""

    code: int = field(default=A2AErrorCode.JSON_PARSE_ERROR, init=False)


@dataclass
class InvalidRequestError(A2AError):
    """The JSON sent is not a valid Request object."""

    code: int = field(default=A2AErrorCode.INVALID_REQUEST, init=False)


@dataclass
class MethodNotFoundError(A2AError):
    """The method does not exist / is not available."""

    code: int = field(default=A2AErrorCode.METHOD_NOT_FOUND, init=False)
    method: str | None = None

    def __post_init__(self) -> None:
        if self.message is None and self.method:
            self.message = f"Method not found: {self.method}"
        super().__post_init__()


@dataclass
class InvalidParamsError(A2AError):
    """Invalid method parameter(s)."""

    code: int = field(default=A2AErrorCode.INVALID_PARAMS, init=False)
    param: str | None = None
    reason: str | None = None

    def __post_init__(self) -> None:
        if self.message is None:
            if self.param and self.reason:
                self.message = f"Invalid parameter '{self.param}': {self.reason}"
            elif self.param:
                self.message = f"Invalid parameter: {self.param}"
        super().__post_init__()


@dataclass
class InternalError(A2AError):
    """Internal JSON-RPC error."""

    code: int = field(default=A2AErrorCode.INTERNAL_ERROR, init=False)


@dataclass
class TaskNotFoundError(A2AError):
    """The specified task was not found."""

    code: int = field(default=A2AErrorCode.TASK_NOT_FOUND, init=False)
    task_id: str | None = None

    def __post_init__(self) -> None:
        if self.message is None and self.task_id:
            self.message = f"Task not found: {self.task_id}"
        super().__post_init__()


@dataclass
class TaskNotCancelableError(A2AError):
    """The task cannot be canceled."""

    code: int = field(default=A2AErrorCode.TASK_NOT_CANCELABLE, init=False)
    task_id: str | None = None
    reason: str | None = None

    def __post_init__(self) -> None:
        if self.message is None:
            if self.task_id and self.reason:
                self.message = f"Task {self.task_id} cannot be canceled: {self.reason}"
            elif self.task_id:
                self.message = f"Task {self.task_id} cannot be canceled"
        super().__post_init__()


@dataclass
class PushNotificationNotSupportedError(A2AError):
    """Push notifications are not supported."""

    code: int = field(default=A2AErrorCode.PUSH_NOTIFICATION_NOT_SUPPORTED, init=False)


@dataclass
class UnsupportedOperationError(A2AError):
    """The requested operation is not supported."""

    code: int = field(default=A2AErrorCode.UNSUPPORTED_OPERATION, init=False)
    operation: str | None = None

    def __post_init__(self) -> None:
        if self.message is None and self.operation:
            self.message = f"Operation not supported: {self.operation}"
        super().__post_init__()


@dataclass
class ContentTypeNotSupportedError(A2AError):
    """Incompatible content types."""

    code: int = field(default=A2AErrorCode.CONTENT_TYPE_NOT_SUPPORTED, init=False)
    requested_types: list[str] | None = None
    supported_types: list[str] | None = None

    def __post_init__(self) -> None:
        if self.message is None and self.requested_types and self.supported_types:
            self.message = (
                f"Content type not supported. Requested: {self.requested_types}, "
                f"Supported: {self.supported_types}"
            )
        super().__post_init__()


@dataclass
class InvalidAgentResponseError(A2AError):
    """The agent produced an invalid response."""

    code: int = field(default=A2AErrorCode.INVALID_AGENT_RESPONSE, init=False)


@dataclass
class UnsupportedVersionError(A2AError):
    """The requested A2A version is not supported."""

    code: int = field(default=A2AErrorCode.UNSUPPORTED_VERSION, init=False)
    requested_version: str | None = None
    supported_versions: list[str] | None = None

    def __post_init__(self) -> None:
        if self.message is None and self.requested_version:
            msg = f"Unsupported A2A version: {self.requested_version}"
            if self.supported_versions:
                msg += f". Supported versions: {', '.join(self.supported_versions)}"
            self.message = msg
        super().__post_init__()


@dataclass
class UnsupportedExtensionError(A2AError):
    """Client does not support required extensions."""

    code: int = field(default=A2AErrorCode.UNSUPPORTED_EXTENSION, init=False)
    required_extensions: list[str] | None = None

    def __post_init__(self) -> None:
        if self.message is None and self.required_extensions:
            self.message = f"Client does not support required extensions: {', '.join(self.required_extensions)}"
        super().__post_init__()


@dataclass
class AuthenticationRequiredError(A2AError):
    """Authentication is required."""

    code: int = field(default=A2AErrorCode.AUTHENTICATION_REQUIRED, init=False)


@dataclass
class AuthorizationFailedError(A2AError):
    """Authorization check failed."""

    code: int = field(default=A2AErrorCode.AUTHORIZATION_FAILED, init=False)
    required_scope: str | None = None

    def __post_init__(self) -> None:
        if self.message is None and self.required_scope:
            self.message = (
                f"Authorization failed. Required scope: {self.required_scope}"
            )
        super().__post_init__()


@dataclass
class RateLimitExceededError(A2AError):
    """Rate limit exceeded."""

    code: int = field(default=A2AErrorCode.RATE_LIMIT_EXCEEDED, init=False)
    retry_after: int | None = None

    def __post_init__(self) -> None:
        if self.message is None and self.retry_after:
            self.message = (
                f"Rate limit exceeded. Retry after {self.retry_after} seconds"
            )
        if self.retry_after:
            self.data = {"retry_after": self.retry_after}
        super().__post_init__()


@dataclass
class TaskTimeoutError(A2AError):
    """Task execution timed out."""

    code: int = field(default=A2AErrorCode.TASK_TIMEOUT, init=False)
    task_id: str | None = None
    timeout_seconds: float | None = None

    def __post_init__(self) -> None:
        if self.message is None:
            if self.task_id and self.timeout_seconds:
                self.message = (
                    f"Task {self.task_id} timed out after {self.timeout_seconds}s"
                )
            elif self.task_id:
                self.message = f"Task {self.task_id} timed out"
        super().__post_init__()


@dataclass
class TransportNegotiationFailedError(A2AError):
    """Failed to negotiate a compatible transport protocol."""

    code: int = field(default=A2AErrorCode.TRANSPORT_NEGOTIATION_FAILED, init=False)
    client_transports: list[str] | None = None
    server_transports: list[str] | None = None

    def __post_init__(self) -> None:
        if self.message is None and self.client_transports and self.server_transports:
            self.message = (
                f"Transport negotiation failed. Client: {self.client_transports}, "
                f"Server: {self.server_transports}"
            )
        super().__post_init__()


@dataclass
class ContextNotFoundError(A2AError):
    """The specified context was not found."""

    code: int = field(default=A2AErrorCode.CONTEXT_NOT_FOUND, init=False)
    context_id: str | None = None

    def __post_init__(self) -> None:
        if self.message is None and self.context_id:
            self.message = f"Context not found: {self.context_id}"
        super().__post_init__()


@dataclass
class SkillNotFoundError(A2AError):
    """The specified skill was not found."""

    code: int = field(default=A2AErrorCode.SKILL_NOT_FOUND, init=False)
    skill_id: str | None = None

    def __post_init__(self) -> None:
        if self.message is None and self.skill_id:
            self.message = f"Skill not found: {self.skill_id}"
        super().__post_init__()


@dataclass
class ArtifactNotFoundError(A2AError):
    """The specified artifact was not found."""

    code: int = field(default=A2AErrorCode.ARTIFACT_NOT_FOUND, init=False)
    artifact_id: str | None = None

    def __post_init__(self) -> None:
        if self.message is None and self.artifact_id:
            self.message = f"Artifact not found: {self.artifact_id}"
        super().__post_init__()


def create_error_response(
    code: int | A2AErrorCode,
    message: str | None = None,
    data: Any = None,
    request_id: str | int | None = None,
) -> dict[str, Any]:
    """Create a JSON-RPC error response.

    Args:
        code: Error code (A2AErrorCode or int).
        message: Optional error message (uses default if not provided).
        data: Optional additional error data.
        request_id: Request ID for correlation.

    Returns:
        Dict in JSON-RPC error response format.
    """
    error = A2AError(code=int(code), message=message, data=data)
    return error.to_response(request_id)


def is_retryable_error(code: int) -> bool:
    """Check if an error is potentially retryable.

    Args:
        code: Error code to check.

    Returns:
        True if the error might be resolved by retrying.
    """
    retryable_codes = {
        A2AErrorCode.INTERNAL_ERROR,
        A2AErrorCode.RATE_LIMIT_EXCEEDED,
        A2AErrorCode.TASK_TIMEOUT,
    }
    return code in retryable_codes


def is_client_error(code: int) -> bool:
    """Check if an error is a client-side error.

    Args:
        code: Error code to check.

    Returns:
        True if the error is due to client request issues.
    """
    client_error_codes = {
        A2AErrorCode.JSON_PARSE_ERROR,
        A2AErrorCode.INVALID_REQUEST,
        A2AErrorCode.METHOD_NOT_FOUND,
        A2AErrorCode.INVALID_PARAMS,
        A2AErrorCode.TASK_NOT_FOUND,
        A2AErrorCode.CONTENT_TYPE_NOT_SUPPORTED,
        A2AErrorCode.UNSUPPORTED_VERSION,
        A2AErrorCode.UNSUPPORTED_EXTENSION,
        A2AErrorCode.CONTEXT_NOT_FOUND,
        A2AErrorCode.SKILL_NOT_FOUND,
        A2AErrorCode.ARTIFACT_NOT_FOUND,
    }
    return code in client_error_codes
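
A sketch of how these (removed) classification helpers separate retry decisions from request bugs, following the sets defined above:

```python
# Sketch: retry/reporting decisions on the removed classification helpers.
code = A2AErrorCode.RATE_LIMIT_EXCEEDED
assert is_retryable_error(code)          # server-side, worth retrying
assert not is_client_error(code)

assert is_client_error(A2AErrorCode.TASK_NOT_FOUND)  # fix the request instead
```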

@@ -1,37 +1,4 @@
"""A2A Protocol Extensions for CrewAI.

This module contains extensions to the A2A (Agent-to-Agent) protocol.

**Client-side extensions** (A2AExtension) allow customizing how the A2A wrapper
processes requests and responses during delegation to remote agents. These provide
hooks for tool injection, prompt augmentation, and response processing.

**Server-side extensions** (ServerExtension) allow agents to offer additional
functionality beyond the core A2A specification. Clients activate extensions
via the X-A2A-Extensions header.

See: https://a2a-protocol.org/latest/topics/extensions/
"""

from crewai.a2a.extensions.base import (
    A2AExtension,
    ConversationState,
    ExtensionRegistry,
    ValidatedA2AExtension,
)
from crewai.a2a.extensions.server import (
    ExtensionContext,
    ServerExtension,
    ServerExtensionRegistry,
)


__all__ = [
    "A2AExtension",
    "ConversationState",
    "ExtensionContext",
    "ExtensionRegistry",
    "ServerExtension",
    "ServerExtensionRegistry",
    "ValidatedA2AExtension",
]

@@ -1,20 +1,14 @@
"""Base extension interface for CrewAI A2A wrapper processing hooks.
"""Base extension interface for A2A wrapper integrations.

This module defines the protocol for extending CrewAI's A2A wrapper functionality
with custom logic for tool injection, prompt augmentation, and response processing.

Note: These are CrewAI-specific processing hooks, NOT A2A protocol extensions.
A2A protocol extensions are capability declarations using AgentExtension objects
in AgentCard.capabilities.extensions, activated via the A2A-Extensions HTTP header.
See: https://a2a-protocol.org/latest/topics/extensions/
This module defines the protocol for extending A2A wrapper functionality
with custom logic for conversation processing, prompt augmentation, and
agent response handling.
"""

from __future__ import annotations

from collections.abc import Sequence
from typing import TYPE_CHECKING, Annotated, Any, Protocol, runtime_checkable

from pydantic import BeforeValidator
from typing import TYPE_CHECKING, Any, Protocol


if TYPE_CHECKING:
@@ -23,20 +17,6 @@ if TYPE_CHECKING:
    from crewai.agent.core import Agent


def _validate_a2a_extension(v: Any) -> Any:
    """Validate that value implements A2AExtension protocol."""
    if not isinstance(v, A2AExtension):
        raise ValueError(
            f"Value must implement A2AExtension protocol. "
            f"Got {type(v).__name__} which is missing required methods."
        )
    return v


ValidatedA2AExtension = Annotated[Any, BeforeValidator(_validate_a2a_extension)]


@runtime_checkable
class ConversationState(Protocol):
    """Protocol for extension-specific conversation state.

@@ -53,36 +33,11 @@ class ConversationState(Protocol):
    ...


@runtime_checkable
class A2AExtension(Protocol):
    """Protocol for A2A wrapper extensions.

    Extensions can implement this protocol to inject custom logic into
    the A2A conversation flow at various integration points.

    Example:
        class MyExtension:
            def inject_tools(self, agent: Agent) -> None:
                # Add custom tools to the agent
                pass

            def extract_state_from_history(
                self, conversation_history: Sequence[Message]
            ) -> ConversationState | None:
                # Extract state from conversation
                return None

            def augment_prompt(
                self, base_prompt: str, conversation_state: ConversationState | None
            ) -> str:
                # Add custom instructions
                return base_prompt

            def process_response(
                self, agent_response: Any, conversation_state: ConversationState | None
            ) -> Any:
                # Modify response if needed
                return agent_response
    """

    def inject_tools(self, agent: Agent) -> None:

@@ -1,170 +1,34 @@
"""A2A Protocol extension utilities.
"""Extension registry factory for A2A configurations.

This module provides utilities for working with A2A protocol extensions as
defined in the A2A specification. Extensions are capability declarations in
AgentCard.capabilities.extensions using AgentExtension objects, activated
via the X-A2A-Extensions HTTP header.

See: https://a2a-protocol.org/latest/topics/extensions/
This module provides utilities for creating extension registries from A2A configurations.
"""

from __future__ import annotations

from typing import Any
from typing import TYPE_CHECKING

from a2a.client.middleware import ClientCallContext, ClientCallInterceptor
from a2a.extensions.common import (
    HTTP_EXTENSION_HEADER,
)
from a2a.types import AgentCard, AgentExtension

from crewai.a2a.config import A2AClientConfig, A2AConfig
from crewai.a2a.extensions.base import ExtensionRegistry


def get_extensions_from_config(
    a2a_config: list[A2AConfig | A2AClientConfig] | A2AConfig | A2AClientConfig,
) -> list[str]:
    """Extract extension URIs from A2A configuration.

    Args:
        a2a_config: A2A configuration (single or list).

    Returns:
        Deduplicated list of extension URIs from all configs.
    """
    configs = a2a_config if isinstance(a2a_config, list) else [a2a_config]
    seen: set[str] = set()
    result: list[str] = []

    for config in configs:
        if not isinstance(config, A2AClientConfig):
            continue
        for uri in config.extensions:
            if uri not in seen:
                seen.add(uri)
                result.append(uri)

    return result


class ExtensionsMiddleware(ClientCallInterceptor):
    """Middleware to add X-A2A-Extensions header to requests.

    This middleware adds the extensions header to all outgoing requests,
    declaring which A2A protocol extensions the client supports.
    """

    def __init__(self, extensions: list[str]) -> None:
        """Initialize with extension URIs.

        Args:
            extensions: List of extension URIs the client supports.
        """
        self._extensions = extensions

    async def intercept(
        self,
        method_name: str,
        request_payload: dict[str, Any],
        http_kwargs: dict[str, Any],
        agent_card: AgentCard | None,
        context: ClientCallContext | None,
    ) -> tuple[dict[str, Any], dict[str, Any]]:
        """Add extensions header to the request.

        Args:
            method_name: The A2A method being called.
            request_payload: The JSON-RPC request payload.
            http_kwargs: HTTP request kwargs (headers, etc).
            agent_card: The target agent's card.
            context: Optional call context.

        Returns:
            Tuple of (request_payload, modified_http_kwargs).
        """
        if self._extensions:
            headers = http_kwargs.setdefault("headers", {})
            headers[HTTP_EXTENSION_HEADER] = ",".join(self._extensions)
        return request_payload, http_kwargs
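
A minimal sketch of the (removed) middleware in isolation; the method name `"message/send"` is an assumption for illustration, and the header key comes from `HTTP_EXTENSION_HEADER`:

```python
import asyncio

mw = ExtensionsMiddleware(["urn:crewai:ext:sampling/v1"])

payload, http_kwargs = asyncio.run(
    mw.intercept(
        method_name="message/send",   # illustrative method name
        request_payload={"jsonrpc": "2.0"},
        http_kwargs={},
        agent_card=None,
        context=None,
    )
)
# The header value is the comma-joined URI list under HTTP_EXTENSION_HEADER.
assert http_kwargs["headers"][HTTP_EXTENSION_HEADER] == "urn:crewai:ext:sampling/v1"
```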


def validate_required_extensions(
    agent_card: AgentCard,
    client_extensions: list[str] | None,
) -> list[AgentExtension]:
    """Validate that client supports all required extensions from agent.

    Args:
        agent_card: The agent's card with declared extensions.
        client_extensions: Extension URIs the client supports.

    Returns:
        List of unsupported required extensions.

    Note:
        Does not raise; the caller decides how to handle unsupported extensions.
    """
    unsupported: list[AgentExtension] = []
    client_set = set(client_extensions or [])

    if not agent_card.capabilities or not agent_card.capabilities.extensions:
        return unsupported

    unsupported.extend(
        ext
        for ext in agent_card.capabilities.extensions
        if ext.required and ext.uri not in client_set
    )

    return unsupported
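
A sketch of the intended call pattern (this function is removed by the diff); `agent_card` is assumed to be an `AgentCard` already fetched from the remote agent:

```python
# agent_card is assumed to be an AgentCard fetched from the remote agent.
unsupported = validate_required_extensions(
    agent_card, client_extensions=["urn:crewai:ext:sampling/v1"]
)
if unsupported:
    raise UnsupportedExtensionError(
        required_extensions=[ext.uri for ext in unsupported]
    )
```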
if TYPE_CHECKING:
    from crewai.a2a.config import A2AConfig


def create_extension_registry_from_config(
    a2a_config: list[A2AConfig | A2AClientConfig] | A2AConfig | A2AClientConfig,
    a2a_config: list[A2AConfig] | A2AConfig,
) -> ExtensionRegistry:
    """Create an extension registry from A2A client configuration.

    Extracts client_extensions from each A2AClientConfig and registers them
    with the ExtensionRegistry. These extensions provide CrewAI-specific
    processing hooks (tool injection, prompt augmentation, response processing).

    Note: A2A protocol extensions (URI strings sent via X-A2A-Extensions header)
    are handled separately via get_extensions_from_config() and ExtensionsMiddleware.
    """Create an extension registry from A2A configuration.

    Args:
        a2a_config: A2A configuration (single or list).
        a2a_config: A2A configuration (single or list)

    Returns:
        Extension registry with all client_extensions registered.

    Example:
        class LoggingExtension:
            def inject_tools(self, agent): pass
            def extract_state_from_history(self, history): return None
            def augment_prompt(self, prompt, state): return prompt
            def process_response(self, response, state):
                print(f"Response: {response}")
                return response

        config = A2AClientConfig(
            endpoint="https://agent.example.com",
            client_extensions=[LoggingExtension()],
        )
        registry = create_extension_registry_from_config(config)
        Configured extension registry with all applicable extensions
    """
    registry = ExtensionRegistry()
    configs = a2a_config if isinstance(a2a_config, list) else [a2a_config]

    seen: set[int] = set()

    for config in configs:
        if isinstance(config, (A2AConfig, A2AClientConfig)):
            client_exts = getattr(config, "client_extensions", [])
            for extension in client_exts:
                ext_id = id(extension)
                if ext_id not in seen:
                    seen.add(ext_id)
                    registry.register(extension)
    for _ in configs:
        pass

    return registry
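
Building on the docstring example above, a sketch of the deduplication the (pre-diff) factory performs when the same extension instance appears in several configs:

```python
class LoggingExtension:
    def inject_tools(self, agent): pass
    def extract_state_from_history(self, history): return None
    def augment_prompt(self, prompt, state): return prompt
    def process_response(self, response, state): return response

shared = LoggingExtension()
registry = create_extension_registry_from_config([
    A2AClientConfig(endpoint="https://a.example.com", client_extensions=[shared]),
    A2AClientConfig(endpoint="https://b.example.com", client_extensions=[shared]),
])
# The identity-based `seen` set ensures the shared instance registers only once.
```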

@@ -1,305 +0,0 @@
"""A2A protocol server extensions for CrewAI agents.

This module provides the base class and context for implementing A2A protocol
extensions on the server side. Extensions allow agents to offer additional
functionality beyond the core A2A specification.

See: https://a2a-protocol.org/latest/topics/extensions/
"""

from __future__ import annotations

from abc import ABC, abstractmethod
from dataclasses import dataclass, field
import logging
from typing import TYPE_CHECKING, Annotated, Any

from a2a.types import AgentExtension
from pydantic_core import CoreSchema, core_schema


if TYPE_CHECKING:
    from a2a.server.context import ServerCallContext
    from pydantic import GetCoreSchemaHandler


logger = logging.getLogger(__name__)


@dataclass
class ExtensionContext:
    """Context passed to extension hooks during request processing.

    Provides access to request metadata, client extensions, and shared state
    that extensions can read from and write to.

    Attributes:
        metadata: Request metadata dict, includes extension-namespaced keys.
        client_extensions: Set of extension URIs the client declared support for.
        state: Mutable dict for extensions to share data during request lifecycle.
        server_context: The underlying A2A server call context.
    """

    metadata: dict[str, Any]
    client_extensions: set[str]
    state: dict[str, Any] = field(default_factory=dict)
    server_context: ServerCallContext | None = None

    def get_extension_metadata(self, uri: str, key: str) -> Any | None:
        """Get extension-specific metadata value.

        Extension metadata uses namespaced keys in the format:
        "{extension_uri}/{key}"

        Args:
            uri: The extension URI.
            key: The metadata key within the extension namespace.

        Returns:
            The metadata value, or None if not present.
        """
        full_key = f"{uri}/{key}"
        return self.metadata.get(full_key)

    def set_extension_metadata(self, uri: str, key: str, value: Any) -> None:
        """Set extension-specific metadata value.

        Args:
            uri: The extension URI.
            key: The metadata key within the extension namespace.
            value: The value to set.
        """
        full_key = f"{uri}/{key}"
        self.metadata[full_key] = value


class ServerExtension(ABC):
    """Base class for A2A protocol server extensions.

    Subclass this to create custom extensions that modify agent behavior
    when clients activate them. Extensions are identified by URI and can
    be marked as required.

    Example:
        class SamplingExtension(ServerExtension):
            uri = "urn:crewai:ext:sampling/v1"
            required = True

            def __init__(self, max_tokens: int = 4096):
                self.max_tokens = max_tokens

            @property
            def params(self) -> dict[str, Any]:
                return {"max_tokens": self.max_tokens}

            async def on_request(self, context: ExtensionContext) -> None:
                limit = context.get_extension_metadata(self.uri, "limit")
                if limit:
                    context.state["token_limit"] = int(limit)

            async def on_response(self, context: ExtensionContext, result: Any) -> Any:
                return result
    """

    uri: Annotated[str, "Extension URI identifier. Must be unique."]
    required: Annotated[bool, "Whether clients must support this extension."] = False
    description: Annotated[
        str | None, "Human-readable description of the extension."
    ] = None

    @classmethod
    def __get_pydantic_core_schema__(
        cls,
        _source_type: Any,
        _handler: GetCoreSchemaHandler,
    ) -> CoreSchema:
        """Tell Pydantic how to validate ServerExtension instances."""
        return core_schema.is_instance_schema(cls)

    @property
    def params(self) -> dict[str, Any] | None:
        """Extension parameters to advertise in AgentCard.

        Override this property to expose configuration that clients can read.

        Returns:
            Dict of parameter names to values, or None.
        """
        return None

    def agent_extension(self) -> AgentExtension:
        """Generate the AgentExtension object for the AgentCard.

        Returns:
            AgentExtension with this extension's URI, required flag, and params.
        """
        return AgentExtension(
            uri=self.uri,
            required=self.required if self.required else None,
            description=self.description,
            params=self.params,
        )

    def is_active(self, context: ExtensionContext) -> bool:
        """Check if this extension is active for the current request.

        An extension is active if the client declared support for it.

        Args:
            context: The extension context for the current request.

        Returns:
            True if the client supports this extension.
        """
        return self.uri in context.client_extensions

    @abstractmethod
    async def on_request(self, context: ExtensionContext) -> None:
        """Called before agent execution if extension is active.

        Use this hook to:
        - Read extension-specific metadata from the request
        - Set up state for the execution
        - Modify execution parameters via context.state

        Args:
            context: The extension context with request metadata and state.
        """
        ...

    @abstractmethod
    async def on_response(self, context: ExtensionContext, result: Any) -> Any:
        """Called after agent execution if extension is active.

        Use this hook to:
        - Modify or enhance the result
        - Add extension-specific metadata to the response
        - Clean up any resources

        Args:
            context: The extension context with request metadata and state.
            result: The agent execution result.

        Returns:
            The result, potentially modified.
        """
        ...


class ServerExtensionRegistry:
    """Registry for managing server-side A2A protocol extensions.

    Collects extensions and provides methods to generate AgentCapabilities
    and invoke extension hooks during request processing.
    """

    def __init__(self, extensions: list[ServerExtension] | None = None) -> None:
        """Initialize the registry with optional extensions.

        Args:
            extensions: Initial list of extensions to register.
        """
        self._extensions: list[ServerExtension] = list(extensions) if extensions else []
        self._by_uri: dict[str, ServerExtension] = {
            ext.uri: ext for ext in self._extensions
        }

    def register(self, extension: ServerExtension) -> None:
        """Register an extension.

        Args:
            extension: The extension to register.

        Raises:
            ValueError: If an extension with the same URI is already registered.
        """
        if extension.uri in self._by_uri:
            raise ValueError(f"Extension already registered: {extension.uri}")
        self._extensions.append(extension)
        self._by_uri[extension.uri] = extension

    def get_agent_extensions(self) -> list[AgentExtension]:
        """Get AgentExtension objects for all registered extensions.

        Returns:
            List of AgentExtension objects for the AgentCard.
        """
        return [ext.agent_extension() for ext in self._extensions]

    def get_extension(self, uri: str) -> ServerExtension | None:
        """Get an extension by URI.

        Args:
            uri: The extension URI.

        Returns:
            The extension, or None if not found.
        """
        return self._by_uri.get(uri)

    @staticmethod
    def create_context(
        metadata: dict[str, Any],
        client_extensions: set[str],
        server_context: ServerCallContext | None = None,
    ) -> ExtensionContext:
        """Create an ExtensionContext for a request.

        Args:
            metadata: Request metadata dict.
            client_extensions: Set of extension URIs from client.
            server_context: Optional server call context.

        Returns:
            ExtensionContext for use in hooks.
        """
        return ExtensionContext(
            metadata=metadata,
            client_extensions=client_extensions,
            server_context=server_context,
        )

    async def invoke_on_request(self, context: ExtensionContext) -> None:
        """Invoke on_request hooks for all active extensions.

        Tracks activated extensions and isolates errors from individual hooks.

        Args:
            context: The extension context for the request.
        """
        for extension in self._extensions:
            if extension.is_active(context):
                try:
                    await extension.on_request(context)
                    if context.server_context is not None:
                        context.server_context.activated_extensions.add(extension.uri)
                except Exception:
                    logger.exception(
                        "Extension on_request hook failed",
                        extra={"extension": extension.uri},
                    )

    async def invoke_on_response(self, context: ExtensionContext, result: Any) -> Any:
        """Invoke on_response hooks for all active extensions.

        Isolates errors from individual hooks to prevent one failing extension
        from breaking the entire response.

        Args:
            context: The extension context for the request.
            result: The agent execution result.

        Returns:
            The result after all extensions have processed it.
        """
        processed = result
        for extension in self._extensions:
            if extension.is_active(context):
                try:
                    processed = await extension.on_response(context, processed)
                except Exception:
                    logger.exception(
                        "Extension on_response hook failed",
                        extra={"extension": extension.uri},
                    )
        return processed
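
Putting the (deleted) server-extension machinery together, a self-contained sketch of the request/response lifecycle, reusing the `SamplingExtension` idea from the class docstring:

```python
import asyncio
from typing import Any


class SamplingExtension(ServerExtension):
    uri = "urn:crewai:ext:sampling/v1"

    async def on_request(self, context: ExtensionContext) -> None:
        limit = context.get_extension_metadata(self.uri, "limit")
        if limit:
            context.state["token_limit"] = int(limit)

    async def on_response(self, context: ExtensionContext, result: Any) -> Any:
        return result


async def main() -> None:
    registry = ServerExtensionRegistry([SamplingExtension()])
    ctx = registry.create_context(
        metadata={"urn:crewai:ext:sampling/v1/limit": "1024"},  # namespaced key
        client_extensions={"urn:crewai:ext:sampling/v1"},       # client opted in
    )
    await registry.invoke_on_request(ctx)
    assert ctx.state["token_limit"] == 1024
    final = await registry.invoke_on_response(ctx, "final answer")
    assert final == "final answer"


asyncio.run(main())
```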

@@ -51,13 +51,6 @@ ACTIONABLE_STATES: frozenset[TaskState] = frozenset(
    }
)

PENDING_STATES: frozenset[TaskState] = frozenset(
    {
        TaskState.submitted,
        TaskState.working,
    }
)


class TaskStateResult(TypedDict):
    """Result dictionary from processing A2A task state."""
@@ -279,9 +272,6 @@ def process_task_state(
            history=new_messages,
        )

    if a2a_task.status.state in PENDING_STATES:
        return None

    return None
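
For reference, the semantics of the removed guard, assuming the A2A SDK's `TaskState` members shown above:

```python
from a2a.types import TaskState

# The removed guard treated submitted/working tasks as "no result yet".
assert TaskState.working in PENDING_STATES
assert TaskState.completed not in PENDING_STATES
```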

@@ -38,18 +38,3 @@ You MUST now:
DO NOT send another request - the task is already done.
</REMOTE_AGENT_STATUS>
"""

REMOTE_AGENT_RESPONSE_NOTICE: Final[str] = """
<REMOTE_AGENT_STATUS>
STATUS: RESPONSE_RECEIVED
The remote agent has responded. Their response is in the conversation history above.

You MUST now:
1. Set is_a2a=false (the remote task is complete and cannot receive more messages)
2. Provide YOUR OWN response to the original task based on the information received

IMPORTANT: Your response should be addressed to the USER who gave you the original task.
Report what the remote agent told you in THIRD PERSON (e.g., "The remote agent said..." or "I learned that...").
Do NOT address the remote agent directly or use "you" to refer to them.
</REMOTE_AGENT_STATUS>
"""

@@ -36,17 +36,6 @@ except ImportError:


TransportType = Literal["JSONRPC", "GRPC", "HTTP+JSON"]
ProtocolVersion = Literal[
    "0.2.0",
    "0.2.1",
    "0.2.2",
    "0.2.3",
    "0.2.4",
    "0.2.5",
    "0.2.6",
    "0.3.0",
    "0.4.0",
]

http_url_adapter: TypeAdapter[HttpUrl] = TypeAdapter(HttpUrl)
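
A short sketch of how these module-level aliases are consumed; nothing here beyond the definitions above:

```python
transport: TransportType = "GRPC"   # a type checker rejects any other string
proto: ProtocolVersion = "0.3.0"

# The module-level TypeAdapter validates plain strings as HttpUrl values.
endpoint = http_url_adapter.validate_python("https://agent.example.com")
```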

@@ -2,28 +2,12 @@

from __future__ import annotations

from typing import TYPE_CHECKING, Any, NamedTuple, Protocol, TypedDict
from typing import TYPE_CHECKING, Any, Protocol, TypedDict

from pydantic import GetCoreSchemaHandler
from pydantic_core import CoreSchema, core_schema


class CommonParams(NamedTuple):
    """Common parameters shared across all update handlers.

    Groups the frequently-passed parameters to reduce duplication.
    """

    turn_number: int
    is_multiturn: bool
    agent_role: str | None
    endpoint: str
    a2a_agent_name: str | None
    context_id: str | None
    from_task: Any
    from_agent: Any


if TYPE_CHECKING:
    from a2a.client import Client
    from a2a.types import AgentCard, Message, Task
@@ -79,8 +63,8 @@ class PushNotificationResultStore(Protocol):
    @classmethod
    def __get_pydantic_core_schema__(
        cls,
        _source_type: Any,
        _handler: GetCoreSchemaHandler,
        source_type: Any,
        handler: GetCoreSchemaHandler,
    ) -> CoreSchema:
        return core_schema.any_schema()

@@ -146,31 +130,3 @@ class UpdateHandler(Protocol):
            Result dictionary with status, result/error, and history.
        """
        ...


def extract_common_params(kwargs: BaseHandlerKwargs) -> CommonParams:
    """Extract common parameters from handler kwargs.

    Args:
        kwargs: Handler kwargs dict.

    Returns:
        CommonParams with extracted values.

    Raises:
        ValueError: If endpoint is not provided.
    """
    endpoint = kwargs.get("endpoint")
    if endpoint is None:
        raise ValueError("endpoint is required for update handlers")

    return CommonParams(
        turn_number=kwargs.get("turn_number", 0),
        is_multiturn=kwargs.get("is_multiturn", False),
        agent_role=kwargs.get("agent_role"),
        endpoint=endpoint,
        a2a_agent_name=kwargs.get("a2a_agent_name"),
        context_id=kwargs.get("context_id"),
        from_task=kwargs.get("from_task"),
        from_agent=kwargs.get("from_agent"),
    )
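
A sketch of the (removed) helper's behavior, following the defaults in the body above:

```python
params = extract_common_params(
    {"endpoint": "https://agent.example.com", "turn_number": 2}
)
assert params.endpoint == "https://agent.example.com"
assert params.turn_number == 2
assert params.is_multiturn is False  # default applied

extract_common_params({})  # raises ValueError: endpoint is required
```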

@@ -94,7 +94,7 @@ async def _poll_task_until_complete(
        A2APollingStatusEvent(
            task_id=task_id,
            context_id=effective_context_id,
            state=str(task.status.state.value),
            state=str(task.status.state.value) if task.status.state else "unknown",
            elapsed_seconds=elapsed,
            poll_count=poll_count,
            endpoint=endpoint,
@@ -325,7 +325,7 @@ class PollingHandler:
            crewai_event_bus.emit(
                agent_branch,
                A2AConnectionErrorEvent(
                    endpoint=endpoint,
                    endpoint=endpoint or "",
                    error=str(e),
                    error_type="unexpected_error",
                    a2a_agent_name=a2a_agent_name,
@@ -2,30 +2,10 @@

from __future__ import annotations

from typing import Annotated

from a2a.types import PushNotificationAuthenticationInfo
from pydantic import AnyHttpUrl, BaseModel, BeforeValidator, Field
from pydantic import AnyHttpUrl, BaseModel, Field

from crewai.a2a.updates.base import PushNotificationResultStore
from crewai.a2a.updates.push_notifications.signature import WebhookSignatureConfig


def _coerce_signature(
    value: str | WebhookSignatureConfig | None,
) -> WebhookSignatureConfig | None:
    """Convert string secret to WebhookSignatureConfig."""
    if value is None:
        return None
    if isinstance(value, str):
        return WebhookSignatureConfig.hmac_sha256(secret=value)
    return value


SignatureInput = Annotated[
    WebhookSignatureConfig | None,
    BeforeValidator(_coerce_signature),
]


class PushNotificationConfig(BaseModel):
@@ -39,8 +19,6 @@
        timeout: Max seconds to wait for task completion.
        interval: Seconds between result polling attempts.
        result_store: Store for receiving push notification results.
        signature: HMAC signature config. Pass a string (secret) for defaults,
            or WebhookSignatureConfig for custom settings.
    """

    url: AnyHttpUrl = Field(description="Callback URL for push notifications")
@@ -58,8 +36,3 @@
    result_store: PushNotificationResultStore | None = Field(
        default=None, description="Result store for push notification handling"
    )
    signature: SignatureInput = Field(
        default=None,
        description="HMAC signature config. Pass a string (secret) for simple usage, "
        "or WebhookSignatureConfig for custom headers/tolerance.",
    )
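
A sketch of the string-to-config coercion being removed here; `WebhookSignatureMode` is defined in the signature module shown later in this diff, and other `PushNotificationConfig` fields are assumed to have defaults:

```python
cfg = PushNotificationConfig(
    url="https://hooks.example.com/a2a",
    signature="my-shared-secret",  # coerced via _coerce_signature
)
assert cfg.signature is not None
assert cfg.signature.mode is WebhookSignatureMode.HMAC_SHA256
```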
|
||||
|
||||
@@ -24,10 +24,8 @@ from crewai.a2a.task_helpers import (
|
||||
send_message_and_get_task_id,
|
||||
)
|
||||
from crewai.a2a.updates.base import (
|
||||
CommonParams,
|
||||
PushNotificationHandlerKwargs,
|
||||
PushNotificationResultStore,
|
||||
extract_common_params,
|
||||
)
|
||||
from crewai.events.event_bus import crewai_event_bus
|
||||
from crewai.events.types.a2a_events import (
|
||||
@@ -41,81 +39,10 @@ from crewai.events.types.a2a_events import (
|
||||
if TYPE_CHECKING:
|
||||
from a2a.types import Task as A2ATask
|
||||
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def _handle_push_error(
|
||||
error: Exception,
|
||||
error_msg: str,
|
||||
error_type: str,
|
||||
new_messages: list[Message],
|
||||
agent_branch: Any | None,
|
||||
params: CommonParams,
|
||||
task_id: str | None,
|
||||
status_code: int | None = None,
|
||||
) -> TaskStateResult:
|
||||
"""Handle push notification errors with consistent event emission.
|
||||
|
||||
Args:
|
||||
error: The exception that occurred.
|
||||
error_msg: Formatted error message for the result.
|
||||
error_type: Type of error for the event.
|
||||
new_messages: List to append error message to.
|
||||
agent_branch: Agent tree branch for events.
|
||||
params: Common handler parameters.
|
||||
task_id: A2A task ID.
|
||||
status_code: HTTP status code if applicable.
|
||||
|
||||
Returns:
|
||||
TaskStateResult with failed status.
|
||||
"""
|
||||
error_message = Message(
|
||||
role=Role.agent,
|
||||
message_id=str(uuid.uuid4()),
|
||||
parts=[Part(root=TextPart(text=error_msg))],
|
||||
context_id=params.context_id,
|
||||
task_id=task_id,
|
||||
)
|
||||
new_messages.append(error_message)
|
||||
|
||||
crewai_event_bus.emit(
|
||||
agent_branch,
|
||||
A2AConnectionErrorEvent(
|
||||
endpoint=params.endpoint,
|
||||
error=str(error),
|
||||
error_type=error_type,
|
||||
status_code=status_code,
|
||||
a2a_agent_name=params.a2a_agent_name,
|
||||
operation="push_notification",
|
||||
context_id=params.context_id,
|
||||
task_id=task_id,
|
||||
from_task=params.from_task,
|
||||
from_agent=params.from_agent,
|
||||
),
|
||||
)
|
||||
crewai_event_bus.emit(
|
||||
agent_branch,
|
||||
A2AResponseReceivedEvent(
|
||||
response=error_msg,
|
||||
turn_number=params.turn_number,
|
||||
context_id=params.context_id,
|
||||
is_multiturn=params.is_multiturn,
|
||||
status="failed",
|
||||
final=True,
|
||||
agent_role=params.agent_role,
|
||||
endpoint=params.endpoint,
|
||||
a2a_agent_name=params.a2a_agent_name,
|
||||
from_task=params.from_task,
|
||||
from_agent=params.from_agent,
|
||||
),
|
||||
)
|
||||
return TaskStateResult(
|
||||
status=TaskState.failed,
|
||||
error=error_msg,
|
||||
history=new_messages,
|
||||
)
|
||||
|
||||
|
||||
async def _wait_for_push_result(
|
||||
task_id: str,
|
||||
result_store: PushNotificationResultStore,
|
||||
@@ -199,8 +126,15 @@ class PushNotificationHandler:
|
||||
polling_timeout = kwargs.get("polling_timeout", 300.0)
|
||||
polling_interval = kwargs.get("polling_interval", 2.0)
|
||||
agent_branch = kwargs.get("agent_branch")
|
||||
turn_number = kwargs.get("turn_number", 0)
|
||||
is_multiturn = kwargs.get("is_multiturn", False)
|
||||
agent_role = kwargs.get("agent_role")
|
||||
context_id = kwargs.get("context_id")
|
||||
task_id = kwargs.get("task_id")
|
||||
params = extract_common_params(kwargs)
|
||||
endpoint = kwargs.get("endpoint")
|
||||
a2a_agent_name = kwargs.get("a2a_agent_name")
|
||||
from_task = kwargs.get("from_task")
|
||||
from_agent = kwargs.get("from_agent")
|
||||
|
||||
if config is None:
|
||||
error_msg = (
|
||||
@@ -209,15 +143,15 @@ class PushNotificationHandler:
|
||||
crewai_event_bus.emit(
|
||||
agent_branch,
|
||||
A2AConnectionErrorEvent(
|
||||
endpoint=params.endpoint,
|
||||
endpoint=endpoint or "",
|
||||
error=error_msg,
|
||||
error_type="configuration_error",
|
||||
a2a_agent_name=params.a2a_agent_name,
|
||||
a2a_agent_name=a2a_agent_name,
|
||||
operation="push_notification",
|
||||
context_id=params.context_id,
|
||||
context_id=context_id,
|
||||
task_id=task_id,
|
||||
from_task=params.from_task,
|
||||
from_agent=params.from_agent,
|
||||
from_task=from_task,
|
||||
from_agent=from_agent,
|
||||
),
|
||||
)
|
||||
return TaskStateResult(
|
||||
@@ -233,15 +167,15 @@ class PushNotificationHandler:
|
||||
crewai_event_bus.emit(
|
||||
agent_branch,
|
||||
A2AConnectionErrorEvent(
|
||||
endpoint=params.endpoint,
|
||||
endpoint=endpoint or "",
|
||||
error=error_msg,
|
||||
error_type="configuration_error",
|
||||
a2a_agent_name=params.a2a_agent_name,
|
||||
a2a_agent_name=a2a_agent_name,
|
||||
operation="push_notification",
|
||||
context_id=params.context_id,
|
||||
context_id=context_id,
|
||||
task_id=task_id,
|
||||
from_task=params.from_task,
|
||||
from_agent=params.from_agent,
|
||||
from_task=from_task,
|
||||
from_agent=from_agent,
|
||||
),
|
||||
)
|
||||
return TaskStateResult(
|
||||
@@ -255,14 +189,14 @@ class PushNotificationHandler:
|
||||
event_stream=client.send_message(message),
|
||||
new_messages=new_messages,
|
||||
agent_card=agent_card,
|
||||
turn_number=params.turn_number,
|
||||
is_multiturn=params.is_multiturn,
|
||||
agent_role=params.agent_role,
|
||||
from_task=params.from_task,
|
||||
from_agent=params.from_agent,
|
||||
endpoint=params.endpoint,
|
||||
a2a_agent_name=params.a2a_agent_name,
|
||||
context_id=params.context_id,
|
||||
turn_number=turn_number,
|
||||
is_multiturn=is_multiturn,
|
||||
agent_role=agent_role,
|
||||
from_task=from_task,
|
||||
from_agent=from_agent,
|
||||
endpoint=endpoint,
|
||||
a2a_agent_name=a2a_agent_name,
|
||||
context_id=context_id,
|
||||
)
|
||||
|
||||
if not isinstance(result_or_task_id, str):
|
||||
@@ -274,12 +208,12 @@ class PushNotificationHandler:
|
||||
agent_branch,
|
||||
A2APushNotificationRegisteredEvent(
|
||||
task_id=task_id,
|
||||
context_id=params.context_id,
|
||||
context_id=context_id,
|
||||
callback_url=str(config.url),
|
||||
endpoint=params.endpoint,
|
||||
a2a_agent_name=params.a2a_agent_name,
|
||||
from_task=params.from_task,
|
||||
from_agent=params.from_agent,
|
||||
endpoint=endpoint,
|
||||
a2a_agent_name=a2a_agent_name,
|
||||
from_task=from_task,
|
||||
from_agent=from_agent,
|
||||
),
|
||||
)
|
||||
|
||||
@@ -295,11 +229,11 @@ class PushNotificationHandler:
|
||||
timeout=polling_timeout,
|
||||
poll_interval=polling_interval,
|
||||
agent_branch=agent_branch,
|
||||
from_task=params.from_task,
|
||||
from_agent=params.from_agent,
|
||||
context_id=params.context_id,
|
||||
endpoint=params.endpoint,
|
||||
a2a_agent_name=params.a2a_agent_name,
|
||||
from_task=from_task,
|
||||
from_agent=from_agent,
|
||||
context_id=context_id,
|
||||
endpoint=endpoint,
|
||||
a2a_agent_name=a2a_agent_name,
|
                    )

                if final_task is None:
@@ -313,13 +247,13 @@ class PushNotificationHandler:
                    a2a_task=final_task,
                    new_messages=new_messages,
                    agent_card=agent_card,
                    turn_number=params.turn_number,
                    is_multiturn=params.is_multiturn,
                    agent_role=params.agent_role,
                    endpoint=params.endpoint,
                    a2a_agent_name=params.a2a_agent_name,
                    from_task=params.from_task,
                    from_agent=params.from_agent,
                    turn_number=turn_number,
                    is_multiturn=is_multiturn,
                    agent_role=agent_role,
                    endpoint=endpoint,
                    a2a_agent_name=a2a_agent_name,
                    from_task=from_task,
                    from_agent=from_agent,
                )
                if result:
                    return result
@@ -331,24 +265,98 @@ class PushNotificationHandler:
            )

        except A2AClientHTTPError as e:
            return _handle_push_error(
                error=e,
                error_msg=f"HTTP Error {e.status_code}: {e!s}",
                error_type="http_error",
                new_messages=new_messages,
                agent_branch=agent_branch,
                params=params,
            error_msg = f"HTTP Error {e.status_code}: {e!s}"

            error_message = Message(
                role=Role.agent,
                message_id=str(uuid.uuid4()),
                parts=[Part(root=TextPart(text=error_msg))],
                context_id=context_id,
                task_id=task_id,
                status_code=e.status_code,
            )
            new_messages.append(error_message)

            crewai_event_bus.emit(
                agent_branch,
                A2AConnectionErrorEvent(
                    endpoint=endpoint or "",
                    error=str(e),
                    error_type="http_error",
                    status_code=e.status_code,
                    a2a_agent_name=a2a_agent_name,
                    operation="push_notification",
                    context_id=context_id,
                    task_id=task_id,
                    from_task=from_task,
                    from_agent=from_agent,
                ),
            )
            crewai_event_bus.emit(
                agent_branch,
                A2AResponseReceivedEvent(
                    response=error_msg,
                    turn_number=turn_number,
                    context_id=context_id,
                    is_multiturn=is_multiturn,
                    status="failed",
                    final=True,
                    agent_role=agent_role,
                    endpoint=endpoint,
                    a2a_agent_name=a2a_agent_name,
                    from_task=from_task,
                    from_agent=from_agent,
                ),
            )
            return TaskStateResult(
                status=TaskState.failed,
                error=error_msg,
                history=new_messages,
            )

        except Exception as e:
            return _handle_push_error(
                error=e,
                error_msg=f"Unexpected error during push notification: {e!s}",
                error_type="unexpected_error",
                new_messages=new_messages,
                agent_branch=agent_branch,
                params=params,
            error_msg = f"Unexpected error during push notification: {e!s}"

            error_message = Message(
                role=Role.agent,
                message_id=str(uuid.uuid4()),
                parts=[Part(root=TextPart(text=error_msg))],
                context_id=context_id,
                task_id=task_id,
            )
            new_messages.append(error_message)

            crewai_event_bus.emit(
                agent_branch,
                A2AConnectionErrorEvent(
                    endpoint=endpoint or "",
                    error=str(e),
                    error_type="unexpected_error",
                    a2a_agent_name=a2a_agent_name,
                    operation="push_notification",
                    context_id=context_id,
                    task_id=task_id,
                    from_task=from_task,
                    from_agent=from_agent,
                ),
            )
            crewai_event_bus.emit(
                agent_branch,
                A2AResponseReceivedEvent(
                    response=error_msg,
                    turn_number=turn_number,
                    context_id=context_id,
                    is_multiturn=is_multiturn,
                    status="failed",
                    final=True,
                    agent_role=agent_role,
                    endpoint=endpoint,
                    a2a_agent_name=a2a_agent_name,
                    from_task=from_task,
                    from_agent=from_agent,
                ),
            )
            return TaskStateResult(
                status=TaskState.failed,
                error=error_msg,
                history=new_messages,
            )
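One side of this hunk routes both `except` branches through a single `_handle_push_error` helper instead of repeating the message-append, double event emit, and failure result inline. The helper's body is not part of the hunk, so the following is a minimal sketch inferred from its call sites above; the keyword-only signature, the `params` bundle fields (including `params.task_id`), and the default `status_code` are assumptions, not the PR's actual implementation.

```python
# Hypothetical sketch of the consolidated error helper, inferred from call sites.
import uuid

from a2a.types import Message, Part, Role, TaskState, TextPart

from crewai.a2a.task_helpers import TaskStateResult
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.a2a_events import (
    A2AConnectionErrorEvent,
    A2AResponseReceivedEvent,
)


def _handle_push_error(
    *,
    error: Exception,
    error_msg: str,
    error_type: str,
    new_messages: list[Message],
    agent_branch,  # event-bus scope object passed through from the handler
    params,  # common-params bundle (assumed to carry the fields used below)
    status_code: int | None = None,
) -> TaskStateResult:
    """Append an agent error message, emit both failure events, fail the task."""
    error_message = Message(
        role=Role.agent,
        message_id=str(uuid.uuid4()),
        parts=[Part(root=TextPart(text=error_msg))],
        context_id=params.context_id,
        task_id=params.task_id,  # assumed field on the params bundle
    )
    new_messages.append(error_message)

    crewai_event_bus.emit(
        agent_branch,
        A2AConnectionErrorEvent(
            endpoint=params.endpoint or "",
            error=str(error),
            error_type=error_type,
            status_code=status_code,
            a2a_agent_name=params.a2a_agent_name,
            operation="push_notification",
            context_id=params.context_id,
            task_id=params.task_id,
            from_task=params.from_task,
            from_agent=params.from_agent,
        ),
    )
    crewai_event_bus.emit(
        agent_branch,
        A2AResponseReceivedEvent(
            response=error_msg,
            turn_number=params.turn_number,
            context_id=params.context_id,
            is_multiturn=params.is_multiturn,
            status="failed",
            final=True,
            agent_role=params.agent_role,
            endpoint=params.endpoint,
            a2a_agent_name=params.a2a_agent_name,
            from_task=params.from_task,
            from_agent=params.from_agent,
        ),
    )
    return TaskStateResult(
        status=TaskState.failed,
        error=error_msg,
        history=new_messages,
    )
```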
@@ -1,87 +0,0 @@
"""Webhook signature configuration for push notifications."""

from __future__ import annotations

from enum import Enum
import secrets

from pydantic import BaseModel, Field, SecretStr


class WebhookSignatureMode(str, Enum):
    """Signature mode for webhook push notifications."""

    NONE = "none"
    HMAC_SHA256 = "hmac_sha256"


class WebhookSignatureConfig(BaseModel):
    """Configuration for webhook signature verification.

    Provides cryptographic integrity verification and replay attack protection
    for A2A push notifications.

    Attributes:
        mode: Signature mode (none or hmac_sha256).
        secret: Shared secret for HMAC computation (required for hmac_sha256 mode).
        timestamp_tolerance_seconds: Max allowed age of timestamps for replay protection.
        header_name: HTTP header name for the signature.
        timestamp_header_name: HTTP header name for the timestamp.
    """

    mode: WebhookSignatureMode = Field(
        default=WebhookSignatureMode.NONE,
        description="Signature verification mode",
    )
    secret: SecretStr | None = Field(
        default=None,
        description="Shared secret for HMAC computation",
    )
    timestamp_tolerance_seconds: int = Field(
        default=300,
        ge=0,
        description="Max allowed timestamp age in seconds (5 min default)",
    )
    header_name: str = Field(
        default="X-A2A-Signature",
        description="HTTP header name for the signature",
    )
    timestamp_header_name: str = Field(
        default="X-A2A-Signature-Timestamp",
        description="HTTP header name for the timestamp",
    )

    @classmethod
    def generate_secret(cls, length: int = 32) -> str:
        """Generate a cryptographically secure random secret.

        Args:
            length: Number of random bytes to generate (default 32).

        Returns:
            URL-safe base64-encoded secret string.
        """
        return secrets.token_urlsafe(length)

    @classmethod
    def hmac_sha256(
        cls,
        secret: str | SecretStr,
        timestamp_tolerance_seconds: int = 300,
    ) -> WebhookSignatureConfig:
        """Create an HMAC-SHA256 signature configuration.

        Args:
            secret: Shared secret for HMAC computation.
            timestamp_tolerance_seconds: Max allowed timestamp age in seconds.

        Returns:
            Configured WebhookSignatureConfig for HMAC-SHA256.
        """
        if isinstance(secret, str):
            secret = SecretStr(secret)
        return cls(
            mode=WebhookSignatureMode.HMAC_SHA256,
            secret=secret,
            timestamp_tolerance_seconds=timestamp_tolerance_seconds,
        )
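`WebhookSignatureConfig` only carries the parameters; the actual signing and verification code lives elsewhere and is not part of this diff. As a rough illustration of how the fields fit together, here is a self-contained sketch of HMAC-SHA256 signing with timestamp-based replay protection. The payload layout (`"{timestamp}." + body`) is an assumption for illustration, not necessarily the wire format the library uses.

```python
# Illustrative only: the signed payload layout is an assumption.
import hashlib
import hmac
import time


def sign_payload(secret: str, body: bytes, timestamp: int | None = None) -> tuple[str, str]:
    """Return (signature_hex, timestamp_str) for a webhook request body."""
    ts = str(timestamp if timestamp is not None else int(time.time()))
    mac = hmac.new(secret.encode(), f"{ts}.".encode() + body, hashlib.sha256)
    return mac.hexdigest(), ts


def verify_payload(
    secret: str,
    body: bytes,
    signature: str,
    timestamp: str,
    tolerance_seconds: int = 300,  # mirrors timestamp_tolerance_seconds above
) -> bool:
    """Reject stale timestamps, then compare signatures in constant time."""
    if abs(time.time() - int(timestamp)) > tolerance_seconds:
        return False  # replay protection: timestamp outside the allowed window
    expected, _ = sign_payload(secret, body, int(timestamp))
    return hmac.compare_digest(expected, signature)
```

On the receiving side, a server would read the two headers named in the config (by default `X-A2A-Signature` and `X-A2A-Signature-Timestamp`) and pass their values to `verify_payload`.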
@@ -2,9 +2,6 @@

from __future__ import annotations

import asyncio
import logging
from typing import Final
import uuid

from a2a.client import Client
@@ -14,10 +11,7 @@ from a2a.types import (
    Message,
    Part,
    Role,
    Task,
    TaskArtifactUpdateEvent,
    TaskIdParams,
    TaskQueryParams,
    TaskState,
    TaskStatusUpdateEvent,
    TextPart,
@@ -30,10 +24,7 @@ from crewai.a2a.task_helpers import (
    TaskStateResult,
    process_task_state,
)
from crewai.a2a.updates.base import StreamingHandlerKwargs, extract_common_params
from crewai.a2a.updates.streaming.params import (
    process_status_update,
)
from crewai.a2a.updates.base import StreamingHandlerKwargs
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.a2a_events import (
    A2AArtifactReceivedEvent,
@@ -44,194 +35,9 @@ from crewai.events.types.a2a_events import (
)


logger = logging.getLogger(__name__)

MAX_RESUBSCRIBE_ATTEMPTS: Final[int] = 3
RESUBSCRIBE_BACKOFF_BASE: Final[float] = 1.0


class StreamingHandler:
    """SSE streaming-based update handler."""

    @staticmethod
    async def _try_recover_from_interruption(  # type: ignore[misc]
        client: Client,
        task_id: str,
        new_messages: list[Message],
        agent_card: AgentCard,
        result_parts: list[str],
        **kwargs: Unpack[StreamingHandlerKwargs],
    ) -> TaskStateResult | None:
        """Attempt to recover from a stream interruption by checking task state.

        If the task completed while we were disconnected, returns the result.
        If the task is still running, attempts to resubscribe and continue.

        Args:
            client: A2A client instance.
            task_id: The task ID to recover.
            new_messages: List of collected messages.
            agent_card: The agent card.
            result_parts: Accumulated result text parts.
            **kwargs: Handler parameters.

        Returns:
            TaskStateResult if recovery succeeded (task finished or resubscribe worked).
            None if recovery not possible (caller should handle failure).

        Note:
            When None is returned, recovery failed and the original exception should
            be handled by the caller. All recovery attempts are logged.
        """
        params = extract_common_params(kwargs)  # type: ignore[arg-type]

        try:
            a2a_task: Task = await client.get_task(TaskQueryParams(id=task_id))

            if a2a_task.status.state in TERMINAL_STATES:
                logger.info(
                    "Task completed during stream interruption",
                    extra={"task_id": task_id, "state": str(a2a_task.status.state)},
                )
                return process_task_state(
                    a2a_task=a2a_task,
                    new_messages=new_messages,
                    agent_card=agent_card,
                    turn_number=params.turn_number,
                    is_multiturn=params.is_multiturn,
                    agent_role=params.agent_role,
                    result_parts=result_parts,
                    endpoint=params.endpoint,
                    a2a_agent_name=params.a2a_agent_name,
                    from_task=params.from_task,
                    from_agent=params.from_agent,
                )

            if a2a_task.status.state in ACTIONABLE_STATES:
                logger.info(
                    "Task in actionable state during stream interruption",
                    extra={"task_id": task_id, "state": str(a2a_task.status.state)},
                )
                return process_task_state(
                    a2a_task=a2a_task,
                    new_messages=new_messages,
                    agent_card=agent_card,
                    turn_number=params.turn_number,
                    is_multiturn=params.is_multiturn,
                    agent_role=params.agent_role,
                    result_parts=result_parts,
                    endpoint=params.endpoint,
                    a2a_agent_name=params.a2a_agent_name,
                    from_task=params.from_task,
                    from_agent=params.from_agent,
                    is_final=False,
                )

            logger.info(
                "Task still running, attempting resubscribe",
                extra={"task_id": task_id, "state": str(a2a_task.status.state)},
            )

            for attempt in range(MAX_RESUBSCRIBE_ATTEMPTS):
                try:
                    backoff = RESUBSCRIBE_BACKOFF_BASE * (2**attempt)
                    if attempt > 0:
                        await asyncio.sleep(backoff)

                    event_stream = client.resubscribe(TaskIdParams(id=task_id))

                    async for event in event_stream:
                        if isinstance(event, tuple):
                            resubscribed_task, update = event

                            is_final_update = (
                                process_status_update(update, result_parts)
                                if isinstance(update, TaskStatusUpdateEvent)
                                else False
                            )

                            if isinstance(update, TaskArtifactUpdateEvent):
                                artifact = update.artifact
                                result_parts.extend(
                                    part.root.text
                                    for part in artifact.parts
                                    if part.root.kind == "text"
                                )

                            if (
                                is_final_update
                                or resubscribed_task.status.state
                                in TERMINAL_STATES | ACTIONABLE_STATES
                            ):
                                return process_task_state(
                                    a2a_task=resubscribed_task,
                                    new_messages=new_messages,
                                    agent_card=agent_card,
                                    turn_number=params.turn_number,
                                    is_multiturn=params.is_multiturn,
                                    agent_role=params.agent_role,
                                    result_parts=result_parts,
                                    endpoint=params.endpoint,
                                    a2a_agent_name=params.a2a_agent_name,
                                    from_task=params.from_task,
                                    from_agent=params.from_agent,
                                    is_final=is_final_update,
                                )

                        elif isinstance(event, Message):
                            new_messages.append(event)
                            result_parts.extend(
                                part.root.text
                                for part in event.parts
                                if part.root.kind == "text"
                            )

                    final_task = await client.get_task(TaskQueryParams(id=task_id))
                    return process_task_state(
                        a2a_task=final_task,
                        new_messages=new_messages,
                        agent_card=agent_card,
                        turn_number=params.turn_number,
                        is_multiturn=params.is_multiturn,
                        agent_role=params.agent_role,
                        result_parts=result_parts,
                        endpoint=params.endpoint,
                        a2a_agent_name=params.a2a_agent_name,
                        from_task=params.from_task,
                        from_agent=params.from_agent,
                    )

                except Exception as resubscribe_error:  # noqa: PERF203
                    logger.warning(
                        "Resubscribe attempt failed",
                        extra={
                            "task_id": task_id,
                            "attempt": attempt + 1,
                            "max_attempts": MAX_RESUBSCRIBE_ATTEMPTS,
                            "error": str(resubscribe_error),
                        },
                    )
                    if attempt == MAX_RESUBSCRIBE_ATTEMPTS - 1:
                        return None

        except Exception as e:
            logger.warning(
                "Failed to recover from stream interruption due to unexpected error",
                extra={
                    "task_id": task_id,
                    "error": str(e),
                    "error_type": type(e).__name__,
                },
                exc_info=True,
            )
            return None

        logger.warning(
            "Recovery exhausted all resubscribe attempts without success",
            extra={"task_id": task_id, "max_attempts": MAX_RESUBSCRIBE_ATTEMPTS},
        )
        return None

    @staticmethod
    async def execute(
        client: Client,
@@ -252,40 +58,42 @@ class StreamingHandler:
        Returns:
            Dictionary with status, result/error, and history.
        """
        context_id = kwargs.get("context_id")
        task_id = kwargs.get("task_id")
        turn_number = kwargs.get("turn_number", 0)
        is_multiturn = kwargs.get("is_multiturn", False)
        agent_role = kwargs.get("agent_role")
        endpoint = kwargs.get("endpoint")
        a2a_agent_name = kwargs.get("a2a_agent_name")
        from_task = kwargs.get("from_task")
        from_agent = kwargs.get("from_agent")
        agent_branch = kwargs.get("agent_branch")
        params = extract_common_params(kwargs)

        result_parts: list[str] = []
        final_result: TaskStateResult | None = None
        event_stream = client.send_message(message)
        chunk_index = 0
        current_task_id: str | None = task_id

        crewai_event_bus.emit(
            agent_branch,
            A2AStreamingStartedEvent(
                task_id=task_id,
                context_id=params.context_id,
                endpoint=params.endpoint,
                a2a_agent_name=params.a2a_agent_name,
                turn_number=params.turn_number,
                is_multiturn=params.is_multiturn,
                agent_role=params.agent_role,
                from_task=params.from_task,
                from_agent=params.from_agent,
                context_id=context_id,
                endpoint=endpoint or "",
                a2a_agent_name=a2a_agent_name,
                turn_number=turn_number,
                is_multiturn=is_multiturn,
                agent_role=agent_role,
                from_task=from_task,
                from_agent=from_agent,
            ),
        )

        try:
            async for event in event_stream:
                if isinstance(event, tuple):
                    a2a_task, _ = event
                    current_task_id = a2a_task.id

                if isinstance(event, Message):
                    new_messages.append(event)
                    message_context_id = event.context_id or params.context_id
                    message_context_id = event.context_id or context_id
                    for part in event.parts:
                        if part.root.kind == "text":
                            text = part.root.text
@@ -297,12 +105,12 @@ class StreamingHandler:
                                context_id=message_context_id,
                                chunk=text,
                                chunk_index=chunk_index,
                                endpoint=params.endpoint,
                                a2a_agent_name=params.a2a_agent_name,
                                turn_number=params.turn_number,
                                is_multiturn=params.is_multiturn,
                                from_task=params.from_task,
                                from_agent=params.from_agent,
                                endpoint=endpoint,
                                a2a_agent_name=a2a_agent_name,
                                turn_number=turn_number,
                                is_multiturn=is_multiturn,
                                from_task=from_task,
                                from_agent=from_agent,
                            ),
                        )
                        chunk_index += 1
@@ -320,12 +128,12 @@ class StreamingHandler:
                    artifact_size = None
                    if artifact.parts:
                        artifact_size = sum(
                            len(p.root.text.encode())
                            len(p.root.text.encode("utf-8"))
                            if p.root.kind == "text"
                            else len(getattr(p.root, "data", b""))
                            for p in artifact.parts
                        )
                    effective_context_id = a2a_task.context_id or params.context_id
                    effective_context_id = a2a_task.context_id or context_id
                    crewai_event_bus.emit(
                        agent_branch,
                        A2AArtifactReceivedEvent(
@@ -339,21 +147,29 @@ class StreamingHandler:
                            size_bytes=artifact_size,
                            append=update.append or False,
                            last_chunk=update.last_chunk or False,
                            endpoint=params.endpoint,
                            a2a_agent_name=params.a2a_agent_name,
                            endpoint=endpoint,
                            a2a_agent_name=a2a_agent_name,
                            context_id=effective_context_id,
                            turn_number=params.turn_number,
                            is_multiturn=params.is_multiturn,
                            from_task=params.from_task,
                            from_agent=params.from_agent,
                            turn_number=turn_number,
                            is_multiturn=is_multiturn,
                            from_task=from_task,
                            from_agent=from_agent,
                        ),
                    )

                    is_final_update = (
                        process_status_update(update, result_parts)
                        if isinstance(update, TaskStatusUpdateEvent)
                        else False
                    )
                    is_final_update = False
                    if isinstance(update, TaskStatusUpdateEvent):
                        is_final_update = update.final
                        if (
                            update.status
                            and update.status.message
                            and update.status.message.parts
                        ):
                            result_parts.extend(
                                part.root.text
                                for part in update.status.message.parts
                                if part.root.kind == "text" and part.root.text
                            )

                    if (
                        not is_final_update
@@ -366,68 +182,27 @@ class StreamingHandler:
                        a2a_task=a2a_task,
                        new_messages=new_messages,
                        agent_card=agent_card,
                        turn_number=params.turn_number,
                        is_multiturn=params.is_multiturn,
                        agent_role=params.agent_role,
                        turn_number=turn_number,
                        is_multiturn=is_multiturn,
                        agent_role=agent_role,
                        result_parts=result_parts,
                        endpoint=params.endpoint,
                        a2a_agent_name=params.a2a_agent_name,
                        from_task=params.from_task,
                        from_agent=params.from_agent,
                        endpoint=endpoint,
                        a2a_agent_name=a2a_agent_name,
                        from_task=from_task,
                        from_agent=from_agent,
                        is_final=is_final_update,
                    )
                    if final_result:
                        break

        except A2AClientHTTPError as e:
            if current_task_id:
                logger.info(
                    "Stream interrupted with HTTP error, attempting recovery",
                    extra={
                        "task_id": current_task_id,
                        "error": str(e),
                        "status_code": e.status_code,
                    },
                )
                recovery_kwargs = {k: v for k, v in kwargs.items() if k != "task_id"}
                recovered_result = (
                    await StreamingHandler._try_recover_from_interruption(
                        client=client,
                        task_id=current_task_id,
                        new_messages=new_messages,
                        agent_card=agent_card,
                        result_parts=result_parts,
                        **recovery_kwargs,
                    )
                )
                if recovered_result:
                    logger.info(
                        "Successfully recovered task after HTTP error",
                        extra={
                            "task_id": current_task_id,
                            "status": str(recovered_result.get("status")),
                        },
                    )
                    return recovered_result

                logger.warning(
                    "Failed to recover from HTTP error, returning failure",
                    extra={
                        "task_id": current_task_id,
                        "status_code": e.status_code,
                        "original_error": str(e),
                    },
                )

            error_msg = f"HTTP Error {e.status_code}: {e!s}"
            error_type = "http_error"
            status_code = e.status_code

            error_message = Message(
                role=Role.agent,
                message_id=str(uuid.uuid4()),
                parts=[Part(root=TextPart(text=error_msg))],
                context_id=params.context_id,
                context_id=context_id,
                task_id=task_id,
            )
            new_messages.append(error_message)
@@ -435,118 +210,32 @@ class StreamingHandler:
            crewai_event_bus.emit(
                agent_branch,
                A2AConnectionErrorEvent(
                    endpoint=params.endpoint,
                    endpoint=endpoint or "",
                    error=str(e),
                    error_type=error_type,
                    status_code=status_code,
                    a2a_agent_name=params.a2a_agent_name,
                    error_type="http_error",
                    status_code=e.status_code,
                    a2a_agent_name=a2a_agent_name,
                    operation="streaming",
                    context_id=params.context_id,
                    context_id=context_id,
                    task_id=task_id,
                    from_task=params.from_task,
                    from_agent=params.from_agent,
                    from_task=from_task,
                    from_agent=from_agent,
                ),
            )
            crewai_event_bus.emit(
                agent_branch,
                A2AResponseReceivedEvent(
                    response=error_msg,
                    turn_number=params.turn_number,
                    context_id=params.context_id,
                    is_multiturn=params.is_multiturn,
                    turn_number=turn_number,
                    context_id=context_id,
                    is_multiturn=is_multiturn,
                    status="failed",
                    final=True,
                    agent_role=params.agent_role,
                    endpoint=params.endpoint,
                    a2a_agent_name=params.a2a_agent_name,
                    from_task=params.from_task,
                    from_agent=params.from_agent,
                ),
            )
            return TaskStateResult(
                status=TaskState.failed,
                error=error_msg,
                history=new_messages,
            )

        except (asyncio.TimeoutError, asyncio.CancelledError, ConnectionError) as e:
            error_type = type(e).__name__.lower()
            if current_task_id:
                logger.info(
                    f"Stream interrupted with {error_type}, attempting recovery",
                    extra={"task_id": current_task_id, "error": str(e)},
                )
                recovery_kwargs = {k: v for k, v in kwargs.items() if k != "task_id"}
                recovered_result = (
                    await StreamingHandler._try_recover_from_interruption(
                        client=client,
                        task_id=current_task_id,
                        new_messages=new_messages,
                        agent_card=agent_card,
                        result_parts=result_parts,
                        **recovery_kwargs,
                    )
                )
                if recovered_result:
                    logger.info(
                        f"Successfully recovered task after {error_type}",
                        extra={
                            "task_id": current_task_id,
                            "status": str(recovered_result.get("status")),
                        },
                    )
                    return recovered_result

                logger.warning(
                    f"Failed to recover from {error_type}, returning failure",
                    extra={
                        "task_id": current_task_id,
                        "error_type": error_type,
                        "original_error": str(e),
                    },
                )

            error_msg = f"Connection error during streaming: {e!s}"
            status_code = None

            error_message = Message(
                role=Role.agent,
                message_id=str(uuid.uuid4()),
                parts=[Part(root=TextPart(text=error_msg))],
                context_id=params.context_id,
                task_id=task_id,
            )
            new_messages.append(error_message)

            crewai_event_bus.emit(
                agent_branch,
                A2AConnectionErrorEvent(
                    endpoint=params.endpoint,
                    error=str(e),
                    error_type=error_type,
                    status_code=status_code,
                    a2a_agent_name=params.a2a_agent_name,
                    operation="streaming",
                    context_id=params.context_id,
                    task_id=task_id,
                    from_task=params.from_task,
                    from_agent=params.from_agent,
                ),
            )
            crewai_event_bus.emit(
                agent_branch,
                A2AResponseReceivedEvent(
                    response=error_msg,
                    turn_number=params.turn_number,
                    context_id=params.context_id,
                    is_multiturn=params.is_multiturn,
                    status="failed",
                    final=True,
                    agent_role=params.agent_role,
                    endpoint=params.endpoint,
                    a2a_agent_name=params.a2a_agent_name,
                    from_task=params.from_task,
                    from_agent=params.from_agent,
                    agent_role=agent_role,
                    endpoint=endpoint,
                    a2a_agent_name=a2a_agent_name,
                    from_task=from_task,
                    from_agent=from_agent,
                ),
            )
            return TaskStateResult(
@@ -556,23 +245,13 @@ class StreamingHandler:
            )

        except Exception as e:
            logger.exception(
                "Unexpected error during streaming",
                extra={
                    "task_id": current_task_id,
                    "error_type": type(e).__name__,
                    "endpoint": params.endpoint,
                },
            )
            error_msg = f"Unexpected error during streaming: {type(e).__name__}: {e!s}"
            error_type = "unexpected_error"
            status_code = None
            error_msg = f"Unexpected error during streaming: {e!s}"

            error_message = Message(
                role=Role.agent,
                message_id=str(uuid.uuid4()),
                parts=[Part(root=TextPart(text=error_msg))],
                context_id=params.context_id,
                context_id=context_id,
                task_id=task_id,
            )
            new_messages.append(error_message)
@@ -580,32 +259,31 @@ class StreamingHandler:
            crewai_event_bus.emit(
                agent_branch,
                A2AConnectionErrorEvent(
                    endpoint=params.endpoint,
                    endpoint=endpoint or "",
                    error=str(e),
                    error_type=error_type,
                    status_code=status_code,
                    a2a_agent_name=params.a2a_agent_name,
                    error_type="unexpected_error",
                    a2a_agent_name=a2a_agent_name,
                    operation="streaming",
                    context_id=params.context_id,
                    context_id=context_id,
                    task_id=task_id,
                    from_task=params.from_task,
                    from_agent=params.from_agent,
                    from_task=from_task,
                    from_agent=from_agent,
                ),
            )
            crewai_event_bus.emit(
                agent_branch,
                A2AResponseReceivedEvent(
                    response=error_msg,
                    turn_number=params.turn_number,
                    context_id=params.context_id,
                    is_multiturn=params.is_multiturn,
                    turn_number=turn_number,
                    context_id=context_id,
                    is_multiturn=is_multiturn,
                    status="failed",
                    final=True,
                    agent_role=params.agent_role,
                    endpoint=params.endpoint,
                    a2a_agent_name=params.a2a_agent_name,
                    from_task=params.from_task,
                    from_agent=params.from_agent,
                    agent_role=agent_role,
                    endpoint=endpoint,
                    a2a_agent_name=a2a_agent_name,
                    from_task=from_task,
                    from_agent=from_agent,
                ),
            )
            return TaskStateResult(
@@ -623,15 +301,15 @@ class StreamingHandler:
                crewai_event_bus.emit(
                    agent_branch,
                    A2AConnectionErrorEvent(
                        endpoint=params.endpoint,
                        endpoint=endpoint or "",
                        error=str(close_error),
                        error_type="stream_close_error",
                        a2a_agent_name=params.a2a_agent_name,
                        a2a_agent_name=a2a_agent_name,
                        operation="stream_close",
                        context_id=params.context_id,
                        context_id=context_id,
                        task_id=task_id,
                        from_task=params.from_task,
                        from_agent=params.from_agent,
                        from_task=from_task,
                        from_agent=from_agent,
                    ),
                )

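The recovery path removed above retries `client.resubscribe` with exponential backoff: with `MAX_RESUBSCRIBE_ATTEMPTS = 3` and `RESUBSCRIBE_BACKOFF_BASE = 1.0`, attempt `n` sleeps `1.0 * 2**n` seconds before resubscribing (no sleep before the first attempt, then 2 s and 4 s). A minimal standalone sketch of the same retry shape, detached from the A2A types:

```python
# Sketch of the backoff schedule used by _try_recover_from_interruption.
import asyncio
from collections.abc import Awaitable, Callable
from typing import TypeVar

T = TypeVar("T")

MAX_RESUBSCRIBE_ATTEMPTS = 3
RESUBSCRIBE_BACKOFF_BASE = 1.0


async def retry_with_backoff(operation: Callable[[], Awaitable[T]]) -> T | None:
    """Retry an async operation; sleep base * 2**attempt before retries 1..n-1."""
    for attempt in range(MAX_RESUBSCRIBE_ATTEMPTS):
        try:
            if attempt > 0:
                await asyncio.sleep(RESUBSCRIBE_BACKOFF_BASE * (2**attempt))
            return await operation()
        except Exception:
            if attempt == MAX_RESUBSCRIBE_ATTEMPTS - 1:
                return None  # caller falls back to the original error path
    return None
```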
@@ -1,28 +0,0 @@
"""Common parameter extraction for streaming handlers."""

from __future__ import annotations

from a2a.types import TaskStatusUpdateEvent


def process_status_update(
    update: TaskStatusUpdateEvent,
    result_parts: list[str],
) -> bool:
    """Process a status update event and extract text parts.

    Args:
        update: The status update event.
        result_parts: List to append text parts to (modified in place).

    Returns:
        True if this is a final update, False otherwise.
    """
    is_final = update.final
    if update.status and update.status.message and update.status.message.parts:
        result_parts.extend(
            part.root.text
            for part in update.status.message.parts
            if part.root.kind == "text" and part.root.text
        )
    return is_final
@@ -5,7 +5,6 @@ from __future__ import annotations
import asyncio
from collections.abc import MutableMapping
from functools import lru_cache
import ssl
import time
from types import MethodType
from typing import TYPE_CHECKING
@@ -16,7 +15,7 @@ from aiocache import cached  # type: ignore[import-untyped]
from aiocache.serializers import PickleSerializer  # type: ignore[import-untyped]
import httpx

from crewai.a2a.auth.client_schemes import APIKeyAuth, HTTPDigestAuth
from crewai.a2a.auth.schemas import APIKeyAuth, HTTPDigestAuth
from crewai.a2a.auth.utils import (
    _auth_store,
    configure_auth_client,
@@ -33,51 +32,11 @@ from crewai.events.types.a2a_events import (


if TYPE_CHECKING:
    from crewai.a2a.auth.client_schemes import ClientAuthScheme
    from crewai.a2a.auth.schemas import AuthScheme
    from crewai.agent import Agent
    from crewai.task import Task


def _get_tls_verify(auth: ClientAuthScheme | None) -> ssl.SSLContext | bool | str:
    """Get TLS verify parameter from auth scheme.

    Args:
        auth: Optional authentication scheme with TLS config.

    Returns:
        SSL context, CA cert path, True for default verification,
        or False if verification disabled.
    """
    if auth and auth.tls:
        return auth.tls.get_httpx_ssl_context()
    return True


async def _prepare_auth_headers(
    auth: ClientAuthScheme | None,
    timeout: int,
) -> tuple[MutableMapping[str, str], ssl.SSLContext | bool | str]:
    """Prepare authentication headers and TLS verification settings.

    Args:
        auth: Optional authentication scheme.
        timeout: Request timeout in seconds.

    Returns:
        Tuple of (headers dict, TLS verify setting).
    """
    headers: MutableMapping[str, str] = {}
    verify = _get_tls_verify(auth)
    if auth:
        async with httpx.AsyncClient(
            timeout=timeout, verify=verify
        ) as temp_auth_client:
            if isinstance(auth, (HTTPDigestAuth, APIKeyAuth)):
                configure_auth_client(auth, temp_auth_client)
            headers = await auth.apply_auth(temp_auth_client, {})
    return headers, verify


def _get_server_config(agent: Agent) -> A2AServerConfig | None:
    """Get A2AServerConfig from an agent's a2a configuration.

@@ -100,7 +59,7 @@ def _get_server_config(agent: Agent) -> A2AServerConfig | None:

def fetch_agent_card(
    endpoint: str,
    auth: ClientAuthScheme | None = None,
    auth: AuthScheme | None = None,
    timeout: int = 30,
    use_cache: bool = True,
    cache_ttl: int = 300,
@@ -109,7 +68,7 @@ def fetch_agent_card(

    Args:
        endpoint: A2A agent endpoint URL (AgentCard URL).
        auth: Optional ClientAuthScheme for authentication.
        auth: Optional AuthScheme for authentication.
        timeout: Request timeout in seconds.
        use_cache: Whether to use caching (default True).
        cache_ttl: Cache TTL in seconds (default 300 = 5 minutes).
@@ -131,10 +90,10 @@ def fetch_agent_card(
                "_authorization_callback",
            }
        )
        auth_hash = _auth_store.compute_key(type(auth).__name__, auth_data)
        auth_hash = hash((type(auth).__name__, auth_data))
    else:
        auth_hash = _auth_store.compute_key("none", "")
    _auth_store.set(auth_hash, auth)
        auth_hash = 0
    _auth_store[auth_hash] = auth
    ttl_hash = int(time.time() // cache_ttl)
    return _fetch_agent_card_cached(endpoint, auth_hash, timeout, ttl_hash)

@@ -150,7 +109,7 @@ def fetch_agent_card(

async def afetch_agent_card(
    endpoint: str,
    auth: ClientAuthScheme | None = None,
    auth: AuthScheme | None = None,
    timeout: int = 30,
    use_cache: bool = True,
) -> AgentCard:
@@ -160,7 +119,7 @@ async def afetch_agent_card(

    Args:
        endpoint: A2A agent endpoint URL (AgentCard URL).
        auth: Optional ClientAuthScheme for authentication.
        auth: Optional AuthScheme for authentication.
        timeout: Request timeout in seconds.
        use_cache: Whether to use caching (default True).

@@ -181,10 +140,10 @@ async def afetch_agent_card(
                "_authorization_callback",
            }
        )
        auth_hash = _auth_store.compute_key(type(auth).__name__, auth_data)
        auth_hash = hash((type(auth).__name__, auth_data))
    else:
        auth_hash = _auth_store.compute_key("none", "")
    _auth_store.set(auth_hash, auth)
        auth_hash = 0
    _auth_store[auth_hash] = auth
    agent_card: AgentCard = await _afetch_agent_card_cached(
        endpoint, auth_hash, timeout
    )
@@ -196,7 +155,7 @@ async def afetch_agent_card(
@lru_cache()
def _fetch_agent_card_cached(
    endpoint: str,
    auth_hash: str,
    auth_hash: int,
    timeout: int,
    _ttl_hash: int,
) -> AgentCard:
@@ -216,7 +175,7 @@ def _fetch_agent_card_cached(
@cached(ttl=300, serializer=PickleSerializer())  # type: ignore[untyped-decorator]
async def _afetch_agent_card_cached(
    endpoint: str,
    auth_hash: str,
    auth_hash: int,
    timeout: int,
) -> AgentCard:
    """Cached async implementation of AgentCard fetching."""
@@ -226,7 +185,7 @@ async def _afetch_agent_card_cached(

async def _afetch_agent_card_impl(
    endpoint: str,
    auth: ClientAuthScheme | None,
    auth: AuthScheme | None,
    timeout: int,
) -> AgentCard:
    """Internal async implementation of AgentCard fetching."""
@@ -238,17 +197,16 @@ async def _afetch_agent_card_impl(
    else:
        url_parts = endpoint.split("/", 3)
        base_url = f"{url_parts[0]}//{url_parts[2]}"
        agent_card_path = (
            f"/{url_parts[3]}"
            if len(url_parts) > 3 and url_parts[3]
            else "/.well-known/agent-card.json"
        )
        agent_card_path = f"/{url_parts[3]}" if len(url_parts) > 3 else "/"

    headers, verify = await _prepare_auth_headers(auth, timeout)
    headers: MutableMapping[str, str] = {}
    if auth:
        async with httpx.AsyncClient(timeout=timeout) as temp_auth_client:
            if isinstance(auth, (HTTPDigestAuth, APIKeyAuth)):
                configure_auth_client(auth, temp_auth_client)
            headers = await auth.apply_auth(temp_auth_client, {})

    async with httpx.AsyncClient(
        timeout=timeout, headers=headers, verify=verify
    ) as temp_client:
    async with httpx.AsyncClient(timeout=timeout, headers=headers) as temp_client:
        if auth and isinstance(auth, (HTTPDigestAuth, APIKeyAuth)):
            configure_auth_client(auth, temp_client)

@@ -476,7 +434,6 @@ def _agent_to_agent_card(agent: Agent, url: str) -> AgentCard:
    """Generate an A2A AgentCard from an Agent instance.

    Uses A2AServerConfig values when available, falling back to agent properties.
    If signing_config is provided, the card will be signed with JWS.

    Args:
        agent: The Agent instance to generate a card for.
@@ -485,8 +442,6 @@ def _agent_to_agent_card(agent: Agent, url: str) -> AgentCard:
    Returns:
        AgentCard describing the agent's capabilities.
    """
    from crewai.a2a.utils.agent_card_signing import sign_agent_card

    server_config = _get_server_config(agent) or A2AServerConfig()

    name = server_config.name or agent.role
@@ -517,31 +472,15 @@ def _agent_to_agent_card(agent: Agent, url: str) -> AgentCard:
            )
        )

    capabilities = server_config.capabilities
    if server_config.server_extensions:
        from crewai.a2a.extensions.server import ServerExtensionRegistry

        registry = ServerExtensionRegistry(server_config.server_extensions)
        ext_list = registry.get_agent_extensions()

        existing_exts = list(capabilities.extensions) if capabilities.extensions else []
        existing_uris = {e.uri for e in existing_exts}
        for ext in ext_list:
            if ext.uri not in existing_uris:
                existing_exts.append(ext)

        capabilities = capabilities.model_copy(update={"extensions": existing_exts})

    card = AgentCard(
    return AgentCard(
        name=name,
        description=description,
        url=server_config.url or url,
        version=server_config.version,
        capabilities=capabilities,
        capabilities=server_config.capabilities,
        default_input_modes=server_config.default_input_modes,
        default_output_modes=server_config.default_output_modes,
        skills=skills,
        preferred_transport=server_config.transport.preferred,
        protocol_version=server_config.protocol_version,
        provider=server_config.provider,
        documentation_url=server_config.documentation_url,
@@ -550,21 +489,9 @@ def _agent_to_agent_card(agent: Agent, url: str) -> AgentCard:
        security=server_config.security,
        security_schemes=server_config.security_schemes,
        supports_authenticated_extended_card=server_config.supports_authenticated_extended_card,
        signatures=server_config.signatures,
    )

    if server_config.signing_config:
        signature = sign_agent_card(
            card,
            private_key=server_config.signing_config.get_private_key(),
            key_id=server_config.signing_config.key_id,
            algorithm=server_config.signing_config.algorithm,
        )
        card = card.model_copy(update={"signatures": [signature]})
    elif server_config.signatures:
        card = card.model_copy(update={"signatures": server_config.signatures})

    return card


def inject_a2a_server_methods(agent: Agent) -> None:
    """Inject A2A server methods onto an Agent instance.

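Worth noting in the hunks above: `fetch_agent_card` gets time-based expiry out of a plain `lru_cache` by passing `ttl_hash = int(time.time() // cache_ttl)` as an extra argument. The value is constant within one TTL window and changes at the window boundary, which forces a cache miss and a fresh fetch. A standalone sketch of the same trick (the fetch function and return type are placeholders):

```python
import time
from functools import lru_cache


def expensive_fetch(endpoint: str) -> str:
    # Placeholder for the real network call (hypothetical).
    return f"card-for-{endpoint}"


@lru_cache(maxsize=128)
def _fetch_cached(endpoint: str, _ttl_hash: int) -> str:
    # _ttl_hash is unused in the body; it only partitions cache entries
    # by time window so stale entries stop being hit.
    return expensive_fetch(endpoint)


def fetch(endpoint: str, cache_ttl: int = 300) -> str:
    """Cached fetch whose entries expire at cache_ttl-second window boundaries."""
    ttl_hash = int(time.time() // cache_ttl)
    return _fetch_cached(endpoint, ttl_hash)
```

One design caveat of this pattern: entries expire at window boundaries rather than exactly `cache_ttl` seconds after insertion, so an entry created near the end of a window expires almost immediately.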
@@ -1,236 +0,0 @@
"""AgentCard JWS signing utilities.

This module provides functions for signing and verifying AgentCards using
JSON Web Signatures (JWS) as per RFC 7515. Signed agent cards allow clients
to verify the authenticity and integrity of agent card information.

Example:
    >>> from crewai.a2a.utils.agent_card_signing import sign_agent_card
    >>> signature = sign_agent_card(agent_card, private_key_pem, key_id="key-1")
    >>> card_with_sig = card.model_copy(update={"signatures": [signature]})
"""

from __future__ import annotations

import base64
import json
import logging
from typing import Any, Literal

from a2a.types import AgentCard, AgentCardSignature
import jwt
from pydantic import SecretStr


logger = logging.getLogger(__name__)


SigningAlgorithm = Literal[
    "RS256", "RS384", "RS512", "ES256", "ES384", "ES512", "PS256", "PS384", "PS512"
]


def _normalize_private_key(private_key: str | bytes | SecretStr) -> bytes:
    """Normalize private key to bytes format.

    Args:
        private_key: PEM-encoded private key as string, bytes, or SecretStr.

    Returns:
        Private key as bytes.
    """
    if isinstance(private_key, SecretStr):
        private_key = private_key.get_secret_value()
    if isinstance(private_key, str):
        private_key = private_key.encode()
    return private_key


def _serialize_agent_card(agent_card: AgentCard) -> str:
    """Serialize AgentCard to canonical JSON for signing.

    Excludes the signatures field to avoid circular reference during signing.
    Uses sorted keys and compact separators for deterministic output.

    Args:
        agent_card: The AgentCard to serialize.

    Returns:
        Canonical JSON string representation.
    """
    card_dict = agent_card.model_dump(exclude={"signatures"}, exclude_none=True)
    return json.dumps(card_dict, sort_keys=True, separators=(",", ":"))


def _base64url_encode(data: bytes | str) -> str:
    """Encode data to URL-safe base64 without padding.

    Args:
        data: Data to encode.

    Returns:
        URL-safe base64 encoded string without padding.
    """
    if isinstance(data, str):
        data = data.encode()
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")


def sign_agent_card(
    agent_card: AgentCard,
    private_key: str | bytes | SecretStr,
    key_id: str | None = None,
    algorithm: SigningAlgorithm = "RS256",
) -> AgentCardSignature:
    """Sign an AgentCard using JWS (RFC 7515).

    Creates a detached JWS signature for the AgentCard. The signature covers
    all fields except the signatures field itself.

    Args:
        agent_card: The AgentCard to sign.
        private_key: PEM-encoded private key (RSA, EC, or RSA-PSS).
        key_id: Optional key identifier for the JWS header (kid claim).
        algorithm: Signing algorithm (RS256, ES256, PS256, etc.).

    Returns:
        AgentCardSignature with protected header and signature.

    Raises:
        jwt.exceptions.InvalidKeyError: If the private key is invalid.
        ValueError: If the algorithm is not supported for the key type.

    Example:
        >>> signature = sign_agent_card(
        ...     agent_card,
        ...     private_key_pem="-----BEGIN PRIVATE KEY-----...",
        ...     key_id="my-key-id",
        ... )
    """
    key_bytes = _normalize_private_key(private_key)
    payload = _serialize_agent_card(agent_card)

    protected_header: dict[str, Any] = {"typ": "JWS"}
    if key_id:
        protected_header["kid"] = key_id

    jws_token = jwt.api_jws.encode(
        payload.encode(),
        key_bytes,
        algorithm=algorithm,
        headers=protected_header,
    )

    parts = jws_token.split(".")
    protected_b64 = parts[0]
    signature_b64 = parts[2]

    header: dict[str, Any] | None = None
    if key_id:
        header = {"kid": key_id}

    return AgentCardSignature(
        protected=protected_b64,
        signature=signature_b64,
        header=header,
    )


def verify_agent_card_signature(
    agent_card: AgentCard,
    signature: AgentCardSignature,
    public_key: str | bytes,
    algorithms: list[str] | None = None,
) -> bool:
    """Verify an AgentCard JWS signature.

    Validates that the signature was created with the corresponding private key
    and that the AgentCard content has not been modified.

    Args:
        agent_card: The AgentCard to verify.
        signature: The AgentCardSignature to validate.
        public_key: PEM-encoded public key (RSA, EC, or RSA-PSS).
        algorithms: List of allowed algorithms. Defaults to common asymmetric algorithms.

    Returns:
        True if signature is valid, False otherwise.

    Example:
        >>> is_valid = verify_agent_card_signature(
        ...     agent_card, signature, public_key_pem="-----BEGIN PUBLIC KEY-----..."
        ... )
    """
    if algorithms is None:
        algorithms = [
            "RS256",
            "RS384",
            "RS512",
            "ES256",
            "ES384",
            "ES512",
            "PS256",
            "PS384",
            "PS512",
        ]

    if isinstance(public_key, str):
        public_key = public_key.encode()

    payload = _serialize_agent_card(agent_card)
    payload_b64 = _base64url_encode(payload)
    jws_token = f"{signature.protected}.{payload_b64}.{signature.signature}"

    try:
        jwt.api_jws.decode(
            jws_token,
            public_key,
            algorithms=algorithms,
        )
        return True
    except jwt.InvalidSignatureError:
        logger.debug(
            "AgentCard signature verification failed",
            extra={"reason": "invalid_signature"},
        )
        return False
    except jwt.DecodeError as e:
        logger.debug(
            "AgentCard signature verification failed",
            extra={"reason": "decode_error", "error": str(e)},
        )
        return False
    except jwt.InvalidAlgorithmError as e:
        logger.debug(
            "AgentCard signature verification failed",
            extra={"reason": "algorithm_error", "error": str(e)},
        )
        return False


def get_key_id_from_signature(signature: AgentCardSignature) -> str | None:
    """Extract the key ID (kid) from an AgentCardSignature.

    Checks both the unprotected header and the protected header for the kid claim.

    Args:
        signature: The AgentCardSignature to extract from.

    Returns:
        The key ID if present, None otherwise.
    """
    if signature.header and "kid" in signature.header:
        kid: str = signature.header["kid"]
        return kid

    try:
        protected = signature.protected
        padding_needed = 4 - (len(protected) % 4)
        if padding_needed != 4:
            protected += "=" * padding_needed

        protected_json = base64.urlsafe_b64decode(protected).decode()
        protected_header: dict[str, Any] = json.loads(protected_json)
        return protected_header.get("kid")
    except (ValueError, json.JSONDecodeError):
        return None
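For reference, a sign-and-verify round trip over the API shown above (on the side of the compare where the module exists). The sketch assumes the `cryptography` package for key generation and an existing `agent_card: AgentCard` in scope; any PEM-encoded RSA or EC key pair would work equally well.

```python
# Usage sketch; assumes `agent_card: AgentCard` is already constructed.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

from crewai.a2a.utils.agent_card_signing import (
    sign_agent_card,
    verify_agent_card_signature,
)

# Generate an ephemeral P-256 key pair and export both halves as PEM.
private_key = ec.generate_private_key(ec.SECP256R1())
private_pem = private_key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption(),
)
public_pem = private_key.public_key().public_bytes(
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)

# ES256 pairs with the P-256 key generated above.
signature = sign_agent_card(agent_card, private_pem, key_id="key-1", algorithm="ES256")
card_with_sig = agent_card.model_copy(update={"signatures": [signature]})

assert verify_agent_card_signature(agent_card, signature, public_pem)
```

Because `_serialize_agent_card` excludes the `signatures` field, verification is performed against the unsigned card, which is why the assert above passes `agent_card` rather than `card_with_sig`.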
@@ -1,339 +0,0 @@
"""Content type negotiation for A2A protocol.

This module handles negotiation of input/output MIME types between A2A clients
and servers based on AgentCard capabilities.
"""

from __future__ import annotations

from dataclasses import dataclass
from typing import TYPE_CHECKING, Annotated, Final, Literal, cast

from a2a.types import Part

from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.a2a_events import A2AContentTypeNegotiatedEvent


if TYPE_CHECKING:
    from a2a.types import AgentCard, AgentSkill


TEXT_PLAIN: Literal["text/plain"] = "text/plain"
APPLICATION_JSON: Literal["application/json"] = "application/json"
IMAGE_PNG: Literal["image/png"] = "image/png"
IMAGE_JPEG: Literal["image/jpeg"] = "image/jpeg"
IMAGE_WILDCARD: Literal["image/*"] = "image/*"
APPLICATION_PDF: Literal["application/pdf"] = "application/pdf"
APPLICATION_OCTET_STREAM: Literal["application/octet-stream"] = (
    "application/octet-stream"
)

DEFAULT_CLIENT_INPUT_MODES: Final[list[Literal["text/plain", "application/json"]]] = [
    TEXT_PLAIN,
    APPLICATION_JSON,
]
DEFAULT_CLIENT_OUTPUT_MODES: Final[list[Literal["text/plain", "application/json"]]] = [
    TEXT_PLAIN,
    APPLICATION_JSON,
]


@dataclass
class NegotiatedContentTypes:
    """Result of content type negotiation."""

    input_modes: Annotated[list[str], "Negotiated input MIME types the client can send"]
    output_modes: Annotated[
        list[str], "Negotiated output MIME types the server will produce"
    ]
    effective_input_modes: Annotated[list[str], "Server's effective input modes"]
    effective_output_modes: Annotated[list[str], "Server's effective output modes"]
    skill_name: Annotated[
        str | None, "Skill name if negotiation was skill-specific"
    ] = None


class ContentTypeNegotiationError(Exception):
    """Raised when no compatible content types can be negotiated."""

    def __init__(
        self,
        client_input_modes: list[str],
        client_output_modes: list[str],
        server_input_modes: list[str],
        server_output_modes: list[str],
        direction: str = "both",
        message: str | None = None,
    ) -> None:
        self.client_input_modes = client_input_modes
        self.client_output_modes = client_output_modes
        self.server_input_modes = server_input_modes
        self.server_output_modes = server_output_modes
        self.direction = direction

        if message is None:
            if direction == "input":
                message = (
                    f"No compatible input content types. "
                    f"Client supports: {client_input_modes}, "
                    f"Server accepts: {server_input_modes}"
                )
            elif direction == "output":
                message = (
                    f"No compatible output content types. "
                    f"Client accepts: {client_output_modes}, "
                    f"Server produces: {server_output_modes}"
                )
            else:
                message = (
                    f"No compatible content types. "
                    f"Input - Client: {client_input_modes}, Server: {server_input_modes}. "
                    f"Output - Client: {client_output_modes}, Server: {server_output_modes}"
                )

        super().__init__(message)


def _normalize_mime_type(mime_type: str) -> str:
    """Normalize MIME type for comparison (lowercase, strip whitespace)."""
    return mime_type.lower().strip()


def _mime_types_compatible(client_type: str, server_type: str) -> bool:
    """Check if two MIME types are compatible.

    Handles wildcards like image/* matching image/png.
    """
    client_normalized = _normalize_mime_type(client_type)
    server_normalized = _normalize_mime_type(server_type)

    if client_normalized == server_normalized:
        return True

    if "*" in client_normalized or "*" in server_normalized:
        client_parts = client_normalized.split("/")
        server_parts = server_normalized.split("/")

        if len(client_parts) == 2 and len(server_parts) == 2:
            type_match = (
                client_parts[0] == server_parts[0]
                or client_parts[0] == "*"
                or server_parts[0] == "*"
            )
            subtype_match = (
                client_parts[1] == server_parts[1]
                or client_parts[1] == "*"
                or server_parts[1] == "*"
            )
            return type_match and subtype_match

    return False


def _find_compatible_modes(
    client_modes: list[str], server_modes: list[str]
) -> list[str]:
    """Find compatible MIME types between client and server.

    Returns modes in client preference order.
    """
    compatible = []
    for client_mode in client_modes:
        for server_mode in server_modes:
            if _mime_types_compatible(client_mode, server_mode):
                if "*" in client_mode and "*" not in server_mode:
                    if server_mode not in compatible:
                        compatible.append(server_mode)
                else:
                    if client_mode not in compatible:
                        compatible.append(client_mode)
                    break
    return compatible


def _get_effective_modes(
    agent_card: AgentCard,
    skill_name: str | None = None,
) -> tuple[list[str], list[str], AgentSkill | None]:
    """Get effective input/output modes from agent card.

    If skill_name is provided and the skill has custom modes, those are used.
    Otherwise, falls back to agent card defaults.
    """
    skill: AgentSkill | None = None

    if skill_name and agent_card.skills:
        for s in agent_card.skills:
            if s.name == skill_name or s.id == skill_name:
                skill = s
                break

    if skill:
        input_modes = (
            skill.input_modes if skill.input_modes else agent_card.default_input_modes
        )
        output_modes = (
            skill.output_modes
            if skill.output_modes
            else agent_card.default_output_modes
        )
    else:
        input_modes = agent_card.default_input_modes
        output_modes = agent_card.default_output_modes

    return input_modes, output_modes, skill


def negotiate_content_types(
    agent_card: AgentCard,
    client_input_modes: list[str] | None = None,
    client_output_modes: list[str] | None = None,
    skill_name: str | None = None,
    emit_event: bool = True,
    endpoint: str | None = None,
    a2a_agent_name: str | None = None,
    strict: bool = False,
) -> NegotiatedContentTypes:
    """Negotiate content types between client and server.

    Args:
        agent_card: The remote agent's card with capability info.
        client_input_modes: MIME types the client can send. Defaults to text/plain and application/json.
        client_output_modes: MIME types the client can accept. Defaults to text/plain and application/json.
        skill_name: Optional skill to use for mode lookup.
        emit_event: Whether to emit a content type negotiation event.
        endpoint: Agent endpoint (for event metadata).
        a2a_agent_name: Agent name (for event metadata).
        strict: If True, raises error when no compatible types found.
            If False, returns empty lists for incompatible directions.

    Returns:
        NegotiatedContentTypes with compatible input and output modes.

    Raises:
        ContentTypeNegotiationError: If strict=True and no compatible types found.
    """
    if client_input_modes is None:
        client_input_modes = cast(list[str], DEFAULT_CLIENT_INPUT_MODES.copy())
    if client_output_modes is None:
        client_output_modes = cast(list[str], DEFAULT_CLIENT_OUTPUT_MODES.copy())

    server_input_modes, server_output_modes, skill = _get_effective_modes(
        agent_card, skill_name
    )

    compatible_input = _find_compatible_modes(client_input_modes, server_input_modes)
    compatible_output = _find_compatible_modes(client_output_modes, server_output_modes)

    if strict:
        if not compatible_input and not compatible_output:
            raise ContentTypeNegotiationError(
                client_input_modes=client_input_modes,
                client_output_modes=client_output_modes,
                server_input_modes=server_input_modes,
                server_output_modes=server_output_modes,
            )
        if not compatible_input:
            raise ContentTypeNegotiationError(
                client_input_modes=client_input_modes,
                client_output_modes=client_output_modes,
                server_input_modes=server_input_modes,
                server_output_modes=server_output_modes,
                direction="input",
            )
        if not compatible_output:
            raise ContentTypeNegotiationError(
                client_input_modes=client_input_modes,
                client_output_modes=client_output_modes,
                server_input_modes=server_input_modes,
                server_output_modes=server_output_modes,
                direction="output",
            )

    result = NegotiatedContentTypes(
        input_modes=compatible_input,
        output_modes=compatible_output,
        effective_input_modes=server_input_modes,
        effective_output_modes=server_output_modes,
        skill_name=skill.name if skill else None,
    )

    if emit_event:
        crewai_event_bus.emit(
            None,
            A2AContentTypeNegotiatedEvent(
                endpoint=endpoint or agent_card.url,
                a2a_agent_name=a2a_agent_name or agent_card.name,
                skill_name=skill_name,
                client_input_modes=client_input_modes,
                client_output_modes=client_output_modes,
                server_input_modes=server_input_modes,
                server_output_modes=server_output_modes,
                negotiated_input_modes=compatible_input,
                negotiated_output_modes=compatible_output,
                negotiation_success=bool(compatible_input and compatible_output),
            ),
        )

    return result


def validate_content_type(
    content_type: str,
    allowed_modes: list[str],
) -> bool:
    """Validate that a content type is allowed by a list of modes.

    Args:
        content_type: The MIME type to validate.
        allowed_modes: List of allowed MIME types (may include wildcards).

    Returns:
        True if content_type is compatible with any allowed mode.
    """
    for mode in allowed_modes:
        if _mime_types_compatible(content_type, mode):
            return True
    return False


def get_part_content_type(part: Part) -> str:
    """Extract MIME type from an A2A Part.

    Args:
        part: A Part object containing TextPart, DataPart, or FilePart.

    Returns:
        The MIME type string for this part.
    """
    root = part.root
    if root.kind == "text":
        return TEXT_PLAIN
    if root.kind == "data":
        return APPLICATION_JSON
    if root.kind == "file":
        return root.file.mime_type or APPLICATION_OCTET_STREAM
    return APPLICATION_OCTET_STREAM


def validate_message_parts(
    parts: list[Part],
    allowed_modes: list[str],
) -> list[str]:
    """Validate that all message parts have allowed content types.

    Args:
        parts: List of Parts from the incoming message.
        allowed_modes: List of allowed MIME types (from default_input_modes).

    Returns:
        List of invalid content types found (empty if all valid).
    """
    invalid_types: list[str] = []
    for part in parts:
        content_type = get_part_content_type(part)
        if not validate_content_type(content_type, allowed_modes):
            if content_type not in invalid_types:
                invalid_types.append(content_type)
    return invalid_types
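A usage sketch for the negotiation helper above, showing the intersection semantics. The `AgentCard` field values are illustrative, and the minimal required-field set follows the a2a-sdk models as used elsewhere in this diff; treat the exact constructor shape as an assumption.

```python
# Illustrative negotiation between a client and a hypothetical remote card.
from a2a.types import AgentCapabilities, AgentCard

from crewai.a2a.utils.content_type import negotiate_content_types

card = AgentCard(
    name="docs-agent",
    description="Example remote agent",
    url="https://agent.example.com",  # hypothetical endpoint
    version="1.0.0",
    capabilities=AgentCapabilities(),
    default_input_modes=["text/plain", "application/pdf"],
    default_output_modes=["application/json"],
    skills=[],
)

negotiated = negotiate_content_types(
    card,
    client_input_modes=["text/plain"],
    client_output_modes=["application/json", "text/plain"],
    emit_event=False,  # skip the event bus in a standalone example
)
print(negotiated.input_modes)   # ["text/plain"]  (intersection, client order)
print(negotiated.output_modes)  # ["application/json"]
```

With `strict=False` (the default), an empty intersection in either direction simply yields an empty list; pass `strict=True` to get a `ContentTypeNegotiationError` naming the incompatible direction instead.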
@@ -3,18 +3,14 @@
from __future__ import annotations

import asyncio
import base64
from collections.abc import AsyncIterator, Callable, MutableMapping
from collections.abc import AsyncIterator, MutableMapping
from contextlib import asynccontextmanager
import logging
from typing import TYPE_CHECKING, Any, Final, Literal
from typing import TYPE_CHECKING, Any, Literal
import uuid

from a2a.client import Client, ClientConfig, ClientFactory
from a2a.types import (
    AgentCard,
    FilePart,
    FileWithBytes,
    Message,
    Part,
    PushNotificationConfig as A2APushNotificationConfig,
@@ -24,24 +20,18 @@ from a2a.types import (
import httpx
from pydantic import BaseModel

from crewai.a2a.auth.client_schemes import APIKeyAuth, HTTPDigestAuth
from crewai.a2a.auth.schemas import APIKeyAuth, HTTPDigestAuth
from crewai.a2a.auth.utils import (
    _auth_store,
    configure_auth_client,
    validate_auth_against_agent_card,
)
from crewai.a2a.config import ClientTransportConfig, GRPCClientConfig
from crewai.a2a.extensions.registry import (
    ExtensionsMiddleware,
    validate_required_extensions,
)
from crewai.a2a.task_helpers import TaskStateResult
from crewai.a2a.types import (
    HANDLER_REGISTRY,
    HandlerType,
    PartsDict,
    PartsMetadataDict,
    TransportType,
)
from crewai.a2a.updates import (
    PollingConfig,
@@ -49,20 +39,7 @@ from crewai.a2a.updates import (
    StreamingHandler,
    UpdateConfig,
)
from crewai.a2a.utils.agent_card import (
    _afetch_agent_card_cached,
    _get_tls_verify,
    _prepare_auth_headers,
)
from crewai.a2a.utils.content_type import (
    DEFAULT_CLIENT_OUTPUT_MODES,
    negotiate_content_types,
)
from crewai.a2a.utils.transport import (
    NegotiatedTransport,
    TransportNegotiationError,
    negotiate_transport,
)
from crewai.a2a.utils.agent_card import _afetch_agent_card_cached
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.a2a_events import (
    A2AConversationStartedEvent,
@@ -72,48 +49,10 @@ from crewai.events.types.a2a_events import (
)


logger = logging.getLogger(__name__)


if TYPE_CHECKING:
    from a2a.types import Message

    from crewai.a2a.auth.client_schemes import ClientAuthScheme


_DEFAULT_TRANSPORT: Final[TransportType] = "JSONRPC"


def _create_file_parts(input_files: dict[str, Any] | None) -> list[Part]:
    """Convert FileInput dictionary to FilePart objects.

    Args:
        input_files: Dictionary mapping names to FileInput objects.

    Returns:
        List of Part objects containing FilePart data.
    """
    if not input_files:
        return []

    try:
        import crewai_files  # noqa: F401
    except ImportError:
        logger.debug("crewai_files not installed, skipping file parts")
        return []

    parts: list[Part] = []
    for name, file_input in input_files.items():
        content_bytes = file_input.read()
        content_base64 = base64.b64encode(content_bytes).decode()
        file_with_bytes = FileWithBytes(
            bytes=content_base64,
            mimeType=file_input.content_type,
            name=file_input.filename or name,
        )
        parts.append(Part(root=FilePart(file=file_with_bytes)))

    return parts
from crewai.a2a.auth.schemas import AuthScheme

def get_handler(config: UpdateConfig | None) -> HandlerType:
@@ -132,7 +71,8 @@ def get_handler(config: UpdateConfig | None) -> HandlerType:


def execute_a2a_delegation(
    endpoint: str,
    auth: ClientAuthScheme | None,
    transport_protocol: Literal["JSONRPC", "GRPC", "HTTP+JSON"],
    auth: AuthScheme | None,
    timeout: int,
    task_description: str,
    context: str | None = None,
@@ -151,24 +91,32 @@ def execute_a2a_delegation(
    from_task: Any | None = None,
    from_agent: Any | None = None,
    skill_id: str | None = None,
    client_extensions: list[str] | None = None,
    transport: ClientTransportConfig | None = None,
    accepted_output_modes: list[str] | None = None,
    input_files: dict[str, Any] | None = None,
) -> TaskStateResult:
    """Execute a task delegation to a remote A2A agent synchronously.

    WARNING: This function blocks the entire thread by creating and running a new
    event loop. Prefer using 'await aexecute_a2a_delegation()' in async contexts
    for better performance and resource efficiency.

    This is a synchronous wrapper around aexecute_a2a_delegation that creates a
    new event loop to run the async implementation. It is provided for compatibility
    with synchronous code paths only.
    This is the sync wrapper around aexecute_a2a_delegation. For async contexts,
    use aexecute_a2a_delegation directly.

    Args:
        endpoint: A2A agent endpoint URL (AgentCard URL).
        auth: Optional ClientAuthScheme for authentication.
        endpoint: A2A agent endpoint URL (AgentCard URL)
        transport_protocol: Optional A2A transport protocol (grpc, jsonrpc, http+json)
        auth: Optional AuthScheme for authentication (Bearer, OAuth2, API Key, HTTP Basic/Digest)
        timeout: Request timeout in seconds
        task_description: The task to delegate
        context: Optional context information
        context_id: Context ID for correlating messages/tasks
        task_id: Specific task identifier
        reference_task_ids: List of related task IDs
        metadata: Additional metadata (external_id, request_id, etc.)
        extensions: Protocol extensions for custom fields
        conversation_history: Previous Message objects from conversation
        agent_id: Agent identifier for logging
        agent_role: Role of the CrewAI agent delegating the task
        agent_branch: Optional agent tree branch for logging
        response_model: Optional Pydantic model for structured outputs
        turn_number: Optional turn number for multi-turn conversations
        endpoint: A2A agent endpoint URL.
        auth: Optional AuthScheme for authentication.
        timeout: Request timeout in seconds.
        task_description: The task to delegate.
        context: Optional context information.
@@ -187,27 +135,10 @@ def execute_a2a_delegation(
        from_task: Optional CrewAI Task object for event metadata.
        from_agent: Optional CrewAI Agent object for event metadata.
        skill_id: Optional skill ID to target a specific agent capability.
        client_extensions: A2A protocol extension URIs the client supports.
        transport: Transport configuration (preferred, supported transports, gRPC settings).
        accepted_output_modes: MIME types the client can accept in responses.
        input_files: Optional dictionary of files to send to remote agent.

    Returns:
        TaskStateResult with status, result/error, history, and agent_card.

    Raises:
        RuntimeError: If called from an async context with a running event loop.
    """
    try:
        asyncio.get_running_loop()
        raise RuntimeError(
            "execute_a2a_delegation() cannot be called from an async context. "
            "Use 'await aexecute_a2a_delegation()' instead."
        )
    except RuntimeError as e:
        if "no running event loop" not in str(e).lower():
            raise

    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
@@ -228,15 +159,12 @@ def execute_a2a_delegation(
                agent_role=agent_role,
                agent_branch=agent_branch,
                response_model=response_model,
                transport_protocol=transport_protocol,
                turn_number=turn_number,
                updates=updates,
                from_task=from_task,
                from_agent=from_agent,
                skill_id=skill_id,
                client_extensions=client_extensions,
                transport=transport,
                accepted_output_modes=accepted_output_modes,
                input_files=input_files,
            )
        )
    finally:
@@ -248,7 +176,8 @@ def execute_a2a_delegation(

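A minimal calling sketch for the sync wrapper above, assuming the pre-change signature where transport settings default internally; the endpoint and task text are hypothetical. Calling it inside a running event loop raises the RuntimeError shown in the body:

    # Plain synchronous code path (no event loop running).
    result = execute_a2a_delegation(
        endpoint="https://agent.example.com/.well-known/agent.json",
        auth=None,
        timeout=60,
        task_description="Summarize the Q3 report",
    )
    # result is a TaskStateResult with status, result/error, history, agent_card.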
async def aexecute_a2a_delegation(
    endpoint: str,
    auth: ClientAuthScheme | None,
    transport_protocol: Literal["JSONRPC", "GRPC", "HTTP+JSON"],
    auth: AuthScheme | None,
    timeout: int,
    task_description: str,
    context: str | None = None,
@@ -267,10 +196,6 @@ async def aexecute_a2a_delegation(
    from_task: Any | None = None,
    from_agent: Any | None = None,
    skill_id: str | None = None,
    client_extensions: list[str] | None = None,
    transport: ClientTransportConfig | None = None,
    accepted_output_modes: list[str] | None = None,
    input_files: dict[str, Any] | None = None,
) -> TaskStateResult:
    """Execute a task delegation to a remote A2A agent asynchronously.

@@ -278,8 +203,25 @@ async def aexecute_a2a_delegation(
    in an async context (e.g., with Crew.akickoff() or agent.aexecute_task()).

    Args:
        endpoint: A2A agent endpoint URL
        transport_protocol: Optional A2A transport protocol (grpc, jsonrpc, http+json)
        auth: Optional AuthScheme for authentication
        timeout: Request timeout in seconds
        task_description: Task to delegate
        context: Optional context
        context_id: Context ID for correlation
        task_id: Specific task identifier
        reference_task_ids: Related task IDs
        metadata: Additional metadata
        extensions: Protocol extensions
        conversation_history: Previous Message objects
        turn_number: Current turn number
        agent_branch: Agent tree branch for logging
        agent_id: Agent identifier for logging
        agent_role: Agent role for logging
        response_model: Optional Pydantic model for structured outputs
        endpoint: A2A agent endpoint URL.
        auth: Optional ClientAuthScheme for authentication.
        auth: Optional AuthScheme for authentication.
        timeout: Request timeout in seconds.
        task_description: The task to delegate.
        context: Optional context information.
@@ -298,10 +240,6 @@ async def aexecute_a2a_delegation(
        from_task: Optional CrewAI Task object for event metadata.
        from_agent: Optional CrewAI Agent object for event metadata.
        skill_id: Optional skill ID to target a specific agent capability.
        client_extensions: A2A protocol extension URIs the client supports.
        transport: Transport configuration (preferred, supported transports, gRPC settings).
        accepted_output_modes: MIME types the client can accept in responses.
        input_files: Optional dictionary of files to send to remote agent.

    Returns:
        TaskStateResult with status, result/error, history, and agent_card.
@@ -333,13 +271,10 @@ async def aexecute_a2a_delegation(
            agent_role=agent_role,
            response_model=response_model,
            updates=updates,
            transport_protocol=transport_protocol,
            from_task=from_task,
            from_agent=from_agent,
            skill_id=skill_id,
            client_extensions=client_extensions,
            transport=transport,
            accepted_output_modes=accepted_output_modes,
            input_files=input_files,
        )
    except Exception as e:
        crewai_event_bus.emit(
@@ -359,7 +294,7 @@ async def aexecute_a2a_delegation(
        )
        raise

    agent_card_data = result.get("agent_card")
    agent_card_data: dict[str, Any] = result.get("agent_card") or {}
    crewai_event_bus.emit(
        agent_branch,
        A2ADelegationCompletedEvent(
@@ -371,7 +306,7 @@ async def aexecute_a2a_delegation(
            endpoint=endpoint,
            a2a_agent_name=result.get("a2a_agent_name"),
            agent_card=agent_card_data,
            provider=agent_card_data.get("provider") if agent_card_data else None,
            provider=agent_card_data.get("provider"),
            metadata=metadata,
            extensions=list(extensions.keys()) if extensions else None,
            from_task=from_task,
@@ -384,7 +319,8 @@ async def aexecute_a2a_delegation(

async def _aexecute_a2a_delegation_impl(
    endpoint: str,
    auth: ClientAuthScheme | None,
    transport_protocol: Literal["JSONRPC", "GRPC", "HTTP+JSON"],
    auth: AuthScheme | None,
    timeout: int,
    task_description: str,
    context: str | None,
@@ -404,14 +340,8 @@ async def _aexecute_a2a_delegation_impl(
    from_task: Any | None = None,
    from_agent: Any | None = None,
    skill_id: str | None = None,
    client_extensions: list[str] | None = None,
    transport: ClientTransportConfig | None = None,
    accepted_output_modes: list[str] | None = None,
    input_files: dict[str, Any] | None = None,
) -> TaskStateResult:
    """Internal async implementation of A2A delegation."""
    if transport is None:
        transport = ClientTransportConfig()
    if auth:
        auth_data = auth.model_dump_json(
            exclude={
@@ -421,70 +351,22 @@ async def _aexecute_a2a_delegation_impl(
                "_authorization_callback",
            }
        )
        auth_hash = _auth_store.compute_key(type(auth).__name__, auth_data)
        auth_hash = hash((type(auth).__name__, auth_data))
    else:
        auth_hash = _auth_store.compute_key("none", endpoint)
    _auth_store.set(auth_hash, auth)
        auth_hash = 0
    _auth_store[auth_hash] = auth
    agent_card = await _afetch_agent_card_cached(
        endpoint=endpoint, auth_hash=auth_hash, timeout=timeout
    )

    validate_auth_against_agent_card(agent_card, auth)

    unsupported_exts = validate_required_extensions(agent_card, client_extensions)
    if unsupported_exts:
        ext_uris = [ext.uri for ext in unsupported_exts]
        raise ValueError(
            f"Agent requires extensions not supported by client: {ext_uris}"
        )

    negotiated: NegotiatedTransport | None = None
    effective_transport: TransportType = transport.preferred or _DEFAULT_TRANSPORT
    effective_url = endpoint

    client_transports: list[str] = (
        list(transport.supported) if transport.supported else [_DEFAULT_TRANSPORT]
    )

    try:
        negotiated = negotiate_transport(
            agent_card=agent_card,
            client_supported_transports=client_transports,
            client_preferred_transport=transport.preferred,
            endpoint=endpoint,
            a2a_agent_name=agent_card.name,
        )
        effective_transport = negotiated.transport  # type: ignore[assignment]
        effective_url = negotiated.url
    except TransportNegotiationError as e:
        logger.warning(
            "Transport negotiation failed, using fallback",
            extra={
                "error": str(e),
                "fallback_transport": effective_transport,
                "fallback_url": effective_url,
                "endpoint": endpoint,
                "client_transports": client_transports,
                "server_transports": [
                    iface.transport for iface in agent_card.additional_interfaces or []
                ]
                + [agent_card.preferred_transport or "JSONRPC"],
            },
        )

    effective_output_modes = accepted_output_modes or DEFAULT_CLIENT_OUTPUT_MODES.copy()

    content_negotiated = negotiate_content_types(
        agent_card=agent_card,
        client_output_modes=accepted_output_modes,
        skill_name=skill_id,
        endpoint=endpoint,
        a2a_agent_name=agent_card.name,
    )
    if content_negotiated.output_modes:
        effective_output_modes = content_negotiated.output_modes

    headers, _ = await _prepare_auth_headers(auth, timeout)
    headers: MutableMapping[str, str] = {}
    if auth:
        async with httpx.AsyncClient(timeout=timeout) as temp_auth_client:
            if isinstance(auth, (HTTPDigestAuth, APIKeyAuth)):
                configure_auth_client(auth, temp_auth_client)
            headers = await auth.apply_auth(temp_auth_client, {})

    a2a_agent_name = None
    if agent_card.name:
@@ -559,13 +441,10 @@ async def _aexecute_a2a_delegation_impl(
    if skill_id:
        message_metadata["skill_id"] = skill_id

    parts_list: list[Part] = [Part(root=TextPart(**parts))]
    parts_list.extend(_create_file_parts(input_files))

    message = Message(
        role=Role.user,
        message_id=str(uuid.uuid4()),
        parts=parts_list,
        parts=[Part(root=TextPart(**parts))],
        context_id=context_id,
        task_id=task_id,
        reference_task_ids=reference_task_ids,
@@ -634,22 +513,15 @@ async def _aexecute_a2a_delegation_impl(

    use_streaming = not use_polling and push_config_for_client is None

    client_agent_card = agent_card
    if effective_url != agent_card.url:
        client_agent_card = agent_card.model_copy(update={"url": effective_url})

    async with _create_a2a_client(
        agent_card=client_agent_card,
        transport_protocol=effective_transport,
        agent_card=agent_card,
        transport_protocol=transport_protocol,
        timeout=timeout,
        headers=headers,
        streaming=use_streaming,
        auth=auth,
        use_polling=use_polling,
        push_notification_config=push_config_for_client,
        client_extensions=client_extensions,
        accepted_output_modes=effective_output_modes,  # type: ignore[arg-type]
        grpc_config=transport.grpc,
    ) as client:
        result = await handler.execute(
            client=client,
@@ -663,245 +535,6 @@ async def _aexecute_a2a_delegation_impl(
    return result

def _normalize_grpc_metadata(
    metadata: tuple[tuple[str, str], ...] | None,
) -> tuple[tuple[str, str], ...] | None:
    """Lowercase all gRPC metadata keys.

    gRPC requires lowercase metadata keys, but some libraries (like the A2A SDK)
    use mixed-case headers like 'X-A2A-Extensions'. This normalizes them.
    """
    if metadata is None:
        return None
    return tuple((key.lower(), value) for key, value in metadata)


def _create_grpc_interceptors(
    auth_metadata: list[tuple[str, str]] | None = None,
) -> list[Any]:
    """Create gRPC interceptors for metadata normalization and auth injection.

    Args:
        auth_metadata: Optional auth metadata to inject into all calls.
            Used for insecure channels that need auth (non-localhost without TLS).

    Returns a list of interceptors that lowercase metadata keys for gRPC
    compatibility. Must be called after grpc is imported.
    """
    import grpc.aio  # type: ignore[import-untyped]

    def _merge_metadata(
        existing: tuple[tuple[str, str], ...] | None,
        auth: list[tuple[str, str]] | None,
    ) -> tuple[tuple[str, str], ...] | None:
        """Merge existing metadata with auth metadata and normalize keys."""
        merged: list[tuple[str, str]] = []
        if existing:
            merged.extend(existing)
        if auth:
            merged.extend(auth)
        if not merged:
            return None
        return tuple((key.lower(), value) for key, value in merged)

    def _inject_metadata(client_call_details: Any) -> Any:
        """Inject merged metadata into call details."""
        return client_call_details._replace(
            metadata=_merge_metadata(client_call_details.metadata, auth_metadata)
        )

    class MetadataUnaryUnary(grpc.aio.UnaryUnaryClientInterceptor):  # type: ignore[misc,no-any-unimported]
        """Interceptor for unary-unary calls that injects auth metadata."""

        async def intercept_unary_unary(  # type: ignore[no-untyped-def]
            self, continuation, client_call_details, request
        ):
            """Intercept unary-unary call and inject metadata."""
            return await continuation(_inject_metadata(client_call_details), request)

    class MetadataUnaryStream(grpc.aio.UnaryStreamClientInterceptor):  # type: ignore[misc,no-any-unimported]
        """Interceptor for unary-stream calls that injects auth metadata."""

        async def intercept_unary_stream(  # type: ignore[no-untyped-def]
            self, continuation, client_call_details, request
        ):
            """Intercept unary-stream call and inject metadata."""
            return await continuation(_inject_metadata(client_call_details), request)

    class MetadataStreamUnary(grpc.aio.StreamUnaryClientInterceptor):  # type: ignore[misc,no-any-unimported]
        """Interceptor for stream-unary calls that injects auth metadata."""

        async def intercept_stream_unary(  # type: ignore[no-untyped-def]
            self, continuation, client_call_details, request_iterator
        ):
            """Intercept stream-unary call and inject metadata."""
            return await continuation(
                _inject_metadata(client_call_details), request_iterator
            )

    class MetadataStreamStream(grpc.aio.StreamStreamClientInterceptor):  # type: ignore[misc,no-any-unimported]
        """Interceptor for stream-stream calls that injects auth metadata."""

        async def intercept_stream_stream(  # type: ignore[no-untyped-def]
            self, continuation, client_call_details, request_iterator
        ):
            """Intercept stream-stream call and inject metadata."""
            return await continuation(
                _inject_metadata(client_call_details), request_iterator
            )

    return [
        MetadataUnaryUnary(),
        MetadataUnaryStream(),
        MetadataStreamUnary(),
        MetadataStreamStream(),
    ]


def _create_grpc_channel_factory(
    grpc_config: GRPCClientConfig,
    auth: ClientAuthScheme | None = None,
) -> Callable[[str], Any]:
    """Create a gRPC channel factory with the given configuration.

    Args:
        grpc_config: gRPC client configuration with channel options.
        auth: Optional ClientAuthScheme for TLS and auth configuration.

    Returns:
        A callable that creates gRPC channels from URLs.
    """
    try:
        import grpc
    except ImportError as e:
        raise ImportError(
            "gRPC transport requires grpcio. Install with: pip install a2a-sdk[grpc]"
        ) from e

    auth_metadata: list[tuple[str, str]] = []

    if auth is not None:
        from crewai.a2a.auth.client_schemes import (
            APIKeyAuth,
            BearerTokenAuth,
            HTTPBasicAuth,
            HTTPDigestAuth,
            OAuth2AuthorizationCode,
            OAuth2ClientCredentials,
        )

        if isinstance(auth, HTTPDigestAuth):
            raise ValueError(
                "HTTPDigestAuth is not supported with gRPC transport. "
                "Digest authentication requires HTTP challenge-response flow. "
                "Use BearerTokenAuth, HTTPBasicAuth, APIKeyAuth (header), or OAuth2 instead."
            )
        if isinstance(auth, APIKeyAuth) and auth.location in ("query", "cookie"):
            raise ValueError(
                f"APIKeyAuth with location='{auth.location}' is not supported with gRPC transport. "
                "gRPC only supports header-based authentication. "
                "Use APIKeyAuth with location='header' instead."
            )

        if isinstance(auth, BearerTokenAuth):
            auth_metadata.append(("authorization", f"Bearer {auth.token}"))
        elif isinstance(auth, HTTPBasicAuth):
            import base64

            basic_credentials = f"{auth.username}:{auth.password}"
            encoded = base64.b64encode(basic_credentials.encode()).decode()
            auth_metadata.append(("authorization", f"Basic {encoded}"))
        elif isinstance(auth, APIKeyAuth) and auth.location == "header":
            header_name = auth.name.lower()
            auth_metadata.append((header_name, auth.api_key))
        elif isinstance(auth, (OAuth2ClientCredentials, OAuth2AuthorizationCode)):
            if auth._access_token:
                auth_metadata.append(("authorization", f"Bearer {auth._access_token}"))

    def factory(url: str) -> Any:
        """Create a gRPC channel for the given URL."""
        target = url
        use_tls = False

        if url.startswith("grpcs://"):
            target = url[8:]
            use_tls = True
        elif url.startswith("grpc://"):
            target = url[7:]
        elif url.startswith("https://"):
            target = url[8:]
            use_tls = True
        elif url.startswith("http://"):
            target = url[7:]

        options: list[tuple[str, Any]] = []
        if grpc_config.max_send_message_length is not None:
            options.append(
                ("grpc.max_send_message_length", grpc_config.max_send_message_length)
            )
        if grpc_config.max_receive_message_length is not None:
            options.append(
                (
                    "grpc.max_receive_message_length",
                    grpc_config.max_receive_message_length,
                )
            )
        if grpc_config.keepalive_time_ms is not None:
            options.append(("grpc.keepalive_time_ms", grpc_config.keepalive_time_ms))
        if grpc_config.keepalive_timeout_ms is not None:
            options.append(
                ("grpc.keepalive_timeout_ms", grpc_config.keepalive_timeout_ms)
            )

        channel_credentials = None
        if auth and hasattr(auth, "tls") and auth.tls:
            channel_credentials = auth.tls.get_grpc_credentials()
        elif use_tls:
            channel_credentials = grpc.ssl_channel_credentials()

        if channel_credentials and auth_metadata:

            class AuthMetadataPlugin(grpc.AuthMetadataPlugin):  # type: ignore[misc,no-any-unimported]
                """gRPC auth metadata plugin that adds auth headers as metadata."""

                def __init__(self, metadata: list[tuple[str, str]]) -> None:
                    self._metadata = tuple(metadata)

                def __call__(  # type: ignore[no-any-unimported]
                    self,
                    context: grpc.AuthMetadataContext,
                    callback: grpc.AuthMetadataPluginCallback,
                ) -> None:
                    callback(self._metadata, None)

            call_creds = grpc.metadata_call_credentials(
                AuthMetadataPlugin(auth_metadata)
            )
            credentials = grpc.composite_channel_credentials(
                channel_credentials, call_creds
            )
            interceptors = _create_grpc_interceptors()
            return grpc.aio.secure_channel(
                target, credentials, options=options or None, interceptors=interceptors
            )
        if channel_credentials:
            interceptors = _create_grpc_interceptors()
            return grpc.aio.secure_channel(
                target,
                channel_credentials,
                options=options or None,
                interceptors=interceptors,
            )
        interceptors = _create_grpc_interceptors(
            auth_metadata=auth_metadata if auth_metadata else None
        )
        return grpc.aio.insecure_channel(
            target, options=options or None, interceptors=interceptors
        )

    return factory

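A minimal usage sketch of the channel factory above. GRPCClientConfig and BearerTokenAuth are taken from the imports in this diff; the endpoint, token, and keepalive value are hypothetical:

    from crewai.a2a.auth.client_schemes import BearerTokenAuth
    from crewai.a2a.config import GRPCClientConfig

    # Build a factory with keepalive tuning and bearer auth (illustrative values).
    factory = _create_grpc_channel_factory(
        GRPCClientConfig(keepalive_time_ms=30_000),
        auth=BearerTokenAuth(token="example-token"),
    )

    # "grpcs://" selects TLS; the bearer token rides along as call credentials.
    channel = factory("grpcs://agent.example.com:443")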
@asynccontextmanager
async def _create_a2a_client(
    agent_card: AgentCard,
@@ -909,12 +542,9 @@ async def _create_a2a_client(
    timeout: int,
    headers: MutableMapping[str, str],
    streaming: bool,
    auth: ClientAuthScheme | None = None,
    auth: AuthScheme | None = None,
    use_polling: bool = False,
    push_notification_config: PushNotificationConfig | None = None,
    client_extensions: list[str] | None = None,
    accepted_output_modes: list[str] | None = None,
    grpc_config: GRPCClientConfig | None = None,
) -> AsyncIterator[Client]:
    """Create and configure an A2A client.

@@ -924,21 +554,16 @@ async def _create_a2a_client(
        timeout: Request timeout in seconds.
        headers: HTTP headers (already with auth applied).
        streaming: Enable streaming responses.
        auth: Optional ClientAuthScheme for client configuration.
        auth: Optional AuthScheme for client configuration.
        use_polling: Enable polling mode.
        push_notification_config: Optional push notification config.
        client_extensions: A2A protocol extension URIs to declare support for.
        accepted_output_modes: MIME types the client can accept in responses.
        grpc_config: Optional gRPC client configuration.

    Yields:
        Configured A2A client instance.
    """
    verify = _get_tls_verify(auth)
    async with httpx.AsyncClient(
        timeout=timeout,
        headers=headers,
        verify=verify,
    ) as httpx_client:
        if auth and isinstance(auth, (HTTPDigestAuth, APIKeyAuth)):
            configure_auth_client(auth, httpx_client)
@@ -954,27 +579,15 @@ async def _create_a2a_client(
                )
            )

        grpc_channel_factory = None
        if transport_protocol == "GRPC":
            grpc_channel_factory = _create_grpc_channel_factory(
                grpc_config or GRPCClientConfig(),
                auth=auth,
            )

        config = ClientConfig(
            httpx_client=httpx_client,
            supported_transports=[transport_protocol],
            streaming=streaming and not use_polling,
            polling=use_polling,
            accepted_output_modes=accepted_output_modes or DEFAULT_CLIENT_OUTPUT_MODES,  # type: ignore[arg-type]
            accepted_output_modes=["application/json"],
            push_notification_configs=push_configs,
            grpc_channel_factory=grpc_channel_factory,
        )

        factory = ClientFactory(config)
        client = factory.create(agent_card)

        if client_extensions:
            await client.add_request_middleware(ExtensionsMiddleware(client_extensions))

        yield client

@@ -1,131 +0,0 @@
"""Structured JSON logging utilities for A2A module."""

from __future__ import annotations

from contextvars import ContextVar
from datetime import datetime, timezone
import json
import logging
from typing import Any


_log_context: ContextVar[dict[str, Any] | None] = ContextVar(
    "log_context", default=None
)


class JSONFormatter(logging.Formatter):
    """JSON formatter for structured logging.

    Outputs logs as JSON with consistent fields for log aggregators.
    """

    def format(self, record: logging.LogRecord) -> str:
        """Format log record as JSON string."""
        log_data: dict[str, Any] = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }

        if record.exc_info:
            log_data["exception"] = self.formatException(record.exc_info)

        context = _log_context.get()
        if context is not None:
            log_data.update(context)

        if hasattr(record, "task_id"):
            log_data["task_id"] = record.task_id
        if hasattr(record, "context_id"):
            log_data["context_id"] = record.context_id
        if hasattr(record, "agent"):
            log_data["agent"] = record.agent
        if hasattr(record, "endpoint"):
            log_data["endpoint"] = record.endpoint
        if hasattr(record, "extension"):
            log_data["extension"] = record.extension
        if hasattr(record, "error"):
            log_data["error"] = record.error

        for key, value in record.__dict__.items():
            if key.startswith("_") or key in (
                "name",
                "msg",
                "args",
                "created",
                "filename",
                "funcName",
                "levelname",
                "levelno",
                "lineno",
                "module",
                "msecs",
                "pathname",
                "process",
                "processName",
                "relativeCreated",
                "stack_info",
                "exc_info",
                "exc_text",
                "thread",
                "threadName",
                "taskName",
                "message",
            ):
                continue
            if key not in log_data:
                log_data[key] = value

        return json.dumps(log_data, default=str)


class LogContext:
    """Context manager for adding fields to all logs within a scope.

    Example:
        with LogContext(task_id="abc", context_id="xyz"):
            logger.info("Processing task")  # Includes task_id and context_id
    """

    def __init__(self, **fields: Any) -> None:
        self._fields = fields
        self._token: Any = None

    def __enter__(self) -> LogContext:
        current = _log_context.get() or {}
        new_context = {**current, **self._fields}
        self._token = _log_context.set(new_context)
        return self

    def __exit__(self, *args: Any) -> None:
        _log_context.reset(self._token)


def configure_json_logging(logger_name: str = "crewai.a2a") -> None:
    """Configure JSON logging for the A2A module.

    Args:
        logger_name: Logger name to configure.
    """
    logger = logging.getLogger(logger_name)

    for handler in logger.handlers[:]:
        logger.removeHandler(handler)

    handler = logging.StreamHandler()
    handler.setFormatter(JSONFormatter())
    logger.addHandler(handler)


def get_logger(name: str) -> logging.Logger:
    """Get a logger configured for structured JSON output.

    Args:
        name: Logger name.

    Returns:
        Configured logger instance.
    """
    return logging.getLogger(name)
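The logging module above is removed in its entirety by this diff (the hunk header is -1,131 +0,0). For reference, a sketch of how the deleted utilities fit together; everything here comes from the module body above:

    import logging

    configure_json_logging("crewai.a2a")
    logger = get_logger("crewai.a2a.delegation")
    logger.setLevel(logging.INFO)

    # Fields set via LogContext are merged into every record in scope.
    with LogContext(task_id="abc", endpoint="https://agent.example.com"):
        logger.info("delegation started")

    # Emits one JSON object per record, roughly:
    # {"timestamp": "...", "level": "INFO", "logger": "crewai.a2a.delegation",
    #  "message": "delegation started", "task_id": "abc", "endpoint": "..."}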
@@ -7,40 +7,26 @@ import base64
from collections.abc import Callable, Coroutine
from datetime import datetime
from functools import wraps
import json
import logging
import os
from typing import TYPE_CHECKING, Any, ParamSpec, TypeVar, TypedDict, cast
from typing import TYPE_CHECKING, Any, ParamSpec, TypeVar, cast
from urllib.parse import urlparse

from a2a.server.agent_execution import RequestContext
from a2a.server.events import EventQueue
from a2a.types import (
    Artifact,
    FileWithBytes,
    FileWithUri,
    InternalError,
    InvalidParamsError,
    Message,
    Part,
    Task as A2ATask,
    TaskState,
    TaskStatus,
    TaskStatusUpdateEvent,
)
from a2a.utils import (
    get_data_parts,
    get_file_parts,
    new_agent_text_message,
    new_data_artifact,
    new_text_artifact,
)
from a2a.utils import new_agent_text_message, new_text_artifact
from a2a.utils.errors import ServerError
from aiocache import SimpleMemoryCache, caches  # type: ignore[import-untyped]
from pydantic import BaseModel

from crewai.a2a.utils.agent_card import _get_server_config
from crewai.a2a.utils.content_type import validate_message_parts
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.a2a_events import (
    A2AServerTaskCanceledEvent,
@@ -49,11 +35,9 @@ from crewai.events.types.a2a_events import (
    A2AServerTaskStartedEvent,
)
from crewai.task import Task
from crewai.utilities.pydantic_schema_utils import create_model_from_schema


if TYPE_CHECKING:
    from crewai.a2a.extensions.server import ExtensionContext, ServerExtensionRegistry
    from crewai.agent import Agent


@@ -63,17 +47,7 @@ P = ParamSpec("P")
T = TypeVar("T")


class RedisCacheConfig(TypedDict, total=False):
    """Configuration for aiocache Redis backend."""

    cache: str
    endpoint: str
    port: int
    db: int
    password: str


def _parse_redis_url(url: str) -> RedisCacheConfig:
def _parse_redis_url(url: str) -> dict[str, Any]:
    """Parse a Redis URL into aiocache configuration.

    Args:
@@ -82,8 +56,9 @@ def _parse_redis_url(url: str) -> RedisCacheConfig:
    Returns:
        Configuration dict for aiocache.RedisCache.
    """

    parsed = urlparse(url)
    config: RedisCacheConfig = {
    config: dict[str, Any] = {
        "cache": "aiocache.RedisCache",
        "endpoint": parsed.hostname or "localhost",
        "port": parsed.port or 6379,
@@ -163,10 +138,7 @@ def cancellable(
                    if message["type"] == "message":
                        return True
        except (OSError, ConnectionError) as e:
            logger.warning(
                "Cancel watcher Redis error, falling back to polling",
                extra={"task_id": task_id, "error": str(e)},
            )
            logger.warning("Cancel watcher error for task_id=%s: %s", task_id, e)
            return await poll_for_cancel()
        return False

@@ -194,98 +166,7 @@ def cancellable(
    return wrapper


def _convert_a2a_files_to_file_inputs(
    a2a_files: list[FileWithBytes | FileWithUri],
) -> dict[str, Any]:
    """Convert a2a file types to crewai FileInput dict.

    Args:
        a2a_files: List of FileWithBytes or FileWithUri from a2a SDK.

    Returns:
        Dictionary mapping file names to FileInput objects.
    """
    try:
        from crewai_files import File, FileBytes
    except ImportError:
        logger.debug("crewai_files not installed, returning empty file dict")
        return {}

    file_dict: dict[str, Any] = {}
    for idx, a2a_file in enumerate(a2a_files):
        if isinstance(a2a_file, FileWithBytes):
            file_bytes = base64.b64decode(a2a_file.bytes)
            name = a2a_file.name or f"file_{idx}"
            file_source = FileBytes(data=file_bytes, filename=a2a_file.name)
            file_dict[name] = File(source=file_source)
        elif isinstance(a2a_file, FileWithUri):
            name = a2a_file.name or f"file_{idx}"
            file_dict[name] = File(source=a2a_file.uri)

    return file_dict


def _extract_response_schema(parts: list[Part]) -> dict[str, Any] | None:
    """Extract response schema from message parts metadata.

    The client may include a JSON schema in TextPart metadata to specify
    the expected response format (see delegation.py line 463).

    Args:
        parts: List of message parts.

    Returns:
        JSON schema dict if found, None otherwise.
    """
    for part in parts:
        if part.root.kind == "text" and part.root.metadata:
            schema = part.root.metadata.get("schema")
            if schema and isinstance(schema, dict):
                return schema  # type: ignore[no-any-return]
    return None


def _create_result_artifact(
    result: Any,
    task_id: str,
) -> Artifact:
    """Create artifact from task result, using DataPart for structured data.

    Args:
        result: The task execution result.
        task_id: The task ID for naming the artifact.

    Returns:
        Artifact with appropriate part type (DataPart for dict/Pydantic, TextPart for strings).
    """
    artifact_name = f"result_{task_id}"
    if isinstance(result, dict):
        return new_data_artifact(artifact_name, result)
    if isinstance(result, BaseModel):
        return new_data_artifact(artifact_name, result.model_dump())
    return new_text_artifact(artifact_name, str(result))


def _build_task_description(
    user_message: str,
    structured_inputs: list[dict[str, Any]],
) -> str:
    """Build task description including structured data if present.

    Args:
        user_message: The original user message text.
        structured_inputs: List of structured data from DataParts.

    Returns:
        Task description with structured data appended if present.
    """
    if not structured_inputs:
        return user_message

    structured_json = json.dumps(structured_inputs, indent=2)
    return f"{user_message}\n\nStructured Data:\n{structured_json}"

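A quick sketch of what _build_task_description produces when DataParts are present; the message and payload are illustrative:

    desc = _build_task_description(
        "Summarize the account",
        [{"account_id": 42, "tier": "enterprise"}],
    )
    # desc ==
    # Summarize the account
    #
    # Structured Data:
    # [
    #   {
    #     "account_id": 42,
    #     "tier": "enterprise"
    #   }
    # ]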
@cancellable
async def execute(
    agent: Agent,
    context: RequestContext,
@@ -297,54 +178,15 @@ async def execute(
        agent: The CrewAI agent to execute the task.
        context: The A2A request context containing the user's message.
        event_queue: The event queue for sending responses back.

    TODOs:
        * need to impl both of structured output and file inputs, depends on `file_inputs` for
          `crewai.task.Task`, pass the below two to Task. both utils in `a2a.utils.parts`
        * structured outputs ingestion, `structured_inputs = get_data_parts(parts=context.message.parts)`
        * file inputs ingestion, `file_inputs = get_file_parts(parts=context.message.parts)`
    """
    await _execute_impl(agent, context, event_queue, None, None)


@cancellable
async def _execute_impl(
    agent: Agent,
    context: RequestContext,
    event_queue: EventQueue,
    extension_registry: ServerExtensionRegistry | None,
    extension_context: ExtensionContext | None,
) -> None:
    """Internal implementation for task execution with optional extensions."""
    server_config = _get_server_config(agent)
    if context.message and context.message.parts and server_config:
        allowed_modes = server_config.default_input_modes
        invalid_types = validate_message_parts(context.message.parts, allowed_modes)
        if invalid_types:
            raise ServerError(
                InvalidParamsError(
                    message=f"Unsupported content type(s): {', '.join(invalid_types)}. "
                    f"Supported: {', '.join(allowed_modes)}"
                )
            )

    if extension_registry and extension_context:
        await extension_registry.invoke_on_request(extension_context)

    user_message = context.get_user_input()

    response_model: type[BaseModel] | None = None
    structured_inputs: list[dict[str, Any]] = []
    a2a_files: list[FileWithBytes | FileWithUri] = []

    if context.message and context.message.parts:
        schema = _extract_response_schema(context.message.parts)
        if schema:
            try:
                response_model = create_model_from_schema(schema)
            except Exception as e:
                logger.debug(
                    "Failed to create response model from schema",
                    extra={"error": str(e), "schema_title": schema.get("title")},
                )

        structured_inputs = get_data_parts(context.message.parts)
        a2a_files = get_file_parts(context.message.parts)

    task_id = context.task_id
    context_id = context.context_id
    if task_id is None or context_id is None:
@@ -361,11 +203,9 @@ async def _execute_impl(
        raise ServerError(InvalidParamsError(message=msg)) from None

    task = Task(
        description=_build_task_description(user_message, structured_inputs),
        description=user_message,
        expected_output="Response to the user's request",
        agent=agent,
        response_model=response_model,
        input_files=_convert_a2a_files_to_file_inputs(a2a_files),
    )

    crewai_event_bus.emit(
@@ -380,10 +220,6 @@ async def _execute_impl(

    try:
        result = await agent.aexecute_task(task=task, tools=agent.tools)
        if extension_registry and extension_context:
            result = await extension_registry.invoke_on_response(
                extension_context, result
            )
        result_str = str(result)
        history: list[Message] = [context.message] if context.message else []
        history.append(new_agent_text_message(result_str, context_id, task_id))
@@ -391,8 +227,8 @@ async def _execute_impl(
            A2ATask(
                id=task_id,
                context_id=context_id,
                status=TaskStatus(state=TaskState.completed),
                artifacts=[_create_result_artifact(result, task_id)],
                status=TaskStatus(state=TaskState.input_required),
                artifacts=[new_text_artifact(result_str, f"result_{task_id}")],
                history=history,
            )
        )
@@ -433,27 +269,6 @@ async def _execute_impl(
        ) from e


async def execute_with_extensions(
    agent: Agent,
    context: RequestContext,
    event_queue: EventQueue,
    extension_registry: ServerExtensionRegistry,
    extension_context: ExtensionContext,
) -> None:
    """Execute an A2A task with extension hooks.

    Args:
        agent: The CrewAI agent to execute the task.
        context: The A2A request context containing the user's message.
        event_queue: The event queue for sending responses back.
        extension_registry: Registry of server extensions.
        extension_context: Context for extension hooks.
    """
    await _execute_impl(
        agent, context, event_queue, extension_registry, extension_context
    )


async def cancel(
    context: RequestContext,
    event_queue: EventQueue,

@@ -1,215 +0,0 @@
"""Transport negotiation utilities for A2A protocol.

This module provides functionality for negotiating the transport protocol
between an A2A client and server based on their respective capabilities
and preferences.
"""

from __future__ import annotations

from dataclasses import dataclass
from typing import Final, Literal

from a2a.types import AgentCard, AgentInterface

from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.a2a_events import A2ATransportNegotiatedEvent


TransportProtocol = Literal["JSONRPC", "GRPC", "HTTP+JSON"]
NegotiationSource = Literal["client_preferred", "server_preferred", "fallback"]

JSONRPC_TRANSPORT: Literal["JSONRPC"] = "JSONRPC"
GRPC_TRANSPORT: Literal["GRPC"] = "GRPC"
HTTP_JSON_TRANSPORT: Literal["HTTP+JSON"] = "HTTP+JSON"

DEFAULT_TRANSPORT_PREFERENCE: Final[list[TransportProtocol]] = [
    JSONRPC_TRANSPORT,
    GRPC_TRANSPORT,
    HTTP_JSON_TRANSPORT,
]


@dataclass
class NegotiatedTransport:
    """Result of transport negotiation.

    Attributes:
        transport: The negotiated transport protocol.
        url: The URL to use for this transport.
        source: How the transport was selected ('preferred', 'additional', 'fallback').
    """

    transport: str
    url: str
    source: NegotiationSource


class TransportNegotiationError(Exception):
    """Raised when no compatible transport can be negotiated."""

    def __init__(
        self,
        client_transports: list[str],
        server_transports: list[str],
        message: str | None = None,
    ) -> None:
        """Initialize the error with negotiation details.

        Args:
            client_transports: Transports supported by the client.
            server_transports: Transports supported by the server.
            message: Optional custom error message.
        """
        self.client_transports = client_transports
        self.server_transports = server_transports
        if message is None:
            message = (
                f"No compatible transport found. "
                f"Client supports: {client_transports}. "
                f"Server supports: {server_transports}."
            )
        super().__init__(message)


def _get_server_interfaces(agent_card: AgentCard) -> list[AgentInterface]:
    """Extract all available interfaces from an AgentCard.

    Creates a unified list of interfaces including the primary URL and
    any additional interfaces declared by the agent.

    Args:
        agent_card: The agent's card containing transport information.

    Returns:
        List of AgentInterface objects representing all available endpoints.
    """
    interfaces: list[AgentInterface] = []

    primary_transport = agent_card.preferred_transport or JSONRPC_TRANSPORT
    interfaces.append(
        AgentInterface(
            transport=primary_transport,
            url=agent_card.url,
        )
    )

    if agent_card.additional_interfaces:
        for interface in agent_card.additional_interfaces:
            is_duplicate = any(
                i.url == interface.url and i.transport == interface.transport
                for i in interfaces
            )
            if not is_duplicate:
                interfaces.append(interface)

    return interfaces


def negotiate_transport(
    agent_card: AgentCard,
    client_supported_transports: list[str] | None = None,
    client_preferred_transport: str | None = None,
    emit_event: bool = True,
    endpoint: str | None = None,
    a2a_agent_name: str | None = None,
) -> NegotiatedTransport:
    """Negotiate the transport protocol between client and server.

    Compares the client's supported transports with the server's available
    interfaces to find a compatible transport and URL.

    Negotiation logic:
    1. If client_preferred_transport is set and server supports it → use it
    2. Otherwise, if server's preferred is in client's supported → use server's
    3. Otherwise, find first match from client's supported in server's interfaces

    Args:
        agent_card: The server's AgentCard with transport information.
        client_supported_transports: Transports the client can use.
            Defaults to ["JSONRPC"] if not specified.
        client_preferred_transport: Client's preferred transport. If set and
            server supports it, takes priority over server preference.
        emit_event: Whether to emit a transport negotiation event.
        endpoint: Original endpoint URL for event metadata.
        a2a_agent_name: Agent name for event metadata.

    Returns:
        NegotiatedTransport with the selected transport, URL, and source.

    Raises:
        TransportNegotiationError: If no compatible transport is found.
    """
    if client_supported_transports is None:
        client_supported_transports = [JSONRPC_TRANSPORT]

    client_transports = [t.upper() for t in client_supported_transports]
    client_preferred = (
        client_preferred_transport.upper() if client_preferred_transport else None
    )

    server_interfaces = _get_server_interfaces(agent_card)
    server_transports = [i.transport.upper() for i in server_interfaces]

    transport_to_interface: dict[str, AgentInterface] = {}
    for interface in server_interfaces:
        transport_upper = interface.transport.upper()
        if transport_upper not in transport_to_interface:
            transport_to_interface[transport_upper] = interface

    result: NegotiatedTransport | None = None

    if client_preferred and client_preferred in transport_to_interface:
        interface = transport_to_interface[client_preferred]
        result = NegotiatedTransport(
            transport=interface.transport,
            url=interface.url,
            source="client_preferred",
        )
    else:
        server_preferred = (agent_card.preferred_transport or JSONRPC_TRANSPORT).upper()
        if (
            server_preferred in client_transports
            and server_preferred in transport_to_interface
        ):
            interface = transport_to_interface[server_preferred]
            result = NegotiatedTransport(
                transport=interface.transport,
                url=interface.url,
                source="server_preferred",
            )
        else:
            for transport in client_transports:
                if transport in transport_to_interface:
                    interface = transport_to_interface[transport]
                    result = NegotiatedTransport(
                        transport=interface.transport,
                        url=interface.url,
                        source="fallback",
                    )
                    break

    if result is None:
        raise TransportNegotiationError(
            client_transports=client_transports,
            server_transports=server_transports,
        )

    if emit_event:
        crewai_event_bus.emit(
            None,
            A2ATransportNegotiatedEvent(
                endpoint=endpoint or agent_card.url,
                a2a_agent_name=a2a_agent_name or agent_card.name,
                negotiated_transport=result.transport,
                negotiated_url=result.url,
                source=result.source,
                client_supported_transports=client_transports,
                server_supported_transports=server_transports,
                server_preferred_transport=agent_card.preferred_transport
                or JSONRPC_TRANSPORT,
                client_preferred_transport=client_preferred,
            ),
        )

    return result
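A minimal sketch of the negotiation rules above. AgentCard and AgentInterface come from the a2a SDK; model_construct is used to skip the card's other required fields for brevity, and the URLs are hypothetical:

    from a2a.types import AgentCard, AgentInterface

    card = AgentCard.model_construct(
        name="demo-agent",
        url="https://agent.example.com/jsonrpc",
        preferred_transport="JSONRPC",
        additional_interfaces=[
            AgentInterface(transport="GRPC", url="grpcs://agent.example.com:443"),
        ],
    )

    # Client prefers gRPC and the server offers it, so rule 1 applies.
    negotiated = negotiate_transport(
        card,
        client_supported_transports=["JSONRPC", "GRPC"],
        client_preferred_transport="GRPC",
        emit_event=False,
    )
    # negotiated.transport == "GRPC", negotiated.source == "client_preferred"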
File diff suppressed because it is too large
@@ -94,12 +94,6 @@ from crewai.utilities.token_counter_callback import TokenCalcHandler
from crewai.utilities.training_handler import CrewTrainingHandler


try:
    from crewai.a2a.types import AgentResponseProtocol
except ImportError:
    AgentResponseProtocol = None  # type: ignore[assignment, misc]


if TYPE_CHECKING:
    from crewai_files import FileInput
    from crewai_tools import CodeInterpreterTool
@@ -496,22 +490,9 @@ class Agent(BaseAgent):
            self._rpm_controller.stop_rpm_counter()

        result = process_tool_results(self, result)

        output_for_event = result
        if (
            AgentResponseProtocol is not None
            and isinstance(result, BaseModel)
            and isinstance(result, AgentResponseProtocol)
        ):
            output_for_event = str(result.message)
        elif not isinstance(result, str):
            output_for_event = str(result)

        crewai_event_bus.emit(
            self,
            event=AgentExecutionCompletedEvent(
                agent=self, task=task, output=output_for_event
            ),
            event=AgentExecutionCompletedEvent(agent=self, task=task, output=result),
        )

        save_last_messages(self)
@@ -728,22 +709,9 @@ class Agent(BaseAgent):
            self._rpm_controller.stop_rpm_counter()

        result = process_tool_results(self, result)

        output_for_event = result
        if (
            AgentResponseProtocol is not None
            and isinstance(result, BaseModel)
            and isinstance(result, AgentResponseProtocol)
        ):
            output_for_event = str(result.message)
        elif not isinstance(result, str):
            output_for_event = str(result)

        crewai_event_bus.emit(
            self,
            event=AgentExecutionCompletedEvent(
                agent=self, task=task, output=output_for_event
            ),
            event=AgentExecutionCompletedEvent(agent=self, task=task, output=result),
        )

        save_last_messages(self)
@@ -1895,30 +1863,25 @@ class Agent(BaseAgent):
        # Handle response format conversion
        formatted_result: BaseModel | None = None
        if response_format:
            if isinstance(raw_output, BaseModel) and isinstance(
                raw_output, response_format
            ):
                formatted_result = raw_output
            elif isinstance(raw_output, str):
                try:
                    model_schema = generate_model_description(response_format)
                    schema = json.dumps(model_schema, indent=2)
                    instructions = self.i18n.slice(
                        "formatted_task_instructions"
                    ).format(output_format=schema)
            try:
                model_schema = generate_model_description(response_format)
                schema = json.dumps(model_schema, indent=2)
                instructions = self.i18n.slice("formatted_task_instructions").format(
                    output_format=schema
                )

                    converter = Converter(
                        llm=self.llm,
                        text=raw_output,
                        model=response_format,
                        instructions=instructions,
                    )
                converter = Converter(
                    llm=self.llm,
                    text=raw_output,
                    model=response_format,
                    instructions=instructions,
                )

                    conversion_result = converter.to_pydantic()
                    if isinstance(conversion_result, BaseModel):
                        formatted_result = conversion_result
                except ConverterError:
                    pass  # Keep raw output if conversion fails
                conversion_result = converter.to_pydantic()
                if isinstance(conversion_result, BaseModel):
                    formatted_result = conversion_result
            except ConverterError:
                pass  # Keep raw output if conversion fails

        # Get token usage metrics
        if isinstance(self.llm, BaseLLM):
@@ -1926,16 +1889,8 @@ class Agent(BaseAgent):
        else:
            usage_metrics = self._token_process.get_summary()

        raw_str = (
            raw_output
            if isinstance(raw_output, str)
            else raw_output.model_dump_json()
            if isinstance(raw_output, BaseModel)
            else str(raw_output)
        )

        return LiteAgentOutput(
            raw=raw_str,
            raw=raw_output,
            pydantic=formatted_result,
            agent_role=self.role,
            usage_metrics=usage_metrics.model_dump() if usage_metrics else None,
@@ -1970,30 +1925,25 @@ class Agent(BaseAgent):
        # Handle response format conversion
        formatted_result: BaseModel | None = None
        if response_format:
            if isinstance(raw_output, BaseModel) and isinstance(
                raw_output, response_format
            ):
                formatted_result = raw_output
            elif isinstance(raw_output, str):
                try:
                    model_schema = generate_model_description(response_format)
                    schema = json.dumps(model_schema, indent=2)
                    instructions = self.i18n.slice(
                        "formatted_task_instructions"
                    ).format(output_format=schema)
            try:
                model_schema = generate_model_description(response_format)
                schema = json.dumps(model_schema, indent=2)
                instructions = self.i18n.slice("formatted_task_instructions").format(
                    output_format=schema
                )

                    converter = Converter(
                        llm=self.llm,
                        text=raw_output,
                        model=response_format,
                        instructions=instructions,
                    )
                converter = Converter(
                    llm=self.llm,
                    text=raw_output,
                    model=response_format,
                    instructions=instructions,
                )

                    conversion_result = converter.to_pydantic()
                    if isinstance(conversion_result, BaseModel):
                        formatted_result = conversion_result
                except ConverterError:
                    pass  # Keep raw output if conversion fails
                conversion_result = converter.to_pydantic()
                if isinstance(conversion_result, BaseModel):
                    formatted_result = conversion_result
            except ConverterError:
                pass  # Keep raw output if conversion fails

        # Get token usage metrics
        if isinstance(self.llm, BaseLLM):
@@ -2001,16 +1951,8 @@ class Agent(BaseAgent):
        else:
            usage_metrics = self._token_process.get_summary()

        raw_str = (
            raw_output
            if isinstance(raw_output, str)
            else raw_output.model_dump_json()
            if isinstance(raw_output, BaseModel)
            else str(raw_output)
        )

        return LiteAgentOutput(
            raw=raw_str,
            raw=raw_output,
            pydantic=formatted_result,
            agent_role=self.role,
            usage_metrics=usage_metrics.model_dump() if usage_metrics else None,

||||
@@ -930,6 +930,10 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
    "name": func_name,
    "content": result,
}

if after_hook_context.files:
    tool_message["files"] = after_hook_context.files

self.messages.append(tool_message)

# Log the tool execution
@@ -654,165 +654,3 @@ class A2AParallelDelegationCompletedEvent(A2AEventBase):
    success_count: int
    failure_count: int
    results: dict[str, str] | None = None


class A2ATransportNegotiatedEvent(A2AEventBase):
    """Event emitted when transport protocol is negotiated with an A2A agent.

    This event is emitted after comparing client and server transport capabilities
    to select the optimal transport protocol and endpoint URL.

    Attributes:
        endpoint: Original A2A agent endpoint URL.
        a2a_agent_name: Name of the A2A agent from agent card.
        negotiated_transport: The transport protocol selected (JSONRPC, GRPC, HTTP+JSON).
        negotiated_url: The URL to use for the selected transport.
        source: How the transport was selected ('client_preferred', 'server_preferred', 'fallback').
        client_supported_transports: Transports the client can use.
        server_supported_transports: Transports the server supports.
        server_preferred_transport: The server's preferred transport from AgentCard.
        client_preferred_transport: The client's preferred transport if set.
        metadata: Custom A2A metadata key-value pairs.
    """

    type: str = "a2a_transport_negotiated"
    endpoint: str
    a2a_agent_name: str | None = None
    negotiated_transport: str
    negotiated_url: str
    source: str
    client_supported_transports: list[str]
    server_supported_transports: list[str]
    server_preferred_transport: str
    client_preferred_transport: str | None = None
    metadata: dict[str, Any] | None = None


class A2AContentTypeNegotiatedEvent(A2AEventBase):
    """Event emitted when content types are negotiated with an A2A agent.

    This event is emitted after comparing client and server input/output mode
    capabilities to determine compatible MIME types for communication.

    Attributes:
        endpoint: A2A agent endpoint URL.
        a2a_agent_name: Name of the A2A agent from agent card.
        skill_name: Skill name if negotiation was skill-specific.
        client_input_modes: MIME types the client can send.
        client_output_modes: MIME types the client can accept.
        server_input_modes: MIME types the server accepts.
        server_output_modes: MIME types the server produces.
        negotiated_input_modes: Compatible input MIME types selected.
        negotiated_output_modes: Compatible output MIME types selected.
        negotiation_success: Whether compatible types were found for both directions.
        metadata: Custom A2A metadata key-value pairs.
    """

    type: str = "a2a_content_type_negotiated"
    endpoint: str
    a2a_agent_name: str | None = None
    skill_name: str | None = None
    client_input_modes: list[str]
    client_output_modes: list[str]
    server_input_modes: list[str]
    server_output_modes: list[str]
    negotiated_input_modes: list[str]
    negotiated_output_modes: list[str]
    negotiation_success: bool = True
    metadata: dict[str, Any] | None = None


# -----------------------------------------------------------------------------
# Context Lifecycle Events
# -----------------------------------------------------------------------------


class A2AContextCreatedEvent(A2AEventBase):
    """Event emitted when an A2A context is created.

    Contexts group related tasks in a conversation or workflow.

    Attributes:
        context_id: Unique identifier for the context.
        created_at: Unix timestamp when context was created.
        metadata: Custom A2A metadata key-value pairs.
    """

    type: str = "a2a_context_created"
    context_id: str
    created_at: float
    metadata: dict[str, Any] | None = None


class A2AContextExpiredEvent(A2AEventBase):
    """Event emitted when an A2A context expires due to TTL.

    Attributes:
        context_id: The expired context identifier.
        created_at: Unix timestamp when context was created.
        age_seconds: How long the context existed before expiring.
        task_count: Number of tasks in the context when expired.
        metadata: Custom A2A metadata key-value pairs.
    """

    type: str = "a2a_context_expired"
    context_id: str
    created_at: float
    age_seconds: float
    task_count: int
    metadata: dict[str, Any] | None = None


class A2AContextIdleEvent(A2AEventBase):
    """Event emitted when an A2A context becomes idle.

    Idle contexts have had no activity for the configured threshold.

    Attributes:
        context_id: The idle context identifier.
        idle_seconds: Seconds since last activity.
        task_count: Number of tasks in the context.
        metadata: Custom A2A metadata key-value pairs.
    """

    type: str = "a2a_context_idle"
    context_id: str
    idle_seconds: float
    task_count: int
    metadata: dict[str, Any] | None = None


class A2AContextCompletedEvent(A2AEventBase):
    """Event emitted when all tasks in an A2A context complete.

    Attributes:
        context_id: The completed context identifier.
        total_tasks: Total number of tasks that were in the context.
        duration_seconds: Total context lifetime in seconds.
        metadata: Custom A2A metadata key-value pairs.
    """

    type: str = "a2a_context_completed"
    context_id: str
    total_tasks: int
    duration_seconds: float
    metadata: dict[str, Any] | None = None


class A2AContextPrunedEvent(A2AEventBase):
    """Event emitted when an A2A context is pruned (deleted).

    Pruning removes the context metadata and optionally associated tasks.

    Attributes:
        context_id: The pruned context identifier.
        task_count: Number of tasks that were in the context.
        age_seconds: How long the context existed before pruning.
        metadata: Custom A2A metadata key-value pairs.
    """

    type: str = "a2a_context_pruned"
    context_id: str
    task_count: int
    age_seconds: float
    metadata: dict[str, Any] | None = None
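The event classes removed above follow crewai's event-bus listener pattern. A hedged sketch of how one of them could be consumed; the `crewai_event_bus.on` decorator matches crewai's documented events API, but the import path for the bus varies by version and is an assumption here:

from crewai.events import crewai_event_bus  # assumed import path


@crewai_event_bus.on(A2AContextExpiredEvent)
def log_expired_context(source, event) -> None:
    # Surface contexts that aged out so TTL settings can be tuned.
    print(
        f"A2A context {event.context_id} expired after "
        f"{event.age_seconds:.0f}s with {event.task_count} task(s)"
    )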
@@ -365,20 +365,11 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
    printer=self._printer,
    from_task=self.task,
    from_agent=self.agent,
    response_model=self.response_model,
    response_model=None,
    executor_context=self,
    verbose=self.agent.verbose,
)

# If response is structured output (BaseModel), store it directly
if isinstance(answer, BaseModel):
    self.state.current_answer = AgentFinish(
        thought="",
        output=answer,
        text=str(answer),
    )
    return "parsed"

# Parse the LLM response
formatted_answer = process_llm_response(answer, self.use_stop_words)

@@ -445,7 +436,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
    available_functions=None,
    from_task=self.task,
    from_agent=self.agent,
    response_model=self.response_model,
    response_model=None,
    executor_context=self,
    verbose=self.agent.verbose,
)
@@ -457,17 +448,6 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):

    return "native_tool_calls"

# Structured output (BaseModel) response
if isinstance(answer, BaseModel):
    self.state.current_answer = AgentFinish(
        thought="",
        output=answer,
        text=str(answer),
    )
    self._invoke_step_callback(self.state.current_answer)
    self._append_message_to_state(str(answer))
    return "native_finished"

# Text response - this is the final answer
if isinstance(answer, str):
    self.state.current_answer = AgentFinish(
@@ -815,6 +795,10 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
    "name": func_name,
    "content": result,
}

if after_hook_context.files:
    tool_message["files"] = after_hook_context.files

self.state.messages.append(tool_message)

# Log the tool execution
@@ -1320,12 +1304,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
Returns:
    Final answer after feedback.
"""
output_str = (
    str(formatted_answer.output)
    if isinstance(formatted_answer.output, BaseModel)
    else formatted_answer.output
)
human_feedback = self._ask_human_input(output_str)
human_feedback = self._ask_human_input(formatted_answer.output)

if self._is_training_mode():
    return self._handle_training_feedback(formatted_answer, human_feedback)
@@ -1397,12 +1376,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
self.state.ask_for_human_input = False
else:
    answer = self._process_feedback_iteration(feedback)
    output_str = (
        str(answer.output)
        if isinstance(answer.output, BaseModel)
        else answer.output
    )
    feedback = self._ask_human_input(output_str)
    feedback = self._ask_human_input(answer.output)

return answer
@@ -5,6 +5,7 @@ from typing import TYPE_CHECKING, Any
from crewai.events.event_listener import event_listener
from crewai.hooks.types import AfterToolCallHookType, BeforeToolCallHookType
from crewai.utilities.printer import Printer
from crewai.utilities.types import FileInput


if TYPE_CHECKING:
@@ -34,6 +35,9 @@ class ToolCallHookContext:
        crew: Crew instance (may be None)
        tool_result: Tool execution result (only set for after_tool_call hooks).
            Can be modified by returning a new string from after_tool_call hook.
        files: Optional dictionary of files to attach to the tool message.
            Can be set by after_tool_call hooks to inject files into the LLM context.
            These files will be formatted according to the LLM provider's requirements.
    """

    def __init__(
@@ -64,6 +68,7 @@ class ToolCallHookContext:
        self.task = task
        self.crew = crew
        self.tool_result = tool_result
        self.files: dict[str, FileInput] | None = None

    def request_human_input(
        self,
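The `files` attribute added here is what the executor hunks above copy into `tool_message["files"]`. A minimal sketch of an after_tool_call hook populating it; the single-argument hook shape is inferred from the AfterToolCallHookType alias imported above, and the tool name, file key, and path are all illustrative assumptions:

def attach_chart(context: ToolCallHookContext) -> str | None:
    # Hypothetical: after a charting tool runs, expose its output file to the
    # LLM context. "plot_sales" and the path are placeholders, not real names.
    if getattr(context, "tool_name", None) == "plot_sales":
        context.files = {"chart.png": "./output/chart.png"}
    return None  # returning None keeps context.tool_result unchanged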
@@ -2,10 +2,8 @@ from __future__ import annotations

import asyncio
from collections.abc import Callable
from functools import wraps
import inspect
import json
from types import MethodType
from typing import (
    TYPE_CHECKING,
    Any,
@@ -32,8 +30,6 @@ from typing_extensions import Self

if TYPE_CHECKING:
    from crewai_files import FileInput

    from crewai.a2a.config import A2AClientConfig, A2AConfig, A2AServerConfig

    from crewai.agents.agent_builder.base_agent import BaseAgent
    from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
    from crewai.agents.cache.cache_handler import CacheHandler
@@ -88,81 +84,6 @@ from crewai.utilities.tool_utils import execute_tool_and_check_finality
from crewai.utilities.types import LLMMessage


def _kickoff_with_a2a_support(
    agent: LiteAgent,
    original_kickoff: Callable[..., LiteAgentOutput],
    messages: str | list[LLMMessage],
    response_format: type[BaseModel] | None,
    input_files: dict[str, FileInput] | None,
    extension_registry: Any,
) -> LiteAgentOutput:
    """Wrap kickoff with A2A delegation using Task adapter.

    Args:
        agent: The LiteAgent instance.
        original_kickoff: The original kickoff method.
        messages: Input messages.
        response_format: Optional response format.
        input_files: Optional input files.
        extension_registry: A2A extension registry.

    Returns:
        LiteAgentOutput from either local execution or A2A delegation.
    """
    from crewai.a2a.utils.response_model import get_a2a_agents_and_response_model
    from crewai.a2a.wrapper import _execute_task_with_a2a
    from crewai.task import Task

    a2a_agents, agent_response_model = get_a2a_agents_and_response_model(agent.a2a)

    if not a2a_agents:
        return original_kickoff(messages, response_format, input_files)

    if isinstance(messages, str):
        description = messages
    else:
        content = next(
            (m["content"] for m in reversed(messages) if m["role"] == "user"),
            None,
        )
        description = content if isinstance(content, str) else ""

    if not description:
        return original_kickoff(messages, response_format, input_files)

    fake_task = Task(
        description=description,
        agent=agent,
        expected_output="Result from A2A delegation",
        input_files=input_files or {},
    )

    def task_to_kickoff_adapter(
        self: Any, task: Task, context: str | None, tools: list[Any] | None
    ) -> str:
        result = original_kickoff(messages, response_format, input_files)
        return result.raw

    result_str = _execute_task_with_a2a(
        self=agent,  # type: ignore[arg-type]
        a2a_agents=a2a_agents,
        original_fn=task_to_kickoff_adapter,
        task=fake_task,
        agent_response_model=agent_response_model,
        context=None,
        tools=None,
        extension_registry=extension_registry,
    )

    return LiteAgentOutput(
        raw=result_str,
        pydantic=None,
        agent_role=agent.role,
        usage_metrics=None,
        messages=[],
    )


class LiteAgent(FlowTrackable, BaseModel):
    """
    A lightweight agent that can process messages and use tools.
@@ -233,17 +154,6 @@ class LiteAgent(FlowTrackable, BaseModel):
    guardrail_max_retries: int = Field(
        default=3, description="Maximum number of retries when guardrail fails"
    )
    a2a: (
        list[A2AConfig | A2AServerConfig | A2AClientConfig]
        | A2AConfig
        | A2AServerConfig
        | A2AClientConfig
        | None
    ) = Field(
        default=None,
        description="A2A (Agent-to-Agent) configuration for delegating tasks to remote agents. "
        "Can be a single A2AConfig/A2AClientConfig/A2AServerConfig, or a list of configurations.",
    )
    tools_results: list[dict[str, Any]] = Field(
        default_factory=list, description="Results of the tools used by the agent."
    )
@@ -299,52 +209,6 @@ class LiteAgent(FlowTrackable, BaseModel):

        return self

    @model_validator(mode="after")
    def setup_a2a_support(self) -> Self:
        """Setup A2A extensions and server methods if a2a config exists."""
        if self.a2a:
            from crewai.a2a.config import A2AClientConfig, A2AConfig
            from crewai.a2a.extensions.registry import (
                create_extension_registry_from_config,
            )
            from crewai.a2a.utils.agent_card import inject_a2a_server_methods

            configs = self.a2a if isinstance(self.a2a, list) else [self.a2a]
            client_configs = [
                config
                for config in configs
                if isinstance(config, (A2AConfig, A2AClientConfig))
            ]

            extension_registry = (
                create_extension_registry_from_config(client_configs)
                if client_configs
                else create_extension_registry_from_config([])
            )
            extension_registry.inject_all_tools(self)  # type: ignore[arg-type]
            inject_a2a_server_methods(self)  # type: ignore[arg-type]

            original_kickoff = self.kickoff

            @wraps(original_kickoff)
            def kickoff_with_a2a(
                messages: str | list[LLMMessage],
                response_format: type[BaseModel] | None = None,
                input_files: dict[str, FileInput] | None = None,
            ) -> LiteAgentOutput:
                return _kickoff_with_a2a_support(
                    self,
                    original_kickoff,
                    messages,
                    response_format,
                    input_files,
                    extension_registry,
                )

            object.__setattr__(self, "kickoff", MethodType(kickoff_with_a2a, self))

        return self

    @model_validator(mode="after")
    def ensure_guardrail_is_callable(self) -> Self:
        if callable(self.guardrail):
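The removed validator above leans on one generic mechanic worth isolating: rebinding a wrapped bound method onto a Pydantic instance with types.MethodType plus object.__setattr__, which bypasses Pydantic's attribute interception. A stripped-down sketch of just that mechanic (Greeter and the logging wrapper are illustrative):

from functools import wraps
from types import MethodType

from pydantic import BaseModel


class Greeter(BaseModel):
    name: str

    def greet(self) -> str:
        return f"hello from {self.name}"


g = Greeter(name="lite-agent")
original = g.greet  # capture the original bound method


@wraps(original)
def greet_wrapped(self: Greeter) -> str:
    # Delegate to the captured bound method, adding behavior around it.
    return original() + " (wrapped)"


# Pydantic models intercept normal attribute assignment, so bypass it;
# the instance attribute then shadows the class method on lookup.
object.__setattr__(g, "greet", MethodType(greet_wrapped, g))
print(g.greet())  # hello from lite-agent (wrapped)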
@@ -762,9 +626,7 @@ class LiteAgent(FlowTrackable, BaseModel):
        except Exception as e:
            raise e

        formatted_answer = process_llm_response(
            cast(str, answer), self.use_stop_words
        )
        formatted_answer = process_llm_response(answer, self.use_stop_words)

        if isinstance(formatted_answer, AgentAction):
            try:
@@ -847,21 +709,3 @@ class LiteAgent(FlowTrackable, BaseModel):
    ) -> None:
        """Append a message to the message list with the given role."""
        self._messages.append(format_message_for_llm(text, role=role))


try:
    from crewai.a2a.config import (
        A2AClientConfig as _A2AClientConfig,
        A2AConfig as _A2AConfig,
        A2AServerConfig as _A2AServerConfig,
    )

    LiteAgent.model_rebuild(
        _types_namespace={
            "A2AConfig": _A2AConfig,
            "A2AClientConfig": _A2AClientConfig,
            "A2AServerConfig": _A2AServerConfig,
        }
    )
except ImportError:
    pass
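The removed try/except above resolves LiteAgent's string forward references only when the optional a2a package imports cleanly. The same model_rebuild(_types_namespace=...) mechanic in isolation (Node and NodeRef are illustrative names):

from pydantic import BaseModel


class Node(BaseModel):
    value: int
    next: "NodeRef | None" = None  # forward reference, resolved below


class NodeRef(BaseModel):
    target: int


# Supply the namespace explicitly, mirroring the diff's _types_namespace usage.
Node.model_rebuild(_types_namespace={"NodeRef": NodeRef})
print(Node(value=1, next=NodeRef(target=2)))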
@@ -104,7 +104,6 @@ class TestA2AStreamingIntegration:
    message=test_message,
    new_messages=new_messages,
    agent_card=agent_card,
    endpoint=agent_card.url,
)

assert isinstance(result, dict)
@@ -226,7 +225,6 @@ class TestA2APushNotificationHandler:
    result_store=mock_store,
    polling_timeout=30.0,
    polling_interval=1.0,
    endpoint=mock_agent_card.url,
)

mock_store.wait_for_result.assert_called_once_with(
@@ -289,7 +287,6 @@ class TestA2APushNotificationHandler:
    result_store=mock_store,
    polling_timeout=5.0,
    polling_interval=0.5,
    endpoint=mock_agent_card.url,
)

assert result["status"] == TaskState.failed
@@ -320,7 +317,6 @@ class TestA2APushNotificationHandler:
    message=test_msg,
    new_messages=new_messages,
    agent_card=mock_agent_card,
    endpoint=mock_agent_card.url,
)

assert result["status"] == TaskState.failed

@@ -43,7 +43,6 @@ def mock_context() -> MagicMock:
    context.context_id = "test-context-456"
    context.get_user_input.return_value = "Test user message"
    context.message = MagicMock(spec=Message)
    context.message.parts = []
    context.current_task = None
    return context

@@ -1,211 +0,0 @@
"""Tests for Agent.kickoff() with A2A delegation using VCR cassettes."""

from __future__ import annotations

import os

import pytest

from crewai import Agent
from crewai.a2a.config import A2AClientConfig


A2A_TEST_ENDPOINT = os.getenv(
    "A2A_TEST_ENDPOINT", "http://0.0.0.0:9999/.well-known/agent-card.json"
)


class TestAgentA2AKickoff:
    """Tests for Agent.kickoff() with A2A delegation."""

    @pytest.fixture
    def researcher_agent(self) -> Agent:
        """Create a research agent with A2A configuration."""
        return Agent(
            role="Research Analyst",
            goal="Find and analyze information about AI developments",
            backstory="Expert researcher with access to remote specialized agents",
            verbose=True,
            a2a=[
                A2AClientConfig(
                    endpoint=A2A_TEST_ENDPOINT,
                    fail_fast=False,
                    max_turns=3,  # Limit turns for testing
                )
            ],
        )

    @pytest.mark.vcr()
    def test_agent_kickoff_delegates_to_a2a(self, researcher_agent: Agent) -> None:
        """Test that agent.kickoff() delegates to A2A server."""
        result = researcher_agent.kickoff(
            "Use the remote A2A agent to find out what the current time is in New York."
        )

        assert result is not None
        assert result.raw is not None
        assert isinstance(result.raw, str)
        assert len(result.raw) > 0

    @pytest.mark.vcr()
    def test_agent_kickoff_with_calculator_skill(
        self, researcher_agent: Agent
    ) -> None:
        """Test that agent can delegate calculation to A2A server."""
        result = researcher_agent.kickoff(
            "Ask the remote A2A agent to calculate 25 times 17."
        )

        assert result is not None
        assert result.raw is not None
        assert "425" in result.raw or "425.0" in result.raw

    @pytest.mark.vcr()
    def test_agent_kickoff_with_conversation_skill(
        self, researcher_agent: Agent
    ) -> None:
        """Test that agent can have a conversation with A2A server."""
        result = researcher_agent.kickoff(
            "Delegate to the remote A2A agent to explain quantum computing in simple terms."
        )

        assert result is not None
        assert result.raw is not None
        assert isinstance(result.raw, str)
        assert len(result.raw) > 50  # Should have a meaningful response

    @pytest.mark.vcr()
    def test_agent_kickoff_returns_lite_agent_output(
        self, researcher_agent: Agent
    ) -> None:
        """Test that kickoff returns LiteAgentOutput with correct structure."""
        from crewai.lite_agent_output import LiteAgentOutput

        result = researcher_agent.kickoff(
            "Use the A2A agent to tell me what time it is."
        )

        assert isinstance(result, LiteAgentOutput)
        assert result.raw is not None
        assert result.agent_role == "Research Analyst"
        assert isinstance(result.messages, list)

    @pytest.mark.vcr()
    def test_agent_kickoff_handles_multi_turn_conversation(
        self, researcher_agent: Agent
    ) -> None:
        """Test that agent handles multi-turn A2A conversations."""
        # This should trigger multiple turns of conversation
        result = researcher_agent.kickoff(
            "Ask the remote A2A agent about recent developments in AI agent communication protocols."
        )

        assert result is not None
        assert result.raw is not None
        # The response should contain information about A2A or agent protocols
        assert isinstance(result.raw, str)

    @pytest.mark.vcr()
    def test_agent_without_a2a_works_normally(self) -> None:
        """Test that agent without A2A config works normally."""
        agent = Agent(
            role="Simple Assistant",
            goal="Help with basic tasks",
            backstory="A helpful assistant",
            verbose=False,
        )

        # This should work without A2A delegation
        result = agent.kickoff("Say hello")

        assert result is not None
        assert result.raw is not None

    @pytest.mark.vcr()
    def test_agent_kickoff_with_failed_a2a_endpoint(self) -> None:
        """Test that agent handles failed A2A connection gracefully."""
        agent = Agent(
            role="Research Analyst",
            goal="Find information",
            backstory="Expert researcher",
            verbose=False,
            a2a=[
                A2AClientConfig(
                    endpoint="http://nonexistent:9999/.well-known/agent-card.json",
                    fail_fast=False,
                )
            ],
        )

        # Should fallback to local LLM when A2A fails
        result = agent.kickoff("What is 2 + 2?")

        assert result is not None
        assert result.raw is not None

    @pytest.mark.vcr()
    def test_agent_kickoff_with_list_messages(
        self, researcher_agent: Agent
    ) -> None:
        """Test that agent.kickoff() works with list of messages."""
        messages = [
            {
                "role": "user",
                "content": "Delegate to the A2A agent to find the current time in Tokyo.",
            },
        ]

        result = researcher_agent.kickoff(messages)

        assert result is not None
        assert result.raw is not None
        assert isinstance(result.raw, str)


class TestAgentA2AKickoffAsync:
    """Tests for async Agent.kickoff_async() with A2A delegation."""

    @pytest.fixture
    def researcher_agent(self) -> Agent:
        """Create a research agent with A2A configuration."""
        return Agent(
            role="Research Analyst",
            goal="Find and analyze information",
            backstory="Expert researcher with access to remote agents",
            verbose=True,
            a2a=[
                A2AClientConfig(
                    endpoint=A2A_TEST_ENDPOINT,
                    fail_fast=False,
                    max_turns=3,
                )
            ],
        )

    @pytest.mark.vcr()
    @pytest.mark.asyncio
    async def test_agent_kickoff_async_delegates_to_a2a(
        self, researcher_agent: Agent
    ) -> None:
        """Test that agent.kickoff_async() delegates to A2A server."""
        result = await researcher_agent.kickoff_async(
            "Use the remote A2A agent to calculate 10 plus 15."
        )

        assert result is not None
        assert result.raw is not None
        assert isinstance(result.raw, str)

    @pytest.mark.vcr()
    @pytest.mark.asyncio
    async def test_agent_kickoff_async_with_calculator(
        self, researcher_agent: Agent
    ) -> None:
        """Test async delegation with calculator skill."""
        result = await researcher_agent.kickoff_async(
            "Ask the A2A agent to calculate 100 divided by 4."
        )

        assert result is not None
        assert result.raw is not None
        assert "25" in result.raw or "25.0" in result.raw
@@ -1,623 +0,0 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert
|
||||
researcher with access to remote specialized agents\nYour personal goal is:
|
||||
Find and analyze information about AI developments"},{"role":"user","content":"\nCurrent
|
||||
Task: Use the remote A2A agent to find out what the current time is in New York.\n\nProvide
|
||||
your complete response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A
|
||||
agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A
|
||||
Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true,
|
||||
this is sent to the A2A agent. If is_a2a=false, this is your final answer ending
|
||||
the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set
|
||||
to false when the remote agent has answered your question - extract their answer
|
||||
and return it as your final message. Set to true ONLY if you need to ask a NEW,
|
||||
DIFFERENT question. NEVER repeat the same request - if the conversation history
|
||||
shows the agent already answered, set is_a2a=false immediately.","title":"Is
|
||||
A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
|
||||
headers:
|
||||
User-Agent:
|
||||
- X-USER-AGENT-XXX
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- ACCEPT-ENCODING-XXX
|
||||
authorization:
|
||||
- AUTHORIZATION-XXX
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1383'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
x-stainless-arch:
|
||||
- X-STAINLESS-ARCH-XXX
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-helper-method:
|
||||
- beta.chat.completions.parse
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- X-STAINLESS-OS-XXX
|
||||
x-stainless-package-version:
|
||||
- 1.83.0
|
||||
x-stainless-read-timeout:
|
||||
- X-STAINLESS-READ-TIMEOUT-XXX
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.10
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: "{\n \"id\": \"chatcmpl-D3jFBtWc84q3Fih1RnSJ50RUpSFl9\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1769781273,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"What
|
||||
is the current time in New York?\\\",\\\"is_a2a\\\":true}\",\n \"refusal\":
|
||||
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
|
||||
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
|
||||
280,\n \"completion_tokens\": 46,\n \"total_tokens\": 326,\n \"prompt_tokens_details\":
|
||||
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
|
||||
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
|
||||
\"default\",\n \"system_fingerprint\": \"fp_38699cb0fe\"\n}\n"
|
||||
headers:
|
||||
CF-RAY:
|
||||
- CF-RAY-XXX
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Fri, 30 Jan 2026 13:54:34 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- SET-COOKIE-XXX
|
||||
Strict-Transport-Security:
|
||||
- STS-XXX
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- X-CONTENT-TYPE-XXX
|
||||
access-control-expose-headers:
|
||||
- ACCESS-CONTROL-XXX
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- OPENAI-ORG-XXX
|
||||
openai-processing-ms:
|
||||
- '923'
|
||||
openai-project:
|
||||
- OPENAI-PROJECT-XXX
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-requests:
|
||||
- X-RATELIMIT-LIMIT-REQUESTS-XXX
|
||||
x-ratelimit-limit-tokens:
|
||||
- X-RATELIMIT-LIMIT-TOKENS-XXX
|
||||
x-ratelimit-remaining-requests:
|
||||
- X-RATELIMIT-REMAINING-REQUESTS-XXX
|
||||
x-ratelimit-remaining-tokens:
|
||||
- X-RATELIMIT-REMAINING-TOKENS-XXX
|
||||
x-ratelimit-reset-requests:
|
||||
- X-RATELIMIT-RESET-REQUESTS-XXX
|
||||
x-ratelimit-reset-tokens:
|
||||
- X-RATELIMIT-RESET-TOKENS-XXX
|
||||
x-request-id:
|
||||
- X-REQUEST-ID-XXX
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"id":"eddd6b1f-08fe-49b0-a0ac-475a0affb219","jsonrpc":"2.0","method":"message/stream","params":{"configuration":{"acceptedOutputModes":["application/json"],"blocking":true},"message":{"kind":"message","messageId":"9c0ca44b-2d30-448a-8b38-58f8db1f444f","parts":[{"kind":"text","text":"What
|
||||
is the current time in New York?"}],"referenceTaskIds":[],"role":"user"}}}'
|
||||
headers:
|
||||
User-Agent:
|
||||
- X-USER-AGENT-XXX
|
||||
accept:
|
||||
- '*/*, text/event-stream'
|
||||
accept-encoding:
|
||||
- ACCEPT-ENCODING-XXX
|
||||
cache-control:
|
||||
- no-store
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '364'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- localhost:9999
|
||||
method: POST
|
||||
uri: http://localhost:9999
|
||||
response:
|
||||
body:
|
||||
string: "data: {\"id\":\"eddd6b1f-08fe-49b0-a0ac-475a0affb219\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"da60b9dc-45e0-4124-b6c7-591177a664a2\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"submitted\"},\"taskId\":\"c77704e8-1c73-49b0-940e-1ca55c53bb3a\"}}\r\n\r\ndata:
|
||||
{\"id\":\"eddd6b1f-08fe-49b0-a0ac-475a0affb219\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"da60b9dc-45e0-4124-b6c7-591177a664a2\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"working\"},\"taskId\":\"c77704e8-1c73-49b0-940e-1ca55c53bb3a\"}}\r\n\r\ndata:
|
||||
{\"id\":\"eddd6b1f-08fe-49b0-a0ac-475a0affb219\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"da60b9dc-45e0-4124-b6c7-591177a664a2\",\"final\":true,\"kind\":\"status-update\",\"status\":{\"message\":{\"kind\":\"message\",\"messageId\":\"fbbb037b-d7fc-4e31-8bff-eccebf3c0053\",\"parts\":[{\"kind\":\"text\",\"text\":\"[Tool:
|
||||
get_time] 2026-01-30 08:54:35 EST (America/New_York)\\nThe current time in
|
||||
New York is 08:54 AM on January 30, 2026.\"}],\"role\":\"agent\"},\"state\":\"completed\"},\"taskId\":\"c77704e8-1c73-49b0-940e-1ca55c53bb3a\"}}\r\n\r\n"
|
||||
headers:
|
||||
cache-control:
|
||||
- no-store
|
||||
connection:
|
||||
- keep-alive
|
||||
content-type:
|
||||
- text/event-stream; charset=utf-8
|
||||
date:
|
||||
- Fri, 30 Jan 2026 13:54:33 GMT
|
||||
server:
|
||||
- uvicorn
|
||||
transfer-encoding:
|
||||
- chunked
|
||||
x-accel-buffering:
|
||||
- 'no'
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert
|
||||
researcher with access to remote specialized agents\nYour personal goal is:
|
||||
Find and analyze information about AI developments"},{"role":"user","content":"\nCurrent
|
||||
Task: Use the remote A2A agent to find out what the current time is in New York.\n\nProvide
|
||||
your complete response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A
|
||||
agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A
|
||||
Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true,
|
||||
this is sent to the A2A agent. If is_a2a=false, this is your final answer ending
|
||||
the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set
|
||||
to false when the remote agent has answered your question - extract their answer
|
||||
and return it as your final message. Set to true ONLY if you need to ask a NEW,
|
||||
DIFFERENT question. NEVER repeat the same request - if the conversation history
|
||||
shows the agent already answered, set is_a2a=false immediately.","title":"Is
|
||||
A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
|
||||
headers:
|
||||
User-Agent:
|
||||
- X-USER-AGENT-XXX
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- ACCEPT-ENCODING-XXX
|
||||
authorization:
|
||||
- AUTHORIZATION-XXX
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1383'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- COOKIE-XXX
|
||||
host:
|
||||
- api.openai.com
|
||||
x-stainless-arch:
|
||||
- X-STAINLESS-ARCH-XXX
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-helper-method:
|
||||
- beta.chat.completions.parse
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- X-STAINLESS-OS-XXX
|
||||
x-stainless-package-version:
|
||||
- 1.83.0
|
||||
x-stainless-read-timeout:
|
||||
- X-STAINLESS-READ-TIMEOUT-XXX
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.10
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: "{\n \"id\": \"chatcmpl-D3jFFtf0IQIJTlZw5YAHOgFjlzDdE\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1769781277,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"What
|
||||
is the current time in New York?\\\",\\\"is_a2a\\\":true}\",\n \"refusal\":
|
||||
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
|
||||
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
|
||||
280,\n \"completion_tokens\": 46,\n \"total_tokens\": 326,\n \"prompt_tokens_details\":
|
||||
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
|
||||
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
|
||||
\"default\",\n \"system_fingerprint\": \"fp_e01c6f58e1\"\n}\n"
|
||||
headers:
|
||||
CF-RAY:
|
||||
- CF-RAY-XXX
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Fri, 30 Jan 2026 13:54:39 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- STS-XXX
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- X-CONTENT-TYPE-XXX
|
||||
access-control-expose-headers:
|
||||
- ACCESS-CONTROL-XXX
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- OPENAI-ORG-XXX
|
||||
openai-processing-ms:
|
||||
- '1509'
|
||||
openai-project:
|
||||
- OPENAI-PROJECT-XXX
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-requests:
|
||||
- X-RATELIMIT-LIMIT-REQUESTS-XXX
|
||||
x-ratelimit-limit-tokens:
|
||||
- X-RATELIMIT-LIMIT-TOKENS-XXX
|
||||
x-ratelimit-remaining-requests:
|
||||
- X-RATELIMIT-REMAINING-REQUESTS-XXX
|
||||
x-ratelimit-remaining-tokens:
|
||||
- X-RATELIMIT-REMAINING-TOKENS-XXX
|
||||
x-ratelimit-reset-requests:
|
||||
- X-RATELIMIT-RESET-REQUESTS-XXX
|
||||
x-ratelimit-reset-tokens:
|
||||
- X-RATELIMIT-RESET-TOKENS-XXX
|
||||
x-request-id:
|
||||
- X-REQUEST-ID-XXX
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"id":"d5380fcb-ff90-4cf0-91bc-fbffa31da036","jsonrpc":"2.0","method":"message/stream","params":{"configuration":{"acceptedOutputModes":["application/json"],"blocking":true},"message":{"kind":"message","messageId":"4d92d0a5-c872-4781-927d-a8679e676007","parts":[{"kind":"text","text":"What
|
||||
is the current time in New York?"}],"referenceTaskIds":[],"role":"user"}}}'
|
||||
headers:
|
||||
User-Agent:
|
||||
- X-USER-AGENT-XXX
|
||||
accept:
|
||||
- '*/*, text/event-stream'
|
||||
accept-encoding:
|
||||
- ACCEPT-ENCODING-XXX
|
||||
cache-control:
|
||||
- no-store
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '364'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- localhost:9999
|
||||
method: POST
|
||||
uri: http://localhost:9999
|
||||
response:
|
||||
body:
|
||||
string: "data: {\"id\":\"d5380fcb-ff90-4cf0-91bc-fbffa31da036\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"fc104771-f241-4385-bc03-c4b880e97044\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"submitted\"},\"taskId\":\"1b163710-6797-432d-ad93-322e90b6fe85\"}}\r\n\r\ndata:
|
||||
{\"id\":\"d5380fcb-ff90-4cf0-91bc-fbffa31da036\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"fc104771-f241-4385-bc03-c4b880e97044\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"working\"},\"taskId\":\"1b163710-6797-432d-ad93-322e90b6fe85\"}}\r\n\r\ndata:
|
||||
{\"id\":\"d5380fcb-ff90-4cf0-91bc-fbffa31da036\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"fc104771-f241-4385-bc03-c4b880e97044\",\"final\":true,\"kind\":\"status-update\",\"status\":{\"message\":{\"kind\":\"message\",\"messageId\":\"27ab0449-b0da-4250-a319-56554cbc26ad\",\"parts\":[{\"kind\":\"text\",\"text\":\"[Tool:
|
||||
get_time] 2026-01-30 08:54:40 EST (America/New_York)\\nThe current time in
|
||||
New York is 08:54:40 AM EST on January 30, 2026.\"}],\"role\":\"agent\"},\"state\":\"completed\"},\"taskId\":\"1b163710-6797-432d-ad93-322e90b6fe85\"}}\r\n\r\n"
|
||||
headers:
|
||||
cache-control:
|
||||
- no-store
|
||||
connection:
|
||||
- keep-alive
|
||||
content-type:
|
||||
- text/event-stream; charset=utf-8
|
||||
date:
|
||||
- Fri, 30 Jan 2026 13:54:39 GMT
|
||||
server:
|
||||
- uvicorn
|
||||
transfer-encoding:
|
||||
- chunked
|
||||
x-accel-buffering:
|
||||
- 'no'
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert
|
||||
researcher with access to remote specialized agents\nYour personal goal is:
|
||||
Find and analyze information about AI developments"},{"role":"user","content":"\nCurrent
|
||||
Task: Use the remote A2A agent to find out what the current time is in New York.\n\nProvide
|
||||
your complete response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A
|
||||
agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A
|
||||
Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true,
|
||||
this is sent to the A2A agent. If is_a2a=false, this is your final answer ending
|
||||
the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set
|
||||
to false when the remote agent has answered your question - extract their answer
|
||||
and return it as your final message. Set to true ONLY if you need to ask a NEW,
|
||||
DIFFERENT question. NEVER repeat the same request - if the conversation history
|
||||
shows the agent already answered, set is_a2a=false immediately.","title":"Is
|
||||
A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
|
||||
headers:
|
||||
User-Agent:
|
||||
- X-USER-AGENT-XXX
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- ACCEPT-ENCODING-XXX
|
||||
authorization:
|
||||
- AUTHORIZATION-XXX
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1383'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- COOKIE-XXX
|
||||
host:
|
||||
- api.openai.com
|
||||
x-stainless-arch:
|
||||
- X-STAINLESS-ARCH-XXX
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-helper-method:
|
||||
- beta.chat.completions.parse
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- X-STAINLESS-OS-XXX
|
||||
x-stainless-package-version:
|
||||
- 1.83.0
|
||||
x-stainless-read-timeout:
|
||||
- X-STAINLESS-READ-TIMEOUT-XXX
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.10
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: "{\n \"id\": \"chatcmpl-D3jFK8UEJgrLunZZZfiCavu4DMQ5r\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1769781282,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"What
|
||||
is the current local time in New York?\\\",\\\"is_a2a\\\":true}\",\n \"refusal\":
|
||||
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
|
||||
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
|
||||
280,\n \"completion_tokens\": 47,\n \"total_tokens\": 327,\n \"prompt_tokens_details\":
|
||||
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
|
||||
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
|
||||
\"default\",\n \"system_fingerprint\": \"fp_e01c6f58e1\"\n}\n"
|
||||
headers:
|
||||
CF-RAY:
|
||||
- CF-RAY-XXX
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Fri, 30 Jan 2026 13:54:42 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- STS-XXX
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- X-CONTENT-TYPE-XXX
|
||||
access-control-expose-headers:
|
||||
- ACCESS-CONTROL-XXX
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- OPENAI-ORG-XXX
|
||||
openai-processing-ms:
|
||||
- '806'
|
||||
openai-project:
|
||||
- OPENAI-PROJECT-XXX
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-requests:
|
||||
- X-RATELIMIT-LIMIT-REQUESTS-XXX
|
||||
x-ratelimit-limit-tokens:
|
||||
- X-RATELIMIT-LIMIT-TOKENS-XXX
|
||||
x-ratelimit-remaining-requests:
|
||||
- X-RATELIMIT-REMAINING-REQUESTS-XXX
|
||||
x-ratelimit-remaining-tokens:
|
||||
- X-RATELIMIT-REMAINING-TOKENS-XXX
|
||||
x-ratelimit-reset-requests:
|
||||
- X-RATELIMIT-RESET-REQUESTS-XXX
|
||||
x-ratelimit-reset-tokens:
|
||||
- X-RATELIMIT-RESET-TOKENS-XXX
|
||||
x-request-id:
|
||||
- X-REQUEST-ID-XXX
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"id":"7c4e4330-88aa-4126-98ec-dbbf6dc58abc","jsonrpc":"2.0","method":"message/stream","params":{"configuration":{"acceptedOutputModes":["application/json"],"blocking":true},"message":{"kind":"message","messageId":"780ddbd5-2125-4c8b-8076-4a82e097d6dd","parts":[{"kind":"text","text":"What
|
||||
is the current local time in New York?"}],"referenceTaskIds":[],"role":"user"}}}'
|
||||
headers:
|
||||
User-Agent:
|
||||
- X-USER-AGENT-XXX
|
||||
accept:
|
||||
- '*/*, text/event-stream'
|
||||
accept-encoding:
|
||||
- ACCEPT-ENCODING-XXX
|
||||
cache-control:
|
||||
- no-store
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '370'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- localhost:9999
|
||||
method: POST
|
||||
uri: http://localhost:9999
|
||||
response:
|
||||
body:
|
||||
string: "data: {\"id\":\"7c4e4330-88aa-4126-98ec-dbbf6dc58abc\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"7c6ac7fd-9944-493b-b322-7b8130193030\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"submitted\"},\"taskId\":\"44b7f1a5-7c43-491d-bdf5-55f97631bee9\"}}\r\n\r\ndata:
|
||||
{\"id\":\"7c4e4330-88aa-4126-98ec-dbbf6dc58abc\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"7c6ac7fd-9944-493b-b322-7b8130193030\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"working\"},\"taskId\":\"44b7f1a5-7c43-491d-bdf5-55f97631bee9\"}}\r\n\r\ndata:
|
||||
{\"id\":\"7c4e4330-88aa-4126-98ec-dbbf6dc58abc\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"7c6ac7fd-9944-493b-b322-7b8130193030\",\"final\":true,\"kind\":\"status-update\",\"status\":{\"message\":{\"kind\":\"message\",\"messageId\":\"b3b9f054-2a9b-4b22-bb10-cfcd568e60b2\",\"parts\":[{\"kind\":\"text\",\"text\":\"[Tool:
|
||||
get_time] 2026-01-30 08:54:43 EST (America/New_York)\\nThe current local time
|
||||
in New York is 08:54 AM EST on January 30, 2026.\"}],\"role\":\"agent\"},\"state\":\"completed\"},\"taskId\":\"44b7f1a5-7c43-491d-bdf5-55f97631bee9\"}}\r\n\r\n"
|
||||
headers:
|
||||
cache-control:
|
||||
- no-store
|
||||
connection:
|
||||
- keep-alive
|
||||
content-type:
|
||||
- text/event-stream; charset=utf-8
|
||||
date:
|
||||
- Fri, 30 Jan 2026 13:54:42 GMT
|
||||
server:
|
||||
- uvicorn
|
||||
transfer-encoding:
|
||||
- chunked
|
||||
x-accel-buffering:
|
||||
- 'no'
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert
|
||||
researcher with access to remote specialized agents\nYour personal goal is:
|
||||
Find and analyze information about AI developments"},{"role":"user","content":"\nCurrent
|
||||
Task: Use the remote A2A agent to find out what the current time is in New York.\n\nProvide
|
||||
your complete response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A
|
||||
agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A
|
||||
Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true,
|
||||
this is sent to the A2A agent. If is_a2a=false, this is your final answer ending
|
||||
the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set
|
||||
to false when the remote agent has answered your question - extract their answer
|
||||
and return it as your final message. Set to true ONLY if you need to ask a NEW,
|
||||
DIFFERENT question. NEVER repeat the same request - if the conversation history
|
||||
shows the agent already answered, set is_a2a=false immediately.","title":"Is
|
||||
A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
|
||||
headers:
|
||||
User-Agent:
|
||||
- X-USER-AGENT-XXX
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- ACCEPT-ENCODING-XXX
|
||||
authorization:
|
||||
- AUTHORIZATION-XXX
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1383'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- COOKIE-XXX
|
||||
host:
|
||||
- api.openai.com
|
||||
x-stainless-arch:
|
||||
- X-STAINLESS-ARCH-XXX
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-helper-method:
|
||||
- beta.chat.completions.parse
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- X-STAINLESS-OS-XXX
|
||||
x-stainless-package-version:
|
||||
- 1.83.0
|
||||
x-stainless-read-timeout:
|
||||
- X-STAINLESS-READ-TIMEOUT-XXX
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.10
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: "{\n \"id\": \"chatcmpl-D3jFNAH4ex0QrqP0sMQagCLguuncQ\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1769781285,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"What
|
||||
is the current time in New York?\\\",\\\"is_a2a\\\":true}\",\n \"refusal\":
|
||||
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
|
||||
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
|
||||
280,\n \"completion_tokens\": 46,\n \"total_tokens\": 326,\n \"prompt_tokens_details\":
|
||||
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
|
||||
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
|
||||
\"default\",\n \"system_fingerprint\": \"fp_e01c6f58e1\"\n}\n"
|
||||
headers:
|
||||
CF-RAY:
|
||||
- CF-RAY-XXX
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Fri, 30 Jan 2026 13:54:46 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- STS-XXX
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- X-CONTENT-TYPE-XXX
|
||||
access-control-expose-headers:
|
||||
- ACCESS-CONTROL-XXX
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- OPENAI-ORG-XXX
|
||||
openai-processing-ms:
|
||||
- '778'
|
||||
openai-project:
|
||||
- OPENAI-PROJECT-XXX
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-requests:
|
||||
- X-RATELIMIT-LIMIT-REQUESTS-XXX
|
||||
x-ratelimit-limit-tokens:
|
||||
- X-RATELIMIT-LIMIT-TOKENS-XXX
|
||||
x-ratelimit-remaining-requests:
|
||||
- X-RATELIMIT-REMAINING-REQUESTS-XXX
|
||||
x-ratelimit-remaining-tokens:
|
||||
- X-RATELIMIT-REMAINING-TOKENS-XXX
|
||||
x-ratelimit-reset-requests:
|
||||
- X-RATELIMIT-RESET-REQUESTS-XXX
|
||||
x-ratelimit-reset-tokens:
|
||||
- X-RATELIMIT-RESET-TOKENS-XXX
|
||||
x-request-id:
|
||||
- X-REQUEST-ID-XXX
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
@@ -1,717 +0,0 @@
interactions:
- request:
    body: ''
    method: GET
    uri: http://0.0.0.0:9999/.well-known/agent-card.json
  response:
    body:
      string: '{"capabilities":{"pushNotifications":true,"streaming":true},"defaultInputModes":["text/plain","application/json"],"defaultOutputModes":["text/plain","application/json"],"description":"An AI assistant powered by OpenAI GPT with calculator and time tools. Ask questions, perform calculations, or get the current time in any timezone.","name":"GPT Assistant","preferredTransport":"JSONRPC","protocolVersion":"0.3.0","skills":[{"description":"Have a general conversation with the AI assistant. Ask questions, get explanations, or just chat.","examples":["Hello, how are you?","Explain quantum computing in simple terms","What can you help me with?"],"id":"conversation","name":"General Conversation","tags":["chat","conversation","general"]},{"description":"Perform mathematical calculations including arithmetic, exponents, and more.","examples":["What is 25 * 17?","Calculate 2^10","What''s (100 + 50) / 3?"],"id":"calculator","name":"Calculator","tags":["math","calculator","arithmetic"]},{"description":"Get the current date and time in any timezone.","examples":["What time is it?","What''s the current time in Tokyo?","What''s today''s date in New York?"],"id":"time","name":"Current Time","tags":["time","date","timezone"]}],"url":"http://localhost:9999","version":"1.0.0"}'
    status:
      code: 200
      message: OK
- request: &ask_about_protocols
    body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert researcher with access to remote specialized agents\nYour personal goal is: Find and analyze information about AI developments"},{"role":"user","content":"\nCurrent Task: Ask the remote A2A agent about recent developments in AI agent communication protocols.\n\nProvide your complete response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true, this is sent to the A2A agent. If is_a2a=false, this is your final answer ending the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set to false when the remote agent has answered your question - extract their answer and return it as your final message. Set to true ONLY if you need to ask a NEW, DIFFERENT question. NEVER repeat the same request - if the conversation history shows the agent already answered, set is_a2a=false immediately.","title":"Is A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\"id\": \"chatcmpl-D3jFn4QZwtfVYz3isJYJdfNrQPtlS\", \"object\": \"chat.completion\", \"created\": 1769781311, \"model\": \"gpt-4.1-mini-2025-04-14\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"Could you please provide information on the most recent developments in AI agent communication protocols? I am particularly interested in new standards, methods, or technologies that enhance communication efficiency, security, and interoperability among AI agents.\\\",\\\"is_a2a\\\":true}\", \"refusal\": null, \"annotations\": []}, \"logprobs\": null, \"finish_reason\": \"stop\"}], \"usage\": {\"prompt_tokens\": 277, \"completion_tokens\": 77, \"total_tokens\": 354, \"prompt_tokens_details\": {\"cached_tokens\": 0, \"audio_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 0, \"audio_tokens\": 0, \"accepted_prediction_tokens\": 0, \"rejected_prediction_tokens\": 0}}, \"service_tier\": \"default\", \"system_fingerprint\": \"fp_38699cb0fe\"}"
    status:
      code: 200
      message: OK
- request:
    body: '{"id":"f4251ecb-77e9-4cbe-bb02-b861389184e9","jsonrpc":"2.0","method":"message/stream","params":{"configuration":{"acceptedOutputModes":["application/json"],"blocking":true},"message":{"kind":"message","messageId":"f2405e0c-72c7-4e72-a774-f9fe88c2109a","parts":[{"kind":"text","text":"Could you please provide information on the most recent developments in AI agent communication protocols? I am particularly interested in new standards, methods, or technologies that enhance communication efficiency, security, and interoperability among AI agents."}],"referenceTaskIds":[],"role":"user"}}}'
    method: POST
    uri: http://localhost:9999
  response:
    body:
      string: "data: {\"id\":\"f4251ecb-77e9-4cbe-bb02-b861389184e9\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"41564a5b-4206-431d-80b6-56cfed244b1b\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"submitted\"},\"taskId\":\"2f5afed5-949f-4d1d-897a-00d560699ec7\"}}\r\n\r\ndata: {\"id\":\"f4251ecb-77e9-4cbe-bb02-b861389184e9\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"41564a5b-4206-431d-80b6-56cfed244b1b\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"working\"},\"taskId\":\"2f5afed5-949f-4d1d-897a-00d560699ec7\"}}\r\n\r\ndata: {\"id\":\"f4251ecb-77e9-4cbe-bb02-b861389184e9\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"41564a5b-4206-431d-80b6-56cfed244b1b\",\"final\":true,\"kind\":\"status-update\",\"status\":{\"message\":{\"kind\":\"message\",\"messageId\":\"d754f676-bcc1-449f-9d9d-444b0f74a4a4\",\"parts\":[{\"kind\":\"text\",\"text\":\"Recent developments in AI agent communication protocols have focused on several key areas: \\n\\n1. **Standardization Initiatives**: Organizations such as the IEEE and ISO have been working on developing standards to ensure interoperability between different AI systems and agents, such as the IEEE P7011 standard for algorithmic bias considerations.\\n\\n2. **Protocols for Inter-Agent Communication**: New protocols like the Agent Communication Language (ACL) and implementations of the FIPA specifications have gained traction for enabling efficient communication between agents while ensuring a shared understanding of intentions and semantics.\\n\\n3. **Enhanced Security Measures**: The integration of blockchain technology has emerged as a promising method to securely log communications between AI agents, ensuring authenticity and non-repudiation of messages.\\n\\n4. **Semantic Understanding Improvements**: Methods using natural language processing (NLP) continue to evolve, enabling agents to interpret and generate more human-like responses, facilitating smoother interactions through protocols like ASR (Automatic Speech Recognition) and NLU (Natural Language Understanding).\\n\\n5. **Context-Aware Communication**: Recent advancements allow agents to utilize contextual data for more efficient communication, where they can automatically adjust their responses based on situational awareness and user preferences.\\n\\n6. **Cooperative Learning Models**: New techniques in federated learning allow agents to learn from shared data without compromising user privacy, enhancing collaborative efforts among AI agents in real-time communication.\\n\\nThese developments aim to create a more robust framework for AI agents, enhancing their operational efficiency and effectiveness in various applications, from customer service to collaborative robotics.\"}],\"role\":\"agent\"},\"state\":\"completed\"},\"taskId\":\"2f5afed5-949f-4d1d-897a-00d560699ec7\"}}\r\n\r\n"
    status:
      code: 200
      message: OK
- request: *ask_about_protocols
  response:
    body:
      string: "{\"id\": \"chatcmpl-D3jG1CL2z8aMHaZgM3KCwDXsNnTvD\", \"object\": \"chat.completion\", \"created\": 1769781325, \"model\": \"gpt-4.1-mini-2025-04-14\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"Can you provide information about the latest developments in AI agent communication protocols?\\\",\\\"is_a2a\\\":true}\", \"refusal\": null, \"annotations\": []}, \"logprobs\": null, \"finish_reason\": \"stop\"}], \"usage\": {\"prompt_tokens\": 277, \"completion_tokens\": 51, \"total_tokens\": 328, \"prompt_tokens_details\": {\"cached_tokens\": 0, \"audio_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 0, \"audio_tokens\": 0, \"accepted_prediction_tokens\": 0, \"rejected_prediction_tokens\": 0}}, \"service_tier\": \"default\", \"system_fingerprint\": \"fp_38699cb0fe\"}"
    status:
      code: 200
      message: OK
- request:
    body: '{"id":"25e7eefc-150e-4ba1-b169-dbbe3d07430a","jsonrpc":"2.0","method":"message/stream","params":{"configuration":{"acceptedOutputModes":["application/json"],"blocking":true},"message":{"kind":"message","messageId":"e7a06872-de5c-4bd6-9c76-e27a1ce5ec48","parts":[{"kind":"text","text":"Can you provide information about the latest developments in AI agent communication protocols?"}],"referenceTaskIds":[],"role":"user"}}}'
    method: POST
    uri: http://localhost:9999
  response:
    body:
      string: "data: {\"id\":\"25e7eefc-150e-4ba1-b169-dbbe3d07430a\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"91946807-6102-44c5-bc63-1852d397972b\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"submitted\"},\"taskId\":\"57d00fbe-f3c3-4bae-8647-5de5e04a58f5\"}}\r\n\r\ndata: {\"id\":\"25e7eefc-150e-4ba1-b169-dbbe3d07430a\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"91946807-6102-44c5-bc63-1852d397972b\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"working\"},\"taskId\":\"57d00fbe-f3c3-4bae-8647-5de5e04a58f5\"}}\r\n\r\ndata: {\"id\":\"25e7eefc-150e-4ba1-b169-dbbe3d07430a\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"91946807-6102-44c5-bc63-1852d397972b\",\"final\":true,\"kind\":\"status-update\",\"status\":{\"message\":{\"kind\":\"message\",\"messageId\":\"f9b7bd1d-5721-4534-9111-41180c0eba24\",\"parts\":[{\"kind\":\"text\",\"text\":\"I currently do not have access to real-time updates or current news. However, I can provide a general overview of common AI agent communication protocols such as the Object Management Group's Data Distribution Service (DDS), the Robot Operating System (ROS) for robotic communication, and the use of WebSocket for real-time web communication. For the latest developments, I recommend checking recent academic papers or technology news sites.\"}],\"role\":\"agent\"},\"state\":\"completed\"},\"taskId\":\"57d00fbe-f3c3-4bae-8647-5de5e04a58f5\"}}\r\n\r\n"
    status:
      code: 200
      message: OK
- request: *ask_about_protocols
  response:
    body:
      string: "{\"id\": \"chatcmpl-D3jG6HmivomyX7R9JcTtOLOo4NHrF\", \"object\": \"chat.completion\", \"created\": 1769781330, \"model\": \"gpt-4.1-mini-2025-04-14\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"What are the recent developments in AI agent communication protocols?\\\",\\\"is_a2a\\\":true}\", \"refusal\": null, \"annotations\": []}, \"logprobs\": null, \"finish_reason\": \"stop\"}], \"usage\": {\"prompt_tokens\": 277, \"completion_tokens\": 48, \"total_tokens\": 325, \"prompt_tokens_details\": {\"cached_tokens\": 0, \"audio_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 0, \"audio_tokens\": 0, \"accepted_prediction_tokens\": 0, \"rejected_prediction_tokens\": 0}}, \"service_tier\": \"default\", \"system_fingerprint\": \"fp_e01c6f58e1\"}"
    status:
      code: 200
      message: OK
- request:
    body: '{"id":"eb2906ff-0370-4b05-90de-4b4cfe6a1400","jsonrpc":"2.0","method":"message/stream","params":{"configuration":{"acceptedOutputModes":["application/json"],"blocking":true},"message":{"kind":"message","messageId":"7182ae50-0bb8-47b1-bf74-eb3519718ca7","parts":[{"kind":"text","text":"What are the recent developments in AI agent communication protocols?"}],"referenceTaskIds":[],"role":"user"}}}'
    method: POST
    uri: http://localhost:9999
  response:
    body:
      string: "data: {\"id\":\"eb2906ff-0370-4b05-90de-4b4cfe6a1400\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"b320db14-b2c7-498c-8e68-16709143ea4c\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"submitted\"},\"taskId\":\"fc283837-7fa1-468c-87e8-f39fddb1a167\"}}\r\n\r\ndata: {\"id\":\"eb2906ff-0370-4b05-90de-4b4cfe6a1400\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"b320db14-b2c7-498c-8e68-16709143ea4c\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"working\"},\"taskId\":\"fc283837-7fa1-468c-87e8-f39fddb1a167\"}}\r\n\r\ndata: {\"id\":\"eb2906ff-0370-4b05-90de-4b4cfe6a1400\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"b320db14-b2c7-498c-8e68-16709143ea4c\",\"final\":true,\"kind\":\"status-update\",\"status\":{\"message\":{\"kind\":\"message\",\"messageId\":\"ff2dff60-b048-47a9-99a5-3af0e087a9ee\",\"parts\":[{\"kind\":\"text\",\"text\":\"Recent developments in AI agent communication protocols focus on enhancing interoperability, efficiency, and security between different AI systems. Key advancements include:\\n\\n1. **Adoption of Standardized Protocols**: Standard protocols such as Robot Operating System (ROS) and OpenAI's API are gaining traction, facilitating easier communication between diverse AI technologies.\\n\\n2. **Multi-Agent Systems**: Research into multi-agent systems has led to better methods for agents to coordinate and communicate, emphasizing negotiation and collaboration techniques.\\n\\n3. **Natural Language Understanding**: Improvements in NLP have enhanced how agents interpret and respond to human language, allowing for more seamless user interactions.\\n\\n4. **Decentralized Communication**: There's a shift toward decentralized communication frameworks, improving privacy and reducing single points of failure in data exchange.\\n\\n5. **Reinforcement Learning in Communication**: Using reinforcement learning to improve communication strategies among agents has shown promise in optimizing dialogue and cooperation.\\n\\n6. **Security Protocols**: As AI becomes more integrated into critical systems, security protocols are being developed to protect communication from adversarial attacks.\\n\\nThese developments aim to create more robust, flexible, and secure frameworks for AI agents to communicate effectively across various domains.\"}],\"role\":\"agent\"},\"state\":\"completed\"},\"taskId\":\"fc283837-7fa1-468c-87e8-f39fddb1a167\"}}\r\n\r\n"
    status:
      code: 200
      message: OK
- request: *ask_about_protocols
  response:
    body:
      string: "{\"id\": \"chatcmpl-D3jGHBwSnZnxZ5D49brfxnLkpmXve\", \"object\": \"chat.completion\", \"created\": 1769781341, \"model\": \"gpt-4.1-mini-2025-04-14\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"Can you provide information on recent developments in AI agent communication protocols, including any new standards, technologies, or significant research findings?\\\",\\\"is_a2a\\\":true}\", \"refusal\": null, \"annotations\": []}, \"logprobs\": null, \"finish_reason\": \"stop\"}], \"usage\": {\"prompt_tokens\": 277, \"completion_tokens\": 62, \"total_tokens\": 339, \"prompt_tokens_details\": {\"cached_tokens\": 0, \"audio_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 0, \"audio_tokens\": 0, \"accepted_prediction_tokens\": 0, \"rejected_prediction_tokens\": 0}}, \"service_tier\": \"default\", \"system_fingerprint\": \"fp_38699cb0fe\"}"
    status:
      code: 200
      message: OK
version: 1
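The cassette above captures the A2A delegation loop end to end: the planner model emits an AgentResponse payload, and whenever `is_a2a` is true the `message` text is forwarded to the remote agent as a JSON-RPC `message/stream` call, whose reply arrives as Server-Sent Events (`submitted`, then `working`, then a final `completed` status carrying the answer). Below is a minimal sketch of that exchange, assuming the `httpx` client library; the helper name and structure are illustrative, not CrewAI's internal implementation.

```python
# Minimal sketch of the message/stream exchange recorded in the cassette above.
# Assumes httpx is installed; send_a2a_message is a hypothetical helper name.
import json
import uuid

import httpx


def send_a2a_message(agent_url: str, text: str) -> str:
    """POST a JSON-RPC message/stream request and return the agent's final answer."""
    payload = {
        "id": str(uuid.uuid4()),
        "jsonrpc": "2.0",
        "method": "message/stream",
        "params": {
            "configuration": {"acceptedOutputModes": ["application/json"], "blocking": True},
            "message": {
                "kind": "message",
                "messageId": str(uuid.uuid4()),
                "parts": [{"kind": "text", "text": text}],
                "referenceTaskIds": [],
                "role": "user",
            },
        },
    }
    answer_parts: list[str] = []
    # The server streams one status-update event per task state; the last event
    # has final=true and carries the agent's message in status.message.parts.
    with httpx.stream("POST", agent_url, json=payload,
                      headers={"accept": "text/event-stream"}) as response:
        for line in response.iter_lines():
            if not line.startswith("data: "):
                continue
            event = json.loads(line[len("data: "):])["result"]
            if event.get("final"):
                message = event["status"].get("message", {})
                answer_parts = [p["text"] for p in message.get("parts", [])
                                if p.get("kind") == "text"]
    return "\n".join(answer_parts)
```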
@@ -1,623 +0,0 @@
interactions:
- request: &ask_for_time
    body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert researcher with access to remote specialized agents\nYour personal goal is: Find and analyze information about AI developments"},{"role":"user","content":"\nCurrent Task: Use the A2A agent to tell me what time it is.\n\nProvide your complete response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true, this is sent to the A2A agent. If is_a2a=false, this is your final answer ending the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set to false when the remote agent has answered your question - extract their answer and return it as your final message. Set to true ONLY if you need to ask a NEW, DIFFERENT question. NEVER repeat the same request - if the conversation history shows the agent already answered, set is_a2a=false immediately.","title":"Is A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\"id\": \"chatcmpl-D3jFa8FqrqBRD2BDlcIBJL6QvuXlH\", \"object\": \"chat.completion\", \"created\": 1769781298, \"model\": \"gpt-4.1-mini-2025-04-14\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"What is the current time?\\\",\\\"is_a2a\\\":true}\", \"refusal\": null, \"annotations\": []}, \"logprobs\": null, \"finish_reason\": \"stop\"}], \"usage\": {\"prompt_tokens\": 275, \"completion_tokens\": 43, \"total_tokens\": 318, \"prompt_tokens_details\": {\"cached_tokens\": 0, \"audio_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 0, \"audio_tokens\": 0, \"accepted_prediction_tokens\": 0, \"rejected_prediction_tokens\": 0}}, \"service_tier\": \"default\", \"system_fingerprint\": \"fp_e01c6f58e1\"}"
    status:
      code: 200
      message: OK
- request:
    body: '{"id":"d97a8925-75f5-45f8-9572-e61373217004","jsonrpc":"2.0","method":"message/stream","params":{"configuration":{"acceptedOutputModes":["application/json"],"blocking":true},"message":{"kind":"message","messageId":"fb58321f-d613-42c8-b202-2a7b50a7dd92","parts":[{"kind":"text","text":"What is the current time?"}],"referenceTaskIds":[],"role":"user"}}}'
    method: POST
    uri: http://localhost:9999
  response:
    body:
      string: "data: {\"id\":\"d97a8925-75f5-45f8-9572-e61373217004\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"a618753d-56df-458f-a8dc-a8bdfa226e73\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"submitted\"},\"taskId\":\"307e8651-39c7-4220-8f37-d76319cc8dbf\"}}\r\n\r\ndata: {\"id\":\"d97a8925-75f5-45f8-9572-e61373217004\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"a618753d-56df-458f-a8dc-a8bdfa226e73\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"working\"},\"taskId\":\"307e8651-39c7-4220-8f37-d76319cc8dbf\"}}\r\n\r\ndata: {\"id\":\"d97a8925-75f5-45f8-9572-e61373217004\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"a618753d-56df-458f-a8dc-a8bdfa226e73\",\"final\":true,\"kind\":\"status-update\",\"status\":{\"message\":{\"kind\":\"message\",\"messageId\":\"7d7c6c95-867f-4525-9180-4ec3d2834d3c\",\"parts\":[{\"kind\":\"text\",\"text\":\"[Tool: get_time] 2026-01-30 13:55:00 UTC (UTC)\\nThe current time is 13:55 (1:55 PM) UTC on January 30, 2026.\"}],\"role\":\"agent\"},\"state\":\"completed\"},\"taskId\":\"307e8651-39c7-4220-8f37-d76319cc8dbf\"}}\r\n\r\n"
    status:
      code: 200
      message: OK
- request: *ask_for_time
  response:
    body:
      string: "{\"id\": \"chatcmpl-D3jFe4nAbTUPDOE0verjfIon6eo17\", \"object\": \"chat.completion\", \"created\": 1769781302, \"model\": \"gpt-4.1-mini-2025-04-14\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"What is the current time?\\\",\\\"is_a2a\\\":true}\", \"refusal\": null, \"annotations\": []}, \"logprobs\": null, \"finish_reason\": \"stop\"}], \"usage\": {\"prompt_tokens\": 275, \"completion_tokens\": 43, \"total_tokens\": 318, \"prompt_tokens_details\": {\"cached_tokens\": 0, \"audio_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 0, \"audio_tokens\": 0, \"accepted_prediction_tokens\": 0, \"rejected_prediction_tokens\": 0}}, \"service_tier\": \"default\", \"system_fingerprint\": \"fp_38699cb0fe\"}"
    status:
      code: 200
      message: OK
- request:
    body: '{"id":"34f447f3-6a8c-428a-ab62-e787a369b6e7","jsonrpc":"2.0","method":"message/stream","params":{"configuration":{"acceptedOutputModes":["application/json"],"blocking":true},"message":{"kind":"message","messageId":"fc4a3581-9256-4975-acfe-0e6251644bd8","parts":[{"kind":"text","text":"What is the current time?"}],"referenceTaskIds":[],"role":"user"}}}'
    method: POST
    uri: http://localhost:9999
  response:
    body:
      string: "data: {\"id\":\"34f447f3-6a8c-428a-ab62-e787a369b6e7\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"5f7381c3-a90a-4ed4-9c6e-a2e19b8ddaa3\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"submitted\"},\"taskId\":\"4c0162fb-94be-4338-b25f-fbe3b3f28463\"}}\r\n\r\ndata: {\"id\":\"34f447f3-6a8c-428a-ab62-e787a369b6e7\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"5f7381c3-a90a-4ed4-9c6e-a2e19b8ddaa3\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"working\"},\"taskId\":\"4c0162fb-94be-4338-b25f-fbe3b3f28463\"}}\r\n\r\ndata: {\"id\":\"34f447f3-6a8c-428a-ab62-e787a369b6e7\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"5f7381c3-a90a-4ed4-9c6e-a2e19b8ddaa3\",\"final\":true,\"kind\":\"status-update\",\"status\":{\"message\":{\"kind\":\"message\",\"messageId\":\"dd341884-4548-4e0d-992e-9fa7cd0b842c\",\"parts\":[{\"kind\":\"text\",\"text\":\"[Tool: get_time] 2026-01-30 13:55:03 UTC (UTC)\\nThe current time is 13:55:03 UTC on January 30, 2026.\"}],\"role\":\"agent\"},\"state\":\"completed\"},\"taskId\":\"4c0162fb-94be-4338-b25f-fbe3b3f28463\"}}\r\n\r\n"
    status:
      code: 200
      message: OK
- request: *ask_for_time
  response:
    body:
      string: "{\"id\": \"chatcmpl-D3jFhRE3kx4KNk5YNp1ZjUyp80kUS\", \"object\": \"chat.completion\", \"created\": 1769781305, \"model\": \"gpt-4.1-mini-2025-04-14\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"What is the current time?\\\",\\\"is_a2a\\\":true}\", \"refusal\": null, \"annotations\": []}, \"logprobs\": null, \"finish_reason\": \"stop\"}], \"usage\": {\"prompt_tokens\": 275, \"completion_tokens\": 43, \"total_tokens\": 318, \"prompt_tokens_details\": {\"cached_tokens\": 0, \"audio_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 0, \"audio_tokens\": 0, \"accepted_prediction_tokens\": 0, \"rejected_prediction_tokens\": 0}}, \"service_tier\": \"default\", \"system_fingerprint\": \"fp_38699cb0fe\"}"
    status:
      code: 200
      message: OK
- request:
    body: '{"id":"882d8605-40a5-4e8e-9573-ba56d80736f4","jsonrpc":"2.0","method":"message/stream","params":{"configuration":{"acceptedOutputModes":["application/json"],"blocking":true},"message":{"kind":"message","messageId":"7f8f51d5-c817-4fab-ba5c-2cc70377a9ac","parts":[{"kind":"text","text":"What is the current time?"}],"referenceTaskIds":[],"role":"user"}}}'
    method: POST
    uri: http://localhost:9999
  response:
    body:
      string: "data: {\"id\":\"882d8605-40a5-4e8e-9573-ba56d80736f4\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"eb634fa5-28ef-4451-93d2-be62c640513f\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"submitted\"},\"taskId\":\"962f69ce-e087-40ba-930a-012059ab5778\"}}\r\n\r\ndata: {\"id\":\"882d8605-40a5-4e8e-9573-ba56d80736f4\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"eb634fa5-28ef-4451-93d2-be62c640513f\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"working\"},\"taskId\":\"962f69ce-e087-40ba-930a-012059ab5778\"}}\r\n\r\ndata: {\"id\":\"882d8605-40a5-4e8e-9573-ba56d80736f4\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"eb634fa5-28ef-4451-93d2-be62c640513f\",\"final\":true,\"kind\":\"status-update\",\"status\":{\"message\":{\"kind\":\"message\",\"messageId\":\"7c4b3417-95fa-4b3d-96cd-1d0896cf56a1\",\"parts\":[{\"kind\":\"text\",\"text\":\"[Tool: get_time] 2026-01-30 13:55:07 UTC (UTC)\\nThe current time is 13:55 (1:55 PM) UTC on January 30, 2026.\"}],\"role\":\"agent\"},\"state\":\"completed\"},\"taskId\":\"962f69ce-e087-40ba-930a-012059ab5778\"}}\r\n\r\n"
    status:
      code: 200
      message: OK
- request: *ask_for_time
  response:
    body:
      string: "{\"id\": \"chatcmpl-D3jFlnndBlZMpiwI6tmZFcsIcsKxQ\", \"object\": \"chat.completion\", \"created\": 1769781309, \"model\": \"gpt-4.1-mini-2025-04-14\", \"choices\": [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"What is the current time?\\\",\\\"is_a2a\\\":true}\", \"refusal\": null, \"annotations\": []}, \"logprobs\": null, \"finish_reason\": \"stop\"}], \"usage\": {\"prompt_tokens\": 275, \"completion_tokens\": 43, \"total_tokens\": 318, \"prompt_tokens_details\": {\"cached_tokens\": 0, \"audio_tokens\": 0}, \"completion_tokens_details\": {\"reasoning_tokens\": 0, \"audio_tokens\": 0, \"accepted_prediction_tokens\": 0, \"rejected_prediction_tokens\": 0}}, \"service_tier\": \"default\", \"system_fingerprint\": \"fp_e01c6f58e1\"}"
    status:
      code: 200
      message: OK
version: 1
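All of these cassettes pin the model's output to the same strict `AgentResponse` JSON schema via OpenAI structured outputs; the recorded telemetry indicates the requests were issued through the SDK's `beta.chat.completions.parse` helper. A rough Pydantic v2 equivalent of that schema, reconstructed from the recorded `response_format` rather than taken from the CrewAI source, might look like this:

```python
# Reconstructed from the response_format recorded in these cassettes;
# field descriptions are abbreviated and the class is illustrative only.
from typing import Literal

from pydantic import BaseModel, Field

# Single allowed A2A target, matching the "const" item schema in the recording.
AgentCardUrl = Literal["http://0.0.0.0:9999/.well-known/agent-card.json"]


class AgentResponse(BaseModel):
    a2a_ids: list[AgentCardUrl] = Field(
        description="A2A agent IDs to delegate to.",
        max_length=1,  # emitted as "maxItems": 1 in the JSON schema
    )
    message: str = Field(
        description="Sent to the A2A agent when is_a2a=true; otherwise the final answer."
    )
    is_a2a: bool = Field(
        description="Whether to ask the remote agent a new question or end the conversation."
    )
```

Passing a model like this as `response_format=AgentResponse` to the SDK's `parse` helper would yield the kind of strict `json_schema` payload seen in the recorded request bodies.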
@@ -1,620 +0,0 @@
interactions:
- request:
    body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert
      researcher with access to remote specialized agents\nYour personal goal is:
      Find and analyze information about AI developments"},{"role":"user","content":"\nCurrent
      Task: Ask the remote A2A agent to calculate 25 times 17.\n\nProvide your complete
      response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A
      agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A
      Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true,
      this is sent to the A2A agent. If is_a2a=false, this is your final answer ending
      the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set
      to false when the remote agent has answered your question - extract their answer
      and return it as your final message. Set to true ONLY if you need to ask a NEW,
      DIFFERENT question. NEVER repeat the same request - if the conversation history
      shows the agent already answered, set is_a2a=false immediately.","title":"Is
      A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '1359'
      content-type:
      - application/json
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - 'false'
      x-stainless-helper-method:
      - beta.chat.completions.parse
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.10
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-D3jGIOXOwe0YLDVF1ujzfsgGppMVm\",\n \"object\":
        \"chat.completion\",\n \"created\": 1769781342,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"Please
        calculate 25 times 17.\\\",\\\"is_a2a\\\":true}\",\n \"refusal\": null,\n
        \ \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\":
        \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 276,\n \"completion_tokens\":
        44,\n \"total_tokens\": 320,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
        0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
        {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
        0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
        \"default\",\n \"system_fingerprint\": \"fp_e01c6f58e1\"\n}\n"
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Fri, 30 Jan 2026 13:55:43 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - SET-COOKIE-XXX
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '1142'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
- request:
    body: '{"id":"0ca730ca-0343-41b0-9063-345d0a1846fe","jsonrpc":"2.0","method":"message/stream","params":{"configuration":{"acceptedOutputModes":["application/json"],"blocking":true},"message":{"kind":"message","messageId":"8d390685-916d-4ebd-8448-521a51a40fa0","parts":[{"kind":"text","text":"Please
      calculate 25 times 17."}],"referenceTaskIds":[],"role":"user"}}}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - '*/*, text/event-stream'
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      cache-control:
      - no-store
      connection:
      - keep-alive
      content-length:
      - '356'
      content-type:
      - application/json
      host:
      - localhost:9999
    method: POST
    uri: http://localhost:9999
  response:
    body:
      string: "data: {\"id\":\"0ca730ca-0343-41b0-9063-345d0a1846fe\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"723b2a81-7686-46cd-a554-98dad63067df\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"submitted\"},\"taskId\":\"d11de366-f945-4ffb-b87b-393f4236ae52\"}}\r\n\r\ndata:
        {\"id\":\"0ca730ca-0343-41b0-9063-345d0a1846fe\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"723b2a81-7686-46cd-a554-98dad63067df\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"working\"},\"taskId\":\"d11de366-f945-4ffb-b87b-393f4236ae52\"}}\r\n\r\ndata:
        {\"id\":\"0ca730ca-0343-41b0-9063-345d0a1846fe\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"723b2a81-7686-46cd-a554-98dad63067df\",\"final\":true,\"kind\":\"status-update\",\"status\":{\"message\":{\"kind\":\"message\",\"messageId\":\"7ec2e318-8619-46f9-805e-94b3b13cfb61\",\"parts\":[{\"kind\":\"text\",\"text\":\"[Tool:
        calculator] 25 * 17 = 425\\nThe result of 25 times 17 is 425.\"}],\"role\":\"agent\"},\"state\":\"completed\"},\"taskId\":\"d11de366-f945-4ffb-b87b-393f4236ae52\"}}\r\n\r\n"
    headers:
      cache-control:
      - no-store
      connection:
      - keep-alive
      content-type:
      - text/event-stream; charset=utf-8
      date:
      - Fri, 30 Jan 2026 13:55:43 GMT
      server:
      - uvicorn
      transfer-encoding:
      - chunked
      x-accel-buffering:
      - 'no'
    status:
      code: 200
      message: OK
- request:
    body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert
      researcher with access to remote specialized agents\nYour personal goal is:
      Find and analyze information about AI developments"},{"role":"user","content":"\nCurrent
      Task: Ask the remote A2A agent to calculate 25 times 17.\n\nProvide your complete
      response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A
      agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A
      Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true,
      this is sent to the A2A agent. If is_a2a=false, this is your final answer ending
      the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set
      to false when the remote agent has answered your question - extract their answer
      and return it as your final message. Set to true ONLY if you need to ask a NEW,
      DIFFERENT question. NEVER repeat the same request - if the conversation history
      shows the agent already answered, set is_a2a=false immediately.","title":"Is
      A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '1359'
      content-type:
      - application/json
      cookie:
      - COOKIE-XXX
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - 'false'
      x-stainless-helper-method:
      - beta.chat.completions.parse
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.10
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-D3jGM6WIUihSLf4KEk8GTvkBYTyNi\",\n \"object\":
        \"chat.completion\",\n \"created\": 1769781346,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"Please
        calculate 25 times 17.\\\",\\\"is_a2a\\\":true}\",\n \"refusal\": null,\n
        \ \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\":
        \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 276,\n \"completion_tokens\":
        44,\n \"total_tokens\": 320,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
        0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
        {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
        0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
        \"default\",\n \"system_fingerprint\": \"fp_e01c6f58e1\"\n}\n"
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Fri, 30 Jan 2026 13:55:47 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '901'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
- request:
    body: '{"id":"6487b213-08ae-452e-a434-a43c40ee166d","jsonrpc":"2.0","method":"message/stream","params":{"configuration":{"acceptedOutputModes":["application/json"],"blocking":true},"message":{"kind":"message","messageId":"19911188-7988-48d8-b244-b887617a28f7","parts":[{"kind":"text","text":"Please
      calculate 25 times 17."}],"referenceTaskIds":[],"role":"user"}}}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - '*/*, text/event-stream'
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      cache-control:
      - no-store
      connection:
      - keep-alive
      content-length:
      - '356'
      content-type:
      - application/json
      host:
      - localhost:9999
    method: POST
    uri: http://localhost:9999
  response:
    body:
      string: "data: {\"id\":\"6487b213-08ae-452e-a434-a43c40ee166d\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"c26e5431-131e-4b38-a83e-1a20c82983bd\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"submitted\"},\"taskId\":\"df624f31-2d82-4749-98a7-5b07916ecbb1\"}}\r\n\r\ndata:
        {\"id\":\"6487b213-08ae-452e-a434-a43c40ee166d\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"c26e5431-131e-4b38-a83e-1a20c82983bd\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"working\"},\"taskId\":\"df624f31-2d82-4749-98a7-5b07916ecbb1\"}}\r\n\r\ndata:
        {\"id\":\"6487b213-08ae-452e-a434-a43c40ee166d\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"c26e5431-131e-4b38-a83e-1a20c82983bd\",\"final\":true,\"kind\":\"status-update\",\"status\":{\"message\":{\"kind\":\"message\",\"messageId\":\"3a3d621c-0845-4f23-b8f5-cdf3ecb96ea9\",\"parts\":[{\"kind\":\"text\",\"text\":\"[Tool:
        calculator] 25 * 17 = 425\\nThe result of 25 times 17 is 425.\"}],\"role\":\"agent\"},\"state\":\"completed\"},\"taskId\":\"df624f31-2d82-4749-98a7-5b07916ecbb1\"}}\r\n\r\n"
    headers:
      cache-control:
      - no-store
      connection:
      - keep-alive
      content-type:
      - text/event-stream; charset=utf-8
      date:
      - Fri, 30 Jan 2026 13:55:46 GMT
      server:
      - uvicorn
      transfer-encoding:
      - chunked
      x-accel-buffering:
      - 'no'
    status:
      code: 200
      message: OK
- request:
    body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert
      researcher with access to remote specialized agents\nYour personal goal is:
      Find and analyze information about AI developments"},{"role":"user","content":"\nCurrent
      Task: Ask the remote A2A agent to calculate 25 times 17.\n\nProvide your complete
      response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A
      agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A
      Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true,
      this is sent to the A2A agent. If is_a2a=false, this is your final answer ending
      the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set
      to false when the remote agent has answered your question - extract their answer
      and return it as your final message. Set to true ONLY if you need to ask a NEW,
      DIFFERENT question. NEVER repeat the same request - if the conversation history
      shows the agent already answered, set is_a2a=false immediately.","title":"Is
      A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '1359'
      content-type:
      - application/json
      cookie:
      - COOKIE-XXX
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - 'false'
      x-stainless-helper-method:
      - beta.chat.completions.parse
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.10
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-D3jGPQ7X1NUeVqaU3zlnCrv7h6onm\",\n \"object\":
        \"chat.completion\",\n \"created\": 1769781349,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"Calculate
        25 times 17.\\\",\\\"is_a2a\\\":true}\",\n \"refusal\": null,\n \"annotations\":
        []\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
        \ }\n ],\n \"usage\": {\n \"prompt_tokens\": 276,\n \"completion_tokens\":
        43,\n \"total_tokens\": 319,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
        0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
        {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
        0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
        \"default\",\n \"system_fingerprint\": \"fp_38699cb0fe\"\n}\n"
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Fri, 30 Jan 2026 13:55:50 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '960'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
- request:
    body: '{"id":"c77f17c6-08b0-48ff-92c3-4b8e8d9c5eb1","jsonrpc":"2.0","method":"message/stream","params":{"configuration":{"acceptedOutputModes":["application/json"],"blocking":true},"message":{"kind":"message","messageId":"ce9f2cc8-2fcc-4753-b734-30ee821fc3a1","parts":[{"kind":"text","text":"Calculate
      25 times 17."}],"referenceTaskIds":[],"role":"user"}}}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - '*/*, text/event-stream'
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      cache-control:
      - no-store
      connection:
      - keep-alive
      content-length:
      - '349'
      content-type:
      - application/json
      host:
      - localhost:9999
    method: POST
    uri: http://localhost:9999
  response:
    body:
      string: "data: {\"id\":\"c77f17c6-08b0-48ff-92c3-4b8e8d9c5eb1\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"b9e2d33e-a30f-49d8-a06e-764ba27935c6\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"submitted\"},\"taskId\":\"7059af18-8bae-4ebd-8ffd-f8c25250e36c\"}}\r\n\r\ndata:
        {\"id\":\"c77f17c6-08b0-48ff-92c3-4b8e8d9c5eb1\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"b9e2d33e-a30f-49d8-a06e-764ba27935c6\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"working\"},\"taskId\":\"7059af18-8bae-4ebd-8ffd-f8c25250e36c\"}}\r\n\r\ndata:
        {\"id\":\"c77f17c6-08b0-48ff-92c3-4b8e8d9c5eb1\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"b9e2d33e-a30f-49d8-a06e-764ba27935c6\",\"final\":true,\"kind\":\"status-update\",\"status\":{\"message\":{\"kind\":\"message\",\"messageId\":\"354d61e4-d241-4e37-b3f8-7cfffd722807\",\"parts\":[{\"kind\":\"text\",\"text\":\"[Tool:
        calculator] 25 * 17 = 425\\nThe result of 25 times 17 is 425.\"}],\"role\":\"agent\"},\"state\":\"completed\"},\"taskId\":\"7059af18-8bae-4ebd-8ffd-f8c25250e36c\"}}\r\n\r\n"
    headers:
      cache-control:
      - no-store
      connection:
      - keep-alive
      content-type:
      - text/event-stream; charset=utf-8
      date:
      - Fri, 30 Jan 2026 13:55:50 GMT
      server:
      - uvicorn
      transfer-encoding:
      - chunked
      x-accel-buffering:
      - 'no'
    status:
      code: 200
      message: OK
- request:
    body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert
      researcher with access to remote specialized agents\nYour personal goal is:
      Find and analyze information about AI developments"},{"role":"user","content":"\nCurrent
      Task: Ask the remote A2A agent to calculate 25 times 17.\n\nProvide your complete
      response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A
      agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A
      Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true,
      this is sent to the A2A agent. If is_a2a=false, this is your final answer ending
      the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set
      to false when the remote agent has answered your question - extract their answer
      and return it as your final message. Set to true ONLY if you need to ask a NEW,
      DIFFERENT question. NEVER repeat the same request - if the conversation history
      shows the agent already answered, set is_a2a=false immediately.","title":"Is
      A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '1359'
      content-type:
      - application/json
      cookie:
      - COOKIE-XXX
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - 'false'
      x-stainless-helper-method:
      - beta.chat.completions.parse
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.10
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-D3jGTHg6KlCWi3Yoe2y0EspPpjxUP\",\n \"object\":
        \"chat.completion\",\n \"created\": 1769781353,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"Calculate
        25 times 17.\\\",\\\"is_a2a\\\":true}\",\n \"refusal\": null,\n \"annotations\":
        []\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
        \ }\n ],\n \"usage\": {\n \"prompt_tokens\": 276,\n \"completion_tokens\":
        43,\n \"total_tokens\": 319,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
        0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
        {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
        0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
        \"default\",\n \"system_fingerprint\": \"fp_e01c6f58e1\"\n}\n"
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Fri, 30 Jan 2026 13:55:54 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '1269'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
version: 1
@@ -1,713 +0,0 @@
interactions:
- request:
    body: ''
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - '*/*'
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      connection:
      - keep-alive
      host:
      - 0.0.0.0:9999
    method: GET
    uri: http://0.0.0.0:9999/.well-known/agent-card.json
  response:
    body:
      string: '{"capabilities":{"pushNotifications":true,"streaming":true},"defaultInputModes":["text/plain","application/json"],"defaultOutputModes":["text/plain","application/json"],"description":"An
        AI assistant powered by OpenAI GPT with calculator and time tools. Ask questions,
        perform calculations, or get the current time in any timezone.","name":"GPT
        Assistant","preferredTransport":"JSONRPC","protocolVersion":"0.3.0","skills":[{"description":"Have
        a general conversation with the AI assistant. Ask questions, get explanations,
        or just chat.","examples":["Hello, how are you?","Explain quantum computing
        in simple terms","What can you help me with?"],"id":"conversation","name":"General
        Conversation","tags":["chat","conversation","general"]},{"description":"Perform
        mathematical calculations including arithmetic, exponents, and more.","examples":["What
        is 25 * 17?","Calculate 2^10","What''s (100 + 50) / 3?"],"id":"calculator","name":"Calculator","tags":["math","calculator","arithmetic"]},{"description":"Get
        the current date and time in any timezone.","examples":["What time is it?","What''s
        the current time in Tokyo?","What''s today''s date in New York?"],"id":"time","name":"Current
        Time","tags":["time","date","timezone"]}],"url":"http://localhost:9999","version":"1.0.0"}'
    headers:
      content-length:
      - '1272'
      content-type:
      - application/json
      date:
      - Fri, 30 Jan 2026 13:53:54 GMT
      server:
      - uvicorn
    status:
      code: 200
      message: OK
- request:
    body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert
      researcher with access to remote specialized agents\nYour personal goal is:
      Find and analyze information about AI developments"},{"role":"user","content":"\nCurrent
      Task: Delegate to the remote A2A agent to explain quantum computing in simple
      terms.\n\nProvide your complete response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A
      agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A
      Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true,
      this is sent to the A2A agent. If is_a2a=false, this is your final answer ending
      the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set
      to false when the remote agent has answered your question - extract their answer
      and return it as your final message. Set to true ONLY if you need to ask a NEW,
      DIFFERENT question. NEVER repeat the same request - if the conversation history
      shows the agent already answered, set is_a2a=false immediately.","title":"Is
      A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '1387'
      content-type:
      - application/json
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - 'false'
      x-stainless-helper-method:
      - beta.chat.completions.parse
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.10
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-D3jEZCk72BTnPd76lgJx3flraf53g\",\n \"object\":
        \"chat.completion\",\n \"created\": 1769781235,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"Can
        you please explain quantum computing in simple terms?\\\",\\\"is_a2a\\\":true}\",\n
        \ \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\":
        null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
        277,\n \"completion_tokens\": 47,\n \"total_tokens\": 324,\n \"prompt_tokens_details\":
        {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
        {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
        0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
        \"default\",\n \"system_fingerprint\": \"fp_38699cb0fe\"\n}\n"
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Fri, 30 Jan 2026 13:53:56 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - SET-COOKIE-XXX
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '983'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
- request:
    body: '{"id":"8e89c366-f0bb-4530-9733-c340afcf44a1","jsonrpc":"2.0","method":"message/stream","params":{"configuration":{"acceptedOutputModes":["application/json"],"blocking":true},"message":{"kind":"message","messageId":"09d169ad-1c41-486f-a1f6-db36f774a08e","parts":[{"kind":"text","text":"Can
      you please explain quantum computing in simple terms?"}],"referenceTaskIds":[],"role":"user"}}}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - '*/*, text/event-stream'
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      cache-control:
      - no-store
      connection:
      - keep-alive
      content-length:
      - '384'
      content-type:
      - application/json
      host:
      - localhost:9999
    method: POST
    uri: http://localhost:9999
  response:
    body:
      string: "data: {\"id\":\"8e89c366-f0bb-4530-9733-c340afcf44a1\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"aca5807e-55ea-4d81-9bb0-a621788e2f8b\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"submitted\"},\"taskId\":\"219a3bad-5c8e-4903-9a4a-6bb8f6067f0d\"}}\r\n\r\ndata:
        {\"id\":\"8e89c366-f0bb-4530-9733-c340afcf44a1\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"aca5807e-55ea-4d81-9bb0-a621788e2f8b\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"working\"},\"taskId\":\"219a3bad-5c8e-4903-9a4a-6bb8f6067f0d\"}}\r\n\r\ndata:
        {\"id\":\"8e89c366-f0bb-4530-9733-c340afcf44a1\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"aca5807e-55ea-4d81-9bb0-a621788e2f8b\",\"final\":true,\"kind\":\"status-update\",\"status\":{\"message\":{\"kind\":\"message\",\"messageId\":\"4cf44be0-a3ce-4a86-a801-6224f81f6c6a\",\"parts\":[{\"kind\":\"text\",\"text\":\"Quantum
        computing is a type of computing that uses the principles of quantum mechanics
        to process information in a fundamentally different way than traditional computers.
        \\n\\n1. **Bits vs. Qubits**: Traditional computers use bits as the smallest
        unit of data, which can either be a 0 or a 1. Quantum computers use qubits,
        which can be both 0 and 1 at the same time, thanks to a phenomenon called
        superposition.\\n\\n2. **Superposition**: This ability to be in multiple states
        simultaneously allows quantum computers to process a vast amount of possibilities
        at once.\\n\\n3. **Entanglement**: Qubits can also be entangled, meaning the
        state of one qubit can depend on the state of another, no matter how far apart
        they are. This feature enables complex computations that are not possible
        with classical computers.\\n\\n4. **Parallelism**: Because of superposition
        and entanglement, quantum computers can perform many calculations at the same
        time, making them potentially much faster for specific tasks compared to classical
        computers.\\n\\nIn summary, quantum computing harnesses the strange and powerful
        properties of quantum mechanics to process information in a way that could
        solve complex problems much more efficiently than traditional computers.\"}],\"role\":\"agent\"},\"state\":\"completed\"},\"taskId\":\"219a3bad-5c8e-4903-9a4a-6bb8f6067f0d\"}}\r\n\r\n"
    headers:
      cache-control:
      - no-store
      connection:
      - keep-alive
      content-type:
      - text/event-stream; charset=utf-8
      date:
      - Fri, 30 Jan 2026 13:53:55 GMT
      server:
      - uvicorn
      transfer-encoding:
      - chunked
      x-accel-buffering:
      - 'no'
    status:
      code: 200
      message: OK
- request:
    body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert
      researcher with access to remote specialized agents\nYour personal goal is:
      Find and analyze information about AI developments"},{"role":"user","content":"\nCurrent
      Task: Delegate to the remote A2A agent to explain quantum computing in simple
      terms.\n\nProvide your complete response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A
      agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A
      Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true,
      this is sent to the A2A agent. If is_a2a=false, this is your final answer ending
      the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set
      to false when the remote agent has answered your question - extract their answer
      and return it as your final message. Set to true ONLY if you need to ask a NEW,
      DIFFERENT question. NEVER repeat the same request - if the conversation history
      shows the agent already answered, set is_a2a=false immediately.","title":"Is
      A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '1387'
      content-type:
      - application/json
      cookie:
      - COOKIE-XXX
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - 'false'
      x-stainless-helper-method:
      - beta.chat.completions.parse
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.10
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-D3jEjMZDxvGCpJTZptzz1lAV5Ruhh\",\n \"object\":
        \"chat.completion\",\n \"created\": 1769781245,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"Please
        explain quantum computing in simple terms that are easy for anyone to understand.\\\",\\\"is_a2a\\\":true}\",\n
        \ \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\":
        null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
        277,\n \"completion_tokens\": 51,\n \"total_tokens\": 328,\n \"prompt_tokens_details\":
        {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
        {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
        0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
        \"default\",\n \"system_fingerprint\": \"fp_38699cb0fe\"\n}\n"
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Fri, 30 Jan 2026 13:54:06 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '932'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
- request:
    body: '{"id":"3e61c6cc-3654-46ce-a15c-433fef9e1ff1","jsonrpc":"2.0","method":"message/stream","params":{"configuration":{"acceptedOutputModes":["application/json"],"blocking":true},"message":{"kind":"message","messageId":"fee95c54-616c-45f4-b0b7-080a8535a4e2","parts":[{"kind":"text","text":"Please
      explain quantum computing in simple terms that are easy for anyone to understand."}],"referenceTaskIds":[],"role":"user"}}}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - '*/*, text/event-stream'
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      cache-control:
      - no-store
      connection:
      - keep-alive
      content-length:
      - '415'
      content-type:
      - application/json
      host:
      - localhost:9999
    method: POST
    uri: http://localhost:9999
  response:
    body:
      string: "data: {\"id\":\"3e61c6cc-3654-46ce-a15c-433fef9e1ff1\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"774a88f1-2738-41e4-b99e-87e8733a4597\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"submitted\"},\"taskId\":\"a7f3c465-f5b1-4b69-a697-d422adf1e766\"}}\r\n\r\ndata:
        {\"id\":\"3e61c6cc-3654-46ce-a15c-433fef9e1ff1\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"774a88f1-2738-41e4-b99e-87e8733a4597\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"working\"},\"taskId\":\"a7f3c465-f5b1-4b69-a697-d422adf1e766\"}}\r\n\r\ndata:
        {\"id\":\"3e61c6cc-3654-46ce-a15c-433fef9e1ff1\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"774a88f1-2738-41e4-b99e-87e8733a4597\",\"final\":true,\"kind\":\"status-update\",\"status\":{\"message\":{\"kind\":\"message\",\"messageId\":\"89fa51b4-6e41-4590-ae1f-ffe19068ff75\",\"parts\":[{\"kind\":\"text\",\"text\":\"Quantum
        computing is a type of computing that uses the principles of quantum mechanics,
        a branch of physics that deals with very small particles like atoms and photons.
        \\n\\nHere\u2019s a simple breakdown:\\n1. **Bits vs. Qubits**: Traditional
        computers use bits as the smallest unit of data, which can be either a 0 or
        a 1. Quantum computers use quantum bits, or qubits, which can be both 0 and
        1 at the same time thanks to a property called superposition.\\n\\n2. **Superposition**:
        Imagine spinning a coin. While it\u2019s spinning, it\u2019s not just heads
        or tails; it\u2019s in some way both until you stop it. Qubits can exist in
        multiple states at once, allowing quantum computers to process a lot of information
        simultaneously.\\n\\n3. **Entanglement**: This is another unique property
        of quantum mechanics. When qubits become entangled, the state of one qubit
        becomes linked to the state of another, no matter how far apart they are.
        This means changing one qubit can instantly affect the other, enabling powerful
        computations.\\n\\n4. **Quantum Speedup**: Because of superposition and entanglement,
        quantum computers can solve certain problems much faster than classical computers.
        This includes problems in cryptography, optimization, and simulations of molecular
        structures.\\n\\nIn summary, quantum computing leverages the strange rules
        of quantum physics to manipulate data in a way that classical computers can't,
        potentially making them much more powerful for certain tasks.\"}],\"role\":\"agent\"},\"state\":\"completed\"},\"taskId\":\"a7f3c465-f5b1-4b69-a697-d422adf1e766\"}}\r\n\r\n"
    headers:
      cache-control:
      - no-store
      connection:
      - keep-alive
      content-type:
      - text/event-stream; charset=utf-8
      date:
      - Fri, 30 Jan 2026 13:54:06 GMT
      server:
      - uvicorn
      transfer-encoding:
      - chunked
      x-accel-buffering:
      - 'no'
    status:
      code: 200
      message: OK
- request:
    body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert
      researcher with access to remote specialized agents\nYour personal goal is:
      Find and analyze information about AI developments"},{"role":"user","content":"\nCurrent
      Task: Delegate to the remote A2A agent to explain quantum computing in simple
      terms.\n\nProvide your complete response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A
      agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A
      Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true,
      this is sent to the A2A agent. If is_a2a=false, this is your final answer ending
      the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set
      to false when the remote agent has answered your question - extract their answer
      and return it as your final message. Set to true ONLY if you need to ask a NEW,
      DIFFERENT question. NEVER repeat the same request - if the conversation history
      shows the agent already answered, set is_a2a=false immediately.","title":"Is
      A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '1387'
      content-type:
      - application/json
      cookie:
      - COOKIE-XXX
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - 'false'
      x-stainless-helper-method:
      - beta.chat.completions.parse
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.10
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-D3jEwtEsNqmLAA86mD8iOEKiIxri4\",\n \"object\":
        \"chat.completion\",\n \"created\": 1769781258,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"Please
        explain quantum computing in simple terms suitable for someone with no technical
        background.\\\",\\\"is_a2a\\\":true}\",\n \"refusal\": null,\n \"annotations\":
        []\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
        \ }\n ],\n \"usage\": {\n \"prompt_tokens\": 277,\n \"completion_tokens\":
        51,\n \"total_tokens\": 328,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
        0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
        {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
        0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
        \"default\",\n \"system_fingerprint\": \"fp_38699cb0fe\"\n}\n"
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Fri, 30 Jan 2026 13:54:20 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '2125'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
- request:
    body: '{"id":"b1067e98-e5fd-4948-ba91-3c88a101c632","jsonrpc":"2.0","method":"message/stream","params":{"configuration":{"acceptedOutputModes":["application/json"],"blocking":true},"message":{"kind":"message","messageId":"bb50cbc5-0298-4550-8feb-b58e974a10ad","parts":[{"kind":"text","text":"Please
      explain quantum computing in simple terms suitable for someone with no technical
      background."}],"referenceTaskIds":[],"role":"user"}}}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - '*/*, text/event-stream'
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      cache-control:
      - no-store
      connection:
      - keep-alive
      content-length:
      - '426'
      content-type:
      - application/json
      host:
      - localhost:9999
    method: POST
    uri: http://localhost:9999
  response:
    body:
      string: "data: {\"id\":\"b1067e98-e5fd-4948-ba91-3c88a101c632\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"5fc45f44-28ac-43eb-afb7-744d6b0f7efc\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"submitted\"},\"taskId\":\"8a1920a4-9060-4443-8858-39d85c940298\"}}\r\n\r\ndata:
        {\"id\":\"b1067e98-e5fd-4948-ba91-3c88a101c632\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"5fc45f44-28ac-43eb-afb7-744d6b0f7efc\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"working\"},\"taskId\":\"8a1920a4-9060-4443-8858-39d85c940298\"}}\r\n\r\ndata:
        {\"id\":\"b1067e98-e5fd-4948-ba91-3c88a101c632\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"5fc45f44-28ac-43eb-afb7-744d6b0f7efc\",\"final\":true,\"kind\":\"status-update\",\"status\":{\"message\":{\"kind\":\"message\",\"messageId\":\"b35eb686-1bba-4f44-90ec-e17cbd04ac5b\",\"parts\":[{\"kind\":\"text\",\"text\":\"Quantum
        computing is a new type of computing that uses the principles of quantum mechanics,
        which is the science that explains how very small particles, like atoms and
        photons, behave. \\n\\nIn classical computing, information is stored in bits,
        which can either be a 0 or a 1. Think of it like a light switch that can either
        be off (0) or on (1). In contrast, quantum computing uses quantum bits or
        qubits. A qubit can be both 0 and 1 at the same time, thanks to a property
        called superposition. \\n\\nMoreover, qubits can also be linked together through
        a phenomenon known as entanglement, which allows them to work together in
        ways that classical bits cannot. This means that quantum computers can process
        a vast amount of information much more quickly than regular computers for
        certain tasks. \\n\\nIn simple terms, while a regular computer is like a very
        fast and precise librarian, a quantum computer is like a super librarian with
        many hands, allowing it to explore numerous possibilities at once. This enables
        quantum computers to solve complex problems much faster, such as factoring
        large numbers, optimizing logistics, or simulating molecular structures in
        drug discovery. However, they're still in the early stages of development
        and are not yet widely used in everyday applications.\"}],\"role\":\"agent\"},\"state\":\"completed\"},\"taskId\":\"8a1920a4-9060-4443-8858-39d85c940298\"}}\r\n\r\n"
    headers:
      cache-control:
      - no-store
      connection:
      - keep-alive
      content-type:
      - text/event-stream; charset=utf-8
      date:
      - Fri, 30 Jan 2026 13:54:19 GMT
      server:
      - uvicorn
      transfer-encoding:
      - chunked
      x-accel-buffering:
      - 'no'
    status:
      code: 200
      message: OK
- request:
    body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert
      researcher with access to remote specialized agents\nYour personal goal is:
      Find and analyze information about AI developments"},{"role":"user","content":"\nCurrent
      Task: Delegate to the remote A2A agent to explain quantum computing in simple
      terms.\n\nProvide your complete response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A
      agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A
      Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true,
      this is sent to the A2A agent. If is_a2a=false, this is your final answer ending
      the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set
      to false when the remote agent has answered your question - extract their answer
      and return it as your final message. Set to true ONLY if you need to ask a NEW,
      DIFFERENT question. NEVER repeat the same request - if the conversation history
      shows the agent already answered, set is_a2a=false immediately.","title":"Is
      A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '1387'
      content-type:
      - application/json
      cookie:
      - COOKIE-XXX
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - 'false'
      x-stainless-helper-method:
      - beta.chat.completions.parse
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.10
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-D3jF9KeiOZVzcBBNIQnjKnpJzAQ4z\",\n \"object\":
        \"chat.completion\",\n \"created\": 1769781271,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"Could
        you please explain quantum computing in simple terms suitable for a beginner?\\\",\\\"is_a2a\\\":true}\",\n
        \ \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\":
        null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
        277,\n \"completion_tokens\": 51,\n \"total_tokens\": 328,\n \"prompt_tokens_details\":
        {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
        {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
        0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
        \"default\",\n \"system_fingerprint\": \"fp_e01c6f58e1\"\n}\n"
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Fri, 30 Jan 2026 13:54:33 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '1570'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
version: 1
@@ -1,108 +0,0 @@
interactions:
- request:
    body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert
      researcher\nYour personal goal is: Find information"},{"role":"user","content":"\nCurrent
      Task: What is 2 + 2?\n\nProvide your complete response:"}],"model":"gpt-4.1-mini"}'
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\"id\": \"chatcmpl-D3jGUmpP5usJjNny4N4AdOIZPhhFZ\", \"object\": \"chat.completion\",
        \"created\": 1769781354, \"model\": \"gpt-4.1-mini-2025-04-14\", \"choices\":
        [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"The
        sum of 2 + 2 is 4.\", \"refusal\": null, \"annotations\": []}, \"logprobs\":
        null, \"finish_reason\": \"stop\"}], \"usage\": {\"prompt_tokens\": 43,
        \"completion_tokens\": 12, \"total_tokens\": 55, \"prompt_tokens_details\":
        {\"cached_tokens\": 0, \"audio_tokens\": 0}, \"completion_tokens_details\":
        {\"reasoning_tokens\": 0, \"audio_tokens\": 0, \"accepted_prediction_tokens\":
        0, \"rejected_prediction_tokens\": 0}}, \"service_tier\": \"default\",
        \"system_fingerprint\": \"fp_e01c6f58e1\"}\n"
    status:
      code: 200
      message: OK
version: 1
@@ -1,623 +0,0 @@
interactions:
- request:
    body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert
      researcher with access to remote specialized agents\nYour personal goal is:
      Find and analyze information about AI developments"},{"role":"user","content":"\nCurrent
      Task: Delegate to the A2A agent to find the current time in Tokyo.\n\nProvide
      your complete response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A
      agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A
      Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true,
      this is sent to the A2A agent. If is_a2a=false, this is your final answer
      ending the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set
      to false when the remote agent has answered your question - extract their
      answer and return it as your final message. Set to true ONLY if you need to
      ask a NEW, DIFFERENT question. NEVER repeat the same request - if the
      conversation history shows the agent already answered, set is_a2a=false
      immediately.","title":"Is A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\"id\": \"chatcmpl-D3jFOA4Yj6PWEJGejSeUYVfvtrsda\", \"object\": \"chat.completion\",
        \"created\": 1769781286, \"model\": \"gpt-4.1-mini-2025-04-14\", \"choices\":
        [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"What
        is the current time in Tokyo?\\\",\\\"is_a2a\\\":true}\", \"refusal\": null,
        \"annotations\": []}, \"logprobs\": null, \"finish_reason\": \"stop\"}],
        \"usage\": {\"prompt_tokens\": 276, \"completion_tokens\": 45, \"total_tokens\":
        321, \"prompt_tokens_details\": {\"cached_tokens\": 0, \"audio_tokens\": 0},
        \"completion_tokens_details\": {\"reasoning_tokens\": 0, \"audio_tokens\":
        0, \"accepted_prediction_tokens\": 0, \"rejected_prediction_tokens\": 0}},
        \"service_tier\": \"default\", \"system_fingerprint\": \"fp_38699cb0fe\"}\n"
    status:
      code: 200
      message: OK
- request:
    body: '{"id":"a9a976c9-370b-46ad-bfe7-b6115bbf6dea","jsonrpc":"2.0","method":"message/stream","params":{"configuration":{"acceptedOutputModes":["application/json"],"blocking":true},"message":{"kind":"message","messageId":"bea4ac3a-8757-4d50-bca0-6bdf5fd65aff","parts":[{"kind":"text","text":"What
      is the current time in Tokyo?"}],"referenceTaskIds":[],"role":"user"}}}'
    method: POST
    uri: http://localhost:9999
  response:
    body:
      string: "data: {\"id\":\"a9a976c9-370b-46ad-bfe7-b6115bbf6dea\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"ab9cb255-cbc4-42ae-85ec-51df9cdfcb33\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"submitted\"},\"taskId\":\"5b176214-688d-49af-9de4-4013822a36ed\"}}\r\n\r\ndata:
        {\"id\":\"a9a976c9-370b-46ad-bfe7-b6115bbf6dea\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"ab9cb255-cbc4-42ae-85ec-51df9cdfcb33\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"working\"},\"taskId\":\"5b176214-688d-49af-9de4-4013822a36ed\"}}\r\n\r\ndata:
        {\"id\":\"a9a976c9-370b-46ad-bfe7-b6115bbf6dea\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"ab9cb255-cbc4-42ae-85ec-51df9cdfcb33\",\"final\":true,\"kind\":\"status-update\",\"status\":{\"message\":{\"kind\":\"message\",\"messageId\":\"38eb8783-d284-4c03-91a5-74dc5ad92502\",\"parts\":[{\"kind\":\"text\",\"text\":\"[Tool:
        get_time] 2026-01-30 22:54:47 JST (Asia/Tokyo)\\nThe current time in Tokyo
        is 22:54 (10:54 PM) JST on January 30, 2026.\"}],\"role\":\"agent\"},\"state\":\"completed\"},\"taskId\":\"5b176214-688d-49af-9de4-4013822a36ed\"}}\r\n\r\n"
    status:
      code: 200
      message: OK
- request:
    body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert
      researcher with access to remote specialized agents\nYour personal goal is:
      Find and analyze information about AI developments"},{"role":"user","content":"\nCurrent
      Task: Delegate to the A2A agent to find the current time in Tokyo.\n\nProvide
      your complete response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A
      agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A
      Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true,
      this is sent to the A2A agent. If is_a2a=false, this is your final answer
      ending the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set
      to false when the remote agent has answered your question - extract their
      answer and return it as your final message. Set to true ONLY if you need to
      ask a NEW, DIFFERENT question. NEVER repeat the same request - if the
      conversation history shows the agent already answered, set is_a2a=false
      immediately.","title":"Is A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\"id\": \"chatcmpl-D3jFRhymGpy4exif1473yrSiQetC8\", \"object\": \"chat.completion\",
        \"created\": 1769781289, \"model\": \"gpt-4.1-mini-2025-04-14\", \"choices\":
        [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"What
        is the current time in Tokyo?\\\",\\\"is_a2a\\\":true}\", \"refusal\": null,
        \"annotations\": []}, \"logprobs\": null, \"finish_reason\": \"stop\"}],
        \"usage\": {\"prompt_tokens\": 276, \"completion_tokens\": 45, \"total_tokens\":
        321, \"prompt_tokens_details\": {\"cached_tokens\": 0, \"audio_tokens\": 0},
        \"completion_tokens_details\": {\"reasoning_tokens\": 0, \"audio_tokens\":
        0, \"accepted_prediction_tokens\": 0, \"rejected_prediction_tokens\": 0}},
        \"service_tier\": \"default\", \"system_fingerprint\": \"fp_e01c6f58e1\"}\n"
    status:
      code: 200
      message: OK
- request:
    body: '{"id":"d5adb317-9ccc-4a00-a3bc-76fd53318d6e","jsonrpc":"2.0","method":"message/stream","params":{"configuration":{"acceptedOutputModes":["application/json"],"blocking":true},"message":{"kind":"message","messageId":"ae08e6e1-3e93-4c1a-b241-ccf14f47aecd","parts":[{"kind":"text","text":"What
      is the current time in Tokyo?"}],"referenceTaskIds":[],"role":"user"}}}'
    method: POST
    uri: http://localhost:9999
  response:
    body:
      string: "data: {\"id\":\"d5adb317-9ccc-4a00-a3bc-76fd53318d6e\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"76c7c23b-c088-48f9-a517-47037b08f4b4\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"submitted\"},\"taskId\":\"26f6752d-a67c-4928-9981-ef00831caa20\"}}\r\n\r\ndata:
        {\"id\":\"d5adb317-9ccc-4a00-a3bc-76fd53318d6e\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"76c7c23b-c088-48f9-a517-47037b08f4b4\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"working\"},\"taskId\":\"26f6752d-a67c-4928-9981-ef00831caa20\"}}\r\n\r\ndata:
        {\"id\":\"d5adb317-9ccc-4a00-a3bc-76fd53318d6e\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"76c7c23b-c088-48f9-a517-47037b08f4b4\",\"final\":true,\"kind\":\"status-update\",\"status\":{\"message\":{\"kind\":\"message\",\"messageId\":\"f119e774-02bb-4c92-8684-c7c3f69867b7\",\"parts\":[{\"kind\":\"text\",\"text\":\"[Tool:
        get_time] 2026-01-30 22:54:51 JST (Asia/Tokyo)\\nThe current time in Tokyo
        is 22:54 (10:54 PM) JST on January 30, 2026.\"}],\"role\":\"agent\"},\"state\":\"completed\"},\"taskId\":\"26f6752d-a67c-4928-9981-ef00831caa20\"}}\r\n\r\n"
    status:
      code: 200
      message: OK
- request:
    body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert
      researcher with access to remote specialized agents\nYour personal goal is:
      Find and analyze information about AI developments"},{"role":"user","content":"\nCurrent
      Task: Delegate to the A2A agent to find the current time in Tokyo.\n\nProvide
      your complete response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A
      agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A
      Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true,
      this is sent to the A2A agent. If is_a2a=false, this is your final answer
      ending the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set
      to false when the remote agent has answered your question - extract their
      answer and return it as your final message. Set to true ONLY if you need to
      ask a NEW, DIFFERENT question. NEVER repeat the same request - if the
      conversation history shows the agent already answered, set is_a2a=false
      immediately.","title":"Is A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\"id\": \"chatcmpl-D3jFVLRII9zhvNapGsIev63MNMfvU\", \"object\": \"chat.completion\",
        \"created\": 1769781293, \"model\": \"gpt-4.1-mini-2025-04-14\", \"choices\":
        [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"What
        is the current time in Tokyo?\\\",\\\"is_a2a\\\":true}\", \"refusal\": null,
        \"annotations\": []}, \"logprobs\": null, \"finish_reason\": \"stop\"}],
        \"usage\": {\"prompt_tokens\": 276, \"completion_tokens\": 45, \"total_tokens\":
        321, \"prompt_tokens_details\": {\"cached_tokens\": 0, \"audio_tokens\": 0},
        \"completion_tokens_details\": {\"reasoning_tokens\": 0, \"audio_tokens\":
        0, \"accepted_prediction_tokens\": 0, \"rejected_prediction_tokens\": 0}},
        \"service_tier\": \"default\", \"system_fingerprint\": \"fp_38699cb0fe\"}\n"
    status:
      code: 200
      message: OK
- request:
    body: '{"id":"b28b963c-638b-4be3-a111-896f8e9164ef","jsonrpc":"2.0","method":"message/stream","params":{"configuration":{"acceptedOutputModes":["application/json"],"blocking":true},"message":{"kind":"message","messageId":"60818427-7e73-4306-b639-f4b0733aedfe","parts":[{"kind":"text","text":"What
      is the current time in Tokyo?"}],"referenceTaskIds":[],"role":"user"}}}'
    method: POST
    uri: http://localhost:9999
  response:
    body:
      string: "data: {\"id\":\"b28b963c-638b-4be3-a111-896f8e9164ef\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"b3bca9d9-0e9d-4ae1-bc88-a510f56dd254\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"submitted\"},\"taskId\":\"643deb57-f539-4c70-b228-7ae29f18b3d4\"}}\r\n\r\ndata:
        {\"id\":\"b28b963c-638b-4be3-a111-896f8e9164ef\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"b3bca9d9-0e9d-4ae1-bc88-a510f56dd254\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"working\"},\"taskId\":\"643deb57-f539-4c70-b228-7ae29f18b3d4\"}}\r\n\r\ndata:
        {\"id\":\"b28b963c-638b-4be3-a111-896f8e9164ef\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"b3bca9d9-0e9d-4ae1-bc88-a510f56dd254\",\"final\":true,\"kind\":\"status-update\",\"status\":{\"message\":{\"kind\":\"message\",\"messageId\":\"65847915-f2f0-4d4c-be3c-a2770f03dfe9\",\"parts\":[{\"kind\":\"text\",\"text\":\"[Tool:
        get_time] 2026-01-30 22:54:55 JST (Asia/Tokyo)\\nThe current time in Tokyo
        is 22:54:55 JST on January 30, 2026.\"}],\"role\":\"agent\"},\"state\":\"completed\"},\"taskId\":\"643deb57-f539-4c70-b228-7ae29f18b3d4\"}}\r\n\r\n"
    status:
      code: 200
      message: OK
- request:
    body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert
      researcher with access to remote specialized agents\nYour personal goal is:
      Find and analyze information about AI developments"},{"role":"user","content":"\nCurrent
      Task: Delegate to the A2A agent to find the current time in Tokyo.\n\nProvide
      your complete response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A
      agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A
      Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true,
      this is sent to the A2A agent. If is_a2a=false, this is your final answer
      ending the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set
      to false when the remote agent has answered your question - extract their
      answer and return it as your final message. Set to true ONLY if you need to
      ask a NEW, DIFFERENT question. NEVER repeat the same request - if the
      conversation history shows the agent already answered, set is_a2a=false
      immediately.","title":"Is A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\"id\": \"chatcmpl-D3jFZ7bub61EYyd7CnhKJXftnSMo1\", \"object\": \"chat.completion\",
        \"created\": 1769781297, \"model\": \"gpt-4.1-mini-2025-04-14\", \"choices\":
        [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"What
        is the current time in Tokyo?\\\",\\\"is_a2a\\\":true}\", \"refusal\": null,
        \"annotations\": []}, \"logprobs\": null, \"finish_reason\": \"stop\"}],
        \"usage\": {\"prompt_tokens\": 276, \"completion_tokens\": 45, \"total_tokens\":
        321, \"prompt_tokens_details\": {\"cached_tokens\": 0, \"audio_tokens\": 0},
        \"completion_tokens_details\": {\"reasoning_tokens\": 0, \"audio_tokens\":
        0, \"accepted_prediction_tokens\": 0, \"rejected_prediction_tokens\": 0}},
        \"service_tier\": \"default\", \"system_fingerprint\": \"fp_e01c6f58e1\"}\n"
    status:
      code: 200
      message: OK
version: 1
@@ -1,108 +0,0 @@
interactions:
- request:
    body: '{"messages":[{"role":"system","content":"You are Simple Assistant. A helpful
      assistant\nYour personal goal is: Help with basic tasks"},{"role":"user","content":"\nCurrent
      Task: Say hello\n\nProvide your complete response:"}],"model":"gpt-4.1-mini"}'
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\"id\": \"chatcmpl-D3jEYk4JMmw8yOeAGns6bKtAId8bH\", \"object\": \"chat.completion\",
        \"created\": 1769781234, \"model\": \"gpt-4.1-mini-2025-04-14\", \"choices\":
        [{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"Hello!
        How can I assist you today?\", \"refusal\": null, \"annotations\": []},
        \"logprobs\": null, \"finish_reason\": \"stop\"}], \"usage\": {\"prompt_tokens\":
        41, \"completion_tokens\": 9, \"total_tokens\": 50, \"prompt_tokens_details\":
        {\"cached_tokens\": 0, \"audio_tokens\": 0}, \"completion_tokens_details\":
        {\"reasoning_tokens\": 0, \"audio_tokens\": 0, \"accepted_prediction_tokens\":
        0, \"rejected_prediction_tokens\": 0}}, \"service_tier\": \"default\",
        \"system_fingerprint\": \"fp_e01c6f58e1\"}\n"
    status:
      code: 200
      message: OK
version: 1
@@ -1,616 +0,0 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert
|
||||
researcher with access to remote agents\nYour personal goal is: Find and analyze
|
||||
information"},{"role":"user","content":"\nCurrent Task: Use the remote A2A agent
|
||||
to calculate 10 plus 15.\n\nProvide your complete response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A
|
||||
agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A
|
||||
Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true,
|
||||
this is sent to the A2A agent. If is_a2a=false, this is your final answer ending
|
||||
the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set
|
||||
to false when the remote agent has answered your question - extract their answer
|
||||
and return it as your final message. Set to true ONLY if you need to ask a NEW,
|
||||
DIFFERENT question. NEVER repeat the same request - if the conversation history
|
||||
shows the agent already answered, set is_a2a=false immediately.","title":"Is
|
||||
A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
|
||||
headers:
|
||||
User-Agent:
|
||||
- X-USER-AGENT-XXX
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- ACCEPT-ENCODING-XXX
|
||||
authorization:
|
||||
- AUTHORIZATION-XXX
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1324'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
x-stainless-arch:
|
||||
- X-STAINLESS-ARCH-XXX
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-helper-method:
|
||||
- beta.chat.completions.parse
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- X-STAINLESS-OS-XXX
|
||||
x-stainless-package-version:
|
||||
- 1.83.0
|
||||
x-stainless-read-timeout:
|
||||
- X-STAINLESS-READ-TIMEOUT-XXX
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.10
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: "{\n \"id\": \"chatcmpl-D3jEMeqhmMj6BONvp52OoHEswH8Qp\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1769781222,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"Please
|
||||
calculate 10 plus 15.\\\",\\\"is_a2a\\\":true}\",\n \"refusal\": null,\n
|
||||
\ \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\":
|
||||
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 272,\n \"completion_tokens\":
|
||||
44,\n \"total_tokens\": 316,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
|
||||
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
|
||||
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
|
||||
\"default\",\n \"system_fingerprint\": \"fp_e01c6f58e1\"\n}\n"
|
||||
headers:
|
||||
CF-RAY:
|
||||
- CF-RAY-XXX
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Fri, 30 Jan 2026 13:53:43 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- SET-COOKIE-XXX
|
||||
Strict-Transport-Security:
|
||||
- STS-XXX
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- X-CONTENT-TYPE-XXX
|
||||
access-control-expose-headers:
|
||||
- ACCESS-CONTROL-XXX
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- OPENAI-ORG-XXX
|
||||
openai-processing-ms:
|
||||
- '1002'
|
||||
openai-project:
|
||||
- OPENAI-PROJECT-XXX
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-requests:
|
||||
- X-RATELIMIT-LIMIT-REQUESTS-XXX
|
||||
x-ratelimit-limit-tokens:
|
||||
- X-RATELIMIT-LIMIT-TOKENS-XXX
|
||||
x-ratelimit-remaining-requests:
|
||||
- X-RATELIMIT-REMAINING-REQUESTS-XXX
|
||||
x-ratelimit-remaining-tokens:
|
||||
- X-RATELIMIT-REMAINING-TOKENS-XXX
|
||||
x-ratelimit-reset-requests:
|
||||
- X-RATELIMIT-RESET-REQUESTS-XXX
|
||||
x-ratelimit-reset-tokens:
|
||||
- X-RATELIMIT-RESET-TOKENS-XXX
|
||||
x-request-id:
|
||||
- X-REQUEST-ID-XXX
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"id":"67a4ed9d-faa7-47c2-a7ff-0c14003b719b","jsonrpc":"2.0","method":"message/stream","params":{"configuration":{"acceptedOutputModes":["application/json"],"blocking":true},"message":{"kind":"message","messageId":"afde435c-be5e-42cd-9bb3-51f8d029c513","parts":[{"kind":"text","text":"Please
|
||||
calculate 10 plus 15."}],"referenceTaskIds":[],"role":"user"}}}'
|
||||
headers:
|
||||
User-Agent:
|
||||
- X-USER-AGENT-XXX
|
||||
accept:
|
||||
- '*/*, text/event-stream'
|
||||
accept-encoding:
|
||||
- ACCEPT-ENCODING-XXX
|
||||
cache-control:
|
||||
- no-store
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '355'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- localhost:9999
|
||||
method: POST
|
||||
uri: http://localhost:9999
|
||||
response:
|
||||
body:
|
||||
string: "data: {\"id\":\"67a4ed9d-faa7-47c2-a7ff-0c14003b719b\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"27c6694d-4b88-46f7-8187-7517aba2d625\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"submitted\"},\"taskId\":\"8a133424-4936-44ce-8b71-e014182442cc\"}}\r\n\r\ndata:
|
||||
{\"id\":\"67a4ed9d-faa7-47c2-a7ff-0c14003b719b\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"27c6694d-4b88-46f7-8187-7517aba2d625\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"working\"},\"taskId\":\"8a133424-4936-44ce-8b71-e014182442cc\"}}\r\n\r\ndata:
|
||||
{\"id\":\"67a4ed9d-faa7-47c2-a7ff-0c14003b719b\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"27c6694d-4b88-46f7-8187-7517aba2d625\",\"final\":true,\"kind\":\"status-update\",\"status\":{\"message\":{\"kind\":\"message\",\"messageId\":\"e93da13b-98a1-40d4-aa44-a3eb1b22a3fd\",\"parts\":[{\"kind\":\"text\",\"text\":\"[Tool:
|
||||
calculator] 10 + 15 = 25\\nThe result of 10 plus 15 is 25.\"}],\"role\":\"agent\"},\"state\":\"completed\"},\"taskId\":\"8a133424-4936-44ce-8b71-e014182442cc\"}}\r\n\r\n"
|
||||
headers:
|
||||
cache-control:
|
||||
- no-store
|
||||
connection:
|
||||
- keep-alive
|
||||
content-type:
|
||||
- text/event-stream; charset=utf-8
|
||||
date:
|
||||
- Fri, 30 Jan 2026 13:53:43 GMT
|
||||
server:
|
||||
- uvicorn
|
||||
transfer-encoding:
|
||||
- chunked
|
||||
x-accel-buffering:
|
||||
- 'no'
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert
|
||||
researcher with access to remote agents\nYour personal goal is: Find and analyze
|
||||
information"},{"role":"user","content":"\nCurrent Task: Use the remote A2A agent
|
||||
to calculate 10 plus 15.\n\nProvide your complete response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A
|
||||
agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A
|
||||
Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true,
|
||||
this is sent to the A2A agent. If is_a2a=false, this is your final answer ending
|
||||
the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set
|
||||
to false when the remote agent has answered your question - extract their answer
|
||||
and return it as your final message. Set to true ONLY if you need to ask a NEW,
|
||||
DIFFERENT question. NEVER repeat the same request - if the conversation history
|
||||
shows the agent already answered, set is_a2a=false immediately.","title":"Is
|
||||
A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
|
||||
headers:
|
||||
User-Agent:
|
||||
- X-USER-AGENT-XXX
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- ACCEPT-ENCODING-XXX
|
||||
authorization:
|
||||
- AUTHORIZATION-XXX
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1324'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- COOKIE-XXX
|
||||
host:
|
||||
- api.openai.com
|
||||
x-stainless-arch:
|
||||
- X-STAINLESS-ARCH-XXX
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-helper-method:
|
||||
- beta.chat.completions.parse
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- X-STAINLESS-OS-XXX
|
||||
x-stainless-package-version:
|
||||
- 1.83.0
|
||||
x-stainless-read-timeout:
|
||||
- X-STAINLESS-READ-TIMEOUT-XXX
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.10
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: "{\n \"id\": \"chatcmpl-D3jEQpmgfxkHfFd07HJRcngcqyLQf\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1769781226,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"Please
|
||||
calculate 10 plus 15.\\\",\\\"is_a2a\\\":true}\",\n \"refusal\": null,\n
|
||||
\ \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\":
|
||||
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 272,\n \"completion_tokens\":
|
||||
44,\n \"total_tokens\": 316,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
|
||||
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
|
||||
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
|
||||
\"default\",\n \"system_fingerprint\": \"fp_e01c6f58e1\"\n}\n"
|
||||
headers:
|
||||
CF-RAY:
|
||||
- CF-RAY-XXX
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Fri, 30 Jan 2026 13:53:46 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- STS-XXX
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- X-CONTENT-TYPE-XXX
|
||||
access-control-expose-headers:
|
||||
- ACCESS-CONTROL-XXX
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- OPENAI-ORG-XXX
|
||||
openai-processing-ms:
|
||||
- '823'
|
||||
openai-project:
|
||||
- OPENAI-PROJECT-XXX
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-requests:
|
||||
- X-RATELIMIT-LIMIT-REQUESTS-XXX
|
||||
x-ratelimit-limit-tokens:
|
||||
- X-RATELIMIT-LIMIT-TOKENS-XXX
|
||||
x-ratelimit-remaining-requests:
|
||||
- X-RATELIMIT-REMAINING-REQUESTS-XXX
|
||||
x-ratelimit-remaining-tokens:
|
||||
- X-RATELIMIT-REMAINING-TOKENS-XXX
|
||||
x-ratelimit-reset-requests:
|
||||
- X-RATELIMIT-RESET-REQUESTS-XXX
|
||||
x-ratelimit-reset-tokens:
|
||||
- X-RATELIMIT-RESET-TOKENS-XXX
|
||||
x-request-id:
|
||||
- X-REQUEST-ID-XXX
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"id":"65d41f18-6af2-4025-b7b8-e671c071d282","jsonrpc":"2.0","method":"message/stream","params":{"configuration":{"acceptedOutputModes":["application/json"],"blocking":true},"message":{"kind":"message","messageId":"0a136204-0f29-4b5e-aef1-a0886590b241","parts":[{"kind":"text","text":"Please
|
||||
calculate 10 plus 15."}],"referenceTaskIds":[],"role":"user"}}}'
|
||||
headers:
|
||||
User-Agent:
|
||||
- X-USER-AGENT-XXX
|
||||
accept:
|
||||
- '*/*, text/event-stream'
|
||||
accept-encoding:
|
||||
- ACCEPT-ENCODING-XXX
|
||||
cache-control:
|
||||
- no-store
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '355'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- localhost:9999
|
||||
method: POST
|
||||
uri: http://localhost:9999
|
||||
response:
|
||||
body:
|
||||
string: "data: {\"id\":\"65d41f18-6af2-4025-b7b8-e671c071d282\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"876faef7-7df5-42cf-9bda-cde027a56d0a\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"submitted\"},\"taskId\":\"7ddd7cc5-2315-4aae-9cc8-52019642c23e\"}}\r\n\r\ndata:
|
||||
{\"id\":\"65d41f18-6af2-4025-b7b8-e671c071d282\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"876faef7-7df5-42cf-9bda-cde027a56d0a\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"working\"},\"taskId\":\"7ddd7cc5-2315-4aae-9cc8-52019642c23e\"}}\r\n\r\ndata:
|
||||
{\"id\":\"65d41f18-6af2-4025-b7b8-e671c071d282\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"876faef7-7df5-42cf-9bda-cde027a56d0a\",\"final\":true,\"kind\":\"status-update\",\"status\":{\"message\":{\"kind\":\"message\",\"messageId\":\"2060c3ad-07a4-4f09-afcb-bb6114128b46\",\"parts\":[{\"kind\":\"text\",\"text\":\"[Tool:
|
||||
calculator] 10 + 15 = 25\\nThe result of 10 plus 15 is 25.\"}],\"role\":\"agent\"},\"state\":\"completed\"},\"taskId\":\"7ddd7cc5-2315-4aae-9cc8-52019642c23e\"}}\r\n\r\n"
|
||||
headers:
|
||||
cache-control:
|
||||
- no-store
|
||||
connection:
|
||||
- keep-alive
|
||||
content-type:
|
||||
- text/event-stream; charset=utf-8
|
||||
date:
|
||||
- Fri, 30 Jan 2026 13:53:46 GMT
|
||||
server:
|
||||
- uvicorn
|
||||
transfer-encoding:
|
||||
- chunked
|
||||
x-accel-buffering:
|
||||
- 'no'
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert
|
||||
researcher with access to remote agents\nYour personal goal is: Find and analyze
|
||||
information"},{"role":"user","content":"\nCurrent Task: Use the remote A2A agent
|
||||
to calculate 10 plus 15.\n\nProvide your complete response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A
|
||||
agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A
|
||||
Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true,
|
||||
this is sent to the A2A agent. If is_a2a=false, this is your final answer ending
|
||||
the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set
|
||||
to false when the remote agent has answered your question - extract their answer
|
||||
and return it as your final message. Set to true ONLY if you need to ask a NEW,
|
||||
DIFFERENT question. NEVER repeat the same request - if the conversation history
|
||||
shows the agent already answered, set is_a2a=false immediately.","title":"Is
|
||||
A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
|
||||
headers:
|
||||
User-Agent:
|
||||
- X-USER-AGENT-XXX
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- ACCEPT-ENCODING-XXX
|
||||
authorization:
|
||||
- AUTHORIZATION-XXX
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1324'
|
||||
content-type:
|
||||
- application/json
|
||||
cookie:
|
||||
- COOKIE-XXX
|
||||
host:
|
||||
- api.openai.com
|
||||
x-stainless-arch:
|
||||
- X-STAINLESS-ARCH-XXX
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-helper-method:
|
||||
- beta.chat.completions.parse
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- X-STAINLESS-OS-XXX
|
||||
x-stainless-package-version:
|
||||
- 1.83.0
|
||||
x-stainless-read-timeout:
|
||||
- X-STAINLESS-READ-TIMEOUT-XXX
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.12.10
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: "{\n \"id\": \"chatcmpl-D3jETAM2THPraxoRQnGmw7NVXLCbF\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1769781229,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"Calculate
|
||||
the sum of 10 and 15.\\\",\\\"is_a2a\\\":true}\",\n \"refusal\": null,\n
|
||||
\ \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\":
|
||||
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 272,\n \"completion_tokens\":
|
||||
46,\n \"total_tokens\": 318,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
|
||||
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
|
||||
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
|
||||
\"default\",\n \"system_fingerprint\": \"fp_e01c6f58e1\"\n}\n"
|
||||
headers:
|
||||
CF-RAY:
|
||||
- CF-RAY-XXX
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Fri, 30 Jan 2026 13:53:50 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- STS-XXX
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- X-CONTENT-TYPE-XXX
|
||||
access-control-expose-headers:
|
||||
- ACCESS-CONTROL-XXX
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- OPENAI-ORG-XXX
|
||||
openai-processing-ms:
|
||||
- '964'
|
||||
openai-project:
|
||||
- OPENAI-PROJECT-XXX
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-requests:
|
||||
- X-RATELIMIT-LIMIT-REQUESTS-XXX
|
||||
x-ratelimit-limit-tokens:
|
||||
- X-RATELIMIT-LIMIT-TOKENS-XXX
|
||||
x-ratelimit-remaining-requests:
|
||||
- X-RATELIMIT-REMAINING-REQUESTS-XXX
|
||||
x-ratelimit-remaining-tokens:
|
||||
- X-RATELIMIT-REMAINING-TOKENS-XXX
|
||||
x-ratelimit-reset-requests:
|
||||
- X-RATELIMIT-RESET-REQUESTS-XXX
|
||||
x-ratelimit-reset-tokens:
|
||||
- X-RATELIMIT-RESET-TOKENS-XXX
|
||||
x-request-id:
|
||||
- X-REQUEST-ID-XXX
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
    body: '{"id":"4b77421f-a7ad-459c-89be-8fc0990659ca","jsonrpc":"2.0","method":"message/stream","params":{"configuration":{"acceptedOutputModes":["application/json"],"blocking":true},"message":{"kind":"message","messageId":"8fcfce9a-0561-466f-8825-b761b1824e7c","parts":[{"kind":"text","text":"Calculate
      the sum of 10 and 15."}],"referenceTaskIds":[],"role":"user"}}}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - '*/*, text/event-stream'
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      cache-control:
      - no-store
      connection:
      - keep-alive
      content-length:
      - '358'
      content-type:
      - application/json
      host:
      - localhost:9999
    method: POST
    uri: http://localhost:9999
  response:
    body:
      string: "data: {\"id\":\"4b77421f-a7ad-459c-89be-8fc0990659ca\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"be79af13-4d13-4574-8d92-d92ab2713d49\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"submitted\"},\"taskId\":\"12070e13-e15a-48d9-879a-748b21189a8d\"}}\r\n\r\ndata:
        {\"id\":\"4b77421f-a7ad-459c-89be-8fc0990659ca\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"be79af13-4d13-4574-8d92-d92ab2713d49\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"working\"},\"taskId\":\"12070e13-e15a-48d9-879a-748b21189a8d\"}}\r\n\r\ndata:
        {\"id\":\"4b77421f-a7ad-459c-89be-8fc0990659ca\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"be79af13-4d13-4574-8d92-d92ab2713d49\",\"final\":true,\"kind\":\"status-update\",\"status\":{\"message\":{\"kind\":\"message\",\"messageId\":\"36010d61-c4af-4b31-a7f0-15ae6b61de0a\",\"parts\":[{\"kind\":\"text\",\"text\":\"[Tool:
        calculator] 10 + 15 = 25\\nThe sum of 10 and 15 is 25.\"}],\"role\":\"agent\"},\"state\":\"completed\"},\"taskId\":\"12070e13-e15a-48d9-879a-748b21189a8d\"}}\r\n\r\n"
    headers:
      cache-control:
      - no-store
      connection:
      - keep-alive
      content-type:
      - text/event-stream; charset=utf-8
      date:
      - Fri, 30 Jan 2026 13:53:49 GMT
      server:
      - uvicorn
      transfer-encoding:
      - chunked
      x-accel-buffering:
      - 'no'
    status:
      code: 200
      message: OK
- request:
    body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert
      researcher with access to remote agents\nYour personal goal is: Find and analyze
      information"},{"role":"user","content":"\nCurrent Task: Use the remote A2A agent
      to calculate 10 plus 15.\n\nProvide your complete response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A
      agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A
      Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true,
      this is sent to the A2A agent. If is_a2a=false, this is your final answer ending
      the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set
      to false when the remote agent has answered your question - extract their answer
      and return it as your final message. Set to true ONLY if you need to ask a NEW,
      DIFFERENT question. NEVER repeat the same request - if the conversation history
      shows the agent already answered, set is_a2a=false immediately.","title":"Is
      A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '1324'
      content-type:
      - application/json
      cookie:
      - COOKIE-XXX
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - 'false'
      x-stainless-helper-method:
      - beta.chat.completions.parse
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.10
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-D3jEXruhxxf838JOhcjuSrsw5cKNl\",\n \"object\":
        \"chat.completion\",\n \"created\": 1769781233,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"Calculate
        the sum of 10 plus 15.\\\",\\\"is_a2a\\\":true}\",\n \"refusal\": null,\n
        \ \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\":
        \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 272,\n \"completion_tokens\":
        46,\n \"total_tokens\": 318,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
        0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
        {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
        0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
        \"default\",\n \"system_fingerprint\": \"fp_e01c6f58e1\"\n}\n"
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Fri, 30 Jan 2026 13:53:54 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '1054'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
version: 1
@@ -1,700 +0,0 @@
interactions:
- request:
    body: ''
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - '*/*'
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      connection:
      - keep-alive
      host:
      - 0.0.0.0:9999
    method: GET
    uri: http://0.0.0.0:9999/.well-known/agent-card.json
  response:
    body:
      string: '{"capabilities":{"pushNotifications":true,"streaming":true},"defaultInputModes":["text/plain","application/json"],"defaultOutputModes":["text/plain","application/json"],"description":"An
        AI assistant powered by OpenAI GPT with calculator and time tools. Ask questions,
        perform calculations, or get the current time in any timezone.","name":"GPT
        Assistant","preferredTransport":"JSONRPC","protocolVersion":"0.3.0","skills":[{"description":"Have
        a general conversation with the AI assistant. Ask questions, get explanations,
        or just chat.","examples":["Hello, how are you?","Explain quantum computing
        in simple terms","What can you help me with?"],"id":"conversation","name":"General
        Conversation","tags":["chat","conversation","general"]},{"description":"Perform
        mathematical calculations including arithmetic, exponents, and more.","examples":["What
        is 25 * 17?","Calculate 2^10","What''s (100 + 50) / 3?"],"id":"calculator","name":"Calculator","tags":["math","calculator","arithmetic"]},{"description":"Get
        the current date and time in any timezone.","examples":["What time is it?","What''s
        the current time in Tokyo?","What''s today''s date in New York?"],"id":"time","name":"Current
        Time","tags":["time","date","timezone"]}],"url":"http://localhost:9999","version":"1.0.0"}'
    headers:
      content-length:
      - '1272'
      content-type:
      - application/json
      date:
      - Fri, 30 Jan 2026 13:53:25 GMT
      server:
      - uvicorn
    status:
      code: 200
      message: OK
- request:
    body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert
      researcher with access to remote agents\nYour personal goal is: Find and analyze
      information"},{"role":"user","content":"\nCurrent Task: Ask the A2A agent to
      calculate 100 divided by 4.\n\nProvide your complete response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A
      agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A
      Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true,
      this is sent to the A2A agent. If is_a2a=false, this is your final answer ending
      the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set
      to false when the remote agent has answered your question - extract their answer
      and return it as your final message. Set to true ONLY if you need to ask a NEW,
      DIFFERENT question. NEVER repeat the same request - if the conversation history
      shows the agent already answered, set is_a2a=false immediately.","title":"Is
      A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '1323'
      content-type:
      - application/json
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - 'false'
      x-stainless-helper-method:
      - beta.chat.completions.parse
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.10
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-D3jE6wnR5kKEYmPD3DVfGlrZSNhR9\",\n \"object\":
        \"chat.completion\",\n \"created\": 1769781206,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"Calculate
        100 divided by 4.\\\",\\\"is_a2a\\\":true}\",\n \"refusal\": null,\n
        \ \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\":
        \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 272,\n \"completion_tokens\":
        44,\n \"total_tokens\": 316,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
        0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
        {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
        0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
        \"default\",\n \"system_fingerprint\": \"fp_e01c6f58e1\"\n}\n"
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Fri, 30 Jan 2026 13:53:27 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - SET-COOKIE-XXX
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '707'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
- request:
    body: ''
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - '*/*'
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      connection:
      - keep-alive
      host:
      - 0.0.0.0:9999
    method: GET
    uri: http://0.0.0.0:9999/.well-known/agent-card.json
  response:
    body:
      string: '{"capabilities":{"pushNotifications":true,"streaming":true},"defaultInputModes":["text/plain","application/json"],"defaultOutputModes":["text/plain","application/json"],"description":"An
        AI assistant powered by OpenAI GPT with calculator and time tools. Ask questions,
        perform calculations, or get the current time in any timezone.","name":"GPT
        Assistant","preferredTransport":"JSONRPC","protocolVersion":"0.3.0","skills":[{"description":"Have
        a general conversation with the AI assistant. Ask questions, get explanations,
        or just chat.","examples":["Hello, how are you?","Explain quantum computing
        in simple terms","What can you help me with?"],"id":"conversation","name":"General
        Conversation","tags":["chat","conversation","general"]},{"description":"Perform
        mathematical calculations including arithmetic, exponents, and more.","examples":["What
        is 25 * 17?","Calculate 2^10","What''s (100 + 50) / 3?"],"id":"calculator","name":"Calculator","tags":["math","calculator","arithmetic"]},{"description":"Get
        the current date and time in any timezone.","examples":["What time is it?","What''s
        the current time in Tokyo?","What''s today''s date in New York?"],"id":"time","name":"Current
        Time","tags":["time","date","timezone"]}],"url":"http://localhost:9999","version":"1.0.0"}'
    headers:
      content-length:
      - '1272'
      content-type:
      - application/json
      date:
      - Fri, 30 Jan 2026 13:53:27 GMT
      server:
      - uvicorn
    status:
      code: 200
      message: OK
- request:
    body: '{"id":"e57a2dcb-f8ab-4cca-a9db-419c812895df","jsonrpc":"2.0","method":"message/stream","params":{"configuration":{"acceptedOutputModes":["application/json"],"blocking":true},"message":{"kind":"message","messageId":"dc079756-dd5a-4c0d-82e4-ecd75b3ccf80","parts":[{"kind":"text","text":"Calculate
      100 divided by 4."}],"referenceTaskIds":[],"role":"user"}}}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - '*/*, text/event-stream'
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      cache-control:
      - no-store
      connection:
      - keep-alive
      content-length:
      - '354'
      content-type:
      - application/json
      host:
      - localhost:9999
    method: POST
    uri: http://localhost:9999
  response:
    body:
      string: "data: {\"id\":\"e57a2dcb-f8ab-4cca-a9db-419c812895df\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"191d29a9-4b74-483e-aafc-56852d143c47\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"submitted\"},\"taskId\":\"62bd40f6-cb9a-447b-9cb4-0e7dc55847ff\"}}\r\n\r\ndata:
        {\"id\":\"e57a2dcb-f8ab-4cca-a9db-419c812895df\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"191d29a9-4b74-483e-aafc-56852d143c47\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"working\"},\"taskId\":\"62bd40f6-cb9a-447b-9cb4-0e7dc55847ff\"}}\r\n\r\ndata:
        {\"id\":\"e57a2dcb-f8ab-4cca-a9db-419c812895df\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"191d29a9-4b74-483e-aafc-56852d143c47\",\"final\":true,\"kind\":\"status-update\",\"status\":{\"message\":{\"kind\":\"message\",\"messageId\":\"cfdd36ae-73d7-416f-aa84-447e59293839\",\"parts\":[{\"kind\":\"text\",\"text\":\"[Tool:
        calculator] 100 / 4 = 25.0\\n100 divided by 4 is 25.0.\"}],\"role\":\"agent\"},\"state\":\"completed\"},\"taskId\":\"62bd40f6-cb9a-447b-9cb4-0e7dc55847ff\"}}\r\n\r\n"
    headers:
      cache-control:
      - no-store
      connection:
      - keep-alive
      content-type:
      - text/event-stream; charset=utf-8
      date:
      - Fri, 30 Jan 2026 13:53:27 GMT
      server:
      - uvicorn
      transfer-encoding:
      - chunked
      x-accel-buffering:
      - 'no'
    status:
      code: 200
      message: OK
- request:
    body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert
      researcher with access to remote agents\nYour personal goal is: Find and analyze
      information"},{"role":"user","content":"\nCurrent Task: Ask the A2A agent to
      calculate 100 divided by 4.\n\nProvide your complete response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A
      agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A
      Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true,
      this is sent to the A2A agent. If is_a2a=false, this is your final answer ending
      the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set
      to false when the remote agent has answered your question - extract their answer
      and return it as your final message. Set to true ONLY if you need to ask a NEW,
      DIFFERENT question. NEVER repeat the same request - if the conversation history
      shows the agent already answered, set is_a2a=false immediately.","title":"Is
      A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '1323'
      content-type:
      - application/json
      cookie:
      - COOKIE-XXX
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - 'false'
      x-stainless-helper-method:
      - beta.chat.completions.parse
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.10
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-D3jEBY3yIu050RsylIGHOW0FKUT5f\",\n \"object\":
        \"chat.completion\",\n \"created\": 1769781211,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"Please
        calculate 100 divided by 4.\\\",\\\"is_a2a\\\":true}\",\n \"refusal\":
        null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
        \ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
        272,\n \"completion_tokens\": 45,\n \"total_tokens\": 317,\n \"prompt_tokens_details\":
        {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
        {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
        0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
        \"default\",\n \"system_fingerprint\": \"fp_e01c6f58e1\"\n}\n"
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Fri, 30 Jan 2026 13:53:32 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '847'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
- request:
    body: '{"id":"9b0911f0-c3ad-4b84-bd6c-557de6d598e5","jsonrpc":"2.0","method":"message/stream","params":{"configuration":{"acceptedOutputModes":["application/json"],"blocking":true},"message":{"kind":"message","messageId":"181218b4-0f76-48f0-a08c-feaf98bca6d5","parts":[{"kind":"text","text":"Please
      calculate 100 divided by 4."}],"referenceTaskIds":[],"role":"user"}}}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - '*/*, text/event-stream'
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      cache-control:
      - no-store
      connection:
      - keep-alive
      content-length:
      - '361'
      content-type:
      - application/json
      host:
      - localhost:9999
    method: POST
    uri: http://localhost:9999
  response:
    body:
      string: "data: {\"id\":\"9b0911f0-c3ad-4b84-bd6c-557de6d598e5\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"e178978b-82ad-4592-a46a-ceaabe7fa249\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"submitted\"},\"taskId\":\"5e7ad040-e0ef-4371-9661-3d65b0263806\"}}\r\n\r\ndata:
        {\"id\":\"9b0911f0-c3ad-4b84-bd6c-557de6d598e5\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"e178978b-82ad-4592-a46a-ceaabe7fa249\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"working\"},\"taskId\":\"5e7ad040-e0ef-4371-9661-3d65b0263806\"}}\r\n\r\ndata:
        {\"id\":\"9b0911f0-c3ad-4b84-bd6c-557de6d598e5\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"e178978b-82ad-4592-a46a-ceaabe7fa249\",\"final\":true,\"kind\":\"status-update\",\"status\":{\"message\":{\"kind\":\"message\",\"messageId\":\"a36e19fd-eb78-45b9-bff0-93a9bcc6af28\",\"parts\":[{\"kind\":\"text\",\"text\":\"[Tool:
        calculator] 100 / 4 = 25.0\\n100 divided by 4 is 25.0.\"}],\"role\":\"agent\"},\"state\":\"completed\"},\"taskId\":\"5e7ad040-e0ef-4371-9661-3d65b0263806\"}}\r\n\r\n"
    headers:
      cache-control:
      - no-store
      connection:
      - keep-alive
      content-type:
      - text/event-stream; charset=utf-8
      date:
      - Fri, 30 Jan 2026 13:53:31 GMT
      server:
      - uvicorn
      transfer-encoding:
      - chunked
      x-accel-buffering:
      - 'no'
    status:
      code: 200
      message: OK
- request:
    body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert
      researcher with access to remote agents\nYour personal goal is: Find and analyze
      information"},{"role":"user","content":"\nCurrent Task: Ask the A2A agent to
      calculate 100 divided by 4.\n\nProvide your complete response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A
      agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A
      Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true,
      this is sent to the A2A agent. If is_a2a=false, this is your final answer ending
      the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set
      to false when the remote agent has answered your question - extract their answer
      and return it as your final message. Set to true ONLY if you need to ask a NEW,
      DIFFERENT question. NEVER repeat the same request - if the conversation history
      shows the agent already answered, set is_a2a=false immediately.","title":"Is
      A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '1323'
      content-type:
      - application/json
      cookie:
      - COOKIE-XXX
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - 'false'
      x-stainless-helper-method:
      - beta.chat.completions.parse
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.10
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-D3jEGe4xGriXMhFwcKlE1yKn8hyn8\",\n \"object\":
        \"chat.completion\",\n \"created\": 1769781216,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"Calculate
        100 divided by 4.\\\",\\\"is_a2a\\\":true}\",\n \"refusal\": null,\n
        \ \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\":
        \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 272,\n \"completion_tokens\":
        44,\n \"total_tokens\": 316,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
        0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
        {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
        0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
        \"default\",\n \"system_fingerprint\": \"fp_e01c6f58e1\"\n}\n"
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Fri, 30 Jan 2026 13:53:37 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '911'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
- request:
    body: '{"id":"a2e53785-6418-4040-a368-4501952de271","jsonrpc":"2.0","method":"message/stream","params":{"configuration":{"acceptedOutputModes":["application/json"],"blocking":true},"message":{"kind":"message","messageId":"98f4aca2-2543-4c02-a89f-a09693f7dc01","parts":[{"kind":"text","text":"Calculate
      100 divided by 4."}],"referenceTaskIds":[],"role":"user"}}}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - '*/*, text/event-stream'
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      cache-control:
      - no-store
      connection:
      - keep-alive
      content-length:
      - '354'
      content-type:
      - application/json
      host:
      - localhost:9999
    method: POST
    uri: http://localhost:9999
  response:
    body:
      string: "data: {\"id\":\"a2e53785-6418-4040-a368-4501952de271\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"0f946602-edc7-4f21-bbbb-ee6a2e49d9f8\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"submitted\"},\"taskId\":\"97aac2ac-ba0e-4784-bb20-f1b5c44ebb14\"}}\r\n\r\ndata:
        {\"id\":\"a2e53785-6418-4040-a368-4501952de271\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"0f946602-edc7-4f21-bbbb-ee6a2e49d9f8\",\"final\":false,\"kind\":\"status-update\",\"status\":{\"state\":\"working\"},\"taskId\":\"97aac2ac-ba0e-4784-bb20-f1b5c44ebb14\"}}\r\n\r\ndata:
        {\"id\":\"a2e53785-6418-4040-a368-4501952de271\",\"jsonrpc\":\"2.0\",\"result\":{\"contextId\":\"0f946602-edc7-4f21-bbbb-ee6a2e49d9f8\",\"final\":true,\"kind\":\"status-update\",\"status\":{\"message\":{\"kind\":\"message\",\"messageId\":\"cb58a105-b88d-49c7-bebd-7add42086b8e\",\"parts\":[{\"kind\":\"text\",\"text\":\"[Tool:
        calculator] 100 / 4 = 25.0\\n100 divided by 4 is 25.\"}],\"role\":\"agent\"},\"state\":\"completed\"},\"taskId\":\"97aac2ac-ba0e-4784-bb20-f1b5c44ebb14\"}}\r\n\r\n"
    headers:
      cache-control:
      - no-store
      connection:
      - keep-alive
      content-type:
      - text/event-stream; charset=utf-8
      date:
      - Fri, 30 Jan 2026 13:53:37 GMT
      server:
      - uvicorn
      transfer-encoding:
      - chunked
      x-accel-buffering:
      - 'no'
    status:
      code: 200
      message: OK
- request:
    body: '{"messages":[{"role":"system","content":"You are Research Analyst. Expert
      researcher with access to remote agents\nYour personal goal is: Find and analyze
      information"},{"role":"user","content":"\nCurrent Task: Ask the A2A agent to
      calculate 100 divided by 4.\n\nProvide your complete response:"}],"model":"gpt-4.1-mini","response_format":{"type":"json_schema","json_schema":{"schema":{"properties":{"a2a_ids":{"description":"A2A
      agent IDs to delegate to.","items":{"const":"http://0.0.0.0:9999/.well-known/agent-card.json","type":"string"},"maxItems":1,"title":"A2A
      Ids","type":"array"},"message":{"description":"The message content. If is_a2a=true,
      this is sent to the A2A agent. If is_a2a=false, this is your final answer ending
      the conversation.","title":"Message","type":"string"},"is_a2a":{"description":"Set
      to false when the remote agent has answered your question - extract their answer
      and return it as your final message. Set to true ONLY if you need to ask a NEW,
      DIFFERENT question. NEVER repeat the same request - if the conversation history
      shows the agent already answered, set is_a2a=false immediately.","title":"Is
      A2A","type":"boolean"}},"required":["a2a_ids","message","is_a2a"],"title":"AgentResponse","type":"object","additionalProperties":false},"name":"AgentResponse","strict":true}},"stream":false}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '1323'
      content-type:
      - application/json
      cookie:
      - COOKIE-XXX
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - 'false'
      x-stainless-helper-method:
      - beta.chat.completions.parse
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.10
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-D3jELCM5f5LgexzQ8gSreUb0jzxJP\",\n \"object\":
        \"chat.completion\",\n \"created\": 1769781221,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": \"{\\\"a2a_ids\\\":[\\\"http://0.0.0.0:9999/.well-known/agent-card.json\\\"],\\\"message\\\":\\\"Calculate
        100 divided by 4.\\\",\\\"is_a2a\\\":true}\",\n \"refusal\": null,\n
        \ \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\":
        \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 272,\n \"completion_tokens\":
        44,\n \"total_tokens\": 316,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
        0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
        {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
        0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
        \"default\",\n \"system_fingerprint\": \"fp_e01c6f58e1\"\n}\n"
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Fri, 30 Jan 2026 13:53:42 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '826'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
version: 1