Compare commits


2 Commits

Author SHA1 Message Date
Alex
b626dd5ec9 fix: only fall back to polling when streaming is explicitly False
Addresses review: agents that leave streaming unset (None) should get the
default StreamingHandler, not polling. Only agents that explicitly set
streaming=False trigger the polling fallback.
2026-04-12 16:38:44 -07:00
Alex
7169d03b80 fix: respect agent card streaming capability in A2A delegation
When the remote agent advertises streaming=False in its capabilities,
the client now falls back to PollingHandler instead of always using
StreamingHandler. This fixes connection failures when delegating to
non-streaming A2A servers.
2026-04-12 16:23:27 -07:00
41 changed files with 685 additions and 1236 deletions

View File

@@ -4,97 +4,6 @@ description: "Product updates, improvements, and fixes
icon: "clock"
mode: "wide"
---
<Update label="Apr 16, 2026">
## v1.14.2rc1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2rc1)
## What's Changed
### Bug Fixes
- Fix handling of cyclic JSON schemas in the MCP tool
- Fix vulnerability by updating python-multipart to 0.0.26
- Fix vulnerability by updating pypdf to 6.10.1
### Documentation
- Update changelog and version for v1.14.2a5
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 15, 2026">
## v1.14.2a5
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a5)
## What's Changed
### Documentation
- Update changelog and version for v1.14.2a4
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 15, 2026">
## v1.14.2a4
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a4)
## What's Changed
### Features
- Add resume hints to the devtools release on failure
### Bug Fixes
- Fix strict mode forwarding to the Bedrock Converse API
- Pin pytest to 9.0.3 for security vulnerability GHSA-6w46-j5rx-g56g
- Raise the OpenAI lower bound to >=2.0.0
### Documentation
- Update changelog and version for v1.14.2a3
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 13, 2026">
## v1.14.2a3
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a3)
## What's Changed
### Features
- Add deploy validation CLI
- Improve LLM initialization usability
### Bug Fixes
- Override pypdf and uv to patched versions for CVE-2026-40260 and GHSA-pjjw-68hj-v9mw
- Upgrade requests to >=2.33.0 to address a temp-file CVE
- Preserve Bedrock tool call arguments by removing the truthy default
- Sanitize tool schemas for strict mode
- Fix the MemoryRecord embedding serialization test
### Documentation
- Clean up enterprise A2A language
- Add enterprise A2A feature documentation
- Update open-source A2A documentation
- Update changelog and version for v1.14.2a2
## Contributors
@Yanhu007, @greysonlalonde
</Update>
<Update label="Apr 10, 2026">
## v1.14.2a2

View File

@@ -392,8 +392,7 @@
"en/enterprise/features/marketplace",
"en/enterprise/features/agent-repositories",
"en/enterprise/features/tools-and-integrations",
"en/enterprise/features/pii-trace-redactions",
"en/enterprise/features/a2a"
"en/enterprise/features/pii-trace-redactions"
]
},
{
@@ -866,8 +865,7 @@
"en/enterprise/features/marketplace",
"en/enterprise/features/agent-repositories",
"en/enterprise/features/tools-and-integrations",
"en/enterprise/features/pii-trace-redactions",
"en/enterprise/features/a2a"
"en/enterprise/features/pii-trace-redactions"
]
},
{
@@ -1340,8 +1338,7 @@
"en/enterprise/features/marketplace",
"en/enterprise/features/agent-repositories",
"en/enterprise/features/tools-and-integrations",
"en/enterprise/features/pii-trace-redactions",
"en/enterprise/features/a2a"
"en/enterprise/features/pii-trace-redactions"
]
},
{
@@ -1814,8 +1811,7 @@
"en/enterprise/features/marketplace",
"en/enterprise/features/agent-repositories",
"en/enterprise/features/tools-and-integrations",
"en/enterprise/features/pii-trace-redactions",
"en/enterprise/features/a2a"
"en/enterprise/features/pii-trace-redactions"
]
},
{
@@ -2287,8 +2283,7 @@
"en/enterprise/features/marketplace",
"en/enterprise/features/agent-repositories",
"en/enterprise/features/tools-and-integrations",
"en/enterprise/features/pii-trace-redactions",
"en/enterprise/features/a2a"
"en/enterprise/features/pii-trace-redactions"
]
},
{
@@ -2759,8 +2754,7 @@
"en/enterprise/features/marketplace",
"en/enterprise/features/agent-repositories",
"en/enterprise/features/tools-and-integrations",
"en/enterprise/features/pii-trace-redactions",
"en/enterprise/features/a2a"
"en/enterprise/features/pii-trace-redactions"
]
},
{
@@ -3231,8 +3225,7 @@
"en/enterprise/features/marketplace",
"en/enterprise/features/agent-repositories",
"en/enterprise/features/tools-and-integrations",
"en/enterprise/features/pii-trace-redactions",
"en/enterprise/features/a2a"
"en/enterprise/features/pii-trace-redactions"
]
},
{
@@ -3705,8 +3698,7 @@
"en/enterprise/features/marketplace",
"en/enterprise/features/agent-repositories",
"en/enterprise/features/tools-and-integrations",
"en/enterprise/features/pii-trace-redactions",
"en/enterprise/features/a2a"
"en/enterprise/features/pii-trace-redactions"
]
},
{
@@ -4177,8 +4169,7 @@
"en/enterprise/features/marketplace",
"en/enterprise/features/agent-repositories",
"en/enterprise/features/tools-and-integrations",
"en/enterprise/features/pii-trace-redactions",
"en/enterprise/features/a2a"
"en/enterprise/features/pii-trace-redactions"
]
},
{
@@ -4652,8 +4643,7 @@
"en/enterprise/features/marketplace",
"en/enterprise/features/agent-repositories",
"en/enterprise/features/tools-and-integrations",
"en/enterprise/features/pii-trace-redactions",
"en/enterprise/features/a2a"
"en/enterprise/features/pii-trace-redactions"
]
},
{

View File

@@ -4,97 +4,6 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="Apr 16, 2026">
## v1.14.2rc1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2rc1)
## What's Changed
### Bug Fixes
- Fix handling of cyclic JSON schemas in MCP tool resolution
- Fix vulnerability by bumping python-multipart to 0.0.26
- Fix vulnerability by bumping pypdf to 6.10.1
### Documentation
- Update changelog and version for v1.14.2a5
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 15, 2026">
## v1.14.2a5
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a5)
## What's Changed
### Documentation
- Update changelog and version for v1.14.2a4
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 15, 2026">
## v1.14.2a4
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a4)
## What's Changed
### Features
- Add resume hints to devtools release on failure
### Bug Fixes
- Fix strict mode forwarding to Bedrock Converse API
- Fix pytest version to 9.0.3 for security vulnerability GHSA-6w46-j5rx-g56g
- Bump OpenAI lower bound to >=2.0.0
### Documentation
- Update changelog and version for v1.14.2a3
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 13, 2026">
## v1.14.2a3
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a3)
## What's Changed
### Features
- Add deploy validation CLI
- Improve LLM initialization ergonomics
### Bug Fixes
- Override pypdf and uv to patched versions for CVE-2026-40260 and GHSA-pjjw-68hj-v9mw
- Upgrade requests to >=2.33.0 for CVE temp file vulnerability
- Preserve Bedrock tool call arguments by removing truthy default
- Sanitize tool schemas for strict mode
- Deflake MemoryRecord embedding serialization test
### Documentation
- Clean up enterprise A2A language
- Add enterprise A2A feature documentation
- Update OSS A2A documentation
- Update changelog and version for v1.14.2a2
## Contributors
@Yanhu007, @greysonlalonde
</Update>
<Update label="Apr 10, 2026">
## v1.14.2a2

View File

@@ -1,227 +0,0 @@
---
title: A2A on AMP
description: Production-grade Agent-to-Agent communication with distributed state and multi-scheme authentication
icon: "network-wired"
mode: "wide"
---
<Warning>
A2A server agents on AMP are in early release. APIs may change in future versions.
</Warning>
## Overview
CrewAI AMP extends the open-source [A2A protocol implementation](/en/learn/a2a-agent-delegation) with production infrastructure for deploying distributed agents at scale. AMP supports A2A protocol versions 0.2 and 0.3. When you deploy a crew or agent with A2A server configuration to AMP, the platform automatically provisions distributed state management, authentication, multi-transport endpoints, and lifecycle management.
<Note>
For A2A protocol fundamentals, client/server configuration, and authentication schemes, see the [A2A Agent Delegation](/en/learn/a2a-agent-delegation) documentation. This page covers what AMP adds on top of the open-source implementation.
</Note>
### Usage
Add `A2AServerConfig` to any agent in your crew and deploy to AMP. The platform detects agents with server configuration and automatically registers A2A endpoints, generates agent cards, and provisions the infrastructure described below.
```python
from crewai import Agent, Crew, Task
from crewai.a2a import A2AServerConfig
from crewai.a2a.auth import EnterpriseTokenAuth
agent = Agent(
    role="Data Analyst",
    goal="Analyze datasets and provide insights",
    backstory="Expert data scientist with statistical analysis skills",
    llm="gpt-4o",
    a2a=A2AServerConfig(
        auth=EnterpriseTokenAuth()
    )
)
task = Task(
    description="Analyze the provided dataset",
    expected_output="Statistical summary with key insights",
    agent=agent
)
crew = Crew(agents=[agent], tasks=[task])
```
After [deploying to AMP](/en/enterprise/guides/deploy-to-amp), the platform registers two levels of A2A endpoints:
- **Crew-level**: an aggregate agent card at `/.well-known/agent-card.json` where each agent with `A2AServerConfig` is listed as a skill, with a JSON-RPC endpoint at `/a2a`
- **Per-agent**: isolated agent cards and JSON-RPC endpoints mounted at `/a2a/agents/{role}/`, each with its own tenancy
Clients can interact with the crew as a whole or target a specific agent directly. To route a request to a specific agent through the crew-level endpoint, include `"target_agent"` in the message metadata with the agent's slugified role name (e.g., `"data-analyst"` for an agent with role `"Data Analyst"`). If no `target_agent` is provided, the request is handled by the first agent in the crew.
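As an illustration, a crew-level `message/send` request routed to the Data Analyst agent from the example above could be built like this (a sketch: the envelope follows the standard A2A JSON-RPC shape, and the deployment URL in the final comment is a placeholder):

```python
import json
import uuid


def build_targeted_message(text: str, target_agent: str) -> dict:
    """Build an A2A message/send request routed to one agent in the crew."""
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "messageId": str(uuid.uuid4()),
                "parts": [{"kind": "text", "text": text}],
                # Slugified role name; omit this key to let the first
                # agent in the crew handle the request.
                "metadata": {"target_agent": target_agent},
            }
        },
    }


payload = build_targeted_message("Analyze the provided dataset", "data-analyst")
print(json.dumps(payload, indent=2))  # POST this body to https://<deployment>/a2a
```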
See [A2A Agent Delegation](/en/learn/a2a-agent-delegation#server-configuration-options) for the full list of `A2AServerConfig` options.
<Warning>
Per the A2A protocol, agent cards are publicly accessible to enable discovery. This includes both the crew-level card at `/.well-known/agent-card.json` and per-agent cards at `/a2a/agents/{role}/.well-known/agent-card.json`. Do not include sensitive information in agent names, descriptions, or skill definitions.
</Warning>
### File Inputs and Structured Output
A2A on AMP supports passing files and requesting structured output in both directions. Clients can send files as `FilePart`s and request structured responses by embedding a JSON schema in the message. Server agents receive files as `input_files` on the task, and return structured data as `DataPart`s when a schema is provided. See [File Inputs and Structured Output](/en/learn/a2a-agent-delegation#file-inputs-and-structured-output) for details.
### What AMP Adds
<CardGroup cols={2}>
<Card title="Distributed State" icon="database">
Persistent task, context, and result storage
</Card>
<Card title="Enterprise Authentication" icon="shield-halved">
OIDC, OAuth2, mTLS, and Enterprise token validation beyond simple bearer tokens
</Card>
<Card title="gRPC Transport" icon="bolt">
Full gRPC server with TLS and authentication
</Card>
<Card title="Context Lifecycle" icon="clock-rotate-left">
Automatic idle detection, expiration, and cleanup of long-running conversations
</Card>
<Card title="Signed Webhooks" icon="signature">
HMAC-SHA256 signed push notifications with replay protection
</Card>
<Card title="Multi-Transport" icon="arrows-split-up-and-left">
REST, JSON-RPC, and gRPC endpoints served simultaneously from a single deployment
</Card>
</CardGroup>
---
## Distributed State Management
In the open-source implementation, task and context state lives in memory on a single process. AMP replaces this with persistent, distributed stores.
### Storage Layers
| Store | Purpose |
|---|---|
| **Task Store** | Persists A2A task state and metadata |
| **Context Store** | Tracks conversation context, creation time, last activity, and associated tasks |
| **Result Store** | Caches task results for retrieval |
| **Push Config Store** | Manages webhook subscriptions per task |
Multiple A2A deployments are automatically isolated from each other, preventing data collisions when sharing infrastructure.
---
## Enterprise Authentication
AMP supports six authentication schemes for incoming A2A requests, configurable per deployment. Authentication works across both HTTP and gRPC transports.
| Scheme | Description | Use Case |
|---|---|---|
| **SimpleTokenAuth** | Static bearer token from `AUTH_TOKEN` env var | Development, simple deployments |
| **EnterpriseTokenAuth** | Token verification via CrewAI PlusAPI with integration token claims | AMP-to-AMP agent communication |
| **OIDCAuth** | OpenID Connect JWT validation with JWKS endpoint caching | Enterprise SSO integration |
| **OAuth2ServerAuth** | OAuth2 with configurable scopes | Fine-grained access control |
| **APIKeyServerAuth** | API key validation via header or query parameter | Third-party integrations |
| **MTLSServerAuth** | Mutual TLS certificate-based authentication | Zero-trust environments |
The configured auth scheme automatically populates the agent card's `securitySchemes` and `security` fields. Clients discover authentication requirements by fetching the agent card before making requests.
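Because the card advertises these fields, a client can discover what it needs before authenticating. A minimal sketch (the card dict below is an illustrative sample, not a real deployment's card):

```python
def required_schemes(card: dict) -> list[str]:
    """List the names of security schemes an agent card requires."""
    # Each entry in "security" references a scheme defined in "securitySchemes".
    return [name for requirement in card.get("security", []) for name in requirement]


# Illustrative agent card fragment declaring a bearer-token requirement.
card = {
    "securitySchemes": {"bearer": {"type": "http", "scheme": "bearer"}},
    "security": [{"bearer": []}],
}
print(required_schemes(card))  # -> ['bearer']
```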
---
## Extended Agent Cards
AMP supports role-based skill visibility through extended agent cards. Unauthenticated users see the standard agent card with public skills. Authenticated users receive an extended card with additional capabilities.
This enables patterns like:
- Public agents that expose basic skills to anyone, with advanced skills available to authenticated clients
- Internal agents that advertise different capabilities based on the caller's identity
---
## gRPC Transport
If enabled, AMP provides full gRPC support alongside the default JSON-RPC transport.
- **TLS termination** with configurable certificate and key paths
- **gRPC reflection** for debugging with tools like `grpcurl`
- **Authentication** using the same schemes available for HTTP
- **Extension validation** ensuring clients support required protocol extensions
- **Version negotiation** across A2A protocol versions 0.2 and 0.3
For deployments exposing multiple agents, AMP automatically allocates per-agent gRPC ports and coordinates TLS, startup, and shutdown across all servers.
---
## Context Lifecycle Management
AMP tracks the lifecycle of A2A conversation contexts and automatically manages cleanup.
### Lifecycle States
| State | Condition | Action |
|---|---|---|
| **Active** | Context has recent activity | None |
| **Idle** | No activity for a configured period | Marked idle, event emitted |
| **Expired** | Context exceeds its maximum lifetime | Marked expired, associated tasks cleaned up, event emitted |
A background cleanup task runs hourly to scan for idle and expired contexts. All state transitions emit CrewAI events that integrate with the platform's observability features.
---
## Signed Push Notifications
When an A2A agent sends push notifications to a client webhook, AMP signs each request with HMAC-SHA256 to ensure integrity and prevent tampering.
### Signature Headers
| Header | Purpose |
|---|---|
| `X-A2A-Signature` | HMAC-SHA256 signature in `sha256={hex_digest}` format |
| `X-A2A-Signature-Timestamp` | Unix timestamp bound to the signature |
| `X-A2A-Notification-Token` | Optional notification auth token |
### Security Properties
- **Integrity**: payload cannot be modified without invalidating the signature
- **Replay protection**: signatures are timestamp-bound with a configurable tolerance window
- **Retry with backoff**: failed deliveries retry with exponential backoff
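A webhook receiver can check these properties along the following lines. This is a sketch that assumes the signature covers the timestamp and raw body joined with a dot; verify the exact signing string and tolerance window against your deployment's configuration:

```python
import hashlib
import hmac
import time


def verify_push_signature(secret: bytes, body: bytes, signature: str,
                          timestamp: str, tolerance_s: int = 300) -> bool:
    """Verify an HMAC-SHA256 signed push notification."""
    # Replay protection: reject deliveries outside the tolerance window.
    if abs(time.time() - int(timestamp)) > tolerance_s:
        return False
    # Integrity: recompute the signature over the timestamp-bound payload.
    signed = timestamp.encode() + b"." + body
    expected = "sha256=" + hmac.new(secret, signed, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature)
```

Here `signature` and `timestamp` come from the `X-A2A-Signature` and `X-A2A-Signature-Timestamp` headers of the incoming request.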
---
## Distributed Event Streaming
In the open-source implementation, SSE streaming works within a single process. AMP propagates SSE events across instances so that clients receive updates even when the instance holding the streaming connection differs from the instance executing the task.
---
## Multi-Transport Endpoints
AMP serves REST and JSON-RPC by default. gRPC is available as an additional transport if enabled.
| Transport | Path Convention | Description |
|---|---|---|
| **REST** | `/v1/message:send`, `/v1/message:stream`, `/v1/tasks` | Google API conventions |
| **JSON-RPC** | Standard A2A JSON-RPC endpoint | Default A2A protocol transport |
| **gRPC** | Per-agent port allocation | Optional, high-performance binary protocol |
All active transports share the same authentication, version negotiation, and extension validation. Agent cards are generated from agent and crew metadata — roles, goals, and tools become skills and descriptions — and automatically include interfaces for each active transport. They can also be manually configured via `A2AServerConfig`.
---
## Version and Extension Negotiation
AMP validates A2A protocol versions and extensions at the transport layer.
### Version Negotiation
- Clients send the `A2A-Version` header with their preferred version
- AMP validates against supported versions (0.2, 0.3) and falls back to 0.3 if unspecified
- The negotiated version is returned in the response headers
### Extension Validation
- Clients declare supported extensions via the `X-A2A-Extensions` header
- AMP validates that clients support all extensions the agent requires
- Requests from clients missing required extensions receive an `UnsupportedExtensionError`
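The transport-layer checks can be pictured roughly as follows (a sketch: the extension URI is hypothetical, and the real validation lives inside AMP):

```python
SUPPORTED_VERSIONS = {"0.2", "0.3"}
REQUIRED_EXTENSIONS = {"https://example.com/ext/audit-log"}  # hypothetical URI


def negotiate(headers: dict[str, str]) -> str:
    """Validate A2A version and extension headers; return the negotiated version."""
    # Fall back to 0.3 when the client does not pin a version.
    version = headers.get("A2A-Version", "0.3")
    if version not in SUPPORTED_VERSIONS:
        raise ValueError(f"unsupported A2A version: {version}")
    # The client must declare every extension the agent requires.
    declared = {e.strip() for e in headers.get("X-A2A-Extensions", "").split(",") if e.strip()}
    missing = REQUIRED_EXTENSIONS - declared
    if missing:
        raise ValueError(f"UnsupportedExtensionError: missing {sorted(missing)}")
    return version
```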
---
## Next Steps
- [A2A Agent Delegation](/en/learn/a2a-agent-delegation) — A2A protocol fundamentals and configuration
- [A2UI](/en/learn/a2ui) — Interactive UI rendering over A2A
- [Deploy to AMP](/en/enterprise/guides/deploy-to-amp) — General deployment guide
- [Webhook Streaming](/en/enterprise/features/webhook-streaming) — Event streaming for deployed automations

View File

@@ -7,10 +7,6 @@ mode: "wide"
## A2A Agent Delegation
<Info>
Deploying A2A agents to production? See [A2A on AMP](/en/enterprise/features/a2a) for distributed state, enterprise authentication, gRPC transport, and horizontal scaling.
</Info>
CrewAI treats the [A2A protocol](https://a2a-protocol.org/latest/) as a first-class delegation primitive, enabling agents to delegate tasks, request information, and collaborate with remote agents, as well as act as A2A-compliant server agents.
In client mode, agents autonomously choose between local execution and remote delegation based on task requirements.
@@ -100,28 +96,24 @@ The `A2AClientConfig` class accepts the following parameters:
Update mechanism for receiving task status. Options: `StreamingConfig`, `PollingConfig`, or `PushNotificationConfig`.
</ParamField>
<ParamField path="transport_protocol" type="Literal['JSONRPC', 'GRPC', 'HTTP+JSON']" default="JSONRPC">
Transport protocol for A2A communication. Options: `JSONRPC` (default), `GRPC`, or `HTTP+JSON`.
</ParamField>
<ParamField path="accepted_output_modes" type="list[str]" default='["application/json"]'>
Media types the client can accept in responses.
</ParamField>
<ParamField path="supported_transports" type="list[str]" default='["JSONRPC"]'>
Ordered list of transport protocols the client supports.
</ParamField>
<ParamField path="use_client_preference" type="bool" default="False">
Whether to prioritize client transport preferences over the server's.
</ParamField>
<ParamField path="extensions" type="list[str]" default="[]">
A2A protocol extension URIs the client supports.
</ParamField>
<ParamField path="client_extensions" type="list[A2AExtension]" default="[]">
Client-side processing hooks for tool injection, prompt augmentation, and response modification.
</ParamField>
<ParamField path="transport" type="ClientTransportConfig" default="ClientTransportConfig()">
Transport configuration including preferred transport, supported transports for negotiation, and protocol-specific settings (gRPC message sizes, keepalive, etc.).
</ParamField>
<ParamField path="transport_protocol" type="Literal['JSONRPC', 'GRPC', 'HTTP+JSON']" default="None">
**Deprecated**: Use `transport=ClientTransportConfig(preferred=...)` instead.
</ParamField>
<ParamField path="supported_transports" type="list[str]" default="None">
**Deprecated**: Use `transport=ClientTransportConfig(supported=...)` instead.
</ParamField>
## Authentication
@@ -413,7 +405,11 @@ agent = Agent(
Preferred endpoint URL. If set, overrides the URL passed to `to_agent_card()`.
</ParamField>
<ParamField path="protocol_version" type="str" default="0.3.0">
<ParamField path="preferred_transport" type="Literal['JSONRPC', 'GRPC', 'HTTP+JSON']" default="JSONRPC">
Transport protocol for the preferred endpoint.
</ParamField>
<ParamField path="protocol_version" type="str" default="0.3">
A2A protocol version this agent supports.
</ParamField>
@@ -445,36 +441,8 @@ agent = Agent(
Whether agent provides extended card to authenticated users.
</ParamField>
<ParamField path="extended_skills" type="list[AgentSkill]" default="[]">
Additional skills visible only to authenticated users in the extended agent card.
</ParamField>
<ParamField path="signing_config" type="AgentCardSigningConfig" default="None">
Configuration for signing the AgentCard with JWS. Supports RS256, ES256, PS256, and related algorithms.
</ParamField>
<ParamField path="server_extensions" type="list[ServerExtension]" default="[]">
Server-side A2A protocol extensions with `on_request`/`on_response` hooks that modify agent behavior.
</ParamField>
<ParamField path="push_notifications" type="ServerPushNotificationConfig" default="None">
Configuration for outgoing push notifications, including HMAC-SHA256 signing secret.
</ParamField>
<ParamField path="transport" type="ServerTransportConfig" default="ServerTransportConfig()">
Transport configuration including preferred transport, gRPC server settings, JSON-RPC paths, and HTTP+JSON settings.
</ParamField>
<ParamField path="auth" type="ServerAuthScheme" default="None">
Authentication scheme for incoming A2A requests. Defaults to `SimpleTokenAuth` using the `AUTH_TOKEN` environment variable.
</ParamField>
<ParamField path="preferred_transport" type="Literal['JSONRPC', 'GRPC', 'HTTP+JSON']" default="None">
**Deprecated**: Use `transport=ServerTransportConfig(preferred=...)` instead.
</ParamField>
<ParamField path="signatures" type="list[AgentCardSignature]" default="None">
**Deprecated**: Use `signing_config=AgentCardSigningConfig(...)` instead.
<ParamField path="signatures" type="list[AgentCardSignature]" default="[]">
JSON Web Signatures for the AgentCard.
</ParamField>
### Combined Client and Server
@@ -500,14 +468,6 @@ agent = Agent(
)
```
### File Inputs and Structured Output
A2A supports passing files and requesting structured output in both directions.
**Client side**: When delegating to a remote A2A agent, files from the task's `input_files` are sent as `FilePart`s in the outgoing message. If `response_model` is set on the `A2AClientConfig`, the Pydantic model's JSON schema is embedded in the message metadata, requesting structured output from the remote agent.
**Server side**: Incoming `FilePart`s are extracted and passed to the agent's task as `input_files`. If the client included a JSON schema, the server creates a response model from it and applies it to the task. When the agent returns structured data, the response is sent back as a `DataPart` rather than plain text.
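For instance, a client-side delegation that requests structured output might be configured like this (a sketch: the endpoint is a placeholder, and the parameters shown follow the `A2AClientConfig` options documented above):

```python
from pydantic import BaseModel

from crewai import Agent
from crewai.a2a import A2AClientConfig


class RevenueSummary(BaseModel):
    total: float
    highlights: list[str]


# The model's JSON schema is embedded in the outgoing message metadata;
# a conforming remote agent replies with a DataPart matching the schema.
analyst = Agent(
    role="Remote Analysis Coordinator",
    goal="Delegate analysis to a remote A2A agent",
    backstory="Coordinates work across deployed agents",
    a2a=A2AClientConfig(
        endpoint="https://remote-agent.example.com/a2a",  # placeholder
        response_model=RevenueSummary,
    ),
)
```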
## Best Practices
<CardGroup cols={2}>

View File

@@ -4,97 +4,6 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="Apr 16, 2026">
## v1.14.2rc1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2rc1)
## What's Changed
### Bug Fixes
- Fix handling of cyclic JSON schemas in MCP tool resolution
- Fix vulnerability by updating python-multipart to 0.0.26
- Fix vulnerability by updating pypdf to 6.10.1
### Documentation
- Update changelog and version for v1.14.2a5
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 15, 2026">
## v1.14.2a5
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a5)
## What's Changed
### Documentation
- Update changelog and version for v1.14.2a4
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 15, 2026">
## v1.14.2a4
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a4)
## What's Changed
### Features
- Add resume hints to the devtools release on failure
### Bug Fixes
- Fix strict mode forwarding to the Bedrock Converse API
- Pin pytest to 9.0.3 for security vulnerability GHSA-6w46-j5rx-g56g
- Raise the OpenAI lower bound to >=2.0.0
### Documentation
- Update changelog and version for v1.14.2a3
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 13, 2026">
## v1.14.2a3
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a3)
## What's Changed
### Features
- Add deploy validation CLI
- Improve LLM initialization usability
### Bug Fixes
- Override pypdf and uv to patched versions for CVE-2026-40260 and GHSA-pjjw-68hj-v9mw
- Upgrade requests to >=2.33.0 for a temp-file CVE
- Preserve Bedrock tool call arguments by removing the truthy default
- Sanitize tool schemas for strict mode
- Deflake the MemoryRecord embedding serialization test
### Documentation
- Clean up enterprise A2A language
- Add enterprise A2A feature documentation
- Update OSS A2A documentation
- Update changelog and version for v1.14.2a2
## Contributors
@Yanhu007, @greysonlalonde
</Update>
<Update label="Apr 10, 2026">
## v1.14.2a2

View File

@@ -4,97 +4,6 @@ description: "Product updates, improvements, and fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="Apr 16, 2026">
## v1.14.2rc1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2rc1)
## What's Changed
### Bug Fixes
- Fix handling of cyclic JSON schemas in MCP tool resolution
- Fix vulnerability by updating python-multipart to 0.0.26
- Fix vulnerability by updating pypdf to 6.10.1
### Documentation
- Update changelog and version for v1.14.2a5
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 15, 2026">
## v1.14.2a5
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a5)
## What's Changed
### Documentation
- Update changelog and version for v1.14.2a4
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 15, 2026">
## v1.14.2a4
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a4)
## What's Changed
### Features
- Add resume hints to the devtools release on failure
### Bug Fixes
- Fix strict mode forwarding to the Bedrock Converse API
- Pin pytest to 9.0.3 for security vulnerability GHSA-6w46-j5rx-g56g
- Raise the OpenAI lower bound to >=2.0.0
### Documentation
- Update changelog and version for v1.14.2a3
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 13, 2026">
## v1.14.2a3
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a3)
## What's Changed
### Features
- Add deploy validation CLI
- Improve LLM initialization ergonomics
### Bug Fixes
- Override pypdf and uv to patched versions for CVE-2026-40260 and GHSA-pjjw-68hj-v9mw
- Upgrade requests to >=2.33.0 for a temp-file CVE
- Preserve Bedrock tool call arguments by removing the truthy default
- Sanitize tool schemas for strict mode
- Deflake the MemoryRecord embedding serialization test
### Documentation
- Clean up enterprise A2A language
- Add enterprise A2A feature documentation
- Update OSS A2A documentation
- Update changelog and version for v1.14.2a2
## Contributors
@Yanhu007, @greysonlalonde
</Update>
<Update label="Apr 10, 2026">
## v1.14.2a2

View File

@@ -152,4 +152,4 @@ __all__ = [
"wrap_file_source",
]
__version__ = "1.14.2rc1"
__version__ = "1.14.2a2"

View File

@@ -10,7 +10,7 @@ requires-python = ">=3.10, <3.14"
dependencies = [
"pytube~=15.0.0",
"requests>=2.33.0,<3",
"crewai==1.14.2rc1",
"crewai==1.14.2a2",
"tiktoken~=0.8.0",
"beautifulsoup4~=4.13.4",
"python-docx~=1.2.0",

View File

@@ -305,4 +305,4 @@ __all__ = [
"ZapierActionTools",
]
__version__ = "1.14.2rc1"
__version__ = "1.14.2a2"

View File

@@ -10,7 +10,7 @@ requires-python = ">=3.10, <3.14"
dependencies = [
# Core Dependencies
"pydantic~=2.11.9",
"openai>=2.0.0,<3",
"openai>=1.83.0,<3",
"instructor>=1.3.3",
# Text Processing
"pdfplumber~=0.11.4",
@@ -55,7 +55,7 @@ Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = [
"crewai-tools==1.14.2rc1",
"crewai-tools==1.14.2a2",
]
embeddings = [
"tiktoken~=0.8.0"

View File

@@ -46,7 +46,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:
_suppress_pydantic_deprecation_warnings()
__version__ = "1.14.2rc1"
__version__ = "1.14.2a2"
_telemetry_submitted = False

View File

@@ -98,6 +98,7 @@ class A2AErrorCode(IntEnum):
"""The specified artifact was not found."""
# Error code to default message mapping
ERROR_MESSAGES: dict[int, str] = {
A2AErrorCode.JSON_PARSE_ERROR: "Parse error",
A2AErrorCode.INVALID_REQUEST: "Invalid Request",

View File

@@ -63,21 +63,25 @@ class A2AExtension(Protocol):
    Example:
        class MyExtension:
            def inject_tools(self, agent: Agent) -> None:
                # Add custom tools to the agent
                pass
            def extract_state_from_history(
                self, conversation_history: Sequence[Message]
            ) -> ConversationState | None:
                # Extract state from conversation
                return None
            def augment_prompt(
                self, base_prompt: str, conversation_state: ConversationState | None
            ) -> str:
                # Add custom instructions
                return base_prompt
            def process_response(
                self, agent_response: Any, conversation_state: ConversationState | None
            ) -> Any:
                # Modify response if needed
                return agent_response
    """

View File

@@ -47,6 +47,7 @@ from crewai.a2a.types import (
)
from crewai.a2a.updates import (
PollingConfig,
PollingHandler,
PushNotificationConfig,
StreamingHandler,
UpdateConfig,
@@ -586,6 +587,19 @@ async def _aexecute_a2a_delegation_impl(
    handler = get_handler(updates)
    use_polling = isinstance(updates, PollingConfig)
    # If the user hasn't explicitly configured an updates strategy and the remote
    # agent advertises that it does not support streaming, fall back to polling.
    if updates is None and (
        agent_card.capabilities is not None
        and agent_card.capabilities.streaming is False
    ):
        logger.debug(
            "Remote agent does not support streaming; falling back to PollingHandler",
            extra={"endpoint": endpoint, "a2a_agent_name": a2a_agent_name},
        )
        handler = PollingHandler
        use_polling = True
    handler_kwargs: dict[str, Any] = {
        "turn_number": turn_number,
        "is_multiturn": is_multiturn,
@@ -608,6 +622,18 @@ async def _aexecute_a2a_delegation_impl(
                "max_polls": updates.max_polls,
            }
        )
    elif use_polling and updates is None:
        # Fallback to polling because the agent card does not support streaming;
        # use PollingConfig defaults so PollingHandler gets the kwargs it needs.
        _default_polling = PollingConfig()
        handler_kwargs.update(
            {
                "polling_interval": _default_polling.interval,
                "polling_timeout": float(timeout),
                "history_length": _default_polling.history_length,
                "max_polls": _default_polling.max_polls,
            }
        )
    elif isinstance(updates, PushNotificationConfig):
        handler_kwargs.update(
            {

View File

@@ -77,6 +77,7 @@ def extract_a2a_agent_ids_from_config(
    else:
        configs = a2a_config
    # Filter to only client configs (those with endpoint)
    client_configs: list[A2AClientConfigTypes] = [
        config for config in configs if isinstance(config, (A2AConfig, A2AClientConfig))
    ]

View File

@@ -1341,6 +1341,7 @@ class Agent(BaseAgent):
        raw_tools: list[BaseTool] = self.tools or []
        # Inject memory tools for standalone kickoff (crew path handles its own)
        agent_memory = getattr(self, "memory", None)
        if agent_memory is not None:
            from crewai.tools.memory_tools import create_memory_tools
@@ -1398,6 +1399,7 @@ class Agent(BaseAgent):
        if input_files:
            all_files.update(input_files)
        # Inject memory context for standalone kickoff (recall before execution)
        if agent_memory is not None:
            try:
                crewai_event_bus.emit(
@@ -1483,6 +1485,8 @@ class Agent(BaseAgent):
Note:
For explicit async usage outside of Flow, use kickoff_async() directly.
"""
# Magic auto-async: if inside event loop (e.g., inside a Flow),
# return coroutine for Flow to await
if is_inside_event_loop():
return self.kickoff_async(messages, response_format, input_files)
@@ -1633,7 +1637,7 @@ class Agent(BaseAgent):
if isinstance(conversion_result, BaseModel):
formatted_result = conversion_result
except ConverterError:
pass
pass # Keep raw output if conversion fails
else:
raw_output = str(output) if not isinstance(output, str) else output
@@ -1715,6 +1719,7 @@ class Agent(BaseAgent):
elif callable(self.guardrail):
guardrail_callable = self.guardrail
else:
# Should not happen if called from kickoff with guardrail check
return output
guardrail_result = process_guardrail(
@@ -41,6 +41,7 @@ class PlanningConfig(BaseModel):
from crewai import Agent
from crewai.agent.planning_config import PlanningConfig
# Simple usage — fast, linear execution (default)
agent = Agent(
role="Researcher",
goal="Research topics",
@@ -48,6 +49,7 @@ class PlanningConfig(BaseModel):
planning_config=PlanningConfig(),
)
# Balanced — replan only when steps fail
agent = Agent(
role="Researcher",
goal="Research topics",
@@ -57,6 +59,7 @@ class PlanningConfig(BaseModel):
),
)
# Full adaptive planning with refinement and replanning
agent = Agent(
role="Researcher",
goal="Research topics",
@@ -66,7 +69,7 @@ class PlanningConfig(BaseModel):
max_attempts=3,
max_steps=10,
plan_prompt="Create a focused plan for: {description}",
llm="gpt-4o-mini",
llm="gpt-4o-mini", # Use cheaper model for planning
),
)
```
@@ -39,6 +39,7 @@ def handle_reasoning(agent: Agent, task: Task) -> None:
agent: The agent performing the task.
task: The task to execute.
"""
# Check if planning is enabled using the planning_enabled property
if not getattr(agent, "planning_enabled", False):
return
@@ -99,10 +99,12 @@ class OpenAIAgentToolAdapter(BaseToolAdapter):
Returns:
Tool execution result.
"""
# Get the parameter name from the schema
param_name: str = next(
iter(tool.args_schema.model_json_schema()["properties"].keys())
)
# Handle different argument types
args_dict: dict[str, Any]
if isinstance(arguments, dict):
args_dict = arguments
@@ -114,13 +116,16 @@ class OpenAIAgentToolAdapter(BaseToolAdapter):
else:
args_dict = {param_name: str(arguments)}
# Run the tool with the processed arguments
output: Any | Awaitable[Any] = tool._run(**args_dict)
# Await if the tool returned a coroutine
if inspect.isawaitable(output):
result: Any = await output
else:
result = output
# Ensure the result is JSON serializable
if isinstance(result, (dict, list, str, int, float, bool, type(None))):
return result
return str(result)
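The adapter logic above normalizes arbitrary argument shapes into kwargs, awaits coroutine results, and coerces non-serializable outputs to strings. A condensed, self-contained sketch of that flow (`run_tool` and the `param_name` default are illustrative names, not the adapter's API):

```python
import inspect
from typing import Any


async def run_tool(func, arguments: Any, param_name: str = "input") -> Any:
    """Run a tool callable with normalized arguments and a serializable result."""
    # Normalize the arguments into a kwargs dict for the tool.
    if isinstance(arguments, dict):
        kwargs = arguments
    else:
        kwargs = {param_name: str(arguments)}

    output = func(**kwargs)
    # Await if the tool returned a coroutine.
    if inspect.isawaitable(output):
        output = await output

    # Ensure the result is JSON serializable; fall back to str().
    if isinstance(output, (dict, list, str, int, float, bool, type(None))):
        return output
    return str(output)
```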
@@ -383,6 +383,7 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
if isinstance(tool, BaseTool):
processed_tools.append(tool)
elif all(hasattr(tool, attr) for attr in required_attrs):
# Tool has the required attributes, create a Tool instance
processed_tools.append(Tool.from_langchain(tool))
else:
raise ValueError(
@@ -447,12 +448,14 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
@model_validator(mode="after")
def validate_and_set_attributes(self) -> Self:
# Validate required fields
for field in ["role", "goal", "backstory"]:
if getattr(self, field) is None:
raise ValueError(
f"{field} must be provided either directly or through config"
)
# Set private attributes
self._logger = Logger(verbose=self.verbose)
if self.max_rpm and not self._rpm_controller:
self._rpm_controller = RPMController(
@@ -461,6 +464,7 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
if not self._token_process:
self._token_process = TokenProcess()
# Initialize security_config if not provided
if self.security_config is None:
self.security_config = SecurityConfig()
@@ -562,11 +566,14 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
"actions",
}
# Copy llm
existing_llm = shallow_copy(self.llm)
copied_knowledge = shallow_copy(self.knowledge)
copied_knowledge_storage = shallow_copy(self.knowledge_storage)
# Properly copy knowledge sources if they exist
existing_knowledge_sources = None
if self.knowledge_sources:
# Create a shared storage instance for all knowledge sources
shared_storage = (
self.knowledge_sources[0].storage if self.knowledge_sources else None
)
@@ -578,6 +585,7 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
if hasattr(source, "model_copy")
else shallow_copy(source)
)
# Ensure all copied sources use the same storage instance
copied_source.storage = shared_storage
existing_knowledge_sources.append(copied_source)
@@ -4,6 +4,8 @@ import re
from typing import Final
# crewai.agents.parser constants
FINAL_ANSWER_ACTION: Final[str] = "Final Answer:"
MISSING_ACTION_AFTER_THOUGHT_ERROR_MESSAGE: Final[str] = (
"I did it wrong. Invalid Format: I missed the 'Action:' after 'Thought:'. I will do right next, and don't use a tool I have already used.\n"
@@ -296,6 +296,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
Returns:
Final answer from the agent.
"""
# Check if model supports native function calling
use_native_tools = (
hasattr(self.llm, "supports_function_calling")
and callable(getattr(self.llm, "supports_function_calling", None))
@@ -306,6 +307,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
if use_native_tools:
return self._invoke_loop_native_tools()
# Fall back to ReAct text-based pattern
return self._invoke_loop_react()
def _invoke_loop_react(self) -> AgentFinish:
@@ -345,6 +347,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
executor_context=self,
verbose=self.agent.verbose,
)
if self.response_model is not None:
try:
if isinstance(answer, BaseModel):
@@ -362,6 +365,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
text=answer,
)
except ValidationError:
# If validation fails, convert BaseModel to JSON string for parsing
answer_str = (
answer.model_dump_json()
if isinstance(answer, BaseModel)
@@ -371,12 +375,14 @@ class CrewAgentExecutor(BaseAgentExecutor):
answer_str, self.use_stop_words
) # type: ignore[assignment]
else:
# When no response_model, answer should be a string
answer_str = str(answer) if not isinstance(answer, str) else answer
formatted_answer = process_llm_response(
answer_str, self.use_stop_words
) # type: ignore[assignment]
if isinstance(formatted_answer, AgentAction):
# Extract agent fingerprint if available
fingerprint_context = {}
if (
self.agent
@@ -420,6 +426,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
except Exception as e:
if e.__class__.__module__.startswith("litellm"):
# Do not retry on litellm errors
raise e
if is_context_length_exceeded(e):
handle_context_length(
@@ -436,6 +443,10 @@ class CrewAgentExecutor(BaseAgentExecutor):
finally:
self.iterations += 1
# During the invoke loop, formatted_answer alternates between AgentAction
# (when the agent is using tools) and eventually becomes AgentFinish
# (when the agent reaches a final answer). This check confirms we've
# reached a final answer and helps type checking understand this transition.
if not isinstance(formatted_answer, AgentFinish):
raise RuntimeError(
"Agent execution ended without reaching a final answer. "
@@ -454,7 +465,9 @@ class CrewAgentExecutor(BaseAgentExecutor):
Returns:
Final answer from the agent.
"""
# Convert tools to OpenAI schema format
if not self.original_tools:
# No tools available, fall back to simple LLM call
return self._invoke_loop_native_no_tools()
openai_tools, available_functions, self._tool_name_mapping = (
@@ -477,6 +490,10 @@ class CrewAgentExecutor(BaseAgentExecutor):
enforce_rpm_limit(self.request_within_rpm_limit)
# Call LLM with native tools
# Pass available_functions=None so the LLM returns tool_calls
# without executing them. The executor handles tool execution
# via _handle_native_tool_calls to properly manage message history.
answer = get_llm_response(
llm=cast("BaseLLM", self.llm),
messages=self.messages,
@@ -491,26 +508,32 @@ class CrewAgentExecutor(BaseAgentExecutor):
verbose=self.agent.verbose,
)
# Check if the response is a list of tool calls
if (
isinstance(answer, list)
and answer
and self._is_tool_call_list(answer)
):
# Handle tool calls - execute tools and add results to messages
tool_finish = self._handle_native_tool_calls(
answer, available_functions
)
# If tool has result_as_answer=True, return immediately
if tool_finish is not None:
return tool_finish
# Continue loop to let LLM analyze results and decide next steps
continue
# Text or other response - handle as potential final answer
if isinstance(answer, str):
# Text response - this is the final answer
formatted_answer = AgentFinish(
thought="",
output=answer,
text=answer,
)
self._invoke_step_callback(formatted_answer)
self._append_message(answer)
self._append_message(answer) # Save final answer to messages
self._show_logs(formatted_answer)
return formatted_answer
@@ -526,13 +549,14 @@ class CrewAgentExecutor(BaseAgentExecutor):
self._show_logs(formatted_answer)
return formatted_answer
# Unexpected response type, treat as final answer
formatted_answer = AgentFinish(
thought="",
output=str(answer),
text=str(answer),
)
self._invoke_step_callback(formatted_answer)
self._append_message(str(answer))
self._append_message(str(answer)) # Save final answer to messages
self._show_logs(formatted_answer)
return formatted_answer
@@ -603,10 +627,12 @@ class CrewAgentExecutor(BaseAgentExecutor):
if not response:
return False
first_item = response[0]
# OpenAI-style
if hasattr(first_item, "function") or (
isinstance(first_item, dict) and "function" in first_item
):
return True
# Anthropic-style (object with attributes)
if (
hasattr(first_item, "type")
and getattr(first_item, "type", None) == "tool_use"
@@ -614,12 +640,14 @@ class CrewAgentExecutor(BaseAgentExecutor):
return True
if hasattr(first_item, "name") and hasattr(first_item, "input"):
return True
# Bedrock-style (dict with name and input keys)
if (
isinstance(first_item, dict)
and "name" in first_item
and "input" in first_item
):
return True
# Gemini-style
if hasattr(first_item, "function_call") and first_item.function_call:
return True
return False
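The provider-by-provider checks in `_is_tool_call_list` reduce to a duck-typed predicate over a single item. A sketch of that predicate (attribute and key names follow the hunk; the function name is illustrative):

```python
from typing import Any


def is_tool_call_item(item: Any) -> bool:
    """Heuristically detect a single provider-native tool call."""
    # OpenAI-style: object with .function, or dict with a "function" key
    if hasattr(item, "function") or (isinstance(item, dict) and "function" in item):
        return True
    # Anthropic-style: object with type == "tool_use", or name/input attributes
    if getattr(item, "type", None) == "tool_use":
        return True
    if hasattr(item, "name") and hasattr(item, "input"):
        return True
    # Bedrock-style: dict with "name" and "input" keys
    if isinstance(item, dict) and "name" in item and "input" in item:
        return True
    # Gemini-style: object with a truthy .function_call
    if getattr(item, "function_call", None):
        return True
    return False
```

Because dict keys are not attributes, the `hasattr` and `in` branches do not shadow each other, so each provider shape hits exactly one check.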
@@ -678,6 +706,8 @@ class CrewAgentExecutor(BaseAgentExecutor):
for _, func_name, _ in parsed_calls
)
# Preserve historical sequential behavior for result_as_answer batches.
# Also avoid threading around usage counters for max_usage_count tools.
if has_result_as_answer_in_batch or has_max_usage_count_in_batch:
logger.debug(
"Skipping parallel native execution because batch includes result_as_answer or max_usage_count tool"
@@ -743,6 +773,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
self.messages.append(reasoning_message)
return None
# Sequential behavior: process only first tool call, then force reflection.
call_id, func_name, func_args = parsed_calls[0]
self._append_assistant_tool_calls_message([(call_id, func_name, func_args)])
@@ -1171,6 +1202,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
text=answer,
)
except ValidationError:
# If validation fails, convert BaseModel to JSON string for parsing
answer_str = (
answer.model_dump_json()
if isinstance(answer, BaseModel)
@@ -1180,6 +1212,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
answer_str, self.use_stop_words
) # type: ignore[assignment]
else:
# When no response_model, answer should be a string
answer_str = str(answer) if not isinstance(answer, str) else answer
formatted_answer = process_llm_response(
answer_str, self.use_stop_words
@@ -1286,6 +1319,10 @@ class CrewAgentExecutor(BaseAgentExecutor):
enforce_rpm_limit(self.request_within_rpm_limit)
# Call LLM with native tools
# Pass available_functions=None so the LLM returns tool_calls
# without executing them. The executor handles tool execution
# via _handle_native_tool_calls to properly manage message history.
answer = await aget_llm_response(
llm=cast("BaseLLM", self.llm),
messages=self.messages,
@@ -1299,26 +1336,32 @@ class CrewAgentExecutor(BaseAgentExecutor):
executor_context=self,
verbose=self.agent.verbose,
)
# Check if the response is a list of tool calls
if (
isinstance(answer, list)
and answer
and self._is_tool_call_list(answer)
):
# Handle tool calls - execute tools and add results to messages
tool_finish = self._handle_native_tool_calls(
answer, available_functions
)
# If tool has result_as_answer=True, return immediately
if tool_finish is not None:
return tool_finish
# Continue loop to let LLM analyze results and decide next steps
continue
# Text or other response - handle as potential final answer
if isinstance(answer, str):
# Text response - this is the final answer
formatted_answer = AgentFinish(
thought="",
output=answer,
text=answer,
)
await self._ainvoke_step_callback(formatted_answer)
self._append_message(answer)
self._append_message(answer) # Save final answer to messages
self._show_logs(formatted_answer)
return formatted_answer
@@ -1334,13 +1377,14 @@ class CrewAgentExecutor(BaseAgentExecutor):
self._show_logs(formatted_answer)
return formatted_answer
# Unexpected response type, treat as final answer
formatted_answer = AgentFinish(
thought="",
output=str(answer),
text=str(answer),
)
await self._ainvoke_step_callback(formatted_answer)
self._append_message(str(answer))
self._append_message(str(answer)) # Save final answer to messages
self._show_logs(formatted_answer)
return formatted_answer
@@ -1411,6 +1455,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
Returns:
Updated action or final answer.
"""
# Special case for add_image_tool
add_image_tool = I18N_DEFAULT.tools("add_image")
if (
isinstance(add_image_tool, dict)
@@ -1530,14 +1575,17 @@ class CrewAgentExecutor(BaseAgentExecutor):
training_handler = CrewTrainingHandler(TRAINING_DATA_FILE)
training_data = training_handler.load() or {}
# Initialize or retrieve agent's training data
agent_training_data = training_data.get(agent_id, {})
if human_feedback is not None:
# Save initial output and human feedback
agent_training_data[train_iteration] = {
"initial_output": result.output,
"human_feedback": human_feedback,
}
else:
# Save improved output
if train_iteration in agent_training_data:
agent_training_data[train_iteration]["improved_output"] = result.output
else:
@@ -1551,6 +1599,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
)
return
# Update the training data and save
training_data[agent_id] = agent_training_data
training_handler.save(training_data)
@@ -94,8 +94,11 @@ def parse(text: str) -> AgentAction | AgentFinish:
if includes_answer:
final_answer = text.split(FINAL_ANSWER_ACTION)[-1].strip()
# Check whether the final answer ends with triple backticks.
if final_answer.endswith("```"):
# Count occurrences of triple backticks in the final answer.
count = final_answer.count("```")
# If count is odd then it's an unmatched trailing set; remove it.
if count % 2 != 0:
final_answer = final_answer[:-3].rstrip()
return AgentFinish(thought=thought, output=final_answer, text=text)
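The odd-count fence trim above can be stated in isolation (a sketch of the parsing rule, not the parser itself):

```python
def trim_unmatched_fence(final_answer: str) -> str:
    """Drop a trailing ``` only when the total fence count is odd.

    An odd count means the trailing fence has no opening partner, which
    typically happens when the model closes a block it never opened.
    """
    if final_answer.endswith("```") and final_answer.count("```") % 2 != 0:
        return final_answer[:-3].rstrip()
    return final_answer
```

A properly fenced block (even count) passes through untouched; only a stray closing fence is stripped.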
@@ -143,6 +146,7 @@ def _extract_thought(text: str) -> str:
if thought_index == -1:
return ""
thought = text[:thought_index].strip()
# Remove any triple backticks from the thought string
return thought.replace("```", "").strip()
@@ -167,9 +171,18 @@ def _safe_repair_json(tool_input: str) -> str:
Returns:
The repaired JSON string or original if repair fails.
"""
# Skip repair if the input starts and ends with square brackets
# Explanation: The JSON parser has issues handling inputs that are enclosed in square brackets ('[]').
# These are typically valid JSON arrays or strings that do not require repair. Attempting to repair such inputs
# might lead to unintended alterations, such as wrapping the entire input in additional layers or modifying
# the structure in a way that changes its meaning. By skipping the repair for inputs that start and end with
# square brackets, we preserve the integrity of these valid JSON structures and avoid unnecessary modifications.
if tool_input.startswith("[") and tool_input.endswith("]"):
return tool_input
# Before repair, handle common LLM issues:
# 1. Replace """ with " to avoid JSON parser errors
tool_input = tool_input.replace('"""', '"')
result = repair_json(tool_input)
@@ -83,6 +83,10 @@ class PlannerObserver:
return create_llm(config.llm)
return self.agent.llm
# ------------------------------------------------------------------
# Public API
# ------------------------------------------------------------------
def observe(
self,
completed_step: TodoItem,
@@ -178,6 +182,9 @@ class PlannerObserver:
),
)
# Don't force a full replan — the step may have succeeded even if the
# observer LLM failed to parse the result. Defaulting to "continue" is
# far less disruptive than wiping the entire plan on every observer error.
return StepObservation(
step_completed_successfully=True,
key_information_learned="",
@@ -214,6 +221,10 @@ class PlannerObserver:
return remaining_todos
# ------------------------------------------------------------------
# Internal: Message building
# ------------------------------------------------------------------
def _build_observation_messages(
self,
completed_step: TodoItem,
@@ -228,11 +239,15 @@ class PlannerObserver:
task_desc = self.task.description or ""
task_goal = self.task.expected_output or ""
elif self.kickoff_input:
# Standalone kickoff path — no Task object, but we have the raw input.
# Extract just the ## Task section so the observer sees the actual goal,
# not the full enriched instruction with env/tools/verification noise.
task_desc = extract_task_section(self.kickoff_input)
task_goal = "Complete the task successfully"
system_prompt = I18N_DEFAULT.retrieve("planning", "observation_system_prompt")
# Build context of what's been done
completed_summary = ""
if all_completed:
completed_lines = []
@@ -246,6 +261,7 @@ class PlannerObserver:
completed_lines
)
# Build remaining plan
remaining_summary = ""
if remaining_todos:
remaining_lines = [
@@ -290,14 +306,17 @@ class PlannerObserver:
if isinstance(response, StepObservation):
return response
# JSON string path — most common miss before this fix
if isinstance(response, str):
text = response.strip()
try:
return StepObservation.model_validate_json(text)
except Exception: # noqa: S110
pass
# Some LLMs wrap the JSON in markdown fences
if text.startswith("```"):
lines = text.split("\n")
# Strip first and last lines (``` markers)
inner = "\n".join(
lines[1:-1] if lines[-1].strip() == "```" else lines[1:]
)
@@ -306,12 +325,14 @@ class PlannerObserver:
except Exception: # noqa: S110
pass
# Dict path
if isinstance(response, dict):
try:
return StepObservation.model_validate(response)
except Exception: # noqa: S110
pass
# Last resort — log what we got so it's diagnosable
logger.warning(
"Could not parse observation response (type=%s). "
"Falling back to default failure observation. Preview: %.200s",
@@ -108,6 +108,7 @@ class StepExecutor:
self.request_within_rpm_limit = request_within_rpm_limit
self.callbacks = callbacks or []
# Native tool support — set up once
self._use_native_tools = check_native_tool_support(
self.llm, self.original_tools
)
@@ -120,6 +121,10 @@ class StepExecutor:
_,
) = setup_native_tools(self.original_tools)
# ------------------------------------------------------------------
# Public API
# ------------------------------------------------------------------
def execute(
self,
todo: TodoItem,
@@ -185,6 +190,10 @@ class StepExecutor:
execution_time=elapsed,
)
# ------------------------------------------------------------------
# Internal: Message building
# ------------------------------------------------------------------
def _build_isolated_messages(
self, todo: TodoItem, context: StepExecutionContext
) -> list[LLMMessage]:
@@ -228,6 +237,10 @@ class StepExecutor:
"""Build the user prompt for this specific step."""
parts: list[str] = []
# Include overall task context so the executor knows the full goal and
# required output format/location — critical for knowing WHAT to produce.
# We extract only the task body (not tool instructions or verification
# sections) to avoid duplicating directives already in the system prompt.
if context.task_description:
task_section = extract_task_section(context.task_description)
if task_section:
@@ -254,6 +267,7 @@ class StepExecutor:
)
)
# Include dependency results (final results only, no traces)
if context.dependency_results:
parts.append(
I18N_DEFAULT.retrieve("planning", "step_executor_context_header")
@@ -269,6 +283,10 @@ class StepExecutor:
return "\n".join(parts)
# ------------------------------------------------------------------
# Internal: Multi-turn execution loop
# ------------------------------------------------------------------
def _execute_text_parsed(
self,
messages: list[LLMMessage],
@@ -288,6 +306,7 @@ class StepExecutor:
last_tool_result = ""
for _ in range(max_step_iterations):
# Check step timeout
if step_timeout and start_time:
elapsed = time.monotonic() - start_time
if elapsed >= step_timeout:
@@ -312,12 +331,17 @@ class StepExecutor:
tool_calls_made.append(formatted.tool)
tool_result = self._execute_text_tool_with_events(formatted)
last_tool_result = tool_result
# Append the assistant's reasoning + action, then the observation.
# _build_observation_message handles vision sentinels so the LLM
# receives an image content block instead of raw base64 text.
messages.append({"role": "assistant", "content": answer_str})
messages.append(self._build_observation_message(tool_result))
continue
# Raw text response with no Final Answer marker — treat as done
return answer_str
# Max iterations reached — return the last tool result we accumulated
return last_tool_result
def _execute_text_tool_with_events(self, formatted: AgentAction) -> str:
@@ -405,6 +429,10 @@ class StepExecutor:
return {"input": stripped_input}
return {"input": str(tool_input)}
# ------------------------------------------------------------------
# Internal: Vision support
# ------------------------------------------------------------------
@staticmethod
def _parse_vision_sentinel(raw: str) -> tuple[str, str] | None:
"""Parse a VISION_IMAGE sentinel into (media_type, base64_data), or None."""
@@ -489,6 +517,7 @@ class StepExecutor:
accumulated_results: list[str] = []
for _ in range(max_step_iterations):
# Check step timeout
if step_timeout and start_time:
elapsed = time.monotonic() - start_time
if elapsed >= step_timeout:
@@ -512,14 +541,19 @@ class StepExecutor:
return answer.model_dump_json()
if isinstance(answer, list) and answer and is_tool_call_list(answer):
# _execute_native_tool_calls appends assistant + tool messages
# to `messages` as a side-effect, so the next LLM call will
# see the full conversation history including tool outputs.
result = self._execute_native_tool_calls(
answer, messages, tool_calls_made
)
accumulated_results.append(result)
continue
# Text answer → LLM decided the step is done
return str(answer)
# Max iterations reached — return everything we accumulated
return "\n".join(filter(None, accumulated_results))
def _execute_native_tool_calls(
@@ -565,6 +599,9 @@ class StepExecutor:
parsed = self._parse_vision_sentinel(raw_content)
if parsed:
media_type, b64_data = parsed
# Replace the sentinel with a standard image_url content block.
# Each provider's _format_messages handles conversion to
# its native format (e.g. Anthropic image blocks).
modified: LLMMessage = cast(
LLMMessage, dict(call_result.tool_message)
)
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.14.2rc1"
"crewai[tools]==1.14.2a2"
]
[project.scripts]
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.14.2rc1"
"crewai[tools]==1.14.2a2"
]
[project.scripts]
@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.14.2rc1"
"crewai[tools]==1.14.2a2"
]
[tool.crewai]
@@ -16,6 +16,7 @@ from typing import (
get_origin,
)
import uuid
import warnings
from pydantic import (
UUID4,
@@ -25,7 +26,7 @@ from pydantic import (
field_validator,
model_validator,
)
from typing_extensions import Self, deprecated
from typing_extensions import Self
if TYPE_CHECKING:
@@ -172,12 +173,9 @@ def _kickoff_with_a2a_support(
)
@deprecated(
"LiteAgent is deprecated and will be removed in v2.0.0.",
category=FutureWarning,
)
class LiteAgent(FlowTrackable, BaseModel):
"""A lightweight agent that can process messages and use tools.
"""
A lightweight agent that can process messages and use tools.
.. deprecated::
LiteAgent is deprecated and will be removed in a future version.
@@ -280,6 +278,18 @@ class LiteAgent(FlowTrackable, BaseModel):
)
_memory: Any = PrivateAttr(default=None)
@model_validator(mode="after")
def emit_deprecation_warning(self) -> Self:
"""Emit deprecation warning for LiteAgent usage."""
warnings.warn(
"LiteAgent is deprecated and will be removed in a future version. "
"Use Agent().kickoff(messages) instead, which provides the same "
"functionality with additional features like memory and knowledge support.",
DeprecationWarning,
stacklevel=2,
)
return self
@model_validator(mode="after")
def setup_llm(self) -> Self:
"""Set up the LLM and other components after initialization."""
@@ -51,7 +51,6 @@ from crewai.utilities.exceptions.context_window_exceeding_exception import (
)
from crewai.utilities.logger_utils import suppress_warnings
from crewai.utilities.string_utils import sanitize_tool_name
from crewai.utilities.token_counter_callback import TokenCalcHandler
try:
@@ -76,13 +75,8 @@ try:
from litellm.types.utils import (
ChatCompletionDeltaToolCall,
Choices,
Delta as LiteLLMDelta,
Function,
Message,
ModelResponse,
ModelResponseBase,
ModelResponseStream,
StreamingChoices as LiteLLMStreamingChoices,
)
from litellm.utils import supports_response_schema
@@ -91,11 +85,6 @@ except ImportError:
LITELLM_AVAILABLE = False
litellm = None # type: ignore[assignment]
Choices = None # type: ignore[assignment, misc]
LiteLLMDelta = None # type: ignore[assignment, misc]
Message = None # type: ignore[assignment, misc]
ModelResponseBase = None # type: ignore[assignment, misc]
ModelResponseStream = None # type: ignore[assignment, misc]
LiteLLMStreamingChoices = None # type: ignore[assignment, misc]
get_supported_openai_params = None # type: ignore[assignment]
ChatCompletionDeltaToolCall = None # type: ignore[assignment, misc]
Function = None # type: ignore[assignment, misc]
@@ -720,7 +709,7 @@ class LLM(BaseLLM):
chunk_content = None
response_id = None
if isinstance(chunk, ModelResponseBase):
if hasattr(chunk, "id"):
response_id = chunk.id
# Safely extract content from various chunk formats
@@ -729,16 +718,18 @@ class LLM(BaseLLM):
choices = None
if isinstance(chunk, dict) and "choices" in chunk:
choices = chunk["choices"]
elif isinstance(chunk, ModelResponseStream):
choices = chunk.choices
elif hasattr(chunk, "choices"):
# Check if choices is not a type but an actual attribute with value
if not isinstance(chunk.choices, type):
choices = chunk.choices
# Try to extract usage information if available
# NOTE: usage is a pydantic extra field on ModelResponseBase,
# so it must be accessed via model_extra.
if isinstance(chunk, dict) and "usage" in chunk:
usage_info = chunk["usage"]
elif isinstance(chunk, ModelResponseBase) and chunk.model_extra:
usage_info = chunk.model_extra.get("usage") or usage_info
elif hasattr(chunk, "usage"):
# Check if usage is not a type but an actual attribute with value
if not isinstance(chunk.usage, type):
usage_info = chunk.usage
if choices and len(choices) > 0:
choice = choices[0]
@@ -747,7 +738,7 @@ class LLM(BaseLLM):
delta = None
if isinstance(choice, dict) and "delta" in choice:
delta = choice["delta"]
elif isinstance(choice, LiteLLMStreamingChoices):
elif hasattr(choice, "delta"):
delta = choice.delta
# Extract content from delta
@@ -757,7 +748,7 @@ class LLM(BaseLLM):
if "content" in delta and delta["content"] is not None:
chunk_content = delta["content"]
# Handle object format
elif isinstance(delta, LiteLLMDelta):
elif hasattr(delta, "content"):
chunk_content = delta.content
# Handle case where content might be None or empty
@@ -830,8 +821,9 @@ class LLM(BaseLLM):
choices = None
if isinstance(last_chunk, dict) and "choices" in last_chunk:
choices = last_chunk["choices"]
elif isinstance(last_chunk, ModelResponseStream):
choices = last_chunk.choices
elif hasattr(last_chunk, "choices"):
if not isinstance(last_chunk.choices, type):
choices = last_chunk.choices
if choices and len(choices) > 0:
choice = choices[0]
@@ -840,14 +832,14 @@ class LLM(BaseLLM):
message = None
if isinstance(choice, dict) and "message" in choice:
message = choice["message"]
elif isinstance(choice, Choices):
elif hasattr(choice, "message"):
message = choice.message
if message:
content = None
if isinstance(message, dict) and "content" in message:
content = message["content"]
elif isinstance(message, Message):
elif hasattr(message, "content"):
content = message.content
if content:
@@ -874,23 +866,24 @@ class LLM(BaseLLM):
choices = None
if isinstance(last_chunk, dict) and "choices" in last_chunk:
choices = last_chunk["choices"]
elif isinstance(last_chunk, ModelResponseStream):
choices = last_chunk.choices
elif hasattr(last_chunk, "choices"):
if not isinstance(last_chunk.choices, type):
choices = last_chunk.choices
if choices and len(choices) > 0:
choice = choices[0]
delta = None
if isinstance(choice, dict) and "delta" in choice:
delta = choice["delta"]
elif isinstance(choice, LiteLLMStreamingChoices):
delta = choice.delta
message = None
if isinstance(choice, dict) and "message" in choice:
message = choice["message"]
elif hasattr(choice, "message"):
message = choice.message
if delta:
if isinstance(delta, dict) and "tool_calls" in delta:
tool_calls = delta["tool_calls"]
elif isinstance(delta, LiteLLMDelta):
tool_calls = delta.tool_calls
if message:
if isinstance(message, dict) and "tool_calls" in message:
tool_calls = message["tool_calls"]
elif hasattr(message, "tool_calls"):
tool_calls = message.tool_calls
except Exception as e:
logging.debug(f"Error checking for tool calls: {e}")
@@ -1044,7 +1037,7 @@ class LLM(BaseLLM):
"""
if callbacks and len(callbacks) > 0:
for callback in callbacks:
if isinstance(callback, TokenCalcHandler):
if hasattr(callback, "log_success_event"):
# Use the usage_info we've been tracking
if not usage_info:
# Try to get usage from the last chunk if we haven't already
@@ -1055,14 +1048,9 @@ class LLM(BaseLLM):
and "usage" in last_chunk
):
usage_info = last_chunk["usage"]
elif (
isinstance(last_chunk, ModelResponseBase)
and last_chunk.model_extra
):
usage_info = (
last_chunk.model_extra.get("usage")
or usage_info
)
elif hasattr(last_chunk, "usage"):
if not isinstance(last_chunk.usage, type):
usage_info = last_chunk.usage
except Exception as e:
logging.debug(f"Error extracting usage info: {e}")
@@ -1135,10 +1123,13 @@ class LLM(BaseLLM):
params["response_model"] = response_model
response = litellm.completion(**params)
if isinstance(response, ModelResponseBase) and response.model_extra:
usage_info = response.model_extra.get("usage")
if usage_info:
self._track_token_usage_internal(usage_info)
if (
hasattr(response, "usage")
and not isinstance(response.usage, type)
and response.usage
):
usage_info = response.usage
self._track_token_usage_internal(usage_info)
except LLMContextLengthExceededError:
# Re-raise our own context length error
@@ -1150,11 +1141,7 @@ class LLM(BaseLLM):
raise LLMContextLengthExceededError(error_msg) from e
raise
response_usage = self._usage_to_dict(
response.model_extra.get("usage")
if isinstance(response, ModelResponseBase) and response.model_extra
else None
)
response_usage = self._usage_to_dict(getattr(response, "usage", None))
# --- 2) Handle structured output response (when response_model is provided)
if response_model is not None:
@@ -1179,13 +1166,8 @@ class LLM(BaseLLM):
# --- 3) Handle callbacks with usage info
if callbacks and len(callbacks) > 0:
for callback in callbacks:
if isinstance(callback, TokenCalcHandler):
usage_info = (
response.model_extra.get("usage")
if isinstance(response, ModelResponseBase)
and response.model_extra
else None
)
if hasattr(callback, "log_success_event"):
usage_info = getattr(response, "usage", None)
if usage_info:
callback.log_success_event(
kwargs=params,
@@ -1194,7 +1176,7 @@ class LLM(BaseLLM):
end_time=0,
)
# --- 4) Check for tool calls
tool_calls = response_message.tool_calls or []
tool_calls = getattr(response_message, "tool_calls", [])
# --- 5) If no tool calls or no available functions, return the text response directly as long as there is a text response
if (not tool_calls or not available_functions) and text_response:
@@ -1287,10 +1269,13 @@ class LLM(BaseLLM):
params["response_model"] = response_model
response = await litellm.acompletion(**params)
if isinstance(response, ModelResponseBase) and response.model_extra:
usage_info = response.model_extra.get("usage")
if usage_info:
self._track_token_usage_internal(usage_info)
if (
hasattr(response, "usage")
and not isinstance(response.usage, type)
and response.usage
):
usage_info = response.usage
self._track_token_usage_internal(usage_info)
except LLMContextLengthExceededError:
# Re-raise our own context length error
@@ -1302,11 +1287,7 @@ class LLM(BaseLLM):
raise LLMContextLengthExceededError(error_msg) from e
raise
response_usage = self._usage_to_dict(
response.model_extra.get("usage")
if isinstance(response, ModelResponseBase) and response.model_extra
else None
)
response_usage = self._usage_to_dict(getattr(response, "usage", None))
if response_model is not None:
if isinstance(response, BaseModel):
@@ -1328,13 +1309,8 @@ class LLM(BaseLLM):
if callbacks and len(callbacks) > 0:
for callback in callbacks:
if isinstance(callback, TokenCalcHandler):
usage_info = (
response.model_extra.get("usage")
if isinstance(response, ModelResponseBase)
and response.model_extra
else None
)
if hasattr(callback, "log_success_event"):
usage_info = getattr(response, "usage", None)
if usage_info:
callback.log_success_event(
kwargs=params,
@@ -1343,7 +1319,7 @@ class LLM(BaseLLM):
end_time=0,
)
tool_calls = response_message.tool_calls or []
tool_calls = getattr(response_message, "tool_calls", [])
if (not tool_calls or not available_functions) and text_response:
self._handle_emit_call_events(
@@ -1418,19 +1394,18 @@ class LLM(BaseLLM):
async for chunk in await litellm.acompletion(**params):
chunk_count += 1
chunk_content = None
response_id = chunk.id if isinstance(chunk, ModelResponseBase) else None
response_id = chunk.id if hasattr(chunk, "id") else None
try:
choices = None
if isinstance(chunk, dict) and "choices" in chunk:
choices = chunk["choices"]
elif isinstance(chunk, ModelResponseStream):
choices = chunk.choices
elif hasattr(chunk, "choices"):
if not isinstance(chunk.choices, type):
choices = chunk.choices
if isinstance(chunk, ModelResponseBase) and chunk.model_extra:
chunk_usage = chunk.model_extra.get("usage")
if chunk_usage is not None:
usage_info = chunk_usage
if hasattr(chunk, "usage") and chunk.usage is not None:
usage_info = chunk.usage
if choices and len(choices) > 0:
first_choice = choices[0]
@@ -1438,19 +1413,19 @@ class LLM(BaseLLM):
if isinstance(first_choice, dict):
delta = first_choice.get("delta", {})
elif isinstance(first_choice, LiteLLMStreamingChoices):
elif hasattr(first_choice, "delta"):
delta = first_choice.delta
if delta:
if isinstance(delta, dict):
chunk_content = delta.get("content")
elif isinstance(delta, LiteLLMDelta):
elif hasattr(delta, "content"):
chunk_content = delta.content
tool_calls: list[ChatCompletionDeltaToolCall] | None = None
if isinstance(delta, dict):
tool_calls = delta.get("tool_calls")
elif isinstance(delta, LiteLLMDelta):
elif hasattr(delta, "tool_calls"):
tool_calls = delta.tool_calls
if tool_calls:
@@ -1486,7 +1461,7 @@ class LLM(BaseLLM):
if callbacks and len(callbacks) > 0 and usage_info:
for callback in callbacks:
if isinstance(callback, TokenCalcHandler):
if hasattr(callback, "log_success_event"):
callback.log_success_event(
kwargs=params,
response_obj={"usage": usage_info},
@@ -1945,7 +1920,7 @@ class LLM(BaseLLM):
return None
if isinstance(usage, dict):
return usage
if isinstance(usage, BaseModel):
if hasattr(usage, "model_dump"):
result: dict[str, Any] = usage.model_dump()
return result
if hasattr(usage, "__dict__"):
@@ -2009,7 +1984,7 @@ class LLM(BaseLLM):
)
return messages
provider = self.provider or self.model
provider = getattr(self, "provider", None) or self.model
for msg in messages:
files = msg.get("files")
@@ -2060,7 +2035,7 @@ class LLM(BaseLLM):
)
return messages
provider = self.provider or self.model
provider = getattr(self, "provider", None) or self.model
for msg in messages:
files = msg.get("files")

View File

@@ -17,7 +17,10 @@ from crewai.utilities.agent_utils import is_context_length_exceeded
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededError,
)
from crewai.utilities.pydantic_schema_utils import generate_model_description
from crewai.utilities.pydantic_schema_utils import (
generate_model_description,
sanitize_tool_params_for_bedrock_strict,
)
from crewai.utilities.types import LLMMessage
@@ -170,6 +173,7 @@ class ToolSpec(TypedDict, total=False):
name: Required[str]
description: Required[str]
inputSchema: ToolInputSchema
strict: bool
class ConverseToolTypeDef(TypedDict):
@@ -1984,10 +1988,21 @@ class BedrockCompletion(BaseLLM):
"description": description,
}
func_info = tool.get("function", {})
strict_enabled = bool(func_info.get("strict"))
if parameters and isinstance(parameters, dict):
input_schema: ToolInputSchema = {"json": parameters}
schema_params = (
sanitize_tool_params_for_bedrock_strict(parameters)
if strict_enabled
else parameters
)
input_schema: ToolInputSchema = {"json": schema_params}
tool_spec["inputSchema"] = input_schema
if strict_enabled:
tool_spec["strict"] = True
converse_tool: ConverseToolTypeDef = {"toolSpec": tool_spec}
converse_tools.append(converse_tool)
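The change above sanitizes a tool's parameter schema only when the caller opted into strict mode, and forwards a `strict: true` flag on the tool spec. The actual `sanitize_tool_params_for_bedrock_strict` is not shown in this diff; the sketch below only illustrates the usual strict-mode constraints (closed objects, all properties required) with a made-up `sanitize_for_strict` name:

```python
from copy import deepcopy
from typing import Any


def sanitize_for_strict(params: dict[str, Any]) -> dict[str, Any]:
    """Illustrative strict-mode sanitizer: close every object schema and
    mark all of its properties as required, without mutating the input."""
    schema = deepcopy(params)

    def _walk(node: Any) -> None:
        if isinstance(node, dict):
            if node.get("type") == "object":
                node.setdefault("additionalProperties", False)
                node["required"] = sorted(node.get("properties", {}))
            for value in node.values():
                _walk(value)
        elif isinstance(node, list):
            for item in node:
                _walk(item)

    _walk(schema)
    return schema


spec = {"type": "object", "properties": {"city": {"type": "string"}}}
strict_spec = sanitize_for_strict(spec)
# The original dict is untouched; only the deep copy gains strict fields.
```

Gating the sanitization on `strict_enabled` keeps non-strict tools byte-for-byte identical to before the change.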

View File

@@ -417,18 +417,9 @@ class MCPToolResolver:
args_schema = None
if tool_def.get("inputSchema"):
try:
args_schema = self._json_schema_to_pydantic(
tool_name, tool_def["inputSchema"]
)
except Exception as e:
self._logger.log(
"warning",
f"Failed to build args schema for MCP tool "
f"'{tool_name}': {e}. Registering tool without a "
"typed schema.",
)
args_schema = None
args_schema = self._json_schema_to_pydantic(
tool_name, tool_def["inputSchema"]
)
tool_schema = {
"description": tool_def.get("description", ""),

View File

@@ -45,7 +45,6 @@ from crewai.events.types.task_events import (
TaskStartedEvent,
)
from crewai.llms.base_llm import BaseLLM
from crewai.llms.providers.openai.completion import OpenAICompletion
from crewai.security import Fingerprint, SecurityConfig
from crewai.tasks.output_format import OutputFormat
from crewai.tasks.task_output import TaskOutput
@@ -302,14 +301,12 @@ class Task(BaseModel):
@model_validator(mode="after")
def validate_required_fields(self) -> Self:
if self.description is None:
raise ValueError(
"description must be provided either directly or through config"
)
if self.expected_output is None:
raise ValueError(
"expected_output must be provided either directly or through config"
)
required_fields = ["description", "expected_output"]
for field in required_fields:
if getattr(self, field) is None:
raise ValueError(
f"{field} must be provided either directly or through config"
)
return self
@model_validator(mode="after")
@@ -841,8 +838,8 @@ class Task(BaseModel):
should_inject = self.allow_crewai_trigger_context
if should_inject and self.agent:
crew = self.agent.crew
if crew and not isinstance(crew, str) and crew._inputs:
crew = getattr(self.agent, "crew", None)
if crew and hasattr(crew, "_inputs") and crew._inputs:
trigger_payload = crew._inputs.get("crewai_trigger_payload")
if trigger_payload is not None:
description += f"\n\nTrigger Payload: {trigger_payload}"
@@ -855,12 +852,11 @@ class Task(BaseModel):
isinstance(self.agent.llm, BaseLLM)
and self.agent.llm.supports_multimodal()
):
provider: str = self.agent.llm.provider or self.agent.llm.model
api: str | None = (
self.agent.llm.api
if isinstance(self.agent.llm, OpenAICompletion)
else None
provider: str = str(
getattr(self.agent.llm, "provider", None)
or getattr(self.agent.llm, "model", "openai")
)
api: str | None = getattr(self.agent.llm, "api", None)
supported_types = get_supported_content_types(provider, api)
def is_auto_injected(content_type: str) -> bool:

View File

@@ -19,18 +19,7 @@ from collections.abc import Callable
from copy import deepcopy
import datetime
import logging
from typing import (
TYPE_CHECKING,
Annotated,
Any,
Final,
ForwardRef,
Literal,
Optional,
TypedDict,
Union,
cast,
)
from typing import TYPE_CHECKING, Annotated, Any, Final, Literal, TypedDict, Union, cast
import uuid
import jsonref # type: ignore[import-untyped]
@@ -110,22 +99,15 @@ def resolve_refs(schema: dict[str, Any]) -> dict[str, Any]:
"""
defs = schema.get("$defs", {})
schema_copy = deepcopy(schema)
expanding: set[str] = set()
def _resolve(node: Any) -> Any:
if isinstance(node, dict):
ref = node.get("$ref")
if isinstance(ref, str) and ref.startswith("#/$defs/"):
def_name = ref.replace("#/$defs/", "")
if def_name not in defs:
raise KeyError(f"Definition '{def_name}' not found in $defs.")
if def_name in expanding:
return {}
expanding.add(def_name)
try:
if def_name in defs:
return _resolve(deepcopy(defs[def_name]))
finally:
expanding.discard(def_name)
raise KeyError(f"Definition '{def_name}' not found in $defs.")
return {k: _resolve(v) for k, v in node.items()}
if isinstance(node, list):
@@ -137,11 +119,7 @@ def resolve_refs(schema: dict[str, Any]) -> dict[str, Any]:
def add_key_in_dict_recursively(
d: dict[str, Any],
key: str,
value: Any,
criteria: Callable[[dict[str, Any]], bool],
_seen: set[int] | None = None,
d: dict[str, Any], key: str, value: Any, criteria: Callable[[dict[str, Any]], bool]
) -> dict[str, Any]:
"""Recursively adds a key/value pair to all nested dicts matching `criteria`.
@@ -150,31 +128,22 @@ def add_key_in_dict_recursively(
key: The key to add.
value: The value to add.
criteria: A function that returns True for dicts that should receive the key.
_seen: Internal set of visited ``id()``s, used to guard cyclic schemas.
Returns:
The modified dictionary.
"""
if _seen is None:
_seen = set()
if isinstance(d, dict):
if id(d) in _seen:
return d
_seen.add(id(d))
if criteria(d) and key not in d:
d[key] = value
for v in d.values():
add_key_in_dict_recursively(v, key, value, criteria, _seen)
add_key_in_dict_recursively(v, key, value, criteria)
elif isinstance(d, list):
if id(d) in _seen:
return d
_seen.add(id(d))
for i in d:
add_key_in_dict_recursively(i, key, value, criteria, _seen)
add_key_in_dict_recursively(i, key, value, criteria)
return d
def force_additional_properties_false(d: Any, _seen: set[int] | None = None) -> Any:
def force_additional_properties_false(d: Any) -> Any:
"""Force additionalProperties=false on all object-type dicts recursively.
OpenAI strict mode requires all objects to have additionalProperties=false.
@@ -185,17 +154,11 @@ def force_additional_properties_false(d: Any, _seen: set[int] | None = None) ->
Args:
d: The dictionary/list to modify.
_seen: Internal set of visited ``id()``s, used to guard cyclic schemas.
Returns:
The modified dictionary/list.
"""
if _seen is None:
_seen = set()
if isinstance(d, dict):
if id(d) in _seen:
return d
_seen.add(id(d))
if d.get("type") == "object":
d["additionalProperties"] = False
if "properties" not in d:
@@ -203,13 +166,10 @@ def force_additional_properties_false(d: Any, _seen: set[int] | None = None) ->
if "required" not in d:
d["required"] = []
for v in d.values():
force_additional_properties_false(v, _seen)
force_additional_properties_false(v)
elif isinstance(d, list):
if id(d) in _seen:
return d
_seen.add(id(d))
for i in d:
force_additional_properties_false(i, _seen)
force_additional_properties_false(i)
return d
@@ -223,7 +183,7 @@ OPENAI_SUPPORTED_FORMATS: Final[
}
def strip_unsupported_formats(d: Any, _seen: set[int] | None = None) -> Any:
def strip_unsupported_formats(d: Any) -> Any:
"""Remove format annotations that OpenAI strict mode doesn't support.
OpenAI only supports: date-time, date, time, duration.
@@ -231,17 +191,11 @@ def strip_unsupported_formats(d: Any, _seen: set[int] | None = None) -> Any:
Args:
d: The dictionary/list to modify.
_seen: Internal set of visited ``id()``s, used to guard cyclic schemas.
Returns:
The modified dictionary/list.
"""
if _seen is None:
_seen = set()
if isinstance(d, dict):
if id(d) in _seen:
return d
_seen.add(id(d))
format_value = d.get("format")
if (
isinstance(format_value, str)
@@ -249,17 +203,14 @@ def strip_unsupported_formats(d: Any, _seen: set[int] | None = None) -> Any:
):
del d["format"]
for v in d.values():
strip_unsupported_formats(v, _seen)
strip_unsupported_formats(v)
elif isinstance(d, list):
if id(d) in _seen:
return d
_seen.add(id(d))
for i in d:
strip_unsupported_formats(i, _seen)
strip_unsupported_formats(i)
return d
def ensure_type_in_schemas(d: Any, _seen: set[int] | None = None) -> Any:
def ensure_type_in_schemas(d: Any) -> Any:
"""Ensure all schema objects in anyOf/oneOf have a 'type' key.
OpenAI strict mode requires every schema to have a 'type' key.
@@ -267,17 +218,11 @@ def ensure_type_in_schemas(d: Any, _seen: set[int] | None = None) -> Any:
Args:
d: The dictionary/list to modify.
_seen: Internal set of visited ``id()``s, used to guard cyclic schemas.
Returns:
The modified dictionary/list.
"""
if _seen is None:
_seen = set()
if isinstance(d, dict):
if id(d) in _seen:
return d
_seen.add(id(d))
for key in ("anyOf", "oneOf"):
if key in d:
schema_list = d[key]
@@ -285,15 +230,12 @@ def ensure_type_in_schemas(d: Any, _seen: set[int] | None = None) -> Any:
if isinstance(schema, dict) and schema == {}:
schema_list[i] = {"type": "object"}
else:
ensure_type_in_schemas(schema, _seen)
ensure_type_in_schemas(schema)
for v in d.values():
ensure_type_in_schemas(v, _seen)
ensure_type_in_schemas(v)
elif isinstance(d, list):
if id(d) in _seen:
return d
_seen.add(id(d))
for item in d:
ensure_type_in_schemas(item, _seen)
ensure_type_in_schemas(item)
return d
@@ -376,9 +318,7 @@ def add_const_to_oneof_variants(schema: dict[str, Any]) -> dict[str, Any]:
return _process_oneof(deepcopy(schema))
def convert_oneof_to_anyof(
schema: dict[str, Any], _seen: set[int] | None = None
) -> dict[str, Any]:
def convert_oneof_to_anyof(schema: dict[str, Any]) -> dict[str, Any]:
"""Convert oneOf to anyOf for OpenAI compatibility.
OpenAI's Structured Outputs support anyOf better than oneOf.
@@ -386,37 +326,26 @@ def convert_oneof_to_anyof(
Args:
schema: JSON schema dictionary.
_seen: Internal set of visited ``id()``s, used to guard cyclic schemas.
Returns:
Modified schema with anyOf instead of oneOf.
"""
if _seen is None:
_seen = set()
if isinstance(schema, dict):
if id(schema) in _seen:
return schema
_seen.add(id(schema))
if "oneOf" in schema:
schema["anyOf"] = schema.pop("oneOf")
for value in schema.values():
if isinstance(value, dict):
convert_oneof_to_anyof(value, _seen)
convert_oneof_to_anyof(value)
elif isinstance(value, list):
if id(value) in _seen:
continue
_seen.add(id(value))
for item in value:
if isinstance(item, dict):
convert_oneof_to_anyof(item, _seen)
convert_oneof_to_anyof(item)
return schema
def ensure_all_properties_required(
schema: dict[str, Any], _seen: set[int] | None = None
) -> dict[str, Any]:
def ensure_all_properties_required(schema: dict[str, Any]) -> dict[str, Any]:
"""Ensure all properties are in the required array for OpenAI strict mode.
OpenAI's strict structured outputs require all properties to be listed
@@ -425,17 +354,11 @@ def ensure_all_properties_required(
Args:
schema: JSON schema dictionary.
_seen: Internal set of visited ``id()``s, used to guard cyclic schemas.
Returns:
Modified schema with all properties marked as required.
"""
if _seen is None:
_seen = set()
if isinstance(schema, dict):
if id(schema) in _seen:
return schema
_seen.add(id(schema))
if schema.get("type") == "object" and "properties" in schema:
properties = schema["properties"]
if properties:
@@ -443,21 +366,16 @@ def ensure_all_properties_required(
for value in schema.values():
if isinstance(value, dict):
ensure_all_properties_required(value, _seen)
ensure_all_properties_required(value)
elif isinstance(value, list):
if id(value) in _seen:
continue
_seen.add(id(value))
for item in value:
if isinstance(item, dict):
ensure_all_properties_required(item, _seen)
ensure_all_properties_required(item)
return schema
def strip_null_from_types(
schema: dict[str, Any], _seen: set[int] | None = None
) -> dict[str, Any]:
def strip_null_from_types(schema: dict[str, Any]) -> dict[str, Any]:
"""Remove null type from anyOf/type arrays.
Pydantic generates `T | None` for optional fields, which creates schemas with
@@ -466,17 +384,11 @@ def strip_null_from_types(
Args:
schema: JSON schema dictionary.
_seen: Internal set of visited ``id()``s, used to guard cyclic schemas.
Returns:
Modified schema with null types removed.
"""
if _seen is None:
_seen = set()
if isinstance(schema, dict):
if id(schema) in _seen:
return schema
_seen.add(id(schema))
if "anyOf" in schema:
any_of = schema["anyOf"]
non_null = [opt for opt in any_of if opt.get("type") != "null"]
@@ -496,14 +408,11 @@ def strip_null_from_types(
for value in schema.values():
if isinstance(value, dict):
strip_null_from_types(value, _seen)
strip_null_from_types(value)
elif isinstance(value, list):
if id(value) in _seen:
continue
_seen.add(id(value))
for item in value:
if isinstance(item, dict):
strip_null_from_types(item, _seen)
strip_null_from_types(item)
return schema
@@ -542,26 +451,16 @@ _CLAUDE_STRICT_UNSUPPORTED: Final[tuple[str, ...]] = (
)
def _strip_keys_recursive(
d: Any, keys: tuple[str, ...], _seen: set[int] | None = None
) -> Any:
def _strip_keys_recursive(d: Any, keys: tuple[str, ...]) -> Any:
"""Recursively delete a fixed set of keys from a schema."""
if _seen is None:
_seen = set()
if isinstance(d, dict):
if id(d) in _seen:
return d
_seen.add(id(d))
for key in keys:
d.pop(key, None)
for v in d.values():
_strip_keys_recursive(v, keys, _seen)
_strip_keys_recursive(v, keys)
elif isinstance(d, list):
if id(d) in _seen:
return d
_seen.add(id(d))
for i in d:
_strip_keys_recursive(i, keys, _seen)
_strip_keys_recursive(i, keys)
return d
@@ -820,70 +719,12 @@ def create_model_from_schema( # type: ignore[no-any-unimported]
json_schema = force_additional_properties_false(json_schema)
effective_root = force_additional_properties_false(effective_root)
in_progress: dict[int, Any] = {}
model = _build_model_from_schema(
json_schema,
effective_root,
model_name=model_name,
enrich_descriptions=enrich_descriptions,
in_progress=in_progress,
__config__=__config__,
__base__=__base__,
__module__=__module__,
__validators__=__validators__,
__cls_kwargs__=__cls_kwargs__,
)
types_namespace: dict[str, Any] = {
entry.__name__: entry
for entry in in_progress.values()
if isinstance(entry, type) and issubclass(entry, BaseModel)
}
for entry in in_progress.values():
if (
isinstance(entry, type)
and issubclass(entry, BaseModel)
and not getattr(entry, "__pydantic_complete__", True)
):
try:
entry.model_rebuild(_types_namespace=types_namespace)
except Exception as e:
logger.debug("model_rebuild failed for %s: %s", entry.__name__, e)
return model
def _build_model_from_schema( # type: ignore[no-any-unimported]
json_schema: dict[str, Any],
effective_root: dict[str, Any],
*,
model_name: str | None,
enrich_descriptions: bool,
in_progress: dict[int, Any],
__config__: ConfigDict | None = None,
__base__: type[BaseModel] | None = None,
__module__: str = __name__,
__validators__: dict[str, AnyClassMethod] | None = None,
__cls_kwargs__: dict[str, Any] | None = None,
) -> type[BaseModel]:
"""Inner builder shared by the public entry point and recursive nested-object creation.
Preprocessing via ``jsonref.replace_refs`` and the sanitization walkers is
run once by the public entry; this helper walks the already-normalized
schema and emits Pydantic models. ``in_progress`` maps ``id(schema)`` to
the model being built for that schema, so a cyclic ``$ref`` graph
degrades to a ``ForwardRef`` back-edge instead of blowing the stack.
"""
original_id = id(json_schema)
if "allOf" in json_schema:
json_schema = _merge_all_of_schemas(json_schema["allOf"], effective_root)
if "title" not in json_schema and "title" in (root_schema or {}):
json_schema["title"] = (root_schema or {}).get("title")
effective_name = model_name or json_schema.get("title") or "DynamicModel"
schema_id = id(json_schema)
in_progress[original_id] = effective_name
if schema_id != original_id:
in_progress[schema_id] = effective_name
field_definitions = {
name: _json_schema_to_pydantic_field(
name,
@@ -891,14 +732,13 @@ def _build_model_from_schema( # type: ignore[no-any-unimported]
json_schema.get("required", []),
effective_root,
enrich_descriptions=enrich_descriptions,
in_progress=in_progress,
)
for name, prop in (json_schema.get("properties", {}) or {}).items()
}
effective_config = __config__ or ConfigDict(extra="forbid")
model = create_model_base(
return create_model_base(
effective_name,
__config__=effective_config,
__base__=__base__,
@@ -907,10 +747,6 @@ def _build_model_from_schema( # type: ignore[no-any-unimported]
__cls_kwargs__=__cls_kwargs__,
**field_definitions,
)
in_progress[original_id] = model
if schema_id != original_id:
in_progress[schema_id] = model
return model
def _json_schema_to_pydantic_field(
@@ -920,7 +756,6 @@ def _json_schema_to_pydantic_field(
root_schema: dict[str, Any],
*,
enrich_descriptions: bool = False,
in_progress: dict[int, Any] | None = None,
) -> Any:
"""Convert a JSON schema property to a Pydantic field definition.
@@ -939,7 +774,6 @@ def _json_schema_to_pydantic_field(
root_schema,
name_=name.title(),
enrich_descriptions=enrich_descriptions,
in_progress=in_progress,
)
is_required = name in required
@@ -999,7 +833,7 @@ def _json_schema_to_pydantic_field(
field_params["pattern"] = json_schema["pattern"]
if not is_required:
type_ = Optional[type_] # noqa: UP045 - ForwardRef does not support `|`
type_ = type_ | None
if schema_extra:
field_params["json_schema_extra"] = schema_extra
@@ -1072,7 +906,6 @@ def _json_schema_to_pydantic_type(
*,
name_: str | None = None,
enrich_descriptions: bool = False,
in_progress: dict[int, Any] | None = None,
) -> Any:
"""Convert a JSON schema to a Python/Pydantic type.
@@ -1081,23 +914,10 @@ def _json_schema_to_pydantic_type(
root_schema: The root schema for resolving $ref.
name_: Optional name for nested models.
enrich_descriptions: Propagated to nested model creation.
in_progress: Map of ``id(schema_dict)`` to the Pydantic model
currently being built for that schema, or to a placeholder name
as a plain ``str`` while the model is still being constructed.
Populated by :func:`_build_model_from_schema`. Enables cycle
detection so a self-referential ``$ref`` graph resolves to a
:class:`ForwardRef` back-edge rather than recursing forever.
Returns:
A Python type corresponding to the JSON schema.
"""
if in_progress is not None:
cached = in_progress.get(id(json_schema))
if isinstance(cached, str):
return ForwardRef(cached)
if cached is not None:
return cached
ref = json_schema.get("$ref")
if ref:
ref_schema = _resolve_ref(ref, root_schema)
@@ -1106,7 +926,6 @@ def _json_schema_to_pydantic_type(
root_schema,
name_=name_,
enrich_descriptions=enrich_descriptions,
in_progress=in_progress,
)
enum_values = json_schema.get("enum")
@@ -1126,7 +945,6 @@ def _json_schema_to_pydantic_type(
root_schema,
name_=f"{name_ or 'Union'}Option{i}",
enrich_descriptions=enrich_descriptions,
in_progress=in_progress,
)
for i, schema in enumerate(any_of_schemas)
]
@@ -1140,15 +958,6 @@ def _json_schema_to_pydantic_type(
root_schema,
name_=name_,
enrich_descriptions=enrich_descriptions,
in_progress=in_progress,
)
if in_progress is not None:
return _build_model_from_schema(
json_schema,
root_schema,
model_name=name_,
enrich_descriptions=enrich_descriptions,
in_progress=in_progress,
)
merged = _merge_all_of_schemas(all_of_schemas, root_schema)
return _json_schema_to_pydantic_type(
@@ -1156,7 +965,6 @@ def _json_schema_to_pydantic_type(
root_schema,
name_=name_,
enrich_descriptions=enrich_descriptions,
in_progress=in_progress,
)
type_ = json_schema.get("type")
@@ -1177,21 +985,12 @@ def _json_schema_to_pydantic_type(
root_schema,
name_=name_,
enrich_descriptions=enrich_descriptions,
in_progress=in_progress,
)
return list[item_type] # type: ignore[valid-type]
return list
if type_ == "object":
properties = json_schema.get("properties")
if properties:
if in_progress is not None:
return _build_model_from_schema(
json_schema,
root_schema,
model_name=name_,
enrich_descriptions=enrich_descriptions,
in_progress=in_progress,
)
json_schema_ = json_schema.copy()
if json_schema_.get("title") is None:
json_schema_["title"] = name_ or "DynamicModel"

View File

@@ -0,0 +1,197 @@
"""Tests for A2A delegation utilities."""
from __future__ import annotations
from contextlib import asynccontextmanager, ExitStack
from typing import Any
from unittest.mock import AsyncMock, MagicMock, patch
import pytest
from a2a.client import Client
from a2a.types import AgentCapabilities, AgentCard
from crewai.a2a.updates.polling.handler import PollingHandler
from crewai.a2a.updates.streaming.handler import StreamingHandler
from crewai.a2a.utils.delegation import get_handler, _aexecute_a2a_delegation_impl
from crewai.a2a.utils.transport import NegotiatedTransport
class TestGetHandler:
"""Tests for the get_handler helper."""
def test_returns_streaming_handler_when_config_is_none(self) -> None:
assert get_handler(None) is StreamingHandler
def test_returns_polling_handler_for_polling_config(self) -> None:
from crewai.a2a.updates import PollingConfig
assert get_handler(PollingConfig()) is PollingHandler
def _make_agent_card(streaming: bool | None) -> AgentCard:
"""Build a minimal AgentCard with the given streaming capability."""
capabilities = AgentCapabilities(streaming=streaming)
return AgentCard(
name="test-agent",
description="A test agent",
url="http://localhost:9999/",
version="1.0.0",
capabilities=capabilities,
defaultInputModes=["text/plain"],
defaultOutputModes=["text/plain"],
skills=[],
)
_TASK_RESULT = {"status": "completed", "result": "done", "history": []}
def _make_shared_patches(agent_card: AgentCard) -> list:
"""Return the common patches used across the delegation tests."""
mock_client = MagicMock(spec=Client)
@asynccontextmanager
async def _fake_client_ctx(*args: Any, **kwargs: Any):
yield mock_client
negotiated = NegotiatedTransport(
transport="JSONRPC",
url=agent_card.url,
source="server_card",
)
return [
patch(
"crewai.a2a.utils.delegation._afetch_agent_card_cached",
new=AsyncMock(return_value=agent_card),
),
patch("crewai.a2a.utils.delegation.validate_auth_against_agent_card"),
patch(
"crewai.a2a.utils.delegation.validate_required_extensions",
return_value=[],
),
patch(
"crewai.a2a.utils.delegation.negotiate_transport",
return_value=negotiated,
),
patch(
"crewai.a2a.utils.delegation.negotiate_content_types",
return_value=MagicMock(output_modes=None),
),
patch(
"crewai.a2a.utils.delegation._prepare_auth_headers",
new=AsyncMock(return_value=({}, None)),
),
patch("crewai.a2a.utils.delegation.crewai_event_bus"),
patch(
"crewai.a2a.utils.delegation._create_a2a_client",
side_effect=_fake_client_ctx,
),
]
async def _call_impl(agent_card: AgentCard, updates=None) -> None:
await _aexecute_a2a_delegation_impl(
endpoint="http://localhost:9999/",
auth=None,
timeout=30,
task_description="test task",
context=None,
context_id=None,
task_id=None,
reference_task_ids=None,
metadata=None,
extensions=None,
conversation_history=[],
is_multiturn=False,
turn_number=1,
agent_branch=None,
agent_id=None,
agent_role=None,
response_model=None,
updates=updates,
)
class TestStreamingFallback:
"""Tests that the delegation respects the agent card's streaming capability."""
@pytest.mark.asyncio
async def test_uses_polling_when_agent_card_says_no_streaming(self) -> None:
"""When streaming=False and updates is None, PollingHandler should be used."""
agent_card = _make_agent_card(streaming=False)
with ExitStack() as stack:
for p in _make_shared_patches(agent_card):
stack.enter_context(p)
mock_polling = stack.enter_context(
patch.object(PollingHandler, "execute", new=AsyncMock(return_value=_TASK_RESULT))
)
mock_streaming = stack.enter_context(
patch.object(StreamingHandler, "execute", new=AsyncMock(return_value=_TASK_RESULT))
)
await _call_impl(agent_card, updates=None)
mock_polling.assert_called_once()
mock_streaming.assert_not_called()
@pytest.mark.asyncio
async def test_uses_streaming_when_agent_card_says_streaming(self) -> None:
"""When streaming=True and updates is None, StreamingHandler should be used."""
agent_card = _make_agent_card(streaming=True)
with ExitStack() as stack:
for p in _make_shared_patches(agent_card):
stack.enter_context(p)
mock_polling = stack.enter_context(
patch.object(PollingHandler, "execute", new=AsyncMock(return_value=_TASK_RESULT))
)
mock_streaming = stack.enter_context(
patch.object(StreamingHandler, "execute", new=AsyncMock(return_value=_TASK_RESULT))
)
await _call_impl(agent_card, updates=None)
mock_streaming.assert_called_once()
mock_polling.assert_not_called()
@pytest.mark.asyncio
async def test_uses_streaming_when_agent_card_streaming_is_none(self) -> None:
"""When streaming is unset (None) and updates is None, StreamingHandler should be used."""
agent_card = _make_agent_card(streaming=None)
with ExitStack() as stack:
for p in _make_shared_patches(agent_card):
stack.enter_context(p)
mock_polling = stack.enter_context(
patch.object(PollingHandler, "execute", new=AsyncMock(return_value=_TASK_RESULT))
)
mock_streaming = stack.enter_context(
patch.object(StreamingHandler, "execute", new=AsyncMock(return_value=_TASK_RESULT))
)
await _call_impl(agent_card, updates=None)
mock_streaming.assert_called_once()
mock_polling.assert_not_called()
@pytest.mark.asyncio
async def test_explicit_streaming_config_overrides_agent_card(self) -> None:
"""Explicitly passing StreamingConfig keeps StreamingHandler even when agent card says no streaming."""
from crewai.a2a.updates.streaming.config import StreamingConfig
agent_card = _make_agent_card(streaming=False)
with ExitStack() as stack:
for p in _make_shared_patches(agent_card):
stack.enter_context(p)
mock_polling = stack.enter_context(
patch.object(PollingHandler, "execute", new=AsyncMock(return_value=_TASK_RESULT))
)
mock_streaming = stack.enter_context(
patch.object(StreamingHandler, "execute", new=AsyncMock(return_value=_TASK_RESULT))
)
await _call_impl(agent_card, updates=StreamingConfig())
# explicit config overrides agent card; streaming handler is used
mock_streaming.assert_called_once()
mock_polling.assert_not_called()
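These tests enter a variable-length list of patches through a single `ExitStack`, which unwinds them all on exit exactly like nested `with` blocks. A minimal stdlib sketch of that idiom:

```python
from contextlib import ExitStack, contextmanager

events: list[str] = []


@contextmanager
def tracked(name: str):
    events.append(f"enter {name}")
    try:
        yield name
    finally:
        events.append(f"exit {name}")


with ExitStack() as stack:
    # enter_context accepts any context manager, so a loop can register
    # an arbitrary number of them under one `with` statement
    for name in ("a", "b", "c"):
        stack.enter_context(tracked(name))
    events.append("body")
# Contexts exit in reverse order of entry, even if the body raises.
```

This is why `_make_shared_patches` can return a plain list: the test body stays flat no matter how many mocks the delegation path needs.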

View File

@@ -1051,7 +1051,7 @@ def test_lite_agent_verbose_false_suppresses_printer_output():
successful_requests=1,
)
with pytest.warns(FutureWarning):
with pytest.warns(DeprecationWarning):
agent = LiteAgent(
role="Test Agent",
goal="Test goal",

View File

@@ -1,3 +1,3 @@
"""CrewAI development tools."""
__version__ = "1.14.2rc1"
__version__ = "1.14.2a2"

View File

@@ -29,33 +29,6 @@ load_dotenv()
console = Console()
def _resume_hint(message: str) -> None:
"""Print a boxed resume hint after a failure."""
console.print()
console.print(
Panel(
message,
title="[bold yellow]How to resume[/bold yellow]",
border_style="yellow",
padding=(1, 2),
)
)
def _print_release_error(e: BaseException) -> None:
"""Print a release error with stderr if available."""
if isinstance(e, KeyboardInterrupt):
raise
if isinstance(e, SystemExit):
return
if isinstance(e, subprocess.CalledProcessError):
console.print(f"[red]Error running command:[/red] {e}")
if e.stderr:
console.print(e.stderr)
else:
console.print(f"[red]Error:[/red] {e}")
def run_command(cmd: list[str], cwd: Path | None = None) -> str:
"""Run a shell command and return output.
@@ -291,9 +264,11 @@ def add_docs_version(docs_json_path: Path, version: str) -> bool:
if not versions:
continue
# Skip if this version already exists for this language
if any(v.get("version") == version_label for v in versions):
continue
# Find the current default and copy its tabs
default_version = next(
(v for v in versions if v.get("default")),
versions[0],
@@ -305,7 +280,10 @@ def add_docs_version(docs_json_path: Path, version: str) -> bool:
"tabs": default_version.get("tabs", []),
}
# Remove default flag from old default
default_version.pop("default", None)
# Insert new version at the beginning
versions.insert(0, new_version)
updated = True
@@ -499,7 +477,7 @@ def _is_crewai_dep(spec: str) -> bool:
"""Return True if *spec* is a ``crewai`` or ``crewai[...]`` dependency."""
if not spec.startswith("crewai"):
return False
rest = spec[6:]
rest = spec[6:] # after "crewai"
return len(rest) > 0 and rest[0] in ("[", "=", ">", "<", "~", "!")
@@ -521,6 +499,7 @@ def _pin_crewai_deps(content: str, version: str) -> str:
deps = doc.get("project", {}).get(key)
if deps is None:
continue
# optional-dependencies is a table of lists; dependencies is a list
dep_lists = deps.values() if isinstance(deps, Mapping) else [deps]
for dep_list in dep_lists:
for i, dep in enumerate(dep_list):
@@ -659,6 +638,7 @@ def get_github_contributors(commit_range: str) -> list[str]:
List of GitHub usernames sorted alphabetically.
"""
try:
# Get GitHub token from gh CLI
try:
gh_token = run_command(["gh", "auth", "token"])
except subprocess.CalledProcessError:
@@ -700,6 +680,11 @@ def get_github_contributors(commit_range: str) -> list[str]:
return []
# ---------------------------------------------------------------------------
# Shared workflow helpers
# ---------------------------------------------------------------------------
def _poll_pr_until_merged(
branch_name: str, label: str, repo: str | None = None
) -> None:
@@ -779,6 +764,7 @@ def _update_all_versions(
"[yellow]Warning:[/yellow] No __version__ attributes found to update"
)
# Update CLI template pyproject.toml files
templates_dir = lib_dir / "crewai" / "src" / "crewai" / "cli" / "templates"
if templates_dir.exists():
if dry_run:
@@ -1177,11 +1163,13 @@ def _repin_crewai_install(run_value: str, version: str) -> str:
while marker in remainder:
before, _, after = remainder.partition(marker)
result.append(before)
# after looks like: a2a]==1.14.0" ...
bracket_end = after.index("]")
extras = after[:bracket_end]
rest = after[bracket_end + 1 :]
if rest.startswith("=="):
ver_start = 2
# Find end of version — next quote or whitespace
ver_start = 2 # len("==")
ver_end = ver_start
while ver_end < len(rest) and rest[ver_end] not in ('"', "'", " ", "\n"):
ver_end += 1
@@ -1343,6 +1331,7 @@ def _release_enterprise(version: str, is_prerelease: bool, dry_run: bool) -> Non
run_command(["gh", "repo", "clone", enterprise_repo, str(repo_dir)])
console.print(f"[green]✓[/green] Cloned {enterprise_repo}")
# --- bump versions ---
for rel_dir in _ENTERPRISE_VERSION_DIRS:
pkg_dir = repo_dir / rel_dir
if not pkg_dir.exists():
@@ -1372,12 +1361,14 @@ def _release_enterprise(version: str, is_prerelease: bool, dry_run: bool) -> Non
f"{pyproject.relative_to(repo_dir)}"
)
# --- update crewai[tools] pin ---
enterprise_pyproject = repo_dir / enterprise_dep_path
if _update_enterprise_crewai_dep(enterprise_pyproject, version):
console.print(
f"[green]✓[/green] Updated crewai[tools] dep in {enterprise_dep_path}"
)
# --- update crewai pins in CI workflows ---
for wf in _update_enterprise_workflows(repo_dir, version):
console.print(
f"[green]✓[/green] Updated crewai pin in {wf.relative_to(repo_dir)}"
@@ -1417,6 +1408,7 @@ def _release_enterprise(version: str, is_prerelease: bool, dry_run: bool) -> Non
time.sleep(_PYPI_POLL_INTERVAL)
console.print("[green]✓[/green] Workspace synced")
# --- branch, commit, push, PR ---
branch_name = f"feat/bump-version-{version}"
run_command(["git", "checkout", "-b", branch_name], cwd=repo_dir)
run_command(["git", "add", "."], cwd=repo_dir)
@@ -1450,6 +1442,7 @@ def _release_enterprise(version: str, is_prerelease: bool, dry_run: bool) -> Non
_poll_pr_until_merged(branch_name, "enterprise bump PR", repo=enterprise_repo)
# --- tag and release ---
run_command(["git", "checkout", "main"], cwd=repo_dir)
run_command(["git", "pull"], cwd=repo_dir)
@@ -1491,6 +1484,7 @@ def _trigger_pypi_publish(tag_name: str, wait: bool = False) -> None:
tag_name: The release tag to publish.
wait: Block until the workflow run completes.
"""
# Capture the latest run ID before triggering so we can detect the new one
prev_run_id = ""
if wait:
try:
@@ -1565,6 +1559,11 @@ def _trigger_pypi_publish(tag_name: str, wait: bool = False) -> None:
console.print("[green]✓[/green] PyPI publish workflow completed")
# ---------------------------------------------------------------------------
# CLI commands
# ---------------------------------------------------------------------------
@click.group()
def cli() -> None:
"""Development tools for version bumping and git automation."""
@@ -1832,80 +1831,62 @@ def release(
skip_enterprise: Skip the enterprise release phase.
skip_to_enterprise: Skip phases 1 & 2, run only the enterprise release phase.
"""
flags: list[str] = []
if no_edit:
flags.append("--no-edit")
if skip_enterprise:
flags.append("--skip-enterprise")
flag_suffix = (" " + " ".join(flags)) if flags else ""
enterprise_hint = (
""
if skip_enterprise
else f"\n\nThen release enterprise:\n\n"
f" devtools release {version} --skip-to-enterprise"
)
check_gh_installed()
if skip_enterprise and skip_to_enterprise:
console.print(
"[red]Error:[/red] Cannot use both --skip-enterprise "
"and --skip-to-enterprise"
)
sys.exit(1)
if not skip_enterprise or skip_to_enterprise:
missing: list[str] = []
if not _ENTERPRISE_REPO:
missing.append("ENTERPRISE_REPO")
if not _ENTERPRISE_VERSION_DIRS:
missing.append("ENTERPRISE_VERSION_DIRS")
if not _ENTERPRISE_CREWAI_DEP_PATH:
missing.append("ENTERPRISE_CREWAI_DEP_PATH")
if missing:
console.print(
f"[red]Error:[/red] Missing required environment variable(s): "
f"{', '.join(missing)}\n"
f"Set them or pass --skip-enterprise to skip the enterprise release."
)
sys.exit(1)
cwd = Path.cwd()
lib_dir = cwd / "lib"
is_prerelease = _is_prerelease(version)
if skip_to_enterprise:
try:
_release_enterprise(version, is_prerelease, dry_run)
except BaseException as e:
_print_release_error(e)
_resume_hint(
f"Fix the issue, then re-run:\n\n"
f" devtools release {version} --skip-to-enterprise"
)
sys.exit(1)
console.print(
f"\n[green]✓[/green] Enterprise release [bold]{version}[/bold] complete!"
)
return
if not dry_run:
console.print("Checking git status...")
check_git_clean()
console.print("[green]✓[/green] Working directory is clean")
else:
console.print("[dim][DRY RUN][/dim] Would check git status")
packages = get_packages(lib_dir)
console.print(f"\nFound {len(packages)} package(s) to update:")
for pkg in packages:
console.print(f" - {pkg.name}")
console.print(f"\n[bold cyan]Phase 1: Bumping versions to {version}[/bold cyan]")
try:
check_gh_installed()
if skip_enterprise and skip_to_enterprise:
console.print(
"[red]Error:[/red] Cannot use both --skip-enterprise "
"and --skip-to-enterprise"
)
sys.exit(1)
if not skip_enterprise or skip_to_enterprise:
missing: list[str] = []
if not _ENTERPRISE_REPO:
missing.append("ENTERPRISE_REPO")
if not _ENTERPRISE_VERSION_DIRS:
missing.append("ENTERPRISE_VERSION_DIRS")
if not _ENTERPRISE_CREWAI_DEP_PATH:
missing.append("ENTERPRISE_CREWAI_DEP_PATH")
if missing:
console.print(
f"[red]Error:[/red] Missing required environment variable(s): "
f"{', '.join(missing)}\n"
f"Set them or pass --skip-enterprise to skip the enterprise release."
)
sys.exit(1)
cwd = Path.cwd()
lib_dir = cwd / "lib"
is_prerelease = _is_prerelease(version)
if skip_to_enterprise:
_release_enterprise(version, is_prerelease, dry_run)
console.print(
f"\n[green]✓[/green] Enterprise release [bold]{version}[/bold] complete!"
)
return
if not dry_run:
console.print("Checking git status...")
check_git_clean()
console.print("[green]✓[/green] Working directory is clean")
else:
console.print("[dim][DRY RUN][/dim] Would check git status")
packages = get_packages(lib_dir)
console.print(f"\nFound {len(packages)} package(s) to update:")
for pkg in packages:
console.print(f" - {pkg.name}")
# --- Phase 1: Bump versions ---
console.print(
f"\n[bold cyan]Phase 1: Bumping versions to {version}[/bold cyan]"
)
_update_all_versions(cwd, lib_dir, version, packages, dry_run)
branch_name = f"feat/bump-version-{version}"
@@ -1949,17 +1930,12 @@ def release(
console.print(
"[dim][DRY RUN][/dim] Would push branch, create PR, and wait for merge"
)
except BaseException as e:
_print_release_error(e)
_resume_hint(
f"Phase 1 failed. Fix the issue, then re-run:\n\n"
f" devtools release {version}{flag_suffix}"
# --- Phase 2: Tag and release ---
console.print(
f"\n[bold cyan]Phase 2: Tagging and releasing {version}[/bold cyan]"
)
sys.exit(1)
console.print(f"\n[bold cyan]Phase 2: Tagging and releasing {version}[/bold cyan]")
try:
tag_name = version
if not dry_run:
@@ -1986,57 +1962,22 @@ def release(
if not dry_run:
_create_tag_and_release(tag_name, release_notes, is_prerelease)
except BaseException as e:
_print_release_error(e)
_resume_hint(
"Phase 2 failed before PyPI publish. The bump PR is already merged.\n"
"Fix the issue, then resume with:\n\n"
" devtools tag"
f"\n\nAfter tagging, publish to PyPI and update deployment test:\n\n"
f" gh workflow run publish.yml -f release_tag={version}"
f"{enterprise_hint}"
)
sys.exit(1)
try:
if not dry_run:
_trigger_pypi_publish(tag_name, wait=True)
except BaseException as e:
_print_release_error(e)
_resume_hint(
f"Phase 2 failed at PyPI publish. Tag and GitHub release already exist.\n"
f"Retry PyPI publish manually:\n\n"
f" gh workflow run publish.yml -f release_tag={version}"
f"{enterprise_hint}"
)
sys.exit(1)
try:
if not dry_run:
_update_deployment_test_repo(version, is_prerelease)
except BaseException as e:
_print_release_error(e)
_resume_hint(
f"Phase 2 failed updating deployment test repo. "
f"Tag, release, and PyPI are done.\n"
f"Fix the issue and update {_DEPLOYMENT_TEST_REPO} manually."
f"{enterprise_hint}"
)
sys.exit(1)
if not skip_enterprise:
try:
if not skip_enterprise:
_release_enterprise(version, is_prerelease, dry_run)
except BaseException as e:
_print_release_error(e)
_resume_hint(
f"Phase 3 (enterprise) failed. Phases 1 & 2 completed successfully.\n"
f"Fix the issue, then resume:\n\n"
f" devtools release {version} --skip-to-enterprise"
)
sys.exit(1)
console.print(f"\n[green]✓[/green] Release [bold]{version}[/bold] complete!")
console.print(f"\n[green]✓[/green] Release [bold]{version}[/bold] complete!")
except subprocess.CalledProcessError as e:
console.print(f"[red]Error running command:[/red] {e}")
if e.stderr:
console.print(e.stderr)
sys.exit(1)
except Exception as e:
console.print(f"[red]Error:[/red] {e}")
sys.exit(1)
cli.add_command(bump)

View File

@@ -12,7 +12,7 @@ dev = [
"mypy==1.19.1",
"pre-commit==4.5.1",
"bandit==1.9.2",
"pytest==9.0.3",
"pytest==8.4.2",
"pytest-asyncio==1.3.0",
"pytest-subprocess==1.5.3",
"vcrpy==7.0.0", # pinned, less versions break pytest-recording
@@ -20,7 +20,7 @@ dev = [
"pytest-randomly==4.0.1",
"pytest-timeout==2.4.0",
"pytest-xdist==3.8.0",
"pytest-split==0.11.0",
"pytest-split==0.10.0",
"types-requests~=2.31.0.6",
"types-pyyaml==6.0.*",
"types-regex==2026.1.15.*",
@@ -162,7 +162,7 @@ info = "Commits must follow Conventional Commits 1.0.0."
[tool.uv]
exclude-newer = "1 day"
exclude-newer = "2026-04-10" # pinned for CVE-2026-39892; restore to "3 days" after 2026-04-11
# composio-core pins rich<14 but textual requires rich>=14.
# onnxruntime 1.24+ dropped Python 3.10 wheels; cap it so qdrant[fastembed] resolves on 3.10.
@@ -170,9 +170,6 @@ exclude-newer = "1 day"
# langchain-core <1.2.28 has GHSA-926x-3r5x-gfhw (incomplete f-string validation).
# transformers 4.57.6 has CVE-2026-1839; force 5.4+ (docling 2.84 allows huggingface-hub>=1).
# cryptography 46.0.6 has CVE-2026-39892; force 46.0.7+.
# pypdf <6.10.1 has CVE-2026-40260 and GHSA-jj6c-8h6c-hppx; force 6.10.1+.
# uv <0.11.6 has GHSA-pjjw-68hj-v9mw; force 0.11.6+.
# python-multipart <0.0.26 has GHSA-mj87-hwqh-73pj; force 0.0.26+.
override-dependencies = [
"rich>=13.7.1",
"onnxruntime<1.24; python_version < '3.11'",
@@ -181,9 +178,6 @@ override-dependencies = [
"urllib3>=2.6.3",
"transformers>=5.4.0; python_version >= '3.10'",
"cryptography>=46.0.7",
"pypdf>=6.10.1,<7",
"uv>=0.11.6,<1",
"python-multipart>=0.0.26,<1",
]
[tool.uv.workspace]

uv.lock (generated)
View File

@@ -13,8 +13,7 @@ resolution-markers = [
]
[options]
exclude-newer = "2026-04-14T20:20:18.36862Z"
exclude-newer-span = "P1D"
exclude-newer = "2026-04-10T16:00:00Z"
[manifest]
members = [
@@ -28,12 +27,9 @@ overrides = [
{ name = "langchain-core", specifier = ">=1.2.28,<2" },
{ name = "onnxruntime", marker = "python_full_version < '3.11'", specifier = "<1.24" },
{ name = "pillow", specifier = ">=12.1.1" },
{ name = "pypdf", specifier = ">=6.10.1,<7" },
{ name = "python-multipart", specifier = ">=0.0.26,<1" },
{ name = "rich", specifier = ">=13.7.1" },
{ name = "transformers", marker = "python_full_version >= '3.10'", specifier = ">=5.4.0" },
{ name = "urllib3", specifier = ">=2.6.3" },
{ name = "uv", specifier = ">=0.11.6,<1" },
]
[manifest.dependency-groups]
@@ -44,11 +40,11 @@ dev = [
{ name = "mypy", specifier = "==1.19.1" },
{ name = "pip-audit", specifier = "==2.9.0" },
{ name = "pre-commit", specifier = "==4.5.1" },
{ name = "pytest", specifier = "==9.0.3" },
{ name = "pytest", specifier = "==8.4.2" },
{ name = "pytest-asyncio", specifier = "==1.3.0" },
{ name = "pytest-randomly", specifier = "==4.0.1" },
{ name = "pytest-recording", specifier = "==0.13.4" },
{ name = "pytest-split", specifier = "==0.11.0" },
{ name = "pytest-split", specifier = "==0.10.0" },
{ name = "pytest-subprocess", specifier = "==1.5.3" },
{ name = "pytest-timeout", specifier = "==2.4.0" },
{ name = "pytest-xdist", specifier = "==3.8.0" },
@@ -1356,7 +1352,7 @@ requires-dist = [
{ name = "litellm", marker = "extra == 'litellm'", specifier = "~=1.83.0" },
{ name = "mcp", specifier = "~=1.26.0" },
{ name = "mem0ai", marker = "extra == 'mem0'", specifier = "~=0.1.94" },
{ name = "openai", specifier = ">=2.0.0,<3" },
{ name = "openai", specifier = ">=1.83.0,<3" },
{ name = "openpyxl", specifier = "~=3.1.5" },
{ name = "openpyxl", marker = "extra == 'openpyxl'", specifier = "~=3.1.5" },
{ name = "opentelemetry-api", specifier = "~=1.34.0" },
@@ -6728,14 +6724,14 @@ wheels = [
[[package]]
name = "pypdf"
version = "6.10.1"
version = "6.10.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "typing-extensions", marker = "python_full_version < '3.11'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/66/79/f2730c42ec7891a75a2fcea2eb4f356872bcbc671b711418060424796612/pypdf-6.10.1.tar.gz", hash = "sha256:62e6ca7f65aaa28b3d192addb44f97296e4be1748f57ed0f4efb2d4915841880", size = 5315704, upload-time = "2026-04-14T12:55:20.996Z" }
sdist = { url = "https://files.pythonhosted.org/packages/b8/9f/ca96abf18683ca12602065e4ed2bec9050b672c87d317f1079abc7b6d993/pypdf-6.10.0.tar.gz", hash = "sha256:4c5a48ba258c37024ec2505f7e8fd858525f5502784a2e1c8d415604af29f6ef", size = 5314833, upload-time = "2026-04-10T09:34:57.102Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/f0/04/e3aa7f1f14dbc53429cae34666261eb935d99bd61d24756ab94d7e0309da/pypdf-6.10.1-py3-none-any.whl", hash = "sha256:6331940d3bfe75b7e6601d35db7adabab5fc1d716efaeb384e3c0c3957d033de", size = 335606, upload-time = "2026-04-14T12:55:18.941Z" },
{ url = "https://files.pythonhosted.org/packages/55/f2/7ebe366f633f30a6ad105f650f44f24f98cb1335c4157d21ae47138b3482/pypdf-6.10.0-py3-none-any.whl", hash = "sha256:90005e959e1596c6e6c84c8b0ad383285b3e17011751cedd17f2ce8fcdfc86de", size = 334459, upload-time = "2026-04-10T09:34:54.966Z" },
]
[[package]]
@@ -6818,7 +6814,7 @@ sdist = { url = "https://files.pythonhosted.org/packages/12/a0/d0638470df605ce26
[[package]]
name = "pytest"
version = "9.0.3"
version = "8.4.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
@@ -6829,9 +6825,9 @@ dependencies = [
{ name = "pygments" },
{ name = "tomli", marker = "python_full_version < '3.11'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/7d/0d/549bd94f1a0a402dc8cf64563a117c0f3765662e2e668477624baeec44d5/pytest-9.0.3.tar.gz", hash = "sha256:b86ada508af81d19edeb213c681b1d48246c1a91d304c6c81a427674c17eb91c", size = 1572165, upload-time = "2026-04-07T17:16:18.027Z" }
sdist = { url = "https://files.pythonhosted.org/packages/a3/5c/00a0e072241553e1a7496d638deababa67c5058571567b92a7eaa258397c/pytest-8.4.2.tar.gz", hash = "sha256:86c0d0b93306b961d58d62a4db4879f27fe25513d4b969df351abdddb3c30e01", size = 1519618, upload-time = "2025-09-04T14:34:22.711Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d4/24/a372aaf5c9b7208e7112038812994107bc65a84cd00e0354a88c2c77a617/pytest-9.0.3-py3-none-any.whl", hash = "sha256:2c5efc453d45394fdd706ade797c0a81091eccd1d6e4bccfcd476e2b8e0ab5d9", size = 375249, upload-time = "2026-04-07T17:16:16.13Z" },
{ url = "https://files.pythonhosted.org/packages/a8/a4/20da314d277121d6534b3a980b29035dcd51e6744bd79075a6ce8fa4eb8d/pytest-8.4.2-py3-none-any.whl", hash = "sha256:872f880de3fc3a5bdc88a11b39c9710c3497a547cfa9320bc3c5e62fbf272e79", size = 365750, upload-time = "2025-09-04T14:34:20.226Z" },
]
[[package]]
@@ -6875,14 +6871,14 @@ wheels = [
[[package]]
name = "pytest-split"
version = "0.11.0"
version = "0.10.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pytest" },
]
sdist = { url = "https://files.pythonhosted.org/packages/2f/16/8af4c5f2ceb3640bb1f78dfdf5c184556b10dfe9369feaaad7ff1c13f329/pytest_split-0.11.0.tar.gz", hash = "sha256:8ebdb29cc72cc962e8eb1ec07db1eeb98ab25e215ed8e3216f6b9fc7ce0ec2b5", size = 13421, upload-time = "2026-02-03T09:14:31.469Z" }
sdist = { url = "https://files.pythonhosted.org/packages/46/d7/e30ba44adf83f15aee3f636daea54efadf735769edc0f0a7d98163f61038/pytest_split-0.10.0.tar.gz", hash = "sha256:adf80ba9fef7be89500d571e705b4f963dfa05038edf35e4925817e6b34ea66f", size = 13903, upload-time = "2024-10-16T15:45:19.783Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ae/a1/d4423657caaa8be9b31e491592b49cebdcfd434d3e74512ce71f6ec39905/pytest_split-0.11.0-py3-none-any.whl", hash = "sha256:899d7c0f5730da91e2daf283860eb73b503259cb416851a65599368849c7f382", size = 11911, upload-time = "2026-02-03T09:14:33.708Z" },
{ url = "https://files.pythonhosted.org/packages/d6/a7/cad88e9c1109a5c2a320d608daa32e5ee008ccbc766310f54b1cd6b3d69c/pytest_split-0.10.0-py3-none-any.whl", hash = "sha256:466096b086a7147bcd423c6e6c2e57fc62af1c5ea2e256b4ed50fc030fc3dddc", size = 11961, upload-time = "2024-10-16T15:45:18.289Z" },
]
[[package]]
@@ -6989,11 +6985,11 @@ wheels = [
[[package]]
name = "python-multipart"
version = "0.0.26"
version = "0.0.24"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/88/71/b145a380824a960ebd60e1014256dbb7d2253f2316ff2d73dfd8928ec2c3/python_multipart-0.0.26.tar.gz", hash = "sha256:08fadc45918cd615e26846437f50c5d6d23304da32c341f289a617127b081f17", size = 43501, upload-time = "2026-04-10T14:09:59.473Z" }
sdist = { url = "https://files.pythonhosted.org/packages/8a/45/e23b5dc14ddb9918ae4a625379506b17b6f8fc56ca1d82db62462f59aea6/python_multipart-0.0.24.tar.gz", hash = "sha256:9574c97e1c026e00bc30340ef7c7d76739512ab4dfd428fec8c330fa6a5cc3c8", size = 37695, upload-time = "2026-04-05T20:49:13.829Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/9a/22/f1925cdda983ab66fc8ec6ec8014b959262747e58bdca26a4e3d1da29d56/python_multipart-0.0.26-py3-none-any.whl", hash = "sha256:c0b169f8c4484c13b0dcf2ef0ec3a4adb255c4b7d18d8e420477d2b1dd03f185", size = 28847, upload-time = "2026-04-10T14:09:58.131Z" },
{ url = "https://files.pythonhosted.org/packages/a3/73/89930efabd4da63cea44a3f438aeb753d600123570e6d6264e763617a9ce/python_multipart-0.0.24-py3-none-any.whl", hash = "sha256:9b110a98db707df01a53c194f0af075e736a770dc5058089650d70b4a182f950", size = 24420, upload-time = "2026-04-05T20:49:12.555Z" },
]
[[package]]