Compare commits

..

2 Commits

Author SHA1 Message Date
Lorenze Jay
d79c4a62a1 Merge branch 'main' into devin/1773669058-fix-trained-agents-data-file 2026-04-09 13:19:44 -07:00
João
79013a6dc2 fix: respect custom trained_agents_data_file during inference
Agents always loaded from the hardcoded 'trained_agents_data.pkl' during
inference, ignoring any custom filename supplied at training time via
'crewai train -f <custom>.pkl'.

Changes:
- Add 'trained_agents_data_file' field to Crew (defaults to
  'trained_agents_data.pkl') so users can specify which file to load
  trained agent suggestions from during inference.
- Update Agent._use_trained_data() to accept an optional filename
  parameter instead of always using the hardcoded constant.
- Update apply_training_data() in agent/utils.py to propagate the
  crew's trained_agents_data_file to the agent.
- Add tests for custom filename propagation at agent and crew levels.

Closes #4905

Co-Authored-By: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
2026-03-16 13:58:45 +00:00
89 changed files with 1428 additions and 5416 deletions


@@ -24,14 +24,6 @@ repos:
rev: 0.11.3
hooks:
- id: uv-lock
- repo: local
hooks:
- id: pip-audit
name: pip-audit
entry: bash -c 'source .venv/bin/activate && uv run pip-audit --skip-editable --ignore-vuln CVE-2025-69872 --ignore-vuln CVE-2026-25645 --ignore-vuln CVE-2026-27448 --ignore-vuln CVE-2026-27459 --ignore-vuln PYSEC-2023-235' --
language: system
pass_filenames: false
stages: [pre-push, manual]
- repo: https://github.com/commitizen-tools/commitizen
rev: v4.10.1
hooks:


@@ -4,87 +4,6 @@ description: "Product updates, improvements, and fixes
icon: "clock"
mode: "wide"
---
<Update label="Apr 15, 2026">
## v1.14.2a4
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a4)
## What's Changed
### Features
- Add resume hints to devtools release on failure
### Bug Fixes
- Fix strict mode forwarding to Bedrock Converse API
- Pin pytest to 9.0.3 for security vulnerability GHSA-6w46-j5rx-g56g
- Bump OpenAI lower bound to >=2.0.0
### Documentation
- Update changelog and version for v1.14.2a3
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 13, 2026">
## v1.14.2a3
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a3)
## What's Changed
### Features
- Add deploy validation CLI
- Improve LLM initialization ergonomics
### Bug Fixes
- Override pypdf and uv to patched versions for CVE-2026-40260 and GHSA-pjjw-68hj-v9mw
- Upgrade requests to >=2.33.0 for a CVE temp file vulnerability
- Preserve Bedrock tool call arguments by removing a truthy default
- Sanitize tool schemas for strict mode
- Deflake the MemoryRecord embedding serialization test
### Documentation
- Clean up enterprise A2A language
- Add enterprise A2A feature documentation
- Update OSS A2A documentation
- Update changelog and version for v1.14.2a2
## Contributors
@Yanhu007, @greysonlalonde
</Update>
<Update label="Apr 10, 2026">
## v1.14.2a2
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a2)
## What's Changed
### Features
- Add checkpoint TUI with tree view, fork support, and editable inputs/outputs
- Enrich LLM token tracking with reasoning tokens and cache creation tokens
- Add `from_checkpoint` parameter to kickoff methods
- Embed `crewai_version` in checkpoints with a migration framework
- Add checkpoint forking with lineage tracking
### Bug Fixes
- Fix strict mode forwarding to Anthropic and Bedrock providers
- Harden NL2SQLTool with a read-only default, query validation, and parameterized queries
### Documentation
- Update changelog and version for v1.14.2a1
## Contributors
@alex-clawd, @github-actions[bot], @greysonlalonde, @lucasgomide
</Update>
<Update label="Apr 9, 2026">
## v1.14.2a1


@@ -392,8 +392,7 @@
"en/enterprise/features/marketplace",
"en/enterprise/features/agent-repositories",
"en/enterprise/features/tools-and-integrations",
"en/enterprise/features/pii-trace-redactions",
"en/enterprise/features/a2a"
"en/enterprise/features/pii-trace-redactions"
]
},
{
@@ -866,8 +865,7 @@
"en/enterprise/features/marketplace",
"en/enterprise/features/agent-repositories",
"en/enterprise/features/tools-and-integrations",
"en/enterprise/features/pii-trace-redactions",
"en/enterprise/features/a2a"
"en/enterprise/features/pii-trace-redactions"
]
},
{
@@ -1340,8 +1338,7 @@
"en/enterprise/features/marketplace",
"en/enterprise/features/agent-repositories",
"en/enterprise/features/tools-and-integrations",
"en/enterprise/features/pii-trace-redactions",
"en/enterprise/features/a2a"
"en/enterprise/features/pii-trace-redactions"
]
},
{
@@ -1814,8 +1811,7 @@
"en/enterprise/features/marketplace",
"en/enterprise/features/agent-repositories",
"en/enterprise/features/tools-and-integrations",
"en/enterprise/features/pii-trace-redactions",
"en/enterprise/features/a2a"
"en/enterprise/features/pii-trace-redactions"
]
},
{
@@ -2287,8 +2283,7 @@
"en/enterprise/features/marketplace",
"en/enterprise/features/agent-repositories",
"en/enterprise/features/tools-and-integrations",
"en/enterprise/features/pii-trace-redactions",
"en/enterprise/features/a2a"
"en/enterprise/features/pii-trace-redactions"
]
},
{
@@ -2759,8 +2754,7 @@
"en/enterprise/features/marketplace",
"en/enterprise/features/agent-repositories",
"en/enterprise/features/tools-and-integrations",
"en/enterprise/features/pii-trace-redactions",
"en/enterprise/features/a2a"
"en/enterprise/features/pii-trace-redactions"
]
},
{
@@ -3231,8 +3225,7 @@
"en/enterprise/features/marketplace",
"en/enterprise/features/agent-repositories",
"en/enterprise/features/tools-and-integrations",
"en/enterprise/features/pii-trace-redactions",
"en/enterprise/features/a2a"
"en/enterprise/features/pii-trace-redactions"
]
},
{
@@ -3705,8 +3698,7 @@
"en/enterprise/features/marketplace",
"en/enterprise/features/agent-repositories",
"en/enterprise/features/tools-and-integrations",
"en/enterprise/features/pii-trace-redactions",
"en/enterprise/features/a2a"
"en/enterprise/features/pii-trace-redactions"
]
},
{
@@ -4177,8 +4169,7 @@
"en/enterprise/features/marketplace",
"en/enterprise/features/agent-repositories",
"en/enterprise/features/tools-and-integrations",
"en/enterprise/features/pii-trace-redactions",
"en/enterprise/features/a2a"
"en/enterprise/features/pii-trace-redactions"
]
},
{
@@ -4652,8 +4643,7 @@
"en/enterprise/features/marketplace",
"en/enterprise/features/agent-repositories",
"en/enterprise/features/tools-and-integrations",
"en/enterprise/features/pii-trace-redactions",
"en/enterprise/features/a2a"
"en/enterprise/features/pii-trace-redactions"
]
},
{


@@ -4,87 +4,6 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="Apr 15, 2026">
## v1.14.2a4
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a4)
## What's Changed
### Features
- Add resume hints to devtools release on failure
### Bug Fixes
- Fix strict mode forwarding to Bedrock Converse API
- Pin pytest to 9.0.3 for security vulnerability GHSA-6w46-j5rx-g56g
- Bump OpenAI lower bound to >=2.0.0
### Documentation
- Update changelog and version for v1.14.2a3
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 13, 2026">
## v1.14.2a3
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a3)
## What's Changed
### Features
- Add deploy validation CLI
- Improve LLM initialization ergonomics
### Bug Fixes
- Override pypdf and uv to patched versions for CVE-2026-40260 and GHSA-pjjw-68hj-v9mw
- Upgrade requests to >=2.33.0 for CVE temp file vulnerability
- Preserve Bedrock tool call arguments by removing truthy default
- Sanitize tool schemas for strict mode
- Deflake MemoryRecord embedding serialization test
### Documentation
- Clean up enterprise A2A language
- Add enterprise A2A feature documentation
- Update OSS A2A documentation
- Update changelog and version for v1.14.2a2
## Contributors
@Yanhu007, @greysonlalonde
</Update>
<Update label="Apr 10, 2026">
## v1.14.2a2
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a2)
## What's Changed
### Features
- Add checkpoint TUI with tree view, fork support, and editable inputs/outputs
- Enrich LLM token tracking with reasoning tokens and cache creation tokens
- Add `from_checkpoint` parameter to kickoff methods
- Embed `crewai_version` in checkpoints with migration framework
- Add checkpoint forking with lineage tracking
### Bug Fixes
- Fix strict mode forwarding to Anthropic and Bedrock providers
- Harden NL2SQLTool with read-only default, query validation, and parameterized queries
### Documentation
- Update changelog and version for v1.14.2a1
## Contributors
@alex-clawd, @github-actions[bot], @greysonlalonde, @lucasgomide
</Update>
<Update label="Apr 09, 2026">
## v1.14.2a1


@@ -54,7 +54,6 @@ crew = Crew(
| `on_events` | `list[str]` | `["task_completed"]` | Event types that trigger a checkpoint |
| `provider` | `BaseProvider` | `JsonProvider()` | Storage backend |
| `max_checkpoints` | `int \| None` | `None` | Max checkpoints to keep. Oldest are pruned after each write. Pruning is handled by the provider. |
| `restore_from` | `Path \| str \| None` | `None` | Path to a checkpoint to restore from. Used when passing config via a kickoff method's `from_checkpoint` parameter. |
### Inheritance and Opt-Out
@@ -80,42 +79,13 @@ crew = Crew(
## Resuming from a Checkpoint
Pass a `CheckpointConfig` with `restore_from` to any kickoff method. The crew restores from that checkpoint, skips completed tasks, and resumes.
```python
from crewai import Crew, CheckpointConfig

crew = Crew(agents=[...], tasks=[...])
result = crew.kickoff(
    from_checkpoint=CheckpointConfig(
        restore_from="./my_checkpoints/20260407T120000_abc123.json",
    ),
)

# Restore and resume
crew = Crew.from_checkpoint("./my_checkpoints/20260407T120000_abc123.json")
result = crew.kickoff()  # picks up from last completed task
```
Remaining `CheckpointConfig` fields apply to the new run, so checkpointing continues after the restore.
You can also use the classmethod directly:
```python
config = CheckpointConfig(restore_from="./my_checkpoints/20260407T120000_abc123.json")
crew = Crew.from_checkpoint(config)
result = crew.kickoff()
```
## Forking from a Checkpoint
`fork()` restores a checkpoint and starts a new execution branch. Useful for exploring alternative paths from the same point.
```python
from crewai import Crew, CheckpointConfig
config = CheckpointConfig(restore_from="./my_checkpoints/20260407T120000_abc123.json")
crew = Crew.fork(config, branch="experiment-a")
result = crew.kickoff(inputs={"strategy": "aggressive"})
```
Each fork gets a unique lineage ID so checkpoints from different branches don't collide. The `branch` label is optional and auto-generated if omitted.
The restored crew skips already-completed tasks and resumes from the first incomplete one.
## Works on Crew, Flow, and Agent
@@ -155,8 +125,7 @@ flow = MyFlow(
result = flow.kickoff()
# Resume
config = CheckpointConfig(restore_from="./flow_cp/20260407T120000_abc123.json")
flow = MyFlow.from_checkpoint(config)
flow = MyFlow.from_checkpoint("./flow_cp/20260407T120000_abc123.json")
result = flow.kickoff()
```
@@ -262,44 +231,3 @@ async def on_llm_done_async(source, event, state):
The `state` argument is the `RuntimeState` passed automatically by the event bus when your handler accepts 3 parameters. You can register handlers on any event type listed in the [Event Listeners](/en/concepts/event-listener) documentation.
Checkpointing is best-effort: if a checkpoint write fails, the error is logged but execution continues uninterrupted.
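The best-effort behavior can be approximated as follows. This is a sketch, not the actual provider interface; `provider.save` is a hypothetical method standing in for whatever the storage backend exposes:

```python
import logging

logger = logging.getLogger("checkpointing")

def write_checkpoint(provider, state: dict) -> bool:
    """Try to persist a checkpoint; log and continue on failure."""
    try:
        provider.save(state)  # hypothetical storage-backend call
        return True
    except Exception as exc:
        # Best-effort: the error is logged, execution is not interrupted.
        logger.warning("checkpoint write failed: %s", exc)
        return False
```

A failed write returns `False` and logs a warning, so the surrounding task loop never sees the exception.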
## CLI
The `crewai checkpoint` command gives you a TUI for browsing, inspecting, resuming, and forking checkpoints. It auto-detects whether your checkpoints are JSON files or a SQLite database.
```bash
# Launch the TUI — auto-detects .checkpoints/ or .checkpoints.db
crewai checkpoint
# Point at a specific location
crewai checkpoint --location ./my_checkpoints
crewai checkpoint --location ./.checkpoints.db
```
<Frame>
<img src="/images/checkpointing.png" alt="Checkpoint TUI" />
</Frame>
The left panel is a tree view. Checkpoints are grouped by branch, and forks nest under the checkpoint they diverged from. Select a checkpoint to see its metadata, entity state, and task progress in the detail panel. Hit **Resume** to pick up where it left off, or **Fork** to start a new branch from that point.
### Editing inputs and task outputs
When a checkpoint is selected, the detail panel shows:
- **Inputs** — if the original kickoff had inputs (e.g. `{topic}`), they appear as editable fields pre-filled with the original values. Change them before resuming or forking.
- **Task outputs** — completed tasks show their output in editable text areas. Edit a task's output to change the context that downstream tasks receive. When you modify a task output and hit Fork, all subsequent tasks are invalidated and re-run with the new context.
This is useful for "what if" exploration — fork from a checkpoint, tweak a task's result, and see how it changes downstream behavior.
### Subcommands
```bash
# List all checkpoints
crewai checkpoint list ./my_checkpoints
# Inspect a specific checkpoint
crewai checkpoint info ./my_checkpoints/20260407T120000_abc123.json
# Inspect latest in a SQLite database
crewai checkpoint info ./.checkpoints.db
```


@@ -1,227 +0,0 @@
---
title: A2A on AMP
description: Production-grade Agent-to-Agent communication with distributed state and multi-scheme authentication
icon: "network-wired"
mode: "wide"
---
<Warning>
A2A server agents on AMP are in early release. APIs may change in future versions.
</Warning>
## Overview
CrewAI AMP extends the open-source [A2A protocol implementation](/en/learn/a2a-agent-delegation) with production infrastructure for deploying distributed agents at scale. AMP supports A2A protocol versions 0.2 and 0.3. When you deploy a crew or agent with A2A server configuration to AMP, the platform automatically provisions distributed state management, authentication, multi-transport endpoints, and lifecycle management.
<Note>
For A2A protocol fundamentals, client/server configuration, and authentication schemes, see the [A2A Agent Delegation](/en/learn/a2a-agent-delegation) documentation. This page covers what AMP adds on top of the open-source implementation.
</Note>
### Usage
Add `A2AServerConfig` to any agent in your crew and deploy to AMP. The platform detects agents with server configuration and automatically registers A2A endpoints, generates agent cards, and provisions the infrastructure described below.
```python
from crewai import Agent, Crew, Task
from crewai.a2a import A2AServerConfig
from crewai.a2a.auth import EnterpriseTokenAuth

agent = Agent(
    role="Data Analyst",
    goal="Analyze datasets and provide insights",
    backstory="Expert data scientist with statistical analysis skills",
    llm="gpt-4o",
    a2a=A2AServerConfig(
        auth=EnterpriseTokenAuth()
    )
)

task = Task(
    description="Analyze the provided dataset",
    expected_output="Statistical summary with key insights",
    agent=agent
)

crew = Crew(agents=[agent], tasks=[task])
```
After [deploying to AMP](/en/enterprise/guides/deploy-to-amp), the platform registers two levels of A2A endpoints:
- **Crew-level**: an aggregate agent card at `/.well-known/agent-card.json` where each agent with `A2AServerConfig` is listed as a skill, with a JSON-RPC endpoint at `/a2a`
- **Per-agent**: isolated agent cards and JSON-RPC endpoints mounted at `/a2a/agents/{role}/`, each with its own tenancy
Clients can interact with the crew as a whole or target a specific agent directly. To route a request to a specific agent through the crew-level endpoint, include `"target_agent"` in the message metadata with the agent's slugified role name (e.g., `"data-analyst"` for an agent with role `"Data Analyst"`). If no `target_agent` is provided, the request is handled by the first agent in the crew.
See [A2A Agent Delegation](/en/learn/a2a-agent-delegation#server-configuration-options) for the full list of `A2AServerConfig` options.
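As a sketch of the routing convention, a JSON-RPC `message/send` request targeting a specific agent carries `target_agent` in the message metadata. The payload shape below is hedged from the A2A protocol; only the `target_agent` key is taken from this page, the rest is illustrative:

```python
import uuid

def build_targeted_message(text: str, target_agent: str) -> dict:
    """Build a JSON-RPC payload routed to one agent via metadata."""
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": text}],
                "messageId": str(uuid.uuid4()),
                # Slugified role name, e.g. "data-analyst" for "Data Analyst".
                "metadata": {"target_agent": target_agent},
            }
        },
    }

payload = build_targeted_message("Summarize the dataset", "data-analyst")
```

Omitting the `metadata` entry would route the request to the first agent in the crew, as described above.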
<Warning>
Per the A2A protocol, agent cards are publicly accessible to enable discovery. This includes both the crew-level card at `/.well-known/agent-card.json` and per-agent cards at `/a2a/agents/{role}/.well-known/agent-card.json`. Do not include sensitive information in agent names, descriptions, or skill definitions.
</Warning>
### File Inputs and Structured Output
A2A on AMP supports passing files and requesting structured output in both directions. Clients can send files as `FilePart`s and request structured responses by embedding a JSON schema in the message. Server agents receive files as `input_files` on the task, and return structured data as `DataPart`s when a schema is provided. See [File Inputs and Structured Output](/en/learn/a2a-agent-delegation#file-inputs-and-structured-output) for details.
### What AMP Adds
<CardGroup cols={2}>
<Card title="Distributed State" icon="database">
Persistent task, context, and result storage
</Card>
<Card title="Enterprise Authentication" icon="shield-halved">
OIDC, OAuth2, mTLS, and Enterprise token validation beyond simple bearer tokens
</Card>
<Card title="gRPC Transport" icon="bolt">
Full gRPC server with TLS and authentication
</Card>
<Card title="Context Lifecycle" icon="clock-rotate-left">
Automatic idle detection, expiration, and cleanup of long-running conversations
</Card>
<Card title="Signed Webhooks" icon="signature">
HMAC-SHA256 signed push notifications with replay protection
</Card>
<Card title="Multi-Transport" icon="arrows-split-up-and-left">
REST, JSON-RPC, and gRPC endpoints served simultaneously from a single deployment
</Card>
</CardGroup>
---
## Distributed State Management
In the open-source implementation, task and context state lives in memory on a single process. AMP replaces this with persistent, distributed stores.
### Storage Layers
| Store | Purpose |
|---|---|
| **Task Store** | Persists A2A task state and metadata |
| **Context Store** | Tracks conversation context, creation time, last activity, and associated tasks |
| **Result Store** | Caches task results for retrieval |
| **Push Config Store** | Manages webhook subscriptions per task |
Multiple A2A deployments are automatically isolated from each other, preventing data collisions when sharing infrastructure.
---
## Enterprise Authentication
AMP supports six authentication schemes for incoming A2A requests, configurable per deployment. Authentication works across both HTTP and gRPC transports.
| Scheme | Description | Use Case |
|---|---|---|
| **SimpleTokenAuth** | Static bearer token from `AUTH_TOKEN` env var | Development, simple deployments |
| **EnterpriseTokenAuth** | Token verification via CrewAI PlusAPI with integration token claims | AMP-to-AMP agent communication |
| **OIDCAuth** | OpenID Connect JWT validation with JWKS endpoint caching | Enterprise SSO integration |
| **OAuth2ServerAuth** | OAuth2 with configurable scopes | Fine-grained access control |
| **APIKeyServerAuth** | API key validation via header or query parameter | Third-party integrations |
| **MTLSServerAuth** | Mutual TLS certificate-based authentication | Zero-trust environments |
The configured auth scheme automatically populates the agent card's `securitySchemes` and `security` fields. Clients discover authentication requirements by fetching the agent card before making requests.
---
## Extended Agent Cards
AMP supports role-based skill visibility through extended agent cards. Unauthenticated users see the standard agent card with public skills. Authenticated users receive an extended card with additional capabilities.
This enables patterns like:
- Public agents that expose basic skills to anyone, with advanced skills available to authenticated clients
- Internal agents that advertise different capabilities based on the caller's identity
---
## gRPC Transport
If enabled, AMP provides full gRPC support alongside the default JSON-RPC transport.
- **TLS termination** with configurable certificate and key paths
- **gRPC reflection** for debugging with tools like `grpcurl`
- **Authentication** using the same schemes available for HTTP
- **Extension validation** ensuring clients support required protocol extensions
- **Version negotiation** across A2A protocol versions 0.2 and 0.3
For deployments exposing multiple agents, AMP automatically allocates per-agent gRPC ports and coordinates TLS, startup, and shutdown across all servers.
---
## Context Lifecycle Management
AMP tracks the lifecycle of A2A conversation contexts and automatically manages cleanup.
### Lifecycle States
| State | Condition | Action |
|---|---|---|
| **Active** | Context has recent activity | None |
| **Idle** | No activity for a configured period | Marked idle, event emitted |
| **Expired** | Context exceeds its maximum lifetime | Marked expired, associated tasks cleaned up, event emitted |
A background cleanup task runs hourly to scan for idle and expired contexts. All state transitions emit CrewAI events that integrate with the platform's observability features.
---
## Signed Push Notifications
When an A2A agent sends push notifications to a client webhook, AMP signs each request with HMAC-SHA256 to ensure integrity and prevent tampering.
### Signature Headers
| Header | Purpose |
|---|---|
| `X-A2A-Signature` | HMAC-SHA256 signature in `sha256={hex_digest}` format |
| `X-A2A-Signature-Timestamp` | Unix timestamp bound to the signature |
| `X-A2A-Notification-Token` | Optional notification auth token |
### Security Properties
- **Integrity**: payload cannot be modified without invalidating the signature
- **Replay protection**: signatures are timestamp-bound with a configurable tolerance window
- **Retry with backoff**: failed deliveries retry with exponential backoff
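A receiving webhook can verify such a signature roughly as follows. The header names come from the table above; the exact signing input (here assumed to be `"{timestamp}.{body}"`), the tolerance window, and secret handling are assumptions:

```python
import hashlib
import hmac
import time

def verify_push(body: bytes, headers: dict, secret: bytes,
                tolerance_s: int = 300) -> bool:
    """Check the HMAC-SHA256 signature and reject stale timestamps."""
    sig = headers.get("X-A2A-Signature", "")
    ts = headers.get("X-A2A-Signature-Timestamp", "")
    if not sig.startswith("sha256=") or not ts.isdigit():
        return False
    # Replay protection: reject timestamps outside the tolerance window.
    if abs(time.time() - int(ts)) > tolerance_s:
        return False
    # Assumed signing input: timestamp bound to the payload.
    expected = hmac.new(secret, f"{ts}.".encode() + body,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig[len("sha256="):], expected)
```

Using `hmac.compare_digest` avoids timing side channels when comparing the received and expected digests.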
---
## Distributed Event Streaming
In the open-source implementation, SSE streaming works within a single process. AMP propagates SSE events across instances so that clients receive updates even when the instance holding the streaming connection differs from the instance executing the task.
---
## Multi-Transport Endpoints
AMP serves REST and JSON-RPC by default. gRPC is available as an additional transport if enabled.
| Transport | Path Convention | Description |
|---|---|---|
| **REST** | `/v1/message:send`, `/v1/message:stream`, `/v1/tasks` | Google API conventions |
| **JSON-RPC** | Standard A2A JSON-RPC endpoint | Default A2A protocol transport |
| **gRPC** | Per-agent port allocation | Optional, high-performance binary protocol |
All active transports share the same authentication, version negotiation, and extension validation. Agent cards are generated from agent and crew metadata — roles, goals, and tools become skills and descriptions — and automatically include interfaces for each active transport. They can also be manually configured via `A2AServerConfig`.
---
## Version and Extension Negotiation
AMP validates A2A protocol versions and extensions at the transport layer.
### Version Negotiation
- Clients send the `A2A-Version` header with their preferred version
- AMP validates against supported versions (0.2, 0.3) and falls back to 0.3 if unspecified
- The negotiated version is returned in the response headers
### Extension Validation
- Clients declare supported extensions via the `X-A2A-Extensions` header
- AMP validates that clients support all extensions the agent requires
- Requests from clients missing required extensions receive an `UnsupportedExtensionError`
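The negotiation steps above can be sketched as a single validation function. The header names and version list come from this page; the error type's exact shape and the comma-separated extension header format are assumptions:

```python
SUPPORTED_VERSIONS = ("0.2", "0.3")

class UnsupportedExtensionError(Exception):
    pass

def negotiate(headers: dict, required_extensions: set[str]) -> str:
    """Pick a protocol version and enforce required extensions."""
    # Fall back to 0.3 when the client sends no A2A-Version header.
    version = headers.get("A2A-Version") or "0.3"
    if version not in SUPPORTED_VERSIONS:
        raise ValueError(f"unsupported A2A version: {version}")
    declared = {
        e.strip()
        for e in headers.get("X-A2A-Extensions", "").split(",")
        if e.strip()
    }
    missing = required_extensions - declared
    if missing:
        raise UnsupportedExtensionError(f"client missing extensions: {missing}")
    return version  # echoed back in the response headers
```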
---
## Next Steps
- [A2A Agent Delegation](/en/learn/a2a-agent-delegation) — A2A protocol fundamentals and configuration
- [A2UI](/en/learn/a2ui) — Interactive UI rendering over A2A
- [Deploy to AMP](/en/enterprise/guides/deploy-to-amp) — General deployment guide
- [Webhook Streaming](/en/enterprise/features/webhook-streaming) — Event streaming for deployed automations


@@ -7,10 +7,6 @@ mode: "wide"
## A2A Agent Delegation
<Info>
Deploying A2A agents to production? See [A2A on AMP](/en/enterprise/features/a2a) for distributed state, enterprise authentication, gRPC transport, and horizontal scaling.
</Info>
CrewAI treats [A2A protocol](https://a2a-protocol.org/latest/) as a first-class delegation primitive, enabling agents to delegate tasks, request information, and collaborate with remote agents, as well as act as A2A-compliant server agents.
In client mode, agents autonomously choose between local execution and remote delegation based on task requirements.
@@ -100,28 +96,24 @@ The `A2AClientConfig` class accepts the following parameters:
Update mechanism for receiving task status. Options: `StreamingConfig`, `PollingConfig`, or `PushNotificationConfig`.
</ParamField>
<ParamField path="transport_protocol" type="Literal['JSONRPC', 'GRPC', 'HTTP+JSON']" default="JSONRPC">
Transport protocol for A2A communication. Options: `JSONRPC` (default), `GRPC`, or `HTTP+JSON`.
</ParamField>
<ParamField path="accepted_output_modes" type="list[str]" default='["application/json"]'>
Media types the client can accept in responses.
</ParamField>
<ParamField path="supported_transports" type="list[str]" default='["JSONRPC"]'>
Ordered list of transport protocols the client supports.
</ParamField>
<ParamField path="use_client_preference" type="bool" default="False">
Whether to prioritize client transport preferences over server.
</ParamField>
<ParamField path="extensions" type="list[str]" default="[]">
A2A protocol extension URIs the client supports.
</ParamField>
<ParamField path="client_extensions" type="list[A2AExtension]" default="[]">
Client-side processing hooks for tool injection, prompt augmentation, and response modification.
</ParamField>
<ParamField path="transport" type="ClientTransportConfig" default="ClientTransportConfig()">
Transport configuration including preferred transport, supported transports for negotiation, and protocol-specific settings (gRPC message sizes, keepalive, etc.).
</ParamField>
<ParamField path="transport_protocol" type="Literal['JSONRPC', 'GRPC', 'HTTP+JSON']" default="None">
**Deprecated**: Use `transport=ClientTransportConfig(preferred=...)` instead.
</ParamField>
<ParamField path="supported_transports" type="list[str]" default="None">
**Deprecated**: Use `transport=ClientTransportConfig(supported=...)` instead.
Extension URIs the client supports.
</ParamField>
## Authentication
@@ -413,7 +405,11 @@ agent = Agent(
Preferred endpoint URL. If set, overrides the URL passed to `to_agent_card()`.
</ParamField>
<ParamField path="protocol_version" type="str" default="0.3.0">
<ParamField path="preferred_transport" type="Literal['JSONRPC', 'GRPC', 'HTTP+JSON']" default="JSONRPC">
Transport protocol for the preferred endpoint.
</ParamField>
<ParamField path="protocol_version" type="str" default="0.3">
A2A protocol version this agent supports.
</ParamField>
@@ -445,36 +441,8 @@ agent = Agent(
Whether agent provides extended card to authenticated users.
</ParamField>
<ParamField path="extended_skills" type="list[AgentSkill]" default="[]">
Additional skills visible only to authenticated users in the extended agent card.
</ParamField>
<ParamField path="signing_config" type="AgentCardSigningConfig" default="None">
Configuration for signing the AgentCard with JWS. Supports RS256, ES256, PS256, and related algorithms.
</ParamField>
<ParamField path="server_extensions" type="list[ServerExtension]" default="[]">
Server-side A2A protocol extensions with `on_request`/`on_response` hooks that modify agent behavior.
</ParamField>
<ParamField path="push_notifications" type="ServerPushNotificationConfig" default="None">
Configuration for outgoing push notifications, including HMAC-SHA256 signing secret.
</ParamField>
<ParamField path="transport" type="ServerTransportConfig" default="ServerTransportConfig()">
Transport configuration including preferred transport, gRPC server settings, JSON-RPC paths, and HTTP+JSON settings.
</ParamField>
<ParamField path="auth" type="ServerAuthScheme" default="None">
Authentication scheme for incoming A2A requests. Defaults to `SimpleTokenAuth` using the `AUTH_TOKEN` environment variable.
</ParamField>
<ParamField path="preferred_transport" type="Literal['JSONRPC', 'GRPC', 'HTTP+JSON']" default="None">
**Deprecated**: Use `transport=ServerTransportConfig(preferred=...)` instead.
</ParamField>
<ParamField path="signatures" type="list[AgentCardSignature]" default="None">
**Deprecated**: Use `signing_config=AgentCardSigningConfig(...)` instead.
<ParamField path="signatures" type="list[AgentCardSignature]" default="[]">
JSON Web Signatures for the AgentCard.
</ParamField>
### Combined Client and Server
@@ -500,14 +468,6 @@ agent = Agent(
)
```
### File Inputs and Structured Output
A2A supports passing files and requesting structured output in both directions.
**Client side**: When delegating to a remote A2A agent, files from the task's `input_files` are sent as `FilePart`s in the outgoing message. If `response_model` is set on the `A2AClientConfig`, the Pydantic model's JSON schema is embedded in the message metadata, requesting structured output from the remote agent.
**Server side**: Incoming `FilePart`s are extracted and passed to the agent's task as `input_files`. If the client included a JSON schema, the server creates a response model from it and applies it to the task. When the agent returns structured data, the response is sent back as a `DataPart` rather than plain text.
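The round trip can be sketched with stand-in logic. The metadata key and part shapes below are illustrative; the actual `FilePart`/`DataPart` types come from the A2A SDK:

```python
import json

def embed_schema(metadata: dict, schema: dict) -> dict:
    # Client side: request structured output by attaching a JSON schema
    # to the outgoing message metadata (key name is illustrative).
    return {**metadata, "response_schema": schema}

def format_result(result: dict, schema_present: bool) -> dict:
    # Server side: structured data goes back as a DataPart when a
    # schema was provided, otherwise as plain text.
    if schema_present:
        return {"kind": "data", "data": result}
    return {"kind": "text", "text": json.dumps(result)}
```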
## Best Practices
<CardGroup cols={2}>

Binary image file changed (315 KiB before); contents not shown.


@@ -4,87 +4,6 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="Apr 15, 2026">
## v1.14.2a4
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a4)
## What's Changed
### Features
- Add resume hints to devtools release on failure
### Bug Fixes
- Fix strict mode forwarding to Bedrock Converse API
- Pin pytest to 9.0.3 for security vulnerability GHSA-6w46-j5rx-g56g
- Bump OpenAI lower bound to >=2.0.0
### Documentation
- Update changelog and version for v1.14.2a3
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 13, 2026">
## v1.14.2a3
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a3)
## What's Changed
### Features
- Add deploy validation CLI
- Improve LLM initialization ergonomics
### Bug Fixes
- Override pypdf and uv to patched versions for CVE-2026-40260 and GHSA-pjjw-68hj-v9mw
- Upgrade requests to >=2.33.0 for a CVE temp file vulnerability
- Preserve Bedrock tool call arguments by removing a truthy default
- Sanitize tool schemas for strict mode
- Deflake the MemoryRecord embedding serialization test
### Documentation
- Clean up enterprise A2A language
- Add enterprise A2A feature documentation
- Update OSS A2A documentation
- Update changelog and version for v1.14.2a2
## Contributors
@Yanhu007, @greysonlalonde
</Update>
<Update label="Apr 10, 2026">
## v1.14.2a2
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a2)
## What's Changed
### Features
- Add checkpoint TUI with tree view, fork support, and editable inputs/outputs
- Enrich LLM token tracking with reasoning tokens and cache creation tokens
- Add `from_checkpoint` parameter to kickoff methods
- Embed `crewai_version` in checkpoints with a migration framework
- Add checkpoint forking with lineage tracking
### Bug Fixes
- Fix strict mode forwarding to Anthropic and Bedrock providers
- Harden NL2SQLTool with a read-only default, query validation, and parameterized queries
### Documentation
- Update changelog and version for v1.14.2a1
## Contributors
@alex-clawd, @github-actions[bot], @greysonlalonde, @lucasgomide
</Update>
<Update label="Apr 9, 2026">
## v1.14.2a1


@@ -4,87 +4,6 @@ description: "CrewAI product updates, improvements, and fixes"
icon: "clock"
mode: "wide"
---
<Update label="Apr 15, 2026">
## v1.14.2a4
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a4)
## What's Changed
### Features
- Add resume hints to devtools release on failure
### Bug Fixes
- Fix strict mode forwarding to Bedrock Converse API
- Pin pytest to 9.0.3 for security vulnerability GHSA-6w46-j5rx-g56g
- Bump OpenAI lower bound to >=2.0.0
### Documentation
- Update changelog and version for v1.14.2a3
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 13, 2026">
## v1.14.2a3
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a3)
## What's Changed
### Features
- Add deploy validation CLI
- Improve LLM initialization ergonomics
### Bug Fixes
- Override pypdf and uv to patched versions for CVE-2026-40260 and GHSA-pjjw-68hj-v9mw
- Upgrade requests to >=2.33.0 for a CVE temp file vulnerability
- Preserve Bedrock tool call arguments by removing a truthy default
- Sanitize tool schemas for strict mode
- Deflake the MemoryRecord embedding serialization test
### Documentation
- Clean up enterprise A2A language
- Add enterprise A2A feature documentation
- Update OSS A2A documentation
- Update changelog and version for v1.14.2a2
## Contributors
@Yanhu007, @greysonlalonde
</Update>
<Update label="Apr 10, 2026">
## v1.14.2a2
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a2)
## What's Changed
### Features
- Add checkpoint TUI with tree view, fork support, and editable inputs/outputs
- Enrich LLM token tracking with reasoning tokens and cache creation tokens
- Add `from_checkpoint` parameter to kickoff methods
- Embed `crewai_version` in checkpoints with a migration framework
- Add checkpoint forking with lineage tracking
### Bug Fixes
- Fix strict mode forwarding to Anthropic and Bedrock providers
- Harden NL2SQLTool with a read-only default, query validation, and parameterized queries
### Documentation
- Update changelog and version for v1.14.2a1
## Contributors
@alex-clawd, @github-actions[bot], @greysonlalonde, @lucasgomide
</Update>
<Update label="Apr 09, 2026">
## v1.14.2a1

View File

@@ -9,7 +9,7 @@ authors = [
requires-python = ">=3.10, <3.14"
dependencies = [
"Pillow~=12.1.1",
"pypdf~=6.10.0",
"pypdf~=6.9.1",
"python-magic>=0.4.27",
"aiocache~=0.12.3",
"aiofiles~=24.1.0",

View File

@@ -152,4 +152,4 @@ __all__ = [
"wrap_file_source",
]
__version__ = "1.14.2a4"
__version__ = "1.14.2a1"

View File

@@ -9,8 +9,8 @@ authors = [
requires-python = ">=3.10, <3.14"
dependencies = [
"pytube~=15.0.0",
"requests>=2.33.0,<3",
"crewai==1.14.2a4",
"requests~=2.32.5",
"crewai==1.14.2a1",
"tiktoken~=0.8.0",
"beautifulsoup4~=4.13.4",
"python-docx~=1.2.0",

View File

@@ -305,4 +305,4 @@ __all__ = [
"ZapierActionTools",
]
__version__ = "1.14.2a4"
__version__ = "1.14.2a1"

View File

@@ -10,7 +10,7 @@ requires-python = ">=3.10, <3.14"
dependencies = [
# Core Dependencies
"pydantic~=2.11.9",
"openai>=2.0.0,<3",
"openai>=1.83.0,<3",
"instructor>=1.3.3",
# Text Processing
"pdfplumber~=0.11.4",
@@ -40,7 +40,7 @@ dependencies = [
"pydantic-settings~=2.10.1",
"httpx~=0.28.1",
"mcp~=1.26.0",
"uv~=0.11.6",
"uv~=0.9.13",
"aiosqlite~=0.21.0",
"pyyaml~=6.0",
"aiofiles~=24.1.0",
@@ -55,7 +55,7 @@ Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = [
"crewai-tools==1.14.2a4",
"crewai-tools==1.14.2a1",
]
embeddings = [
"tiktoken~=0.8.0"
@@ -74,8 +74,8 @@ qdrant = [
"qdrant-client[fastembed]~=1.14.3",
]
aws = [
"boto3~=1.42.79",
"aiobotocore~=3.4.0",
"boto3~=1.40.38",
"aiobotocore~=2.25.2",
]
watson = [
"ibm-watsonx-ai~=1.3.39",
@@ -87,7 +87,7 @@ litellm = [
"litellm~=1.83.0",
]
bedrock = [
"boto3~=1.42.79",
"boto3~=1.40.45",
]
google-genai = [
"google-genai~=1.65.0",

View File

@@ -46,7 +46,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:
_suppress_pydantic_deprecation_warnings()
__version__ = "1.14.2a4"
__version__ = "1.14.2a1"
_telemetry_submitted = False

View File

@@ -98,6 +98,7 @@ class A2AErrorCode(IntEnum):
"""The specified artifact was not found."""
# Error code to default message mapping
ERROR_MESSAGES: dict[int, str] = {
A2AErrorCode.JSON_PARSE_ERROR: "Parse error",
A2AErrorCode.INVALID_REQUEST: "Invalid Request",

View File

@@ -63,21 +63,25 @@ class A2AExtension(Protocol):
Example:
class MyExtension:
def inject_tools(self, agent: Agent) -> None:
# Add custom tools to the agent
pass
def extract_state_from_history(
self, conversation_history: Sequence[Message]
) -> ConversationState | None:
# Extract state from conversation
return None
def augment_prompt(
self, base_prompt: str, conversation_state: ConversationState | None
) -> str:
# Add custom instructions
return base_prompt
def process_response(
self, agent_response: Any, conversation_state: ConversationState | None
) -> Any:
# Modify response if needed
return agent_response
"""

View File

@@ -77,6 +77,7 @@ def extract_a2a_agent_ids_from_config(
else:
configs = a2a_config
# Filter to only client configs (those with endpoint)
client_configs: list[A2AClientConfigTypes] = [
config for config in configs if isinstance(config, (A2AConfig, A2AClientConfig))
]

View File

@@ -1161,9 +1161,19 @@ class Agent(BaseAgent):
return task_prompt
def _use_trained_data(self, task_prompt: str) -> str:
"""Use trained data for the agent task prompt to improve output."""
if data := CrewTrainingHandler(TRAINED_AGENTS_DATA_FILE).load():
def _use_trained_data(
self, task_prompt: str, trained_agents_data_file: str | None = None
) -> str:
"""Use trained data for the agent task prompt to improve output.
Args:
task_prompt: The task prompt to augment.
trained_agents_data_file: Optional path to the trained agents data
file. Falls back to the default ``TRAINED_AGENTS_DATA_FILE``
when not provided.
"""
filename = trained_agents_data_file or TRAINED_AGENTS_DATA_FILE
if data := CrewTrainingHandler(filename).load():
if trained_data_output := data.get(self.role):
task_prompt += (
"\n\nYou MUST follow these instructions: \n - "
@@ -1341,6 +1351,7 @@ class Agent(BaseAgent):
raw_tools: list[BaseTool] = self.tools or []
# Inject memory tools for standalone kickoff (crew path handles its own)
agent_memory = getattr(self, "memory", None)
if agent_memory is not None:
from crewai.tools.memory_tools import create_memory_tools
@@ -1398,6 +1409,7 @@ class Agent(BaseAgent):
if input_files:
all_files.update(input_files)
# Inject memory context for standalone kickoff (recall before execution)
if agent_memory is not None:
try:
crewai_event_bus.emit(
@@ -1483,6 +1495,8 @@ class Agent(BaseAgent):
Note:
For explicit async usage outside of Flow, use kickoff_async() directly.
"""
# Magic auto-async: if inside event loop (e.g., inside a Flow),
# return coroutine for Flow to await
if is_inside_event_loop():
return self.kickoff_async(messages, response_format, input_files)
@@ -1633,7 +1647,7 @@ class Agent(BaseAgent):
if isinstance(conversion_result, BaseModel):
formatted_result = conversion_result
except ConverterError:
pass
pass # Keep raw output if conversion fails
else:
raw_output = str(output) if not isinstance(output, str) else output
@@ -1715,6 +1729,7 @@ class Agent(BaseAgent):
elif callable(self.guardrail):
guardrail_callable = self.guardrail
else:
# Should not happen if called from kickoff with guardrail check
return output
guardrail_result = process_guardrail(
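The hunk above threads an optional filename through to the training-data lookup. A minimal sketch of that fallback, with a plain dict (`data_by_file`) standing in for `CrewTrainingHandler` and the join over suggestions an illustrative assumption:

```python
# Hedged sketch of the filename fallback in Agent._use_trained_data.
# data_by_file stands in for CrewTrainingHandler(filename).load().
TRAINED_AGENTS_DATA_FILE = "trained_agents_data.pkl"

def use_trained_data(task_prompt, role, data_by_file, trained_agents_data_file=None):
    # Fall back to the default constant when no custom file is supplied.
    filename = trained_agents_data_file or TRAINED_AGENTS_DATA_FILE
    if data := data_by_file.get(filename):
        if suggestions := data.get(role):
            task_prompt += (
                "\n\nYou MUST follow these instructions: \n - "
                + "\n - ".join(suggestions)
            )
    return task_prompt
```

With a custom file, the role's suggestions are appended; without one, the lookup still lands on the default constant, matching the pre-fix behavior.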

View File

@@ -41,6 +41,7 @@ class PlanningConfig(BaseModel):
from crewai import Agent
from crewai.agent.planning_config import PlanningConfig
# Simple usage — fast, linear execution (default)
agent = Agent(
role="Researcher",
goal="Research topics",
@@ -48,6 +49,7 @@ class PlanningConfig(BaseModel):
planning_config=PlanningConfig(),
)
# Balanced — replan only when steps fail
agent = Agent(
role="Researcher",
goal="Research topics",
@@ -57,6 +59,7 @@ class PlanningConfig(BaseModel):
),
)
# Full adaptive planning with refinement and replanning
agent = Agent(
role="Researcher",
goal="Research topics",
@@ -66,7 +69,7 @@ class PlanningConfig(BaseModel):
max_attempts=3,
max_steps=10,
plan_prompt="Create a focused plan for: {description}",
llm="gpt-4o-mini",
llm="gpt-4o-mini", # Use cheaper model for planning
),
)
```

View File

@@ -39,6 +39,7 @@ def handle_reasoning(agent: Agent, task: Task) -> None:
agent: The agent performing the task.
task: The task to execute.
"""
# Check if planning is enabled using the planning_enabled property
if not getattr(agent, "planning_enabled", False):
return
@@ -249,7 +250,13 @@ def apply_training_data(agent: Agent, task_prompt: str) -> str:
"""
if agent.crew and not isinstance(agent.crew, str) and agent.crew._train:
return agent._training_handler(task_prompt=task_prompt)
return agent._use_trained_data(task_prompt=task_prompt)
trained_agents_data_file = (
agent.crew.trained_agents_data_file if agent.crew else None
)
return agent._use_trained_data(
task_prompt=task_prompt,
trained_agents_data_file=trained_agents_data_file,
)
def process_tool_results(agent: Agent, result: Any) -> Any:
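The `apply_training_data` change above is what propagates a crew-level filename down to the agent. A sketch of that wiring, with `StubCrew` and `StubAgent` as illustrative stand-ins rather than the real crewai classes:

```python
# Hedged sketch: the crew's trained_agents_data_file (if any) reaches
# the agent's lookup. Stubs are illustrative, not the crewai classes.
class StubCrew:
    def __init__(self, trained_agents_data_file="trained_agents_data.pkl"):
        self.trained_agents_data_file = trained_agents_data_file

class StubAgent:
    def __init__(self, crew=None):
        self.crew = crew

    def _use_trained_data(self, task_prompt, trained_agents_data_file=None):
        # The real agent loads suggestions here; return the resolved
        # filename so the propagation is observable.
        return trained_agents_data_file or "trained_agents_data.pkl"

def apply_training_data(agent, task_prompt):
    trained_agents_data_file = (
        agent.crew.trained_agents_data_file if agent.crew else None
    )
    return agent._use_trained_data(
        task_prompt, trained_agents_data_file=trained_agents_data_file
    )
```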

View File

@@ -99,10 +99,12 @@ class OpenAIAgentToolAdapter(BaseToolAdapter):
Returns:
Tool execution result.
"""
# Get the parameter name from the schema
param_name: str = next(
iter(tool.args_schema.model_json_schema()["properties"].keys())
)
# Handle different argument types
args_dict: dict[str, Any]
if isinstance(arguments, dict):
args_dict = arguments
@@ -114,13 +116,16 @@ class OpenAIAgentToolAdapter(BaseToolAdapter):
else:
args_dict = {param_name: str(arguments)}
# Run the tool with the processed arguments
output: Any | Awaitable[Any] = tool._run(**args_dict)
# Await if the tool returned a coroutine
if inspect.isawaitable(output):
result: Any = await output
else:
result = output
# Ensure the result is JSON serializable
if isinstance(result, (dict, list, str, int, float, bool, type(None))):
return result
return str(result)
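The adapter's final step above coerces tool results that are not JSON-friendly into strings. As a standalone sketch:

```python
def ensure_json_serializable(result):
    """Return result as-is when it is a basic JSON type, else its string form."""
    if isinstance(result, (dict, list, str, int, float, bool, type(None))):
        return result
    return str(result)
```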

View File

@@ -383,6 +383,7 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
if isinstance(tool, BaseTool):
processed_tools.append(tool)
elif all(hasattr(tool, attr) for attr in required_attrs):
# Tool has the required attributes, create a Tool instance
processed_tools.append(Tool.from_langchain(tool))
else:
raise ValueError(
@@ -447,12 +448,14 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
@model_validator(mode="after")
def validate_and_set_attributes(self) -> Self:
# Validate required fields
for field in ["role", "goal", "backstory"]:
if getattr(self, field) is None:
raise ValueError(
f"{field} must be provided either directly or through config"
)
# Set private attributes
self._logger = Logger(verbose=self.verbose)
if self.max_rpm and not self._rpm_controller:
self._rpm_controller = RPMController(
@@ -461,6 +464,7 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
if not self._token_process:
self._token_process = TokenProcess()
# Initialize security_config if not provided
if self.security_config is None:
self.security_config = SecurityConfig()
@@ -562,11 +566,14 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
"actions",
}
# Copy llm
existing_llm = shallow_copy(self.llm)
copied_knowledge = shallow_copy(self.knowledge)
copied_knowledge_storage = shallow_copy(self.knowledge_storage)
# Properly copy knowledge sources if they exist
existing_knowledge_sources = None
if self.knowledge_sources:
# Create a shared storage instance for all knowledge sources
shared_storage = (
self.knowledge_sources[0].storage if self.knowledge_sources else None
)
@@ -578,6 +585,7 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
if hasattr(source, "model_copy")
else shallow_copy(source)
)
# Ensure all copied sources use the same storage instance
copied_source.storage = shared_storage
existing_knowledge_sources.append(copied_source)

View File

@@ -4,6 +4,8 @@ import re
from typing import Final
# crewai.agents.parser constants
FINAL_ANSWER_ACTION: Final[str] = "Final Answer:"
MISSING_ACTION_AFTER_THOUGHT_ERROR_MESSAGE: Final[str] = (
"I did it wrong. Invalid Format: I missed the 'Action:' after 'Thought:'. I will do right next, and don't use a tool I have already used.\n"

View File

@@ -296,6 +296,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
Returns:
Final answer from the agent.
"""
# Check if model supports native function calling
use_native_tools = (
hasattr(self.llm, "supports_function_calling")
and callable(getattr(self.llm, "supports_function_calling", None))
@@ -306,6 +307,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
if use_native_tools:
return self._invoke_loop_native_tools()
# Fall back to ReAct text-based pattern
return self._invoke_loop_react()
def _invoke_loop_react(self) -> AgentFinish:
@@ -345,6 +347,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
executor_context=self,
verbose=self.agent.verbose,
)
# breakpoint()
if self.response_model is not None:
try:
if isinstance(answer, BaseModel):
@@ -362,6 +365,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
text=answer,
)
except ValidationError:
# If validation fails, convert BaseModel to JSON string for parsing
answer_str = (
answer.model_dump_json()
if isinstance(answer, BaseModel)
@@ -371,12 +375,14 @@ class CrewAgentExecutor(BaseAgentExecutor):
answer_str, self.use_stop_words
) # type: ignore[assignment]
else:
# When no response_model, answer should be a string
answer_str = str(answer) if not isinstance(answer, str) else answer
formatted_answer = process_llm_response(
answer_str, self.use_stop_words
) # type: ignore[assignment]
if isinstance(formatted_answer, AgentAction):
# Extract agent fingerprint if available
fingerprint_context = {}
if (
self.agent
@@ -420,6 +426,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
except Exception as e:
if e.__class__.__module__.startswith("litellm"):
# Do not retry on litellm errors
raise e
if is_context_length_exceeded(e):
handle_context_length(
@@ -436,6 +443,10 @@ class CrewAgentExecutor(BaseAgentExecutor):
finally:
self.iterations += 1
# During the invoke loop, formatted_answer alternates between AgentAction
# (when the agent is using tools) and eventually becomes AgentFinish
# (when the agent reaches a final answer). This check confirms we've
# reached a final answer and helps type checking understand this transition.
if not isinstance(formatted_answer, AgentFinish):
raise RuntimeError(
"Agent execution ended without reaching a final answer. "
@@ -454,7 +465,9 @@ class CrewAgentExecutor(BaseAgentExecutor):
Returns:
Final answer from the agent.
"""
# Convert tools to OpenAI schema format
if not self.original_tools:
# No tools available, fall back to simple LLM call
return self._invoke_loop_native_no_tools()
openai_tools, available_functions, self._tool_name_mapping = (
@@ -477,6 +490,10 @@ class CrewAgentExecutor(BaseAgentExecutor):
enforce_rpm_limit(self.request_within_rpm_limit)
# Call LLM with native tools
# Pass available_functions=None so the LLM returns tool_calls
# without executing them. The executor handles tool execution
# via _handle_native_tool_calls to properly manage message history.
answer = get_llm_response(
llm=cast("BaseLLM", self.llm),
messages=self.messages,
@@ -491,26 +508,32 @@ class CrewAgentExecutor(BaseAgentExecutor):
verbose=self.agent.verbose,
)
# Check if the response is a list of tool calls
if (
isinstance(answer, list)
and answer
and self._is_tool_call_list(answer)
):
# Handle tool calls - execute tools and add results to messages
tool_finish = self._handle_native_tool_calls(
answer, available_functions
)
# If tool has result_as_answer=True, return immediately
if tool_finish is not None:
return tool_finish
# Continue loop to let LLM analyze results and decide next steps
continue
# Text or other response - handle as potential final answer
if isinstance(answer, str):
# Text response - this is the final answer
formatted_answer = AgentFinish(
thought="",
output=answer,
text=answer,
)
self._invoke_step_callback(formatted_answer)
self._append_message(answer)
self._append_message(answer) # Save final answer to messages
self._show_logs(formatted_answer)
return formatted_answer
@@ -526,13 +549,14 @@ class CrewAgentExecutor(BaseAgentExecutor):
self._show_logs(formatted_answer)
return formatted_answer
# Unexpected response type, treat as final answer
formatted_answer = AgentFinish(
thought="",
output=str(answer),
text=str(answer),
)
self._invoke_step_callback(formatted_answer)
self._append_message(str(answer))
self._append_message(str(answer)) # Save final answer to messages
self._show_logs(formatted_answer)
return formatted_answer
@@ -603,10 +627,12 @@ class CrewAgentExecutor(BaseAgentExecutor):
if not response:
return False
first_item = response[0]
# OpenAI-style
if hasattr(first_item, "function") or (
isinstance(first_item, dict) and "function" in first_item
):
return True
# Anthropic-style (object with attributes)
if (
hasattr(first_item, "type")
and getattr(first_item, "type", None) == "tool_use"
@@ -614,12 +640,14 @@ class CrewAgentExecutor(BaseAgentExecutor):
return True
if hasattr(first_item, "name") and hasattr(first_item, "input"):
return True
# Bedrock-style (dict with name and input keys)
if (
isinstance(first_item, dict)
and "name" in first_item
and "input" in first_item
):
return True
# Gemini-style
if hasattr(first_item, "function_call") and first_item.function_call:
return True
return False
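The provider-shape checks in `_is_tool_call_list` above can be sketched for the dict-based shapes (the Anthropic/Gemini object-attribute shapes are omitted here for brevity):

```python
# Hedged sketch covering only the dict-based tool-call shapes.
def is_tool_call_list(response):
    if not response:
        return False
    first = response[0]
    if not isinstance(first, dict):
        return False
    if "function" in first:  # OpenAI-style
        return True
    if "name" in first and "input" in first:  # Bedrock-style
        return True
    return False
```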
@@ -678,6 +706,8 @@ class CrewAgentExecutor(BaseAgentExecutor):
for _, func_name, _ in parsed_calls
)
# Preserve historical sequential behavior for result_as_answer batches.
# Also avoid threading around usage counters for max_usage_count tools.
if has_result_as_answer_in_batch or has_max_usage_count_in_batch:
logger.debug(
"Skipping parallel native execution because batch includes result_as_answer or max_usage_count tool"
@@ -743,6 +773,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
self.messages.append(reasoning_message)
return None
# Sequential behavior: process only first tool call, then force reflection.
call_id, func_name, func_args = parsed_calls[0]
self._append_assistant_tool_calls_message([(call_id, func_name, func_args)])
@@ -796,7 +827,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
func_name = sanitize_tool_name(
func_info.get("name", "") or tool_call.get("name", "")
)
func_args = func_info.get("arguments") or tool_call.get("input", {})
func_args = func_info.get("arguments", "{}") or tool_call.get("input", {})
return call_id, func_name, func_args
return None
@@ -1171,6 +1202,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
text=answer,
)
except ValidationError:
# If validation fails, convert BaseModel to JSON string for parsing
answer_str = (
answer.model_dump_json()
if isinstance(answer, BaseModel)
@@ -1180,6 +1212,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
answer_str, self.use_stop_words
) # type: ignore[assignment]
else:
# When no response_model, answer should be a string
answer_str = str(answer) if not isinstance(answer, str) else answer
formatted_answer = process_llm_response(
answer_str, self.use_stop_words
@@ -1286,6 +1319,10 @@ class CrewAgentExecutor(BaseAgentExecutor):
enforce_rpm_limit(self.request_within_rpm_limit)
# Call LLM with native tools
# Pass available_functions=None so the LLM returns tool_calls
# without executing them. The executor handles tool execution
# via _handle_native_tool_calls to properly manage message history.
answer = await aget_llm_response(
llm=cast("BaseLLM", self.llm),
messages=self.messages,
@@ -1299,26 +1336,32 @@ class CrewAgentExecutor(BaseAgentExecutor):
executor_context=self,
verbose=self.agent.verbose,
)
# Check if the response is a list of tool calls
if (
isinstance(answer, list)
and answer
and self._is_tool_call_list(answer)
):
# Handle tool calls - execute tools and add results to messages
tool_finish = self._handle_native_tool_calls(
answer, available_functions
)
# If tool has result_as_answer=True, return immediately
if tool_finish is not None:
return tool_finish
# Continue loop to let LLM analyze results and decide next steps
continue
# Text or other response - handle as potential final answer
if isinstance(answer, str):
# Text response - this is the final answer
formatted_answer = AgentFinish(
thought="",
output=answer,
text=answer,
)
await self._ainvoke_step_callback(formatted_answer)
self._append_message(answer)
self._append_message(answer) # Save final answer to messages
self._show_logs(formatted_answer)
return formatted_answer
@@ -1334,13 +1377,14 @@ class CrewAgentExecutor(BaseAgentExecutor):
self._show_logs(formatted_answer)
return formatted_answer
# Unexpected response type, treat as final answer
formatted_answer = AgentFinish(
thought="",
output=str(answer),
text=str(answer),
)
await self._ainvoke_step_callback(formatted_answer)
self._append_message(str(answer))
self._append_message(str(answer)) # Save final answer to messages
self._show_logs(formatted_answer)
return formatted_answer
@@ -1411,6 +1455,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
Returns:
Updated action or final answer.
"""
# Special case for add_image_tool
add_image_tool = I18N_DEFAULT.tools("add_image")
if (
isinstance(add_image_tool, dict)
@@ -1530,14 +1575,17 @@ class CrewAgentExecutor(BaseAgentExecutor):
training_handler = CrewTrainingHandler(TRAINING_DATA_FILE)
training_data = training_handler.load() or {}
# Initialize or retrieve agent's training data
agent_training_data = training_data.get(agent_id, {})
if human_feedback is not None:
# Save initial output and human feedback
agent_training_data[train_iteration] = {
"initial_output": result.output,
"human_feedback": human_feedback,
}
else:
# Save improved output
if train_iteration in agent_training_data:
agent_training_data[train_iteration]["improved_output"] = result.output
else:
@@ -1551,6 +1599,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
)
return
# Update the training data and save
training_data[agent_id] = agent_training_data
training_handler.save(training_data)
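The training-data bookkeeping commented above follows a two-phase shape: a feedback round creates an entry, and a later round attaches the improved output. A sketch of that flow (the `record_training` helper is illustrative, not the executor's actual method):

```python
def record_training(training_data, agent_id, iteration, output, human_feedback=None):
    """Feedback rounds create an entry; later rounds attach the improved output."""
    agent_data = training_data.setdefault(agent_id, {})
    if human_feedback is not None:
        agent_data[iteration] = {
            "initial_output": output,
            "human_feedback": human_feedback,
        }
    elif iteration in agent_data:
        agent_data[iteration]["improved_output"] = output
    return training_data
```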

View File

@@ -94,8 +94,11 @@ def parse(text: str) -> AgentAction | AgentFinish:
if includes_answer:
final_answer = text.split(FINAL_ANSWER_ACTION)[-1].strip()
# Check whether the final answer ends with triple backticks.
if final_answer.endswith("```"):
# Count occurrences of triple backticks in the final answer.
count = final_answer.count("```")
# If count is odd then it's an unmatched trailing set; remove it.
if count % 2 != 0:
final_answer = final_answer[:-3].rstrip()
return AgentFinish(thought=thought, output=final_answer, text=text)
@@ -143,6 +146,7 @@ def _extract_thought(text: str) -> str:
if thought_index == -1:
return ""
thought = text[:thought_index].strip()
# Remove any triple backticks from the thought string
return thought.replace("```", "").strip()
@@ -167,9 +171,18 @@ def _safe_repair_json(tool_input: str) -> str:
Returns:
The repaired JSON string or original if repair fails.
"""
# Skip repair if the input starts and ends with square brackets
# Explanation: The JSON parser has issues handling inputs that are enclosed in square brackets ('[]').
# These are typically valid JSON arrays or strings that do not require repair. Attempting to repair such inputs
# might lead to unintended alterations, such as wrapping the entire input in additional layers or modifying
# the structure in a way that changes its meaning. By skipping the repair for inputs that start and end with
# square brackets, we preserve the integrity of these valid JSON structures and avoid unnecessary modifications.
if tool_input.startswith("[") and tool_input.endswith("]"):
return tool_input
# Before repair, handle common LLM issues:
# 1. Replace """ with " to avoid JSON parser errors
tool_input = tool_input.replace('"""', '"')
result = repair_json(tool_input)
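The two cleanup passes in the parser hunks above can be sketched in isolation: trimming an unmatched trailing fence from a final answer, and preparing tool input before `repair_json` runs:

```python
# Hedged sketch of the parser's cleanup passes.
def trim_unmatched_fence(final_answer):
    # An odd count of ``` with a trailing fence means it is unmatched.
    if final_answer.endswith("```") and final_answer.count("```") % 2 != 0:
        final_answer = final_answer[:-3].rstrip()
    return final_answer

def prepare_for_repair(tool_input):
    # Bracketed inputs are treated as valid JSON arrays and skipped;
    # otherwise triple quotes (a common LLM artifact) are collapsed first.
    if tool_input.startswith("[") and tool_input.endswith("]"):
        return tool_input
    return tool_input.replace('"""', '"')
```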

View File

@@ -83,6 +83,10 @@ class PlannerObserver:
return create_llm(config.llm)
return self.agent.llm
# ------------------------------------------------------------------
# Public API
# ------------------------------------------------------------------
def observe(
self,
completed_step: TodoItem,
@@ -178,6 +182,9 @@ class PlannerObserver:
),
)
# Don't force a full replan — the step may have succeeded even if the
# observer LLM failed to parse the result. Defaulting to "continue" is
# far less disruptive than wiping the entire plan on every observer error.
return StepObservation(
step_completed_successfully=True,
key_information_learned="",
@@ -214,6 +221,10 @@ class PlannerObserver:
return remaining_todos
# ------------------------------------------------------------------
# Internal: Message building
# ------------------------------------------------------------------
def _build_observation_messages(
self,
completed_step: TodoItem,
@@ -228,11 +239,15 @@ class PlannerObserver:
task_desc = self.task.description or ""
task_goal = self.task.expected_output or ""
elif self.kickoff_input:
# Standalone kickoff path — no Task object, but we have the raw input.
# Extract just the ## Task section so the observer sees the actual goal,
# not the full enriched instruction with env/tools/verification noise.
task_desc = extract_task_section(self.kickoff_input)
task_goal = "Complete the task successfully"
system_prompt = I18N_DEFAULT.retrieve("planning", "observation_system_prompt")
# Build context of what's been done
completed_summary = ""
if all_completed:
completed_lines = []
@@ -246,6 +261,7 @@ class PlannerObserver:
completed_lines
)
# Build remaining plan
remaining_summary = ""
if remaining_todos:
remaining_lines = [
@@ -290,14 +306,17 @@ class PlannerObserver:
if isinstance(response, StepObservation):
return response
# JSON string path — most common miss before this fix
if isinstance(response, str):
text = response.strip()
try:
return StepObservation.model_validate_json(text)
except Exception: # noqa: S110
pass
# Some LLMs wrap the JSON in markdown fences
if text.startswith("```"):
lines = text.split("\n")
# Strip first and last lines (``` markers)
inner = "\n".join(
lines[1:-1] if lines[-1].strip() == "```" else lines[1:]
)
@@ -306,12 +325,14 @@ class PlannerObserver:
except Exception: # noqa: S110
pass
# Dict path
if isinstance(response, dict):
try:
return StepObservation.model_validate(response)
except Exception: # noqa: S110
pass
# Last resort — log what we got so it's diagnosable
logger.warning(
"Could not parse observation response (type=%s). "
"Falling back to default failure observation. Preview: %.200s",
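The markdown-fence fallback in the observation parsing above strips the fence markers before retrying JSON validation. A minimal sketch of that stripping step:

```python
def strip_markdown_fence(text):
    """Sketch of the fence-stripping fallback used before re-parsing JSON."""
    text = text.strip()
    if not text.startswith("```"):
        return text
    lines = text.split("\n")
    # Drop the opening ``` line, and the closing one when present.
    inner = lines[1:-1] if lines[-1].strip() == "```" else lines[1:]
    return "\n".join(inner)
```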

View File

@@ -108,6 +108,7 @@ class StepExecutor:
self.request_within_rpm_limit = request_within_rpm_limit
self.callbacks = callbacks or []
# Native tool support — set up once
self._use_native_tools = check_native_tool_support(
self.llm, self.original_tools
)
@@ -120,6 +121,10 @@ class StepExecutor:
_,
) = setup_native_tools(self.original_tools)
# ------------------------------------------------------------------
# Public API
# ------------------------------------------------------------------
def execute(
self,
todo: TodoItem,
@@ -185,6 +190,10 @@ class StepExecutor:
execution_time=elapsed,
)
# ------------------------------------------------------------------
# Internal: Message building
# ------------------------------------------------------------------
def _build_isolated_messages(
self, todo: TodoItem, context: StepExecutionContext
) -> list[LLMMessage]:
@@ -228,6 +237,10 @@ class StepExecutor:
"""Build the user prompt for this specific step."""
parts: list[str] = []
# Include overall task context so the executor knows the full goal and
# required output format/location — critical for knowing WHAT to produce.
# We extract only the task body (not tool instructions or verification
# sections) to avoid duplicating directives already in the system prompt.
if context.task_description:
task_section = extract_task_section(context.task_description)
if task_section:
@@ -254,6 +267,7 @@ class StepExecutor:
)
)
# Include dependency results (final results only, no traces)
if context.dependency_results:
parts.append(
I18N_DEFAULT.retrieve("planning", "step_executor_context_header")
@@ -269,6 +283,10 @@ class StepExecutor:
return "\n".join(parts)
# ------------------------------------------------------------------
# Internal: Multi-turn execution loop
# ------------------------------------------------------------------
def _execute_text_parsed(
self,
messages: list[LLMMessage],
@@ -288,6 +306,7 @@ class StepExecutor:
last_tool_result = ""
for _ in range(max_step_iterations):
# Check step timeout
if step_timeout and start_time:
elapsed = time.monotonic() - start_time
if elapsed >= step_timeout:
@@ -312,12 +331,17 @@ class StepExecutor:
tool_calls_made.append(formatted.tool)
tool_result = self._execute_text_tool_with_events(formatted)
last_tool_result = tool_result
# Append the assistant's reasoning + action, then the observation.
# _build_observation_message handles vision sentinels so the LLM
# receives an image content block instead of raw base64 text.
messages.append({"role": "assistant", "content": answer_str})
messages.append(self._build_observation_message(tool_result))
continue
# Raw text response with no Final Answer marker — treat as done
return answer_str
# Max iterations reached — return the last tool result we accumulated
return last_tool_result
def _execute_text_tool_with_events(self, formatted: AgentAction) -> str:
@@ -405,6 +429,10 @@ class StepExecutor:
return {"input": stripped_input}
return {"input": str(tool_input)}
# ------------------------------------------------------------------
# Internal: Vision support
# ------------------------------------------------------------------
@staticmethod
def _parse_vision_sentinel(raw: str) -> tuple[str, str] | None:
"""Parse a VISION_IMAGE sentinel into (media_type, base64_data), or None."""
@@ -489,6 +517,7 @@ class StepExecutor:
accumulated_results: list[str] = []
for _ in range(max_step_iterations):
# Check step timeout
if step_timeout and start_time:
elapsed = time.monotonic() - start_time
if elapsed >= step_timeout:
@@ -512,14 +541,19 @@ class StepExecutor:
return answer.model_dump_json()
if isinstance(answer, list) and answer and is_tool_call_list(answer):
# _execute_native_tool_calls appends assistant + tool messages
# to `messages` as a side-effect, so the next LLM call will
# see the full conversation history including tool outputs.
result = self._execute_native_tool_calls(
answer, messages, tool_calls_made
)
accumulated_results.append(result)
continue
# Text answer → LLM decided the step is done
return str(answer)
# Max iterations reached — return everything we accumulated
return "\n".join(filter(None, accumulated_results))
def _execute_native_tool_calls(
@@ -565,6 +599,9 @@ class StepExecutor:
parsed = self._parse_vision_sentinel(raw_content)
if parsed:
media_type, b64_data = parsed
# Replace the sentinel with a standard image_url content block.
# Each provider's _format_messages handles conversion to
# its native format (e.g. Anthropic image blocks).
modified: LLMMessage = cast(
LLMMessage, dict(call_result.tool_message)
)
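The per-step timeout guard repeated at the top of both execution loops in this file can be sketched as a generic loop wrapper; `step_fn`, `max_iterations`, and the `(done, result)` contract are illustrative assumptions, not the executor's API:

```python
import time

# Hedged sketch of the step-timeout check used inside the execution loops.
def run_with_step_timeout(step_fn, max_iterations, step_timeout=None):
    start = time.monotonic()
    last_result = ""
    for _ in range(max_iterations):
        if step_timeout is not None and time.monotonic() - start >= step_timeout:
            break  # Give up and return whatever was accumulated so far
        done, last_result = step_fn()
        if done:
            return last_result
    return last_result
```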

View File

@@ -6,16 +6,12 @@ from datetime import datetime
import glob
import json
import os
import re
import sqlite3
from typing import Any
import click
_PLACEHOLDER_RE = re.compile(r"\{([A-Za-z_][A-Za-z0-9_\-]*)}")
_SQLITE_MAGIC = b"SQLite format 3\x00"
_SELECT_ALL = """
@@ -38,25 +34,6 @@ LIMIT 1
"""
_DEFAULT_DIR = "./.checkpoints"
_DEFAULT_DB = "./.checkpoints.db"
def _detect_location(location: str) -> str:
"""Resolve the default checkpoint location.
When the caller passes the default directory path, check whether a
SQLite database exists at the conventional ``.db`` path and prefer it.
"""
if (
location == _DEFAULT_DIR
and not os.path.exists(_DEFAULT_DIR)
and os.path.exists(_DEFAULT_DB)
):
return _DEFAULT_DB
return location
def _is_sqlite(path: str) -> bool:
"""Check if a file is a SQLite database by reading its magic bytes."""
if not os.path.isfile(path):
@@ -75,7 +52,13 @@ def _parse_checkpoint_json(raw: str, source: str) -> dict[str, Any]:
nodes = data.get("event_record", {}).get("nodes", {})
event_count = len(nodes)
trigger_event = data.get("trigger")
trigger_event = None
if nodes:
last_node = max(
nodes.values(),
key=lambda n: n.get("event", {}).get("emission_sequence") or 0,
)
trigger_event = last_node.get("event", {}).get("type")
parsed_entities: list[dict[str, Any]] = []
for entity in entities:
@@ -93,47 +76,16 @@ def _parse_checkpoint_json(raw: str, source: str) -> dict[str, Any]:
{
"description": t.get("description", ""),
"completed": t.get("output") is not None,
"output": (t.get("output") or {}).get("raw", ""),
}
for t in tasks
]
parsed_entities.append(info)
inputs: dict[str, Any] = {}
for entity in entities:
cp_inputs = entity.get("checkpoint_inputs")
if isinstance(cp_inputs, dict) and cp_inputs:
inputs = dict(cp_inputs)
break
for entity in entities:
for task in entity.get("tasks", []):
for field in (
"checkpoint_original_description",
"checkpoint_original_expected_output",
):
text = task.get(field) or ""
for match in _PLACEHOLDER_RE.findall(text):
if match not in inputs:
inputs[match] = ""
for agent in entity.get("agents", []):
for field in ("role", "goal", "backstory"):
text = agent.get(field) or ""
for match in _PLACEHOLDER_RE.findall(text):
if match not in inputs:
inputs[match] = ""
branch = data.get("branch", "main")
parent_id = data.get("parent_id")
return {
"source": source,
"event_count": event_count,
"trigger": trigger_event,
"entities": parsed_entities,
"branch": branch,
"parent_id": parent_id,
"inputs": inputs,
}
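The placeholder-harvesting loops above can be illustrated on their own. A small sketch with hypothetical task texts, using the same pattern as `_PLACEHOLDER_RE`: placeholders already present in `checkpoint_inputs` keep their values, and any newly discovered ones default to an empty string:

```python
import re

# Same pattern as _PLACEHOLDER_RE above.
placeholder_re = re.compile(r"\{([A-Za-z_][A-Za-z0-9_\-]*)}")

inputs = {"topic": "AI"}  # e.g. restored from an entity's checkpoint_inputs
texts = [
    "Research {topic} and summarize for {audience}.",
    "Write a {tone} report on {topic}.",
]
for text in texts:
    for match in placeholder_re.findall(text):
        if match not in inputs:
            inputs[match] = ""
```

This gives the TUI a complete set of input fields to render, even for placeholders the original run never received values for.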
@@ -237,7 +189,6 @@ def _list_sqlite(db_path: str) -> list[dict[str, Any]]:
"entities": [],
"source": checkpoint_id,
}
meta["db"] = db_path
results.append(meta)
return results
@@ -360,10 +311,6 @@ def _print_info(meta: dict[str, Any]) -> None:
trigger = meta.get("trigger")
if trigger:
click.echo(f"Trigger: {trigger}")
click.echo(f"Branch: {meta.get('branch', 'main')}")
parent_id = meta.get("parent_id")
if parent_id:
click.echo(f"Parent: {parent_id}")
for ent in meta.get("entities", []):
eid = str(ent.get("id", ""))[:8]

View File

@@ -2,23 +2,17 @@
from __future__ import annotations
from collections import defaultdict
from typing import Any, ClassVar
from textual.app import App, ComposeResult
from textual.binding import Binding
from textual.containers import Horizontal, Vertical, VerticalScroll
from textual.widgets import (
Button,
Footer,
Header,
Input,
Static,
TextArea,
Tree,
)
from textual.containers import Horizontal, Vertical
from textual.screen import ModalScreen
from textual.widgets import Button, Footer, Header, OptionList, Static
from textual.widgets.option_list import Option
from crewai.cli.checkpoint_cli import (
_entity_summary,
_format_size,
_is_sqlite,
_list_json,
@@ -40,54 +34,151 @@ def _load_entries(location: str) -> list[dict[str, Any]]:
return _list_json(location)
def _short_id(name: str) -> str:
"""Shorten a checkpoint name for tree display."""
if len(name) > 30:
return name[:27] + "..."
return name
def _format_list_label(entry: dict[str, Any]) -> str:
"""Format a checkpoint entry for the list panel."""
name = entry.get("name", "")
ts = entry.get("ts") or ""
trigger = entry.get("trigger") or ""
summary = _entity_summary(entry.get("entities", []))
line1 = f"[bold]{name}[/]"
parts = []
if ts:
parts.append(f"[dim]{ts}[/]")
if "size" in entry:
parts.append(f"[dim]{_format_size(entry['size'])}[/]")
if trigger:
parts.append(f"[{_PRIMARY}]{trigger}[/]")
line2 = " ".join(parts)
line3 = f" [{_DIM}]{summary}[/]"
return f"{line1}\n{line2}\n{line3}"
def _entry_id(entry: dict[str, Any]) -> str:
"""Normalize an entry's name into its checkpoint ID.
JSON filenames are ``{ts}_{uuid}_p-{parent}.json``; SQLite IDs
are already ``{ts}_{uuid}``. This strips the JSON suffix so
fork-parent lookups work in both providers.
"""
name = str(entry.get("name", ""))
if name.endswith(".json"):
name = name[: -len(".json")]
idx = name.find("_p-")
if idx != -1:
name = name[:idx]
return name
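The normalization in `_entry_id` makes a JSON filename and a SQLite row ID compare equal. A quick sketch with hypothetical IDs:

```python
def entry_id(name: str) -> str:
    # Same normalization as _entry_id above: strip the .json suffix,
    # then drop the fork-parent suffix starting at "_p-".
    if name.endswith(".json"):
        name = name[: -len(".json")]
    idx = name.find("_p-")
    if idx != -1:
        name = name[:idx]
    return name

# JSON filename carrying a fork-parent suffix (hypothetical values):
a = entry_id("20260101T120000_abc123_p-def456.json")
# SQLite ID, already in canonical {ts}_{uuid} form:
b = entry_id("20260101T120000_abc123")
```

Both forms reduce to the same canonical ID, which is what lets fork-parent lookups work regardless of the storage provider.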
def _build_entity_header(ent: dict[str, Any]) -> str:
"""Build rich text header for an entity (progress bar only)."""
def _format_detail(entry: dict[str, Any]) -> str:
"""Format checkpoint details for the right panel."""
lines: list[str] = []
tasks = ent.get("tasks")
if isinstance(tasks, list):
completed = ent.get("tasks_completed", 0)
total = ent.get("tasks_total", 0)
pct = int(completed / total * 100) if total else 0
bar_len = 20
filled = int(bar_len * completed / total) if total else 0
bar = f"[{_PRIMARY}]{'█' * filled}[/][{_DIM}]{'░' * (bar_len - filled)}[/]"
lines.append(f"{bar} {completed}/{total} tasks ({pct}%)")
# Header
name = entry.get("name", "")
lines.append(f"[bold {_PRIMARY}]{name}[/]")
lines.append(f"[{_DIM}]{'─' * 50}[/]")
lines.append("")
# Metadata table
ts = entry.get("ts") or "unknown"
trigger = entry.get("trigger") or ""
lines.append(f" [bold]Time[/] {ts}")
if "size" in entry:
lines.append(f" [bold]Size[/] {_format_size(entry['size'])}")
lines.append(f" [bold]Events[/] {entry.get('event_count', 0)}")
if trigger:
lines.append(f" [bold]Trigger[/] [{_PRIMARY}]{trigger}[/]")
if "path" in entry:
lines.append(f" [bold]Path[/] [{_DIM}]{entry['path']}[/]")
if "db" in entry:
lines.append(f" [bold]Database[/] [{_DIM}]{entry['db']}[/]")
# Entities
for ent in entry.get("entities", []):
eid = str(ent.get("id", ""))[:8]
etype = ent.get("type", "unknown")
ename = ent.get("name", "unnamed")
lines.append("")
lines.append(f" [{_DIM}]{'─' * 50}[/]")
lines.append(f" [bold {_SECONDARY}]{etype}[/]: {ename} [{_DIM}]{eid}[/]")
tasks = ent.get("tasks")
if isinstance(tasks, list):
completed = ent.get("tasks_completed", 0)
total = ent.get("tasks_total", 0)
pct = int(completed / total * 100) if total else 0
bar_len = 20
filled = int(bar_len * completed / total) if total else 0
bar = f"[{_PRIMARY}]{'█' * filled}[/][{_DIM}]{'░' * (bar_len - filled)}[/]"
lines.append(f" {bar} {completed}/{total} tasks ({pct}%)")
lines.append("")
for i, task in enumerate(tasks):
if task.get("completed"):
icon = "[green]✓[/]"
else:
icon = "[yellow]○[/]"
desc = str(task.get("description", ""))
if len(desc) > 55:
desc = desc[:52] + "..."
lines.append(f" {icon} {i + 1}. {desc}")
return "\n".join(lines)
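The progress-bar arithmetic used in the detail panel is easy to verify standalone. A sketch with the Rich markup stripped, assuming the filled/empty glyphs are `█` and `░`:

```python
def progress_bar(completed: int, total: int, bar_len: int = 20) -> str:
    # Same arithmetic as the task progress bar above; both ratios
    # guard against a zero total.
    pct = int(completed / total * 100) if total else 0
    filled = int(bar_len * completed / total) if total else 0
    return f"{'█' * filled}{'░' * (bar_len - filled)} {completed}/{total} ({pct}%)"

line = progress_bar(3, 8)
```

Because both `pct` and `filled` truncate rather than round, a 3-of-8 bar shows 7 of 20 segments filled and reports 37%, not 38%.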
# Return type: (location, action, inputs, task_output_overrides)
_TuiResult = tuple[str, str, dict[str, Any] | None, dict[int, str] | None] | None
class ConfirmResumeScreen(ModalScreen[bool]):
"""Modal confirmation before resuming from a checkpoint."""
CSS = f"""
ConfirmResumeScreen {{
align: center middle;
}}
#confirm-dialog {{
width: 60;
height: auto;
padding: 1 2;
background: {_BG_PANEL};
border: round {_PRIMARY};
}}
#confirm-label {{
width: 100%;
content-align: center middle;
margin-bottom: 1;
}}
#confirm-name {{
width: 100%;
content-align: center middle;
color: {_PRIMARY};
text-style: bold;
margin-bottom: 1;
}}
#confirm-buttons {{
width: 100%;
height: 3;
layout: horizontal;
align: center middle;
}}
Button {{
margin: 0 2;
min-width: 12;
}}
"""
def __init__(self, checkpoint_name: str) -> None:
super().__init__()
self._checkpoint_name = checkpoint_name
def compose(self) -> ComposeResult:
with Vertical(id="confirm-dialog"):
yield Static("Resume from this checkpoint?", id="confirm-label")
yield Static(self._checkpoint_name, id="confirm-name")
with Horizontal(id="confirm-buttons"):
yield Button("Resume", variant="success", id="btn-yes")
yield Button("Cancel", variant="default", id="btn-no")
def on_button_pressed(self, event: Button.Pressed) -> None:
self.dismiss(event.button.id == "btn-yes")
def on_key(self, event: Any) -> None:
if event.key == "y":
self.dismiss(True)
elif event.key in ("n", "escape"):
self.dismiss(False)
class CheckpointTUI(App[_TuiResult]):
class CheckpointTUI(App[str | None]):
"""TUI to browse and inspect checkpoints.
Returns ``(location, action, inputs)`` where action is ``"resume"`` or
``"fork"`` and inputs is a parsed dict or ``None``,
or ``None`` if the user quit without selecting.
Returns the checkpoint location string to resume from, or None if
the user quit without selecting.
"""
TITLE = "CrewAI Checkpoints"
@@ -108,431 +199,145 @@ class CheckpointTUI(App[_TuiResult]):
background: {_PRIMARY};
color: {_TERTIARY};
}}
#main-layout {{
Horizontal {{
height: 1fr;
}}
#tree-panel {{
width: 45%;
#cp-list {{
width: 38%;
background: {_BG_PANEL};
border: round {_SECONDARY};
padding: 0 1;
scrollbar-color: {_PRIMARY};
}}
#tree-panel:focus-within {{
#cp-list:focus {{
border: round {_PRIMARY};
}}
#detail-container {{
width: 55%;
height: 1fr;
#cp-list > .option-list--option-highlighted {{
background: {_SECONDARY};
color: {_TERTIARY};
text-style: none;
}}
#detail-scroll {{
#cp-list > .option-list--option-highlighted * {{
color: {_TERTIARY};
}}
#detail-container {{
width: 62%;
padding: 0 1;
}}
#detail {{
height: 1fr;
background: {_BG_PANEL};
border: round {_SECONDARY};
padding: 1 2;
overflow-y: auto;
scrollbar-color: {_PRIMARY};
}}
#detail-scroll:focus-within {{
#detail:focus {{
border: round {_PRIMARY};
}}
#detail-header {{
margin-bottom: 1;
}}
#status {{
height: 1;
padding: 0 2;
color: {_DIM};
}}
#inputs-section {{
display: none;
height: auto;
max-height: 8;
padding: 0 1;
}}
#inputs-section.visible {{
display: block;
}}
#inputs-label {{
height: 1;
color: {_DIM};
padding: 0 1;
}}
.input-row {{
height: 3;
padding: 0 1;
}}
.input-row Static {{
width: auto;
min-width: 12;
padding: 1 1 0 0;
color: {_TERTIARY};
}}
.input-row Input {{
width: 1fr;
}}
#no-inputs-label {{
height: 1;
color: {_DIM};
padding: 0 1;
}}
#action-buttons {{
height: 3;
align: right middle;
padding: 0 1;
display: none;
}}
#action-buttons.visible {{
display: block;
}}
#action-buttons Button {{
margin: 0 0 0 1;
min-width: 10;
}}
#btn-resume {{
background: {_SECONDARY};
color: {_TERTIARY};
}}
#btn-resume:hover {{
background: {_PRIMARY};
}}
#btn-fork {{
background: {_PRIMARY};
color: {_TERTIARY};
}}
#btn-fork:hover {{
background: {_SECONDARY};
}}
.entity-title {{
padding: 1 1 0 1;
}}
.entity-detail {{
padding: 0 1;
}}
.task-output-editor {{
height: auto;
max-height: 10;
margin: 0 1 1 1;
border: round {_DIM};
}}
.task-output-editor:focus {{
border: round {_PRIMARY};
}}
.task-label {{
padding: 0 1;
}}
Tree {{
background: {_BG_PANEL};
}}
Tree > .tree--cursor {{
background: {_SECONDARY};
color: {_TERTIARY};
}}
"""
BINDINGS: ClassVar[list[Binding | tuple[str, str] | tuple[str, str, str]]] = [
("q", "quit", "Quit"),
("r", "refresh", "Refresh"),
("j", "cursor_down", "Down"),
("k", "cursor_up", "Up"),
]
def __init__(self, location: str = "./.checkpoints") -> None:
super().__init__()
self._location = location
self._entries: list[dict[str, Any]] = []
self._selected_entry: dict[str, Any] | None = None
self._input_keys: list[str] = []
self._task_output_ids: list[tuple[int, str, str]] = []
self._selected_idx: int = 0
self._pending_location: str = ""
def compose(self) -> ComposeResult:
yield Header(show_clock=False)
with Horizontal(id="main-layout"):
tree: Tree[dict[str, Any]] = Tree("Checkpoints", id="tree-panel")
tree.show_root = True
tree.guide_depth = 3
yield tree
with Horizontal():
yield OptionList(id="cp-list")
with Vertical(id="detail-container"):
yield Static("", id="status")
with VerticalScroll(id="detail-scroll"):
yield Static(
f"[{_DIM}]Select a checkpoint from the tree[/]", # noqa: S608
id="detail-header",
)
with Vertical(id="inputs-section"):
yield Static("Inputs", id="inputs-label")
with Horizontal(id="action-buttons"):
yield Button("Resume", id="btn-resume")
yield Button("Fork", id="btn-fork")
yield Static(
f"\n [{_DIM}]Select a checkpoint from the list[/]", # noqa: S608
id="detail",
)
yield Footer()
async def on_mount(self) -> None:
self._refresh_tree()
self.query_one("#tree-panel", Tree).root.expand()
self.query_one("#cp-list", OptionList).border_title = "Checkpoints"
self.query_one("#detail", Static).border_title = "Detail"
self._refresh_list()
def _refresh_tree(self) -> None:
def _refresh_list(self) -> None:
self._entries = _load_entries(self._location)
self._selected_entry = None
tree = self.query_one("#tree-panel", Tree)
tree.clear()
option_list = self.query_one("#cp-list", OptionList)
option_list.clear_options()
if not self._entries:
self.query_one("#detail-header", Static).update(
f"[{_DIM}]No checkpoints in {self._location}[/]"
self.query_one("#detail", Static).update(
f"\n [{_DIM}]No checkpoints in {self._location}[/]"
)
self.query_one("#status", Static).update("")
self.sub_title = self._location
return
# Group by branch
branches: dict[str, list[dict[str, Any]]] = defaultdict(list)
for entry in self._entries:
branch = entry.get("branch", "main")
branches[branch].append(entry)
# Index checkpoint names to tree nodes so forks can attach
node_by_name: dict[str, Any] = {}
def _make_label(e: dict[str, Any]) -> str:
name = e.get("name", "")
ts = e.get("ts") or ""
trigger = e.get("trigger") or ""
parts = [f"[bold]{_short_id(name)}[/]"]
if ts:
time_part = ts.split(" ")[-1] if " " in ts else ts
parts.append(f"[{_DIM}]{time_part}[/]")
if trigger:
parts.append(f"[{_PRIMARY}]{trigger}[/]")
return " ".join(parts)
fork_parents: set[str] = set()
for branch_name, entries in branches.items():
if branch_name == "main" or not entries:
continue
oldest = min(entries, key=lambda e: str(e.get("name", "")))
first_parent = oldest.get("parent_id")
if first_parent:
fork_parents.add(str(first_parent))
def _add_checkpoint(parent_node: Any, e: dict[str, Any]) -> None:
"""Add a checkpoint node — expandable only if a fork attaches to it."""
cp_id = _entry_id(e)
if cp_id in fork_parents:
node = parent_node.add(
_make_label(e), data=e, expand=False, allow_expand=True
)
else:
node = parent_node.add_leaf(_make_label(e), data=e)
node_by_name[cp_id] = node
if "main" in branches:
for entry in reversed(branches["main"]):
_add_checkpoint(tree.root, entry)
fork_branches = [
(name, sorted(entries, key=lambda e: str(e.get("name", ""))))
for name, entries in branches.items()
if name != "main"
]
remaining = fork_branches
max_passes = len(remaining) + 1
while remaining and max_passes > 0:
max_passes -= 1
deferred = []
made_progress = False
for branch_name, entries in remaining:
first_parent = entries[0].get("parent_id") if entries else None
if first_parent and str(first_parent) not in node_by_name:
deferred.append((branch_name, entries))
continue
attach_to: Any = tree.root
if first_parent:
attach_to = node_by_name.get(str(first_parent), tree.root)
branch_label = (
f"[bold {_SECONDARY}]{branch_name}[/] [{_DIM}]({len(entries)})[/]"
)
branch_node = attach_to.add(branch_label, expand=False)
for entry in entries:
_add_checkpoint(branch_node, entry)
made_progress = True
remaining = deferred
if not made_progress:
break
for branch_name, entries in remaining:
branch_label = (
f"[bold {_SECONDARY}]{branch_name}[/] "
f"[{_DIM}]({len(entries)})[/] [{_DIM}](orphaned)[/]"
)
branch_node = tree.root.add(branch_label, expand=False)
for entry in entries:
_add_checkpoint(branch_node, entry)
option_list.add_option(Option(_format_list_label(entry)))
count = len(self._entries)
storage = "SQLite" if _is_sqlite(self._location) else "JSON"
self.sub_title = self._location
self.sub_title = f"{self._location}"
self.query_one("#status", Static).update(f" {count} checkpoint(s) | {storage}")
async def _show_detail(self, entry: dict[str, Any]) -> None:
"""Update the detail panel for a checkpoint entry."""
self._selected_entry = entry
self.query_one("#action-buttons").add_class("visible")
detail_scroll = self.query_one("#detail-scroll", VerticalScroll)
# Remove all dynamic children except the header — await so IDs are freed
to_remove = [c for c in detail_scroll.children if c.id != "detail-header"]
for child in to_remove:
await child.remove()
# Header
name = entry.get("name", "")
ts = entry.get("ts") or "unknown"
trigger = entry.get("trigger") or ""
branch = entry.get("branch", "main")
parent_id = entry.get("parent_id")
header_lines = [
f"[bold {_PRIMARY}]{name}[/]",
f"[{_DIM}]{'─' * 50}[/]",
"",
f" [bold]Time[/] {ts}",
]
if "size" in entry:
header_lines.append(f" [bold]Size[/] {_format_size(entry['size'])}")
header_lines.append(f" [bold]Events[/] {entry.get('event_count', 0)}")
if trigger:
header_lines.append(f" [bold]Trigger[/] [{_PRIMARY}]{trigger}[/]")
header_lines.append(f" [bold]Branch[/] [{_SECONDARY}]{branch}[/]")
if parent_id:
header_lines.append(f" [bold]Parent[/] [{_DIM}]{parent_id}[/]")
if "path" in entry:
header_lines.append(f" [bold]Path[/] [{_DIM}]{entry['path']}[/]")
if "db" in entry:
header_lines.append(f" [bold]Database[/] [{_DIM}]{entry['db']}[/]")
self.query_one("#detail-header", Static).update("\n".join(header_lines))
# Entity details and editable task outputs — mounted flat for scrolling
self._task_output_ids = []
flat_task_idx = 0
for ent_idx, ent in enumerate(entry.get("entities", [])):
etype = ent.get("type", "unknown")
ename = ent.get("name", "unnamed")
completed = ent.get("tasks_completed")
total = ent.get("tasks_total")
entity_title = f"[bold {_SECONDARY}]{etype}: {ename}[/]"
if completed is not None and total is not None:
entity_title += f" [{_DIM}]{completed}/{total} tasks[/]"
await detail_scroll.mount(Static(entity_title, classes="entity-title"))
await detail_scroll.mount(
Static(_build_entity_header(ent), classes="entity-detail")
)
tasks = ent.get("tasks", [])
for i, task in enumerate(tasks):
desc = str(task.get("description", ""))
if len(desc) > 55:
desc = desc[:52] + "..."
if task.get("completed"):
icon = "[green]✓[/]"
await detail_scroll.mount(
Static(f" {icon} {i + 1}. {desc}", classes="task-label")
)
output_text = task.get("output", "")
editor_id = f"task-output-{ent_idx}-{i}"
await detail_scroll.mount(
TextArea(
str(output_text),
classes="task-output-editor",
id=editor_id,
)
)
self._task_output_ids.append(
(flat_task_idx, editor_id, str(output_text))
)
else:
icon = "[yellow]○[/]"
await detail_scroll.mount(
Static(f" {icon} {i + 1}. {desc}", classes="task-label")
)
flat_task_idx += 1
# Build input fields
await self._build_input_fields(entry.get("inputs", {}))
async def _build_input_fields(self, inputs: dict[str, Any]) -> None:
"""Rebuild the inputs section with one field per input key."""
section = self.query_one("#inputs-section")
# Remove old dynamic children — await so IDs are freed
for widget in list(section.query(".input-row, .no-inputs")):
await widget.remove()
self._input_keys = []
if not inputs:
await section.mount(Static(f"[{_DIM}]No inputs[/]", classes="no-inputs"))
section.add_class("visible")
return
for key, value in inputs.items():
self._input_keys.append(key)
row = Horizontal(classes="input-row")
row.compose_add_child(Static(f"[bold]{key}[/]"))
row.compose_add_child(
Input(value=str(value), placeholder=key, id=f"input-{key}")
)
await section.mount(row)
section.add_class("visible")
def _collect_inputs(self) -> dict[str, Any] | None:
"""Collect current values from input fields."""
if not self._input_keys:
return None
result: dict[str, Any] = {}
for key in self._input_keys:
widget = self.query_one(f"#input-{key}", Input)
result[key] = widget.value
return result
def _collect_task_overrides(self) -> dict[int, str] | None:
"""Collect edited task outputs. Returns only changed values."""
if not self._task_output_ids or self._selected_entry is None:
return None
overrides: dict[int, str] = {}
for task_idx, editor_id, original in self._task_output_ids:
editor = self.query_one(f"#{editor_id}", TextArea)
if editor.text != original:
overrides[task_idx] = editor.text
return overrides or None
def _resolve_location(self, entry: dict[str, Any]) -> str:
"""Get the restore location string for a checkpoint entry."""
if "path" in entry:
return str(entry["path"])
if _is_sqlite(self._location):
return f"{self._location}#{entry['name']}"
return str(entry.get("name", ""))
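`_resolve_location` produces one of three location forms depending on the provider. A minimal sketch with hypothetical entries, passing the SQLite check as an explicit flag instead of re-reading magic bytes:

```python
def resolve_location(entry: dict, base: str, base_is_sqlite: bool) -> str:
    # Same precedence as _resolve_location above: an explicit path wins,
    # SQLite entries become "db#id", JSON entries fall back to the name.
    if "path" in entry:
        return str(entry["path"])
    if base_is_sqlite:
        return f"{base}#{entry['name']}"
    return str(entry.get("name", ""))

json_loc = resolve_location(
    {"path": "./.checkpoints/a.json", "name": "a"}, "./.checkpoints", False
)
db_loc = resolve_location({"name": "20260101_abc"}, "./.checkpoints.db", True)
```

The `db#id` form is what the restore path later splits to pick one checkpoint out of a multi-checkpoint database.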
async def on_tree_node_highlighted(
self, event: Tree.NodeHighlighted[dict[str, Any]]
async def on_option_list_option_highlighted(
self,
event: OptionList.OptionHighlighted,
) -> None:
if event.node.data is not None:
await self._show_detail(event.node.data)
def on_button_pressed(self, event: Button.Pressed) -> None:
if self._selected_entry is None:
idx = event.option_index
if idx is None:
return
inputs = self._collect_inputs()
overrides = self._collect_task_overrides()
loc = self._resolve_location(self._selected_entry)
if event.button.id == "btn-resume":
self.exit((loc, "resume", inputs, overrides))
elif event.button.id == "btn-fork":
self.exit((loc, "fork", inputs, overrides))
if idx < len(self._entries):
self._selected_idx = idx
entry = self._entries[idx]
self.query_one("#detail", Static).update(_format_detail(entry))
def action_cursor_down(self) -> None:
self.query_one("#cp-list", OptionList).action_cursor_down()
def action_cursor_up(self) -> None:
self.query_one("#cp-list", OptionList).action_cursor_up()
async def on_option_list_option_selected(
self,
event: OptionList.OptionSelected,
) -> None:
idx = event.option_index
if idx is None or idx >= len(self._entries):
return
entry = self._entries[idx]
if "path" in entry:
loc = entry["path"]
elif _is_sqlite(self._location):
loc = f"{self._location}#{entry['name']}"
else:
loc = entry.get("name", "")
self._pending_location = loc
name = entry.get("name", loc)
self.push_screen(ConfirmResumeScreen(name), self._on_confirm)
def _on_confirm(self, confirmed: bool | None) -> None:
if confirmed:
self.exit(self._pending_location)
else:
self._pending_location = ""
def action_refresh(self) -> None:
self._refresh_tree()
self._refresh_list()
async def _run_checkpoint_tui_async(location: str) -> None:
@@ -540,78 +345,18 @@ async def _run_checkpoint_tui_async(location: str) -> None:
import click
app = CheckpointTUI(location=location)
selection = await app.run_async()
selected = await app.run_async()
if selection is None:
if selected is None:
return
selected, action, inputs, task_overrides = selection
click.echo(f"\nResuming from: {selected}\n")
from crewai.crew import Crew
from crewai.state.checkpoint_config import CheckpointConfig
config = CheckpointConfig(restore_from=selected)
if action == "fork":
click.echo(f"\nForking from: {selected}\n")
crew = Crew.fork(config)
else:
click.echo(f"\nResuming from: {selected}\n")
crew = Crew.from_checkpoint(config)
if task_overrides:
click.echo("Modifications:")
overridden_agents: set[int] = set()
for task_idx, new_output in task_overrides.items():
if task_idx < len(crew.tasks) and crew.tasks[task_idx].output is not None:
desc = crew.tasks[task_idx].description or f"Task {task_idx + 1}"
if len(desc) > 60:
desc = desc[:57] + "..."
crew.tasks[task_idx].output.raw = new_output # type: ignore[union-attr]
preview = new_output.replace("\n", " ")
if len(preview) > 80:
preview = preview[:77] + "..."
click.echo(f" Task {task_idx + 1}: {desc}")
click.echo(f" -> {preview}")
agent = crew.tasks[task_idx].agent
if agent and agent.agent_executor:
nth = sum(1 for t in crew.tasks[:task_idx] if t.agent is agent)
messages = agent.agent_executor.messages
system_positions = [
i for i, m in enumerate(messages) if m.get("role") == "system"
]
if nth < len(system_positions):
seg_start = system_positions[nth]
seg_end = (
system_positions[nth + 1]
if nth + 1 < len(system_positions)
else len(messages)
)
for j in range(seg_end - 1, seg_start, -1):
if messages[j].get("role") == "assistant":
messages[j]["content"] = new_output
break
overridden_agents.add(id(agent))
earliest = min(task_overrides)
for offset, subsequent in enumerate(
crew.tasks[earliest + 1 :], start=earliest + 1
):
if subsequent.output and offset not in task_overrides:
subsequent.output = None
if subsequent.agent and subsequent.agent.agent_executor:
subsequent.agent.agent_executor._resuming = False
if id(subsequent.agent) not in overridden_agents:
subsequent.agent.agent_executor.messages = []
click.echo()
if inputs:
click.echo("Inputs:")
for k, v in inputs.items():
click.echo(f" {k}: {v}")
click.echo()
result = await crew.akickoff(inputs=inputs)
crew = Crew.from_checkpoint(CheckpointConfig(restore_from=selected))
result = await crew.akickoff()
click.echo(f"\nResult: {getattr(result, 'raw', result)}")

View File

@@ -392,15 +392,10 @@ def deploy() -> None:
@deploy.command(name="create")
@click.option("-y", "--yes", is_flag=True, help="Skip the confirmation prompt")
@click.option(
"--skip-validate",
is_flag=True,
help="Skip the pre-deploy validation checks.",
)
def deploy_create(yes: bool, skip_validate: bool) -> None:
def deploy_create(yes: bool) -> None:
"""Create a Crew deployment."""
deploy_cmd = DeployCommand()
deploy_cmd.create_crew(yes, skip_validate=skip_validate)
deploy_cmd.create_crew(yes)
@deploy.command(name="list")
@@ -412,28 +407,10 @@ def deploy_list() -> None:
@deploy.command(name="push")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
@click.option(
"--skip-validate",
is_flag=True,
help="Skip the pre-deploy validation checks.",
)
def deploy_push(uuid: str | None, skip_validate: bool) -> None:
def deploy_push(uuid: str | None) -> None:
"""Deploy the Crew."""
deploy_cmd = DeployCommand()
deploy_cmd.deploy(uuid=uuid, skip_validate=skip_validate)
@deploy.command(name="validate")
def deploy_validate() -> None:
"""Validate the current project against common deployment failures.
Runs the same pre-deploy checks that `crewai deploy create` and
`crewai deploy push` run automatically, without contacting the platform.
Exits non-zero if any blocking issues are found.
"""
from crewai.cli.deploy.validate import run_validate_command
run_validate_command()
deploy_cmd.deploy(uuid=uuid)
@deploy.command(name="status")
@@ -816,9 +793,6 @@ def traces_status() -> None:
@click.pass_context
def checkpoint(ctx: click.Context, location: str) -> None:
"""Browse and inspect checkpoints. Launches a TUI when called without a subcommand."""
from crewai.cli.checkpoint_cli import _detect_location
location = _detect_location(location)
ctx.ensure_object(dict)
ctx.obj["location"] = location
if ctx.invoked_subcommand is None:
@@ -831,18 +805,18 @@ def checkpoint(ctx: click.Context, location: str) -> None:
@click.argument("location", default="./.checkpoints")
def checkpoint_list(location: str) -> None:
"""List checkpoints in a directory."""
from crewai.cli.checkpoint_cli import _detect_location, list_checkpoints
from crewai.cli.checkpoint_cli import list_checkpoints
list_checkpoints(_detect_location(location))
list_checkpoints(location)
@checkpoint.command("info")
@click.argument("path", default="./.checkpoints")
def checkpoint_info(path: str) -> None:
"""Show details of a checkpoint. Pass a file or directory for latest."""
from crewai.cli.checkpoint_cli import _detect_location, info_checkpoint
from crewai.cli.checkpoint_cli import info_checkpoint
info_checkpoint(_detect_location(path))
info_checkpoint(path)
if __name__ == "__main__":

View File

@@ -4,35 +4,12 @@ from rich.console import Console
from crewai.cli import git
from crewai.cli.command import BaseCommand, PlusAPIMixin
from crewai.cli.deploy.validate import validate_project
from crewai.cli.utils import fetch_and_json_env_file, get_project_name
console = Console()
def _run_predeploy_validation(skip_validate: bool) -> bool:
"""Run pre-deploy validation unless skipped.
Returns True if deployment should proceed, False if it should abort.
"""
if skip_validate:
console.print(
"[yellow]Skipping pre-deploy validation (--skip-validate).[/yellow]"
)
return True
console.print("Running pre-deploy validation...", style="bold blue")
validator = validate_project()
if not validator.ok:
console.print(
"\n[bold red]Pre-deploy validation failed. "
"Fix the issues above or re-run with --skip-validate.[/bold red]"
)
return False
return True
class DeployCommand(BaseCommand, PlusAPIMixin):
"""
A class to handle deployment-related operations for CrewAI projects.
@@ -83,16 +60,13 @@ class DeployCommand(BaseCommand, PlusAPIMixin):
f"{log_message['timestamp']} - {log_message['level']}: {log_message['message']}"
)
def deploy(self, uuid: str | None = None, skip_validate: bool = False) -> None:
def deploy(self, uuid: str | None = None) -> None:
"""
Deploy a crew using either UUID or project name.
Args:
uuid (Optional[str]): The UUID of the crew to deploy.
skip_validate (bool): Skip pre-deploy validation checks.
"""
if not _run_predeploy_validation(skip_validate):
return
self._telemetry.start_deployment_span(uuid)
console.print("Starting deployment...", style="bold blue")
if uuid:
@@ -106,16 +80,10 @@ class DeployCommand(BaseCommand, PlusAPIMixin):
self._validate_response(response)
self._display_deployment_info(response.json())
def create_crew(self, confirm: bool = False, skip_validate: bool = False) -> None:
def create_crew(self, confirm: bool = False) -> None:
"""
Create a new crew deployment.
Args:
confirm (bool): Whether to skip the interactive confirmation prompt.
skip_validate (bool): Skip pre-deploy validation checks.
"""
if not _run_predeploy_validation(skip_validate):
return
self._telemetry.create_crew_deployment_span()
console.print("Creating deployment...", style="bold blue")
env_vars = fetch_and_json_env_file()

View File

@@ -1,845 +0,0 @@
"""Pre-deploy validation for CrewAI projects.
Catches locally what a deploy would reject at build or runtime so users
don't burn deployment attempts on fixable project-structure problems.
Each check is grouped into one of:
- ERROR: will block a deployment; validator exits non-zero.
- WARNING: may still deploy but is almost always a deployment bug; printed
but does not block.
The individual checks mirror the categories observed in production
deployment-failure logs:
1. pyproject.toml present with ``[project].name``
2. lockfile (``uv.lock`` or ``poetry.lock``) present and not stale
3. package directory at ``src/<package>/`` exists (no empty name, no egg-info)
4. standard crew files: ``crew.py``, ``config/agents.yaml``, ``config/tasks.yaml``
5. flow entrypoint: ``main.py`` with a Flow subclass
6. hatch wheel target resolves (packages = [...] or default dir matches name)
7. crew/flow module imports cleanly (catches ``@CrewBase not found``,
``No Flow subclass found``, provider import errors)
8. environment variables referenced in code vs ``.env`` / deployment env
9. installed crewai vs lockfile pin (catches missing-attribute failures from
stale pins)
"""
from __future__ import annotations
from dataclasses import dataclass
from enum import Enum
import json
import logging
import os
from pathlib import Path
import re
import shutil
import subprocess
import sys
from typing import Any
from rich.console import Console
from crewai.cli.utils import parse_toml
console = Console()
logger = logging.getLogger(__name__)
class Severity(str, Enum):
"""Severity of a validation finding."""
ERROR = "error"
WARNING = "warning"
@dataclass
class ValidationResult:
"""A single finding from a validation check.
Attributes:
severity: whether this blocks deploy or is advisory.
code: stable short identifier, used in tests and docs
(e.g. ``missing_pyproject``, ``stale_lockfile``).
title: one-line summary shown to the user.
detail: optional multi-line explanation.
hint: optional remediation suggestion.
"""
severity: Severity
code: str
title: str
detail: str = ""
hint: str = ""
# Maps known provider env var names → label used in hint messages.
_KNOWN_API_KEY_HINTS: dict[str, str] = {
"OPENAI_API_KEY": "OpenAI",
"ANTHROPIC_API_KEY": "Anthropic",
"GOOGLE_API_KEY": "Google",
"GEMINI_API_KEY": "Gemini",
"AZURE_OPENAI_API_KEY": "Azure OpenAI",
"AZURE_API_KEY": "Azure",
"AWS_ACCESS_KEY_ID": "AWS",
"AWS_SECRET_ACCESS_KEY": "AWS",
"COHERE_API_KEY": "Cohere",
"GROQ_API_KEY": "Groq",
"MISTRAL_API_KEY": "Mistral",
"TAVILY_API_KEY": "Tavily",
"SERPER_API_KEY": "Serper",
"SERPLY_API_KEY": "Serply",
"PERPLEXITY_API_KEY": "Perplexity",
"DEEPSEEK_API_KEY": "DeepSeek",
"OPENROUTER_API_KEY": "OpenRouter",
"FIRECRAWL_API_KEY": "Firecrawl",
"EXA_API_KEY": "Exa",
"BROWSERBASE_API_KEY": "Browserbase",
}
def normalize_package_name(project_name: str) -> str:
"""Normalize a pyproject project.name into a Python package directory name.
Mirrors the rules in ``crewai.cli.create_crew.create_crew`` so the
validator agrees with the scaffolder about where ``src/<pkg>/`` should
live.
"""
folder = project_name.replace(" ", "_").replace("-", "_").lower()
return re.sub(r"[^a-zA-Z0-9_]", "", folder)
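The normalization rules above are worth seeing on concrete names, since a mismatch here is exactly what produces "Cannot find src//crew.py" style failures. A sketch mirroring the function, with hypothetical project names:

```python
import re

def normalize_package_name(project_name: str) -> str:
    # Same rules as above: spaces and hyphens become underscores,
    # lowercase, then strip anything outside [a-zA-Z0-9_].
    folder = project_name.replace(" ", "_").replace("-", "_").lower()
    return re.sub(r"[^a-zA-Z0-9_]", "", folder)

examples = {
    "My Crew": normalize_package_name("My Crew"),
    "sales-bot v2!": normalize_package_name("sales-bot v2!"),
}
```

Both the scaffolder and the validator apply these same rules, so `src/<pkg>/` is resolved identically in both places.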
class DeployValidator:
"""Runs the full pre-deploy validation suite against a project directory."""
def __init__(self, project_root: Path | None = None) -> None:
self.project_root: Path = (project_root or Path.cwd()).resolve()
self.results: list[ValidationResult] = []
self._pyproject: dict[str, Any] | None = None
self._project_name: str | None = None
self._package_name: str | None = None
self._package_dir: Path | None = None
self._is_flow: bool = False
def _add(
self,
severity: Severity,
code: str,
title: str,
detail: str = "",
hint: str = "",
) -> None:
self.results.append(
ValidationResult(
severity=severity,
code=code,
title=title,
detail=detail,
hint=hint,
)
)
@property
def errors(self) -> list[ValidationResult]:
return [r for r in self.results if r.severity is Severity.ERROR]
@property
def warnings(self) -> list[ValidationResult]:
return [r for r in self.results if r.severity is Severity.WARNING]
@property
def ok(self) -> bool:
return not self.errors
def run(self) -> list[ValidationResult]:
"""Run all checks. Later checks are skipped when earlier ones make
them impossible (e.g. no pyproject.toml → no lockfile check)."""
if not self._check_pyproject():
return self.results
self._check_lockfile()
if not self._check_package_dir():
self._check_hatch_wheel_target()
return self.results
if self._is_flow:
self._check_flow_entrypoint()
else:
self._check_crew_entrypoint()
self._check_config_yamls()
self._check_hatch_wheel_target()
self._check_module_imports()
self._check_env_vars()
self._check_version_vs_lockfile()
return self.results
def _check_pyproject(self) -> bool:
pyproject_path = self.project_root / "pyproject.toml"
if not pyproject_path.exists():
self._add(
Severity.ERROR,
"missing_pyproject",
"Cannot find pyproject.toml",
detail=(
f"Expected pyproject.toml at {pyproject_path}. "
"CrewAI projects must be installable Python packages."
),
hint="Run `crewai create crew <name>` to scaffold a valid project layout.",
)
return False
try:
self._pyproject = parse_toml(pyproject_path.read_text())
except Exception as e:
self._add(
Severity.ERROR,
"invalid_pyproject",
"pyproject.toml is not valid TOML",
detail=str(e),
)
return False
project = self._pyproject.get("project") or {}
name = project.get("name")
if not isinstance(name, str) or not name.strip():
self._add(
Severity.ERROR,
"missing_project_name",
"pyproject.toml is missing [project].name",
detail=(
"Without a project name the platform cannot resolve your "
"package directory (this produces errors like "
"'Cannot find src//crew.py')."
),
hint='Set a `name = "..."` field under `[project]` in pyproject.toml.',
)
return False
self._project_name = name
self._package_name = normalize_package_name(name)
self._is_flow = (self._pyproject.get("tool") or {}).get("crewai", {}).get(
"type"
) == "flow"
return True
def _check_lockfile(self) -> None:
uv_lock = self.project_root / "uv.lock"
poetry_lock = self.project_root / "poetry.lock"
pyproject = self.project_root / "pyproject.toml"
if not uv_lock.exists() and not poetry_lock.exists():
self._add(
Severity.ERROR,
"missing_lockfile",
"No lockfile found (expected uv.lock or poetry.lock)",
hint=(
"Run `uv lock` (recommended) or `poetry lock` in your project "
"directory, commit the lockfile, then redeploy."
),
)
return
lockfile = uv_lock if uv_lock.exists() else poetry_lock
try:
if lockfile.stat().st_mtime < pyproject.stat().st_mtime:
self._add(
Severity.WARNING,
"stale_lockfile",
f"{lockfile.name} is older than pyproject.toml",
detail=(
"Your lockfile may not reflect recent dependency changes. "
"The platform resolves from the lockfile, so deployed "
"dependencies may differ from local."
),
hint="Run `uv lock` (or `poetry lock`) and commit the result.",
)
except OSError:
pass
def _check_package_dir(self) -> bool:
if self._package_name is None:
return False
src_dir = self.project_root / "src"
if not src_dir.is_dir():
self._add(
Severity.ERROR,
"missing_src_dir",
"Missing src/ directory",
detail=(
"CrewAI deployments expect a src-layout project: "
f"src/{self._package_name}/crew.py (or main.py for flows)."
),
hint="Run `crewai create crew <name>` to see the expected layout.",
)
return False
package_dir = src_dir / self._package_name
if not package_dir.is_dir():
siblings = [
p.name
for p in src_dir.iterdir()
if p.is_dir() and not p.name.endswith(".egg-info")
]
egg_info = [
p.name for p in src_dir.iterdir() if p.name.endswith(".egg-info")
]
hint_parts = [
f'Create src/{self._package_name}/ to match [project].name = "{self._project_name}".'
]
if siblings:
hint_parts.append(
f"Found other package directories: {', '.join(siblings)}. "
f"Either rename one to '{self._package_name}' or update [project].name."
)
if egg_info:
hint_parts.append(
f"Delete stale build artifacts: {', '.join(egg_info)} "
"(these confuse the platform's package discovery)."
)
self._add(
Severity.ERROR,
"missing_package_dir",
f"Cannot find src/{self._package_name}/",
detail=(
"The platform looks for your crew source under "
"src/<package_name>/, derived from [project].name."
),
hint=" ".join(hint_parts),
)
return False
for p in src_dir.iterdir():
if p.name.endswith(".egg-info"):
self._add(
Severity.WARNING,
"stale_egg_info",
f"Stale build artifact in src/: {p.name}",
detail=(
".egg-info directories can be mistaken for your package "
"and cause 'Cannot find src/<name>.egg-info/crew.py' errors."
),
hint=f"Delete {p} and add `*.egg-info/` to .gitignore.",
)
self._package_dir = package_dir
return True
def _check_crew_entrypoint(self) -> None:
if self._package_dir is None:
return
crew_py = self._package_dir / "crew.py"
if not crew_py.is_file():
self._add(
Severity.ERROR,
"missing_crew_py",
f"Cannot find {crew_py.relative_to(self.project_root)}",
detail=(
"Standard crew projects must define a Crew class decorated "
"with @CrewBase inside crew.py."
),
hint=(
"Create crew.py with an @CrewBase-annotated class, or set "
'`[tool.crewai] type = "flow"` in pyproject.toml if this is a flow.'
),
)
def _check_config_yamls(self) -> None:
if self._package_dir is None:
return
config_dir = self._package_dir / "config"
if not config_dir.is_dir():
self._add(
Severity.ERROR,
"missing_config_dir",
f"Cannot find {config_dir.relative_to(self.project_root)}",
hint="Create a config/ directory with agents.yaml and tasks.yaml.",
)
return
for yaml_name in ("agents.yaml", "tasks.yaml"):
yaml_path = config_dir / yaml_name
if not yaml_path.is_file():
self._add(
Severity.ERROR,
f"missing_{yaml_name.replace('.', '_')}",
f"Cannot find {yaml_path.relative_to(self.project_root)}",
detail=(
"CrewAI loads agent and task config from these files; "
"missing them causes empty-config warnings and runtime crashes."
),
)
def _check_flow_entrypoint(self) -> None:
if self._package_dir is None:
return
main_py = self._package_dir / "main.py"
if not main_py.is_file():
self._add(
Severity.ERROR,
"missing_flow_main",
f"Cannot find {main_py.relative_to(self.project_root)}",
detail=(
"Flow projects must define a Flow subclass in main.py. "
'This project has `[tool.crewai] type = "flow"` set.'
),
hint="Create main.py with a `class MyFlow(Flow[...])`.",
)
def _check_hatch_wheel_target(self) -> None:
if not self._pyproject:
return
build_system = self._pyproject.get("build-system") or {}
backend = build_system.get("build-backend", "")
if "hatchling" not in backend:
return
hatch_wheel = (
(self._pyproject.get("tool") or {})
.get("hatch", {})
.get("build", {})
.get("targets", {})
.get("wheel", {})
)
if hatch_wheel.get("packages") or hatch_wheel.get("only-include"):
return
if self._package_dir and self._package_dir.is_dir():
return
self._add(
Severity.ERROR,
"hatch_wheel_target_missing",
"Hatchling cannot determine which files to ship",
detail=(
"Your pyproject uses hatchling but has no "
"[tool.hatch.build.targets.wheel] configuration and no "
"directory matching your project name."
),
hint=(
"Add:\n"
" [tool.hatch.build.targets.wheel]\n"
f' packages = ["src/{self._package_name}"]'
),
)
def _check_module_imports(self) -> None:
"""Import the user's crew/flow via `uv run` so the check sees the same
package versions as `crewai run` would. Result is reported as JSON on
the subprocess's stdout."""
script = (
"import json, sys, traceback, os\n"
"os.chdir(sys.argv[1])\n"
"try:\n"
" from crewai.cli.utils import get_crews, get_flows\n"
" is_flow = sys.argv[2] == 'flow'\n"
" if is_flow:\n"
" instances = get_flows()\n"
" kind = 'flow'\n"
" else:\n"
" instances = get_crews()\n"
" kind = 'crew'\n"
" print(json.dumps({'ok': True, 'kind': kind, 'count': len(instances)}))\n"
"except BaseException as e:\n"
" print(json.dumps({\n"
" 'ok': False,\n"
" 'error_type': type(e).__name__,\n"
" 'error': str(e),\n"
" 'traceback': traceback.format_exc(),\n"
" }))\n"
)
uv_path = shutil.which("uv")
if uv_path is None:
self._add(
Severity.WARNING,
"uv_not_found",
"Skipping import check: `uv` not installed",
hint="Install uv: https://docs.astral.sh/uv/",
)
return
try:
proc = subprocess.run( # noqa: S603 - args constructed from trusted inputs
[
uv_path,
"run",
"python",
"-c",
script,
str(self.project_root),
"flow" if self._is_flow else "crew",
],
cwd=self.project_root,
capture_output=True,
text=True,
timeout=120,
check=False,
)
except subprocess.TimeoutExpired:
self._add(
Severity.ERROR,
"import_timeout",
"Importing your crew/flow module timed out after 120s",
detail=(
"User code may be making network calls or doing heavy work "
"at import time. Move that work into agent methods."
),
)
return
# The payload is the last JSON object on stdout; user code may print
# other lines before it.
payload: dict[str, Any] | None = None
for line in reversed(proc.stdout.splitlines()):
line = line.strip()
if line.startswith("{") and line.endswith("}"):
try:
payload = json.loads(line)
break
except json.JSONDecodeError:
continue
if payload is None:
self._add(
Severity.ERROR,
"import_failed",
"Could not import your crew/flow module",
detail=(proc.stderr or proc.stdout or "").strip()[:1500],
hint="Run `crewai run` locally first to reproduce the error.",
)
return
if payload.get("ok"):
if payload.get("count", 0) == 0:
kind = payload.get("kind", "crew")
if kind == "flow":
self._add(
Severity.ERROR,
"no_flow_subclass",
"No Flow subclass found in the module",
hint=(
"main.py must define a class extending "
"`crewai.flow.Flow`, instantiable with no arguments."
),
)
else:
self._add(
Severity.ERROR,
"no_crewbase_class",
"Crew class annotated with @CrewBase not found",
hint=(
"Decorate your crew class with @CrewBase from "
"crewai.project (see `crewai create crew` template)."
),
)
return
err_msg = str(payload.get("error", ""))
err_type = str(payload.get("error_type", "Exception"))
tb = str(payload.get("traceback", ""))
self._classify_import_error(err_type, err_msg, tb)
def _classify_import_error(self, err_type: str, err_msg: str, tb: str) -> None:
"""Turn a raw import-time exception into a user-actionable finding."""
# Must be checked before the generic "native provider" branch below:
# the extras-missing message contains the same phrase. Providers
# format the install command as plain text (`to install: uv add
# "crewai[extra]"`); also tolerate backtick-delimited variants.
m = re.search(
r"(?P<pkg>[A-Za-z0-9_ -]+?)\s+native provider not available"
r".*?to install:\s*`?(?P<cmd>uv add [\"']crewai\[[^\]]+\][\"'])`?",
err_msg,
)
if m:
self._add(
Severity.ERROR,
"missing_provider_extra",
f"{m.group('pkg').strip()} provider extra not installed",
hint=f"Run: {m.group('cmd')}",
)
return
# crewai.llm.LLM.__new__ wraps provider init errors as
# ImportError("Error importing native provider: ...").
if "Error importing native provider" in err_msg or "native provider" in err_msg:
missing_key = self._extract_missing_api_key(err_msg)
if missing_key:
provider = _KNOWN_API_KEY_HINTS.get(missing_key, missing_key)
self._add(
Severity.WARNING,
"llm_init_missing_key",
f"LLM is constructed at import time but {missing_key} is not set",
detail=(
f"Your crew instantiates a {provider} LLM during module "
"load (e.g. in a class field default or @crew method). "
f"The {provider} provider currently requires {missing_key} "
"at construction time, so this will fail on the platform "
"unless the key is set in your deployment environment."
),
hint=(
f"Add {missing_key} to your deployment's Environment "
"Variables before deploying, or move LLM construction "
"inside agent methods so it runs lazily."
),
)
return
self._add(
Severity.ERROR,
"llm_provider_init_failed",
"LLM native provider failed to initialize",
detail=err_msg,
hint=(
"Check your LLM(model=...) configuration and provider-specific "
"extras (e.g. `uv add 'crewai[azure-ai-inference]'` for Azure)."
),
)
return
if err_type == "KeyError":
key = err_msg.strip("'\"")
if key in _KNOWN_API_KEY_HINTS or key.endswith("_API_KEY"):
self._add(
Severity.WARNING,
"env_var_read_at_import",
f"{key} is read at import time via os.environ[...]",
detail=(
"Using os.environ[...] (rather than os.getenv(...)) "
"at module scope crashes the build if the key isn't set."
),
hint=(
f"Either add {key} as a deployment env var, or switch "
"to os.getenv() and move the access inside agent methods."
),
)
return
if "Crew class annotated with @CrewBase not found" in err_msg:
self._add(
Severity.ERROR,
"no_crewbase_class",
"Crew class annotated with @CrewBase not found",
detail=err_msg,
)
return
if "No Flow subclass found" in err_msg:
self._add(
Severity.ERROR,
"no_flow_subclass",
"No Flow subclass found in the module",
detail=err_msg,
)
return
if (
err_type == "AttributeError"
and "has no attribute '_load_response_format'" in err_msg
):
self._add(
Severity.ERROR,
"stale_crewai_pin",
"Your lockfile pins a crewai version missing `_load_response_format`",
detail=err_msg,
hint=(
"Run `uv lock --upgrade-package crewai` (or `poetry update crewai`) "
"to pin a newer release."
),
)
return
if "pydantic" in tb.lower() or "validation error" in err_msg.lower():
self._add(
Severity.ERROR,
"pydantic_validation_error",
"Pydantic validation failed while loading your crew",
detail=err_msg[:800],
hint=(
"Check agent/task configuration fields. `crewai run` locally "
"will show the full traceback."
),
)
return
self._add(
Severity.ERROR,
"import_failed",
f"Importing your crew failed: {err_type}",
detail=err_msg[:800],
hint="Run `crewai run` locally to see the full traceback.",
)
@staticmethod
def _extract_missing_api_key(err_msg: str) -> str | None:
"""Pull 'FOO_API_KEY' out of '... FOO_API_KEY is required ...'."""
m = re.search(r"([A-Z][A-Z0-9_]*_API_KEY)\s+is required", err_msg)
if m:
return m.group(1)
m = re.search(r"['\"]([A-Z][A-Z0-9_]*_API_KEY)['\"]", err_msg)
if m:
return m.group(1)
return None
def _check_env_vars(self) -> None:
"""Warn about env vars referenced in user code but missing locally.
Best-effort only — the platform sets vars server-side, so we never error.
"""
if not self._package_dir:
return
referenced: set[str] = set()
pattern = re.compile(
r"""(?x)
(?:os\.environ\s*(?:\[\s*|\.get\s*\(\s*)
|os\.getenv\s*\(\s*
|getenv\s*\(\s*)
['"]([A-Z][A-Z0-9_]*)['"]
"""
)
for path in self._package_dir.rglob("*.py"):
try:
text = path.read_text(encoding="utf-8", errors="ignore")
except OSError:
continue
referenced.update(pattern.findall(text))
for path in self._package_dir.rglob("*.yaml"):
try:
text = path.read_text(encoding="utf-8", errors="ignore")
except OSError:
continue
referenced.update(re.findall(r"\$\{?([A-Z][A-Z0-9_]+)\}?", text))
env_file = self.project_root / ".env"
env_keys: set[str] = set()
if env_file.exists():
for line in env_file.read_text(errors="ignore").splitlines():
line = line.strip()
if not line or line.startswith("#") or "=" not in line:
continue
env_keys.add(line.split("=", 1)[0].strip())
missing_known: list[str] = sorted(
var
for var in referenced
if var in _KNOWN_API_KEY_HINTS
and var not in env_keys
and var not in os.environ
)
if missing_known:
self._add(
Severity.WARNING,
"env_vars_not_in_dotenv",
f"{len(missing_known)} referenced API key(s) not in .env",
detail=(
"These env vars are referenced in your source but not set "
f"locally: {', '.join(missing_known)}. Deploys will fail "
"unless they are added to the deployment's Environment "
"Variables in the CrewAI dashboard."
),
)
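The reference-scanning regex used by `_check_env_vars` can be checked on a small snippet (standalone sketch with the same verbose pattern):

```python
import re

pattern = re.compile(
    r"""(?x)
    (?:os\.environ\s*(?:\[\s*|\.get\s*\(\s*)
    |os\.getenv\s*\(\s*
    |getenv\s*\(\s*)
    ['"]([A-Z][A-Z0-9_]*)['"]
    """
)

source = (
    'key = os.environ["OPENAI_API_KEY"]\n'
    'serper = os.getenv("SERPER_API_KEY", "")\n'
    'model = "gpt-4o"  # plain string, not an env var reference\n'
)
print(sorted(set(pattern.findall(source))))  # ['OPENAI_API_KEY', 'SERPER_API_KEY']
```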
def _check_version_vs_lockfile(self) -> None:
"""Warn when the lockfile pins a crewai release older than 1.13.0,
which is where ``_load_response_format`` was introduced.
"""
uv_lock = self.project_root / "uv.lock"
poetry_lock = self.project_root / "poetry.lock"
lockfile = (
uv_lock
if uv_lock.exists()
else poetry_lock
if poetry_lock.exists()
else None
)
if lockfile is None:
return
try:
text = lockfile.read_text(errors="ignore")
except OSError:
return
m = re.search(
r'name\s*=\s*"crewai"\s*\nversion\s*=\s*"([^"]+)"',
text,
)
if not m:
return
locked = m.group(1)
try:
from packaging.version import Version
if Version(locked) < Version("1.13.0"):
self._add(
Severity.WARNING,
"old_crewai_pin",
f"Lockfile pins crewai=={locked} (older than 1.13.0)",
detail=(
"Older pinned versions are missing API surface the "
"platform builder expects (e.g. `_load_response_format`)."
),
hint="Run `uv lock --upgrade-package crewai` and redeploy.",
)
except Exception as e:
logger.debug("Could not parse crewai pin from lockfile: %s", e)
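The lockfile pin check boils down to one regex plus a version comparison. A standalone sketch (using a naive tuple compare for plain `X.Y.Z` pins, where the validator uses `packaging.version.Version`, which also orders pre-releases like `1.14.2a4`):

```python
import re

# Minimal uv.lock-style fragment for illustration.
lock_text = '[[package]]\nname = "crewai"\nversion = "1.12.0"\n'

m = re.search(r'name\s*=\s*"crewai"\s*\nversion\s*=\s*"([^"]+)"', lock_text)
assert m is not None
locked = m.group(1)

# Naive compare; real code should keep packaging.Version for correctness.
is_old = tuple(map(int, locked.split("."))) < (1, 13, 0)
print(locked, is_old)  # 1.12.0 True
```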
def render_report(results: list[ValidationResult]) -> None:
"""Pretty-print results to the shared rich console."""
if not results:
console.print("[bold green]Pre-deploy validation passed.[/bold green]")
return
errors = [r for r in results if r.severity is Severity.ERROR]
warnings = [r for r in results if r.severity is Severity.WARNING]
for result in errors:
console.print(f"[bold red]ERROR[/bold red] [{result.code}] {result.title}")
if result.detail:
console.print(f" {result.detail}")
if result.hint:
console.print(f" [dim]hint:[/dim] {result.hint}")
for result in warnings:
console.print(
f"[bold yellow]WARNING[/bold yellow] [{result.code}] {result.title}"
)
if result.detail:
console.print(f" {result.detail}")
if result.hint:
console.print(f" [dim]hint:[/dim] {result.hint}")
summary_parts: list[str] = []
if errors:
summary_parts.append(f"[bold red]{len(errors)} error(s)[/bold red]")
if warnings:
summary_parts.append(f"[bold yellow]{len(warnings)} warning(s)[/bold yellow]")
console.print(f"\n{' / '.join(summary_parts)}")
def validate_project(project_root: Path | None = None) -> DeployValidator:
"""Entrypoint: run validation, render results, return the validator.
The caller inspects ``validator.ok`` to decide whether to proceed with a
deploy.
"""
validator = DeployValidator(project_root=project_root)
validator.run()
render_report(validator.results)
return validator
def run_validate_command() -> None:
"""Implementation of `crewai deploy validate`."""
validator = validate_project()
if not validator.ok:
sys.exit(1)


@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.14.2a4"
"crewai[tools]==1.14.2a1"
]
[project.scripts]


@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.14.2a4"
"crewai[tools]==1.14.2a1"
]
[project.scripts]


@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.14.2a4"
"crewai[tools]==1.14.2a1"
]
[tool.crewai]


@@ -117,7 +117,11 @@ from crewai.tools.base_tool import BaseTool
from crewai.types.callback import SerializableCallable
from crewai.types.streaming import CrewStreamingOutput
from crewai.types.usage_metrics import UsageMetrics
from crewai.utilities.constants import NOT_SPECIFIED, TRAINING_DATA_FILE
from crewai.utilities.constants import (
NOT_SPECIFIED,
TRAINED_AGENTS_DATA_FILE,
TRAINING_DATA_FILE,
)
from crewai.utilities.crew.models import CrewContext
from crewai.utilities.env import get_env_context
from crewai.utilities.evaluators.crew_evaluator_handler import CrewEvaluator
@@ -361,6 +365,14 @@ class Crew(FlowTrackable, BaseModel):
default=None,
description="Whether to enable tracing for the crew. True=always enable, False=always disable, None=check environment/user settings.",
)
trained_agents_data_file: str = Field(
default=TRAINED_AGENTS_DATA_FILE,
description=(
"Path to the file containing trained agent suggestions. "
"Defaults to 'trained_agents_data.pkl'. Set this to match the "
"custom filename used during training (e.g., via `crewai train -f`)."
),
)
execution_context: ExecutionContext | None = Field(default=None)
checkpoint_inputs: dict[str, Any] | None = Field(default=None)
@@ -436,13 +448,6 @@ class Crew(FlowTrackable, BaseModel):
if agent.agent_executor is not None and task.output is None:
agent.agent_executor.task = task
break
for task in self.tasks:
if task.checkpoint_original_description is not None:
task._original_description = task.checkpoint_original_description
if task.checkpoint_original_expected_output is not None:
task._original_expected_output = (
task.checkpoint_original_expected_output
)
if self.checkpoint_inputs is not None:
self._inputs = self.checkpoint_inputs
if self.checkpoint_kickoff_event_id is not None:


@@ -2,56 +2,14 @@ from collections.abc import Iterator
import contextvars
from datetime import datetime, timezone
import itertools
from typing import Any, TypedDict
from typing import Any
import uuid
from pydantic import BaseModel, Field, SerializationInfo
from pydantic import BaseModel, Field
from crewai.utilities.serialization import Serializable, to_serializable
def _is_trace_context(info: SerializationInfo) -> bool:
"""Check if serialization is happening in trace context."""
return bool(info.context and info.context.get("trace"))
class AgentRef(TypedDict):
id: str
role: str
class TaskRef(TypedDict):
id: str
name: str
def _trace_agent_ref(agent: Any) -> AgentRef | None:
"""Return a lightweight agent reference for trace serialization."""
if agent is None:
return None
return AgentRef(
id=str(getattr(agent, "id", "")),
role=getattr(agent, "role", ""),
)
def _trace_task_ref(task: Any) -> TaskRef | None:
"""Return a lightweight task reference for trace serialization."""
if task is None:
return None
return TaskRef(
id=str(getattr(task, "id", "")),
name=str(getattr(task, "name", None) or getattr(task, "description", "")),
)
def _trace_tool_names(tools: Any) -> list[str] | None:
"""Return a list of tool names for trace serialization."""
if not tools:
return None
return [getattr(t, "name", str(t)) for t in tools]
_emission_counter: contextvars.ContextVar[Iterator[int]] = contextvars.ContextVar(
"_emission_counter"
)


@@ -1,7 +1,7 @@
"""Trace collection listener for orchestrating trace collection."""
import os
from typing import Any
from typing import Any, ClassVar
import uuid
from typing_extensions import Self
@@ -129,13 +129,18 @@ from crewai.events.utils.console_formatter import ConsoleFormatter
from crewai.utilities.version import get_crewai_version
_TRACE_CONTEXT: dict[str, bool] = {"trace": True}
"""Serialization context that triggers lightweight field serializers on event models."""
class TraceCollectionListener(BaseEventListener):
"""Trace collection listener that orchestrates trace collection."""
complex_events: ClassVar[list[str]] = [
"task_started",
"task_completed",
"llm_call_started",
"llm_call_completed",
"agent_execution_started",
"agent_execution_completed",
]
_instance: Self | None = None
_initialized: bool = False
_listeners_setup: bool = False
@@ -819,19 +824,9 @@ class TraceCollectionListener(BaseEventListener):
def _build_event_data(
self, event_type: str, event: Any, source: Any
) -> dict[str, Any]:
"""Build event data with context-based serialization to reduce trace bloat.
Field serializers on event models check for context={"trace": True} and
return lightweight references instead of full nested objects. This replaces
the old denylist approach with Pydantic v2's native context mechanism.
Only crew_kickoff_started gets a full crew structure (built separately).
Complex events (task_started, etc.) use custom projections for specific shapes.
All other events get context-aware serialization automatically.
"""
if event_type == "crew_kickoff_started":
return self._build_crew_started_data(event)
"""Build event data"""
if event_type not in self.complex_events:
return safe_serialize_to_dict(event)
if event_type == "task_started":
task_name = event.task.name or event.task.description
task_display_name = (
@@ -872,77 +867,19 @@ class TraceCollectionListener(BaseEventListener):
"agent_backstory": event.agent.backstory,
}
if event_type == "llm_call_started":
event_data = safe_serialize_to_dict(event, context=_TRACE_CONTEXT)
event_data = safe_serialize_to_dict(event)
event_data["task_name"] = event.task_name or getattr(
event, "task_description", None
)
return event_data
if event_type == "llm_call_completed":
return safe_serialize_to_dict(event, context=_TRACE_CONTEXT)
return safe_serialize_to_dict(event)
return safe_serialize_to_dict(event, context=_TRACE_CONTEXT)
def _build_crew_started_data(self, event: Any) -> dict[str, Any]:
"""Build comprehensive crew structure for crew_kickoff_started event.
This is the ONE place where we serialize the full crew structure.
Subsequent events use lightweight references via field serializers.
"""
event_data = safe_serialize_to_dict(event, context=_TRACE_CONTEXT)
crew = getattr(event, "crew", None)
if crew is not None:
agents_data = []
for agent in getattr(crew, "agents", []) or []:
agent_data = {
"id": str(getattr(agent, "id", "")),
"role": getattr(agent, "role", ""),
"goal": getattr(agent, "goal", ""),
"backstory": getattr(agent, "backstory", ""),
"verbose": getattr(agent, "verbose", False),
"allow_delegation": getattr(agent, "allow_delegation", False),
"max_iter": getattr(agent, "max_iter", None),
"max_rpm": getattr(agent, "max_rpm", None),
}
tools = getattr(agent, "tools", None)
if tools:
agent_data["tool_names"] = [
getattr(t, "name", str(t)) for t in tools
]
agents_data.append(agent_data)
tasks_data = []
for task in getattr(crew, "tasks", []) or []:
task_data = {
"id": str(getattr(task, "id", "")),
"name": getattr(task, "name", None),
"description": getattr(task, "description", ""),
"expected_output": getattr(task, "expected_output", ""),
"async_execution": getattr(task, "async_execution", False),
"human_input": getattr(task, "human_input", False),
}
task_agent = getattr(task, "agent", None)
if task_agent:
task_data["agent_ref"] = {
"id": str(getattr(task_agent, "id", "")),
"role": getattr(task_agent, "role", ""),
}
context_tasks = getattr(task, "context", None)
if context_tasks:
task_data["context_task_ids"] = [
str(getattr(ct, "id", "")) for ct in context_tasks
]
tasks_data.append(task_data)
event_data["crew_structure"] = {
"agents": agents_data,
"tasks": tasks_data,
"process": str(getattr(crew, "process", "")),
"verbose": getattr(crew, "verbose", False),
"memory": getattr(crew, "memory", False),
}
return event_data
return {
"event_type": event_type,
"event": safe_serialize_to_dict(event),
"source": source,
}
def _show_tracing_disabled_message(self) -> None:
"""Show a message when tracing is disabled."""


@@ -429,22 +429,10 @@ def mark_first_execution_done(user_consented: bool = False) -> None:
p.write_text(json.dumps(data, indent=2))
def safe_serialize_to_dict(
obj: Any,
exclude: set[str] | None = None,
context: dict[str, Any] | None = None,
) -> dict[str, Any]:
"""Safely serialize an object to a dictionary for event data.
Args:
obj: Object to serialize.
exclude: Set of keys to exclude from the result.
context: Optional context dict passed through to Pydantic's model_dump().
Field serializers can inspect this to customize output
(e.g. context={"trace": True} for lightweight trace serialization).
"""
def safe_serialize_to_dict(obj: Any, exclude: set[str] | None = None) -> dict[str, Any]:
"""Safely serialize an object to a dictionary for event data."""
try:
serialized = to_serializable(obj, exclude, context=context)
serialized = to_serializable(obj, exclude)
if isinstance(serialized, dict):
return serialized
return {"serialized_data": serialized}


@@ -5,17 +5,11 @@ from __future__ import annotations
from collections.abc import Sequence
from typing import Any, Literal
from pydantic import ConfigDict, SerializationInfo, field_serializer, model_validator
from pydantic import ConfigDict, model_validator
from typing_extensions import Self
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.events.base_events import (
BaseEvent,
_is_trace_context,
_trace_agent_ref,
_trace_task_ref,
_trace_tool_names,
)
from crewai.events.base_events import BaseEvent
from crewai.tools.base_tool import BaseTool
from crewai.tools.structured_tool import CrewStructuredTool
@@ -37,21 +31,6 @@ class AgentExecutionStartedEvent(BaseEvent):
_set_agent_fingerprint(self, self.agent)
return self
@field_serializer("agent")
@classmethod
def _serialize_agent(cls, v: Any, info: SerializationInfo) -> Any:
return _trace_agent_ref(v) if _is_trace_context(info) else v
@field_serializer("task")
@classmethod
def _serialize_task(cls, v: Any, info: SerializationInfo) -> Any:
return _trace_task_ref(v) if _is_trace_context(info) else v
@field_serializer("tools")
@classmethod
def _serialize_tools(cls, v: Any, info: SerializationInfo) -> Any:
return _trace_tool_names(v) if _is_trace_context(info) else v
class AgentExecutionCompletedEvent(BaseEvent):
"""Event emitted when an agent completes executing a task"""
@@ -69,16 +48,6 @@ class AgentExecutionCompletedEvent(BaseEvent):
_set_agent_fingerprint(self, self.agent)
return self
@field_serializer("agent")
@classmethod
def _serialize_agent(cls, v: Any, info: SerializationInfo) -> Any:
return _trace_agent_ref(v) if _is_trace_context(info) else v
@field_serializer("task")
@classmethod
def _serialize_task(cls, v: Any, info: SerializationInfo) -> Any:
return _trace_task_ref(v) if _is_trace_context(info) else v
class AgentExecutionErrorEvent(BaseEvent):
"""Event emitted when an agent encounters an error during execution"""
@@ -96,16 +65,6 @@ class AgentExecutionErrorEvent(BaseEvent):
_set_agent_fingerprint(self, self.agent)
return self
@field_serializer("agent")
@classmethod
def _serialize_agent(cls, v: Any, info: SerializationInfo) -> Any:
return _trace_agent_ref(v) if _is_trace_context(info) else v
@field_serializer("task")
@classmethod
def _serialize_task(cls, v: Any, info: SerializationInfo) -> Any:
return _trace_task_ref(v) if _is_trace_context(info) else v
# New event classes for LiteAgent
class LiteAgentExecutionStartedEvent(BaseEvent):
@@ -118,11 +77,6 @@ class LiteAgentExecutionStartedEvent(BaseEvent):
model_config = ConfigDict(arbitrary_types_allowed=True)
@field_serializer("tools")
@classmethod
def _serialize_tools(cls, v: Any, info: SerializationInfo) -> Any:
return _trace_tool_names(v) if _is_trace_context(info) else v
class LiteAgentExecutionCompletedEvent(BaseEvent):
"""Event emitted when a LiteAgent completes execution"""


@@ -1,8 +1,6 @@
from typing import TYPE_CHECKING, Any, Literal
from pydantic import SerializationInfo, field_serializer
from crewai.events.base_events import BaseEvent, _is_trace_context
from crewai.events.base_events import BaseEvent
if TYPE_CHECKING:
@@ -28,14 +26,6 @@ class CrewBaseEvent(BaseEvent):
if self.crew.fingerprint.metadata:
self.fingerprint_metadata = self.crew.fingerprint.metadata
@field_serializer("crew")
@classmethod
def _serialize_crew(cls, v: Any, info: SerializationInfo) -> Any:
"""Exclude crew in trace context — crew_kickoff_started builds structure separately."""
if _is_trace_context(info):
return None
return v
def to_json(self, exclude: set[str] | None = None) -> Any:
if exclude is None:
exclude = set()


@@ -1,9 +1,9 @@
from enum import Enum
from typing import Any, Literal
from pydantic import BaseModel, SerializationInfo, field_serializer
from pydantic import BaseModel
from crewai.events.base_events import BaseEvent, _is_trace_context
from crewai.events.base_events import BaseEvent
class LLMEventBase(BaseEvent):
@@ -49,16 +49,6 @@ class LLMCallStartedEvent(LLMEventBase):
callbacks: list[Any] | None = None
available_functions: dict[str, Any] | None = None
@field_serializer("callbacks")
@classmethod
def _serialize_callbacks(cls, v: Any, info: SerializationInfo) -> Any:
return None if _is_trace_context(info) else v
@field_serializer("available_functions")
@classmethod
def _serialize_available_functions(cls, v: Any, info: SerializationInfo) -> Any:
return None if _is_trace_context(info) else v
class LLMCallCompletedEvent(LLMEventBase):
"""Event emitted when a LLM call completes"""

View File

@@ -1,8 +1,6 @@
from typing import Any, Literal
from pydantic import SerializationInfo, field_serializer
from crewai.events.base_events import BaseEvent, _is_trace_context, _trace_task_ref
from crewai.events.base_events import BaseEvent
from crewai.tasks.task_output import TaskOutput
@@ -34,11 +32,6 @@ class TaskStartedEvent(BaseEvent):
super().__init__(**data)
_set_task_fingerprint(self, self.task)
@field_serializer("task")
@classmethod
def _serialize_task(cls, v: Any, info: SerializationInfo) -> Any:
return _trace_task_ref(v) if _is_trace_context(info) else v
class TaskCompletedEvent(BaseEvent):
"""Event emitted when a task completes"""
@@ -51,11 +44,6 @@ class TaskCompletedEvent(BaseEvent):
super().__init__(**data)
_set_task_fingerprint(self, self.task)
@field_serializer("task")
@classmethod
def _serialize_task(cls, v: Any, info: SerializationInfo) -> Any:
return _trace_task_ref(v) if _is_trace_context(info) else v
class TaskFailedEvent(BaseEvent):
"""Event emitted when a task fails"""
@@ -68,11 +56,6 @@ class TaskFailedEvent(BaseEvent):
super().__init__(**data)
_set_task_fingerprint(self, self.task)
@field_serializer("task")
@classmethod
def _serialize_task(cls, v: Any, info: SerializationInfo) -> Any:
return _trace_task_ref(v) if _is_trace_context(info) else v
class TaskEvaluationEvent(BaseEvent):
"""Event emitted when a task evaluation is completed"""
@@ -84,8 +67,3 @@ class TaskEvaluationEvent(BaseEvent):
def __init__(self, **data: Any) -> None:
super().__init__(**data)
_set_task_fingerprint(self, self.task)
@field_serializer("task")
@classmethod
def _serialize_task(cls, v: Any, info: SerializationInfo) -> Any:
return _trace_task_ref(v) if _is_trace_context(info) else v
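The event hunks above all follow one pattern: a `field_serializer` that swaps a heavy object (task, crew, agent) for a lightweight reference when serialization happens in a trace context. A minimal sketch of that pattern, assuming `_is_trace_context` simply inspects the `context` dict passed to `model_dump()` (the `Task` model and the `task_id` ref shape here are illustrative, not the actual crewAI definitions):

```python
from typing import Any

from pydantic import BaseModel, SerializationInfo, field_serializer


def _is_trace_context(info: SerializationInfo) -> bool:
    # model_dump(context=...) surfaces the context dict on SerializationInfo.
    return bool((info.context or {}).get("trace"))


class Task(BaseModel):
    id: str
    description: str


class TaskStartedEvent(BaseModel):
    task: Task

    @field_serializer("task")
    def _serialize_task(self, v: Task, info: SerializationInfo) -> Any:
        # In trace context, emit only a lightweight reference instead of the full task.
        return {"task_id": v.id} if _is_trace_context(info) else v.model_dump()


event = TaskStartedEvent(task=Task(id="t-1", description="demo"))
full = event.model_dump()
traced = event.model_dump(context={"trace": True})
```

The same serializer body is repeated per event class in the diff, which is why removing the behavior touches so many files.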

View File

@@ -2,9 +2,9 @@ from collections.abc import Callable
from datetime import datetime
from typing import Any, Literal
from pydantic import ConfigDict, SerializationInfo, field_serializer
from pydantic import ConfigDict
from crewai.events.base_events import BaseEvent, _is_trace_context, _trace_agent_ref
from crewai.events.base_events import BaseEvent
class ToolUsageEvent(BaseEvent):
@@ -26,11 +26,6 @@ class ToolUsageEvent(BaseEvent):
model_config = ConfigDict(arbitrary_types_allowed=True)
@field_serializer("agent")
@classmethod
def _serialize_agent(cls, v: Any, info: SerializationInfo) -> Any:
return _trace_agent_ref(v) if _is_trace_context(info) else v
def __init__(self, **data: Any) -> None:
if data.get("from_task"):
task = data["from_task"]
@@ -104,11 +99,6 @@ class ToolExecutionErrorEvent(BaseEvent):
tool_class: Callable[..., Any]
agent: Any | None = None
@field_serializer("agent")
@classmethod
def _serialize_agent(cls, v: Any, info: SerializationInfo) -> Any:
return _trace_agent_ref(v) if _is_trace_context(info) else v
def __init__(self, **data: Any) -> None:
super().__init__(**data)
# Set fingerprint data from the agent

View File

@@ -16,6 +16,7 @@ from typing import (
get_origin,
)
import uuid
import warnings
from pydantic import (
UUID4,
@@ -25,7 +26,7 @@ from pydantic import (
field_validator,
model_validator,
)
from typing_extensions import Self, deprecated
from typing_extensions import Self
if TYPE_CHECKING:
@@ -172,12 +173,9 @@ def _kickoff_with_a2a_support(
)
@deprecated(
"LiteAgent is deprecated and will be removed in v2.0.0.",
category=FutureWarning,
)
class LiteAgent(FlowTrackable, BaseModel):
"""A lightweight agent that can process messages and use tools.
"""
A lightweight agent that can process messages and use tools.
.. deprecated::
LiteAgent is deprecated and will be removed in a future version.
@@ -280,6 +278,18 @@ class LiteAgent(FlowTrackable, BaseModel):
)
_memory: Any = PrivateAttr(default=None)
@model_validator(mode="after")
def emit_deprecation_warning(self) -> Self:
"""Emit deprecation warning for LiteAgent usage."""
warnings.warn(
"LiteAgent is deprecated and will be removed in a future version. "
"Use Agent().kickoff(messages) instead, which provides the same "
"functionality with additional features like memory and knowledge support.",
DeprecationWarning,
stacklevel=2,
)
return self
@model_validator(mode="after")
def setup_llm(self) -> Self:
"""Set up the LLM and other components after initialization."""

View File

@@ -51,7 +51,6 @@ from crewai.utilities.exceptions.context_window_exceeding_exception import (
)
from crewai.utilities.logger_utils import suppress_warnings
from crewai.utilities.string_utils import sanitize_tool_name
from crewai.utilities.token_counter_callback import TokenCalcHandler
try:
@@ -76,13 +75,8 @@ try:
from litellm.types.utils import (
ChatCompletionDeltaToolCall,
Choices,
Delta as LiteLLMDelta,
Function,
Message,
ModelResponse,
ModelResponseBase,
ModelResponseStream,
StreamingChoices as LiteLLMStreamingChoices,
)
from litellm.utils import supports_response_schema
@@ -91,11 +85,6 @@ except ImportError:
LITELLM_AVAILABLE = False
litellm = None # type: ignore[assignment]
Choices = None # type: ignore[assignment, misc]
LiteLLMDelta = None # type: ignore[assignment, misc]
Message = None # type: ignore[assignment, misc]
ModelResponseBase = None # type: ignore[assignment, misc]
ModelResponseStream = None # type: ignore[assignment, misc]
LiteLLMStreamingChoices = None # type: ignore[assignment, misc]
get_supported_openai_params = None # type: ignore[assignment]
ChatCompletionDeltaToolCall = None # type: ignore[assignment, misc]
Function = None # type: ignore[assignment, misc]
@@ -720,7 +709,7 @@ class LLM(BaseLLM):
chunk_content = None
response_id = None
if isinstance(chunk, ModelResponseBase):
if hasattr(chunk, "id"):
response_id = chunk.id
# Safely extract content from various chunk formats
@@ -729,16 +718,18 @@ class LLM(BaseLLM):
choices = None
if isinstance(chunk, dict) and "choices" in chunk:
choices = chunk["choices"]
elif isinstance(chunk, ModelResponseStream):
choices = chunk.choices
elif hasattr(chunk, "choices"):
# Check if choices is not a type but an actual attribute with value
if not isinstance(chunk.choices, type):
choices = chunk.choices
# Try to extract usage information if available
# NOTE: usage is a pydantic extra field on ModelResponseBase,
# so it must be accessed via model_extra.
if isinstance(chunk, dict) and "usage" in chunk:
usage_info = chunk["usage"]
elif isinstance(chunk, ModelResponseBase) and chunk.model_extra:
usage_info = chunk.model_extra.get("usage") or usage_info
elif hasattr(chunk, "usage"):
# Check if usage is not a type but an actual attribute with value
if not isinstance(chunk.usage, type):
usage_info = chunk.usage
if choices and len(choices) > 0:
choice = choices[0]
@@ -747,7 +738,7 @@ class LLM(BaseLLM):
delta = None
if isinstance(choice, dict) and "delta" in choice:
delta = choice["delta"]
elif isinstance(choice, LiteLLMStreamingChoices):
elif hasattr(choice, "delta"):
delta = choice.delta
# Extract content from delta
@@ -757,7 +748,7 @@ class LLM(BaseLLM):
if "content" in delta and delta["content"] is not None:
chunk_content = delta["content"]
# Handle object format
elif isinstance(delta, LiteLLMDelta):
elif hasattr(delta, "content"):
chunk_content = delta.content
# Handle case where content might be None or empty
@@ -830,8 +821,9 @@ class LLM(BaseLLM):
choices = None
if isinstance(last_chunk, dict) and "choices" in last_chunk:
choices = last_chunk["choices"]
elif isinstance(last_chunk, ModelResponseStream):
choices = last_chunk.choices
elif hasattr(last_chunk, "choices"):
if not isinstance(last_chunk.choices, type):
choices = last_chunk.choices
if choices and len(choices) > 0:
choice = choices[0]
@@ -840,14 +832,14 @@ class LLM(BaseLLM):
message = None
if isinstance(choice, dict) and "message" in choice:
message = choice["message"]
elif isinstance(choice, Choices):
elif hasattr(choice, "message"):
message = choice.message
if message:
content = None
if isinstance(message, dict) and "content" in message:
content = message["content"]
elif isinstance(message, Message):
elif hasattr(message, "content"):
content = message.content
if content:
@@ -874,23 +866,24 @@ class LLM(BaseLLM):
choices = None
if isinstance(last_chunk, dict) and "choices" in last_chunk:
choices = last_chunk["choices"]
elif isinstance(last_chunk, ModelResponseStream):
choices = last_chunk.choices
elif hasattr(last_chunk, "choices"):
if not isinstance(last_chunk.choices, type):
choices = last_chunk.choices
if choices and len(choices) > 0:
choice = choices[0]
delta = None
if isinstance(choice, dict) and "delta" in choice:
delta = choice["delta"]
elif isinstance(choice, LiteLLMStreamingChoices):
delta = choice.delta
message = None
if isinstance(choice, dict) and "message" in choice:
message = choice["message"]
elif hasattr(choice, "message"):
message = choice.message
if delta:
if isinstance(delta, dict) and "tool_calls" in delta:
tool_calls = delta["tool_calls"]
elif isinstance(delta, LiteLLMDelta):
tool_calls = delta.tool_calls
if message:
if isinstance(message, dict) and "tool_calls" in message:
tool_calls = message["tool_calls"]
elif hasattr(message, "tool_calls"):
tool_calls = message.tool_calls
except Exception as e:
logging.debug(f"Error checking for tool calls: {e}")
@@ -1044,7 +1037,7 @@ class LLM(BaseLLM):
"""
if callbacks and len(callbacks) > 0:
for callback in callbacks:
if isinstance(callback, TokenCalcHandler):
if hasattr(callback, "log_success_event"):
# Use the usage_info we've been tracking
if not usage_info:
# Try to get usage from the last chunk if we haven't already
@@ -1055,14 +1048,9 @@ class LLM(BaseLLM):
and "usage" in last_chunk
):
usage_info = last_chunk["usage"]
elif (
isinstance(last_chunk, ModelResponseBase)
and last_chunk.model_extra
):
usage_info = (
last_chunk.model_extra.get("usage")
or usage_info
)
elif hasattr(last_chunk, "usage"):
if not isinstance(last_chunk.usage, type):
usage_info = last_chunk.usage
except Exception as e:
logging.debug(f"Error extracting usage info: {e}")
@@ -1135,10 +1123,13 @@ class LLM(BaseLLM):
params["response_model"] = response_model
response = litellm.completion(**params)
if isinstance(response, ModelResponseBase) and response.model_extra:
usage_info = response.model_extra.get("usage")
if usage_info:
self._track_token_usage_internal(usage_info)
if (
hasattr(response, "usage")
and not isinstance(response.usage, type)
and response.usage
):
usage_info = response.usage
self._track_token_usage_internal(usage_info)
except LLMContextLengthExceededError:
# Re-raise our own context length error
@@ -1150,11 +1141,7 @@ class LLM(BaseLLM):
raise LLMContextLengthExceededError(error_msg) from e
raise
response_usage = self._usage_to_dict(
response.model_extra.get("usage")
if isinstance(response, ModelResponseBase) and response.model_extra
else None
)
response_usage = self._usage_to_dict(getattr(response, "usage", None))
# --- 2) Handle structured output response (when response_model is provided)
if response_model is not None:
@@ -1179,13 +1166,8 @@ class LLM(BaseLLM):
# --- 3) Handle callbacks with usage info
if callbacks and len(callbacks) > 0:
for callback in callbacks:
if isinstance(callback, TokenCalcHandler):
usage_info = (
response.model_extra.get("usage")
if isinstance(response, ModelResponseBase)
and response.model_extra
else None
)
if hasattr(callback, "log_success_event"):
usage_info = getattr(response, "usage", None)
if usage_info:
callback.log_success_event(
kwargs=params,
@@ -1194,7 +1176,7 @@ class LLM(BaseLLM):
end_time=0,
)
# --- 4) Check for tool calls
tool_calls = response_message.tool_calls or []
tool_calls = getattr(response_message, "tool_calls", [])
# --- 5) If no tool calls or no available functions, return the text response directly as long as there is a text response
if (not tool_calls or not available_functions) and text_response:
@@ -1287,10 +1269,13 @@ class LLM(BaseLLM):
params["response_model"] = response_model
response = await litellm.acompletion(**params)
if isinstance(response, ModelResponseBase) and response.model_extra:
usage_info = response.model_extra.get("usage")
if usage_info:
self._track_token_usage_internal(usage_info)
if (
hasattr(response, "usage")
and not isinstance(response.usage, type)
and response.usage
):
usage_info = response.usage
self._track_token_usage_internal(usage_info)
except LLMContextLengthExceededError:
# Re-raise our own context length error
@@ -1302,11 +1287,7 @@ class LLM(BaseLLM):
raise LLMContextLengthExceededError(error_msg) from e
raise
response_usage = self._usage_to_dict(
response.model_extra.get("usage")
if isinstance(response, ModelResponseBase) and response.model_extra
else None
)
response_usage = self._usage_to_dict(getattr(response, "usage", None))
if response_model is not None:
if isinstance(response, BaseModel):
@@ -1328,13 +1309,8 @@ class LLM(BaseLLM):
if callbacks and len(callbacks) > 0:
for callback in callbacks:
if isinstance(callback, TokenCalcHandler):
usage_info = (
response.model_extra.get("usage")
if isinstance(response, ModelResponseBase)
and response.model_extra
else None
)
if hasattr(callback, "log_success_event"):
usage_info = getattr(response, "usage", None)
if usage_info:
callback.log_success_event(
kwargs=params,
@@ -1343,7 +1319,7 @@ class LLM(BaseLLM):
end_time=0,
)
tool_calls = response_message.tool_calls or []
tool_calls = getattr(response_message, "tool_calls", [])
if (not tool_calls or not available_functions) and text_response:
self._handle_emit_call_events(
@@ -1418,19 +1394,18 @@ class LLM(BaseLLM):
async for chunk in await litellm.acompletion(**params):
chunk_count += 1
chunk_content = None
response_id = chunk.id if isinstance(chunk, ModelResponseBase) else None
response_id = chunk.id if hasattr(chunk, "id") else None
try:
choices = None
if isinstance(chunk, dict) and "choices" in chunk:
choices = chunk["choices"]
elif isinstance(chunk, ModelResponseStream):
choices = chunk.choices
elif hasattr(chunk, "choices"):
if not isinstance(chunk.choices, type):
choices = chunk.choices
if isinstance(chunk, ModelResponseBase) and chunk.model_extra:
chunk_usage = chunk.model_extra.get("usage")
if chunk_usage is not None:
usage_info = chunk_usage
if hasattr(chunk, "usage") and chunk.usage is not None:
usage_info = chunk.usage
if choices and len(choices) > 0:
first_choice = choices[0]
@@ -1438,19 +1413,19 @@ class LLM(BaseLLM):
if isinstance(first_choice, dict):
delta = first_choice.get("delta", {})
elif isinstance(first_choice, LiteLLMStreamingChoices):
elif hasattr(first_choice, "delta"):
delta = first_choice.delta
if delta:
if isinstance(delta, dict):
chunk_content = delta.get("content")
elif isinstance(delta, LiteLLMDelta):
elif hasattr(delta, "content"):
chunk_content = delta.content
tool_calls: list[ChatCompletionDeltaToolCall] | None = None
if isinstance(delta, dict):
tool_calls = delta.get("tool_calls")
elif isinstance(delta, LiteLLMDelta):
elif hasattr(delta, "tool_calls"):
tool_calls = delta.tool_calls
if tool_calls:
@@ -1486,7 +1461,7 @@ class LLM(BaseLLM):
if callbacks and len(callbacks) > 0 and usage_info:
for callback in callbacks:
if isinstance(callback, TokenCalcHandler):
if hasattr(callback, "log_success_event"):
callback.log_success_event(
kwargs=params,
response_obj={"usage": usage_info},
@@ -1945,7 +1920,7 @@ class LLM(BaseLLM):
return None
if isinstance(usage, dict):
return usage
if isinstance(usage, BaseModel):
if hasattr(usage, "model_dump"):
result: dict[str, Any] = usage.model_dump()
return result
if hasattr(usage, "__dict__"):
@@ -2009,7 +1984,7 @@ class LLM(BaseLLM):
)
return messages
provider = self.provider or self.model
provider = getattr(self, "provider", None) or self.model
for msg in messages:
files = msg.get("files")
@@ -2060,7 +2035,7 @@ class LLM(BaseLLM):
)
return messages
provider = self.provider or self.model
provider = getattr(self, "provider", None) or self.model
for msg in messages:
files = msg.get("files")

View File

@@ -172,8 +172,6 @@ class BaseLLM(BaseModel, ABC):
"completion_tokens": 0,
"successful_requests": 0,
"cached_prompt_tokens": 0,
"reasoning_tokens": 0,
"cache_creation_tokens": 0,
}
)
@@ -810,24 +808,14 @@ class BaseLLM(BaseModel, ABC):
cached_tokens = (
usage_data.get("cached_tokens")
or usage_data.get("cached_prompt_tokens")
or usage_data.get("cache_read_input_tokens")
or 0
)
if not cached_tokens:
prompt_details = usage_data.get("prompt_tokens_details")
if isinstance(prompt_details, dict):
cached_tokens = prompt_details.get("cached_tokens", 0) or 0
reasoning_tokens = usage_data.get("reasoning_tokens", 0) or 0
cache_creation_tokens = usage_data.get("cache_creation_tokens", 0) or 0
self._token_usage["prompt_tokens"] += prompt_tokens
self._token_usage["completion_tokens"] += completion_tokens
self._token_usage["total_tokens"] += prompt_tokens + completion_tokens
self._token_usage["successful_requests"] += 1
self._token_usage["cached_prompt_tokens"] += cached_tokens
self._token_usage["reasoning_tokens"] += reasoning_tokens
self._token_usage["cache_creation_tokens"] += cache_creation_tokens
def get_token_usage_summary(self) -> UsageMetrics:
"""Get summary of token usage for this LLM instance.

View File

@@ -11,14 +11,10 @@ from crewai.events.types.llm_events import LLMCallType
from crewai.llms.base_llm import BaseLLM, JsonResponseFormat, llm_call_context
from crewai.llms.hooks.base import BaseInterceptor
from crewai.llms.hooks.transport import AsyncHTTPTransport, HTTPTransport
from crewai.llms.providers.utils.common import safe_tool_conversion
from crewai.utilities.agent_utils import is_context_length_exceeded
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededError,
)
from crewai.utilities.pydantic_schema_utils import (
sanitize_tool_params_for_anthropic_strict,
)
from crewai.utilities.types import LLMMessage
@@ -193,41 +189,16 @@ class AnthropicCompletion(BaseLLM):
@model_validator(mode="after")
def _init_clients(self) -> AnthropicCompletion:
"""Eagerly build clients when the API key is available, otherwise
defer so ``LLM(model="anthropic/...")`` can be constructed at module
import time even before deployment env vars are set.
"""
try:
self._client = self._build_sync_client()
self._async_client = self._build_async_client()
except ValueError:
pass
return self
self._client = Anthropic(**self._get_client_params())
def _build_sync_client(self) -> Any:
return Anthropic(**self._get_client_params())
def _build_async_client(self) -> Any:
# Skip the sync httpx.Client that `_get_client_params` would
# otherwise construct under `interceptor`; we attach an async one
# below and would leak the sync one if both were built.
async_client_params = self._get_client_params(include_http_client=False)
async_client_params = self._get_client_params()
if self.interceptor:
async_transport = AsyncHTTPTransport(interceptor=self.interceptor)
async_client_params["http_client"] = httpx.AsyncClient(
transport=async_transport
)
return AsyncAnthropic(**async_client_params)
async_http_client = httpx.AsyncClient(transport=async_transport)
async_client_params["http_client"] = async_http_client
def _get_sync_client(self) -> Any:
if self._client is None:
self._client = self._build_sync_client()
return self._client
def _get_async_client(self) -> Any:
if self._async_client is None:
self._async_client = self._build_async_client()
return self._async_client
self._async_client = AsyncAnthropic(**async_client_params)
return self
def to_config_dict(self) -> dict[str, Any]:
"""Extend base config with Anthropic-specific fields."""
@@ -242,15 +213,8 @@ class AnthropicCompletion(BaseLLM):
config["timeout"] = self.timeout
return config
def _get_client_params(self, include_http_client: bool = True) -> dict[str, Any]:
"""Get client parameters.
Args:
include_http_client: When True (default) and an interceptor is
set, attach a sync ``httpx.Client``. The async builder
passes ``False`` so it can attach its own async client
without leaking a sync one.
"""
def _get_client_params(self) -> dict[str, Any]:
"""Get client parameters."""
if self.api_key is None:
self.api_key = os.getenv("ANTHROPIC_API_KEY")
@@ -264,7 +228,7 @@ class AnthropicCompletion(BaseLLM):
"max_retries": self.max_retries,
}
if include_http_client and self.interceptor:
if self.interceptor:
transport = HTTPTransport(interceptor=self.interceptor)
http_client = httpx.Client(transport=transport)
client_params["http_client"] = http_client # type: ignore[assignment]
@@ -509,8 +473,10 @@ class AnthropicCompletion(BaseLLM):
continue
try:
from crewai.llms.providers.utils.common import safe_tool_conversion
name, description, parameters = safe_tool_conversion(tool, "Anthropic")
except (KeyError, ValueError) as e:
except (ImportError, KeyError, ValueError) as e:
logging.error(f"Error converting tool to Anthropic format: {e}")
raise e
@@ -519,15 +485,8 @@ class AnthropicCompletion(BaseLLM):
"description": description,
}
func_info = tool.get("function", {})
strict_enabled = bool(func_info.get("strict"))
if parameters and isinstance(parameters, dict):
anthropic_tool["input_schema"] = (
sanitize_tool_params_for_anthropic_strict(parameters)
if strict_enabled
else parameters
)
anthropic_tool["input_schema"] = parameters
else:
anthropic_tool["input_schema"] = {
"type": "object",
@@ -535,9 +494,6 @@ class AnthropicCompletion(BaseLLM):
"required": [],
}
if strict_enabled:
anthropic_tool["strict"] = True
anthropic_tools.append(anthropic_tool)
return anthropic_tools
@@ -830,11 +786,11 @@ class AnthropicCompletion(BaseLLM):
try:
if betas:
params["betas"] = betas
response = self._get_sync_client().beta.messages.create(
response = self._client.beta.messages.create(
**params, extra_body=extra_body
)
else:
response = self._get_sync_client().messages.create(**params)
response = self._client.messages.create(**params)
except Exception as e:
if is_context_length_exceeded(e):
@@ -982,11 +938,9 @@ class AnthropicCompletion(BaseLLM):
current_tool_calls: dict[int, dict[str, Any]] = {}
stream_context = (
self._get_sync_client().beta.messages.stream(
**stream_params, extra_body=extra_body
)
self._client.beta.messages.stream(**stream_params, extra_body=extra_body)
if betas
else self._get_sync_client().messages.stream(**stream_params)
else self._client.messages.stream(**stream_params)
)
with stream_context as stream:
response_id = None
@@ -1265,9 +1219,7 @@ class AnthropicCompletion(BaseLLM):
try:
# Send tool results back to Claude for final response
final_response: Message = self._get_sync_client().messages.create(
**follow_up_params
)
final_response: Message = self._client.messages.create(**follow_up_params)
# Track token usage for follow-up call
follow_up_usage = self._extract_anthropic_token_usage(final_response)
@@ -1363,11 +1315,11 @@ class AnthropicCompletion(BaseLLM):
try:
if betas:
params["betas"] = betas
response = await self._get_async_client().beta.messages.create(
response = await self._async_client.beta.messages.create(
**params, extra_body=extra_body
)
else:
response = await self._get_async_client().messages.create(**params)
response = await self._async_client.messages.create(**params)
except Exception as e:
if is_context_length_exceeded(e):
@@ -1501,11 +1453,11 @@ class AnthropicCompletion(BaseLLM):
current_tool_calls: dict[int, dict[str, Any]] = {}
stream_context = (
self._get_async_client().beta.messages.stream(
self._async_client.beta.messages.stream(
**stream_params, extra_body=extra_body
)
if betas
else self._get_async_client().messages.stream(**stream_params)
else self._async_client.messages.stream(**stream_params)
)
async with stream_context as stream:
response_id = None
@@ -1670,7 +1622,7 @@ class AnthropicCompletion(BaseLLM):
]
try:
final_response: Message = await self._get_async_client().messages.create(
final_response: Message = await self._async_client.messages.create(
**follow_up_params
)
@@ -1752,23 +1704,18 @@ class AnthropicCompletion(BaseLLM):
def _extract_anthropic_token_usage(
response: Message | BetaMessage,
) -> dict[str, Any]:
"""Extract token usage and response metadata from Anthropic response."""
"""Extract token usage from Anthropic response."""
if hasattr(response, "usage") and response.usage:
usage = response.usage
input_tokens = getattr(usage, "input_tokens", 0)
output_tokens = getattr(usage, "output_tokens", 0)
cache_read_tokens = getattr(usage, "cache_read_input_tokens", 0) or 0
cache_creation_tokens = (
getattr(usage, "cache_creation_input_tokens", 0) or 0
)
result: dict[str, Any] = {
return {
"input_tokens": input_tokens,
"output_tokens": output_tokens,
"total_tokens": input_tokens + output_tokens,
"cached_prompt_tokens": cache_read_tokens,
"cache_creation_tokens": cache_creation_tokens,
}
return result
return {"total_tokens": 0}
def supports_multimodal(self) -> bool:
@@ -1798,8 +1745,8 @@ class AnthropicCompletion(BaseLLM):
from crewai_files.uploaders.anthropic import AnthropicFileUploader
return AnthropicFileUploader(
client=self._get_sync_client(),
async_client=self._get_async_client(),
client=self._client,
async_client=self._async_client,
)
except ImportError:
return None
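The Anthropic hunks contrast eager client construction in a validator with a deferred build: try to construct at init, swallow the credential error, and retry on first use via `_get_sync_client`. A minimal sketch of that lazy pattern, with a dict standing in for `Anthropic(**client_params)` and a hypothetical `DEMO_API_KEY` variable:

```python
import os
from typing import Any


class LazyClient:
    """Build the SDK client eagerly when credentials exist, else defer to first use."""

    def __init__(self) -> None:
        self._client: Any = None
        try:
            self._client = self._build_sync_client()
        except ValueError:
            pass  # defer: env vars may be injected after import time

    def _build_sync_client(self) -> dict[str, Any]:
        api_key = os.getenv("DEMO_API_KEY")  # hypothetical env var name
        if not api_key:
            raise ValueError("API key is required")
        return {"api_key": api_key}  # stand-in for the real SDK client

    def _get_sync_client(self) -> dict[str, Any]:
        if self._client is None:
            self._client = self._build_sync_client()
        return self._client


os.environ.pop("DEMO_API_KEY", None)
deferred = LazyClient()  # constructs fine with no credentials set
os.environ["DEMO_API_KEY"] = "sk-demo"
client = deferred._get_sync_client()  # built lazily once the key exists
```

Deferral lets `LLM(model="anthropic/...")` be created at module import before deployment env vars are set, at the cost of surfacing credential errors later, at first call rather than at construction.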

View File

@@ -116,100 +116,43 @@ class AzureCompletion(BaseLLM):
data.get("api_version") or os.getenv("AZURE_API_VERSION") or "2024-06-01"
)
# Credentials and endpoint are validated lazily in `_init_clients`
# so the LLM can be constructed before deployment env vars are set.
model = data.get("model", "")
if data["endpoint"]:
data["endpoint"] = AzureCompletion._validate_and_fix_endpoint(
data["endpoint"], model
if not data["api_key"]:
raise ValueError(
"Azure API key is required. Set AZURE_API_KEY environment variable or pass api_key parameter."
)
data["is_azure_openai_endpoint"] = AzureCompletion._is_azure_openai_endpoint(
data["endpoint"]
if not data["endpoint"]:
raise ValueError(
"Azure endpoint is required. Set AZURE_ENDPOINT environment variable or pass endpoint parameter."
)
model = data.get("model", "")
data["endpoint"] = AzureCompletion._validate_and_fix_endpoint(
data["endpoint"], model
)
data["is_openai_model"] = any(
prefix in model.lower() for prefix in ["gpt-", "o1-", "text-"]
)
return data
@staticmethod
def _is_azure_openai_endpoint(endpoint: str | None) -> bool:
if not endpoint:
return False
hostname = urlparse(endpoint).hostname or ""
return (
parsed = urlparse(data["endpoint"])
hostname = parsed.hostname or ""
data["is_azure_openai_endpoint"] = (
hostname == "openai.azure.com" or hostname.endswith(".openai.azure.com")
) and "/openai/deployments/" in endpoint
) and "/openai/deployments/" in data["endpoint"]
return data
@model_validator(mode="after")
def _init_clients(self) -> AzureCompletion:
"""Eagerly build clients when credentials are available, otherwise
defer so ``LLM(model="azure/...")`` can be constructed at module
import time even before deployment env vars are set.
"""
try:
self._client = self._build_sync_client()
self._async_client = self._build_async_client()
except ValueError:
pass
return self
def _build_sync_client(self) -> Any:
return ChatCompletionsClient(**self._make_client_kwargs())
def _build_async_client(self) -> Any:
return AsyncChatCompletionsClient(**self._make_client_kwargs())
def _make_client_kwargs(self) -> dict[str, Any]:
# Re-read env vars so that a deferred build can pick up credentials
# that weren't set at instantiation time (e.g. LLM constructed at
# module import before deployment env vars were injected).
if not self.api_key:
self.api_key = os.getenv("AZURE_API_KEY")
if not self.endpoint:
endpoint = (
os.getenv("AZURE_ENDPOINT")
or os.getenv("AZURE_OPENAI_ENDPOINT")
or os.getenv("AZURE_API_BASE")
)
if endpoint:
self.endpoint = AzureCompletion._validate_and_fix_endpoint(
endpoint, self.model
)
# Recompute the routing flag now that the endpoint is known —
# _prepare_completion_params uses it to decide whether to
# include `model` in the request body (Azure OpenAI endpoints
# embed the deployment name in the URL and reject it).
self.is_azure_openai_endpoint = (
AzureCompletion._is_azure_openai_endpoint(self.endpoint)
)
if not self.api_key:
raise ValueError(
"Azure API key is required. Set AZURE_API_KEY environment "
"variable or pass api_key parameter."
)
if not self.endpoint:
raise ValueError(
"Azure endpoint is required. Set AZURE_ENDPOINT environment "
"variable or pass endpoint parameter."
)
raise ValueError("Azure API key is required.")
client_kwargs: dict[str, Any] = {
"endpoint": self.endpoint,
"credential": AzureKeyCredential(self.api_key),
}
if self.api_version:
client_kwargs["api_version"] = self.api_version
return client_kwargs
def _get_sync_client(self) -> Any:
if self._client is None:
self._client = self._build_sync_client()
return self._client
def _get_async_client(self) -> Any:
if self._async_client is None:
self._async_client = self._build_async_client()
return self._async_client
self._client = ChatCompletionsClient(**client_kwargs)
self._async_client = AsyncChatCompletionsClient(**client_kwargs)
return self
def to_config_dict(self) -> dict[str, Any]:
"""Extend base config with Azure-specific fields."""
@@ -770,7 +713,8 @@ class AzureCompletion(BaseLLM):
) -> str | Any:
"""Handle non-streaming chat completion."""
try:
response: ChatCompletions = self._get_sync_client().complete(**params)
# Cast params to Any to avoid type checking issues with TypedDict unpacking
response: ChatCompletions = self._client.complete(**params)
return self._process_completion_response(
response=response,
params=params,
@@ -969,7 +913,7 @@ class AzureCompletion(BaseLLM):
tool_calls: dict[int, dict[str, Any]] = {}
usage_data: dict[str, Any] | None = None
for update in self._get_sync_client().complete(**params):
for update in self._client.complete(**params):
if isinstance(update, StreamingChatCompletionsUpdate):
if update.usage:
usage = update.usage
@@ -1009,9 +953,8 @@ class AzureCompletion(BaseLLM):
) -> str | Any:
"""Handle non-streaming chat completion asynchronously."""
try:
response: ChatCompletions = await self._get_async_client().complete(
**params
)
# Cast params to Any to avoid type checking issues with TypedDict unpacking
response: ChatCompletions = await self._async_client.complete(**params)
return self._process_completion_response(
response=response,
params=params,
@@ -1037,7 +980,7 @@ class AzureCompletion(BaseLLM):
usage_data: dict[str, Any] | None = None
stream = await self._get_async_client().complete(**params)
stream = await self._async_client.complete(**params)
async for update in stream:
if isinstance(update, StreamingChatCompletionsUpdate):
if hasattr(update, "usage") and update.usage:
@@ -1133,39 +1076,28 @@ class AzureCompletion(BaseLLM):
@staticmethod
def _extract_azure_token_usage(response: ChatCompletions) -> dict[str, Any]:
"""Extract token usage and response metadata from Azure response."""
"""Extract token usage from Azure response."""
if hasattr(response, "usage") and response.usage:
usage = response.usage
cached_tokens = 0
prompt_details = getattr(usage, "prompt_tokens_details", None)
if prompt_details:
cached_tokens = getattr(prompt_details, "cached_tokens", 0) or 0
reasoning_tokens = 0
completion_details = getattr(usage, "completion_tokens_details", None)
if completion_details:
reasoning_tokens = (
getattr(completion_details, "reasoning_tokens", 0) or 0
)
result: dict[str, Any] = {
return {
"prompt_tokens": getattr(usage, "prompt_tokens", 0),
"completion_tokens": getattr(usage, "completion_tokens", 0),
"total_tokens": getattr(usage, "total_tokens", 0),
"cached_prompt_tokens": cached_tokens,
"reasoning_tokens": reasoning_tokens,
}
return result
return {"total_tokens": 0}
async def aclose(self) -> None:
"""Close the async client and clean up resources.
This ensures proper cleanup of the underlying aiohttp session
to avoid unclosed connector warnings. Accesses the cached client
directly rather than going through `_get_async_client` so a
cleanup on an uninitialized LLM is a harmless no-op rather than
a credential-required error.
to avoid unclosed connector warnings.
"""
if self._async_client is not None and hasattr(self._async_client, "close"):
if hasattr(self._async_client, "close"):
await self._async_client.close()
async def __aenter__(self) -> Self:

View File

@@ -12,7 +12,6 @@ from typing_extensions import Required
from crewai.events.types.llm_events import LLMCallType
from crewai.llms.base_llm import BaseLLM, llm_call_context
from crewai.llms.providers.utils.common import safe_tool_conversion
from crewai.utilities.agent_utils import is_context_length_exceeded
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededError,
@@ -303,22 +302,6 @@ class BedrockCompletion(BaseLLM):
@model_validator(mode="after")
def _init_clients(self) -> BedrockCompletion:
"""Eagerly build the sync client when AWS credentials resolve,
otherwise defer so ``LLM(model="bedrock/...")`` can be constructed
at module import time even before deployment env vars are set.
Only credential/SDK errors are caught — programming errors like
``TypeError`` or ``AttributeError`` propagate so real bugs aren't
silently swallowed.
"""
try:
self._client = self._build_sync_client()
except (BotoCoreError, ClientError, ValueError) as e:
logging.debug("Deferring Bedrock client construction: %s", e)
self._async_exit_stack = AsyncExitStack() if AIOBOTOCORE_AVAILABLE else None
return self
def _build_sync_client(self) -> Any:
config = Config(
read_timeout=300,
retries={"max_attempts": 3, "mode": "adaptive"},
@@ -330,17 +313,9 @@ class BedrockCompletion(BaseLLM):
aws_session_token=self.aws_session_token,
region_name=self.region_name,
)
return session.client("bedrock-runtime", config=config)
def _get_sync_client(self) -> Any:
if self._client is None:
self._client = self._build_sync_client()
return self._client
def _get_async_client(self) -> Any:
"""Async client is set up separately by ``_ensure_async_client``
using ``aiobotocore`` inside an exit stack."""
return self._async_client
self._client = session.client("bedrock-runtime", config=config)
self._async_exit_stack = AsyncExitStack() if AIOBOTOCORE_AVAILABLE else None
return self
def to_config_dict(self) -> dict[str, Any]:
"""Extend base config with Bedrock-specific fields."""
@@ -680,7 +655,7 @@ class BedrockCompletion(BaseLLM):
raise ValueError(f"Invalid message format at index {i}")
# Call Bedrock Converse API with proper error handling
response = self._get_sync_client().converse(
response = self._client.converse(
modelId=self.model_id,
messages=cast(
"Sequence[MessageTypeDef | MessageOutputTypeDef]",
@@ -969,7 +944,7 @@ class BedrockCompletion(BaseLLM):
usage_data: dict[str, Any] | None = None
try:
response = self._get_sync_client().converse_stream(
response = self._client.converse_stream(
modelId=self.model_id,
messages=cast(
"Sequence[MessageTypeDef | MessageOutputTypeDef]",
@@ -1973,6 +1948,8 @@ class BedrockCompletion(BaseLLM):
tools: list[dict[str, Any]],
) -> list[ConverseToolTypeDef]:
"""Convert CrewAI tools to Converse API format following AWS specification."""
from crewai.llms.providers.utils.common import safe_tool_conversion
converse_tools: list[ConverseToolTypeDef] = []
for tool in tools:
@@ -2048,18 +2025,11 @@ class BedrockCompletion(BaseLLM):
input_tokens = usage.get("inputTokens", 0)
output_tokens = usage.get("outputTokens", 0)
total_tokens = usage.get("totalTokens", input_tokens + output_tokens)
raw_cached = (
usage.get("cacheReadInputTokenCount")
or usage.get("cacheReadInputTokens")
or 0
)
cached_tokens = raw_cached if isinstance(raw_cached, int) else 0
self._token_usage["prompt_tokens"] += input_tokens
self._token_usage["completion_tokens"] += output_tokens
self._token_usage["total_tokens"] += total_tokens
self._token_usage["successful_requests"] += 1
self._token_usage["cached_prompt_tokens"] += cached_tokens
def supports_function_calling(self) -> bool:
"""Check if the model supports function calling."""
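The usage-accounting hunk above reads counters from the Converse `usage` dict and adds them to running totals. A minimal standalone sketch of that accumulation, using plain dicts rather than the crewAI classes (names here are illustrative):

```python
def accumulate_converse_usage(totals: dict[str, int], usage: dict[str, int]) -> None:
    """Add one Bedrock Converse response's token counts to running totals in place."""
    input_tokens = usage.get("inputTokens", 0)
    output_tokens = usage.get("outputTokens", 0)
    totals["prompt_tokens"] += input_tokens
    totals["completion_tokens"] += output_tokens
    # Fall back to a computed total when the API omits totalTokens.
    totals["total_tokens"] += usage.get("totalTokens", input_tokens + output_tokens)
    totals["successful_requests"] += 1


totals = {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0, "successful_requests": 0}
accumulate_converse_usage(totals, {"inputTokens": 10, "outputTokens": 5})
```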


@@ -118,33 +118,9 @@ class GeminiCompletion(BaseLLM):
@model_validator(mode="after")
def _init_client(self) -> GeminiCompletion:
"""Eagerly build the client when credentials resolve, otherwise defer
so ``LLM(model="gemini/...")`` can be constructed at module import time
even before deployment env vars are set.
"""
try:
self._client = self._initialize_client(self.use_vertexai)
except ValueError:
pass
self._client = self._initialize_client(self.use_vertexai)
return self
def _get_sync_client(self) -> Any:
if self._client is None:
# Re-read env vars so a deferred build can pick up credentials
# that weren't set at instantiation time.
if not self.api_key:
self.api_key = os.getenv("GOOGLE_API_KEY") or os.getenv(
"GEMINI_API_KEY"
)
if not self.project:
self.project = os.getenv("GOOGLE_CLOUD_PROJECT")
self._client = self._initialize_client(self.use_vertexai)
return self._client
def _get_async_client(self) -> Any:
"""Gemini uses a single client for both sync and async calls."""
return self._get_sync_client()
def to_config_dict(self) -> dict[str, Any]:
"""Extend base config with Gemini/Vertex-specific fields."""
config = super().to_config_dict()
@@ -252,7 +228,6 @@ class GeminiCompletion(BaseLLM):
if (
hasattr(self, "client")
and self._client is not None
and hasattr(self._client, "vertexai")
and self._client.vertexai
):
@@ -1137,7 +1112,7 @@ class GeminiCompletion(BaseLLM):
try:
# The API accepts list[Content] but mypy is overly strict about variance
contents_for_api: Any = contents
response = self._get_sync_client().models.generate_content(
response = self._client.models.generate_content(
model=self.model,
contents=contents_for_api,
config=config,
@@ -1178,7 +1153,7 @@ class GeminiCompletion(BaseLLM):
# The API accepts list[Content] but mypy is overly strict about variance
contents_for_api: Any = contents
for chunk in self._get_sync_client().models.generate_content_stream(
for chunk in self._client.models.generate_content_stream(
model=self.model,
contents=contents_for_api,
config=config,
@@ -1216,7 +1191,7 @@ class GeminiCompletion(BaseLLM):
try:
# The API accepts list[Content] but mypy is overly strict about variance
contents_for_api: Any = contents
response = await self._get_async_client().aio.models.generate_content(
response = await self._client.aio.models.generate_content(
model=self.model,
contents=contents_for_api,
config=config,
@@ -1257,7 +1232,7 @@ class GeminiCompletion(BaseLLM):
# The API accepts list[Content] but mypy is overly strict about variance
contents_for_api: Any = contents
stream = await self._get_async_client().aio.models.generate_content_stream(
stream = await self._client.aio.models.generate_content_stream(
model=self.model,
contents=contents_for_api,
config=config,
@@ -1331,20 +1306,17 @@ class GeminiCompletion(BaseLLM):
@staticmethod
def _extract_token_usage(response: GenerateContentResponse) -> dict[str, Any]:
"""Extract token usage and response metadata from Gemini response."""
"""Extract token usage from Gemini response."""
if response.usage_metadata:
usage = response.usage_metadata
cached_tokens = getattr(usage, "cached_content_token_count", 0) or 0
thinking_tokens = getattr(usage, "thoughts_token_count", 0) or 0
result: dict[str, Any] = {
return {
"prompt_token_count": getattr(usage, "prompt_token_count", 0),
"candidates_token_count": getattr(usage, "candidates_token_count", 0),
"total_token_count": getattr(usage, "total_token_count", 0),
"total_tokens": getattr(usage, "total_token_count", 0),
"cached_prompt_tokens": cached_tokens,
"reasoning_tokens": thinking_tokens,
}
return result
return {"total_tokens": 0}
@staticmethod
@@ -1464,6 +1436,6 @@ class GeminiCompletion(BaseLLM):
try:
from crewai_files.uploaders.gemini import GeminiFileUploader
return GeminiFileUploader(client=self._get_sync_client())
return GeminiFileUploader(client=self._client)
except ImportError:
return None
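The hunks above remove a deferred-construction pattern: build the SDK client lazily on first access so the LLM object can be created before credentials are configured. A standalone sketch of that pattern (illustrative, not the crewAI implementation):

```python
from collections.abc import Callable
from typing import Any


class LazyClient:
    """Build the wrapped client on first access and cache it afterwards."""

    def __init__(self, factory: Callable[[], Any]) -> None:
        self._factory = factory
        self._client: Any = None

    def get(self) -> Any:
        if self._client is None:
            self._client = self._factory()
        return self._client


build_count: list[int] = []
lazy = LazyClient(lambda: build_count.append(1) or "client")
first, second = lazy.get(), lazy.get()
```

Repeated calls return the cached instance, so the factory runs exactly once.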


@@ -32,15 +32,11 @@ from crewai.events.types.llm_events import LLMCallType
from crewai.llms.base_llm import BaseLLM, JsonResponseFormat, llm_call_context
from crewai.llms.hooks.base import BaseInterceptor
from crewai.llms.hooks.transport import AsyncHTTPTransport, HTTPTransport
from crewai.llms.providers.utils.common import safe_tool_conversion
from crewai.utilities.agent_utils import is_context_length_exceeded
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededError,
)
from crewai.utilities.pydantic_schema_utils import (
generate_model_description,
sanitize_tool_params_for_openai_strict,
)
from crewai.utilities.pydantic_schema_utils import generate_model_description
from crewai.utilities.types import LLMMessage
@@ -257,40 +253,22 @@ class OpenAICompletion(BaseLLM):
@model_validator(mode="after")
def _init_clients(self) -> OpenAICompletion:
"""Eagerly build clients when the API key is available, otherwise
defer so ``LLM(model="openai/...")`` can be constructed at module
import time even before deployment env vars are set.
"""
try:
self._client = self._build_sync_client()
self._async_client = self._build_async_client()
except ValueError:
pass
return self
def _build_sync_client(self) -> Any:
client_config = self._get_client_params()
if self.interceptor:
transport = HTTPTransport(interceptor=self.interceptor)
client_config["http_client"] = httpx.Client(transport=transport)
return OpenAI(**client_config)
http_client = httpx.Client(transport=transport)
client_config["http_client"] = http_client
def _build_async_client(self) -> Any:
client_config = self._get_client_params()
self._client = OpenAI(**client_config)
async_client_config = self._get_client_params()
if self.interceptor:
transport = AsyncHTTPTransport(interceptor=self.interceptor)
client_config["http_client"] = httpx.AsyncClient(transport=transport)
return AsyncOpenAI(**client_config)
async_transport = AsyncHTTPTransport(interceptor=self.interceptor)
async_http_client = httpx.AsyncClient(transport=async_transport)
async_client_config["http_client"] = async_http_client
def _get_sync_client(self) -> Any:
if self._client is None:
self._client = self._build_sync_client()
return self._client
def _get_async_client(self) -> Any:
if self._async_client is None:
self._async_client = self._build_async_client()
return self._async_client
self._async_client = AsyncOpenAI(**async_client_config)
return self
@property
def last_response_id(self) -> str | None:
@@ -786,6 +764,8 @@ class OpenAICompletion(BaseLLM):
"function": {"name": "...", "description": "...", "parameters": {...}}
}
"""
from crewai.llms.providers.utils.common import safe_tool_conversion
responses_tools = []
for tool in tools:
@@ -817,7 +797,7 @@ class OpenAICompletion(BaseLLM):
) -> str | ResponsesAPIResult | Any:
"""Handle non-streaming Responses API call."""
try:
response: Response = self._get_sync_client().responses.create(**params)
response: Response = self._client.responses.create(**params)
# Track response ID for auto-chaining
if self.auto_chain and response.id:
@@ -953,9 +933,7 @@ class OpenAICompletion(BaseLLM):
) -> str | ResponsesAPIResult | Any:
"""Handle async non-streaming Responses API call."""
try:
response: Response = await self._get_async_client().responses.create(
**params
)
response: Response = await self._async_client.responses.create(**params)
# Track response ID for auto-chaining
if self.auto_chain and response.id:
@@ -1091,7 +1069,7 @@ class OpenAICompletion(BaseLLM):
final_response: Response | None = None
usage: dict[str, Any] | None = None
stream = self._get_sync_client().responses.create(**params)
stream = self._client.responses.create(**params)
response_id_stream = None
for event in stream:
@@ -1219,7 +1197,7 @@ class OpenAICompletion(BaseLLM):
final_response: Response | None = None
usage: dict[str, Any] | None = None
stream = await self._get_async_client().responses.create(**params)
stream = await self._async_client.responses.create(**params)
response_id_stream = None
async for event in stream:
@@ -1346,23 +1324,19 @@ class OpenAICompletion(BaseLLM):
]
def _extract_responses_token_usage(self, response: Response) -> dict[str, Any]:
"""Extract token usage and response metadata from Responses API response."""
"""Extract token usage from Responses API response."""
if response.usage:
result: dict[str, Any] = {
result = {
"prompt_tokens": response.usage.input_tokens,
"completion_tokens": response.usage.output_tokens,
"total_tokens": response.usage.total_tokens,
}
# Extract cached prompt tokens from input_tokens_details
input_details = getattr(response.usage, "input_tokens_details", None)
if input_details:
result["cached_prompt_tokens"] = (
getattr(input_details, "cached_tokens", 0) or 0
)
output_details = getattr(response.usage, "output_tokens_details", None)
if output_details:
result["reasoning_tokens"] = (
getattr(output_details, "reasoning_tokens", 0) or 0
)
return result
return {"total_tokens": 0}
@@ -1570,6 +1544,11 @@ class OpenAICompletion(BaseLLM):
self, tools: list[dict[str, BaseTool]]
) -> list[dict[str, Any]]:
"""Convert CrewAI tool format to OpenAI function calling format."""
from crewai.llms.providers.utils.common import safe_tool_conversion
from crewai.utilities.pydantic_schema_utils import (
force_additional_properties_false,
)
openai_tools = []
for tool in tools:
@@ -1588,9 +1567,8 @@ class OpenAICompletion(BaseLLM):
params_dict = (
parameters if isinstance(parameters, dict) else dict(parameters)
)
openai_tool["function"]["parameters"] = (
sanitize_tool_params_for_openai_strict(params_dict)
)
params_dict = force_additional_properties_false(params_dict)
openai_tool["function"]["parameters"] = params_dict
openai_tools.append(openai_tool)
return openai_tools
@@ -1609,7 +1587,7 @@ class OpenAICompletion(BaseLLM):
parse_params = {
k: v for k, v in params.items() if k != "response_format"
}
parsed_response = self._get_sync_client().beta.chat.completions.parse(
parsed_response = self._client.beta.chat.completions.parse(
**parse_params,
response_format=response_model,
)
@@ -1633,9 +1611,7 @@ class OpenAICompletion(BaseLLM):
)
return parsed_object
response: ChatCompletion = self._get_sync_client().chat.completions.create(
**params
)
response: ChatCompletion = self._client.chat.completions.create(**params)
usage = self._extract_openai_token_usage(response)
@@ -1862,7 +1838,7 @@ class OpenAICompletion(BaseLLM):
}
stream: ChatCompletionStream[BaseModel]
with self._get_sync_client().beta.chat.completions.stream(
with self._client.beta.chat.completions.stream(
**parse_params, response_format=response_model
) as stream:
for chunk in stream:
@@ -1899,7 +1875,7 @@ class OpenAICompletion(BaseLLM):
return ""
completion_stream: Stream[ChatCompletionChunk] = (
self._get_sync_client().chat.completions.create(**params)
self._client.chat.completions.create(**params)
)
usage_data: dict[str, Any] | None = None
@@ -1996,11 +1972,9 @@ class OpenAICompletion(BaseLLM):
parse_params = {
k: v for k, v in params.items() if k != "response_format"
}
parsed_response = (
await self._get_async_client().beta.chat.completions.parse(
**parse_params,
response_format=response_model,
)
parsed_response = await self._async_client.beta.chat.completions.parse(
**parse_params,
response_format=response_model,
)
math_reasoning = parsed_response.choices[0].message
@@ -2022,8 +1996,8 @@ class OpenAICompletion(BaseLLM):
)
return parsed_object
response: ChatCompletion = (
await self._get_async_client().chat.completions.create(**params)
response: ChatCompletion = await self._async_client.chat.completions.create(
**params
)
usage = self._extract_openai_token_usage(response)
@@ -2149,7 +2123,7 @@ class OpenAICompletion(BaseLLM):
if response_model:
completion_stream: AsyncIterator[
ChatCompletionChunk
] = await self._get_async_client().chat.completions.create(**params)
] = await self._async_client.chat.completions.create(**params)
accumulated_content = ""
usage_data: dict[str, Any] | None = None
@@ -2205,7 +2179,7 @@ class OpenAICompletion(BaseLLM):
stream: AsyncIterator[
ChatCompletionChunk
] = await self._get_async_client().chat.completions.create(**params)
] = await self._async_client.chat.completions.create(**params)
usage_data = None
@@ -2333,24 +2307,20 @@ class OpenAICompletion(BaseLLM):
def _extract_openai_token_usage(
self, response: ChatCompletion | ChatCompletionChunk
) -> dict[str, Any]:
"""Extract token usage and response metadata from OpenAI ChatCompletion."""
"""Extract token usage from OpenAI ChatCompletion or ChatCompletionChunk response."""
if hasattr(response, "usage") and response.usage:
usage = response.usage
result: dict[str, Any] = {
result = {
"prompt_tokens": getattr(usage, "prompt_tokens", 0),
"completion_tokens": getattr(usage, "completion_tokens", 0),
"total_tokens": getattr(usage, "total_tokens", 0),
}
# Extract cached prompt tokens from prompt_tokens_details
prompt_details = getattr(usage, "prompt_tokens_details", None)
if prompt_details:
result["cached_prompt_tokens"] = (
getattr(prompt_details, "cached_tokens", 0) or 0
)
completion_details = getattr(usage, "completion_tokens_details", None)
if completion_details:
result["reasoning_tokens"] = (
getattr(completion_details, "reasoning_tokens", 0) or 0
)
return result
return {"total_tokens": 0}
@@ -2401,8 +2371,8 @@ class OpenAICompletion(BaseLLM):
from crewai_files.uploaders.openai import OpenAIFileUploader
return OpenAIFileUploader(
client=self._get_sync_client(),
async_client=self._get_async_client(),
client=self._client,
async_client=self._async_client,
)
except ImportError:
return None


@@ -213,9 +213,6 @@ class CheckpointConfig(BaseModel):
def _register_handlers(self) -> CheckpointConfig:
from crewai.state.checkpoint_listener import _ensure_handlers_registered
if isinstance(self.provider, SqliteProvider) and not Path(self.location).suffix:
self.location = f"{self.location}.db"
_ensure_handlers_registered()
return self


@@ -7,7 +7,6 @@ avoids per-event overhead when no entity uses checkpointing.
from __future__ import annotations
import json
import logging
import threading
from typing import Any
@@ -103,15 +102,10 @@ def _find_checkpoint(source: Any) -> CheckpointConfig | None:
return None
def _do_checkpoint(
state: RuntimeState, cfg: CheckpointConfig, event: BaseEvent | None = None
) -> None:
def _do_checkpoint(state: RuntimeState, cfg: CheckpointConfig) -> None:
"""Write a checkpoint and prune old ones if configured."""
_prepare_entities(state.root)
payload = state.model_dump(mode="json")
if event is not None:
payload["trigger"] = event.type
data = json.dumps(payload)
data = state.model_dump_json()
location = cfg.provider.checkpoint(
data,
cfg.location,
@@ -140,7 +134,7 @@ def _on_any_event(source: Any, event: BaseEvent, state: Any) -> None:
if cfg is None:
return
try:
_do_checkpoint(state, cfg, event)
_do_checkpoint(state, cfg)
except Exception:
logger.warning("Auto-checkpoint failed for event %s", event.type, exc_info=True)


@@ -66,9 +66,6 @@ def _sync_checkpoint_fields(entity: object) -> None:
entity.checkpoint_inputs = entity._inputs
entity.checkpoint_train = entity._train
entity.checkpoint_kickoff_event_id = entity._kickoff_event_id
for task in entity.tasks:
task.checkpoint_original_description = task._original_description
task.checkpoint_original_expected_output = task._original_expected_output
def _migrate(data: dict[str, Any]) -> dict[str, Any]:
@@ -124,7 +121,7 @@ class RuntimeState(RootModel): # type: ignore[type-arg]
"parent_id": self._parent_id,
"branch": self._branch,
"entities": [e.model_dump(mode="json") for e in self.root],
"event_record": self._event_record.model_dump(mode="json"),
"event_record": self._event_record.model_dump(),
}
@model_validator(mode="wrap")
@@ -197,10 +194,7 @@ class RuntimeState(RootModel): # type: ignore[type-arg]
return result
def fork(self, branch: str | None = None) -> None:
"""Create a new execution branch and write an initial checkpoint.
If this state was restored from a checkpoint, an initial checkpoint
is written on the new branch so the fork point is recorded.
"""Mark this state as a fork for subsequent checkpoints.
Args:
branch: Branch label. Auto-generated from the current checkpoint
@@ -210,7 +204,7 @@ class RuntimeState(RootModel): # type: ignore[type-arg]
if branch:
self._branch = branch
elif self._checkpoint_id:
self._branch = f"fork/{self._checkpoint_id}_{uuid.uuid4().hex[:6]}"
self._branch = f"fork/{self._checkpoint_id}"
else:
self._branch = f"fork/{uuid.uuid4().hex[:8]}"
@@ -233,7 +227,6 @@ class RuntimeState(RootModel): # type: ignore[type-arg]
provider = detect_provider(location)
raw = provider.from_checkpoint(location)
state = cls.model_validate_json(raw, **kwargs)
state._provider = provider
checkpoint_id = provider.extract_id(location)
state._checkpoint_id = checkpoint_id
state._parent_id = checkpoint_id
@@ -260,7 +253,6 @@ class RuntimeState(RootModel): # type: ignore[type-arg]
provider = detect_provider(location)
raw = await provider.afrom_checkpoint(location)
state = cls.model_validate_json(raw, **kwargs)
state._provider = provider
checkpoint_id = provider.extract_id(location)
state._checkpoint_id = checkpoint_id
state._parent_id = checkpoint_id


@@ -45,7 +45,6 @@ from crewai.events.types.task_events import (
TaskStartedEvent,
)
from crewai.llms.base_llm import BaseLLM
from crewai.llms.providers.openai.completion import OpenAICompletion
from crewai.security import Fingerprint, SecurityConfig
from crewai.tasks.output_format import OutputFormat
from crewai.tasks.task_output import TaskOutput
@@ -231,8 +230,6 @@ class Task(BaseModel):
_original_description: str | None = PrivateAttr(default=None)
_original_expected_output: str | None = PrivateAttr(default=None)
_original_output_file: str | None = PrivateAttr(default=None)
checkpoint_original_description: str | None = Field(default=None, exclude=False)
checkpoint_original_expected_output: str | None = Field(default=None, exclude=False)
_thread: threading.Thread | None = PrivateAttr(default=None)
model_config = {"arbitrary_types_allowed": True}
@@ -302,14 +299,12 @@ class Task(BaseModel):
@model_validator(mode="after")
def validate_required_fields(self) -> Self:
if self.description is None:
raise ValueError(
"description must be provided either directly or through config"
)
if self.expected_output is None:
raise ValueError(
"expected_output must be provided either directly or through config"
)
required_fields = ["description", "expected_output"]
for field in required_fields:
if getattr(self, field) is None:
raise ValueError(
f"{field} must be provided either directly or through config"
)
return self
@model_validator(mode="after")
@@ -841,8 +836,8 @@ class Task(BaseModel):
should_inject = self.allow_crewai_trigger_context
if should_inject and self.agent:
crew = self.agent.crew
if crew and not isinstance(crew, str) and crew._inputs:
crew = getattr(self.agent, "crew", None)
if crew and hasattr(crew, "_inputs") and crew._inputs:
trigger_payload = crew._inputs.get("crewai_trigger_payload")
if trigger_payload is not None:
description += f"\n\nTrigger Payload: {trigger_payload}"
@@ -855,12 +850,11 @@ class Task(BaseModel):
isinstance(self.agent.llm, BaseLLM)
and self.agent.llm.supports_multimodal()
):
provider: str = self.agent.llm.provider or self.agent.llm.model
api: str | None = (
self.agent.llm.api
if isinstance(self.agent.llm, OpenAICompletion)
else None
provider: str = str(
getattr(self.agent.llm, "provider", None)
or getattr(self.agent.llm, "model", "openai")
)
api: str | None = getattr(self.agent.llm, "api", None)
supported_types = get_supported_content_types(provider, api)
def is_auto_injected(content_type: str) -> bool:


@@ -29,14 +29,6 @@ class UsageMetrics(BaseModel):
completion_tokens: int = Field(
default=0, description="Number of tokens used in completions."
)
reasoning_tokens: int = Field(
default=0,
description="Number of reasoning/thinking tokens (e.g. OpenAI o-series, Gemini thinking).",
)
cache_creation_tokens: int = Field(
default=0,
description="Number of cache creation tokens (e.g. Anthropic cache writes).",
)
successful_requests: int = Field(
default=0, description="Number of successful requests made."
)
@@ -51,6 +43,4 @@ class UsageMetrics(BaseModel):
self.prompt_tokens += usage_metrics.prompt_tokens
self.cached_prompt_tokens += usage_metrics.cached_prompt_tokens
self.completion_tokens += usage_metrics.completion_tokens
self.reasoning_tokens += usage_metrics.reasoning_tokens
self.cache_creation_tokens += usage_metrics.cache_creation_tokens
self.successful_requests += usage_metrics.successful_requests
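`UsageMetrics` above aggregates per-call counters by field-wise addition. A standalone dataclass sketch of the same behavior (not the crewAI class itself):

```python
from dataclasses import dataclass


@dataclass
class Usage:
    """Field-wise-additive token counters, mirroring the aggregation above."""

    prompt_tokens: int = 0
    cached_prompt_tokens: int = 0
    completion_tokens: int = 0
    successful_requests: int = 0

    def add(self, other: "Usage") -> None:
        self.prompt_tokens += other.prompt_tokens
        self.cached_prompt_tokens += other.cached_prompt_tokens
        self.completion_tokens += other.completion_tokens
        self.successful_requests += other.successful_requests


total = Usage()
total.add(Usage(prompt_tokens=100, completion_tokens=20, successful_requests=1))
total.add(Usage(prompt_tokens=50, cached_prompt_tokens=30, completion_tokens=10, successful_requests=1))
```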


@@ -19,7 +19,7 @@ from collections.abc import Callable
from copy import deepcopy
import datetime
import logging
from typing import TYPE_CHECKING, Annotated, Any, Final, Literal, TypedDict, Union, cast
from typing import TYPE_CHECKING, Annotated, Any, Final, Literal, TypedDict, Union
import uuid
import jsonref # type: ignore[import-untyped]
@@ -417,119 +417,6 @@ def strip_null_from_types(schema: dict[str, Any]) -> dict[str, Any]:
return schema
_STRICT_METADATA_KEYS: Final[tuple[str, ...]] = (
"title",
"default",
"examples",
"example",
"$comment",
"readOnly",
"writeOnly",
"deprecated",
)
_CLAUDE_STRICT_UNSUPPORTED: Final[tuple[str, ...]] = (
"minimum",
"maximum",
"exclusiveMinimum",
"exclusiveMaximum",
"multipleOf",
"minLength",
"maxLength",
"pattern",
"minItems",
"maxItems",
"uniqueItems",
"minContains",
"maxContains",
"minProperties",
"maxProperties",
"patternProperties",
"propertyNames",
"dependentRequired",
"dependentSchemas",
)
def _strip_keys_recursive(d: Any, keys: tuple[str, ...]) -> Any:
"""Recursively delete a fixed set of keys from a schema."""
if isinstance(d, dict):
for key in keys:
d.pop(key, None)
for v in d.values():
_strip_keys_recursive(v, keys)
elif isinstance(d, list):
for i in d:
_strip_keys_recursive(i, keys)
return d
def lift_top_level_anyof(schema: dict[str, Any]) -> dict[str, Any]:
"""Unwrap a top-level anyOf/oneOf/allOf wrapping a single object variant.
Anthropic's strict ``input_schema`` rejects top-level union keywords. When
exactly one variant is an object schema, lift it so the root is a plain
object; otherwise leave the schema alone.
"""
for key in ("anyOf", "oneOf", "allOf"):
variants = schema.get(key)
if not isinstance(variants, list):
continue
object_variants = [
v for v in variants if isinstance(v, dict) and v.get("type") == "object"
]
if len(object_variants) == 1:
lifted = deepcopy(object_variants[0])
schema.pop(key)
schema.update(lifted)
break
return schema
def _common_strict_pipeline(params: dict[str, Any]) -> dict[str, Any]:
"""Shared strict sanitization: inline refs, close objects, require all properties."""
sanitized = resolve_refs(deepcopy(params))
sanitized.pop("$defs", None)
sanitized = convert_oneof_to_anyof(sanitized)
sanitized = ensure_type_in_schemas(sanitized)
sanitized = force_additional_properties_false(sanitized)
sanitized = ensure_all_properties_required(sanitized)
return cast(dict[str, Any], _strip_keys_recursive(sanitized, _STRICT_METADATA_KEYS))
def sanitize_tool_params_for_openai_strict(
params: dict[str, Any],
) -> dict[str, Any]:
"""Sanitize a JSON schema for OpenAI strict function calling."""
if not isinstance(params, dict):
return params
return cast(
dict[str, Any], strip_unsupported_formats(_common_strict_pipeline(params))
)
def sanitize_tool_params_for_anthropic_strict(
params: dict[str, Any],
) -> dict[str, Any]:
"""Sanitize a JSON schema for Anthropic strict tool use."""
if not isinstance(params, dict):
return params
sanitized = lift_top_level_anyof(_common_strict_pipeline(params))
sanitized = _strip_keys_recursive(sanitized, _CLAUDE_STRICT_UNSUPPORTED)
return cast(dict[str, Any], strip_unsupported_formats(sanitized))
def sanitize_tool_params_for_bedrock_strict(
params: dict[str, Any],
) -> dict[str, Any]:
"""Sanitize a JSON schema for Bedrock Converse strict tool use.
Bedrock Converse uses the same grammar compiler as the underlying Claude
model, so the constraints match Anthropic's.
"""
return sanitize_tool_params_for_anthropic_strict(params)
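The removed `_strip_keys_recursive` helper above is self-contained; a runnable sketch of the same recursion over a JSON-schema-like structure:

```python
from typing import Any


def strip_keys_recursive(d: Any, keys: tuple[str, ...]) -> Any:
    """Recursively delete a fixed set of keys from nested dicts and lists, in place."""
    if isinstance(d, dict):
        for key in keys:
            d.pop(key, None)
        for v in d.values():
            strip_keys_recursive(v, keys)
    elif isinstance(d, list):
        for item in d:
            strip_keys_recursive(item, keys)
    return d


schema = {"title": "Point", "properties": {"x": {"type": "integer", "default": 0}}}
clean = strip_keys_recursive(schema, ("title", "default"))
```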
def generate_model_description(
model: type[BaseModel],
*,


@@ -1,7 +1,6 @@
from __future__ import annotations
from datetime import date, datetime
from enum import Enum
import json
from typing import Any, TypeAlias
import uuid
@@ -21,7 +20,6 @@ def to_serializable(
max_depth: int = 5,
_current_depth: int = 0,
_ancestors: set[int] | None = None,
context: dict[str, Any] | None = None,
) -> Serializable:
"""Converts a Python object into a JSON-compatible representation.
@@ -35,9 +33,6 @@ def to_serializable(
max_depth: Maximum recursion depth. Defaults to 5.
_current_depth: Current recursion depth (for internal use).
_ancestors: Set of ancestor object ids for cycle detection (for internal use).
context: Optional context dict passed to Pydantic's model_dump(context=...).
Field serializers on the model can inspect this to customize output
(e.g. context={"trace": True} for lightweight trace serialization).
Returns:
Serializable: A JSON-compatible structure.
@@ -53,15 +48,6 @@ def to_serializable(
if isinstance(obj, (str, int, float, bool, type(None))):
return obj
if isinstance(obj, Enum):
return to_serializable(
obj.value,
exclude=exclude,
max_depth=max_depth,
_current_depth=_current_depth,
_ancestors=_ancestors,
context=context,
)
if isinstance(obj, uuid.UUID):
return str(obj)
if isinstance(obj, (date, datetime)):
@@ -80,7 +66,6 @@ def to_serializable(
max_depth=max_depth,
_current_depth=_current_depth + 1,
_ancestors=new_ancestors,
context=context,
)
for item in obj
]
@@ -92,24 +77,17 @@ def to_serializable(
max_depth=max_depth,
_current_depth=_current_depth + 1,
_ancestors=new_ancestors,
context=context,
)
for key, value in obj.items()
if key not in exclude
}
if isinstance(obj, BaseModel):
try:
dump_kwargs: dict[str, Any] = {}
if exclude:
dump_kwargs["exclude"] = exclude
if context is not None:
dump_kwargs["context"] = context
return to_serializable(
obj=obj.model_dump(**dump_kwargs),
obj=obj.model_dump(exclude=exclude),
max_depth=max_depth,
_current_depth=_current_depth + 1,
_ancestors=new_ancestors,
context=context,
)
except Exception:
try:
@@ -119,30 +97,12 @@ def to_serializable(
max_depth=max_depth,
_current_depth=_current_depth + 1,
_ancestors=new_ancestors,
context=context,
)
for k, v in obj.__dict__.items()
if k not in (exclude or set())
}
except Exception:
return repr(obj)
if callable(obj):
return repr(obj)
if hasattr(obj, "__dict__"):
try:
return {
_to_serializable_key(k): to_serializable(
v,
max_depth=max_depth,
_current_depth=_current_depth + 1,
_ancestors=new_ancestors,
context=context,
)
for k, v in obj.__dict__.items()
if not k.startswith("_")
}
except Exception:
return repr(obj)
return repr(obj)

View File

@@ -1064,6 +1064,59 @@ def test_agent_use_trained_data(crew_training_handler):
)
@patch("crewai.agent.core.CrewTrainingHandler")
def test_agent_use_trained_data_with_custom_filename(crew_training_handler):
"""Test that _use_trained_data respects a custom filename when provided."""
task_prompt = "What is 1 + 1?"
agent = Agent(
role="researcher",
goal="test goal",
backstory="test backstory",
verbose=True,
)
crew_training_handler.return_value.load.return_value = {
agent.role: {
"suggestions": [
"The result of the math operation must be right.",
"Result must be better than 1.",
]
}
}
custom_filename = "my_custom_trained.pkl"
result = agent._use_trained_data(
task_prompt=task_prompt, trained_agents_data_file=custom_filename
)
assert (
result == "What is 1 + 1?\n\nYou MUST follow these instructions: \n"
" - The result of the math operation must be right.\n - Result must be better than 1."
)
crew_training_handler.assert_has_calls(
[mock.call(custom_filename), mock.call().load()]
)
@patch("crewai.agent.core.CrewTrainingHandler")
def test_agent_use_trained_data_defaults_without_custom_filename(crew_training_handler):
"""Test that _use_trained_data falls back to the default file when no custom filename is given."""
task_prompt = "What is 1 + 1?"
agent = Agent(
role="researcher",
goal="test goal",
backstory="test backstory",
verbose=True,
)
crew_training_handler.return_value.load.return_value = {}
result = agent._use_trained_data(task_prompt=task_prompt)
assert result == task_prompt
crew_training_handler.assert_has_calls(
[mock.call("trained_agents_data.pkl"), mock.call().load()]
)
def test_agent_max_retry_limit():
    agent = Agent(
        role="test role",


@@ -1051,7 +1051,7 @@ def test_lite_agent_verbose_false_suppresses_printer_output():
        successful_requests=1,
    )
    with pytest.warns(FutureWarning):
    with pytest.warns(DeprecationWarning):
        agent = LiteAgent(
            role="Test Agent",
            goal="Test goal",
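The hunk above tightens the expected warning category from `FutureWarning` to `DeprecationWarning`. As a standalone sketch of the `pytest.warns` pattern (the constructor and message here are hypothetical stand-ins, not CrewAI's actual deprecation):

```python
import warnings

import pytest


def make_lite_agent():
    # Hypothetical deprecated constructor, used only for illustration.
    warnings.warn(
        "LiteAgent is deprecated; use Agent instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return object()


def test_emits_deprecation_warning():
    # pytest.warns fails the test if no matching warning is raised.
    with pytest.warns(DeprecationWarning, match="deprecated"):
        make_lite_agent()
```

`pytest.warns` checks the warning's category and (optionally) message, so narrowing the category is a stricter assertion than the old `FutureWarning` check, not just a rename.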


@@ -55,7 +55,7 @@ interactions:
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 2.31.0
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
@@ -63,51 +63,50 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.12
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-DTApYQx2LepfeRL1XcDKPgrhMFnQr\",\n \"object\":
\"chat.completion\",\n \"created\": 1775845516,\n \"model\": \"gpt-4o-2024-08-06\",\n
string: "{\n \"id\": \"chatcmpl-DIqxWpJbbFJoV8WlXhb9UYFbCmdPk\",\n \"object\":
\"chat.completion\",\n \"created\": 1773385850,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_BCh6lXsBTdixRuRh6OTBPoIJ\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"delegate_work_to_coworker\",\n
\ \"arguments\": \"{\\\"task\\\": \\\"Come up with a list of 5
interesting ideas to explore for an article.\\\", \\\"context\\\": \\\"We
need five intriguing ideas worth exploring for an article. Each idea should
have potential for in-depth exploration and appeal to a broad audience, possibly
touching on current trends, historical insights, future possibilities, or
human interest stories.\\\", \\\"coworker\\\": \\\"Researcher\\\"}\"\n }\n
\ },\n {\n \"id\": \"call_rAQFeCrS4ogsqvIWRGAYFHGI\",\n
\ \"id\": \"call_G2i9RJGNXKVfnd8ZTaBG8Fwi\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"ask_question_to_coworker\",\n
\ \"arguments\": \"{\\\"question\\\": \\\"What are some trending
topics or ideas in various fields that could be explored for an article?\\\",
\\\"context\\\": \\\"We need to generate a list of 5 interesting ideas to
explore for an article. These ideas should be engaging and relevant to current
trends or captivating subjects.\\\", \\\"coworker\\\": \\\"Researcher\\\"}\"\n
\ }\n },\n {\n \"id\": \"call_j4KH2SGZvNeioql0HcRQ9NTp\",\n
\ \"type\": \"function\",\n \"function\": {\n \"name\":
\"delegate_work_to_coworker\",\n \"arguments\": \"{\\\"task\\\":
\\\"Write one amazing paragraph highlight for each of 5 ideas that showcases
how good an article about this topic could be.\\\", \\\"context\\\": \\\"Upon
receiving five intriguing ideas from the Researcher, create a compelling paragraph
for each idea that highlights its potential as a fascinating article. These
paragraphs must capture the essence of the topic and explain why it would
captivate readers, incorporating possible themes and insights.\\\", \\\"coworker\\\":
\\\"Senior Writer\\\"}\"\n }\n }\n ],\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"tool_calls\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
476,\n \"completion_tokens\": 201,\n \"total_tokens\": 677,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
\"ask_question_to_coworker\",\n \"arguments\": \"{\\\"question\\\":
\\\"What unique angles or perspectives could we explore to make articles more
compelling and engaging?\\\", \\\"context\\\": \\\"Our task involves coming
up with 5 ideas for articles, each with an exciting paragraph highlight that
illustrates the promise and intrigue of the topic. We want them to be more
than generic concepts, shining for readers with fresh insights or engaging
twists.\\\", \\\"coworker\\\": \\\"Senior Writer\\\"}\"\n }\n }\n
\ ],\n \"refusal\": null,\n \"annotations\": []\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"tool_calls\"\n }\n
\ ],\n \"usage\": {\n \"prompt_tokens\": 476,\n \"completion_tokens\":
183,\n \"total_tokens\": 659,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_2ca5b70601\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_b7c8e3f100\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-Ray:
- 9ea3cb06ba66b301-TPE
- 9db9389a3f9e424c-EWR
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Fri, 10 Apr 2026 18:25:18 GMT
- Fri, 13 Mar 2026 07:10:53 GMT
Server:
- cloudflare
Strict-Transport-Security:
@@ -123,7 +122,7 @@ interactions:
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '1981'
- '2402'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
@@ -155,14 +154,13 @@ interactions:
You work as a freelancer and is now working on doing research and analysis for
a new customer.\nYour personal goal is: Make the best research and analysis
on content about AI and AI agents"},{"role":"user","content":"\nCurrent Task:
Come up with a list of 5 interesting ideas to explore for an article.\n\nThis
is the expected criteria for your final answer: Your best answer to your coworker
asking you this, accounting for the context shared.\nyou MUST return the actual
complete content as the final answer, not a summary.\n\nThis is the context
you''re working with:\nWe need five intriguing ideas worth exploring for an
article. Each idea should have potential for in-depth exploration and appeal
to a broad audience, possibly touching on current trends, historical insights,
future possibilities, or human interest stories.\n\nProvide your complete response:"}],"model":"gpt-4.1-mini"}'
What are some trending topics or ideas in various fields that could be explored
for an article?\n\nThis is the expected criteria for your final answer: Your
best answer to your coworker asking you this, accounting for the context shared.\nyou
MUST return the actual complete content as the final answer, not a summary.\n\nThis
is the context you''re working with:\nWe need to generate a list of 5 interesting
ideas to explore for an article. These ideas should be engaging and relevant
to current trends or captivating subjects.\n\nProvide your complete response:"}],"model":"gpt-4.1-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -175,7 +173,7 @@ interactions:
connection:
- keep-alive
content-length:
- '1046'
- '978'
content-type:
- application/json
host:
@@ -189,7 +187,7 @@ interactions:
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 2.31.0
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
@@ -197,69 +195,63 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.12
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-DTApalbfnYkqIc8slLS3DKwo9KXbc\",\n \"object\":
\"chat.completion\",\n \"created\": 1775845518,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
string: "{\n \"id\": \"chatcmpl-DIqxak88AexErt9PGFGHnWPIJLwNV\",\n \"object\":
\"chat.completion\",\n \"created\": 1773385854,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Certainly! Here are five intriguing
article ideas that offer rich potential for deep exploration and broad audience
appeal, especially aligned with current trends and human interest in AI and
technology:\\n\\n1. **The Evolution of AI Agents: From Rule-Based Bots to
Autonomous Decision Makers** \\n Explore the historical development of
AI agents, tracing the journey from simple scripted chatbots to advanced autonomous
systems capable of complex decision-making and learning. Dive into key technological
milestones, breakthroughs in machine learning, and current state-of-the-art
AI agents. Discuss implications for industries such as customer service, healthcare,
and autonomous vehicles, highlighting both opportunities and ethical concerns.\\n\\n2.
**AI in Daily Life: How Intelligent Agents Are Reshaping Human Routines**
\ \\n Investigate the integration of AI agents in everyday life\u2014from
virtual assistants like Siri and Alexa to personalized recommendation systems
and smart home devices. Analyze how these AI tools influence productivity,
privacy, and social behavior. Include human interest elements through stories
of individuals or communities who have embraced or resisted these technologies.\\n\\n3.
**The Future of Work: AI Agents as Collaborative Colleagues** \\n Examine
how AI agents are transforming workplaces by acting as collaborators rather
than just tools. Cover applications in creative fields, data analysis, and
decision support, while addressing potential challenges such as job displacement,
new skill requirements, and the evolving definition of teamwork. Use expert
opinions and case studies to paint a nuanced future outlook.\\n\\n4. **Ethics
and Accountability in AI Agent Development** \\n Delve into the ethical
dilemmas posed by increasingly autonomous AI agents\u2014topics like bias
in algorithms, data privacy, and accountability for AI-driven decisions. Explore
measures being taken globally to regulate AI, frameworks for responsible AI
development, and the role of public awareness. Include historical context
about technology ethics to provide depth.\\n\\n5. **Human-AI Symbiosis: Stories
of Innovative Partnerships Shaping Our World** \\n Tell compelling human
interest stories about individuals or organizations pioneering collaborative
projects with AI agents that lead to breakthroughs in science, art, or social
good. Highlight how these partnerships transcend traditional human-machine
interaction and open new creative and problem-solving possibilities, inspiring
readers about the potential of human-AI synergy.\\n\\nThese ideas are designed
to be both engaging and informative, offering multiple angles\u2014technical,
historical, ethical, and personal\u2014to keep readers captivated while providing
substantial content for in-depth analysis.\",\n \"refusal\": null,\n
\ \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 189,\n \"completion_tokens\":
472,\n \"total_tokens\": 661,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
\"assistant\",\n \"content\": \"Here are five trending and engaging
topics across various fields that could be explored for an article:\\n\\n1.
**The Rise of Autonomous AI Agents and Their Impact on the Future of Work**
\ \\nExplore how autonomous AI agents\u2014systems capable of performing complex
tasks independently\u2014are transforming industries such as customer service,
software development, and logistics. Discuss implications for job automation,
human-AI collaboration, and ethical considerations surrounding decision-making
autonomy.\\n\\n2. **Generative AI Beyond Text: Innovations in Audio, Video,
and 3D Content Creation** \\nDelve into advancements in generative AI models
that create not only text but also realistic audio, video content, virtual
environments, and 3D models. Highlight applications in gaming, entertainment,
education, and digital marketing, as well as challenges like misinformation
and deepfake detection.\\n\\n3. **AI-Driven Climate Modeling: Enhancing Predictive
Accuracy to Combat Climate Change** \\nExamine how AI and machine learning
are improving climate models by analyzing vast datasets, uncovering patterns,
and simulating environmental scenarios. Discuss how these advances are aiding
policymakers in making informed decisions to address climate risks and sustainability
goals.\\n\\n4. **The Ethical Frontiers of AI in Healthcare: Balancing Innovation
with Patient Privacy** \\nInvestigate ethical challenges posed by AI applications
in healthcare, including diagnosis, personalized treatment, and patient data
management. Focus on balancing rapid technological innovation with privacy,
bias mitigation, and regulatory frameworks to ensure equitable access and
trust.\\n\\n5. **Quantum Computing Meets AI: Exploring the Next Leap in Computational
Power** \\nCover the intersection of quantum computing and artificial intelligence,
exploring how quantum algorithms could accelerate AI training processes and
solve problems beyond the reach of classical computers. Outline current research,
potential breakthroughs, and the timeline for real-world applications.\\n\\nEach
of these topics is timely, relevant, and has the potential to engage readers
interested in cutting-edge technology, societal impact, and future trends.
Let me know if you want me to help develop an outline or deeper research into
any of these areas!\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 178,\n \"completion_tokens\":
402,\n \"total_tokens\": 580,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_fbf43a1ff3\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_e76a310957\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-Ray:
- 9ea3cb1b5c943323-TPE
- 9db938b0493c4b9f-EWR
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Fri, 10 Apr 2026 18:25:25 GMT
- Fri, 13 Mar 2026 07:10:59 GMT
Server:
- cloudflare
Strict-Transport-Security:
@@ -275,7 +267,7 @@ interactions:
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '6990'
- '5699'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
@@ -306,16 +298,15 @@ interactions:
a senior writer, specialized in technology, software engineering, AI and startups.
You work as a freelancer and are now working on writing content for a new customer.\nYour
personal goal is: Write the best content about AI and AI agents."},{"role":"user","content":"\nCurrent
Task: Write one amazing paragraph highlight for each of 5 ideas that showcases
how good an article about this topic could be.\n\nThis is the expected criteria
for your final answer: Your best answer to your coworker asking you this, accounting
for the context shared.\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nThis is the context you''re working with:\nUpon
receiving five intriguing ideas from the Researcher, create a compelling paragraph
for each idea that highlights its potential as a fascinating article. These
paragraphs must capture the essence of the topic and explain why it would captivate
readers, incorporating possible themes and insights.\n\nProvide your complete
response:"}],"model":"gpt-4.1-mini"}'
Task: What unique angles or perspectives could we explore to make articles more
compelling and engaging?\n\nThis is the expected criteria for your final answer:
Your best answer to your coworker asking you this, accounting for the context
shared.\nyou MUST return the actual complete content as the final answer, not
a summary.\n\nThis is the context you''re working with:\nOur task involves coming
up with 5 ideas for articles, each with an exciting paragraph highlight that
illustrates the promise and intrigue of the topic. We want them to be more than
generic concepts, shining for readers with fresh insights or engaging twists.\n\nProvide
your complete response:"}],"model":"gpt-4.1-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -328,7 +319,7 @@ interactions:
connection:
- keep-alive
content-length:
- '1103'
- '1041'
content-type:
- application/json
host:
@@ -342,7 +333,7 @@ interactions:
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 2.31.0
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
@@ -350,83 +341,78 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.12
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-DTApbrh9Z4yFAKPHIR48ubdB1R5xK\",\n \"object\":
\"chat.completion\",\n \"created\": 1775845519,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
string: "{\n \"id\": \"chatcmpl-DIqxZCl1kFIE7WXznIKow9QFNZ2QT\",\n \"object\":
\"chat.completion\",\n \"created\": 1773385853,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"1. **The Rise of Autonomous AI Agents:
Revolutionizing Everyday Tasks** \\nImagine a world where AI agents autonomously
manage your daily schedule, optimize your work routines, and even handle complex
decision-making with minimal human intervention. An article exploring the
rise of autonomous AI agents would captivate readers by diving into how advancements
in machine learning and natural language processing have matured these agents
from simple chatbots to intelligent collaborators. Themes could include practical
applications in industries like healthcare, finance, and personal productivity,
the challenges of trust and transparency, and a glimpse into the ethical questions
surrounding AI autonomy. This topic not only showcases cutting-edge technology
but also invites readers to envision the near future of human-AI synergy.\\n\\n2.
**Building Ethical AI Agents: Balancing Innovation with Responsibility** \\nAs
AI agents become more powerful and independent, the imperative to embed ethical
frameworks within their design comes sharply into focus. An insightful article
on this theme would engage readers by unpacking the complexities of programming
morality, fairness, and accountability into AI systems that influence critical
decisions\u2014whether in hiring processes, law enforcement, or digital content
moderation. Exploring real-world case studies alongside philosophical and
regulatory perspectives, the piece could illuminate the delicate balance between
technological innovation and societal values, offering a nuanced discussion
that appeals to technologists, ethicists, and everyday users alike.\\n\\n3.
**AI Agents in Startups: Accelerating Growth and Disrupting Markets** \\nStartups
are uniquely positioned to leverage AI agents as game-changers that turbocharge
growth, optimize workflows, and unlock new business models. This article could
enthrall readers by detailing how nimble companies integrate AI-driven agents
for customer engagement, market analysis, and personalized product recommendations\u2014outpacing
larger incumbents. It would also examine hurdles such as data privacy, scaling
complexities, and the human-AI collaboration dynamic, providing actionable
insights for entrepreneurs and investors. The story of AI agents fueling startup
innovation not only inspires but also outlines the practical pathways and
pitfalls on the frontier of modern entrepreneurship.\\n\\n4. **The Future
of Work with AI Agents: Redefining Roles and Skills** \\nAI agents are redefining
professional landscapes by automating routine tasks and augmenting human creativity
and decision-making. An article on this topic could engage readers by painting
a vivid picture of the evolving workplace, where collaboration between humans
and AI agents becomes the norm. Delving into emerging roles, necessary skill
sets, and how education and training must adapt, the piece would offer a forward-thinking
analysis that resonates deeply with employees, managers, and policymakers.
Exploring themes of workforce transformation, productivity gains, and potential
socioeconomic impacts, it provides a comprehensive outlook on an AI-integrated
work environment.\\n\\n5. **From Reactive to Proactive: How Next-Gen AI Agents
Anticipate Needs** \\nThe leap from reactive AI assistants to truly proactive
AI agents signifies one of the most thrilling advances in artificial intelligence.
An article centered on this evolution would captivate readers by illustrating
how these agents utilize predictive analytics, contextual understanding, and
continuous learning to anticipate user needs before they are expressed. By
showcasing pioneering applications in personalized healthcare management,
smart homes, and adaptive learning platforms, the article would highlight
the profound shift toward intuitive, anticipatory technology. This theme not
only excites with futuristic promise but also probes the technical and privacy
challenges that come with increased agency and foresight.\",\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
197,\n \"completion_tokens\": 666,\n \"total_tokens\": 863,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
\"assistant\",\n \"content\": \"Absolutely! To create compelling and
engaging AI articles that stand out, we need to go beyond surface-level discussions
and deliver fresh perspectives that challenge assumptions and spark curiosity.
Here are five unique angles with their highlight paragraphs that could really
captivate our readers:\\n\\n1. **The Hidden Psychology of AI Agents: How They
Learn Human Biases and What That Means for Our Future** \\n*Highlight:* AI
agents don\u2019t just process data\u2014they absorb the subtle nuances and
biases embedded in human language, behavior, and culture. This article dives
deep into the psychological parallels between AI learning mechanisms and human
cognitive biases, revealing surprising ways AI can both mirror and amplify
our prejudices. Understanding these dynamics is crucial for building trustworthy
AI systems and reshaping the future relationship between humans and machines.\\n\\n2.
**From Assistants to Autonomous Creators: The Rise of AI Agents as Artists,
Writers, and Innovators** \\n*Highlight:* What do we lose and gain when AI
agents start producing original art, literature, and innovations? This piece
explores groundbreaking examples where AI isn\u2019t just a tool but a creative
partner that challenges our definition of authorship and genius. We\u2019ll
examine ethical dilemmas, collaborative workflows, and the exciting frontier
where human intuition meets algorithmic originality.\\n\\n3. **AI Agents in
the Wild: How Decentralized Autonomous Organizations Could Redefine Economy
and Governance** \\n*Highlight:* Imagine AI agents operating autonomously
in decentralized networks, making real-time decisions that affect finances,
resource management, and governance without human intervention. This article
uncovers how DAOs powered by AI agents might spontaneously evolve new forms
of organization\u2014transparent, efficient, and resistant to traditional
corruption. We\u2019ll investigate early case studies and speculate on how
this might disrupt centuries-old societal structures.\\n\\n4. **Beyond Chatbots:
The Next Generation of AI Agents as Empathetic Digital Companions** \\n*Highlight:*
Moving past scripted conversations, emerging AI agents simulate empathy and
emotional intelligence in ways that can transform mental health care, education,
and companionship. This article provides an insider look at the complex algorithms
and biofeedback mechanisms enabling AI to recognize, respond to, and foster
human emotions\u2014potentially filling gaps in underserved populations while
raising profound questions about authenticity and connection.\\n\\n5. **The
Environmental Toll of AI Agents: Unmasking the Ecological Cost of Intelligent
Automation** \\n*Highlight:* While AI promises efficiency and innovation,
the environmental footprint of training and deploying millions of AI agents
is rarely discussed. This eye-opening article quantifies the energy demands
of current models, challenges the narrative of AI as an unequivocal green
solution, and explores emerging approaches pathing toward sustainable intelligent
automation\u2014an urgent conversation for an increasingly eco-conscious tech
landscape.\\n\\nEach of these angles opens a door to rich storytelling that
blends technical depth, ethical inquiry, and visionary implications\u2014perfect
for readers hungry for insight that\u2019s both sophisticated and accessible.
Let me know which ones resonate most, or if you want me to refine any into
full article outlines!\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 188,\n \"completion_tokens\":
595,\n \"total_tokens\": 783,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_d45f83c5fd\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_ae0f8c9a7b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-Ray:
- 9ea3cb1cbfe2b312-TPE
- 9db938b0489680d4-EWR
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Fri, 10 Apr 2026 18:25:28 GMT
- Fri, 13 Mar 2026 07:11:02 GMT
Server:
- cloudflare
Strict-Transport-Security:
@@ -442,7 +428,7 @@ interactions:
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '9479'
- '8310'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
@@ -481,105 +467,91 @@ interactions:
good an article about this topic could be. Return the list of ideas with their
paragraph and your notes.\\n\\nThis is the expected criteria for your final
answer: 5 bullet points with a paragraph for each idea.\\nyou MUST return the
actual complete content as the final answer, not a summary.\"},{\"role\":\"assistant\",\"content\":null,\"tool_calls\":[{\"id\":\"call_BCh6lXsBTdixRuRh6OTBPoIJ\",\"type\":\"function\",\"function\":{\"name\":\"delegate_work_to_coworker\",\"arguments\":\"{\\\"task\\\":
\\\"Come up with a list of 5 interesting ideas to explore for an article.\\\",
\\\"context\\\": \\\"We need five intriguing ideas worth exploring for an article.
Each idea should have potential for in-depth exploration and appeal to a broad
audience, possibly touching on current trends, historical insights, future possibilities,
or human interest stories.\\\", \\\"coworker\\\": \\\"Researcher\\\"}\"}},{\"id\":\"call_rAQFeCrS4ogsqvIWRGAYFHGI\",\"type\":\"function\",\"function\":{\"name\":\"delegate_work_to_coworker\",\"arguments\":\"{\\\"task\\\":
\\\"Write one amazing paragraph highlight for each of 5 ideas that showcases
how good an article about this topic could be.\\\", \\\"context\\\": \\\"Upon
receiving five intriguing ideas from the Researcher, create a compelling paragraph
for each idea that highlights its potential as a fascinating article. These
paragraphs must capture the essence of the topic and explain why it would captivate
readers, incorporating possible themes and insights.\\\", \\\"coworker\\\":
\\\"Senior Writer\\\"}\"}}]},{\"role\":\"tool\",\"tool_call_id\":\"call_BCh6lXsBTdixRuRh6OTBPoIJ\",\"name\":\"delegate_work_to_coworker\",\"content\":\"Certainly!
Here are five intriguing article ideas that offer rich potential for deep exploration
and broad audience appeal, especially aligned with current trends and human
interest in AI and technology:\\n\\n1. **The Evolution of AI Agents: From Rule-Based
Bots to Autonomous Decision Makers** \\n Explore the historical development
of AI agents, tracing the journey from simple scripted chatbots to advanced
autonomous systems capable of complex decision-making and learning. Dive into
key technological milestones, breakthroughs in machine learning, and current
state-of-the-art AI agents. Discuss implications for industries such as customer
service, healthcare, and autonomous vehicles, highlighting both opportunities
and ethical concerns.\\n\\n2. **AI in Daily Life: How Intelligent Agents Are
Reshaping Human Routines** \\n Investigate the integration of AI agents in
everyday life\u2014from virtual assistants like Siri and Alexa to personalized
recommendation systems and smart home devices. Analyze how these AI tools influence
productivity, privacy, and social behavior. Include human interest elements
through stories of individuals or communities who have embraced or resisted
these technologies.\\n\\n3. **The Future of Work: AI Agents as Collaborative
Colleagues** \\n Examine how AI agents are transforming workplaces by acting
as collaborators rather than just tools. Cover applications in creative fields,
data analysis, and decision support, while addressing potential challenges such
as job displacement, new skill requirements, and the evolving definition of
teamwork. Use expert opinions and case studies to paint a nuanced future outlook.\\n\\n4.
**Ethics and Accountability in AI Agent Development** \\n Delve into the
ethical dilemmas posed by increasingly autonomous AI agents\u2014topics like
bias in algorithms, data privacy, and accountability for AI-driven decisions.
Explore measures being taken globally to regulate AI, frameworks for responsible
AI development, and the role of public awareness. Include historical context
about technology ethics to provide depth.\\n\\n5. **Human-AI Symbiosis: Stories
of Innovative Partnerships Shaping Our World** \\n Tell compelling human
interest stories about individuals or organizations pioneering collaborative
projects with AI agents that lead to breakthroughs in science, art, or social
good. Highlight how these partnerships transcend traditional human-machine interaction
and open new creative and problem-solving possibilities, inspiring readers about
the potential of human-AI synergy.\\n\\nThese ideas are designed to be both
engaging and informative, offering multiple angles\u2014technical, historical,
ethical, and personal\u2014to keep readers captivated while providing substantial
content for in-depth analysis.\"},{\"role\":\"tool\",\"tool_call_id\":\"call_rAQFeCrS4ogsqvIWRGAYFHGI\",\"name\":\"delegate_work_to_coworker\",\"content\":\"1.
**The Rise of Autonomous AI Agents: Revolutionizing Everyday Tasks** \\nImagine
a world where AI agents autonomously manage your daily schedule, optimize your
work routines, and even handle complex decision-making with minimal human intervention.
An article exploring the rise of autonomous AI agents would captivate readers
by diving into how advancements in machine learning and natural language processing
have matured these agents from simple chatbots to intelligent collaborators.
Themes could include practical applications in industries like healthcare, finance,
and personal productivity, the challenges of trust and transparency, and a glimpse
into the ethical questions surrounding AI autonomy. This topic not only showcases
cutting-edge technology but also invites readers to envision the near future
of human-AI synergy.\\n\\n2. **Building Ethical AI Agents: Balancing Innovation
with Responsibility** \\nAs AI agents become more powerful and independent,
the imperative to embed ethical frameworks within their design comes sharply
into focus. An insightful article on this theme would engage readers by unpacking
the complexities of programming morality, fairness, and accountability into
AI systems that influence critical decisions\u2014whether in hiring processes,
law enforcement, or digital content moderation. Exploring real-world case studies
alongside philosophical and regulatory perspectives, the piece could illuminate
the delicate balance between technological innovation and societal values, offering
a nuanced discussion that appeals to technologists, ethicists, and everyday
users alike.\\n\\n3. **AI Agents in Startups: Accelerating Growth and Disrupting
Markets** \\nStartups are uniquely positioned to leverage AI agents as game-changers
that turbocharge growth, optimize workflows, and unlock new business models.
This article could enthrall readers by detailing how nimble companies integrate
AI-driven agents for customer engagement, market analysis, and personalized
product recommendations\u2014outpacing larger incumbents. It would also examine
hurdles such as data privacy, scaling complexities, and the human-AI collaboration
dynamic, providing actionable insights for entrepreneurs and investors. The
story of AI agents fueling startup innovation not only inspires but also outlines
the practical pathways and pitfalls on the frontier of modern entrepreneurship.\\n\\n4.
**The Future of Work with AI Agents: Redefining Roles and Skills** \\nAI agents
are redefining professional landscapes by automating routine tasks and augmenting
human creativity and decision-making. An article on this topic could engage
readers by painting a vivid picture of the evolving workplace, where collaboration
between humans and AI agents becomes the norm. Delving into emerging roles,
necessary skill sets, and how education and training must adapt, the piece would
offer a forward-thinking analysis that resonates deeply with employees, managers,
and policymakers. Exploring themes of workforce transformation, productivity
gains, and potential socioeconomic impacts, it provides a comprehensive outlook
on an AI-integrated work environment.\\n\\n5. **From Reactive to Proactive:
How Next-Gen AI Agents Anticipate Needs** \\nThe leap from reactive AI assistants
to truly proactive AI agents signifies one of the most thrilling advances in
artificial intelligence. An article centered on this evolution would captivate
readers by illustrating how these agents utilize predictive analytics, contextual
understanding, and continuous learning to anticipate user needs before they
are expressed. By showcasing pioneering applications in personalized healthcare
management, smart homes, and adaptive learning platforms, the article would
highlight the profound shift toward intuitive, anticipatory technology. This
theme not only excites with futuristic promise but also probes the technical
and privacy challenges that come with increased agency and foresight.\"},{\"role\":\"user\",\"content\":\"Analyze
actual complete content as the final answer, not a summary.\"},{\"role\":\"assistant\",\"content\":null,\"tool_calls\":[{\"id\":\"call_G2i9RJGNXKVfnd8ZTaBG8Fwi\",\"type\":\"function\",\"function\":{\"name\":\"ask_question_to_coworker\",\"arguments\":\"{\\\"question\\\":
\\\"What are some trending topics or ideas in various fields that could be explored
for an article?\\\", \\\"context\\\": \\\"We need to generate a list of 5 interesting
ideas to explore for an article. These ideas should be engaging and relevant
to current trends or captivating subjects.\\\", \\\"coworker\\\": \\\"Researcher\\\"}\"}},{\"id\":\"call_j4KH2SGZvNeioql0HcRQ9NTp\",\"type\":\"function\",\"function\":{\"name\":\"ask_question_to_coworker\",\"arguments\":\"{\\\"question\\\":
\\\"What unique angles or perspectives could we explore to make articles more
compelling and engaging?\\\", \\\"context\\\": \\\"Our task involves coming
up with 5 ideas for articles, each with an exciting paragraph highlight that
illustrates the promise and intrigue of the topic. We want them to be more than
generic concepts, shining for readers with fresh insights or engaging twists.\\\",
\\\"coworker\\\": \\\"Senior Writer\\\"}\"}}]},{\"role\":\"tool\",\"tool_call_id\":\"call_G2i9RJGNXKVfnd8ZTaBG8Fwi\",\"name\":\"ask_question_to_coworker\",\"content\":\"Here
are five trending and engaging topics across various fields that could be explored
for an article:\\n\\n1. **The Rise of Autonomous AI Agents and Their Impact
on the Future of Work** \\nExplore how autonomous AI agents\u2014systems capable
of performing complex tasks independently\u2014are transforming industries such
as customer service, software development, and logistics. Discuss implications
for job automation, human-AI collaboration, and ethical considerations surrounding
decision-making autonomy.\\n\\n2. **Generative AI Beyond Text: Innovations in
Audio, Video, and 3D Content Creation** \\nDelve into advancements in generative
AI models that create not only text but also realistic audio, video content,
virtual environments, and 3D models. Highlight applications in gaming, entertainment,
education, and digital marketing, as well as challenges like misinformation
and deepfake detection.\\n\\n3. **AI-Driven Climate Modeling: Enhancing Predictive
Accuracy to Combat Climate Change** \\nExamine how AI and machine learning
are improving climate models by analyzing vast datasets, uncovering patterns,
and simulating environmental scenarios. Discuss how these advances are aiding
policymakers in making informed decisions to address climate risks and sustainability
goals.\\n\\n4. **The Ethical Frontiers of AI in Healthcare: Balancing Innovation
with Patient Privacy** \\nInvestigate ethical challenges posed by AI applications
in healthcare, including diagnosis, personalized treatment, and patient data
management. Focus on balancing rapid technological innovation with privacy,
bias mitigation, and regulatory frameworks to ensure equitable access and trust.\\n\\n5.
**Quantum Computing Meets AI: Exploring the Next Leap in Computational Power**
\ \\nCover the intersection of quantum computing and artificial intelligence,
exploring how quantum algorithms could accelerate AI training processes and
solve problems beyond the reach of classical computers. Outline current research,
potential breakthroughs, and the timeline for real-world applications.\\n\\nEach
of these topics is timely, relevant, and has the potential to engage readers
interested in cutting-edge technology, societal impact, and future trends. Let
me know if you want me to help develop an outline or deeper research into any
of these areas!\"},{\"role\":\"tool\",\"tool_call_id\":\"call_j4KH2SGZvNeioql0HcRQ9NTp\",\"name\":\"ask_question_to_coworker\",\"content\":\"Absolutely!
To create compelling and engaging AI articles that stand out, we need to go
beyond surface-level discussions and deliver fresh perspectives that challenge
assumptions and spark curiosity. Here are five unique angles with their highlight
paragraphs that could really captivate our readers:\\n\\n1. **The Hidden Psychology
of AI Agents: How They Learn Human Biases and What That Means for Our Future**
\ \\n*Highlight:* AI agents don\u2019t just process data\u2014they absorb the
subtle nuances and biases embedded in human language, behavior, and culture.
This article dives deep into the psychological parallels between AI learning
mechanisms and human cognitive biases, revealing surprising ways AI can both
mirror and amplify our prejudices. Understanding these dynamics is crucial for
building trustworthy AI systems and reshaping the future relationship between
humans and machines.\\n\\n2. **From Assistants to Autonomous Creators: The Rise
of AI Agents as Artists, Writers, and Innovators** \\n*Highlight:* What do
we lose and gain when AI agents start producing original art, literature, and
innovations? This piece explores groundbreaking examples where AI isn\u2019t
just a tool but a creative partner that challenges our definition of authorship
and genius. We\u2019ll examine ethical dilemmas, collaborative workflows, and
the exciting frontier where human intuition meets algorithmic originality.\\n\\n3.
**AI Agents in the Wild: How Decentralized Autonomous Organizations Could Redefine
Economy and Governance** \\n*Highlight:* Imagine AI agents operating autonomously
in decentralized networks, making real-time decisions that affect finances,
resource management, and governance without human intervention. This article
uncovers how DAOs powered by AI agents might spontaneously evolve new forms
of organization\u2014transparent, efficient, and resistant to traditional corruption.
We\u2019ll investigate early case studies and speculate on how this might disrupt
centuries-old societal structures.\\n\\n4. **Beyond Chatbots: The Next Generation
of AI Agents as Empathetic Digital Companions** \\n*Highlight:* Moving past
scripted conversations, emerging AI agents simulate empathy and emotional intelligence
in ways that can transform mental health care, education, and companionship.
This article provides an insider look at the complex algorithms and biofeedback
mechanisms enabling AI to recognize, respond to, and foster human emotions\u2014potentially
filling gaps in underserved populations while raising profound questions about
authenticity and connection.\\n\\n5. **The Environmental Toll of AI Agents:
Unmasking the Ecological Cost of Intelligent Automation** \\n*Highlight:* While
AI promises efficiency and innovation, the environmental footprint of training
and deploying millions of AI agents is rarely discussed. This eye-opening article
quantifies the energy demands of current models, challenges the narrative of
AI as an unequivocal green solution, and explores emerging approaches pathing
toward sustainable intelligent automation\u2014an urgent conversation for an
increasingly eco-conscious tech landscape.\\n\\nEach of these angles opens a
door to rich storytelling that blends technical depth, ethical inquiry, and
visionary implications\u2014perfect for readers hungry for insight that\u2019s
both sophisticated and accessible. Let me know which ones resonate most, or
if you want me to refine any into full article outlines!\"},{\"role\":\"user\",\"content\":\"Analyze
the tool result. If requirements are met, provide the Final Answer. Otherwise,
call the next tool. Deliver only the answer without meta-commentary.\"}],\"model\":\"gpt-4o\",\"tool_choice\":\"auto\",\"tools\":[{\"type\":\"function\",\"function\":{\"name\":\"delegate_work_to_coworker\",\"description\":\"Delegate
a specific task to one of the following coworkers: Researcher, Senior Writer\\nThe
@@ -610,7 +582,7 @@ interactions:
connection:
- keep-alive
content-length:
- '11056'
- '9923'
content-type:
- application/json
cookie:
@@ -626,7 +598,7 @@ interactions:
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 2.31.0
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
@@ -634,64 +606,58 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.12
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-DTApljTaq8nDgNMS21B319i56seCn\",\n \"object\":
\"chat.completion\",\n \"created\": 1775845529,\n \"model\": \"gpt-4o-2024-08-06\",\n
string: "{\n \"id\": \"chatcmpl-DIqxidsfoqQl7qXSIVHfSCyETUwlU\",\n \"object\":
\"chat.completion\",\n \"created\": 1773385862,\n \"model\": \"gpt-4o-2024-08-06\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"- **The Evolution of AI Agents: From
Rule-Based Bots to Autonomous Decision Makers** \\n Explore the historical
development of AI agents, tracing the journey from simple scripted chatbots
to advanced autonomous systems capable of complex decision-making and learning.
Dive into key technological milestones, breakthroughs in machine learning,
and current state-of-the-art AI agents. Discuss implications for industries
such as customer service, healthcare, and autonomous vehicles, highlighting
both opportunities and ethical concerns.\\n\\n- **AI in Daily Life: How Intelligent
Agents Are Reshaping Human Routines** \\n Investigate the integration of
AI agents in everyday life\u2014from virtual assistants like Siri and Alexa
to personalized recommendation systems and smart home devices. Analyze how
these AI tools influence productivity, privacy, and social behavior. Include
human interest elements through stories of individuals or communities who
have embraced or resisted these technologies.\\n\\n- **The Future of Work:
AI Agents as Collaborative Colleagues** \\n Examine how AI agents are transforming
workplaces by acting as collaborators rather than just tools. Cover applications
in creative fields, data analysis, and decision support, while addressing
potential challenges such as job displacement, new skill requirements, and
the evolving definition of teamwork. Use expert opinions and case studies
to paint a nuanced future outlook.\\n\\n- **Ethics and Accountability in AI
Agent Development** \\n Delve into the ethical dilemmas posed by increasingly
autonomous AI agents\u2014topics like bias in algorithms, data privacy, and
accountability for AI-driven decisions. Explore measures being taken globally
to regulate AI, frameworks for responsible AI development, and the role of
public awareness. Include historical context about technology ethics to provide
depth.\\n\\n- **Human-AI Symbiosis: Stories of Innovative Partnerships Shaping
Our World** \\n Tell compelling human interest stories about individuals
or organizations pioneering collaborative projects with AI agents that lead
to breakthroughs in science, art, or social good. Highlight how these partnerships
transcend traditional human-machine interaction and open new creative and
problem-solving possibilities, inspiring readers about the potential of human-AI
synergy.\",\n \"refusal\": null,\n \"annotations\": []\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 1903,\n \"completion_tokens\": 399,\n
\ \"total_tokens\": 2302,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
\"assistant\",\n \"content\": \"1. **The Rise of Autonomous AI Agents
and Their Impact on the Future of Work** \\nExplore how autonomous AI agents\u2014systems
capable of performing complex tasks independently\u2014are transforming industries
such as customer service, software development, and logistics. Discuss implications
for job automation, human-AI collaboration, and ethical considerations surrounding
decision-making autonomy.\\n\\n2. **Generative AI Beyond Text: Innovations
in Audio, Video, and 3D Content Creation** \\nDelve into advancements in
generative AI models that create not only text but also realistic audio, video
content, virtual environments, and 3D models. Highlight applications in gaming,
entertainment, education, and digital marketing, as well as challenges like
misinformation and deepfake detection.\\n\\n3. **AI-Driven Climate Modeling:
Enhancing Predictive Accuracy to Combat Climate Change** \\nExamine how AI
and machine learning are improving climate models by analyzing vast datasets,
uncovering patterns, and simulating environmental scenarios. Discuss how these
advances are aiding policymakers in making informed decisions to address climate
risks and sustainability goals.\\n\\n4. **The Ethical Frontiers of AI in Healthcare:
Balancing Innovation with Patient Privacy** \\nInvestigate ethical challenges
posed by AI applications in healthcare, including diagnosis, personalized
treatment, and patient data management. Focus on balancing rapid technological
innovation with privacy, bias mitigation, and regulatory frameworks to ensure
equitable access and trust.\\n\\n5. **Quantum Computing Meets AI: Exploring
the Next Leap in Computational Power** \\nCover the intersection of quantum
computing and artificial intelligence, exploring how quantum algorithms could
accelerate AI training processes and solve problems beyond the reach of classical
computers. Outline current research, potential breakthroughs, and the timeline
for real-world applications.\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 1748,\n \"completion_tokens\":
335,\n \"total_tokens\": 2083,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_df40ab6c25\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_b7c8e3f100\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-Ray:
- 9ea3cb5a6957b301-TPE
- 9db938e60d5bc5e7-EWR
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Fri, 10 Apr 2026 18:25:31 GMT
- Fri, 13 Mar 2026 07:11:04 GMT
Server:
- cloudflare
Strict-Transport-Security:
@@ -707,7 +673,7 @@ interactions:
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '2183'
- '2009'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:

View File

@@ -125,7 +125,7 @@ class TestDeployCommand(unittest.TestCase):
mock_response.json.return_value = {"uuid": "test-uuid"}
self.mock_client.deploy_by_uuid.return_value = mock_response
self.deploy_command.deploy(uuid="test-uuid", skip_validate=True)
self.deploy_command.deploy(uuid="test-uuid")
self.mock_client.deploy_by_uuid.assert_called_once_with("test-uuid")
mock_display.assert_called_once_with({"uuid": "test-uuid"})
@@ -137,7 +137,7 @@ class TestDeployCommand(unittest.TestCase):
mock_response.json.return_value = {"uuid": "test-uuid"}
self.mock_client.deploy_by_name.return_value = mock_response
self.deploy_command.deploy(skip_validate=True)
self.deploy_command.deploy()
self.mock_client.deploy_by_name.assert_called_once_with("test_project")
mock_display.assert_called_once_with({"uuid": "test-uuid"})
@@ -156,7 +156,7 @@ class TestDeployCommand(unittest.TestCase):
self.mock_client.create_crew.return_value = mock_response
with patch("sys.stdout", new=StringIO()) as fake_out:
self.deploy_command.create_crew(skip_validate=True)
self.deploy_command.create_crew()
self.assertIn("Deployment created successfully!", fake_out.getvalue())
self.assertIn("new-uuid", fake_out.getvalue())

View File

@@ -1,430 +0,0 @@
"""Tests for `crewai.cli.deploy.validate`.
The fixtures here correspond 1:1 to the deployment-failure patterns observed
in the #crewai-deployment-failures Slack channel that motivated this work.
"""
from __future__ import annotations
from pathlib import Path
from textwrap import dedent
from typing import Iterable
from unittest.mock import patch
import pytest
from crewai.cli.deploy.validate import (
DeployValidator,
Severity,
normalize_package_name,
)
def _make_pyproject(
name: str = "my_crew",
dependencies: Iterable[str] = ("crewai>=1.14.0",),
*,
hatchling: bool = False,
flow: bool = False,
extra: str = "",
) -> str:
deps = ", ".join(f'"{d}"' for d in dependencies)
lines = [
"[project]",
f'name = "{name}"',
'version = "0.1.0"',
f"dependencies = [{deps}]",
]
if hatchling:
lines += [
"",
"[build-system]",
'requires = ["hatchling"]',
'build-backend = "hatchling.build"',
]
if flow:
lines += ["", "[tool.crewai]", 'type = "flow"']
if extra:
lines += ["", extra]
return "\n".join(lines) + "\n"
def _scaffold_standard_crew(
root: Path,
*,
name: str = "my_crew",
include_crew_py: bool = True,
include_agents_yaml: bool = True,
include_tasks_yaml: bool = True,
include_lockfile: bool = True,
pyproject: str | None = None,
) -> Path:
(root / "pyproject.toml").write_text(pyproject or _make_pyproject(name=name))
if include_lockfile:
(root / "uv.lock").write_text("# dummy uv lockfile\n")
pkg_dir = root / "src" / normalize_package_name(name)
pkg_dir.mkdir(parents=True)
(pkg_dir / "__init__.py").write_text("")
if include_crew_py:
(pkg_dir / "crew.py").write_text(
dedent(
"""
from crewai.project import CrewBase, crew
@CrewBase
class MyCrew:
agents_config = "config/agents.yaml"
tasks_config = "config/tasks.yaml"
@crew
def crew(self):
from crewai import Crew
return Crew(agents=[], tasks=[])
"""
).strip()
+ "\n"
)
config_dir = pkg_dir / "config"
config_dir.mkdir()
if include_agents_yaml:
(config_dir / "agents.yaml").write_text("{}\n")
if include_tasks_yaml:
(config_dir / "tasks.yaml").write_text("{}\n")
return pkg_dir
def _codes(validator: DeployValidator) -> set[str]:
return {r.code for r in validator.results}
def _run_without_import_check(root: Path) -> DeployValidator:
"""Run validation with the subprocess-based import check stubbed out;
the classifier is exercised directly in its own tests below."""
with patch.object(DeployValidator, "_check_module_imports", lambda self: None):
v = DeployValidator(project_root=root)
v.run()
return v
@pytest.mark.parametrize(
"project_name, expected",
[
("my-crew", "my_crew"),
("My Cool-Project", "my_cool_project"),
("crew123", "crew123"),
("crew.name!with$chars", "crewnamewithchars"),
],
)
def test_normalize_package_name(project_name: str, expected: str) -> None:
assert normalize_package_name(project_name) == expected
def test_valid_standard_crew_project_passes(tmp_path: Path) -> None:
_scaffold_standard_crew(tmp_path)
v = _run_without_import_check(tmp_path)
assert v.ok, f"expected clean run, got {v.results}"
def test_missing_pyproject_errors(tmp_path: Path) -> None:
v = _run_without_import_check(tmp_path)
assert "missing_pyproject" in _codes(v)
assert not v.ok
def test_invalid_pyproject_errors(tmp_path: Path) -> None:
(tmp_path / "pyproject.toml").write_text("this is not valid toml ====\n")
v = _run_without_import_check(tmp_path)
assert "invalid_pyproject" in _codes(v)
def test_missing_project_name_errors(tmp_path: Path) -> None:
(tmp_path / "pyproject.toml").write_text(
'[project]\nversion = "0.1.0"\ndependencies = ["crewai>=1.14.0"]\n'
)
v = _run_without_import_check(tmp_path)
assert "missing_project_name" in _codes(v)
def test_missing_lockfile_errors(tmp_path: Path) -> None:
_scaffold_standard_crew(tmp_path, include_lockfile=False)
v = _run_without_import_check(tmp_path)
assert "missing_lockfile" in _codes(v)
def test_poetry_lock_is_accepted(tmp_path: Path) -> None:
_scaffold_standard_crew(tmp_path, include_lockfile=False)
(tmp_path / "poetry.lock").write_text("# poetry lockfile\n")
v = _run_without_import_check(tmp_path)
assert "missing_lockfile" not in _codes(v)
def test_stale_lockfile_warns(tmp_path: Path) -> None:
_scaffold_standard_crew(tmp_path)
# Make lockfile older than pyproject.
lock = tmp_path / "uv.lock"
pyproject = tmp_path / "pyproject.toml"
old_time = pyproject.stat().st_mtime - 60
import os
os.utime(lock, (old_time, old_time))
v = _run_without_import_check(tmp_path)
assert "stale_lockfile" in _codes(v)
# Stale is a warning, so the run can still be ok (no errors).
assert v.ok
def test_missing_package_dir_errors(tmp_path: Path) -> None:
# pyproject says name=my_crew but we only create src/other_pkg/
(tmp_path / "pyproject.toml").write_text(_make_pyproject(name="my_crew"))
(tmp_path / "uv.lock").write_text("")
(tmp_path / "src" / "other_pkg").mkdir(parents=True)
v = _run_without_import_check(tmp_path)
codes = _codes(v)
assert "missing_package_dir" in codes
finding = next(r for r in v.results if r.code == "missing_package_dir")
assert "other_pkg" in finding.hint
def test_egg_info_only_errors_with_targeted_hint(tmp_path: Path) -> None:
"""Regression for the case where only src/<name>.egg-info/ exists."""
(tmp_path / "pyproject.toml").write_text(_make_pyproject(name="odoo_pm_agents"))
(tmp_path / "uv.lock").write_text("")
(tmp_path / "src" / "odoo_pm_agents.egg-info").mkdir(parents=True)
v = _run_without_import_check(tmp_path)
finding = next(r for r in v.results if r.code == "missing_package_dir")
assert "egg-info" in finding.hint
def test_stale_egg_info_sibling_warns(tmp_path: Path) -> None:
_scaffold_standard_crew(tmp_path)
(tmp_path / "src" / "my_crew.egg-info").mkdir()
v = _run_without_import_check(tmp_path)
assert "stale_egg_info" in _codes(v)
def test_missing_crew_py_errors(tmp_path: Path) -> None:
_scaffold_standard_crew(tmp_path, include_crew_py=False)
v = _run_without_import_check(tmp_path)
assert "missing_crew_py" in _codes(v)
def test_missing_agents_yaml_errors(tmp_path: Path) -> None:
_scaffold_standard_crew(tmp_path, include_agents_yaml=False)
v = _run_without_import_check(tmp_path)
assert "missing_agents_yaml" in _codes(v)
def test_missing_tasks_yaml_errors(tmp_path: Path) -> None:
_scaffold_standard_crew(tmp_path, include_tasks_yaml=False)
v = _run_without_import_check(tmp_path)
assert "missing_tasks_yaml" in _codes(v)
def test_flow_project_requires_main_py(tmp_path: Path) -> None:
(tmp_path / "pyproject.toml").write_text(
_make_pyproject(name="my_flow", flow=True)
)
(tmp_path / "uv.lock").write_text("")
(tmp_path / "src" / "my_flow").mkdir(parents=True)
v = _run_without_import_check(tmp_path)
assert "missing_flow_main" in _codes(v)
def test_flow_project_with_main_py_passes(tmp_path: Path) -> None:
(tmp_path / "pyproject.toml").write_text(
_make_pyproject(name="my_flow", flow=True)
)
(tmp_path / "uv.lock").write_text("")
pkg = tmp_path / "src" / "my_flow"
pkg.mkdir(parents=True)
(pkg / "main.py").write_text("# flow entrypoint\n")
v = _run_without_import_check(tmp_path)
assert "missing_flow_main" not in _codes(v)
def test_hatchling_without_wheel_config_passes_when_pkg_dir_matches(
tmp_path: Path,
) -> None:
_scaffold_standard_crew(
tmp_path, pyproject=_make_pyproject(name="my_crew", hatchling=True)
)
v = _run_without_import_check(tmp_path)
# src/my_crew/ exists, so hatch default should find it — no wheel error.
assert "hatch_wheel_target_missing" not in _codes(v)
def test_hatchling_with_explicit_wheel_config_passes(tmp_path: Path) -> None:
extra = (
"[tool.hatch.build.targets.wheel]\n"
'packages = ["src/my_crew"]'
)
_scaffold_standard_crew(
tmp_path,
pyproject=_make_pyproject(name="my_crew", hatchling=True, extra=extra),
)
v = _run_without_import_check(tmp_path)
assert "hatch_wheel_target_missing" not in _codes(v)
def test_classify_missing_openai_key_is_warning(tmp_path: Path) -> None:
v = DeployValidator(project_root=tmp_path)
v._classify_import_error(
"ImportError",
"Error importing native provider: 1 validation error for OpenAICompletion\n"
" Value error, OPENAI_API_KEY is required",
tb="",
)
assert len(v.results) == 1
result = v.results[0]
assert result.code == "llm_init_missing_key"
assert result.severity is Severity.WARNING
assert "OPENAI_API_KEY" in result.title
def test_classify_azure_extra_missing_is_error(tmp_path: Path) -> None:
"""The real message raised by the Azure provider module uses plain
double quotes around the install command (no backticks). Match the
exact string that ships in the provider source so this test actually
guards the regex used in production."""
v = DeployValidator(project_root=tmp_path)
v._classify_import_error(
"ImportError",
'Azure AI Inference native provider not available, to install: uv add "crewai[azure-ai-inference]"',
tb="",
)
assert "missing_provider_extra" in _codes(v)
finding = next(r for r in v.results if r.code == "missing_provider_extra")
assert finding.title.startswith("Azure AI Inference")
assert 'uv add "crewai[azure-ai-inference]"' in finding.hint
@pytest.mark.parametrize(
"pkg_label, install_cmd",
[
("Anthropic", 'uv add "crewai[anthropic]"'),
("AWS Bedrock", 'uv add "crewai[bedrock]"'),
("Google Gen AI", 'uv add "crewai[google-genai]"'),
],
)
def test_classify_missing_provider_extra_matches_real_messages(
tmp_path: Path, pkg_label: str, install_cmd: str
) -> None:
    """Regression matching the remaining real provider error strings verbatim
    (the Azure variant is covered by the test above)."""
v = DeployValidator(project_root=tmp_path)
v._classify_import_error(
"ImportError",
f"{pkg_label} native provider not available, to install: {install_cmd}",
tb="",
)
assert "missing_provider_extra" in _codes(v)
finding = next(r for r in v.results if r.code == "missing_provider_extra")
assert install_cmd in finding.hint
def test_classify_keyerror_at_import_is_warning(tmp_path: Path) -> None:
"""Regression for `KeyError: 'SERPLY_API_KEY'` raised at import time."""
v = DeployValidator(project_root=tmp_path)
v._classify_import_error("KeyError", "'SERPLY_API_KEY'", tb="")
codes = _codes(v)
assert "env_var_read_at_import" in codes
def test_classify_no_crewbase_class_is_error(tmp_path: Path) -> None:
v = DeployValidator(project_root=tmp_path)
v._classify_import_error(
"ValueError",
"Crew class annotated with @CrewBase not found.",
tb="",
)
assert "no_crewbase_class" in _codes(v)
def test_classify_no_flow_subclass_is_error(tmp_path: Path) -> None:
v = DeployValidator(project_root=tmp_path)
v._classify_import_error("ValueError", "No Flow subclass found in the module.", tb="")
assert "no_flow_subclass" in _codes(v)
def test_classify_stale_crewai_pin_attribute_error(tmp_path: Path) -> None:
"""Regression for a stale crewai pin missing `_load_response_format`."""
v = DeployValidator(project_root=tmp_path)
v._classify_import_error(
"AttributeError",
"'EmploymentServiceDecisionSupportSystemCrew' object has no attribute '_load_response_format'",
tb="",
)
assert "stale_crewai_pin" in _codes(v)
def test_classify_unknown_error_is_fallback(tmp_path: Path) -> None:
v = DeployValidator(project_root=tmp_path)
v._classify_import_error("RuntimeError", "something weird happened", tb="")
assert "import_failed" in _codes(v)
def test_env_var_referenced_but_missing_warns(tmp_path: Path) -> None:
pkg = _scaffold_standard_crew(tmp_path)
(pkg / "tools.py").write_text(
'import os\nkey = os.getenv("TAVILY_API_KEY")\n'
)
import os
# Make sure the test doesn't inherit the key from the host environment.
with patch.dict(os.environ, {}, clear=False):
os.environ.pop("TAVILY_API_KEY", None)
v = _run_without_import_check(tmp_path)
codes = _codes(v)
assert "env_vars_not_in_dotenv" in codes
def test_env_var_in_dotenv_does_not_warn(tmp_path: Path) -> None:
pkg = _scaffold_standard_crew(tmp_path)
(pkg / "tools.py").write_text(
'import os\nkey = os.getenv("TAVILY_API_KEY")\n'
)
(tmp_path / ".env").write_text("TAVILY_API_KEY=abc\n")
v = _run_without_import_check(tmp_path)
assert "env_vars_not_in_dotenv" not in _codes(v)
def test_old_crewai_pin_in_uv_lock_warns(tmp_path: Path) -> None:
_scaffold_standard_crew(tmp_path)
(tmp_path / "uv.lock").write_text(
'name = "crewai"\nversion = "1.10.0"\nsource = { registry = "..." }\n'
)
v = _run_without_import_check(tmp_path)
assert "old_crewai_pin" in _codes(v)
def test_modern_crewai_pin_does_not_warn(tmp_path: Path) -> None:
_scaffold_standard_crew(tmp_path)
(tmp_path / "uv.lock").write_text(
'name = "crewai"\nversion = "1.14.1"\nsource = { registry = "..." }\n'
)
v = _run_without_import_check(tmp_path)
assert "old_crewai_pin" not in _codes(v)
def test_create_crew_aborts_on_validation_error(tmp_path: Path) -> None:
"""`crewai deploy create` must not contact the API when validation fails."""
from unittest.mock import MagicMock, patch as mock_patch
from crewai.cli.deploy.main import DeployCommand
with (
mock_patch("crewai.cli.command.get_auth_token", return_value="tok"),
mock_patch("crewai.cli.deploy.main.get_project_name", return_value="p"),
mock_patch("crewai.cli.command.PlusAPI") as mock_api,
mock_patch(
"crewai.cli.deploy.main.validate_project"
) as mock_validate,
):
mock_validate.return_value = MagicMock(ok=False)
cmd = DeployCommand()
cmd.create_crew()
assert not cmd.plus_api_client.create_crew.called
del mock_api # silence unused-var lint

View File

@@ -367,7 +367,7 @@ def test_deploy_push(command, runner):
result = runner.invoke(deploy_push, ["-u", uuid])
assert result.exit_code == 0
mock_deploy.deploy.assert_called_once_with(uuid=uuid, skip_validate=False)
mock_deploy.deploy.assert_called_once_with(uuid=uuid)
@mock.patch("crewai.cli.cli.DeployCommand")
@@ -376,7 +376,7 @@ def test_deploy_push_no_uuid(command, runner):
result = runner.invoke(deploy_push)
assert result.exit_code == 0
mock_deploy.deploy.assert_called_once_with(uuid=None, skip_validate=False)
mock_deploy.deploy.assert_called_once_with(uuid=None)
@mock.patch("crewai.cli.cli.DeployCommand")

View File

@@ -174,51 +174,3 @@ class TestEmitCallCompletedEventPassesUsage:
event = mock_emit.call_args[1]["event"]
assert isinstance(event, LLMCallCompletedEvent)
assert event.usage is None
class TestUsageMetricsNewFields:
def test_add_usage_metrics_aggregates_reasoning_and_cache_creation(self):
from crewai.types.usage_metrics import UsageMetrics
metrics1 = UsageMetrics(
total_tokens=100,
prompt_tokens=60,
completion_tokens=40,
cached_prompt_tokens=10,
reasoning_tokens=15,
cache_creation_tokens=5,
successful_requests=1,
)
metrics2 = UsageMetrics(
total_tokens=200,
prompt_tokens=120,
completion_tokens=80,
cached_prompt_tokens=20,
reasoning_tokens=25,
cache_creation_tokens=10,
successful_requests=1,
)
metrics1.add_usage_metrics(metrics2)
assert metrics1.total_tokens == 300
assert metrics1.prompt_tokens == 180
assert metrics1.completion_tokens == 120
assert metrics1.cached_prompt_tokens == 30
assert metrics1.reasoning_tokens == 40
assert metrics1.cache_creation_tokens == 15
assert metrics1.successful_requests == 2
def test_new_fields_default_to_zero(self):
from crewai.types.usage_metrics import UsageMetrics
metrics = UsageMetrics()
assert metrics.reasoning_tokens == 0
assert metrics.cache_creation_tokens == 0
def test_model_dump_includes_new_fields(self):
from crewai.types.usage_metrics import UsageMetrics
metrics = UsageMetrics(reasoning_tokens=10, cache_creation_tokens=5)
dumped = metrics.model_dump()
assert dumped["reasoning_tokens"] == 10
assert dumped["cache_creation_tokens"] == 5
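The aggregation these tests pin down is plain field-wise summation. A minimal sketch of that behavior, using a dataclass instead of the real Pydantic `UsageMetrics` model (field names are taken from the assertions above):

```python
from dataclasses import dataclass, fields


@dataclass
class UsageMetricsSketch:
    # Field names mirror those asserted in the tests above.
    total_tokens: int = 0
    prompt_tokens: int = 0
    completion_tokens: int = 0
    cached_prompt_tokens: int = 0
    reasoning_tokens: int = 0
    cache_creation_tokens: int = 0
    successful_requests: int = 0

    def add_usage_metrics(self, other: "UsageMetricsSketch") -> None:
        # Every counter is summed field-by-field.
        for f in fields(self):
            setattr(self, f.name, getattr(self, f.name) + getattr(other, f.name))


m1 = UsageMetricsSketch(total_tokens=100, prompt_tokens=60, completion_tokens=40,
                        cached_prompt_tokens=10, reasoning_tokens=15,
                        cache_creation_tokens=5, successful_requests=1)
m2 = UsageMetricsSketch(total_tokens=200, prompt_tokens=120, completion_tokens=80,
                        cached_prompt_tokens=20, reasoning_tokens=25,
                        cache_creation_tokens=10, successful_requests=1)
m1.add_usage_metrics(m2)
```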

View File

@@ -1463,45 +1463,3 @@ def test_tool_search_saves_input_tokens():
f"Expected tool_search ({usage_search.prompt_tokens}) to use fewer input tokens "
f"than no search ({usage_no_search.prompt_tokens})"
)
def test_anthropic_cache_creation_tokens_extraction():
"""Test that cache_creation_input_tokens are extracted from Anthropic responses."""
llm = LLM(model="anthropic/claude-3-5-sonnet-20241022")
mock_response = MagicMock()
mock_response.content = [MagicMock(text="test response")]
mock_response.usage = MagicMock(
input_tokens=100,
output_tokens=50,
cache_read_input_tokens=30,
cache_creation_input_tokens=20,
)
mock_response.stop_reason = None
mock_response.model = None
usage = llm._extract_anthropic_token_usage(mock_response)
assert usage["input_tokens"] == 100
assert usage["output_tokens"] == 50
assert usage["total_tokens"] == 150
assert usage["cached_prompt_tokens"] == 30
assert usage["cache_creation_tokens"] == 20
def test_anthropic_missing_cache_fields_default_to_zero():
"""Test that missing cache fields default to zero."""
llm = LLM(model="anthropic/claude-3-5-sonnet-20241022")
mock_response = MagicMock()
mock_response.content = [MagicMock(text="test response")]
    mock_response.usage = MagicMock(
        input_tokens=40,
        output_tokens=20,
    )
    # Note: a spec'd MagicMock rejects assignment of attributes outside its
    # spec, so the cache fields are set to None on an unrestricted mock.
    mock_response.usage.cache_read_input_tokens = None
    mock_response.usage.cache_creation_input_tokens = None
usage = llm._extract_anthropic_token_usage(mock_response)
assert usage["cached_prompt_tokens"] == 0
assert usage["cache_creation_tokens"] == 0
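The defaulting behavior asserted here (absent or `None` cache counters become 0, and `total_tokens` is the sum of input and output) can be sketched independently of the real `_extract_anthropic_token_usage`; the helper below is illustrative, not the actual implementation:

```python
def extract_anthropic_usage_sketch(usage) -> dict:
    # `or 0` maps both missing attributes and explicit None to zero.
    input_tokens = getattr(usage, "input_tokens", 0) or 0
    output_tokens = getattr(usage, "output_tokens", 0) or 0
    return {
        "input_tokens": input_tokens,
        "output_tokens": output_tokens,
        "total_tokens": input_tokens + output_tokens,
        "cached_prompt_tokens": getattr(usage, "cache_read_input_tokens", 0) or 0,
        "cache_creation_tokens": getattr(usage, "cache_creation_input_tokens", 0) or 0,
    }


class _FullUsage:  # stand-in for an Anthropic usage payload
    input_tokens = 100
    output_tokens = 50
    cache_read_input_tokens = 30
    cache_creation_input_tokens = 20


class _NoneCacheUsage:  # cache fields present but None
    input_tokens = 40
    output_tokens = 20
    cache_read_input_tokens = None
    cache_creation_input_tokens = None


full = extract_anthropic_usage_sketch(_FullUsage())
none_cache = extract_anthropic_usage_sketch(_NoneCacheUsage())
```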

View File

@@ -3,9 +3,13 @@ import json
import logging
import pytest
import tiktoken
from pydantic import BaseModel
from crewai.llm import LLM
# Pre-cache tiktoken encoding so VCR doesn't intercept the download request
tiktoken.get_encoding("cl100k_base")
from crewai.llms.providers.anthropic.completion import AnthropicCompletion
@@ -44,7 +48,9 @@ async def test_anthropic_async_with_max_tokens():
assert result is not None
assert isinstance(result, str)
assert len(result.split()) <= 10
encoder = tiktoken.get_encoding("cl100k_base")
token_count = len(encoder.encode(result))
assert token_count <= 10
@pytest.mark.vcr()

View File

@@ -2,7 +2,6 @@ import os
import sys
import types
from unittest.mock import patch, MagicMock, Mock
from urllib.parse import urlparse
import pytest
from crewai.llm import LLM
@@ -379,72 +378,23 @@ def test_azure_completion_with_tools():
def test_azure_raises_error_when_endpoint_missing():
"""Credentials are validated lazily: construction succeeds, first
client build raises the descriptive error."""
"""Test that AzureCompletion raises ValueError when endpoint is missing"""
from crewai.llms.providers.azure.completion import AzureCompletion
# Clear environment variables
with patch.dict(os.environ, {}, clear=True):
llm = AzureCompletion(model="gpt-4", api_key="test-key")
with pytest.raises(ValueError, match="Azure endpoint is required"):
llm._get_sync_client()
AzureCompletion(model="gpt-4", api_key="test-key")
def test_azure_raises_error_when_api_key_missing():
"""Credentials are validated lazily: construction succeeds, first
client build raises the descriptive error."""
"""Test that AzureCompletion raises ValueError when API key is missing"""
from crewai.llms.providers.azure.completion import AzureCompletion
# Clear environment variables
with patch.dict(os.environ, {}, clear=True):
llm = AzureCompletion(
model="gpt-4", endpoint="https://test.openai.azure.com"
)
with pytest.raises(ValueError, match="Azure API key is required"):
llm._get_sync_client()
@pytest.mark.asyncio
async def test_azure_aclose_is_noop_when_uninitialized():
"""`aclose` (and `async with`) on an uninstantiated-client LLM must be
a harmless no-op, not force lazy construction that then raises for
missing credentials."""
from crewai.llms.providers.azure.completion import AzureCompletion
with patch.dict(os.environ, {}, clear=True):
llm = AzureCompletion(model="gpt-4")
assert llm._async_client is None
await llm.aclose()
async with llm:
pass
def test_azure_lazy_build_reads_env_vars_set_after_construction():
"""When `LLM(model="azure/...")` is constructed before env vars are set,
the lazy client builder must re-read `AZURE_API_KEY` / `AZURE_ENDPOINT`
so the LLM actually works once credentials become available, and the
`is_azure_openai_endpoint` routing flag must be recomputed off the
newly-resolved endpoint."""
from crewai.llms.providers.azure.completion import AzureCompletion
with patch.dict(os.environ, {}, clear=True):
llm = AzureCompletion(model="gpt-4")
assert llm.api_key is None
assert llm.endpoint is None
assert llm.is_azure_openai_endpoint is False
with patch.dict(
os.environ,
{
"AZURE_API_KEY": "late-key",
"AZURE_ENDPOINT": "https://test.openai.azure.com/openai/deployments/gpt-4",
},
clear=True,
):
client = llm._get_sync_client()
assert client is not None
assert llm.api_key == "late-key"
assert llm.endpoint is not None
assert urlparse(llm.endpoint).hostname == "test.openai.azure.com"
assert llm.is_azure_openai_endpoint is True
AzureCompletion(model="gpt-4", endpoint="https://test.openai.azure.com")
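The lazy-credential pattern these tests describe (construction always succeeds; the first client build re-reads the environment and only then raises a descriptive error) can be sketched as follows. The class below is a simplified stand-in for `AzureCompletion`, not its actual implementation:

```python
import os


class LazyAzureClientSketch:
    def __init__(self, model, api_key=None, endpoint=None):
        # No validation here: construction must succeed without credentials.
        self.model = model
        self.api_key = api_key
        self.endpoint = endpoint
        self._client = None

    def _get_sync_client(self):
        if self._client is None:
            # Re-read env vars at build time so credentials set after
            # construction are picked up.
            self.api_key = self.api_key or os.environ.get("AZURE_API_KEY")
            self.endpoint = self.endpoint or os.environ.get("AZURE_ENDPOINT")
            if not self.endpoint:
                raise ValueError("Azure endpoint is required")
            if not self.api_key:
                raise ValueError("Azure API key is required")
            self._client = object()  # real code builds an SDK client here
        return self._client


os.environ.pop("AZURE_API_KEY", None)
os.environ.pop("AZURE_ENDPOINT", None)
llm = LazyAzureClientSketch(model="gpt-4", api_key="test-key")  # no error yet
try:
    llm._get_sync_client()
    error = None
except ValueError as exc:
    error = str(exc)

os.environ["AZURE_API_KEY"] = "late-key"
os.environ["AZURE_ENDPOINT"] = "https://test.example.invalid"
late = LazyAzureClientSketch(model="gpt-4")
client = late._get_sync_client()
```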
def test_azure_endpoint_configuration():
@@ -1453,44 +1403,3 @@ def test_azure_stop_words_still_applied_to_regular_responses():
assert "Observation:" not in result
assert "Found results" not in result
assert "I need to search for more information" in result
def test_azure_reasoning_tokens_and_cached_tokens():
"""Test that reasoning_tokens and cached_tokens are extracted from Azure responses."""
llm = LLM(model="azure/gpt-4")
mock_response = MagicMock()
mock_response.usage = MagicMock(
prompt_tokens=100,
completion_tokens=200,
total_tokens=300,
)
mock_response.usage.prompt_tokens_details = MagicMock(cached_tokens=40)
mock_response.usage.completion_tokens_details = MagicMock(reasoning_tokens=60)
usage = llm._extract_azure_token_usage(mock_response)
assert usage["prompt_tokens"] == 100
assert usage["completion_tokens"] == 200
assert usage["total_tokens"] == 300
assert usage["cached_prompt_tokens"] == 40
assert usage["reasoning_tokens"] == 60
def test_azure_no_detail_fields():
"""Test Azure extraction without detail fields."""
llm = LLM(model="azure/gpt-4")
mock_response = MagicMock()
mock_response.usage = MagicMock(
prompt_tokens=50,
completion_tokens=30,
total_tokens=80,
)
mock_response.usage.prompt_tokens_details = None
mock_response.usage.completion_tokens_details = None
usage = llm._extract_azure_token_usage(mock_response)
assert usage["prompt_tokens"] == 50
assert usage["completion_tokens"] == 30
assert usage["cached_prompt_tokens"] == 0
assert usage["reasoning_tokens"] == 0

View File

@@ -1,6 +1,7 @@
"""Tests for Azure async completion functionality."""
import pytest
import tiktoken
from crewai import Agent, Task, Crew
from crewai.llm import LLM
@@ -56,7 +57,9 @@ async def test_azure_async_with_max_tokens():
assert result is not None
assert isinstance(result, str)
assert len(result.split()) <= 10
encoder = tiktoken.get_encoding("cl100k_base")
token_count = len(encoder.encode(result))
assert token_count <= 10
@pytest.mark.vcr()

View File

@@ -1175,81 +1175,3 @@ def test_bedrock_tool_results_not_merged_across_assistant_messages():
)
assert tool_result_messages[0]["content"][0]["toolResult"]["toolUseId"] == "call_a"
assert tool_result_messages[1]["content"][0]["toolResult"]["toolUseId"] == "call_b"
def test_bedrock_cached_token_tracking():
"""Test that cached tokens (cacheReadInputTokenCount) are tracked for Bedrock."""
llm = LLM(model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0")
with patch.object(llm._client, 'converse') as mock_converse:
mock_response = {
'output': {
'message': {
'role': 'assistant',
'content': [{'text': 'test response'}]
}
},
'usage': {
'inputTokens': 100,
'outputTokens': 50,
'totalTokens': 150,
'cacheReadInputTokenCount': 30,
}
}
mock_converse.return_value = mock_response
result = llm.call("Hello")
assert result == "test response"
assert llm._token_usage['prompt_tokens'] == 100
assert llm._token_usage['completion_tokens'] == 50
assert llm._token_usage['total_tokens'] == 150
assert llm._token_usage['cached_prompt_tokens'] == 30
def test_bedrock_cached_token_alternate_key():
"""Test that the alternate key cacheReadInputTokens also works."""
llm = LLM(model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0")
with patch.object(llm._client, 'converse') as mock_converse:
mock_response = {
'output': {
'message': {
'role': 'assistant',
'content': [{'text': 'test response'}]
}
},
'usage': {
'inputTokens': 80,
'outputTokens': 40,
'totalTokens': 120,
'cacheReadInputTokens': 25,
}
}
mock_converse.return_value = mock_response
llm.call("Hello")
assert llm._token_usage['cached_prompt_tokens'] == 25
def test_bedrock_no_cache_tokens_defaults_to_zero():
"""Test that missing cache token keys default to zero."""
llm = LLM(model="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0")
with patch.object(llm._client, 'converse') as mock_converse:
mock_response = {
'output': {
'message': {
'role': 'assistant',
'content': [{'text': 'test response'}]
}
},
'usage': {
'inputTokens': 60,
'outputTokens': 30,
'totalTokens': 90,
}
}
mock_converse.return_value = mock_response
llm.call("Hello")
assert llm._token_usage['cached_prompt_tokens'] == 0
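The key handling these tests assert, where both `cacheReadInputTokenCount` and `cacheReadInputTokens` are accepted with a zero default, amounts to a small lookup. A sketch (not the actual extraction code):

```python
def bedrock_cached_tokens_sketch(usage: dict) -> int:
    # The Converse usage payload has carried the cache-read count under two
    # different keys; accept either, defaulting to 0 when both are absent.
    return (
        usage.get("cacheReadInputTokenCount")
        or usage.get("cacheReadInputTokens")
        or 0
    )
```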

View File

@@ -6,6 +6,7 @@ cannot be played back properly in CI.
"""
import pytest
import tiktoken
from crewai.llm import LLM
@@ -50,7 +51,9 @@ async def test_bedrock_async_with_max_tokens():
assert result is not None
assert isinstance(result, str)
assert len(result.split()) <= 10
encoder = tiktoken.get_encoding("cl100k_base")
token_count = len(encoder.encode(result))
assert token_count <= 10
@pytest.mark.vcr()

View File

@@ -64,23 +64,6 @@ def test_gemini_completion_module_is_imported():
assert hasattr(completion_mod, 'GeminiCompletion')
def test_gemini_lazy_build_reads_env_vars_set_after_construction():
"""When `LLM(model="gemini/...")` is constructed before env vars are set,
the lazy client builder must re-read `GOOGLE_API_KEY` / `GEMINI_API_KEY`
so the LLM works once credentials become available."""
from crewai.llms.providers.gemini.completion import GeminiCompletion
with patch.dict(os.environ, {}, clear=True):
llm = GeminiCompletion(model="gemini-1.5-pro")
assert llm.api_key is None
assert llm._client is None
with patch.dict(os.environ, {"GEMINI_API_KEY": "late-key"}, clear=True):
client = llm._get_sync_client()
assert client is not None
assert llm.api_key == "late-key"
def test_native_gemini_raises_error_when_initialization_fails():
"""
Test that LLM raises ImportError when native Gemini completion fails.
@@ -1207,42 +1190,3 @@ def test_gemini_cached_prompt_tokens_with_tools():
# cached_prompt_tokens should be populated (may be 0 if Gemini
# doesn't cache for this particular request, but the field should exist)
assert usage.cached_prompt_tokens >= 0
def test_gemini_reasoning_tokens_extraction():
"""Test that thoughts_token_count is extracted as reasoning_tokens from Gemini."""
llm = LLM(model="google/gemini-2.0-flash-001")
mock_response = MagicMock()
mock_response.usage_metadata = MagicMock(
prompt_token_count=100,
candidates_token_count=50,
total_token_count=150,
cached_content_token_count=10,
thoughts_token_count=30,
)
usage = llm._extract_token_usage(mock_response)
assert usage["prompt_token_count"] == 100
assert usage["candidates_token_count"] == 50
assert usage["total_tokens"] == 150
assert usage["cached_prompt_tokens"] == 10
assert usage["reasoning_tokens"] == 30
def test_gemini_no_thinking_tokens_defaults_to_zero():
"""Test that missing thoughts_token_count defaults to zero."""
llm = LLM(model="google/gemini-2.0-flash-001")
mock_response = MagicMock()
mock_response.usage_metadata = MagicMock(
prompt_token_count=80,
candidates_token_count=40,
total_token_count=120,
cached_content_token_count=0,
thoughts_token_count=None,
)
mock_response.candidates = []
usage = llm._extract_token_usage(mock_response)
assert usage["reasoning_tokens"] == 0
assert usage["cached_prompt_tokens"] == 0

View File

@@ -1,6 +1,7 @@
"""Tests for Google (Gemini) async completion functionality."""
import pytest
import tiktoken
from crewai import Agent, Task, Crew
from crewai.llm import LLM
@@ -42,7 +43,9 @@ async def test_gemini_async_with_max_tokens():
assert result is not None
assert isinstance(result, str)
assert len(result.split()) <= 1000
encoder = tiktoken.get_encoding("cl100k_base")
token_count = len(encoder.encode(result))
assert token_count <= 1000
@pytest.mark.vcr()

View File

@@ -1,6 +1,7 @@
"""Tests for LiteLLM fallback async completion functionality."""
import pytest
import tiktoken
from crewai.llm import LLM
@@ -43,7 +44,9 @@ async def test_litellm_async_with_max_tokens():
assert result is not None
assert isinstance(result, str)
assert len(result.split()) <= 10
encoder = tiktoken.get_encoding("cl100k_base")
token_count = len(encoder.encode(result))
assert token_count <= 10
@pytest.mark.asyncio

View File

@@ -1929,47 +1929,6 @@ def test_openai_streaming_returns_tool_calls_without_available_functions():
assert result[0]["type"] == "function"
def test_openai_responses_api_reasoning_tokens_extraction():
"""Test that reasoning_tokens are extracted from Responses API responses."""
llm = LLM(model="openai/gpt-4o")
mock_response = MagicMock()
mock_response.usage = MagicMock(
input_tokens=100,
output_tokens=200,
total_tokens=300,
)
mock_response.usage.input_tokens_details = MagicMock(cached_tokens=25)
mock_response.usage.output_tokens_details = MagicMock(reasoning_tokens=80)
usage = llm._extract_responses_token_usage(mock_response)
assert usage["prompt_tokens"] == 100
assert usage["completion_tokens"] == 200
assert usage["total_tokens"] == 300
assert usage["cached_prompt_tokens"] == 25
assert usage["reasoning_tokens"] == 80
def test_openai_responses_api_no_detail_fields_omitted():
"""Test that reasoning/cached fields are omitted when Responses API details are absent."""
llm = LLM(model="openai/gpt-4o")
mock_response = MagicMock()
mock_response.usage = MagicMock(
input_tokens=50,
output_tokens=30,
total_tokens=80,
)
mock_response.usage.input_tokens_details = None
mock_response.usage.output_tokens_details = None
usage = llm._extract_responses_token_usage(mock_response)
assert usage["prompt_tokens"] == 50
assert usage["completion_tokens"] == 30
assert "cached_prompt_tokens" not in usage
assert "reasoning_tokens" not in usage
@pytest.mark.asyncio
async def test_openai_async_streaming_returns_tool_calls_without_available_functions():
"""Test that async streaming returns tool calls list when available_functions is None.
@@ -2059,44 +2018,3 @@ async def test_openai_async_streaming_returns_tool_calls_without_available_funct
assert result[0]["function"]["arguments"] == '{"expression": "1+1"}'
assert result[0]["id"] == "call_abc123"
assert result[0]["type"] == "function"
def test_openai_reasoning_tokens_extraction():
"""Test that reasoning_tokens are extracted from OpenAI o-series responses."""
llm = LLM(model="openai/gpt-4o")
mock_response = MagicMock()
mock_response.usage = MagicMock(
prompt_tokens=100,
completion_tokens=200,
total_tokens=300,
)
mock_response.usage.prompt_tokens_details = MagicMock(cached_tokens=25)
mock_response.usage.completion_tokens_details = MagicMock(reasoning_tokens=80)
usage = llm._extract_openai_token_usage(mock_response)
assert usage["prompt_tokens"] == 100
assert usage["completion_tokens"] == 200
assert usage["total_tokens"] == 300
assert usage["cached_prompt_tokens"] == 25
assert usage["reasoning_tokens"] == 80
def test_openai_no_detail_fields_omitted():
"""Test that reasoning/cached fields are omitted when details are absent."""
llm = LLM(model="openai/gpt-4o")
mock_response = MagicMock()
mock_response.usage = MagicMock(
prompt_tokens=50,
completion_tokens=30,
total_tokens=80,
)
mock_response.usage.prompt_tokens_details = None
mock_response.usage.completion_tokens_details = None
usage = llm._extract_openai_token_usage(mock_response)
assert usage["prompt_tokens"] == 50
assert usage["completion_tokens"] == 30
assert "cached_prompt_tokens" not in usage
assert "reasoning_tokens" not in usage
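Unlike the Anthropic and Bedrock cases, these tests assert that the detail-derived keys are omitted entirely (not zeroed) when the detail objects are absent. A sketch of that conditional shape, illustrative rather than the actual `_extract_openai_token_usage`:

```python
def extract_openai_usage_sketch(usage) -> dict:
    # Base counters are always present; detail-derived counters are only
    # included when the corresponding details object exists.
    out = {
        "prompt_tokens": usage.prompt_tokens,
        "completion_tokens": usage.completion_tokens,
        "total_tokens": usage.total_tokens,
    }
    details_in = getattr(usage, "prompt_tokens_details", None)
    if details_in is not None:
        out["cached_prompt_tokens"] = details_in.cached_tokens
    details_out = getattr(usage, "completion_tokens_details", None)
    if details_out is not None:
        out["reasoning_tokens"] = details_out.reasoning_tokens
    return out


class _Details:
    cached_tokens = 25
    reasoning_tokens = 80


class _DetailedUsage:
    prompt_tokens = 100
    completion_tokens = 200
    total_tokens = 300
    prompt_tokens_details = _Details()
    completion_tokens_details = _Details()


class _PlainUsage:
    prompt_tokens = 50
    completion_tokens = 30
    total_tokens = 80
    prompt_tokens_details = None
    completion_tokens_details = None


detailed = extract_openai_usage_sketch(_DetailedUsage())
plain = extract_openai_usage_sketch(_PlainUsage())
```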

View File

@@ -1,6 +1,7 @@
"""Tests for OpenAI async completion functionality."""
import pytest
import tiktoken
from crewai import Agent, Task, Crew
from crewai.llm import LLM
@@ -41,7 +42,9 @@ async def test_openai_async_with_max_tokens():
assert result is not None
assert isinstance(result, str)
assert len(result.split()) <= 10
encoder = tiktoken.get_encoding("cl100k_base")
token_count = len(encoder.encode(result))
assert token_count <= 10
@pytest.mark.vcr()

View File

@@ -51,13 +51,14 @@ def test_memory_record_embedding_excluded_from_serialization() -> None:
dumped = r.model_dump()
assert "embedding" not in dumped
assert dumped["content"] == "hello"
# model_dump_json excludes embedding
json_str = r.model_dump_json()
assert "0.1" not in json_str
assert "embedding" not in json_str
rehydrated = MemoryRecord.model_validate_json(json_str)
assert rehydrated.embedding is None
# repr excludes embedding
assert "embedding=" not in repr(r)
assert "0.1" not in repr(r)
# Direct attribute access still works for storage layer
assert r.embedding is not None

View File

@@ -296,8 +296,7 @@ class TestRuntimeStateLineage:
state = self._make_state()
state._checkpoint_id = "20260409T120000_abc12345"
state.fork()
assert state._branch.startswith("fork/20260409T120000_abc12345_")
assert len(state._branch) == len("fork/20260409T120000_abc12345_") + 6
assert state._branch == "fork/20260409T120000_abc12345"
def test_fork_no_checkpoint_id_unique(self) -> None:
state = self._make_state()

View File

@@ -2971,6 +2971,75 @@ def test__setup_for_training(researcher, writer):
assert agent.allow_delegation is False
def test_crew_trained_agents_data_file_defaults(researcher, writer):
"""Test that Crew.trained_agents_data_file defaults to 'trained_agents_data.pkl'."""
task = Task(
description="Test task",
expected_output="Test output",
agent=researcher,
)
crew = Crew(agents=[researcher, writer], tasks=[task])
assert crew.trained_agents_data_file == "trained_agents_data.pkl"
def test_crew_trained_agents_data_file_custom(researcher, writer):
"""Test that Crew.trained_agents_data_file can be set to a custom value."""
task = Task(
description="Test task",
expected_output="Test output",
agent=researcher,
)
crew = Crew(
agents=[researcher, writer],
tasks=[task],
trained_agents_data_file="my_custom_trained.pkl",
)
assert crew.trained_agents_data_file == "my_custom_trained.pkl"
@patch("crewai.agent.core.CrewTrainingHandler")
def test_apply_training_data_uses_crew_custom_filename(mock_handler, researcher):
"""Test that apply_training_data propagates the crew's trained_agents_data_file."""
from crewai.agent.utils import apply_training_data
task = Task(
description="Test task",
expected_output="Test output",
agent=researcher,
)
crew = Crew(
agents=[researcher],
tasks=[task],
trained_agents_data_file="my_custom_trained.pkl",
)
researcher.crew = crew
mock_handler.return_value.load.return_value = {
researcher.role: {
"suggestions": ["Be concise."]
}
}
result = apply_training_data(researcher, "Do the task")
mock_handler.assert_called_with("my_custom_trained.pkl")
assert "Be concise." in result
@patch("crewai.agent.core.CrewTrainingHandler")
def test_apply_training_data_uses_default_when_no_crew(mock_handler, researcher):
"""Test that apply_training_data falls back to the default file when agent has no crew."""
from crewai.agent.utils import apply_training_data
researcher.crew = None
mock_handler.return_value.load.return_value = {}
result = apply_training_data(researcher, "Do the task")
mock_handler.assert_called_with("trained_agents_data.pkl")
assert result == "Do the task"
@pytest.mark.vcr()
def test_replay_feature(researcher, writer):
list_ideas = Task(

View File

@@ -1001,8 +1001,6 @@ def test_usage_info_non_streaming_with_call():
"completion_tokens": 0,
"successful_requests": 0,
"cached_prompt_tokens": 0,
"reasoning_tokens": 0,
"cache_creation_tokens": 0,
}
assert llm.stream is False
@@ -1027,8 +1025,6 @@ def test_usage_info_streaming_with_call():
"completion_tokens": 0,
"successful_requests": 0,
"cached_prompt_tokens": 0,
"reasoning_tokens": 0,
"cache_creation_tokens": 0,
}
assert llm.stream is True
@@ -1060,8 +1056,6 @@ async def test_usage_info_non_streaming_with_acall():
"completion_tokens": 0,
"successful_requests": 0,
"cached_prompt_tokens": 0,
"reasoning_tokens": 0,
"cache_creation_tokens": 0,
}
with patch.object(
@@ -1095,8 +1089,6 @@ async def test_usage_info_non_streaming_with_acall_and_stop():
"completion_tokens": 0,
"successful_requests": 0,
"cached_prompt_tokens": 0,
"reasoning_tokens": 0,
"cache_creation_tokens": 0,
}
with patch.object(
@@ -1129,8 +1121,6 @@ async def test_usage_info_streaming_with_acall():
"completion_tokens": 0,
"successful_requests": 0,
"cached_prompt_tokens": 0,
"reasoning_tokens": 0,
"cache_creation_tokens": 0,
}
with patch.object(

View File

@@ -1,612 +0,0 @@
"""Tests for trace serialization optimization using Pydantic v2 context-based serialization.
These tests verify that trace events use @field_serializer with SerializationInfo.context
to produce lightweight representations, reducing event sizes from 50-100KB to a few KB.
"""
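The mechanism this docstring describes, a `@field_serializer` consulting the serialization context, can be shown on a toy model. The class below is illustrative, not one of the real event classes:

```python
from pydantic import BaseModel, field_serializer


class EventSketch(BaseModel):
    agent: dict  # stands in for a full (heavy) Agent object

    @field_serializer("agent")
    def _serialize_agent(self, value: dict, info):
        # Only collapse to a lightweight ref when dumping for tracing.
        if info.context and info.context.get("trace"):
            return {"id": value["id"], "role": value["role"]}
        return value


event = EventSketch(
    agent={"id": "a1", "role": "Researcher", "goal": "g", "backstory": "b"}
)
light = event.model_dump(context={"trace": True})
full = event.model_dump()
```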
import json
import uuid
from typing import Any
from unittest.mock import MagicMock
import pytest
from pydantic import ConfigDict
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.events.base_events import _trace_agent_ref, _trace_task_ref, _trace_tool_names
from crewai.events.listeners.tracing.utils import safe_serialize_to_dict
from crewai.utilities.serialization import to_serializable
# ---------------------------------------------------------------------------
# Lightweight BaseAgent subclass for tests (avoids heavy dependencies)
# ---------------------------------------------------------------------------
class _StubAgent(BaseAgent):
"""Minimal BaseAgent subclass that satisfies validation without heavy deps."""
model_config = ConfigDict(arbitrary_types_allowed=True)
def execute_task(self, *a: Any, **kw: Any) -> str:
return ""
def create_agent_executor(self, *a: Any, **kw: Any) -> None:
pass
def _parse_tools(self, *a: Any, **kw: Any) -> list:
return []
def get_delegation_tools(self, *a: Any, **kw: Any) -> list:
return []
def get_output_converter(self, *a: Any, **kw: Any) -> Any:
return None
def get_multimodal_tools(self, *a: Any, **kw: Any) -> list:
return []
async def aexecute_task(self, *a: Any, **kw: Any) -> str:
return ""
def get_mcp_tools(self, *a: Any, **kw: Any) -> list:
return []
def get_platform_tools(self, *a: Any, **kw: Any) -> list:
return []
def _make_stub_agent(**overrides) -> _StubAgent:
"""Create a minimal BaseAgent instance for testing."""
defaults = {
"role": "Researcher",
"goal": "Research things",
"backstory": "Expert researcher",
"tools": [],
}
defaults.update(overrides)
return _StubAgent(**defaults)
# ---------------------------------------------------------------------------
# Helpers to build realistic mock objects for event fields
# ---------------------------------------------------------------------------
def _make_mock_task(**overrides):
task = MagicMock()
task.id = overrides.get("id", uuid.uuid4())
task.name = overrides.get("name", "Research Task")
task.description = overrides.get("description", "Do research")
task.expected_output = overrides.get("expected_output", "Research results")
task.async_execution = overrides.get("async_execution", False)
task.human_input = overrides.get("human_input", False)
task.agent = overrides.get("agent", _make_stub_agent())
task.context = overrides.get("context", None)
task.crew = MagicMock()
task.tools = overrides.get("tools", [MagicMock(), MagicMock()])
fp = MagicMock()
fp.uuid_str = str(uuid.uuid4())
fp.metadata = {"name": task.name}
task.fingerprint = fp
return task
def _make_stub_tool(tool_name="web_search") -> Any:
"""Create a minimal BaseTool instance for testing."""
from crewai.tools.base_tool import BaseTool
class _StubTool(BaseTool):
name: str = "stub"
description: str = "stub tool"
def _run(self, *a: Any, **kw: Any) -> str:
return ""
return _StubTool(name=tool_name, description=f"{tool_name} tool")
# ---------------------------------------------------------------------------
# Unit tests: trace ref helpers
# ---------------------------------------------------------------------------
class TestTraceRefHelpers:
def test_trace_agent_ref(self):
agent = _make_stub_agent(role="Analyst")
ref = _trace_agent_ref(agent)
assert ref["role"] == "Analyst"
assert "id" in ref
assert len(ref) == 2 # only id and role
def test_trace_agent_ref_none(self):
assert _trace_agent_ref(None) is None
def test_trace_task_ref(self):
task = _make_mock_task(name="Write Report")
ref = _trace_task_ref(task)
assert ref["name"] == "Write Report"
assert "id" in ref
assert len(ref) == 2
def test_trace_task_ref_falls_back_to_description(self):
task = _make_mock_task(name=None, description="Describe the report")
ref = _trace_task_ref(task)
assert ref["name"] == "Describe the report"
def test_trace_task_ref_none(self):
assert _trace_task_ref(None) is None
def test_trace_tool_names(self):
tools = [_make_stub_tool("search"), _make_stub_tool("read")]
names = _trace_tool_names(tools)
assert names == ["search", "read"]
def test_trace_tool_names_empty(self):
assert _trace_tool_names([]) is None
assert _trace_tool_names(None) is None
# ---------------------------------------------------------------------------
# Integration tests: field serializers on real event classes
# ---------------------------------------------------------------------------
class TestAgentEventFieldSerializers:
"""Test that agent event field serializers respond to trace context."""
def test_agent_execution_started_trace_context(self):
from crewai.events.types.agent_events import AgentExecutionStartedEvent
agent = _make_stub_agent(role="Researcher")
task = _make_mock_task(name="Research Task")
tools = [_make_stub_tool("search"), _make_stub_tool("read")]
event = AgentExecutionStartedEvent(
agent=agent, task=task, tools=tools, task_prompt="Do research"
)
# With trace context: lightweight refs
trace_dump = event.model_dump(context={"trace": True})
assert trace_dump["agent"] == {"id": str(agent.id), "role": "Researcher"}
assert trace_dump["task"] == {"id": str(task.id), "name": "Research Task"}
assert trace_dump["tools"] == ["search", "read"]
def test_agent_execution_started_no_context(self):
from crewai.events.types.agent_events import AgentExecutionStartedEvent
agent = _make_stub_agent(role="SpecificRole")
task = _make_mock_task()
event = AgentExecutionStartedEvent(
agent=agent, task=task, tools=None, task_prompt="Do research"
)
# Without context: full agent dict (Pydantic model_dump expands it)
normal_dump = event.model_dump()
assert isinstance(normal_dump["agent"], dict)
assert normal_dump["agent"]["role"] == "SpecificRole"
# Should have ALL agent fields, not just the lightweight ref
assert "goal" in normal_dump["agent"]
assert "backstory" in normal_dump["agent"]
assert "max_iter" in normal_dump["agent"]
def test_agent_execution_error_preserves_identification(self):
from crewai.events.types.agent_events import AgentExecutionErrorEvent
agent = _make_stub_agent(role="Analyst")
task = _make_mock_task(name="Analysis Task")
event = AgentExecutionErrorEvent(
agent=agent, task=task, error="Something went wrong"
)
trace_dump = event.model_dump(context={"trace": True})
# Error events should still have agent/task identification as refs
assert trace_dump["agent"]["role"] == "Analyst"
assert trace_dump["task"]["name"] == "Analysis Task"
assert trace_dump["error"] == "Something went wrong"
def test_agent_execution_completed_trace_context(self):
from crewai.events.types.agent_events import AgentExecutionCompletedEvent
agent = _make_stub_agent(role="Writer")
task = _make_mock_task(name="Writing Task")
event = AgentExecutionCompletedEvent(
agent=agent, task=task, output="Final output"
)
trace_dump = event.model_dump(context={"trace": True})
assert trace_dump["agent"]["role"] == "Writer"
assert trace_dump["task"]["name"] == "Writing Task"
assert trace_dump["output"] == "Final output"
class TestTaskEventFieldSerializers:
"""Test that task event field serializers respond to trace context."""
def test_task_started_trace_context(self):
from crewai.events.types.task_events import TaskStartedEvent
task = _make_mock_task(name="Test Task")
event = TaskStartedEvent(task=task, context="some context")
trace_dump = event.model_dump(context={"trace": True})
assert trace_dump["task"] == {"id": str(task.id), "name": "Test Task"}
assert trace_dump["context"] == "some context"
def test_task_failed_trace_context(self):
from crewai.events.types.task_events import TaskFailedEvent
task = _make_mock_task(name="Failing Task")
event = TaskFailedEvent(task=task, error="Task failed")
trace_dump = event.model_dump(context={"trace": True})
assert trace_dump["task"]["name"] == "Failing Task"
assert trace_dump["error"] == "Task failed"
class TestCrewEventFieldSerializers:
"""Test that crew event field serializers respond to trace context."""
def test_crew_kickoff_started_excludes_crew_in_trace(self):
from crewai.events.types.crew_events import CrewKickoffStartedEvent
crew = MagicMock()
crew.fingerprint = MagicMock()
crew.fingerprint.uuid_str = str(uuid.uuid4())
crew.fingerprint.metadata = {}
event = CrewKickoffStartedEvent(
crew=crew, crew_name="TestCrew", inputs={"key": "value"}
)
trace_dump = event.model_dump(context={"trace": True})
# crew field should be None in trace context
assert trace_dump["crew"] is None
# scalar fields preserved
assert trace_dump["crew_name"] == "TestCrew"
assert trace_dump["inputs"] == {"key": "value"}
def test_crew_event_no_context_preserves_crew(self):
from crewai.events.types.crew_events import CrewKickoffStartedEvent
crew = MagicMock()
crew.fingerprint = MagicMock()
crew.fingerprint.uuid_str = str(uuid.uuid4())
crew.fingerprint.metadata = {}
event = CrewKickoffStartedEvent(
crew=crew, crew_name="TestCrew", inputs=None
)
normal_dump = event.model_dump()
# Without trace context, crew should NOT be None (field serializer didn't fire)
assert normal_dump["crew"] is not None
class TestLLMEventFieldSerializers:
"""Test that LLM event field serializers respond to trace context."""
def test_llm_call_started_excludes_callbacks_in_trace(self):
from crewai.events.types.llm_events import LLMCallStartedEvent
event = LLMCallStartedEvent(
call_id="test-call",
messages=[{"role": "user", "content": "Hello"}],
tools=[{"name": "search", "description": "Search tool"}],
callbacks=[MagicMock(), MagicMock()],
available_functions={"search": MagicMock()},
)
trace_dump = event.model_dump(context={"trace": True})
# callbacks and available_functions excluded
assert trace_dump["callbacks"] is None
assert trace_dump["available_functions"] is None
# tools preserved (lightweight list of dicts)
assert trace_dump["tools"] == [{"name": "search", "description": "Search tool"}]
# messages preserved
assert trace_dump["messages"] == [{"role": "user", "content": "Hello"}]
# ---------------------------------------------------------------------------
# Integration tests: safe_serialize_to_dict with context
# ---------------------------------------------------------------------------
class TestSafeSerializeWithContext:
"""Test that safe_serialize_to_dict properly passes context through."""
def test_context_flows_through_to_field_serializers(self):
from crewai.events.types.agent_events import AgentExecutionErrorEvent
agent = _make_stub_agent(role="Worker")
task = _make_mock_task(name="Work Task")
event = AgentExecutionErrorEvent(
agent=agent, task=task, error="error msg"
)
result = safe_serialize_to_dict(event, context={"trace": True})
# Field serializers should have fired
assert result["agent"] == {"id": str(agent.id), "role": "Worker"}
assert result["task"] == {"id": str(task.id), "name": "Work Task"}
assert result["error"] == "error msg"
def test_no_context_preserves_full_serialization(self):
from crewai.events.types.task_events import TaskFailedEvent
task = _make_mock_task(name="Test")
event = TaskFailedEvent(task=task, error="fail")
result = safe_serialize_to_dict(event)
# Without context, task should not be a lightweight ref
assert result.get("task") is not None
# It should be the raw object (model_dump returns it as-is for Any fields)
# to_serializable will then repr() or process it further
# ---------------------------------------------------------------------------
# Integration tests: TraceCollectionListener._build_event_data
# ---------------------------------------------------------------------------
class TestBuildEventData:
@pytest.fixture
def listener(self):
from crewai.events.listeners.tracing.trace_listener import (
TraceCollectionListener,
)
TraceCollectionListener._instance = None
TraceCollectionListener._initialized = False
TraceCollectionListener._listeners_setup = False
return TraceCollectionListener()
def test_crew_kickoff_started_has_crew_structure(self, listener):
agent = _make_stub_agent(role="Researcher")
agent.tools = [_make_stub_tool("search"), _make_stub_tool("read")]
task = _make_mock_task(name="Research Task", agent=agent)
task.context = None
crew = MagicMock()
crew.agents = [agent]
crew.tasks = [task]
crew.process = "sequential"
crew.verbose = True
crew.memory = False
crew.fingerprint = MagicMock()
crew.fingerprint.uuid_str = str(uuid.uuid4())
crew.fingerprint.metadata = {}
from crewai.events.types.crew_events import CrewKickoffStartedEvent
event = CrewKickoffStartedEvent(
crew=crew, crew_name="TestCrew", inputs={"key": "value"}
)
result = listener._build_event_data("crew_kickoff_started", event, None)
assert "crew_structure" in result
cs = result["crew_structure"]
assert len(cs["agents"]) == 1
assert cs["agents"][0]["role"] == "Researcher"
assert cs["agents"][0]["tool_names"] == ["search", "read"]
assert len(cs["tasks"]) == 1
assert cs["tasks"][0]["name"] == "Research Task"
assert "agent_ref" in cs["tasks"][0]
assert cs["tasks"][0]["agent_ref"]["role"] == "Researcher"
def test_crew_kickoff_started_context_task_ids(self, listener):
agent = _make_stub_agent()
task1 = _make_mock_task(name="Task 1", agent=agent)
task1.context = None
task2 = _make_mock_task(name="Task 2", agent=agent)
task2.context = [task1]
crew = MagicMock()
crew.agents = [agent]
crew.tasks = [task1, task2]
crew.process = "sequential"
crew.verbose = False
crew.memory = False
crew.fingerprint = MagicMock()
crew.fingerprint.uuid_str = str(uuid.uuid4())
crew.fingerprint.metadata = {}
from crewai.events.types.crew_events import CrewKickoffStartedEvent
event = CrewKickoffStartedEvent(
crew=crew, crew_name="TestCrew", inputs=None
)
result = listener._build_event_data("crew_kickoff_started", event, None)
task2_data = result["crew_structure"]["tasks"][1]
assert "context_task_ids" in task2_data
assert str(task1.id) in task2_data["context_task_ids"]
def test_generic_event_uses_trace_context(self, listener):
"""Non-complex events should use context-based serialization."""
from crewai.events.types.crew_events import CrewKickoffCompletedEvent
crew = MagicMock()
crew.fingerprint = MagicMock()
crew.fingerprint.uuid_str = str(uuid.uuid4())
crew.fingerprint.metadata = {}
event = CrewKickoffCompletedEvent(
crew=crew, crew_name="TestCrew", output="Final result", total_tokens=5000
)
result = listener._build_event_data("crew_kickoff_completed", event, None)
# Scalar fields preserved
assert result.get("crew_name") == "TestCrew"
assert result.get("total_tokens") == 5000
# crew excluded by field serializer
assert result.get("crew") is None
# No crew_structure (that's only for kickoff_started)
assert "crew_structure" not in result
def test_task_started_custom_projection(self, listener):
task = _make_mock_task(name="Test Task")
from crewai.events.types.task_events import TaskStartedEvent
event = TaskStartedEvent(task=task, context="test context")
source = MagicMock()
source.agent = _make_stub_agent(role="Worker")
result = listener._build_event_data("task_started", event, source)
assert result["task_name"] == "Test Task"
assert result["agent_role"] == "Worker"
assert result["task_id"] == str(task.id)
assert result["context"] == "test context"
def test_llm_call_started_uses_trace_context(self, listener):
from crewai.events.types.llm_events import LLMCallStartedEvent
event = LLMCallStartedEvent(
call_id="test",
messages=[{"role": "user", "content": "Hello"}],
tools=[{"name": "search"}],
callbacks=[MagicMock()],
available_functions={"fn": MagicMock()},
)
result = listener._build_event_data("llm_call_started", event, None)
# callbacks and available_functions excluded via field serializer
assert result.get("callbacks") is None
assert result.get("available_functions") is None
# tools preserved (lightweight schemas)
assert result.get("tools") == [{"name": "search"}]
def test_agent_execution_error_preserves_identification(self, listener):
"""Error events should preserve agent/task identification via field serializers."""
from crewai.events.types.agent_events import AgentExecutionErrorEvent
agent = _make_stub_agent(role="Analyst")
task = _make_mock_task(name="Analysis")
event = AgentExecutionErrorEvent(
agent=agent, task=task, error="Something broke"
)
result = listener._build_event_data("agent_execution_error", event, None)
# Field serializers return lightweight refs, not None
assert result["agent"] == {"id": str(agent.id), "role": "Analyst"}
assert result["task"] == {"id": str(task.id), "name": "Analysis"}
assert result["error"] == "Something broke"
def test_task_failed_preserves_identification(self, listener):
from crewai.events.types.task_events import TaskFailedEvent
task = _make_mock_task(name="Failed Task")
event = TaskFailedEvent(task=task, error="Task failed")
result = listener._build_event_data("task_failed", event, None)
assert result["task"] == {"id": str(task.id), "name": "Failed Task"}
assert result["error"] == "Task failed"
# ---------------------------------------------------------------------------
# Size reduction verification
# ---------------------------------------------------------------------------
class TestSizeReduction:
@pytest.fixture
def listener(self):
from crewai.events.listeners.tracing.trace_listener import (
TraceCollectionListener,
)
TraceCollectionListener._instance = None
TraceCollectionListener._initialized = False
TraceCollectionListener._listeners_setup = False
return TraceCollectionListener()
def test_task_started_event_size(self, listener):
"""task_started event data should be well under 2KB."""
agent = _make_stub_agent(
role="Researcher",
goal="Research" * 50,
backstory="Expert" * 100,
)
agent.tools = [_make_stub_tool(f"tool_{i}") for i in range(5)]
task = _make_mock_task(
name="Research Task",
description="Detailed description" * 20,
expected_output="Expected" * 10,
agent=agent,
)
task.context = [_make_mock_task() for _ in range(3)]
task.tools = [_make_stub_tool(f"t_{i}") for i in range(3)]
from crewai.events.types.task_events import TaskStartedEvent
event = TaskStartedEvent(task=task, context="test context")
source = MagicMock()
source.agent = agent
result = listener._build_event_data("task_started", event, source)
serialized = json.dumps(result, default=str)
assert len(serialized) < 2000, f"task_started too large: {len(serialized)} bytes"
assert "task_name" in result
assert "agent_role" in result
def test_error_event_size(self, listener):
"""Error events should be small despite having agent/task refs."""
from crewai.events.types.agent_events import AgentExecutionErrorEvent
agent = _make_stub_agent(
goal="Very long goal " * 100,
backstory="Very long backstory " * 100,
)
task = _make_mock_task(description="Very long description " * 100)
event = AgentExecutionErrorEvent(
agent=agent, task=task, error="error"
)
result = listener._build_event_data("agent_execution_error", event, None)
serialized = json.dumps(result, default=str)
# Should be small - agent/task are just {id, role/name} refs
assert len(serialized) < 5000, f"error event too large: {len(serialized)} bytes"
# ---------------------------------------------------------------------------
# to_serializable context threading
# ---------------------------------------------------------------------------
class TestToSerializableContext:
"""Test that context parameter flows through to_serializable correctly."""
def test_context_passed_to_model_dump(self):
from crewai.events.types.agent_events import AgentExecutionErrorEvent
agent = _make_stub_agent(role="Tester")
task = _make_mock_task(name="Test Task")
event = AgentExecutionErrorEvent(
agent=agent, task=task, error="test error"
)
# Directly use to_serializable with context
result = to_serializable(event, context={"trace": True})
assert isinstance(result, dict)
assert result["agent"] == {"id": str(agent.id), "role": "Tester"}
assert result["task"] == {"id": str(task.id), "name": "Test Task"}
def test_no_context_does_not_trigger_serializers(self):
from crewai.events.types.crew_events import CrewKickoffStartedEvent
crew = MagicMock()
crew.fingerprint = MagicMock()
crew.fingerprint.uuid_str = str(uuid.uuid4())
crew.fingerprint.metadata = {}
event = CrewKickoffStartedEvent(
crew=crew, crew_name="Test", inputs=None
)
# Without context, crew should NOT be None
result = event.model_dump()
assert result["crew"] is not None
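
The tests above all hinge on one Pydantic feature: a `field_serializer` can inspect the serialization context passed to `model_dump(context=...)` and swap a heavy object for a lightweight reference. A minimal self-contained sketch of that pattern, with illustrative `Agent`/`Event` stand-ins rather than the real crewAI event classes:

```python
from typing import Any

from pydantic import BaseModel, SerializationInfo, field_serializer


class Agent(BaseModel):
    id: str
    role: str
    backstory: str = ""  # heavy field we don't want in trace payloads


class Event(BaseModel):
    name: str
    agent: Agent

    @field_serializer("agent")
    def _serialize_agent(self, agent: Agent, info: SerializationInfo) -> Any:
        # In trace context, emit a lightweight {id, role} ref instead of
        # the full agent payload; otherwise return the value unchanged.
        if info.context and info.context.get("trace"):
            return {"id": agent.id, "role": agent.role}
        return agent


event = Event(
    name="started",
    agent=Agent(id="a1", role="Researcher", backstory="x" * 1000),
)

trace_dump = event.model_dump(context={"trace": True})  # lightweight ref
normal_dump = event.model_dump()  # serializer branch not taken
```

Scalar fields like `name` pass through untouched in both dumps; only the fields carrying heavy objects get projected down, which is what keeps the trace events small.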

View File

@@ -119,12 +119,10 @@ def test_create_llm_with_invalid_type() -> None:
def test_create_llm_openai_missing_api_key() -> None:
"""Credentials are validated lazily: `create_llm` succeeds, and the
descriptive error only surfaces when the client is actually built."""
"""Test that create_llm raises error when OpenAI API key is missing"""
with patch.dict(os.environ, {}, clear=True):
llm = create_llm(llm_value="gpt-4o")
with pytest.raises((ValueError, ImportError)) as exc_info:
llm._get_sync_client()
create_llm(llm_value="gpt-4o")
error_message = str(exc_info.value).lower()
assert "openai_api_key" in error_message or "api_key" in error_message
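
The revised expectation, where `create_llm` succeeds and the missing-key error only surfaces when the client is built, is the usual lazy-validation pattern. A hypothetical sketch under that assumption (the class and method names here are illustrative stand-ins, not the actual crewAI API):

```python
import os
from typing import Optional


class _ClientStub:
    """Stands in for a real SDK client; built only on demand."""

    def __init__(self, api_key: str) -> None:
        self.api_key = api_key


class LLM:
    def __init__(self, model: str) -> None:
        self.model = model
        self._client: Optional[_ClientStub] = None

    def _get_sync_client(self) -> _ClientStub:
        # Credentials are validated here, not in __init__, so creating
        # the LLM object never fails on a missing key.
        if self._client is None:
            api_key = os.environ.get("OPENAI_API_KEY")
            if not api_key:
                raise ValueError("OPENAI_API_KEY is not set")
            self._client = _ClientStub(api_key)
        return self._client


def create_llm(llm_value: str) -> LLM:
    return LLM(model=llm_value)


os.environ.pop("OPENAI_API_KEY", None)  # simulate a missing key
llm = create_llm("gpt-4o")              # succeeds: no eager validation

error_message = ""
try:
    llm._get_sync_client()              # the check fires here instead
except ValueError as exc:
    error_message = str(exc).lower()
```

Deferring the check means object construction stays cheap and side-effect free, and the descriptive error appears at the first point where credentials are actually needed.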

View File

@@ -1,3 +1,3 @@
"""CrewAI development tools."""
__version__ = "1.14.2a4"
__version__ = "1.14.2a1"

View File

@@ -29,33 +29,6 @@ load_dotenv()
console = Console()
def _resume_hint(message: str) -> None:
"""Print a boxed resume hint after a failure."""
console.print()
console.print(
Panel(
message,
title="[bold yellow]How to resume[/bold yellow]",
border_style="yellow",
padding=(1, 2),
)
)
def _print_release_error(e: BaseException) -> None:
"""Print a release error with stderr if available."""
if isinstance(e, KeyboardInterrupt):
raise
if isinstance(e, SystemExit):
return
if isinstance(e, subprocess.CalledProcessError):
console.print(f"[red]Error running command:[/red] {e}")
if e.stderr:
console.print(e.stderr)
else:
console.print(f"[red]Error:[/red] {e}")
def run_command(cmd: list[str], cwd: Path | None = None) -> str:
"""Run a shell command and return output.
@@ -291,9 +264,11 @@ def add_docs_version(docs_json_path: Path, version: str) -> bool:
if not versions:
continue
# Skip if this version already exists for this language
if any(v.get("version") == version_label for v in versions):
continue
# Find the current default and copy its tabs
default_version = next(
(v for v in versions if v.get("default")),
versions[0],
@@ -305,7 +280,10 @@ def add_docs_version(docs_json_path: Path, version: str) -> bool:
"tabs": default_version.get("tabs", []),
}
# Remove default flag from old default
default_version.pop("default", None)
# Insert new version at the beginning
versions.insert(0, new_version)
updated = True
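
The hunk above promotes a new docs version to the default slot. A simplified standalone version of that update, operating on a single `versions` list rather than walking every language block in docs.json; the `version` and `default` keys on the new entry are inferred from context, since the hunk only shows the `tabs` line:

```python
def add_version(versions: list[dict], version_label: str) -> bool:
    """Insert a new default docs version at the front of the list."""
    # Skip if this version already exists
    if any(v.get("version") == version_label for v in versions):
        return False
    # Find the current default and copy its tabs
    default_version = next((v for v in versions if v.get("default")), versions[0])
    new_version = {
        "version": version_label,
        "default": True,
        "tabs": default_version.get("tabs", []),
    }
    # Remove default flag from old default, then insert new version first
    default_version.pop("default", None)
    versions.insert(0, new_version)
    return True


versions = [{"version": "v1.13", "default": True, "tabs": [{"tab": "Docs"}]}]
changed = add_version(versions, "v1.14")
```

Re-running with the same label is a no-op, which is what makes the release script safe to resume after a partial failure.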
@@ -499,7 +477,7 @@ def _is_crewai_dep(spec: str) -> bool:
"""Return True if *spec* is a ``crewai`` or ``crewai[...]`` dependency."""
if not spec.startswith("crewai"):
return False
rest = spec[6:]
rest = spec[6:] # after "crewai"
return len(rest) > 0 and rest[0] in ("[", "=", ">", "<", "~", "!")
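
The prefix check above is deliberately stricter than a plain `startswith`: it must accept `crewai==1.0` and `crewai[tools]>=1.0` while rejecting sibling packages such as `crewai-tools`. A standalone copy of the function with a few probes:

```python
def _is_crewai_dep(spec: str) -> bool:
    """Return True only for `crewai` or `crewai[...]` dependency specs."""
    if not spec.startswith("crewai"):
        return False
    rest = spec[6:]  # after "crewai"
    # A crewai spec continues with extras or a version operator;
    # anything else (e.g. "crewai-tools") is a different package.
    return len(rest) > 0 and rest[0] in ("[", "=", ">", "<", "~", "!")


checks = {
    "crewai==1.14.0": True,
    "crewai[tools]>=1.0": True,
    "crewai~=1.13": True,
    "crewai-tools==0.5": False,  # sibling package, not crewai itself
    "crewai": False,             # bare name carries no pin to rewrite
    "requests>=2.31": False,
}
results = {spec: _is_crewai_dep(spec) for spec in checks}
```

Checking the first character after the name against the set of extras/operator characters is what prevents accidental rewrites of other `crewai*` distributions.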
@@ -521,6 +499,7 @@ def _pin_crewai_deps(content: str, version: str) -> str:
deps = doc.get("project", {}).get(key)
if deps is None:
continue
# optional-dependencies is a table of lists; dependencies is a list
dep_lists = deps.values() if isinstance(deps, Mapping) else [deps]
for dep_list in dep_lists:
for i, dep in enumerate(dep_list):
@@ -659,6 +638,7 @@ def get_github_contributors(commit_range: str) -> list[str]:
List of GitHub usernames sorted alphabetically.
"""
try:
# Get GitHub token from gh CLI
try:
gh_token = run_command(["gh", "auth", "token"])
except subprocess.CalledProcessError:
@@ -700,6 +680,11 @@ def get_github_contributors(commit_range: str) -> list[str]:
return []
# ---------------------------------------------------------------------------
# Shared workflow helpers
# ---------------------------------------------------------------------------
def _poll_pr_until_merged(
branch_name: str, label: str, repo: str | None = None
) -> None:
@@ -779,6 +764,7 @@ def _update_all_versions(
"[yellow]Warning:[/yellow] No __version__ attributes found to update"
)
# Update CLI template pyproject.toml files
templates_dir = lib_dir / "crewai" / "src" / "crewai" / "cli" / "templates"
if templates_dir.exists():
if dry_run:
@@ -1177,11 +1163,13 @@ def _repin_crewai_install(run_value: str, version: str) -> str:
while marker in remainder:
before, _, after = remainder.partition(marker)
result.append(before)
# after looks like: a2a]==1.14.0" ...
bracket_end = after.index("]")
extras = after[:bracket_end]
rest = after[bracket_end + 1 :]
if rest.startswith("=="):
ver_start = 2
# Find end of version — next quote or whitespace
ver_start = 2 # len("==")
ver_end = ver_start
while ver_end < len(rest) and rest[ver_end] not in ('"', "'", " ", "\n"):
ver_end += 1
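
The scan in this hunk rewrites `crewai[<extras>]==<version>` pins inside a workflow `run` string. A simplified standalone sketch of the same parse, with the surrounding `_repin_crewai_install` plumbing omitted and the function name chosen here for illustration:

```python
def replace_pinned_version(text: str, marker: str, new_version: str) -> str:
    """Rewrite every `<marker><extras>]==<old>` pin found in *text*.

    *marker* is expected to end at the opening bracket (e.g. "crewai["),
    so the fragment after it looks like: a2a]==1.14.0" ...
    """
    result: list[str] = []
    remainder = text
    while marker in remainder:
        before, _, after = remainder.partition(marker)
        result.append(before)
        bracket_end = after.index("]")
        extras = after[:bracket_end]
        rest = after[bracket_end + 1:]
        if rest.startswith("=="):
            # Find end of version: next quote or whitespace
            ver_end = 2  # len("==")
            while ver_end < len(rest) and rest[ver_end] not in ('"', "'", " ", "\n"):
                ver_end += 1
            rest = "==" + new_version + rest[ver_end:]
        result.append(marker + extras + "]")
        remainder = rest
    result.append(remainder)
    return "".join(result)


run_value = 'uv pip install "crewai[tools,a2a]==1.13.0" && pytest'
updated = replace_pinned_version(run_value, "crewai[", "1.14.0")
```

Scanning forward to the next quote or whitespace keeps the rewrite agnostic to the version's length, so prerelease tags like `1.14.2a4` are handled the same way as plain versions.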
@@ -1343,6 +1331,7 @@ def _release_enterprise(version: str, is_prerelease: bool, dry_run: bool) -> Non
run_command(["gh", "repo", "clone", enterprise_repo, str(repo_dir)])
console.print(f"[green]✓[/green] Cloned {enterprise_repo}")
# --- bump versions ---
for rel_dir in _ENTERPRISE_VERSION_DIRS:
pkg_dir = repo_dir / rel_dir
if not pkg_dir.exists():
@@ -1372,12 +1361,14 @@ def _release_enterprise(version: str, is_prerelease: bool, dry_run: bool) -> Non
f"{pyproject.relative_to(repo_dir)}"
)
# --- update crewai[tools] pin ---
enterprise_pyproject = repo_dir / enterprise_dep_path
if _update_enterprise_crewai_dep(enterprise_pyproject, version):
console.print(
f"[green]✓[/green] Updated crewai[tools] dep in {enterprise_dep_path}"
)
# --- update crewai pins in CI workflows ---
for wf in _update_enterprise_workflows(repo_dir, version):
console.print(
f"[green]✓[/green] Updated crewai pin in {wf.relative_to(repo_dir)}"
@@ -1417,6 +1408,7 @@ def _release_enterprise(version: str, is_prerelease: bool, dry_run: bool) -> Non
time.sleep(_PYPI_POLL_INTERVAL)
console.print("[green]✓[/green] Workspace synced")
# --- branch, commit, push, PR ---
branch_name = f"feat/bump-version-{version}"
run_command(["git", "checkout", "-b", branch_name], cwd=repo_dir)
run_command(["git", "add", "."], cwd=repo_dir)
@@ -1450,6 +1442,7 @@ def _release_enterprise(version: str, is_prerelease: bool, dry_run: bool) -> Non
_poll_pr_until_merged(branch_name, "enterprise bump PR", repo=enterprise_repo)
# --- tag and release ---
run_command(["git", "checkout", "main"], cwd=repo_dir)
run_command(["git", "pull"], cwd=repo_dir)
@@ -1491,6 +1484,7 @@ def _trigger_pypi_publish(tag_name: str, wait: bool = False) -> None:
tag_name: The release tag to publish.
wait: Block until the workflow run completes.
"""
# Capture the latest run ID before triggering so we can detect the new one
prev_run_id = ""
if wait:
try:
@@ -1565,6 +1559,11 @@ def _trigger_pypi_publish(tag_name: str, wait: bool = False) -> None:
console.print("[green]✓[/green] PyPI publish workflow completed")
# ---------------------------------------------------------------------------
# CLI commands
# ---------------------------------------------------------------------------
@click.group()
def cli() -> None:
"""Development tools for version bumping and git automation."""
@@ -1832,80 +1831,62 @@ def release(
skip_enterprise: Skip the enterprise release phase.
skip_to_enterprise: Skip phases 1 & 2, run only the enterprise release phase.
"""
flags: list[str] = []
if no_edit:
flags.append("--no-edit")
if skip_enterprise:
flags.append("--skip-enterprise")
flag_suffix = (" " + " ".join(flags)) if flags else ""
enterprise_hint = (
""
if skip_enterprise
else f"\n\nThen release enterprise:\n\n"
f" devtools release {version} --skip-to-enterprise"
)
check_gh_installed()
if skip_enterprise and skip_to_enterprise:
console.print(
"[red]Error:[/red] Cannot use both --skip-enterprise "
"and --skip-to-enterprise"
)
sys.exit(1)
if not skip_enterprise or skip_to_enterprise:
missing: list[str] = []
if not _ENTERPRISE_REPO:
missing.append("ENTERPRISE_REPO")
if not _ENTERPRISE_VERSION_DIRS:
missing.append("ENTERPRISE_VERSION_DIRS")
if not _ENTERPRISE_CREWAI_DEP_PATH:
missing.append("ENTERPRISE_CREWAI_DEP_PATH")
if missing:
console.print(
f"[red]Error:[/red] Missing required environment variable(s): "
f"{', '.join(missing)}\n"
f"Set them or pass --skip-enterprise to skip the enterprise release."
)
sys.exit(1)
cwd = Path.cwd()
lib_dir = cwd / "lib"
is_prerelease = _is_prerelease(version)
if skip_to_enterprise:
try:
_release_enterprise(version, is_prerelease, dry_run)
except BaseException as e:
_print_release_error(e)
_resume_hint(
f"Fix the issue, then re-run:\n\n"
f" devtools release {version} --skip-to-enterprise"
)
sys.exit(1)
console.print(
f"\n[green]✓[/green] Enterprise release [bold]{version}[/bold] complete!"
)
return
if not dry_run:
console.print("Checking git status...")
check_git_clean()
console.print("[green]✓[/green] Working directory is clean")
else:
console.print("[dim][DRY RUN][/dim] Would check git status")
packages = get_packages(lib_dir)
console.print(f"\nFound {len(packages)} package(s) to update:")
for pkg in packages:
console.print(f" - {pkg.name}")
console.print(f"\n[bold cyan]Phase 1: Bumping versions to {version}[/bold cyan]")
try:
check_gh_installed()
if skip_enterprise and skip_to_enterprise:
console.print(
"[red]Error:[/red] Cannot use both --skip-enterprise "
"and --skip-to-enterprise"
)
sys.exit(1)
if not skip_enterprise or skip_to_enterprise:
missing: list[str] = []
if not _ENTERPRISE_REPO:
missing.append("ENTERPRISE_REPO")
if not _ENTERPRISE_VERSION_DIRS:
missing.append("ENTERPRISE_VERSION_DIRS")
if not _ENTERPRISE_CREWAI_DEP_PATH:
missing.append("ENTERPRISE_CREWAI_DEP_PATH")
if missing:
console.print(
f"[red]Error:[/red] Missing required environment variable(s): "
f"{', '.join(missing)}\n"
f"Set them or pass --skip-enterprise to skip the enterprise release."
)
sys.exit(1)
cwd = Path.cwd()
lib_dir = cwd / "lib"
is_prerelease = _is_prerelease(version)
if skip_to_enterprise:
_release_enterprise(version, is_prerelease, dry_run)
console.print(
f"\n[green]✓[/green] Enterprise release [bold]{version}[/bold] complete!"
)
return
if not dry_run:
console.print("Checking git status...")
check_git_clean()
console.print("[green]✓[/green] Working directory is clean")
else:
console.print("[dim][DRY RUN][/dim] Would check git status")
packages = get_packages(lib_dir)
console.print(f"\nFound {len(packages)} package(s) to update:")
for pkg in packages:
console.print(f" - {pkg.name}")
# --- Phase 1: Bump versions ---
console.print(
f"\n[bold cyan]Phase 1: Bumping versions to {version}[/bold cyan]"
)
_update_all_versions(cwd, lib_dir, version, packages, dry_run)
branch_name = f"feat/bump-version-{version}"
@@ -1949,17 +1930,12 @@ def release(
console.print(
"[dim][DRY RUN][/dim] Would push branch, create PR, and wait for merge"
)
except BaseException as e:
_print_release_error(e)
_resume_hint(
f"Phase 1 failed. Fix the issue, then re-run:\n\n"
f" devtools release {version}{flag_suffix}"
# --- Phase 2: Tag and release ---
console.print(
f"\n[bold cyan]Phase 2: Tagging and releasing {version}[/bold cyan]"
)
sys.exit(1)
console.print(f"\n[bold cyan]Phase 2: Tagging and releasing {version}[/bold cyan]")
try:
tag_name = version
if not dry_run:
@@ -1986,57 +1962,22 @@ def release(
if not dry_run:
_create_tag_and_release(tag_name, release_notes, is_prerelease)
except BaseException as e:
_print_release_error(e)
_resume_hint(
"Phase 2 failed before PyPI publish. The bump PR is already merged.\n"
"Fix the issue, then resume with:\n\n"
" devtools tag"
f"\n\nAfter tagging, publish to PyPI and update deployment test:\n\n"
f" gh workflow run publish.yml -f release_tag={version}"
f"{enterprise_hint}"
)
sys.exit(1)
try:
if not dry_run:
_trigger_pypi_publish(tag_name, wait=True)
except BaseException as e:
_print_release_error(e)
_resume_hint(
f"Phase 2 failed at PyPI publish. Tag and GitHub release already exist.\n"
f"Retry PyPI publish manually:\n\n"
f" gh workflow run publish.yml -f release_tag={version}"
f"{enterprise_hint}"
)
sys.exit(1)
try:
if not dry_run:
_update_deployment_test_repo(version, is_prerelease)
except BaseException as e:
_print_release_error(e)
_resume_hint(
f"Phase 2 failed updating deployment test repo. "
f"Tag, release, and PyPI are done.\n"
f"Fix the issue and update {_DEPLOYMENT_TEST_REPO} manually."
f"{enterprise_hint}"
)
sys.exit(1)
if not skip_enterprise:
try:
if not skip_enterprise:
_release_enterprise(version, is_prerelease, dry_run)
except BaseException as e:
_print_release_error(e)
_resume_hint(
f"Phase 3 (enterprise) failed. Phases 1 & 2 completed successfully.\n"
f"Fix the issue, then resume:\n\n"
f" devtools release {version} --skip-to-enterprise"
)
sys.exit(1)
console.print(f"\n[green]✓[/green] Release [bold]{version}[/bold] complete!")
console.print(f"\n[green]✓[/green] Release [bold]{version}[/bold] complete!")
except subprocess.CalledProcessError as e:
console.print(f"[red]Error running command:[/red] {e}")
if e.stderr:
console.print(e.stderr)
sys.exit(1)
except Exception as e:
console.print(f"[red]Error:[/red] {e}")
sys.exit(1)
cli.add_command(bump)

View File

@@ -12,7 +12,7 @@ dev = [
"mypy==1.19.1",
"pre-commit==4.5.1",
"bandit==1.9.2",
"pytest==9.0.3",
"pytest==8.4.2",
"pytest-asyncio==1.3.0",
"pytest-subprocess==1.5.3",
"vcrpy==7.0.0", # pinned, less versions break pytest-recording
@@ -20,7 +20,7 @@ dev = [
"pytest-randomly==4.0.1",
"pytest-timeout==2.4.0",
"pytest-xdist==3.8.0",
"pytest-split==0.11.0",
"pytest-split==0.10.0",
"types-requests~=2.31.0.6",
"types-pyyaml==6.0.*",
"types-regex==2026.1.15.*",
@@ -30,7 +30,6 @@ dev = [
"types-pymysql==1.1.0.20250916",
"types-aiofiles~=25.1.0",
"commitizen>=4.13.9",
"pip-audit==2.9.0",
]
@@ -162,7 +161,7 @@ info = "Commits must follow Conventional Commits 1.0.0."
[tool.uv]
exclude-newer = "3 days"
exclude-newer = "2026-04-10" # pinned for CVE-2026-39892; restore to "3 days" after 2026-04-11
# composio-core pins rich<14 but textual requires rich>=14.
# onnxruntime 1.24+ dropped Python 3.10 wheels; cap it so qdrant[fastembed] resolves on 3.10.
@@ -170,8 +169,6 @@ exclude-newer = "3 days"
# langchain-core <1.2.28 has GHSA-926x-3r5x-gfhw (incomplete f-string validation).
# transformers 4.57.6 has CVE-2026-1839; force 5.4+ (docling 2.84 allows huggingface-hub>=1).
# cryptography 46.0.6 has CVE-2026-39892; force 46.0.7+.
# pypdf <6.10.0 has CVE-2026-40260; force 6.10.0+.
# uv <0.11.6 has GHSA-pjjw-68hj-v9mw; force 0.11.6+.
override-dependencies = [
"rich>=13.7.1",
"onnxruntime<1.24; python_version < '3.11'",
@@ -180,8 +177,6 @@ override-dependencies = [
"urllib3>=2.6.3",
"transformers>=5.4.0; python_version >= '3.10'",
"cryptography>=46.0.7",
"pypdf>=6.10.0,<7",
"uv>=0.11.6,<1",
]
[tool.uv.workspace]

uv.lock generated (538 lines changed)
View File

@@ -13,8 +13,7 @@ resolution-markers = [
]
[options]
exclude-newer = "2026-04-10T18:30:59.748668Z"
exclude-newer-span = "P3D"
exclude-newer = "2026-04-11T07:00:00Z"
[manifest]
members = [
@@ -28,11 +27,9 @@ overrides = [
{ name = "langchain-core", specifier = ">=1.2.28,<2" },
{ name = "onnxruntime", marker = "python_full_version < '3.11'", specifier = "<1.24" },
{ name = "pillow", specifier = ">=12.1.1" },
{ name = "pypdf", specifier = ">=6.10.0,<7" },
{ name = "rich", specifier = ">=13.7.1" },
{ name = "transformers", marker = "python_full_version >= '3.10'", specifier = ">=5.4.0" },
{ name = "urllib3", specifier = ">=2.6.3" },
{ name = "uv", specifier = ">=0.11.6,<1" },
]
[manifest.dependency-groups]
@@ -41,13 +38,12 @@ dev = [
{ name = "boto3-stubs", extras = ["bedrock-runtime"], specifier = "==1.42.40" },
{ name = "commitizen", specifier = ">=4.13.9" },
{ name = "mypy", specifier = "==1.19.1" },
{ name = "pip-audit", specifier = "==2.9.0" },
{ name = "pre-commit", specifier = "==4.5.1" },
{ name = "pytest", specifier = "==9.0.3" },
{ name = "pytest", specifier = "==8.4.2" },
{ name = "pytest-asyncio", specifier = "==1.3.0" },
{ name = "pytest-randomly", specifier = "==4.0.1" },
{ name = "pytest-recording", specifier = "==0.13.4" },
{ name = "pytest-split", specifier = "==0.11.0" },
{ name = "pytest-split", specifier = "==0.10.0" },
{ name = "pytest-subprocess", specifier = "==1.5.3" },
{ name = "pytest-timeout", specifier = "==2.4.0" },
{ name = "pytest-xdist", specifier = "==3.8.0" },
@@ -64,7 +60,7 @@ dev = [
[[package]]
name = "a2a-sdk"
version = "0.3.26"
version = "0.3.25"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "google-api-core" },
@@ -73,9 +69,9 @@ dependencies = [
{ name = "protobuf" },
{ name = "pydantic" },
]
sdist = { url = "https://files.pythonhosted.org/packages/be/97/a6840e01795b182ce751ca165430d46459927cde9bfab838087cbb24aef7/a2a_sdk-0.3.26.tar.gz", hash = "sha256:44068e2d037afbb07ab899267439e9bc7eaa7ac2af94f1e8b239933c993ad52d", size = 274598, upload-time = "2026-04-09T15:21:13.902Z" }
sdist = { url = "https://files.pythonhosted.org/packages/55/83/3c99b276d09656cce039464509f05bf385e5600d6dc046a131bbcf686930/a2a_sdk-0.3.25.tar.gz", hash = "sha256:afda85bab8d6af0c5d15e82f326c94190f6be8a901ce562d045a338b7127242f", size = 270638, upload-time = "2026-03-10T13:08:46.417Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/dd/d5/51f4ee1bf3b736add42a542d3c8a3fd3fa85f3d36c17972127defc46c26f/a2a_sdk-0.3.26-py3-none-any.whl", hash = "sha256:754e0573f6d33b225c1d8d51f640efa69cbbed7bdfb06ce9c3540ea9f58d4a91", size = 151016, upload-time = "2026-04-09T15:21:12.35Z" },
{ url = "https://files.pythonhosted.org/packages/bd/f9/6a62520b7ecb945188a6e1192275f4732ff9341cd4629bc975a6c146aeab/a2a_sdk-0.3.25-py3-none-any.whl", hash = "sha256:2fce38faea82eb0b6f9f9c2bcf761b0d78612c80ef0e599b50d566db1b2654b5", size = 149609, upload-time = "2026-03-10T13:08:44.7Z" },
]
[[package]]
@@ -99,7 +95,7 @@ wheels = [
[[package]]
name = "aiobotocore"
version = "3.4.0"
version = "2.25.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "aiohttp" },
@@ -108,12 +104,11 @@ dependencies = [
{ name = "jmespath" },
{ name = "multidict" },
{ name = "python-dateutil" },
{ name = "typing-extensions", marker = "python_full_version < '3.11'" },
{ name = "wrapt" },
]
sdist = { url = "https://files.pythonhosted.org/packages/b8/50/a48ed11b15f926ce3dbb33e7fb0f25af17dbb99bcb7ae3b30c763723eca7/aiobotocore-3.4.0.tar.gz", hash = "sha256:a918b5cb903f81feba7e26835aed4b5e6bb2d0149d7f42bb2dd7d8089e3d9000", size = 122360, upload-time = "2026-04-07T06:12:24.884Z" }
sdist = { url = "https://files.pythonhosted.org/packages/52/48/cf3c88c5e3fecdeed824f97a8a98a9fc0d7ef33e603f8f22c2fd32b9ef09/aiobotocore-2.25.2.tar.gz", hash = "sha256:ae0a512b34127097910b7af60752956254099ae54402a84c2021830768f92cda", size = 120585, upload-time = "2025-11-11T18:51:28.056Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/df/d8/ce9386e6d76ea79e61dee15e62aa48cff6be69e89246b0ac4a11857cb02c/aiobotocore-3.4.0-py3-none-any.whl", hash = "sha256:26290eb6830ea92d8a6f5f90b56e9f5cedd6d126074d5db63b195e281d982465", size = 88018, upload-time = "2026-04-07T06:12:22.684Z" },
{ url = "https://files.pythonhosted.org/packages/8e/ad/a2f3964aa37da5a4c94c1e5f3934d6ac1333f991f675fcf08a618397a413/aiobotocore-2.25.2-py3-none-any.whl", hash = "sha256:0cec45c6ba7627dd5e5460337291c86ac38c3b512ec4054ce76407d0f7f2a48f", size = 86048, upload-time = "2025-11-11T18:51:26.139Z" },
]
[[package]]
@@ -617,27 +612,18 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/36/b7/a5cc566901af27314408b95701f8e1d9c286b0aecfa50fc76c53d73efa6f/bedrock_agentcore-1.3.2-py3-none-any.whl", hash = "sha256:3a4e7122f777916f8bd74b42f29eb881415e37fda784a5ff8fab3c813b921706", size = 121703, upload-time = "2026-02-23T20:52:55.038Z" },
]
[[package]]
name = "boolean-py"
version = "5.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/c4/cf/85379f13b76f3a69bca86b60237978af17d6aa0bc5998978c3b8cf05abb2/boolean_py-5.0.tar.gz", hash = "sha256:60cbc4bad079753721d32649545505362c754e121570ada4658b852a3a318d95", size = 37047, upload-time = "2025-04-03T10:39:49.734Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e5/ca/78d423b324b8d77900030fa59c4aa9054261ef0925631cd2501dd015b7b7/boolean_py-5.0-py3-none-any.whl", hash = "sha256:ef28a70bd43115208441b53a045d1549e2f0ec6e3d08a9d142cbc41c1938e8d9", size = 26577, upload-time = "2025-04-03T10:39:48.449Z" },
]
[[package]]
name = "boto3"
version = "1.42.84"
version = "1.40.70"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "botocore" },
{ name = "jmespath" },
{ name = "s3transfer" },
]
sdist = { url = "https://files.pythonhosted.org/packages/88/89/2d647bd717da55a8cc68602b197f53a5fa36fb95a2f9e76c4aff11a9cfd1/boto3-1.42.84.tar.gz", hash = "sha256:6a84b3293a5d8b3adf827a54588e7dcffcf0a85410d7dadca615544f97d27579", size = 112816, upload-time = "2026-04-06T19:39:07.585Z" }
sdist = { url = "https://files.pythonhosted.org/packages/37/12/d5ac34e0536e1914dde28245f014a635056dde0427f6efa09f104d7999f4/boto3-1.40.70.tar.gz", hash = "sha256:191443707b391232ed15676bf6bba7e53caec1e71aafa12ccad2e825c5ee15cc", size = 111638, upload-time = "2025-11-10T20:29:15.199Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/2d/31/cdf4326841613d1d181a77b3038a988800fb3373ca50de1639fba9fa87de/boto3-1.42.84-py3-none-any.whl", hash = "sha256:4d03ad3211832484037337292586f71f48707141288d9ac23049c04204f4ab03", size = 140555, upload-time = "2026-04-06T19:39:06.009Z" },
{ url = "https://files.pythonhosted.org/packages/f3/cf/e24d08b37cd318754a8e94906c8b34b88676899aad1907ff6942311f13c4/boto3-1.40.70-py3-none-any.whl", hash = "sha256:e8c2f4f4cb36297270f1023ebe5b100333e0e88ab6457a9687d80143d2e15bf9", size = 139358, upload-time = "2025-11-10T20:29:13.512Z" },
]
[[package]]
@@ -661,16 +647,16 @@ bedrock-runtime = [
[[package]]
name = "botocore"
version = "1.42.84"
version = "1.40.70"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "jmespath" },
{ name = "python-dateutil" },
{ name = "urllib3" },
]
sdist = { url = "https://files.pythonhosted.org/packages/b4/b7/1c03423843fb0d1795b686511c00ee63fed1234c2400f469aeedfd42212f/botocore-1.42.84.tar.gz", hash = "sha256:234064604c80d9272a5e9f6b3566d260bcaa053a5e05246db90d7eca1c2cf44b", size = 15148615, upload-time = "2026-04-06T19:38:56.673Z" }
sdist = { url = "https://files.pythonhosted.org/packages/35/c1/8c4c199ae1663feee579a15861e34f10b29da11ae6ea0ad7b6a847ef3823/botocore-1.40.70.tar.gz", hash = "sha256:61b1f2cecd54d1b28a081116fa113b97bf4e17da57c62ae2c2751fe4c528af1f", size = 14444592, upload-time = "2025-11-10T20:29:04.046Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e3/37/0c0c90361c8a1b9e6c75222ca24ae12996a298c0e18822a72ab229c37207/botocore-1.42.84-py3-none-any.whl", hash = "sha256:15f3fe07dfa6545e46a60c4b049fe2bdf63803c595ae4a4eec90e8f8172764f3", size = 14827061, upload-time = "2026-04-06T19:38:53.613Z" },
{ url = "https://files.pythonhosted.org/packages/55/d2/507fd0ee4dd574d2bdbdeac5df83f39d2cae1ffe97d4622cca6f6bab39f1/botocore-1.40.70-py3-none-any.whl", hash = "sha256:4a394ad25f5d9f1ef0bed610365744523eeb5c22de6862ab25d8c93f9f6d295c", size = 14106829, upload-time = "2025-11-10T20:29:01.101Z" },
]
[[package]]
@@ -718,24 +704,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/4a/57/3b7d4dd193ade4641c865bc2b93aeeb71162e81fc348b8dad020215601ed/build-1.4.2-py3-none-any.whl", hash = "sha256:7a4d8651ea877cb2a89458b1b198f2e69f536c95e89129dbf5d448045d60db88", size = 24643, upload-time = "2026-03-25T14:20:26.568Z" },
]
[[package]]
name = "cachecontrol"
version = "0.14.4"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "msgpack" },
{ name = "requests" },
]
sdist = { url = "https://files.pythonhosted.org/packages/2d/f6/c972b32d80760fb79d6b9eeb0b3010a46b89c0b23cf6329417ff7886cd22/cachecontrol-0.14.4.tar.gz", hash = "sha256:e6220afafa4c22a47dd0badb319f84475d79108100d04e26e8542ef7d3ab05a1", size = 16150, upload-time = "2025-11-14T04:32:13.138Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ef/79/c45f2d53efe6ada1110cf6f9fca095e4ff47a0454444aefdde6ac4789179/cachecontrol-0.14.4-py3-none-any.whl", hash = "sha256:b7ac014ff72ee199b5f8af1de29d60239954f223e948196fa3d84adaffc71d2b", size = 22247, upload-time = "2025-11-14T04:32:11.733Z" },
]
[package.optional-dependencies]
filecache = [
{ name = "filelock" },
]
[[package]]
name = "cachetools"
version = "7.0.5"
@@ -1328,15 +1296,15 @@ watson = [
[package.metadata]
requires-dist = [
{ name = "a2a-sdk", marker = "extra == 'a2a'", specifier = "~=0.3.10" },
{ name = "aiobotocore", marker = "extra == 'aws'", specifier = "~=3.4.0" },
{ name = "aiobotocore", marker = "extra == 'aws'", specifier = "~=2.25.2" },
{ name = "aiocache", extras = ["memcached", "redis"], marker = "extra == 'a2a'", specifier = "~=0.12.3" },
{ name = "aiofiles", specifier = "~=24.1.0" },
{ name = "aiosqlite", specifier = "~=0.21.0" },
{ name = "anthropic", marker = "extra == 'anthropic'", specifier = "~=0.73.0" },
{ name = "appdirs", specifier = "~=1.4.4" },
{ name = "azure-ai-inference", marker = "extra == 'azure-ai-inference'", specifier = "~=1.0.0b9" },
{ name = "boto3", marker = "extra == 'aws'", specifier = "~=1.42.79" },
{ name = "boto3", marker = "extra == 'bedrock'", specifier = "~=1.42.79" },
{ name = "boto3", marker = "extra == 'aws'", specifier = "~=1.40.38" },
{ name = "boto3", marker = "extra == 'bedrock'", specifier = "~=1.40.45" },
{ name = "chromadb", specifier = "~=1.1.0" },
{ name = "click", specifier = "~=8.1.7" },
{ name = "crewai-files", marker = "extra == 'file-processing'", editable = "lib/crewai-files" },
@@ -1355,7 +1323,7 @@ requires-dist = [
{ name = "litellm", marker = "extra == 'litellm'", specifier = "~=1.83.0" },
{ name = "mcp", specifier = "~=1.26.0" },
{ name = "mem0ai", marker = "extra == 'mem0'", specifier = "~=0.1.94" },
{ name = "openai", specifier = ">=2.0.0,<3" },
{ name = "openai", specifier = ">=1.83.0,<3" },
{ name = "openpyxl", specifier = "~=3.1.5" },
{ name = "openpyxl", marker = "extra == 'openpyxl'", specifier = "~=3.1.5" },
{ name = "opentelemetry-api", specifier = "~=1.34.0" },
@@ -1377,7 +1345,7 @@ requires-dist = [
{ name = "tokenizers", specifier = ">=0.21,<1" },
{ name = "tomli", specifier = "~=2.0.2" },
{ name = "tomli-w", specifier = "~=1.1.0" },
{ name = "uv", specifier = "~=0.11.6" },
{ name = "uv", specifier = "~=0.9.13" },
{ name = "voyageai", marker = "extra == 'voyageai'", specifier = "~=0.3.5" },
]
provides-extras = ["a2a", "anthropic", "aws", "azure-ai-inference", "bedrock", "docling", "embeddings", "file-processing", "google-genai", "litellm", "mem0", "openpyxl", "pandas", "qdrant", "qdrant-edge", "tools", "voyageai", "watson"]
@@ -1423,7 +1391,7 @@ requires-dist = [
{ name = "aiofiles", specifier = "~=24.1.0" },
{ name = "av", specifier = "~=13.0.0" },
{ name = "pillow", specifier = "~=12.1.1" },
{ name = "pypdf", specifier = "~=6.10.0" },
{ name = "pypdf", specifier = "~=6.9.1" },
{ name = "python-magic", specifier = ">=0.4.27" },
{ name = "tinytag", specifier = "~=2.2.1" },
]
@@ -1595,7 +1563,7 @@ requires-dist = [
{ name = "python-docx", marker = "extra == 'rag'", specifier = ">=1.1.0" },
{ name = "pytube", specifier = "~=15.0.0" },
{ name = "qdrant-client", marker = "extra == 'qdrant-client'", specifier = ">=1.12.1" },
{ name = "requests", specifier = ">=2.33.0,<3" },
{ name = "requests", specifier = "~=2.32.5" },
{ name = "scrapegraph-py", marker = "extra == 'scrapegraph-py'", specifier = ">=1.9.0" },
{ name = "scrapfly-sdk", marker = "extra == 'scrapfly-sdk'", specifier = ">=0.8.19" },
{ name = "selenium", marker = "extra == 'selenium'", specifier = ">=4.27.1" },
@@ -1739,21 +1707,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/e7/05/c19819d5e3d95294a6f5947fb9b9629efb316b96de511b418c53d245aae6/cycler-0.12.1-py3-none-any.whl", hash = "sha256:85cef7cff222d8644161529808465972e51340599459b8ac3ccbac5a854e0d30", size = 8321, upload-time = "2023-10-07T05:32:16.783Z" },
]
[[package]]
name = "cyclonedx-python-lib"
version = "9.1.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "license-expression" },
{ name = "packageurl-python" },
{ name = "py-serializable" },
{ name = "sortedcontainers" },
]
sdist = { url = "https://files.pythonhosted.org/packages/66/fc/abaad5482f7b59c9a0a9d8f354ce4ce23346d582a0d85730b559562bbeb4/cyclonedx_python_lib-9.1.0.tar.gz", hash = "sha256:86935f2c88a7b47a529b93c724dbd3e903bc573f6f8bd977628a7ca1b5dadea1", size = 1048735, upload-time = "2025-02-27T17:23:40.367Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/53/f1/f3be2e9820a2c26fa77622223e91f9c504e1581830930d477e06146073f4/cyclonedx_python_lib-9.1.0-py3-none-any.whl", hash = "sha256:55693fca8edaecc3363b24af14e82cc6e659eb1e8353e58b587c42652ce0fb52", size = 374968, upload-time = "2025-02-27T17:23:37.766Z" },
]
[[package]]
name = "databricks-sdk"
version = "0.102.0"
@@ -1913,7 +1866,7 @@ wheels = [
[[package]]
name = "docling-core"
version = "2.73.0"
version = "2.72.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "defusedxml" },
@@ -1928,9 +1881,9 @@ dependencies = [
{ name = "typer" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/4c/e3/b9c3b1a1ea62e5e03d9e844a5cff2f89b7a3e960725a862f009e8553ca3d/docling_core-2.73.0.tar.gz", hash = "sha256:33ffc2b2bf736ed0e079bba296081a26885f6cb08081c828d630ca85a51e22e0", size = 308895, upload-time = "2026-04-09T08:08:51.573Z" }
sdist = { url = "https://files.pythonhosted.org/packages/45/87/7b49ca0f4e39b051292694eb82e5ff3a7e6ae88a5bc11b8004747afb6e47/docling_core-2.72.0.tar.gz", hash = "sha256:981b789f7097c26b2fa84d0d28cdeaa58ddd8b49e277dce7e44b1b826b8f90f0", size = 304572, upload-time = "2026-04-07T12:35:55.736Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/7f/c3/08143b7e8fe1b9230ce15e54926859f8c40ec2622fb612f0b2ff13169696/docling_core-2.73.0-py3-none-any.whl", hash = "sha256:4366fab8f4422fbde090ed87d9b091bd25b3b37cdd284dc0b02c9a5e24caaa22", size = 271518, upload-time = "2026-04-09T08:08:49.838Z" },
{ url = "https://files.pythonhosted.org/packages/6a/e5/dfbcbfb3d258d5c44043cc1cd314d0447c8f08563ff8fa5a2f77d34eab31/docling_core-2.72.0-py3-none-any.whl", hash = "sha256:3592c35a423093c7fe087416a43de7db0bd1539148f2fa9ac775c41e4ec015a4", size = 269342, upload-time = "2026-04-07T12:35:54.06Z" },
]
[package.optional-dependencies]
@@ -2432,7 +2385,7 @@ wheels = [
[[package]]
name = "google-api-core"
version = "2.30.3"
version = "2.30.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "google-auth" },
@@ -2441,9 +2394,9 @@ dependencies = [
{ name = "protobuf" },
{ name = "requests" },
]
sdist = { url = "https://files.pythonhosted.org/packages/16/ce/502a57fb0ec752026d24df1280b162294b22a0afb98a326084f9a979138b/google_api_core-2.30.3.tar.gz", hash = "sha256:e601a37f148585319b26db36e219df68c5d07b6382cff2d580e83404e44d641b", size = 177001, upload-time = "2026-04-10T00:41:28.035Z" }
sdist = { url = "https://files.pythonhosted.org/packages/1a/2e/83ca41eb400eb228f9279ec14ed66f6475218b59af4c6daec2d5a509fe83/google_api_core-2.30.2.tar.gz", hash = "sha256:9a8113e1a88bdc09a7ff629707f2214d98d61c7f6ceb0ea38c42a095d02dc0f9", size = 176862, upload-time = "2026-04-02T21:23:44.876Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/03/15/e56f351cf6ef1cfea58e6ac226a7318ed1deb2218c4b3cc9bd9e4b786c5a/google_api_core-2.30.3-py3-none-any.whl", hash = "sha256:a85761ba72c444dad5d611c2220633480b2b6be2521eca69cca2dbb3ffd6bfe8", size = 173274, upload-time = "2026-04-09T22:57:16.198Z" },
{ url = "https://files.pythonhosted.org/packages/84/e1/ebd5100cbb202e561c0c8b59e485ef3bd63fa9beb610f3fdcaea443f0288/google_api_core-2.30.2-py3-none-any.whl", hash = "sha256:a4c226766d6af2580577db1f1a51bf53cd262f722b49731ce7414c43068a9594", size = 173236, upload-time = "2026-04-02T21:23:06.395Z" },
]
[package.optional-dependencies]
@@ -2455,15 +2408,15 @@ grpc = [
[[package]]
name = "google-auth"
version = "2.49.2"
version = "2.49.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "cryptography" },
{ name = "pyasn1-modules" },
]
sdist = { url = "https://files.pythonhosted.org/packages/c6/fc/e925290a1ad95c975c459e2df070fac2b90954e13a0370ac505dff78cb99/google_auth-2.49.2.tar.gz", hash = "sha256:c1ae38500e73065dcae57355adb6278cf8b5c8e391994ae9cbadbcb9631ab409", size = 333958, upload-time = "2026-04-10T00:41:21.888Z" }
sdist = { url = "https://files.pythonhosted.org/packages/ea/80/6a696a07d3d3b0a92488933532f03dbefa4a24ab80fb231395b9a2a1be77/google_auth-2.49.1.tar.gz", hash = "sha256:16d40da1c3c5a0533f57d268fe72e0ebb0ae1cc3b567024122651c045d879b64", size = 333825, upload-time = "2026-03-12T19:30:58.135Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/73/76/d241a5c927433420507215df6cac1b1fa4ac0ba7a794df42a84326c68da8/google_auth-2.49.2-py3-none-any.whl", hash = "sha256:c2720924dfc82dedb962c9f52cabb2ab16714fd0a6a707e40561d217574ed6d5", size = 240638, upload-time = "2026-04-10T00:41:14.501Z" },
{ url = "https://files.pythonhosted.org/packages/e9/eb/c6c2478d8a8d633460be40e2a8a6f8f429171997a35a96f81d3b680dec83/google_auth-2.49.1-py3-none-any.whl", hash = "sha256:195ebe3dca18eddd1b3db5edc5189b76c13e96f29e73043b923ebcf3f1a860f7", size = 240737, upload-time = "2026-03-12T19:30:53.159Z" },
]
[package.optional-dependencies]
@@ -2870,7 +2823,7 @@ wheels = [
[[package]]
name = "huggingface-hub"
version = "1.10.1"
version = "1.9.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "filelock" },
@@ -2883,9 +2836,9 @@ dependencies = [
{ name = "typer" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/e4/28/baf5d745559503ce8d28cf5bc9551f5ac59158eafd7b6a6afff0bcdb0f50/huggingface_hub-1.10.1.tar.gz", hash = "sha256:696c53cf9c2ac9befbfb5dd41d05392a031c69fc6930d1ed9671debd405b6fff", size = 758094, upload-time = "2026-04-09T15:01:18.928Z" }
sdist = { url = "https://files.pythonhosted.org/packages/cf/65/fb800d327bf25bf31b798dd08935d326d064ecb9b359059fecd91b3a98e8/huggingface_hub-1.9.2.tar.gz", hash = "sha256:8d09d080a186bd950a361bfc04b862dfb04d6a2b41d48e9ba1b37507cfd3f1e1", size = 750284, upload-time = "2026-04-08T08:43:11.127Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/83/8c/c7a33f3efaa8d6a5bc40e012e5ecc2d72c2e6124550ca9085fe0ceed9993/huggingface_hub-1.10.1-py3-none-any.whl", hash = "sha256:6b981107a62fbe68c74374418983399c632e35786dcd14642a9f2972633c8b5a", size = 642630, upload-time = "2026-04-09T15:01:17.35Z" },
{ url = "https://files.pythonhosted.org/packages/57/d4/e33bf0b362810a9b96c5923e38908950d58ecb512db42e3730320c7f4a3a/huggingface_hub-1.9.2-py3-none-any.whl", hash = "sha256:e1e62ce237d4fbeca9f970aeb15176fbd503e04c25577bfd22f44aa7aa2b5243", size = 637349, upload-time = "2026-04-08T08:43:09.114Z" },
]
[[package]]
@@ -3595,7 +3548,7 @@ sdist = { url = "https://files.pythonhosted.org/packages/0e/72/a3add0e4eec4eb9e2
[[package]]
name = "langsmith"
version = "0.7.30"
version = "0.7.29"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "httpx" },
@@ -3608,9 +3561,9 @@ dependencies = [
{ name = "xxhash" },
{ name = "zstandard" },
]
sdist = { url = "https://files.pythonhosted.org/packages/46/e7/d27d952ce9824d684a3bb500a06541a2d55734bc4d849cdfcca2dfd4d93a/langsmith-0.7.30.tar.gz", hash = "sha256:d9df7ba5e42f818b63bda78776c8f2fc853388be3ae77b117e5d183a149321a2", size = 1106040, upload-time = "2026-04-09T21:12:01.892Z" }
sdist = { url = "https://files.pythonhosted.org/packages/eb/b3/b9b2218483400c9c0f84ea781ec4fc92a9afb51c3f16d2b6369356990d47/langsmith-0.7.29.tar.gz", hash = "sha256:bcec464be00b35cdf0ed0087ef9b1f40889fe1017066f11136a02aa0276cedf5", size = 1094512, upload-time = "2026-04-09T03:17:12.961Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/37/19/96250cf58070c5563446651b03bb76c2eb5afbf08e754840ab639532d8c6/langsmith-0.7.30-py3-none-any.whl", hash = "sha256:43dd9f8d290e4d406606d6cc0bd62f5d1050963f05fe0ab6ffe50acf41f2f55a", size = 372682, upload-time = "2026-04-09T21:12:00.481Z" },
{ url = "https://files.pythonhosted.org/packages/aa/11/8189be47b5d5a64ecd7e19c81ad3fd9cd9f0bf6778abc5ff177db90ebb3d/langsmith-0.7.29-py3-none-any.whl", hash = "sha256:ec61cdca1f2e2add48742f97a4ee1d6894c968ef3d5a50122289dac56170978c", size = 367655, upload-time = "2026-04-09T03:17:10.944Z" },
]
[[package]]
@@ -3624,73 +3577,61 @@ wheels = [
[[package]]
name = "librt"
version = "0.9.0"
version = "0.8.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/eb/6b/3d5c13fb3e3c4f43206c8f9dfed13778c2ed4f000bacaa0b7ce3c402a265/librt-0.9.0.tar.gz", hash = "sha256:a0951822531e7aee6e0dfb556b30d5ee36bbe234faf60c20a16c01be3530869d", size = 184368, upload-time = "2026-04-09T16:06:26.173Z" }
sdist = { url = "https://files.pythonhosted.org/packages/56/9c/b4b0c54d84da4a94b37bd44151e46d5e583c9534c7e02250b961b1b6d8a8/librt-0.8.1.tar.gz", hash = "sha256:be46a14693955b3bd96014ccbdb8339ee8c9346fbe11c1b78901b55125f14c73", size = 177471, upload-time = "2026-02-17T16:13:06.101Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/f3/4a/c64265d71b84030174ff3ac2cd16d8b664072afab8c41fccd8e2ee5a6f8d/librt-0.9.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:2f8e12706dcb8ff6b3ed57514a19e45c49ad00bcd423e87b2b2e4b5f64578443", size = 67529, upload-time = "2026-04-09T16:04:27.373Z" },
{ url = "https://files.pythonhosted.org/packages/23/b1/30ca0b3a8bdac209a00145c66cf42e5e7da2cc056ffc6ebc5c7b430ddd34/librt-0.9.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:4e3dda8345307fd7306db0ed0cb109a63a2c85ba780eb9dc2d09b2049a931f9c", size = 70248, upload-time = "2026-04-09T16:04:28.758Z" },
{ url = "https://files.pythonhosted.org/packages/fa/fc/c6018dc181478d6ac5aa24a5846b8185101eb90894346db239eb3ea53209/librt-0.9.0-cp310-cp310-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:de7dac64e3eb832ffc7b840eb8f52f76420cde1b845be51b2a0f6b870890645e", size = 202184, upload-time = "2026-04-09T16:04:29.893Z" },
{ url = "https://files.pythonhosted.org/packages/bf/58/d69629f002203370ef41ea69ff71c49a2c618aec39b226ff49986ecd8623/librt-0.9.0-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:22a904cbdb678f7cb348c90d543d3c52f581663d687992fee47fd566dcbf5285", size = 212926, upload-time = "2026-04-09T16:04:31.126Z" },
{ url = "https://files.pythonhosted.org/packages/cc/55/01d859f57824e42bd02465c77bec31fa5ef9d8c2bcee702ccf8ef1b9f508/librt-0.9.0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:224b9727eb8bc188bc3bcf29d969dba0cd61b01d9bac80c41575520cc4baabb2", size = 225664, upload-time = "2026-04-09T16:04:32.352Z" },
{ url = "https://files.pythonhosted.org/packages/9b/02/32f63ad0ef085a94a70315291efe1151a48b9947af12261882f8445b2a30/librt-0.9.0-cp310-cp310-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:e94cbc6ad9a6aeea46d775cbb11f361022f778a9cc8cc90af653d3a594b057ce", size = 219534, upload-time = "2026-04-09T16:04:33.667Z" },
{ url = "https://files.pythonhosted.org/packages/6a/5a/9d77111a183c885acf3b3b6e4c00f5b5b07b5817028226499a55f1fedc59/librt-0.9.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:7bc30ad339f4e1a01d4917d645e522a0bc0030644d8973f6346397c93ba1503f", size = 227322, upload-time = "2026-04-09T16:04:34.945Z" },
{ url = "https://files.pythonhosted.org/packages/d5/e7/05d700c93063753e12ab230b972002a3f8f3b9c95d8a980c2f646c8b6963/librt-0.9.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:56d65b583cf43b8cf4c8fbe1e1da20fa3076cc32a1149a141507af1062718236", size = 223407, upload-time = "2026-04-09T16:04:36.22Z" },
{ url = "https://files.pythonhosted.org/packages/c0/26/26c3124823c67c987456977c683da9a27cc874befc194ddcead5f9988425/librt-0.9.0-cp310-cp310-musllinux_1_2_riscv64.whl", hash = "sha256:0a1be03168b2691ba61927e299b352a6315189199ca18a57b733f86cb3cc8d38", size = 221302, upload-time = "2026-04-09T16:04:37.62Z" },
{ url = "https://files.pythonhosted.org/packages/50/2b/c7cc2be5cf4ff7b017d948a789256288cb33a517687ff1995e72a7eea79f/librt-0.9.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:63c12efcd160e1d14da11af0c46c0217473e1e0d2ae1acbccc83f561ea4c2a7b", size = 243893, upload-time = "2026-04-09T16:04:38.909Z" },
{ url = "https://files.pythonhosted.org/packages/62/d3/da553d37417a337d12660450535d5fd51373caffbedf6962173c87867246/librt-0.9.0-cp310-cp310-win32.whl", hash = "sha256:e9002e98dcb1c0a66723592520decd86238ddcef168b37ff6cfb559200b4b774", size = 55375, upload-time = "2026-04-09T16:04:40.148Z" },
{ url = "https://files.pythonhosted.org/packages/9b/5a/46fa357bab8311b6442a83471591f2f9e5b15ecc1d2121a43725e0c529b8/librt-0.9.0-cp310-cp310-win_amd64.whl", hash = "sha256:9fcb461fbf70654a52a7cc670e606f04449e2374c199b1825f754e16dacfedd8", size = 62581, upload-time = "2026-04-09T16:04:41.452Z" },
{ url = "https://files.pythonhosted.org/packages/e2/1e/2ec7afcebcf3efea593d13aee18bbcfdd3a243043d848ebf385055e9f636/librt-0.9.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:90904fac73c478f4b83f4ed96c99c8208b75e6f9a8a1910548f69a00f1eaa671", size = 67155, upload-time = "2026-04-09T16:04:42.933Z" },
{ url = "https://files.pythonhosted.org/packages/18/77/72b85afd4435268338ad4ec6231b3da8c77363f212a0227c1ff3b45e4d35/librt-0.9.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:789fff71757facc0738e8d89e3b84e4f0251c1c975e85e81b152cdaca927cc2d", size = 69916, upload-time = "2026-04-09T16:04:44.042Z" },
{ url = "https://files.pythonhosted.org/packages/27/fb/948ea0204fbe2e78add6d46b48330e58d39897e425560674aee302dca81c/librt-0.9.0-cp311-cp311-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:1bf465d1e5b0a27713862441f6467b5ab76385f4ecf8f1f3a44f8aa3c695b4b6", size = 199635, upload-time = "2026-04-09T16:04:45.5Z" },
{ url = "https://files.pythonhosted.org/packages/ac/cd/894a29e251b296a27957856804cfd21e93c194aa131de8bb8032021be07e/librt-0.9.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f819e0c6413e259a17a7c0d49f97f405abadd3c2a316a3b46c6440b7dbbedbb1", size = 211051, upload-time = "2026-04-09T16:04:47.016Z" },
{ url = "https://files.pythonhosted.org/packages/18/8f/dcaed0bc084a35f3721ff2d081158db569d2c57ea07d35623ddaca5cfc8e/librt-0.9.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:e0785c2fb4a81e1aece366aa3e2e039f4a4d7d21aaaded5227d7f3c703427882", size = 224031, upload-time = "2026-04-09T16:04:48.207Z" },
{ url = "https://files.pythonhosted.org/packages/03/44/88f6c1ed1132cd418601cc041fbd92fed28b3a09f39de81978e0822d13ff/librt-0.9.0-cp311-cp311-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:80b25c7b570a86c03b5da69e665809deb39265476e8e21d96a9328f9762f9990", size = 218069, upload-time = "2026-04-09T16:04:50.025Z" },
{ url = "https://files.pythonhosted.org/packages/a3/90/7d02e981c2db12188d82b4410ff3e35bfdb844b26aecd02233626f46af2b/librt-0.9.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d4d16b608a1c43d7e33142099a75cd93af482dadce0bf82421e91cad077157f4", size = 224857, upload-time = "2026-04-09T16:04:51.684Z" },
{ url = "https://files.pythonhosted.org/packages/ef/c3/c77e706b7215ca32e928d47535cf13dbc3d25f096f84ddf8fbc06693e229/librt-0.9.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:194fc1a32e1e21fe809d38b5faea66cc65eaa00217c8901fbdb99866938adbdb", size = 219865, upload-time = "2026-04-09T16:04:52.949Z" },
{ url = "https://files.pythonhosted.org/packages/52/d1/32b0c1a0eb8461c70c11656c46a29f760b7c7edf3c36d6f102470c17170f/librt-0.9.0-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:8c6bc1384d9738781cfd41d09ad7f6e8af13cfea2c75ece6bd6d2566cdea2076", size = 218451, upload-time = "2026-04-09T16:04:54.174Z" },
{ url = "https://files.pythonhosted.org/packages/74/d1/adfd0f9c44761b1d49b1bec66173389834c33ee2bd3c7fd2e2367f1942d4/librt-0.9.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:15cb151e52a044f06e54ac7f7b47adbfc89b5c8e2b63e1175a9d587c43e8942a", size = 241300, upload-time = "2026-04-09T16:04:55.452Z" },
{ url = "https://files.pythonhosted.org/packages/09/b0/9074b64407712f0003c27f5b1d7655d1438979155f049720e8a1abd9b1a1/librt-0.9.0-cp311-cp311-win32.whl", hash = "sha256:f100bfe2acf8a3689af9d0cc660d89f17286c9c795f9f18f7b62dd1a6b247ae6", size = 55668, upload-time = "2026-04-09T16:04:56.689Z" },
{ url = "https://files.pythonhosted.org/packages/24/19/40b77b77ce80b9389fb03971431b09b6b913911c38d412059e0b3e2a9ef2/librt-0.9.0-cp311-cp311-win_amd64.whl", hash = "sha256:0b73e4266307e51c95e09c0750b7ec383c561d2e97d58e473f6f6a209952fbb8", size = 62976, upload-time = "2026-04-09T16:04:57.733Z" },
{ url = "https://files.pythonhosted.org/packages/70/9d/9fa7a64041e29035cb8c575af5f0e3840be1b97b4c4d9061e0713f171849/librt-0.9.0-cp311-cp311-win_arm64.whl", hash = "sha256:bc5518873822d2faa8ebdd2c1a4d7c8ef47b01a058495ab7924cb65bdbf5fc9a", size = 53502, upload-time = "2026-04-09T16:04:58.806Z" },
{ url = "https://files.pythonhosted.org/packages/bf/90/89ddba8e1c20b0922783cd93ed8e64f34dc05ab59c38a9c7e313632e20ff/librt-0.9.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:9b3e3bc363f71bda1639a4ee593cb78f7fbfeacc73411ec0d4c92f00730010a4", size = 68332, upload-time = "2026-04-09T16:05:00.09Z" },
{ url = "https://files.pythonhosted.org/packages/a8/40/7aa4da1fb08bdeeb540cb07bfc8207cb32c5c41642f2594dbd0098a0662d/librt-0.9.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:0a09c2f5869649101738653a9b7ab70cf045a1105ac66cbb8f4055e61df78f2d", size = 70581, upload-time = "2026-04-09T16:05:01.213Z" },
{ url = "https://files.pythonhosted.org/packages/48/ac/73a2187e1031041e93b7e3a25aae37aa6f13b838c550f7e0f06f66766212/librt-0.9.0-cp312-cp312-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:5ca8e133d799c948db2ab1afc081c333a825b5540475164726dcbf73537e5c2f", size = 203984, upload-time = "2026-04-09T16:05:02.542Z" },
{ url = "https://files.pythonhosted.org/packages/5e/3d/23460d571e9cbddb405b017681df04c142fb1b04cbfce77c54b08e28b108/librt-0.9.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:603138ee838ee1583f1b960b62d5d0007845c5c423feb68e44648b1359014e27", size = 215762, upload-time = "2026-04-09T16:05:04.127Z" },
{ url = "https://files.pythonhosted.org/packages/de/1e/42dc7f8ab63e65b20640d058e63e97fd3e482c1edbda3570d813b4d0b927/librt-0.9.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:f4003f70c56a5addd6aa0897f200dd59afd3bf7bcd5b3cce46dd21f925743bc2", size = 230288, upload-time = "2026-04-09T16:05:05.883Z" },
{ url = "https://files.pythonhosted.org/packages/dc/08/ca812b6d8259ad9ece703397f8ad5c03af5b5fedfce64279693d3ce4087c/librt-0.9.0-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:78042f6facfd98ecb25e9829c7e37cce23363d9d7c83bc5f72702c5059eb082b", size = 224103, upload-time = "2026-04-09T16:05:07.148Z" },
{ url = "https://files.pythonhosted.org/packages/b6/3f/620490fb2fa66ffd44e7f900254bc110ebec8dac6c1b7514d64662570e6f/librt-0.9.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:a361c9434a64d70a7dbb771d1de302c0cc9f13c0bffe1cf7e642152814b35265", size = 232122, upload-time = "2026-04-09T16:05:08.386Z" },
{ url = "https://files.pythonhosted.org/packages/e9/83/12864700a1b6a8be458cf5d05db209b0d8e94ae281e7ec261dbe616597b4/librt-0.9.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:dd2c7e082b0b92e1baa4da28163a808672485617bc855cc22a2fd06978fa9084", size = 225045, upload-time = "2026-04-09T16:05:09.707Z" },
{ url = "https://files.pythonhosted.org/packages/fd/1b/845d339c29dc7dbc87a2e992a1ba8d28d25d0e0372f9a0a2ecebde298186/librt-0.9.0-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:7e6274fd33fc5b2a14d41c9119629d3ff395849d8bcbc80cf637d9e8d2034da8", size = 227372, upload-time = "2026-04-09T16:05:10.942Z" },
{ url = "https://files.pythonhosted.org/packages/8d/fe/277985610269d926a64c606f761d58d3db67b956dbbf40024921e95e7fcb/librt-0.9.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:5093043afb226ecfa1400120d1ebd4442b4f99977783e4f4f7248879009b227f", size = 248224, upload-time = "2026-04-09T16:05:12.254Z" },
{ url = "https://files.pythonhosted.org/packages/92/1b/ee486d244b8de6b8b5dbaefabe6bfdd4a72e08f6353edf7d16d27114da8d/librt-0.9.0-cp312-cp312-win32.whl", hash = "sha256:9edcc35d1cae9fd5320171b1a838c7da8a5c968af31e82ecc3dff30b4be0957f", size = 55986, upload-time = "2026-04-09T16:05:13.529Z" },
{ url = "https://files.pythonhosted.org/packages/89/7a/ba1737012308c17dc6d5516143b5dce9a2c7ba3474afd54e11f44a4d1ef3/librt-0.9.0-cp312-cp312-win_amd64.whl", hash = "sha256:3cc2917258e131ae5f958a4d872e07555b51cb7466a43433218061c74ef33745", size = 63260, upload-time = "2026-04-09T16:05:14.68Z" },
{ url = "https://files.pythonhosted.org/packages/36/e4/01752c113da15127f18f7bf11142f5640038f062407a611c059d0036c6aa/librt-0.9.0-cp312-cp312-win_arm64.whl", hash = "sha256:90e6d5420fc8a300518d4d2288154ff45005e920425c22cbbfe8330f3f754bd9", size = 53694, upload-time = "2026-04-09T16:05:16.095Z" },
{ url = "https://files.pythonhosted.org/packages/5f/d7/1b3e26fffde1452d82f5666164858a81c26ebe808e7ae8c9c88628981540/librt-0.9.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f29b68cd9714531672db62cc54f6e8ff981900f824d13fa0e00749189e13778e", size = 68367, upload-time = "2026-04-09T16:05:17.243Z" },
{ url = "https://files.pythonhosted.org/packages/a5/5b/c61b043ad2e091fbe1f2d35d14795e545d0b56b03edaa390fa1dcee3d160/librt-0.9.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:7d5c8a5929ac325729f6119802070b561f4db793dffc45e9ac750992a4ed4d22", size = 70595, upload-time = "2026-04-09T16:05:18.471Z" },
{ url = "https://files.pythonhosted.org/packages/a3/22/2448471196d8a73370aa2f23445455dc42712c21404081fcd7a03b9e0749/librt-0.9.0-cp313-cp313-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:756775d25ec8345b837ab52effee3ad2f3b2dfd6bbee3e3f029c517bd5d8f05a", size = 204354, upload-time = "2026-04-09T16:05:19.593Z" },
{ url = "https://files.pythonhosted.org/packages/ac/5e/39fc4b153c78cfd2c8a2dcb32700f2d41d2312aa1050513183be4540930d/librt-0.9.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:2b8f5d00b49818f4e2b1667db994488b045835e0ac16fe2f924f3871bd2b8ac5", size = 216238, upload-time = "2026-04-09T16:05:20.868Z" },
{ url = "https://files.pythonhosted.org/packages/d7/42/bc2d02d0fa7badfa63aa8d6dcd8793a9f7ef5a94396801684a51ed8d8287/librt-0.9.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c81aef782380f0f13ead670aae01825eb653b44b046aa0e5ebbb79f76ed4aa11", size = 230589, upload-time = "2026-04-09T16:05:22.305Z" },
{ url = "https://files.pythonhosted.org/packages/c8/7b/e2d95cc513866373692aa5edf98080d5602dd07cabfb9e5d2f70df2f25f7/librt-0.9.0-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:66b58fed90a545328e80d575467244de3741e088c1af928f0b489ebec3ef3858", size = 224610, upload-time = "2026-04-09T16:05:23.647Z" },
{ url = "https://files.pythonhosted.org/packages/31/d5/6cec4607e998eaba57564d06a1295c21b0a0c8de76e4e74d699e627bd98c/librt-0.9.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:e78fb7419e07d98c2af4b8567b72b3eaf8cb05caad642e9963465569c8b2d87e", size = 232558, upload-time = "2026-04-09T16:05:25.025Z" },
{ url = "https://files.pythonhosted.org/packages/95/8c/27f1d8d3aaf079d3eb26439bf0b32f1482340c3552e324f7db9dca858671/librt-0.9.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:2c3786f0f4490a5cd87f1ed6cefae833ad6b1060d52044ce0434a2e85893afd0", size = 225521, upload-time = "2026-04-09T16:05:26.311Z" },
{ url = "https://files.pythonhosted.org/packages/6b/d8/1e0d43b1c329b416017619469b3c3801a25a6a4ef4a1c68332aeaa6f72ca/librt-0.9.0-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:8494cfc61e03542f2d381e71804990b3931175a29b9278fdb4a5459948778dc2", size = 227789, upload-time = "2026-04-09T16:05:27.624Z" },
{ url = "https://files.pythonhosted.org/packages/2c/b4/d3d842e88610fcd4c8eec7067b0c23ef2d7d3bff31496eded6a83b0f99be/librt-0.9.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:07cf11f769831186eeac424376e6189f20ace4f7263e2134bdb9757340d84d4d", size = 248616, upload-time = "2026-04-09T16:05:29.181Z" },
{ url = "https://files.pythonhosted.org/packages/ec/28/527df8ad0d1eb6c8bdfa82fc190f1f7c4cca5a1b6d7b36aeabf95b52d74d/librt-0.9.0-cp313-cp313-win32.whl", hash = "sha256:850d6d03177e52700af605fd60db7f37dcb89782049a149674d1a9649c2138fd", size = 56039, upload-time = "2026-04-09T16:05:30.709Z" },
{ url = "https://files.pythonhosted.org/packages/f3/a7/413652ad0d92273ee5e30c000fc494b361171177c83e57c060ecd3c21538/librt-0.9.0-cp313-cp313-win_amd64.whl", hash = "sha256:a5af136bfba820d592f86c67affcef9b3ff4d4360ac3255e341e964489b48519", size = 63264, upload-time = "2026-04-09T16:05:31.881Z" },
{ url = "https://files.pythonhosted.org/packages/a4/0a/92c244309b774e290ddb15e93363846ae7aa753d9586b8aad511c5e6145b/librt-0.9.0-cp313-cp313-win_arm64.whl", hash = "sha256:4c4d0440a3a8e31d962340c3e1cc3fc9ee7febd34c8d8f770d06adb947779ea5", size = 53728, upload-time = "2026-04-09T16:05:33.31Z" },
]
[[package]]
name = "license-expression"
version = "30.4.4"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "boolean-py" },
]
sdist = { url = "https://files.pythonhosted.org/packages/40/71/d89bb0e71b1415453980fd32315f2a037aad9f7f70f695c7cec7035feb13/license_expression-30.4.4.tar.gz", hash = "sha256:73448f0aacd8d0808895bdc4b2c8e01a8d67646e4188f887375398c761f340fd", size = 186402, upload-time = "2025-07-22T11:13:32.17Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/af/40/791891d4c0c4dab4c5e187c17261cedc26285fd41541577f900470a45a4d/license_expression-30.4.4-py3-none-any.whl", hash = "sha256:421788fdcadb41f049d2dc934ce666626265aeccefddd25e162a26f23bcbf8a4", size = 120615, upload-time = "2025-07-22T11:13:31.217Z" },
]
[[package]]
@@ -4269,11 +4210,11 @@ wheels = [
[[package]]
name = "more-itertools"
version = "11.0.2"
version = "11.0.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/a2/f7/139d22fef48ac78127d18e01d80cf1be40236ae489769d17f35c3d425293/more_itertools-11.0.2.tar.gz", hash = "sha256:392a9e1e362cbc106a2457d37cabf9b36e5e12efd4ebff1654630e76597df804", size = 144659, upload-time = "2026-04-09T15:01:33.297Z" }
sdist = { url = "https://files.pythonhosted.org/packages/24/24/e0acc4bf54cba50c1d432c70a72a3df96db4a321b2c4c68432a60759044f/more_itertools-11.0.1.tar.gz", hash = "sha256:fefaf25b7ab08f0b45fa9f1892cae93b9fc0089ef034d39213bce15f1cc9e199", size = 144739, upload-time = "2026-04-02T16:17:45.061Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/cb/98/6af411189d9413534c3eb691182bff1f5c6d44ed2f93f2edfe52a1bbceb8/more_itertools-11.0.2-py3-none-any.whl", hash = "sha256:6e35b35f818b01f691643c6c611bc0902f2e92b46c18fffa77ae1e7c46e912e4", size = 71939, upload-time = "2026-04-09T15:01:32.21Z" },
{ url = "https://files.pythonhosted.org/packages/d8/f4/5e52c7319b8087acef603ed6e50dc325c02eaa999355414830468611f13c/more_itertools-11.0.1-py3-none-any.whl", hash = "sha256:eaf287826069452a8f61026c597eae2428b2d1ba2859083abbf240b46842ce6d", size = 72182, upload-time = "2026-04-02T16:17:43.724Z" },
]
[[package]]
@@ -4304,49 +4245,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/43/e3/7d92a15f894aa0c9c4b49b8ee9ac9850d6e63b03c9c32c0367a13ae62209/mpmath-1.3.0-py3-none-any.whl", hash = "sha256:a0b2b9fe80bbcd81a6647ff13108738cfb482d481d826cc0e02f5b35e5c88d2c", size = 536198, upload-time = "2023-03-07T16:47:09.197Z" },
]
[[package]]
name = "msgpack"
version = "1.1.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/4d/f2/bfb55a6236ed8725a96b0aa3acbd0ec17588e6a2c3b62a93eb513ed8783f/msgpack-1.1.2.tar.gz", hash = "sha256:3b60763c1373dd60f398488069bcdc703cd08a711477b5d480eecc9f9626f47e", size = 173581, upload-time = "2025-10-08T09:15:56.596Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/f5/a2/3b68a9e769db68668b25c6108444a35f9bd163bb848c0650d516761a59c0/msgpack-1.1.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0051fffef5a37ca2cd16978ae4f0aef92f164df86823871b5162812bebecd8e2", size = 81318, upload-time = "2025-10-08T09:14:38.722Z" },
{ url = "https://files.pythonhosted.org/packages/5b/e1/2b720cc341325c00be44e1ed59e7cfeae2678329fbf5aa68f5bda57fe728/msgpack-1.1.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:a605409040f2da88676e9c9e5853b3449ba8011973616189ea5ee55ddbc5bc87", size = 83786, upload-time = "2025-10-08T09:14:40.082Z" },
{ url = "https://files.pythonhosted.org/packages/71/e5/c2241de64bfceac456b140737812a2ab310b10538a7b34a1d393b748e095/msgpack-1.1.2-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8b696e83c9f1532b4af884045ba7f3aa741a63b2bc22617293a2c6a7c645f251", size = 398240, upload-time = "2025-10-08T09:14:41.151Z" },
{ url = "https://files.pythonhosted.org/packages/b7/09/2a06956383c0fdebaef5aa9246e2356776f12ea6f2a44bd1368abf0e46c4/msgpack-1.1.2-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:365c0bbe981a27d8932da71af63ef86acc59ed5c01ad929e09a0b88c6294e28a", size = 406070, upload-time = "2025-10-08T09:14:42.821Z" },
{ url = "https://files.pythonhosted.org/packages/0e/74/2957703f0e1ef20637d6aead4fbb314330c26f39aa046b348c7edcf6ca6b/msgpack-1.1.2-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:41d1a5d875680166d3ac5c38573896453bbbea7092936d2e107214daf43b1d4f", size = 393403, upload-time = "2025-10-08T09:14:44.38Z" },
{ url = "https://files.pythonhosted.org/packages/a5/09/3bfc12aa90f77b37322fc33e7a8a7c29ba7c8edeadfa27664451801b9860/msgpack-1.1.2-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:354e81bcdebaab427c3df4281187edc765d5d76bfb3a7c125af9da7a27e8458f", size = 398947, upload-time = "2025-10-08T09:14:45.56Z" },
{ url = "https://files.pythonhosted.org/packages/4b/4f/05fcebd3b4977cb3d840f7ef6b77c51f8582086de5e642f3fefee35c86fc/msgpack-1.1.2-cp310-cp310-win32.whl", hash = "sha256:e64c8d2f5e5d5fda7b842f55dec6133260ea8f53c4257d64494c534f306bf7a9", size = 64769, upload-time = "2025-10-08T09:14:47.334Z" },
{ url = "https://files.pythonhosted.org/packages/d0/3e/b4547e3a34210956382eed1c85935fff7e0f9b98be3106b3745d7dec9c5e/msgpack-1.1.2-cp310-cp310-win_amd64.whl", hash = "sha256:db6192777d943bdaaafb6ba66d44bf65aa0e9c5616fa1d2da9bb08828c6b39aa", size = 71293, upload-time = "2025-10-08T09:14:48.665Z" },
{ url = "https://files.pythonhosted.org/packages/2c/97/560d11202bcd537abca693fd85d81cebe2107ba17301de42b01ac1677b69/msgpack-1.1.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:2e86a607e558d22985d856948c12a3fa7b42efad264dca8a3ebbcfa2735d786c", size = 82271, upload-time = "2025-10-08T09:14:49.967Z" },
{ url = "https://files.pythonhosted.org/packages/83/04/28a41024ccbd67467380b6fb440ae916c1e4f25e2cd4c63abe6835ac566e/msgpack-1.1.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:283ae72fc89da59aa004ba147e8fc2f766647b1251500182fac0350d8af299c0", size = 84914, upload-time = "2025-10-08T09:14:50.958Z" },
{ url = "https://files.pythonhosted.org/packages/71/46/b817349db6886d79e57a966346cf0902a426375aadc1e8e7a86a75e22f19/msgpack-1.1.2-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:61c8aa3bd513d87c72ed0b37b53dd5c5a0f58f2ff9f26e1555d3bd7948fb7296", size = 416962, upload-time = "2025-10-08T09:14:51.997Z" },
{ url = "https://files.pythonhosted.org/packages/da/e0/6cc2e852837cd6086fe7d8406af4294e66827a60a4cf60b86575a4a65ca8/msgpack-1.1.2-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:454e29e186285d2ebe65be34629fa0e8605202c60fbc7c4c650ccd41870896ef", size = 426183, upload-time = "2025-10-08T09:14:53.477Z" },
{ url = "https://files.pythonhosted.org/packages/25/98/6a19f030b3d2ea906696cedd1eb251708e50a5891d0978b012cb6107234c/msgpack-1.1.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:7bc8813f88417599564fafa59fd6f95be417179f76b40325b500b3c98409757c", size = 411454, upload-time = "2025-10-08T09:14:54.648Z" },
{ url = "https://files.pythonhosted.org/packages/b7/cd/9098fcb6adb32187a70b7ecaabf6339da50553351558f37600e53a4a2a23/msgpack-1.1.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:bafca952dc13907bdfdedfc6a5f579bf4f292bdd506fadb38389afa3ac5b208e", size = 422341, upload-time = "2025-10-08T09:14:56.328Z" },
{ url = "https://files.pythonhosted.org/packages/e6/ae/270cecbcf36c1dc85ec086b33a51a4d7d08fc4f404bdbc15b582255d05ff/msgpack-1.1.2-cp311-cp311-win32.whl", hash = "sha256:602b6740e95ffc55bfb078172d279de3773d7b7db1f703b2f1323566b878b90e", size = 64747, upload-time = "2025-10-08T09:14:57.882Z" },
{ url = "https://files.pythonhosted.org/packages/2a/79/309d0e637f6f37e83c711f547308b91af02b72d2326ddd860b966080ef29/msgpack-1.1.2-cp311-cp311-win_amd64.whl", hash = "sha256:d198d275222dc54244bf3327eb8cbe00307d220241d9cec4d306d49a44e85f68", size = 71633, upload-time = "2025-10-08T09:14:59.177Z" },
{ url = "https://files.pythonhosted.org/packages/73/4d/7c4e2b3d9b1106cd0aa6cb56cc57c6267f59fa8bfab7d91df5adc802c847/msgpack-1.1.2-cp311-cp311-win_arm64.whl", hash = "sha256:86f8136dfa5c116365a8a651a7d7484b65b13339731dd6faebb9a0242151c406", size = 64755, upload-time = "2025-10-08T09:15:00.48Z" },
{ url = "https://files.pythonhosted.org/packages/ad/bd/8b0d01c756203fbab65d265859749860682ccd2a59594609aeec3a144efa/msgpack-1.1.2-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:70a0dff9d1f8da25179ffcf880e10cf1aad55fdb63cd59c9a49a1b82290062aa", size = 81939, upload-time = "2025-10-08T09:15:01.472Z" },
{ url = "https://files.pythonhosted.org/packages/34/68/ba4f155f793a74c1483d4bdef136e1023f7bcba557f0db4ef3db3c665cf1/msgpack-1.1.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:446abdd8b94b55c800ac34b102dffd2f6aa0ce643c55dfc017ad89347db3dbdb", size = 85064, upload-time = "2025-10-08T09:15:03.764Z" },
{ url = "https://files.pythonhosted.org/packages/f2/60/a064b0345fc36c4c3d2c743c82d9100c40388d77f0b48b2f04d6041dbec1/msgpack-1.1.2-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c63eea553c69ab05b6747901b97d620bb2a690633c77f23feb0c6a947a8a7b8f", size = 417131, upload-time = "2025-10-08T09:15:05.136Z" },
{ url = "https://files.pythonhosted.org/packages/65/92/a5100f7185a800a5d29f8d14041f61475b9de465ffcc0f3b9fba606e4505/msgpack-1.1.2-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:372839311ccf6bdaf39b00b61288e0557916c3729529b301c52c2d88842add42", size = 427556, upload-time = "2025-10-08T09:15:06.837Z" },
{ url = "https://files.pythonhosted.org/packages/f5/87/ffe21d1bf7d9991354ad93949286f643b2bb6ddbeab66373922b44c3b8cc/msgpack-1.1.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:2929af52106ca73fcb28576218476ffbb531a036c2adbcf54a3664de124303e9", size = 404920, upload-time = "2025-10-08T09:15:08.179Z" },
{ url = "https://files.pythonhosted.org/packages/ff/41/8543ed2b8604f7c0d89ce066f42007faac1eaa7d79a81555f206a5cdb889/msgpack-1.1.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:be52a8fc79e45b0364210eef5234a7cf8d330836d0a64dfbb878efa903d84620", size = 415013, upload-time = "2025-10-08T09:15:09.83Z" },
{ url = "https://files.pythonhosted.org/packages/41/0d/2ddfaa8b7e1cee6c490d46cb0a39742b19e2481600a7a0e96537e9c22f43/msgpack-1.1.2-cp312-cp312-win32.whl", hash = "sha256:1fff3d825d7859ac888b0fbda39a42d59193543920eda9d9bea44d958a878029", size = 65096, upload-time = "2025-10-08T09:15:11.11Z" },
{ url = "https://files.pythonhosted.org/packages/8c/ec/d431eb7941fb55a31dd6ca3404d41fbb52d99172df2e7707754488390910/msgpack-1.1.2-cp312-cp312-win_amd64.whl", hash = "sha256:1de460f0403172cff81169a30b9a92b260cb809c4cb7e2fc79ae8d0510c78b6b", size = 72708, upload-time = "2025-10-08T09:15:12.554Z" },
{ url = "https://files.pythonhosted.org/packages/c5/31/5b1a1f70eb0e87d1678e9624908f86317787b536060641d6798e3cf70ace/msgpack-1.1.2-cp312-cp312-win_arm64.whl", hash = "sha256:be5980f3ee0e6bd44f3a9e9dea01054f175b50c3e6cdb692bc9424c0bbb8bf69", size = 64119, upload-time = "2025-10-08T09:15:13.589Z" },
{ url = "https://files.pythonhosted.org/packages/6b/31/b46518ecc604d7edf3a4f94cb3bf021fc62aa301f0cb849936968164ef23/msgpack-1.1.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:4efd7b5979ccb539c221a4c4e16aac1a533efc97f3b759bb5a5ac9f6d10383bf", size = 81212, upload-time = "2025-10-08T09:15:14.552Z" },
{ url = "https://files.pythonhosted.org/packages/92/dc/c385f38f2c2433333345a82926c6bfa5ecfff3ef787201614317b58dd8be/msgpack-1.1.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:42eefe2c3e2af97ed470eec850facbe1b5ad1d6eacdbadc42ec98e7dcf68b4b7", size = 84315, upload-time = "2025-10-08T09:15:15.543Z" },
{ url = "https://files.pythonhosted.org/packages/d3/68/93180dce57f684a61a88a45ed13047558ded2be46f03acb8dec6d7c513af/msgpack-1.1.2-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1fdf7d83102bf09e7ce3357de96c59b627395352a4024f6e2458501f158bf999", size = 412721, upload-time = "2025-10-08T09:15:16.567Z" },
{ url = "https://files.pythonhosted.org/packages/5d/ba/459f18c16f2b3fc1a1ca871f72f07d70c07bf768ad0a507a698b8052ac58/msgpack-1.1.2-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:fac4be746328f90caa3cd4bc67e6fe36ca2bf61d5c6eb6d895b6527e3f05071e", size = 424657, upload-time = "2025-10-08T09:15:17.825Z" },
{ url = "https://files.pythonhosted.org/packages/38/f8/4398c46863b093252fe67368b44edc6c13b17f4e6b0e4929dbf0bdb13f23/msgpack-1.1.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:fffee09044073e69f2bad787071aeec727183e7580443dfeb8556cbf1978d162", size = 402668, upload-time = "2025-10-08T09:15:19.003Z" },
{ url = "https://files.pythonhosted.org/packages/28/ce/698c1eff75626e4124b4d78e21cca0b4cc90043afb80a507626ea354ab52/msgpack-1.1.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:5928604de9b032bc17f5099496417f113c45bc6bc21b5c6920caf34b3c428794", size = 419040, upload-time = "2025-10-08T09:15:20.183Z" },
{ url = "https://files.pythonhosted.org/packages/67/32/f3cd1667028424fa7001d82e10ee35386eea1408b93d399b09fb0aa7875f/msgpack-1.1.2-cp313-cp313-win32.whl", hash = "sha256:a7787d353595c7c7e145e2331abf8b7ff1e6673a6b974ded96e6d4ec09f00c8c", size = 65037, upload-time = "2025-10-08T09:15:21.416Z" },
{ url = "https://files.pythonhosted.org/packages/74/07/1ed8277f8653c40ebc65985180b007879f6a836c525b3885dcc6448ae6cb/msgpack-1.1.2-cp313-cp313-win_amd64.whl", hash = "sha256:a465f0dceb8e13a487e54c07d04ae3ba131c7c5b95e2612596eafde1dccf64a9", size = 72631, upload-time = "2025-10-08T09:15:22.431Z" },
{ url = "https://files.pythonhosted.org/packages/e5/db/0314e4e2db56ebcf450f277904ffd84a7988b9e5da8d0d61ab2d057df2b6/msgpack-1.1.2-cp313-cp313-win_arm64.whl", hash = "sha256:e69b39f8c0aa5ec24b57737ebee40be647035158f14ed4b40e6f150077e21a84", size = 64118, upload-time = "2025-10-08T09:15:23.402Z" },
]
[[package]]
name = "msoffcrypto-tool"
version = "6.0.0"
@@ -5316,15 +5214,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/b7/c1/88bf70a327c86f8529ad3a4ae35e92fcebf05295668fca7973279e189afe/oxylabs-2.0.0-py3-none-any.whl", hash = "sha256:3848d53bc47acdcea16ea829dc52416cdf96edae130e17bb3ac7146b012387d7", size = 34274, upload-time = "2025-03-28T13:54:15.188Z" },
]
[[package]]
name = "packageurl-python"
version = "0.17.6"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f5/d6/3b5a4e3cfaef7a53869a26ceb034d1ff5e5c27c814ce77260a96d50ab7bb/packageurl_python-0.17.6.tar.gz", hash = "sha256:1252ce3a102372ca6f86eb968e16f9014c4ba511c5c37d95a7f023e2ca6e5c25", size = 50618, upload-time = "2025-11-24T15:20:17.998Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/b1/2f/c7277b7615a93f51b5fbc1eacfc1b75e8103370e786fd8ce2abf6e5c04ab/packageurl_python-0.17.6-py3-none-any.whl", hash = "sha256:31a85c2717bc41dd818f3c62908685ff9eebcb68588213745b14a6ee9e7df7c9", size = 36776, upload-time = "2025-11-24T15:20:16.962Z" },
]
[[package]]
name = "packaging"
version = "26.0"
@@ -5655,60 +5544,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/bc/60/5382c03e1970de634027cee8e1b7d39776b778b81812aaf45b694dfe9e28/pillow-12.2.0-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:bfa9c230d2fe991bed5318a5f119bd6780cda2915cca595393649fc118ab895e", size = 7080946, upload-time = "2026-04-01T14:46:11.734Z" },
]
[[package]]
name = "pip"
version = "26.0.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/48/83/0d7d4e9efe3344b8e2fe25d93be44f64b65364d3c8d7bc6dc90198d5422e/pip-26.0.1.tar.gz", hash = "sha256:c4037d8a277c89b320abe636d59f91e6d0922d08a05b60e85e53b296613346d8", size = 1812747, upload-time = "2026-02-05T02:20:18.702Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/de/f0/c81e05b613866b76d2d1066490adf1a3dbc4ee9d9c839961c3fc8a6997af/pip-26.0.1-py3-none-any.whl", hash = "sha256:bdb1b08f4274833d62c1aa29e20907365a2ceb950410df15fc9521bad440122b", size = 1787723, upload-time = "2026-02-05T02:20:16.416Z" },
]
[[package]]
name = "pip-api"
version = "0.0.34"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pip" },
]
sdist = { url = "https://files.pythonhosted.org/packages/b9/f1/ee85f8c7e82bccf90a3c7aad22863cc6e20057860a1361083cd2adacb92e/pip_api-0.0.34.tar.gz", hash = "sha256:9b75e958f14c5a2614bae415f2adf7eeb54d50a2cfbe7e24fd4826471bac3625", size = 123017, upload-time = "2024-07-09T20:32:30.641Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/91/f7/ebf5003e1065fd00b4cbef53bf0a65c3d3e1b599b676d5383ccb7a8b88ba/pip_api-0.0.34-py3-none-any.whl", hash = "sha256:8b2d7d7c37f2447373aa2cf8b1f60a2f2b27a84e1e9e0294a3f6ef10eb3ba6bb", size = 120369, upload-time = "2024-07-09T20:32:29.099Z" },
]
[[package]]
name = "pip-audit"
version = "2.9.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "cachecontrol", extra = ["filecache"] },
{ name = "cyclonedx-python-lib" },
{ name = "packaging" },
{ name = "pip-api" },
{ name = "pip-requirements-parser" },
{ name = "platformdirs" },
{ name = "requests" },
{ name = "rich" },
{ name = "toml" },
]
sdist = { url = "https://files.pythonhosted.org/packages/cc/7f/28fad19a9806f796f13192ab6974c07c4a04d9cbb8e30dd895c3c11ce7ee/pip_audit-2.9.0.tar.gz", hash = "sha256:0b998410b58339d7a231e5aa004326a294e4c7c6295289cdc9d5e1ef07b1f44d", size = 52089, upload-time = "2025-04-07T16:45:23.679Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/43/9e/f4dfd9d3dadb6d6dc9406f1111062f871e2e248ed7b584cca6020baf2ac1/pip_audit-2.9.0-py3-none-any.whl", hash = "sha256:348b16e60895749a0839875d7cc27ebd692e1584ebe5d5cb145941c8e25a80bd", size = 58634, upload-time = "2025-04-07T16:45:22.056Z" },
]

[[package]]
name = "pip-requirements-parser"
version = "32.0.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "packaging" },
{ name = "pyparsing" },
]
sdist = { url = "https://files.pythonhosted.org/packages/5e/2a/63b574101850e7f7b306ddbdb02cb294380d37948140eecd468fae392b54/pip-requirements-parser-32.0.1.tar.gz", hash = "sha256:b4fa3a7a0be38243123cf9d1f3518da10c51bdb165a2b2985566247f9155a7d3", size = 209359, upload-time = "2022-12-21T15:25:22.732Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/54/d0/d04f1d1e064ac901439699ee097f58688caadea42498ec9c4b4ad2ef84ab/pip_requirements_parser-32.0.1-py3-none-any.whl", hash = "sha256:4659bc2a667783e7a15d190f6fccf8b2486685b6dba4c19c3876314769c57526", size = 35648, upload-time = "2022-12-21T15:25:21.046Z" },
]

[[package]]
name = "platformdirs"
version = "4.9.6"
@@ -6051,18 +5886,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/3a/15/b1894b9741f7a48f0b4cbea458f7d4141a6df6a1b26bec05fcde96703ce1/py_rust_stemmers-0.1.5-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:57b061c3b4af9e409d009d729b21bc53dabe47116c955ccf0b642a5a2d438f93", size = 324879, upload-time = "2025-02-19T13:56:27.462Z" },
]

[[package]]
name = "py-serializable"
version = "2.1.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "defusedxml" },
]
sdist = { url = "https://files.pythonhosted.org/packages/73/21/d250cfca8ff30c2e5a7447bc13861541126ce9bd4426cd5d0c9f08b5547d/py_serializable-2.1.0.tar.gz", hash = "sha256:9d5db56154a867a9b897c0163b33a793c804c80cee984116d02d49e4578fc103", size = 52368, upload-time = "2025-07-21T09:56:48.07Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/9b/bf/7595e817906a29453ba4d99394e781b6fabe55d21f3c15d240f85dd06bb1/py_serializable-2.1.0-py3-none-any.whl", hash = "sha256:b56d5d686b5a03ba4f4db5e769dc32336e142fc3bd4d68a8c25579ebb0a67304", size = 23045, upload-time = "2025-07-21T09:56:46.848Z" },
]

[[package]]
name = "pyarrow"
version = "23.0.1"
@@ -6727,14 +6550,14 @@ wheels = [

[[package]]
name = "pypdf"
version = "6.10.0"
version = "6.9.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "typing-extensions", marker = "python_full_version < '3.11'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/b8/9f/ca96abf18683ca12602065e4ed2bec9050b672c87d317f1079abc7b6d993/pypdf-6.10.0.tar.gz", hash = "sha256:4c5a48ba258c37024ec2505f7e8fd858525f5502784a2e1c8d415604af29f6ef", size = 5314833, upload-time = "2026-04-10T09:34:57.102Z" }
sdist = { url = "https://files.pythonhosted.org/packages/31/83/691bdb309306232362503083cb15777491045dd54f45393a317dc7d8082f/pypdf-6.9.2.tar.gz", hash = "sha256:7f850faf2b0d4ab936582c05da32c52214c2b089d61a316627b5bfb5b0dab46c", size = 5311837, upload-time = "2026-03-23T14:53:27.983Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/55/f2/7ebe366f633f30a6ad105f650f44f24f98cb1335c4157d21ae47138b3482/pypdf-6.10.0-py3-none-any.whl", hash = "sha256:90005e959e1596c6e6c84c8b0ad383285b3e17011751cedd17f2ce8fcdfc86de", size = 334459, upload-time = "2026-04-10T09:34:54.966Z" },
{ url = "https://files.pythonhosted.org/packages/a5/7e/c85f41243086a8fe5d1baeba527cb26a1918158a565932b41e0f7c0b32e9/pypdf-6.9.2-py3-none-any.whl", hash = "sha256:662cf29bcb419a36a1365232449624ab40b7c2d0cfc28e54f42eeecd1fd7e844", size = 333744, upload-time = "2026-03-23T14:53:26.573Z" },
]

[[package]]
@@ -6817,7 +6640,7 @@ sdist = { url = "https://files.pythonhosted.org/packages/12/a0/d0638470df605ce26
[[package]]
name = "pytest"
version = "9.0.3"
version = "8.4.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
@@ -6828,9 +6651,9 @@ dependencies = [
{ name = "pygments" },
{ name = "tomli", marker = "python_full_version < '3.11'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/7d/0d/549bd94f1a0a402dc8cf64563a117c0f3765662e2e668477624baeec44d5/pytest-9.0.3.tar.gz", hash = "sha256:b86ada508af81d19edeb213c681b1d48246c1a91d304c6c81a427674c17eb91c", size = 1572165, upload-time = "2026-04-07T17:16:18.027Z" }
sdist = { url = "https://files.pythonhosted.org/packages/a3/5c/00a0e072241553e1a7496d638deababa67c5058571567b92a7eaa258397c/pytest-8.4.2.tar.gz", hash = "sha256:86c0d0b93306b961d58d62a4db4879f27fe25513d4b969df351abdddb3c30e01", size = 1519618, upload-time = "2025-09-04T14:34:22.711Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d4/24/a372aaf5c9b7208e7112038812994107bc65a84cd00e0354a88c2c77a617/pytest-9.0.3-py3-none-any.whl", hash = "sha256:2c5efc453d45394fdd706ade797c0a81091eccd1d6e4bccfcd476e2b8e0ab5d9", size = 375249, upload-time = "2026-04-07T17:16:16.13Z" },
{ url = "https://files.pythonhosted.org/packages/a8/a4/20da314d277121d6534b3a980b29035dcd51e6744bd79075a6ce8fa4eb8d/pytest-8.4.2-py3-none-any.whl", hash = "sha256:872f880de3fc3a5bdc88a11b39c9710c3497a547cfa9320bc3c5e62fbf272e79", size = 365750, upload-time = "2025-09-04T14:34:20.226Z" },
]

[[package]]
@@ -6874,14 +6697,14 @@ wheels = [
[[package]]
name = "pytest-split"
version = "0.11.0"
version = "0.10.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pytest" },
]
sdist = { url = "https://files.pythonhosted.org/packages/2f/16/8af4c5f2ceb3640bb1f78dfdf5c184556b10dfe9369feaaad7ff1c13f329/pytest_split-0.11.0.tar.gz", hash = "sha256:8ebdb29cc72cc962e8eb1ec07db1eeb98ab25e215ed8e3216f6b9fc7ce0ec2b5", size = 13421, upload-time = "2026-02-03T09:14:31.469Z" }
sdist = { url = "https://files.pythonhosted.org/packages/46/d7/e30ba44adf83f15aee3f636daea54efadf735769edc0f0a7d98163f61038/pytest_split-0.10.0.tar.gz", hash = "sha256:adf80ba9fef7be89500d571e705b4f963dfa05038edf35e4925817e6b34ea66f", size = 13903, upload-time = "2024-10-16T15:45:19.783Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ae/a1/d4423657caaa8be9b31e491592b49cebdcfd434d3e74512ce71f6ec39905/pytest_split-0.11.0-py3-none-any.whl", hash = "sha256:899d7c0f5730da91e2daf283860eb73b503259cb416851a65599368849c7f382", size = 11911, upload-time = "2026-02-03T09:14:33.708Z" },
{ url = "https://files.pythonhosted.org/packages/d6/a7/cad88e9c1109a5c2a320d608daa32e5ee008ccbc766310f54b1cd6b3d69c/pytest_split-0.10.0-py3-none-any.whl", hash = "sha256:466096b086a7147bcd423c6e6c2e57fc62af1c5ea2e256b4ed50fc030fc3dddc", size = 11961, upload-time = "2024-10-16T15:45:18.289Z" },
]

[[package]]
@@ -7372,7 +7195,7 @@ wheels = [
[[package]]
name = "requests"
version = "2.33.1"
version = "2.32.5"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "certifi" },
@@ -7380,9 +7203,9 @@ dependencies = [
{ name = "idna" },
{ name = "urllib3" },
]
sdist = { url = "https://files.pythonhosted.org/packages/5f/a4/98b9c7c6428a668bf7e42ebb7c79d576a1c3c1e3ae2d47e674b468388871/requests-2.33.1.tar.gz", hash = "sha256:18817f8c57c6263968bc123d237e3b8b08ac046f5456bd1e307ee8f4250d3517", size = 134120, upload-time = "2026-03-30T16:09:15.531Z" }
sdist = { url = "https://files.pythonhosted.org/packages/c9/74/b3ff8e6c8446842c3f5c837e9c3dfcfe2018ea6ecef224c710c85ef728f4/requests-2.32.5.tar.gz", hash = "sha256:dbba0bac56e100853db0ea71b82b4dfd5fe2bf6d3754a8893c3af500cec7d7cf", size = 134517, upload-time = "2025-08-18T20:46:02.573Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d7/8e/7540e8a2036f79a125c1d2ebadf69ed7901608859186c856fa0388ef4197/requests-2.33.1-py3-none-any.whl", hash = "sha256:4e6d1ef462f3626a1f0a0a9c42dd93c63bad33f9f1c1937509b8c5c8718ab56a", size = 64947, upload-time = "2026-03-30T16:09:13.83Z" },
{ url = "https://files.pythonhosted.org/packages/1e/db/4254e3eabe8020b458f1a747140d32277ec7a271daf1d235b70dc0b4e6e3/requests-2.32.5-py3-none-any.whl", hash = "sha256:2462f94637a34fd532264295e186976db0f5d453d1cdd31473c85a6a161affb6", size = 64738, upload-time = "2025-08-18T20:46:00.542Z" },
]

[[package]]
@@ -7559,14 +7382,14 @@ wheels = [
[[package]]
name = "s3transfer"
version = "0.16.0"
version = "0.14.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "botocore" },
]
sdist = { url = "https://files.pythonhosted.org/packages/05/04/74127fc843314818edfa81b5540e26dd537353b123a4edc563109d8f17dd/s3transfer-0.16.0.tar.gz", hash = "sha256:8e990f13268025792229cd52fa10cb7163744bf56e719e0b9cb925ab79abf920", size = 153827, upload-time = "2025-12-01T02:30:59.114Z" }
sdist = { url = "https://files.pythonhosted.org/packages/62/74/8d69dcb7a9efe8baa2046891735e5dfe433ad558ae23d9e3c14c633d1d58/s3transfer-0.14.0.tar.gz", hash = "sha256:eff12264e7c8b4985074ccce27a3b38a485bb7f7422cc8046fee9be4983e4125", size = 151547, upload-time = "2025-09-09T19:23:31.089Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/fc/51/727abb13f44c1fcf6d145979e1535a35794db0f6e450a0cb46aa24732fe2/s3transfer-0.16.0-py3-none-any.whl", hash = "sha256:18e25d66fed509e3868dc1572b3f427ff947dd2c56f844a5bf09481ad3f3b2fe", size = 86830, upload-time = "2025-12-01T02:30:57.729Z" },
{ url = "https://files.pythonhosted.org/packages/48/f0/ae7ca09223a81a1d890b2557186ea015f6e0502e9b8cb8e1813f1d8cfa4e/s3transfer-0.14.0-py3-none-any.whl", hash = "sha256:ea3b790c7077558ed1f02a3072fb3cb992bbbd253392f4b6e9e8976941c7d456", size = 85712, upload-time = "2025-09-09T19:23:30.041Z" },
]

[[package]]
@@ -7741,7 +7564,7 @@ wheels = [
[[package]]
name = "scrapfly-sdk"
version = "0.8.28"
version = "0.8.27"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "backoff" },
@@ -7751,14 +7574,14 @@ dependencies = [
{ name = "requests" },
{ name = "urllib3" },
]
sdist = { url = "https://files.pythonhosted.org/packages/7b/3e/a881968b866ed77cb8a5013aeb100a5a3dd2b502e9a9f955615e15157ad0/scrapfly_sdk-0.8.28.tar.gz", hash = "sha256:051f734ae10fd9b136527f3dc3344abb68ed64822c108b1caff6dc8399c197e0", size = 104208, upload-time = "2026-04-09T16:18:51.793Z" }
sdist = { url = "https://files.pythonhosted.org/packages/fb/49/c9c13113630ea38653b784f3511779e191152aa6afb44cf7e148d99ad345/scrapfly_sdk-0.8.27.tar.gz", hash = "sha256:affce316fecfabe444685779fc61b28a9e7a36344819701339637a96272831c6", size = 82753, upload-time = "2026-02-26T19:00:32.638Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c7/c6/97a5fbc9ff952c45783303add4c4e431b7a34a020f6dc3adb8f878af0c2a/scrapfly_sdk-0.8.28-py3-none-any.whl", hash = "sha256:116198df90cdbea224d6b0c92d4d74c9ee585fa63c1c5ec9f021b5fc9638fe3f", size = 117920, upload-time = "2026-04-09T16:18:50.356Z" },
{ url = "https://files.pythonhosted.org/packages/70/9a/f9367c504710f0fc06654adef079b3e020318bf0c6beccb8291ecf26b9fe/scrapfly_sdk-0.8.27-py3-none-any.whl", hash = "sha256:c0cb76fd65e95a6221b3f4531af363f2dcd2dc2e5b18641be9554bb2f60e001c", size = 95229, upload-time = "2026-02-26T19:00:31.227Z" },
]

[[package]]
name = "selenium"
version = "4.42.0"
version = "4.41.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "certifi" },
@@ -7768,9 +7591,9 @@ dependencies = [
{ name = "urllib3" },
{ name = "websocket-client" },
]
sdist = { url = "https://files.pythonhosted.org/packages/33/46/fb93d37749ecf13853739c31c70bd95704310a7defbc57e7101dc4ab2513/selenium-4.42.0.tar.gz", hash = "sha256:4c8ebd84ff96505db4277223648f12e2799e92e13169bc69633a6b24eb066c72", size = 956304, upload-time = "2026-04-09T08:31:20.268Z" }
sdist = { url = "https://files.pythonhosted.org/packages/04/7c/133d00d6d013a17d3f39199f27f1a780ec2e95d7b9aa997dc1b8ac2e62a7/selenium-4.41.0.tar.gz", hash = "sha256:003e971f805231ad63e671783a2b91a299355d10cefb9de964c36ff3819115aa", size = 937872, upload-time = "2026-02-20T03:42:06.216Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/cf/47/9f094f1cffdb54b01da75b45cc29673869458a504b30002797c0c47ac985/selenium-4.42.0-py3-none-any.whl", hash = "sha256:bb29eababf54fa479c95d5fa3fba73889db5d532f3a76addc5b526bbff14fca7", size = 9559171, upload-time = "2026-04-09T08:31:17.38Z" },
{ url = "https://files.pythonhosted.org/packages/a8/d6/e4160989ef6b272779af6f3e5c43c3ba9be6687bdc21c68c3fb220e555b3/selenium-4.41.0-py3-none-any.whl", hash = "sha256:b8ccde8d2e7642221ca64af184a92c19eee6accf2e27f20f30472f5efae18eb1", size = 9532858, upload-time = "2026-02-20T03:42:03.218Z" },
]

[[package]]
@@ -8301,15 +8124,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/59/8c/b1c87148aa15e099243ec9f0cf9d0e970cc2234c3257d558c25a2c5304e6/tokenizers-0.22.2-pp310-pypy310_pp73-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f01a9c019878532f98927d2bacb79bbb404b43d3437455522a00a30718cdedb5", size = 3373542, upload-time = "2026-01-05T10:40:52.803Z" },
]
[[package]]
name = "toml"
version = "0.10.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/be/ba/1f744cdc819428fc6b5084ec34d9b30660f6f9daaf70eead706e3203ec3c/toml-0.10.2.tar.gz", hash = "sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f", size = 22253, upload-time = "2020-11-01T01:40:22.204Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/44/6f/7120676b6d73228c96e17f1f794d8ab046fc910d781c8d151120c3f1569e/toml-0.10.2-py2.py3-none-any.whl", hash = "sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b", size = 16588, upload-time = "2020-11-01T01:40:20.672Z" },
]

[[package]]
name = "tomli"
version = "2.0.2"
@@ -8440,7 +8254,7 @@ wheels = [

[[package]]
name = "transformers"
version = "5.5.3"
version = "5.5.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "huggingface-hub" },
@@ -8454,9 +8268,9 @@ dependencies = [
{ name = "tqdm" },
{ name = "typer" },
]
sdist = { url = "https://files.pythonhosted.org/packages/af/35/cd5b0d1288e65d2c12db4ce84c1ec1074f7ee9bced040de6c9d69e70d620/transformers-5.5.3.tar.gz", hash = "sha256:3f60128e840b40d352655903552e1eed4f94ed49369a4d43e1bc067bd32d3f50", size = 8226047, upload-time = "2026-04-09T15:52:56.231Z" }
sdist = { url = "https://files.pythonhosted.org/packages/ff/9d/fb46e729b461985f41a5740167688b924a4019141e5c164bea77548d3d9e/transformers-5.5.0.tar.gz", hash = "sha256:c8db656cf51c600cd8c75f06b20ef85c72e8b8ff9abc880c5d3e8bc70e0ddcbd", size = 8237745, upload-time = "2026-04-02T16:13:08.113Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/a1/0b/f8524551ab2d896dfaca74ddb70a4453d515bbf4ab5451c100c7788ae155/transformers-5.5.3-py3-none-any.whl", hash = "sha256:e48f3ec31dd96505e96e66b63a1e43e1ad7a65749e108d9227caaf51051cdb02", size = 10236257, upload-time = "2026-04-09T15:52:52.866Z" },
{ url = "https://files.pythonhosted.org/packages/e7/28/35f7411ff80a3640c1f4fc907dcbb6a65061ebb82f66950e38bfc9f7f740/transformers-5.5.0-py3-none-any.whl", hash = "sha256:821a9ff0961abbb29eb1eb686d78df1c85929fdf213a3fe49dc6bd94f9efa944", size = 10245591, upload-time = "2026-04-02T16:13:03.462Z" },
]

[[package]]
@@ -8812,8 +8626,7 @@ all-docs = [
{ name = "pypdf" },
{ name = "python-docx" },
{ name = "python-pptx" },
{ name = "unstructured-inference", version = "1.2.0", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.12'" },
{ name = "unstructured-inference", version = "1.6.6", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.12'" },
{ name = "unstructured-inference" },
{ name = "unstructured-pytesseract" },
{ name = "xlrd" },
]
@@ -8836,8 +8649,7 @@ local-inference = [
{ name = "pypdf" },
{ name = "python-docx" },
{ name = "python-pptx" },
{ name = "unstructured-inference", version = "1.2.0", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.12'" },
{ name = "unstructured-inference", version = "1.6.6", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.12'" },
{ name = "unstructured-inference" },
{ name = "unstructured-pytesseract" },
{ name = "xlrd" },
]
@@ -8865,67 +8677,31 @@ wheels = [
name = "unstructured-inference"
version = "1.2.0"
source = { registry = "https://pypi.org/simple" }
resolution-markers = [
"python_full_version == '3.11.*' and platform_machine != 's390x'",
"python_full_version == '3.11.*' and platform_machine == 's390x'",
"python_full_version < '3.11' and platform_machine != 's390x'",
"python_full_version < '3.11' and platform_machine == 's390x'",
]
dependencies = [
{ name = "accelerate", marker = "python_full_version < '3.12'" },
{ name = "huggingface-hub", marker = "python_full_version < '3.12'" },
{ name = "matplotlib", marker = "python_full_version < '3.12'" },
{ name = "accelerate" },
{ name = "huggingface-hub" },
{ name = "matplotlib" },
{ name = "numpy", version = "2.2.6", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.11'" },
{ name = "numpy", version = "2.4.4", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version == '3.11.*'" },
{ name = "onnx", marker = "python_full_version < '3.12'" },
{ name = "numpy", version = "2.4.4", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.11'" },
{ name = "onnx" },
{ name = "onnxruntime", marker = "python_full_version < '3.11'" },
{ name = "opencv-python", marker = "python_full_version < '3.12'" },
{ name = "pandas", marker = "python_full_version < '3.12'" },
{ name = "pdfminer-six", marker = "python_full_version < '3.12'" },
{ name = "pypdfium2", marker = "python_full_version < '3.12'" },
{ name = "python-multipart", marker = "python_full_version < '3.12'" },
{ name = "rapidfuzz", marker = "python_full_version < '3.12'" },
{ name = "opencv-python" },
{ name = "pandas" },
{ name = "pdfminer-six" },
{ name = "pypdfium2" },
{ name = "python-multipart" },
{ name = "rapidfuzz" },
{ name = "scipy", version = "1.15.3", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.11'" },
{ name = "scipy", version = "1.17.1", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version == '3.11.*'" },
{ name = "timm", marker = "python_full_version < '3.12'" },
{ name = "torch", marker = "python_full_version < '3.12'" },
{ name = "transformers", marker = "python_full_version < '3.12'" },
{ name = "scipy", version = "1.17.1", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.11'" },
{ name = "timm" },
{ name = "torch" },
{ name = "transformers" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ce/10/8f3bccfa9f1e0101a402ae1f529e07876541c6b18004747f0e793ed41f9e/unstructured_inference-1.2.0.tar.gz", hash = "sha256:19ca28512f3649c70a759cf2a4e98663e942a1b83c1acdb9506b0445f4862f23", size = 45732, upload-time = "2026-01-30T20:57:58.019Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/2d/3b/349cd091b590a6f1dbfebcb5fee0ea7b0b6ef6520df58794c9582567a24f/unstructured_inference-1.2.0-py3-none-any.whl", hash = "sha256:60a1635aa8e97a9e7daed1a129836f51c26588e0d2062c9cc6a5a17e6d40cb6a", size = 49443, upload-time = "2026-01-30T20:57:56.617Z" },
]

[[package]]
name = "unstructured-inference"
version = "1.6.6"
source = { registry = "https://pypi.org/simple" }
resolution-markers = [
"python_full_version >= '3.13' and platform_machine != 's390x'",
"python_full_version >= '3.13' and platform_machine == 's390x'",
"python_full_version == '3.12.*' and platform_machine != 's390x'",
"python_full_version == '3.12.*' and platform_machine == 's390x'",
]
dependencies = [
{ name = "accelerate", marker = "python_full_version >= '3.12'" },
{ name = "huggingface-hub", marker = "python_full_version >= '3.12'" },
{ name = "matplotlib", marker = "python_full_version >= '3.12'" },
{ name = "numpy", version = "2.4.4", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.12'" },
{ name = "onnx", marker = "python_full_version >= '3.12'" },
{ name = "opencv-python", marker = "python_full_version >= '3.12'" },
{ name = "pandas", marker = "python_full_version >= '3.12'" },
{ name = "pypdfium2", marker = "python_full_version >= '3.12'" },
{ name = "rapidfuzz", marker = "python_full_version >= '3.12'" },
{ name = "scipy", version = "1.17.1", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version >= '3.12'" },
{ name = "timm", marker = "python_full_version >= '3.12'" },
{ name = "torch", marker = "python_full_version >= '3.12'" },
{ name = "transformers", marker = "python_full_version >= '3.12'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/d3/e3/6c98caf4965e07eb0153dc2b4457ec6fb1cfef336411add4acd3b28c697c/unstructured_inference-1.6.6.tar.gz", hash = "sha256:f14745daef4c37f785d4edb6c3d3834c7414d9d5abd47ca0e377ca60c624d225", size = 47024, upload-time = "2026-04-09T19:58:52.292Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/7e/5b/bd4aa4d16446fbc79bea07b22c19c8f8b578c8f1dd73745d152511c17a5a/unstructured_inference-1.6.6-py3-none-any.whl", hash = "sha256:ac472f341407b2ea14d1b63074080af840b9badeefdcd90ea38feb22b4928e5a", size = 54286, upload-time = "2026-04-09T19:58:50.858Z" },
]

[[package]]
name = "unstructured-pytesseract"
version = "0.3.15"
@@ -8979,28 +8755,28 @@ wheels = [
[[package]]
name = "uv"
version = "0.11.6"
version = "0.9.30"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/dd/f3/8aceeab67ea69805293ab290e7ca8cc1b61a064d28b8a35c76d8eba063dd/uv-0.11.6.tar.gz", hash = "sha256:e3b21b7e80024c95ff339fcd147ac6fc3dd98d3613c9d45d3a1f4fd1057f127b", size = 4073298, upload-time = "2026-04-09T12:09:01.738Z" }
sdist = { url = "https://files.pythonhosted.org/packages/4e/a0/63cea38fe839fb89592728b91928ee6d15705f1376a7940fee5bbc77fea0/uv-0.9.30.tar.gz", hash = "sha256:03ebd4b22769e0a8d825fa09d038e31cbab5d3d48edf755971cb0cec7920ab95", size = 3846526, upload-time = "2026-02-04T21:45:37.58Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/1f/fe/4b61a3d5ad9d02e8a4405026ccd43593d7044598e0fa47d892d4dafe44c9/uv-0.11.6-py3-none-linux_armv6l.whl", hash = "sha256:ada04dcf89ddea5b69d27ac9cdc5ef575a82f90a209a1392e930de504b2321d6", size = 23780079, upload-time = "2026-04-09T12:08:56.609Z" },
{ url = "https://files.pythonhosted.org/packages/52/db/d27519a9e1a5ffee9d71af1a811ad0e19ce7ab9ae815453bef39dd479389/uv-0.11.6-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:5be013888420f96879c6e0d3081e7bcf51b539b034a01777041934457dfbedf3", size = 23214721, upload-time = "2026-04-09T12:09:32.228Z" },
{ url = "https://files.pythonhosted.org/packages/a6/8f/4399fa8b882bd7e0efffc829f73ab24d117d490a93e6bc7104a50282b854/uv-0.11.6-py3-none-macosx_11_0_arm64.whl", hash = "sha256:ffa5dc1cbb52bdce3b8447e83d1601a57ad4da6b523d77d4b47366db8b1ceb18", size = 21750109, upload-time = "2026-04-09T12:09:24.357Z" },
{ url = "https://files.pythonhosted.org/packages/32/07/5a12944c31c3dda253632da7a363edddb869ed47839d4d92a2dc5f546c93/uv-0.11.6-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.musllinux_1_1_aarch64.whl", hash = "sha256:bfb107b4dade1d2c9e572992b06992d51dd5f2136eb8ceee9e62dd124289e825", size = 23551146, upload-time = "2026-04-09T12:09:10.439Z" },
{ url = "https://files.pythonhosted.org/packages/79/5b/2ec8b0af80acd1016ed596baf205ddc77b19ece288473b01926c4a9cf6db/uv-0.11.6-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.musllinux_1_1_armv7l.whl", hash = "sha256:9e2fe7ce12161d8016b7deb1eaad7905a76ff7afec13383333ca75e0c4b5425d", size = 23331192, upload-time = "2026-04-09T12:09:34.792Z" },
{ url = "https://files.pythonhosted.org/packages/62/7d/eea35935f2112b21c296a3e42645f3e4b1aa8bcd34dcf13345fbd55134b7/uv-0.11.6-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:7ed9c6f70c25e8dfeedddf4eddaf14d353f5e6b0eb43da9a14d3a1033d51d915", size = 23337686, upload-time = "2026-04-09T12:09:18.522Z" },
{ url = "https://files.pythonhosted.org/packages/21/47/2584f5ab618f6ebe9bdefb2f765f2ca8540e9d739667606a916b35449eec/uv-0.11.6-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d68a013e609cebf82077cbeeb0809ed5e205257814273bfd31e02fc0353bbfc2", size = 25008139, upload-time = "2026-04-09T12:09:03.983Z" },
{ url = "https://files.pythonhosted.org/packages/95/81/497ae5c1d36355b56b97dc59f550c7e89d0291c163a3f203c6f341dff195/uv-0.11.6-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:93f736dddca03dae732c6fdea177328d3bc4bf137c75248f3d433c57416a4311", size = 25712458, upload-time = "2026-04-09T12:09:07.598Z" },
{ url = "https://files.pythonhosted.org/packages/3c/1c/74083238e4fab2672b63575b9008f1ea418b02a714bcfcf017f4f6a309b6/uv-0.11.6-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e96a66abe53fced0e3389008b8d2eff8278cfa8bb545d75631ae8ceb9c929aba", size = 24915507, upload-time = "2026-04-09T12:08:50.892Z" },
{ url = "https://files.pythonhosted.org/packages/5a/ee/e14fe10ba455a823ed18233f12de6699a601890905420b5c504abf115116/uv-0.11.6-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0b096311b2743b228df911a19532b3f18fa420bf9530547aecd6a8e04bbfaccd", size = 24971011, upload-time = "2026-04-09T12:08:54.016Z" },
{ url = "https://files.pythonhosted.org/packages/3c/a1/7b9c83eaadf98e343317ff6384a7227a4855afd02cdaf9696bcc71ee6155/uv-0.11.6-py3-none-manylinux_2_28_aarch64.whl", hash = "sha256:904d537b4a6e798015b4a64ff5622023bd4601b43b6cd1e5f423d63471f5e948", size = 23640234, upload-time = "2026-04-09T12:09:15.735Z" },
{ url = "https://files.pythonhosted.org/packages/d6/51/75ccdd23e76ff1703b70eb82881cd5b4d2a954c9679f8ef7e0136ef2cfab/uv-0.11.6-py3-none-manylinux_2_31_riscv64.musllinux_1_1_riscv64.whl", hash = "sha256:4ed8150c26b5e319381d75ae2ce6aba1e9c65888f4850f4e3b3fa839953c90a5", size = 24452664, upload-time = "2026-04-09T12:09:26.875Z" },
{ url = "https://files.pythonhosted.org/packages/4d/86/ace80fe47d8d48b5e3b5aee0b6eb1a49deaacc2313782870250b3faa36f5/uv-0.11.6-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:1c9218c8d4ac35ca6e617fb0951cc0ab2d907c91a6aea2617de0a5494cf162c0", size = 24494599, upload-time = "2026-04-09T12:09:37.368Z" },
{ url = "https://files.pythonhosted.org/packages/05/2d/4b642669b56648194f026de79bc992cbfc3ac2318b0a8d435f3c284934e8/uv-0.11.6-py3-none-musllinux_1_1_i686.whl", hash = "sha256:9e211c83cc890c569b86a4183fcf5f8b6f0c7adc33a839b699a98d30f1310d3a", size = 24159150, upload-time = "2026-04-09T12:09:13.17Z" },
{ url = "https://files.pythonhosted.org/packages/ae/24/7eecd76fe983a74fed1fc700a14882e70c4e857f1d562a9f2303d4286c12/uv-0.11.6-py3-none-musllinux_1_1_x86_64.whl", hash = "sha256:d2a1d2089afdf117ad19a4c1dd36b8189c00ae1ad4135d3bfbfced82342595cf", size = 25164324, upload-time = "2026-04-09T12:08:59.56Z" },
{ url = "https://files.pythonhosted.org/packages/27/e0/bbd4ba7c2e5067bbba617d87d306ec146889edaeeaa2081d3e122178ca08/uv-0.11.6-py3-none-win32.whl", hash = "sha256:6e8344f38fa29f85dcfd3e62dc35a700d2448f8e90381077ef393438dcd5012e", size = 22865693, upload-time = "2026-04-09T12:09:21.415Z" },
{ url = "https://files.pythonhosted.org/packages/a5/33/1983ce113c538a856f2d620d16e39691962ecceef091a84086c5785e32e5/uv-0.11.6-py3-none-win_amd64.whl", hash = "sha256:a28bea69c1186303d1200f155c7a28c449f8a4431e458fcf89360cc7ef546e40", size = 25371258, upload-time = "2026-04-09T12:09:40.52Z" },
{ url = "https://files.pythonhosted.org/packages/35/01/be0873f44b9c9bc250fcbf263367fcfc1f59feab996355bcb6b52fff080d/uv-0.11.6-py3-none-win_arm64.whl", hash = "sha256:a78f6d64b9950e24061bc7ec7f15ff8089ad7f5a976e7b65fcadce58fe02f613", size = 23869585, upload-time = "2026-04-09T12:09:29.425Z" },
{ url = "https://files.pythonhosted.org/packages/a3/3c/71be72f125f0035348b415468559cc3b335ec219376d17a3d242d2bd9b23/uv-0.9.30-py3-none-linux_armv6l.whl", hash = "sha256:a5467dddae1cd5f4e093f433c0f0d9a0df679b92696273485ec91bbb5a8620e6", size = 21927585, upload-time = "2026-02-04T21:46:14.935Z" },
{ url = "https://files.pythonhosted.org/packages/0f/fd/8070b5423a77d4058d14e48a970aa075762bbff4c812dda3bb3171543e44/uv-0.9.30-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:6ec38ae29aa83a37c6e50331707eac8ecc90cf2b356d60ea6382a94de14973be", size = 21050392, upload-time = "2026-02-04T21:45:55.649Z" },
{ url = "https://files.pythonhosted.org/packages/42/5f/3ccc9415ef62969ed01829572338ea7bdf4c5cf1ffb9edc1f8cb91b571f3/uv-0.9.30-py3-none-macosx_11_0_arm64.whl", hash = "sha256:777ecd117cf1d8d6bb07de8c9b7f6c5f3e802415b926cf059d3423699732eb8c", size = 19817085, upload-time = "2026-02-04T21:45:40.881Z" },
{ url = "https://files.pythonhosted.org/packages/8b/3f/76b44e2a224f4c4a8816fc92686ef6d4c2656bc5fc9d4f673816162c994d/uv-0.9.30-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.musllinux_1_1_aarch64.whl", hash = "sha256:93049ba3c41fa2cc38b467cb78ef61b2ddedca34b6be924a5481d7750c8111c6", size = 21620537, upload-time = "2026-02-04T21:45:47.846Z" },
{ url = "https://files.pythonhosted.org/packages/60/2a/50f7e8c6d532af8dd327f77bdc75ce4652322ac34f5e29f79a8e04ea3cc8/uv-0.9.30-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.musllinux_1_1_armv7l.whl", hash = "sha256:f295604fee71224ebe2685a0f1f4ff7a45c77211a60bd57133a4a02056d7c775", size = 21550855, upload-time = "2026-02-04T21:46:26.269Z" },
{ url = "https://files.pythonhosted.org/packages/0e/10/f823d4af1125fae559194b356757dc7d4a8ac79d10d11db32c2d4c9e2f63/uv-0.9.30-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:2faf84e1f3b6fc347a34c07f1291d11acf000b0dd537a61d541020f22b17ccd9", size = 21516576, upload-time = "2026-02-04T21:46:03.494Z" },
{ url = "https://files.pythonhosted.org/packages/91/f3/64b02db11f38226ed34458c7fbdb6f16b6d4fd951de24c3e51acf02b30f8/uv-0.9.30-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0b3b3700ecf64a09a07fd04d10ec35f0973ec15595d38bbafaa0318252f7e31f", size = 22718097, upload-time = "2026-02-04T21:45:51.875Z" },
{ url = "https://files.pythonhosted.org/packages/28/21/a48d1872260f04a68bb5177b0f62ddef62ab892d544ed1922f2d19fd2b00/uv-0.9.30-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:b176fc2937937dd81820445cb7e7e2e3cd1009a003c512f55fa0ae10064c8a38", size = 24107844, upload-time = "2026-02-04T21:46:19.032Z" },
{ url = "https://files.pythonhosted.org/packages/1c/c6/d7e5559bfe1ab7a215a7ad49c58c8a5701728f2473f7f436ef00b4664e88/uv-0.9.30-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:180e8070b8c438b9a3fb3fde8a37b365f85c3c06e17090f555dc68fdebd73333", size = 23685378, upload-time = "2026-02-04T21:46:07.166Z" },
{ url = "https://files.pythonhosted.org/packages/a8/bf/b937bbd50d14c6286e353fd4c7bdc09b75f6b3a26bd4e2f3357e99891f28/uv-0.9.30-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4125a9aa2a751e1589728f6365cfe204d1be41499148ead44b6180b7df576f27", size = 22848471, upload-time = "2026-02-04T21:45:18.728Z" },
{ url = "https://files.pythonhosted.org/packages/6a/57/12a67c569e69b71508ad669adad266221f0b1d374be88eaf60109f551354/uv-0.9.30-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4366dd740ac9ad3ec50a58868a955b032493bb7d7e6ed368289e6ced8bbc70f3", size = 22774258, upload-time = "2026-02-04T21:46:10.798Z" },
{ url = "https://files.pythonhosted.org/packages/3d/b8/a26cc64685dddb9fb13f14c3dc1b12009f800083405f854f84eb8c86b494/uv-0.9.30-py3-none-manylinux_2_28_aarch64.whl", hash = "sha256:33e50f208e01a0c20b3c5f87d453356a5cbcfd68f19e47a28b274cd45618881c", size = 21699573, upload-time = "2026-02-04T21:45:44.365Z" },
{ url = "https://files.pythonhosted.org/packages/c8/59/995af0c5f0740f8acb30468e720269e720352df1d204e82c2d52d9a8c586/uv-0.9.30-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:5e7a6fa7a3549ce893cf91fe4b06629e3e594fc1dca0a6050aba2ea08722e964", size = 22460799, upload-time = "2026-02-04T21:45:26.658Z" },
{ url = "https://files.pythonhosted.org/packages/bb/0b/6affe815ecbaebf38b35d6230fbed2f44708c67d5dd5720f81f2ec8f96ff/uv-0.9.30-py3-none-musllinux_1_1_i686.whl", hash = "sha256:62d7e408d41e392b55ffa4cf9b07f7bbd8b04e0929258a42e19716c221ac0590", size = 22001777, upload-time = "2026-02-04T21:45:34.656Z" },
{ url = "https://files.pythonhosted.org/packages/f3/b6/47a515171c891b0d29f8e90c8a1c0e233e4813c95a011799605cfe04c74c/uv-0.9.30-py3-none-musllinux_1_1_x86_64.whl", hash = "sha256:6dc65c24f5b9cdc78300fa6631368d3106e260bbffa66fb1e831a318374da2df", size = 22968416, upload-time = "2026-02-04T21:45:22.863Z" },
{ url = "https://files.pythonhosted.org/packages/3d/3a/c1df8615385138bb7c43342586431ca32b77466c5fb086ac0ed14ab6ca28/uv-0.9.30-py3-none-win32.whl", hash = "sha256:74e94c65d578657db94a753d41763d0364e5468ec0d368fb9ac8ddab0fb6e21f", size = 20889232, upload-time = "2026-02-04T21:46:22.617Z" },
{ url = "https://files.pythonhosted.org/packages/f2/a8/e8761c8414a880d70223723946576069e042765475f73b4436d78b865dba/uv-0.9.30-py3-none-win_amd64.whl", hash = "sha256:88a2190810684830a1ba4bb1cf8fb06b0308988a1589559404259d295260891c", size = 23432208, upload-time = "2026-02-04T21:45:30.85Z" },
{ url = "https://files.pythonhosted.org/packages/49/e8/6f2ebab941ec559f97110bbbae1279cd0333d6bc352b55f6fa3fefb020d9/uv-0.9.30-py3-none-win_arm64.whl", hash = "sha256:7fde83a5b5ea027315223c33c30a1ab2f2186910b933d091a1b7652da879e230", size = 21887273, upload-time = "2026-02-04T21:45:59.787Z" },
]
[[package]]
@@ -9086,7 +8862,7 @@ wheels = [
[[package]]
name = "virtualenv"
-version = "21.2.1"
+version = "21.2.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "distlib" },
@@ -9095,9 +8871,9 @@ dependencies = [
{ name = "python-discovery" },
{ name = "typing-extensions", marker = "python_full_version < '3.11'" },
]
-sdist = { url = "https://files.pythonhosted.org/packages/97/c5/aff062c66b42e2183201a7ace10c6b2e959a9a16525c8e8ca8e59410d27a/virtualenv-21.2.1.tar.gz", hash = "sha256:b66ffe81301766c0d5e2208fc3576652c59d44e7b731fc5f5ed701c9b537fa78", size = 5844770, upload-time = "2026-04-09T18:47:11.482Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/aa/92/58199fe10049f9703c2666e809c4f686c54ef0a68b0f6afccf518c0b1eb9/virtualenv-21.2.0.tar.gz", hash = "sha256:1720dc3a62ef5b443092e3f499228599045d7fea4c79199770499df8becf9098", size = 5840618, upload-time = "2026-03-09T17:24:38.013Z" }
wheels = [
-    { url = "https://files.pythonhosted.org/packages/20/0e/f083a76cb590e60dff3868779558eefefb8dfb7c9ed020babc7aa014ccbf/virtualenv-21.2.1-py3-none-any.whl", hash = "sha256:bd16b49c53562b28cf1a3ad2f36edb805ad71301dee70ddc449e5c88a9f919a2", size = 5828326, upload-time = "2026-04-09T18:47:09.331Z" },
+    { url = "https://files.pythonhosted.org/packages/c6/59/7d02447a55b2e55755011a647479041bc92a82e143f96a8195cb33bd0a1c/virtualenv-21.2.0-py3-none-any.whl", hash = "sha256:1bd755b504931164a5a496d217c014d098426cddc79363ad66ac78125f9d908f", size = 5825084, upload-time = "2026-03-09T17:24:35.378Z" },
]
[[package]]