Mirror of https://github.com/crewAIInc/crewAI.git, synced 2026-05-12 20:48:15 +00:00

Compare commits: 13 commits, devin/1776 ... devin/1776

| Author | SHA1 | Date |
|---|---|---|
| | 316cf8c648 | |
| | 7fcd6db48a | |
| | f06c2c3c4f | |
| | 0b120fac90 | |
| | f879909526 | |
| | c9b0004d0e | |
| | a8994347b0 | |
| | 5ca62c20f2 | |
| | 11989da4b1 | |
| | 19ac7d2f64 | |
| | 2f48937ce4 | |
| | c5192b970c | |
| | 54391fdbdf | |
@@ -4,6 +4,45 @@ description: "تحديثات المنتج والتحسينات وإصلاحات
icon: "clock"
mode: "wide"
---
<Update label="Apr 17, 2026">
## v1.14.2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2)

## What's Changed

### Features
- Add checkpoint resume, diff, and prune commands with improved discoverability.
- Add a `from_checkpoint` parameter to `Agent.kickoff` and related methods.
- Add template management commands for project templates.
- Add resume hints to the devtools release on failure.
- Add a deploy validation CLI and improve LLM initialization ergonomics.
- Add checkpoint forking with lineage tracking.
- Enrich LLM token tracking with reasoning tokens and cache creation tokens.

### Bug Fixes
- Fix the prompt on stale branch conflicts in the devtools release.
- Patch vulnerabilities in `authlib`, `langchain-text-splitters`, and `pypdf`.
- Scope streaming handlers to prevent cross-run chunk contamination.
- Dispatch Flow checkpoints through Flow APIs in the TUI.
- Use a recursive glob for JSON checkpoint discovery.
- Handle cyclic JSON schemas in MCP tool resolution.
- Preserve Bedrock tool call arguments by removing a truthy default.
- Emit the flow_finished event after HITL resume.
- Fix various vulnerabilities by updating dependencies, including `requests`, `cryptography`, and `pytest`.
- Stop forwarding strict mode to the Bedrock Converse API.

### Documentation
- Document missing parameters and add a Checkpointing section.
- Update the changelog and version for v1.14.2 and previous release candidates.
- Add enterprise A2A feature documentation and update the OSS A2A docs.

## Contributors

@Yanhu007, @alex-clawd, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @lucasgomide

</Update>

<Update label="Apr 16, 2026">
## v1.14.2rc1
docs/docs.json (1880 lines): file diff suppressed because it is too large.
@@ -4,6 +4,45 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="Apr 17, 2026">
## v1.14.2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2)

## What's Changed

### Features
- Add checkpoint resume, diff, and prune commands with improved discoverability.
- Add `from_checkpoint` parameter to `Agent.kickoff` and related methods.
- Add template management commands for project templates.
- Add resume hints to devtools release on failure.
- Add deploy validation CLI and enhance LLM initialization ergonomics.
- Add checkpoint forking with lineage tracking.
- Enrich LLM token tracking with reasoning tokens and cache creation tokens.

### Bug Fixes
- Fix prompt on stale branch conflicts in devtools release.
- Patch vulnerabilities in `authlib`, `langchain-text-splitters`, and `pypdf`.
- Scope streaming handlers to prevent cross-run chunk contamination.
- Dispatch Flow checkpoints through Flow APIs in TUI.
- Use recursive glob for JSON checkpoint discovery.
- Handle cyclic JSON schemas in MCP tool resolution.
- Preserve Bedrock tool call arguments by removing truthy default.
- Emit flow_finished event after HITL resume.
- Fix various vulnerabilities by updating dependencies, including `requests`, `cryptography`, and `pytest`.
- Stop forwarding strict mode to the Bedrock Converse API.

### Documentation
- Document missing parameters and add Checkpointing section.
- Update changelog and version for v1.14.2 and previous release candidates.
- Add enterprise A2A feature documentation and update OSS A2A docs.

## Contributors

@Yanhu007, @alex-clawd, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @lucasgomide

</Update>

<Update label="Apr 16, 2026">
## v1.14.2rc1
@@ -33,7 +33,14 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated, before each Crew iteration all Crew data is sent to an AgentPlanner that plans the tasks, and the resulting plan is added to each task description. |
| **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner in the planning process. |
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | Knowledge sources available at the crew level, accessible to all the agents. |
| **Stream** _(optional)_ | `stream` | Enable streaming output to receive real-time updates during crew execution. Returns a `CrewStreamingOutput` object that can be iterated for chunks. Defaults to `False`. |
| **Chat LLM** _(optional)_ | `chat_llm` | The language model used to orchestrate `crewai chat` CLI interactions with the crew. Accepts a model name string or `LLM` instance. Defaults to `None`. |
| **Before Kickoff Callbacks** _(optional)_ | `before_kickoff_callbacks` | A list of callable functions executed **before** the crew starts. Each callback receives and can modify the inputs dict. Distinct from the `@before_kickoff` decorator. Defaults to `[]`. |
| **After Kickoff Callbacks** _(optional)_ | `after_kickoff_callbacks` | A list of callable functions executed **after** the crew finishes. Each callback receives and can modify the `CrewOutput`. Distinct from the `@after_kickoff` decorator. Defaults to `[]`. |
| **Tracing** _(optional)_ | `tracing` | Controls OpenTelemetry tracing for the crew. `True` = always enable, `False` = always disable, `None` = inherit from environment / user settings. Defaults to `None`. |
| **Skills** _(optional)_ | `skills` | A list of `Path` objects (skill search directories) or pre-loaded `Skill` objects applied to all agents in the crew. Defaults to `None`. |
| **Security Config** _(optional)_ | `security_config` | A `SecurityConfig` instance managing crew fingerprinting and identity. Defaults to `SecurityConfig()`. |
| **Checkpoint** _(optional)_ | `checkpoint` | Enables automatic checkpointing. Pass `True` for sensible defaults, a `CheckpointConfig` for full control, `False` to opt out, or `None` to inherit. See the [Checkpointing](#checkpointing) section below. Defaults to `None`. |

<Tip>
**Crew Max RPM**: The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits, and it will override individual agents' `max_rpm` settings if set.
@@ -271,6 +278,72 @@ crew = Crew(output_log_file = file_name.json) # Logs will be saved as file_name

## Checkpointing

Checkpointing lets a crew automatically save its state after key events (e.g. task completion) so that long-running or interrupted runs can be resumed exactly where they left off without re-executing completed tasks.

### Quick Start

Pass `checkpoint=True` to enable checkpointing with sensible defaults (saves to `.checkpoints/` after every task):

```python Code
from crewai import Crew, Process

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,
    checkpoint=True,  # saves to .checkpoints/ after every task
)

crew.kickoff(inputs={"topic": "AI trends"})
```

### Full Control with `CheckpointConfig`

Use `CheckpointConfig` for fine-grained control over location, trigger events, storage backend, and retention:

```python Code
from crewai import Crew, Process
from crewai.state.checkpoint_config import CheckpointConfig

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,
    checkpoint=CheckpointConfig(
        location="./.checkpoints",     # directory for JSON files (default)
        on_events=["task_completed"],  # trigger after each task (default)
        max_checkpoints=5,             # keep only the 5 most recent checkpoints
    ),
)

crew.kickoff(inputs={"topic": "AI trends"})
```

### Resuming from a Checkpoint

Use `Crew.from_checkpoint()` to restore a crew from a saved checkpoint file, then call `kickoff()` to resume:

```python Code
# Resume from the most recent checkpoint
crew = Crew.from_checkpoint(".checkpoints/latest.json")
crew.kickoff()
```

<Note>
When restoring from a checkpoint, `checkpoint_inputs`, `checkpoint_train`, and `checkpoint_kickoff_event_id` are automatically reconstructed; you do not need to set these manually.
</Note>

### `CheckpointConfig` Attributes

| Attribute | Type | Default | Description |
| :--- | :--- | :--- | :--- |
| `location` | `str` | `"./.checkpoints"` | Storage destination. For `JsonProvider` this is a directory path; for `SqliteProvider`, a database file path. |
| `on_events` | `list[str]` | `["task_completed"]` | Event types that trigger a checkpoint write. Use `["*"]` to checkpoint on every event. |
| `provider` | `JsonProvider \| SqliteProvider` | `JsonProvider()` | Storage backend. Defaults to `JsonProvider` (plain JSON files). |
| `max_checkpoints` | `int \| None` | `None` | Maximum number of checkpoints to keep. The oldest are pruned after each write. `None` keeps all. |
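The `max_checkpoints` retention policy (keep the newest N, prune the rest after each write) can be illustrated with plain file operations. This is a simplified, hypothetical sketch of the behavior, not CrewAI's actual provider code:

```python
import os
import tempfile
from pathlib import Path

def prune_oldest(directory: str, max_checkpoints: int) -> list[str]:
    """Keep only the newest `max_checkpoints` JSON files; return deleted names."""
    files = sorted(
        Path(directory).glob("*.json"),
        key=lambda p: p.stat().st_mtime,
        reverse=True,  # newest first
    )
    deleted = []
    for stale in files[max_checkpoints:]:
        stale.unlink()
        deleted.append(stale.name)
    return sorted(deleted)

# Demo: five checkpoint files with increasing mtimes, keep the three newest.
tmpdir = tempfile.mkdtemp()
for i in range(5):
    p = Path(tmpdir, f"ckpt_{i}.json")
    p.write_text("{}")
    os.utime(p, (i, i))  # force deterministic, increasing mtimes
removed = prune_oldest(tmpdir, max_checkpoints=3)
print(removed)  # ['ckpt_0.json', 'ckpt_1.json']
```

Sorting by modification time rather than filename keeps the policy independent of any checkpoint naming scheme.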
## Memory Utilization

Crews can utilize memory (short-term, long-term, and entity memory) to enhance their execution and learning over time. This feature allows crews to store and recall execution memories, aiding in decision-making and task execution strategies.
@@ -4,6 +4,45 @@ description: "CrewAI의 제품 업데이트, 개선 사항 및 버그 수정"
icon: "clock"
mode: "wide"
---
<Update label="Apr 17, 2026">
## v1.14.2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2)

## What's Changed

### Features
- Add checkpoint resume, diff, and prune commands with improved discoverability.
- Add a `from_checkpoint` parameter to `Agent.kickoff` and related methods.
- Add template management commands for project templates.
- Add resume hints to the devtools release on failure.
- Add a deploy validation CLI and improve LLM initialization ergonomics.
- Add checkpoint forking with lineage tracking.
- Enrich LLM token tracking with reasoning tokens and cache creation tokens.

### Bug Fixes
- Fix the prompt on stale branch conflicts in the devtools release.
- Patch vulnerabilities in `authlib`, `langchain-text-splitters`, and `pypdf`.
- Scope streaming handlers to prevent cross-run chunk contamination.
- Dispatch Flow checkpoints through Flow APIs in the TUI.
- Use a recursive glob for JSON checkpoint discovery.
- Handle cyclic JSON schemas in MCP tool resolution.
- Preserve Bedrock tool call arguments by removing a truthy default.
- Emit the flow_finished event after HITL resume.
- Fix various vulnerabilities by updating dependencies, including `requests`, `cryptography`, and `pytest`.
- Stop forwarding strict mode to the Bedrock Converse API.

### Documentation
- Document missing parameters and add a Checkpointing section.
- Update the changelog and version for v1.14.2 and previous release candidates.
- Add enterprise A2A feature documentation and update the OSS A2A docs.

## Contributors

@Yanhu007, @alex-clawd, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @lucasgomide

</Update>

<Update label="Apr 16, 2026">
## v1.14.2rc1
@@ -4,6 +4,45 @@ description: "Atualizações de produto, melhorias e correções do CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="Apr 17, 2026">
## v1.14.2

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2)

## What's Changed

### Features
- Add checkpoint resume, diff, and prune commands with improved discoverability.
- Add a `from_checkpoint` parameter to `Agent.kickoff` and related methods.
- Add template management commands for project templates.
- Add resume hints to the devtools release on failure.
- Add a deploy validation CLI and improve LLM initialization ergonomics.
- Add checkpoint forking with lineage tracking.
- Enrich LLM token tracking with reasoning tokens and cache creation tokens.

### Bug Fixes
- Fix the prompt on stale branch conflicts in the devtools release.
- Patch vulnerabilities in `authlib`, `langchain-text-splitters`, and `pypdf`.
- Scope streaming handlers to prevent cross-run chunk contamination.
- Dispatch Flow checkpoints through Flow APIs in the TUI.
- Use a recursive glob for JSON checkpoint discovery.
- Handle cyclic JSON schemas in MCP tool resolution.
- Preserve Bedrock tool call arguments by removing a truthy default.
- Emit the flow_finished event after HITL resume.
- Fix various vulnerabilities by updating dependencies, including `requests`, `cryptography`, and `pytest`.
- Stop forwarding strict mode to the Bedrock Converse API.

### Documentation
- Document missing parameters and add a Checkpointing section.
- Update the changelog and version for v1.14.2 and previous release candidates.
- Add enterprise A2A feature documentation and update the OSS A2A docs.

## Contributors

@Yanhu007, @alex-clawd, @github-actions[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @lucasgomide

</Update>

<Update label="Apr 16, 2026">
## v1.14.2rc1
@@ -152,4 +152,4 @@ __all__ = [
    "wrap_file_source",
]

-__version__ = "1.14.2rc1"
+__version__ = "1.14.2"

@@ -10,7 +10,7 @@ requires-python = ">=3.10, <3.14"
dependencies = [
    "pytube~=15.0.0",
    "requests>=2.33.0,<3",
-   "crewai==1.14.2rc1",
+   "crewai==1.14.2",
    "tiktoken~=0.8.0",
    "beautifulsoup4~=4.13.4",
    "python-docx~=1.2.0",

@@ -305,4 +305,4 @@ __all__ = [
    "ZapierActionTools",
]

-__version__ = "1.14.2rc1"
+__version__ = "1.14.2"

@@ -55,7 +55,7 @@ Repository = "https://github.com/crewAIInc/crewAI"

[project.optional-dependencies]
tools = [
-    "crewai-tools==1.14.2rc1",
+    "crewai-tools==1.14.2",
]
embeddings = [
    "tiktoken~=0.8.0"

@@ -46,7 +46,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:

_suppress_pydantic_deprecation_warnings()

-__version__ = "1.14.2rc1"
+__version__ = "1.14.2"
_telemetry_submitted = False
@@ -84,6 +84,7 @@ from crewai.rag.embeddings.types import EmbedderConfig
from crewai.security.fingerprint import Fingerprint
from crewai.skills.loader import activate_skill, discover_skills
from crewai.skills.models import INSTRUCTIONS, Skill as SkillModel
from crewai.state.checkpoint_config import CheckpointConfig, apply_checkpoint
from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.types.callback import SerializableCallable
from crewai.utilities.agent_utils import (

@@ -1457,6 +1458,7 @@ class Agent(BaseAgent):
        messages: str | list[LLMMessage],
        response_format: type[Any] | None = None,
        input_files: dict[str, FileInput] | None = None,
        from_checkpoint: CheckpointConfig | None = None,
    ) -> LiteAgentOutput | Coroutine[Any, Any, LiteAgentOutput]:
        """Execute the agent with the given messages using the AgentExecutor.

@@ -1475,6 +1477,9 @@ class Agent(BaseAgent):
            response_format: Optional Pydantic model for structured output.
            input_files: Optional dict of named files to attach to the message.
                Files can be paths, bytes, or File objects from crewai_files.
            from_checkpoint: Optional checkpoint config. If ``restore_from``
                is set, the agent resumes from that checkpoint. Remaining
                config fields enable checkpointing for the run.

        Returns:
            LiteAgentOutput: The result of the agent execution.

@@ -1483,6 +1488,14 @@ class Agent(BaseAgent):
        Note:
            For explicit async usage outside of Flow, use kickoff_async() directly.
        """
        restored = apply_checkpoint(self, from_checkpoint)
        if restored is not None:
            return restored.kickoff(  # type: ignore[no-any-return]
                messages=messages,
                response_format=response_format,
                input_files=input_files,
            )

        if is_inside_event_loop():
            return self.kickoff_async(messages, response_format, input_files)

@@ -1760,6 +1773,7 @@ class Agent(BaseAgent):
        messages: str | list[LLMMessage],
        response_format: type[Any] | None = None,
        input_files: dict[str, FileInput] | None = None,
        from_checkpoint: CheckpointConfig | None = None,
    ) -> LiteAgentOutput:
        """Execute the agent asynchronously with the given messages.

@@ -1775,10 +1789,20 @@ class Agent(BaseAgent):
            response_format: Optional Pydantic model for structured output.
            input_files: Optional dict of named files to attach to the message.
                Files can be paths, bytes, or File objects from crewai_files.
            from_checkpoint: Optional checkpoint config. If ``restore_from``
                is set, the agent resumes from that checkpoint.

        Returns:
            LiteAgentOutput: The result of the agent execution.
        """
        restored = apply_checkpoint(self, from_checkpoint)
        if restored is not None:
            return await restored.kickoff_async(  # type: ignore[no-any-return]
                messages=messages,
                response_format=response_format,
                input_files=input_files,
            )

        executor, inputs, agent_info, parsed_tools = self._prepare_kickoff(
            messages, response_format, input_files
        )

@@ -1808,6 +1832,7 @@ class Agent(BaseAgent):
        messages: str | list[LLMMessage],
        response_format: type[Any] | None = None,
        input_files: dict[str, FileInput] | None = None,
        from_checkpoint: CheckpointConfig | None = None,
    ) -> LiteAgentOutput:
        """Async version of kickoff. Alias for kickoff_async.

@@ -1815,8 +1840,12 @@ class Agent(BaseAgent):
            messages: Either a string query or a list of message dictionaries.
            response_format: Optional Pydantic model for structured output.
            input_files: Optional dict of named files to attach to the message.
            from_checkpoint: Optional checkpoint config. If ``restore_from``
                is set, the agent resumes from that checkpoint.

        Returns:
            LiteAgentOutput: The result of the agent execution.
        """
-       return await self.kickoff_async(messages, response_format, input_files)
+       return await self.kickoff_async(
+           messages, response_format, input_files, from_checkpoint
+       )
@@ -2,7 +2,7 @@

from __future__ import annotations

-from datetime import datetime
+from datetime import datetime, timedelta, timezone
import glob
import json
import os

@@ -37,6 +37,26 @@ ORDER BY rowid DESC
LIMIT 1
"""

_DELETE_OLDER_THAN = """
DELETE FROM checkpoints
WHERE created_at < ?
"""

_DELETE_KEEP_N = """
DELETE FROM checkpoints WHERE rowid NOT IN (
    SELECT rowid FROM checkpoints ORDER BY rowid DESC LIMIT ?
)
"""

_COUNT_CHECKPOINTS = "SELECT COUNT(*) FROM checkpoints"

_SELECT_LIKE = """
SELECT id, created_at, json(data)
FROM checkpoints
WHERE id LIKE ?
ORDER BY rowid DESC
"""


_DEFAULT_DIR = "./.checkpoints"
_DEFAULT_DB = "./.checkpoints.db"
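The retention statement above can be sanity-checked in isolation. This standalone snippet uses an in-memory database and a minimal stand-in schema (the real table also stores the JSON checkpoint payload) to confirm that `_DELETE_KEEP_N` retains only the newest rows by `rowid`:

```python
import sqlite3

_DELETE_KEEP_N = """
DELETE FROM checkpoints WHERE rowid NOT IN (
    SELECT rowid FROM checkpoints ORDER BY rowid DESC LIMIT ?
)
"""

conn = sqlite3.connect(":memory:")
# Minimal stand-in schema; the real table also stores checkpoint data as JSON.
conn.execute("CREATE TABLE checkpoints (id TEXT, created_at TEXT)")
for i in range(5):
    conn.execute(
        "INSERT INTO checkpoints VALUES (?, ?)", (f"ckpt-{i}", f"2026041{i}T000000")
    )
conn.execute(_DELETE_KEEP_N, (2,))  # keep the two most recently inserted rows
remaining = [row[0] for row in conn.execute("SELECT id FROM checkpoints ORDER BY rowid")]
print(remaining)  # ['ckpt-3', 'ckpt-4']
```

Ordering by `rowid` rather than `created_at` makes "newest" mean insertion order, which sidesteps timestamp-format concerns for the keep-N path.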
@@ -86,17 +106,50 @@ def _parse_checkpoint_json(raw: str, source: str) -> dict[str, Any]:
            "name": entity.get("name"),
            "id": entity.get("id"),
        }

        raw_agents = entity.get("agents", [])
        agents_by_id: dict[str, dict[str, Any]] = {}
        parsed_agents: list[dict[str, Any]] = []
        for ag in raw_agents:
            agent_info: dict[str, Any] = {
                "id": ag.get("id", ""),
                "role": ag.get("role", ""),
                "goal": ag.get("goal", ""),
            }
            parsed_agents.append(agent_info)
            if ag.get("id"):
                agents_by_id[str(ag["id"])] = agent_info
        if parsed_agents:
            info["agents"] = parsed_agents

        if tasks:
            info["tasks_completed"] = completed
            info["tasks_total"] = len(tasks)
-           info["tasks"] = [
-               {
-                   "description": t.get("description", ""),
-                   "completed": t.get("output") is not None,
-                   "output": (t.get("output") or {}).get("raw", ""),
-               }
-               for t in tasks
-           ]
+           parsed_tasks: list[dict[str, Any]] = []
+           for t in tasks:
+               task_info: dict[str, Any] = {
+                   "description": t.get("description", ""),
+                   "completed": t.get("output") is not None,
+                   "output": (t.get("output") or {}).get("raw", ""),
+               }
+               task_agent = t.get("agent")
+               if isinstance(task_agent, dict):
+                   task_info["agent_role"] = task_agent.get("role", "")
+                   task_info["agent_id"] = task_agent.get("id", "")
+               elif isinstance(task_agent, str) and task_agent in agents_by_id:
+                   task_info["agent_role"] = agents_by_id[task_agent].get("role", "")
+                   task_info["agent_id"] = task_agent
+               parsed_tasks.append(task_info)
+           info["tasks"] = parsed_tasks

        if entity.get("entity_type") == "flow":
            completed_methods = entity.get("checkpoint_completed_methods")
            if completed_methods:
                info["completed_methods"] = sorted(completed_methods)
            state = entity.get("checkpoint_state")
            if isinstance(state, dict):
                info["flow_state"] = state

        parsed_entities.append(info)

    inputs: dict[str, Any] = {}
@@ -262,6 +315,8 @@ def _info_sqlite_latest(db_path: str) -> dict[str, Any] | None:

def _info_sqlite_id(db_path: str, checkpoint_id: str) -> dict[str, Any] | None:
    with sqlite3.connect(db_path) as conn:
        row = conn.execute(_SELECT_ONE, (checkpoint_id,)).fetchone()
        if not row:
            row = conn.execute(_SELECT_LIKE, (f"%{checkpoint_id}%",)).fetchone()
        if not row:
            return None
        cid, created_at, raw = row
@@ -384,3 +439,287 @@ def _print_info(meta: dict[str, Any]) -> None:
        if len(desc) > 70:
            desc = desc[:67] + "..."
        click.echo(f" {i + 1}. [{status}] {desc}")


def _resolve_checkpoint(
    location: str, checkpoint_id: str | None
) -> dict[str, Any] | None:
    if _is_sqlite(location):
        if checkpoint_id:
            return _info_sqlite_id(location, checkpoint_id)
        return _info_sqlite_latest(location)
    if os.path.isdir(location):
        if checkpoint_id:
            from crewai.state.provider.json_provider import JsonProvider

            _json_provider: JsonProvider = JsonProvider()
            pattern: str = os.path.join(location, "**", "*.json")
            all_files: list[str] = glob.glob(pattern, recursive=True)
            matches: list[str] = [
                f for f in all_files if checkpoint_id in _json_provider.extract_id(f)
            ]
            matches.sort(key=os.path.getmtime, reverse=True)
            if matches:
                return _info_json_file(matches[0])
            return None
        return _info_json_latest(location)
    if os.path.isfile(location):
        return _info_json_file(location)
    return None
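The directory branch above discovers checkpoints with a recursive glob, filters by id substring, and picks the newest match by mtime. It can be exercised without CrewAI; `extract_id` below is a hypothetical stand-in for `JsonProvider.extract_id` that simply uses the file stem:

```python
import glob
import os
import tempfile

def extract_id(path: str) -> str:
    # Stand-in for JsonProvider.extract_id: treat the filename stem as the id.
    return os.path.splitext(os.path.basename(path))[0]

with tempfile.TemporaryDirectory() as location:
    os.makedirs(os.path.join(location, "run-a"))
    names = ["run-a/ckpt-001.json", "ckpt-002.json", "notes.txt"]
    for i, name in enumerate(names):
        path = os.path.join(location, name)
        with open(path, "w") as f:
            f.write("{}")
        os.utime(path, (i, i))  # deterministic, increasing mtimes

    # Mirrors the lookup: recursive glob, substring match on id, newest first.
    pattern = os.path.join(location, "**", "*.json")
    all_files = glob.glob(pattern, recursive=True)
    matches = [f for f in all_files if "ckpt" in extract_id(f)]
    matches.sort(key=os.path.getmtime, reverse=True)
    newest = os.path.basename(matches[0])

print(newest)  # ckpt-002.json
```

The `**` pattern with `recursive=True` is what lets checkpoints in nested run directories be found, which is exactly the bug fix called out in the changelog.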


def _entity_type_from_meta(meta: dict[str, Any]) -> str:
    for ent in meta.get("entities", []):
        if ent.get("type") == "flow":
            return "flow"
    return "crew"


def resume_checkpoint(location: str, checkpoint_id: str | None) -> None:
    import asyncio

    meta: dict[str, Any] | None = _resolve_checkpoint(location, checkpoint_id)
    if meta is None:
        if checkpoint_id:
            click.echo(f"Checkpoint not found: {checkpoint_id}")
        else:
            click.echo(f"No checkpoints found in {location}")
        return

    restore_path: str = meta.get("path") or meta.get("source", "")
    if meta.get("db"):
        restore_path = f"{meta['db']}#{meta['name']}"

    click.echo(f"Resuming from: {meta.get('name', restore_path)}")
    _print_info(meta)
    click.echo()

    from crewai.state.checkpoint_config import CheckpointConfig

    config: CheckpointConfig = CheckpointConfig(restore_from=restore_path)
    entity_type: str = _entity_type_from_meta(meta)
    inputs: dict[str, Any] | None = meta.get("inputs") or None

    if entity_type == "flow":
        from crewai.flow.flow import Flow

        flow = Flow.from_checkpoint(config)
        result = asyncio.run(flow.kickoff_async(inputs=inputs))
    else:
        from crewai.crew import Crew

        crew = Crew.from_checkpoint(config)
        result = asyncio.run(crew.akickoff(inputs=inputs))

    click.echo(f"\nResult: {getattr(result, 'raw', result)}")


def _task_list_from_meta(meta: dict[str, Any]) -> list[dict[str, Any]]:
    tasks: list[dict[str, Any]] = []
    for ent in meta.get("entities", []):
        tasks.extend(
            {
                "entity": ent.get("name", "unnamed"),
                "description": t.get("description", ""),
                "completed": t.get("completed", False),
                "output": t.get("output", ""),
            }
            for t in ent.get("tasks", [])
        )
    return tasks


def diff_checkpoints(location: str, id1: str, id2: str) -> None:
    meta1: dict[str, Any] | None = _resolve_checkpoint(location, id1)
    meta2: dict[str, Any] | None = _resolve_checkpoint(location, id2)

    if meta1 is None:
        click.echo(f"Checkpoint not found: {id1}")
        return
    if meta2 is None:
        click.echo(f"Checkpoint not found: {id2}")
        return

    name1: str = meta1.get("name", id1)
    name2: str = meta2.get("name", id2)

    click.echo(f"--- {name1}")
    click.echo(f"+++ {name2}")
    click.echo()

    fields: list[tuple[str, str]] = [
        ("Time", "ts"),
        ("Branch", "branch"),
        ("Trigger", "trigger"),
        ("Events", "event_count"),
    ]
    for label, key in fields:
        v1: str = str(meta1.get(key, ""))
        v2: str = str(meta2.get(key, ""))
        if v1 != v2:
            click.echo(f" {label}:")
            click.echo(f" - {v1}")
            click.echo(f" + {v2}")

    inputs1: dict[str, Any] = meta1.get("inputs", {})
    inputs2: dict[str, Any] = meta2.get("inputs", {})
    all_keys: list[str] = sorted(set(list(inputs1.keys()) + list(inputs2.keys())))
    changed_inputs: list[tuple[str, Any, Any]] = [
        (k, inputs1.get(k, ""), inputs2.get(k, ""))
        for k in all_keys
        if inputs1.get(k) != inputs2.get(k)
    ]
    if changed_inputs:
        click.echo("\n Inputs:")
        for key, v1, v2 in changed_inputs:
            click.echo(f" {key}:")
            click.echo(f" - {v1}")
            click.echo(f" + {v2}")

    tasks1: list[dict[str, Any]] = _task_list_from_meta(meta1)
    tasks2: list[dict[str, Any]] = _task_list_from_meta(meta2)

    max_tasks: int = max(len(tasks1), len(tasks2))
    if max_tasks == 0:
        return

    click.echo("\n Tasks:")
    for i in range(max_tasks):
        t1: dict[str, Any] | None = tasks1[i] if i < len(tasks1) else None
        t2: dict[str, Any] | None = tasks2[i] if i < len(tasks2) else None

        if t1 is None:
            desc: str = t2["description"][:60] if t2 else ""
            click.echo(f" + {i + 1}. [new] {desc}")
            continue
        if t2 is None:
            desc = t1["description"][:60]
            click.echo(f" - {i + 1}. [removed] {desc}")
            continue

        desc = str(t1["description"][:60])
        s1: str = "done" if t1["completed"] else "pending"
        s2: str = "done" if t2["completed"] else "pending"

        if s1 != s2:
            click.echo(f" {i + 1}. {desc}")
            click.echo(f" status: {s1} -> {s2}")

        out1: str = (t1.get("output") or "").strip()
        out2: str = (t2.get("output") or "").strip()
        if out1 != out2:
            if s1 == s2:
                click.echo(f" {i + 1}. {desc}")
            preview1: str = (
                out1[:80] + ("..." if len(out1) > 80 else "") if out1 else "(empty)"
            )
            preview2: str = (
                out2[:80] + ("..." if len(out2) > 80 else "") if out2 else "(empty)"
            )
            click.echo(" output:")
            click.echo(f" - {preview1}")
            click.echo(f" + {preview2}")


def _parse_duration(value: str) -> timedelta:
    match: re.Match[str] | None = re.match(r"^(\d+)([dhm])$", value.strip())
    if not match:
        raise click.BadParameter(
            f"Invalid duration: {value!r}. Use format like '7d', '24h', or '30m'."
        )
    amount: int = int(match.group(1))
    unit: str = match.group(2)
    if unit == "d":
        return timedelta(days=amount)
    if unit == "h":
        return timedelta(hours=amount)
    return timedelta(minutes=amount)
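The duration grammar accepted by `_parse_duration` is an integer followed by `d`, `h`, or `m`. This standalone sketch mirrors the same regex and unit mapping, raising a plain `ValueError` in place of `click.BadParameter`:

```python
import re
from datetime import timedelta

def parse_duration(value: str) -> timedelta:
    # Mirrors _parse_duration: digits followed by d (days), h (hours), or m (minutes).
    match = re.match(r"^(\d+)([dhm])$", value.strip())
    if not match:
        raise ValueError(f"Invalid duration: {value!r}. Use '7d', '24h', or '30m'.")
    amount, unit = int(match.group(1)), match.group(2)
    return {
        "d": timedelta(days=amount),
        "h": timedelta(hours=amount),
        "m": timedelta(minutes=amount),
    }[unit]

print(parse_duration("7d"))   # 7 days, 0:00:00
print(parse_duration("90m"))  # 1:30:00
```

Anchoring the pattern with `^...$` rejects compound inputs such as `"1d12h"`, so each prune invocation takes exactly one unit.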


def _prune_json(location: str, keep: int | None, older_than: timedelta | None) -> int:
    pattern: str = os.path.join(location, "**", "*.json")
    files: list[str] = sorted(
        glob.glob(pattern, recursive=True), key=os.path.getmtime, reverse=True
    )
    if not files:
        return 0

    to_delete: set[str] = set()

    if keep is not None and len(files) > keep:
        to_delete.update(files[keep:])

    if older_than is not None:
        cutoff: datetime = datetime.now(timezone.utc) - older_than
        for path in files:
            mtime: datetime = datetime.fromtimestamp(
                os.path.getmtime(path), tz=timezone.utc
            )
            if mtime < cutoff:
                to_delete.add(path)

    deleted: int = 0
    for path in to_delete:
        try:
            os.remove(path)
            deleted += 1
        except OSError:  # noqa: PERF203
            pass

    for dirpath, dirnames, filenames in os.walk(location, topdown=False):
        if dirpath != location and not filenames and not dirnames:
            try:
                os.rmdir(dirpath)
            except OSError:
                pass

    return deleted
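The `created_at < ?` filter used by the SQLite prune path below compares strings, which works only because the compact `%Y%m%dT%H%M%S` format sorts lexicographically in chronological order. A quick standalone check:

```python
from datetime import datetime, timedelta, timezone

# The compact timestamp format sorts lexicographically in chronological order,
# which is what makes a string comparison against created_at valid in SQL.
fmt = "%Y%m%dT%H%M%S"
now = datetime(2026, 4, 17, 12, 0, 0, tzinfo=timezone.utc)
cutoff = (now - timedelta(days=7)).strftime(fmt)
older = (now - timedelta(days=9)).strftime(fmt)   # would be deleted
newer = (now - timedelta(days=2)).strftime(fmt)   # kept
print(cutoff)          # 20260417 minus 7 days -> 20260410T120000
print(older < cutoff)  # True
print(newer < cutoff)  # False
```

Fixed-width, zero-padded fields (year, month, day, then time) are what guarantee this property; a format with variable-width fields would break the comparison.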
|
||||
|
||||
|
||||
def _prune_sqlite(db_path: str, keep: int | None, older_than: timedelta | None) -> int:
|
||||
deleted: int = 0
|
||||
with sqlite3.connect(db_path) as conn:
|
||||
if older_than is not None:
|
||||
cutoff: str = (datetime.now(timezone.utc) - older_than).strftime(
|
||||
"%Y%m%dT%H%M%S"
|
||||
)
|
||||
cursor: sqlite3.Cursor = conn.execute(_DELETE_OLDER_THAN, (cutoff,))
|
||||
deleted += cursor.rowcount
|
||||
|
||||
if keep is not None:
|
||||
cursor = conn.execute(_DELETE_KEEP_N, (keep,))
|
||||
deleted += cursor.rowcount
|
||||
|
||||
conn.commit()
|
||||
return deleted
|
||||
|
||||
|
||||
def prune_checkpoints(
|
||||
location: str, keep: int | None, older_than: str | None, dry_run: bool = False
|
||||
) -> None:
|
||||
if keep is None and older_than is None:
|
||||
click.echo("Specify --keep N and/or --older-than DURATION (e.g. 7d, 24h)")
|
||||
return
|
||||
|
||||
duration: timedelta | None = _parse_duration(older_than) if older_than else None
|
||||
|
||||
deleted: int
|
||||
if _is_sqlite(location):
|
||||
if dry_run:
|
||||
with sqlite3.connect(location) as conn:
|
||||
total: int = conn.execute(_COUNT_CHECKPOINTS).fetchone()[0]
|
||||
click.echo(f"Would prune from {total} checkpoint(s) in {location}")
|
||||
return
|
||||
deleted = _prune_sqlite(location, keep, duration)
|
||||
elif os.path.isdir(location):
|
||||
if dry_run:
|
||||
files: list[str] = glob.glob(
|
||||
os.path.join(location, "**", "*.json"), recursive=True
|
||||
)
|
||||
click.echo(f"Would prune from {len(files)} checkpoint(s) in {location}")
|
||||
return
|
||||
deleted = _prune_json(location, keep, duration)
|
||||
else:
|
||||
click.echo(f"Not a directory or SQLite database: {location}")
|
||||
return
|
||||
click.echo(f"Pruned {deleted} checkpoint(s) from {location}")
|
||||
|
||||
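As a quick sanity check on the duration grammar above (an integer followed by `d`, `h`, or `m`), the parsing rule can be sketched standalone — a minimal re-implementation for illustration, raising `ValueError` instead of `click.BadParameter` so it runs without Click:

```python
import re
from datetime import timedelta

_UNITS = {"d": "days", "h": "hours", "m": "minutes"}


def parse_duration(value: str) -> timedelta:
    # Accepts strings like "7d", "24h", "30m": digits plus one unit letter.
    match = re.match(r"^(\d+)([dhm])$", value.strip())
    if not match:
        raise ValueError(f"Invalid duration: {value!r}")
    amount, unit = int(match.group(1)), match.group(2)
    return timedelta(**{_UNITS[unit]: amount})


print(parse_duration("90m"))  # 1:30:00
```

Anything outside that grammar (e.g. `7w`, `1.5h`) is rejected, which matches the CLI's strict `--older-than` validation.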
@@ -3,17 +3,20 @@
from __future__ import annotations

from collections import defaultdict
from datetime import datetime
from typing import Any, ClassVar, Literal

from textual.app import App, ComposeResult
from textual.binding import Binding
from textual.containers import Horizontal, Vertical, VerticalScroll
from textual.widgets import (
    Button,
    Collapsible,
    Footer,
    Header,
    Input,
    Static,
    TabPane,
    TabbedContent,
    TextArea,
    Tree,
)
@@ -32,6 +35,22 @@ _TERTIARY = "#ffffff"
_DIM = "#888888"
_BG_DARK = "#0d1117"
_BG_PANEL = "#161b22"
_ACCENT = "#c9a227"
_SUCCESS = "#3fb950"
_PENDING = "#e3b341"

_ENTITY_ICONS: dict[str, str] = {
    "flow": "◆",
    "crew": "●",
    "agent": "◈",
    "unknown": "○",
}
_ENTITY_COLORS: dict[str, str] = {
    "flow": _ACCENT,
    "crew": _SECONDARY,
    "agent": _PRIMARY,
    "unknown": _DIM,
}


def _load_entries(location: str) -> list[dict[str, Any]]:
@@ -40,8 +59,27 @@ def _load_entries(location: str) -> list[dict[str, Any]]:
    return _list_json(location)


def _human_ts(ts: str) -> str:
    """Turn '2026-04-17 17:05:00' into a short relative label."""
    try:
        dt = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
    except ValueError:
        return ts
    now = datetime.now()
    delta = now.date() - dt.date()
    hour = dt.hour % 12 or 12
    ampm = "am" if dt.hour < 12 else "pm"
    time_str = f"{hour}:{dt.minute:02d}{ampm}"
    if delta.days == 0:
        return time_str
    if delta.days == 1:
        return f"yest {time_str}"
    if delta.days < 7:
        return f"{dt.strftime('%a').lower()} {time_str}"
    return f"{dt.strftime('%b')} {dt.day}"


def _short_id(name: str) -> str:
    """Shorten a checkpoint name for tree display."""
    if len(name) > 30:
        return name[:27] + "..."
    return name
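The relative-label rules in `_human_ts` (same day → time only, one day back → `yest`, under a week → weekday, otherwise month and day) can be exercised deterministically by injecting `now` as a parameter instead of calling `datetime.now()` — an adaptation for illustration, not the original signature:

```python
from datetime import datetime


def human_ts(ts: str, now: datetime) -> str:
    # Mirror of `_human_ts`, with `now` injected for deterministic output.
    try:
        dt = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
    except ValueError:
        return ts  # unparseable timestamps pass through unchanged
    delta = now.date() - dt.date()
    hour = dt.hour % 12 or 12
    ampm = "am" if dt.hour < 12 else "pm"
    time_str = f"{hour}:{dt.minute:02d}{ampm}"
    if delta.days == 0:
        return time_str
    if delta.days == 1:
        return f"yest {time_str}"
    if delta.days < 7:
        return f"{dt.strftime('%a').lower()} {time_str}"
    return f"{dt.strftime('%b')} {dt.day}"


now = datetime(2026, 4, 17, 18, 0, 0)
print(human_ts("2026-04-17 17:05:00", now))  # 5:05pm
print(human_ts("2026-04-16 09:30:00", now))  # yest 9:30am
print(human_ts("2026-04-01 12:00:00", now))  # Apr 1
```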
@@ -63,22 +101,22 @@ def _entry_id(entry: dict[str, Any]) -> str:
    return name


-def _build_entity_header(ent: dict[str, Any]) -> str:
-    """Build rich text header for an entity (progress bar only)."""
-    lines: list[str] = []
-    tasks = ent.get("tasks")
-    if isinstance(tasks, list):
-        completed = ent.get("tasks_completed", 0)
-        total = ent.get("tasks_total", 0)
-        pct = int(completed / total * 100) if total else 0
-        bar_len = 20
-        filled = int(bar_len * completed / total) if total else 0
-        bar = f"[{_PRIMARY}]{'█' * filled}[/][{_DIM}]{'░' * (bar_len - filled)}[/]"
-        lines.append(f"{bar} {completed}/{total} tasks ({pct}%)")
-    return "\n".join(lines)
+def _build_progress_bar(completed: int, total: int, width: int = 20) -> str:
+    if total == 0:
+        return f"[{_DIM}]{'░' * width}[/] 0/0"
+    pct = int(completed / total * 100)
+    filled = int(width * completed / total)
+    color = _SUCCESS if completed == total else _PRIMARY
+    bar = f"[{color}]{'█' * filled}[/][{_DIM}]{'░' * (width - filled)}[/]"
+    return f"{bar} {completed}/{total} ({pct}%)"
+
+
+def _entity_icon(etype: str) -> str:
+    icon = _ENTITY_ICONS.get(etype, _ENTITY_ICONS["unknown"])
+    color = _ENTITY_COLORS.get(etype, _DIM)
+    return f"[{color}]{icon}[/]"


# Return type: (location, action, inputs, task_output_overrides, entity_type)
_TuiResult = (
    tuple[
        str,
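The fill arithmetic behind `_build_progress_bar` is independent of the Rich color markup; stripped to plain text it looks like this (an illustrative sketch, not the widget code):

```python
def progress_bar(completed: int, total: int, width: int = 20) -> str:
    # Plain-text version of the TUI bar: filled blocks, then empty blocks.
    if total == 0:
        return "░" * width + " 0/0"
    pct = int(completed / total * 100)
    filled = int(width * completed / total)
    bar = "█" * filled + "░" * (width - filled)
    return f"{bar} {completed}/{total} ({pct}%)"


print(progress_bar(3, 4, width=8))  # ██████░░ 3/4 (75%)
```

Both `filled` and `pct` truncate toward zero, so a partially complete task never renders as a full bar or 100%.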
@@ -122,7 +160,7 @@ class CheckpointTUI(App[_TuiResult]):
        height: 1fr;
    }}
    #tree-panel {{
-        width: 45%;
+        width: 40%;
        background: {_BG_PANEL};
        border: round {_SECONDARY};
        padding: 0 1;
@@ -132,41 +170,81 @@ class CheckpointTUI(App[_TuiResult]):
        border: round {_PRIMARY};
    }}
    #detail-container {{
-        width: 55%;
+        width: 60%;
        height: 1fr;
    }}
    #detail-scroll {{
        height: 1fr;
        background: {_BG_PANEL};
        border: round {_SECONDARY};
        padding: 1 2;
        scrollbar-color: {_PRIMARY};
    }}
    #detail-scroll:focus-within {{
        border: round {_PRIMARY};
    }}
    #detail-header {{
        margin-bottom: 1;
    }}
    #status {{
        height: 1;
        padding: 0 2;
        color: {_DIM};
    }}
-    #inputs-section {{
-        display: none;
-        height: auto;
-        max-height: 8;
-        padding: 0 1;
+    #detail-tabs {{
+        height: 1fr;
    }}
-    #inputs-section.visible {{
-        display: block;
+    TabbedContent > ContentSwitcher {{
+        background: {_BG_PANEL};
+        height: 1fr;
    }}
-    #inputs-label {{
-        height: 1;
+    TabPane {{
+        padding: 0;
    }}
+    Tabs {{
+        background: {_BG_DARK};
+    }}
+    Tab {{
+        background: {_BG_DARK};
+        color: {_DIM};
+        padding: 0 2;
+    }}
+    Tab.-active {{
+        background: {_BG_PANEL};
+        color: {_PRIMARY};
+    }}
+    Tab:hover {{
+        color: {_TERTIARY};
+    }}
+    Underline > .underline--bar {{
+        color: {_SECONDARY};
+        background: {_BG_DARK};
+    }}
+    .tab-scroll {{
+        background: {_BG_PANEL};
+        height: 1fr;
+        padding: 1 2;
+        scrollbar-color: {_PRIMARY};
+    }}
+    .section-header {{
+        padding: 0 0 0 1;
+        margin: 1 0 0 0;
+    }}
+    .detail-line {{
+        padding: 0 0 0 1;
+    }}
+    .task-label {{
+        padding: 0 1;
+    }}
+    .task-output-editor {{
+        height: auto;
+        max-height: 10;
+        margin: 0 1 1 3;
+        border: round {_DIM};
+    }}
+    .task-output-editor:focus {{
+        border: round {_PRIMARY};
+    }}
+    Collapsible {{
+        background: {_BG_PANEL};
+        padding: 0;
+        margin: 0 0 1 1;
+    }}
+    CollapsibleTitle {{
+        background: {_BG_DARK};
+        color: {_TERTIARY};
+        padding: 0 1;
+    }}
+    CollapsibleTitle:hover {{
+        background: {_SECONDARY};
+    }}
    .input-row {{
        height: 3;
        padding: 0 1;
@@ -180,55 +258,9 @@ class CheckpointTUI(App[_TuiResult]):
    .input-row Input {{
        width: 1fr;
    }}
-    #no-inputs-label {{
-        height: 1;
+    .empty-state {{
        color: {_DIM};
        padding: 0 1;
    }}
-    #action-buttons {{
-        height: 3;
-        align: right middle;
-        padding: 0 1;
-        display: none;
-    }}
-    #action-buttons.visible {{
-        display: block;
-    }}
-    #action-buttons Button {{
-        margin: 0 0 0 1;
-        min-width: 10;
-    }}
-    #btn-resume {{
-        background: {_SECONDARY};
-        color: {_TERTIARY};
-    }}
-    #btn-resume:hover {{
-        background: {_PRIMARY};
-    }}
-    #btn-fork {{
-        background: {_PRIMARY};
-        color: {_TERTIARY};
-    }}
-    #btn-fork:hover {{
-        background: {_SECONDARY};
-    }}
-    .entity-title {{
-        padding: 1 1 0 1;
-    }}
-    .entity-detail {{
-        padding: 0 1;
-    }}
-    .task-output-editor {{
-        height: auto;
-        max-height: 10;
-        margin: 0 1 1 1;
-        border: round {_DIM};
-    }}
-    .task-output-editor:focus {{
-        border: round {_PRIMARY};
-    }}
-    .task-label {{
-        padding: 0 1;
-        padding: 1;
-    }}
    Tree {{
        background: {_BG_PANEL};
@@ -242,6 +274,8 @@ class CheckpointTUI(App[_TuiResult]):
    BINDINGS: ClassVar[list[Binding | tuple[str, str] | tuple[str, str, str]]] = [
        ("q", "quit", "Quit"),
        ("r", "refresh", "Refresh"),
+        ("e", "resume", "Resume"),
+        ("f", "fork", "Fork"),
    ]

    def __init__(self, location: str = "./.checkpoints") -> None:
@@ -256,27 +290,49 @@ class CheckpointTUI(App[_TuiResult]):
        yield Header(show_clock=False)
        with Horizontal(id="main-layout"):
            tree: Tree[dict[str, Any]] = Tree("Checkpoints", id="tree-panel")
-            tree.show_root = True
+            tree.show_root = False
            tree.guide_depth = 3
            yield tree
            with Vertical(id="detail-container"):
                yield Static("", id="status")
-                with VerticalScroll(id="detail-scroll"):
-                    yield Static(
-                        f"[{_DIM}]Select a checkpoint from the tree[/]",  # noqa: S608
-                        id="detail-header",
-                    )
-                with Vertical(id="inputs-section"):
-                    yield Static("Inputs", id="inputs-label")
-                with Horizontal(id="action-buttons"):
-                    yield Button("Resume", id="btn-resume")
-                    yield Button("Fork", id="btn-fork")
+                with TabbedContent(id="detail-tabs"):
+                    with TabPane("Overview", id="tab-overview"):
+                        with VerticalScroll(classes="tab-scroll"):
+                            yield Static(
+                                f"[{_DIM}]Select a checkpoint from the tree[/]",  # noqa: S608
+                                id="overview-empty",
+                            )
+                    with TabPane("Tasks", id="tab-tasks"):
+                        with VerticalScroll(classes="tab-scroll"):
+                            yield Static(
+                                f"[{_DIM}]Select a checkpoint to view tasks[/]",
+                                id="tasks-empty",
+                            )
+                    with TabPane("Inputs", id="tab-inputs"):
+                        with VerticalScroll(classes="tab-scroll"):
+                            yield Static(
+                                f"[{_DIM}]Select a checkpoint to view inputs[/]",
+                                id="inputs-empty",
+                            )
        yield Footer()

    async def on_mount(self) -> None:
        self._refresh_tree()
        self.query_one("#tree-panel", Tree).root.expand()

+    # ── Tree building ──────────────────────────────────────────────
+
+    @staticmethod
+    def _top_level_entity(entry: dict[str, Any]) -> tuple[str, str]:
+        etype, ename = "unknown", ""
+        for ent in entry.get("entities", []):
+            t = ent.get("type", "unknown")
+            if t == "flow":
+                return "flow", ent.get("name") or ""
+            if t == "crew" and etype != "crew":
+                etype, ename = "crew", ent.get("name") or ""
+        return etype, ename

    def _refresh_tree(self) -> None:
        self._entries = _load_entries(self._location)
        self._selected_entry = None
@@ -285,45 +341,57 @@ class CheckpointTUI(App[_TuiResult]):
        tree.clear()

        if not self._entries:
-            self.query_one("#detail-header", Static).update(
-                f"[{_DIM}]No checkpoints in {self._location}[/]"
-            )
-            self.query_one("#status", Static).update("")
+            self.sub_title = self._location
+            self.query_one("#status", Static).update("")
            return

-        # Group by branch
-        branches: dict[str, list[dict[str, Any]]] = defaultdict(list)
+        grouped: dict[tuple[str, str], dict[str, list[dict[str, Any]]]] = defaultdict(
+            lambda: defaultdict(list)
+        )
        for entry in self._entries:
+            key = self._top_level_entity(entry)
            branch = entry.get("branch", "main")
-            branches[branch].append(entry)
-
-        # Index checkpoint names to tree nodes so forks can attach
-        node_by_name: dict[str, Any] = {}
+            grouped[key][branch].append(entry)

        def _make_label(e: dict[str, Any]) -> str:
            name = e.get("name", "")
            ts = e.get("ts") or ""
            trigger = e.get("trigger") or ""
-            parts = [f"[bold]{_short_id(name)}[/]"]
-            if ts:
-                time_part = ts.split(" ")[-1] if " " in ts else ts
+            time_part = ts.split(" ")[-1] if " " in ts else ts
+
+            total_c, total_t = 0, 0
+            for ent in e.get("entities", []):
+                c = ent.get("tasks_completed")
+                t = ent.get("tasks_total")
+                if c is not None and t is not None:
+                    total_c += c
+                    total_t += t
+
+            parts: list[str] = []
+            if time_part:
                parts.append(f"[{_DIM}]{time_part}[/]")
            if trigger:
                parts.append(f"[{_PRIMARY}]{trigger}[/]")
-            return " ".join(parts)
+            if total_t:
+                display_c = total_c
+                if trigger == "task_started" and total_c < total_t:
+                    display_c = total_c + 1
+                color = _SUCCESS if total_c == total_t else _DIM
+                parts.append(f"[{color}]{display_c}/{total_t}[/]")
+            return " ".join(parts) if parts else _short_id(e.get("name", ""))

        fork_parents: set[str] = set()
-        for branch_name, entries in branches.items():
-            if branch_name == "main" or not entries:
-                continue
-            oldest = min(entries, key=lambda e: str(e.get("name", "")))
-            first_parent = oldest.get("parent_id")
-            if first_parent:
-                fork_parents.add(str(first_parent))
+        for branches in grouped.values():
+            for branch_name, entries in branches.items():
+                if branch_name == "main" or not entries:
+                    continue
+                oldest = min(entries, key=lambda e: str(e.get("name", "")))
+                first_parent = oldest.get("parent_id")
+                if first_parent:
+                    fork_parents.add(str(first_parent))
+
+        node_by_name: dict[str, Any] = {}

        def _add_checkpoint(parent_node: Any, e: dict[str, Any]) -> None:
            """Add a checkpoint node — expandable only if a fork attaches to it."""
            cp_id = _entry_id(e)
            if cp_id in fork_parents:
                node = parent_node.add(
@@ -333,67 +401,97 @@ class CheckpointTUI(App[_TuiResult]):
                node = parent_node.add_leaf(_make_label(e), data=e)
            node_by_name[cp_id] = node

-        if "main" in branches:
-            for entry in reversed(branches["main"]):
-                _add_checkpoint(tree.root, entry)
-
-        fork_branches = [
-            (name, sorted(entries, key=lambda e: str(e.get("name", ""))))
-            for name, entries in branches.items()
-            if name != "main"
-        ]
-        remaining = fork_branches
-        max_passes = len(remaining) + 1
-        while remaining and max_passes > 0:
-            max_passes -= 1
-            deferred = []
-            made_progress = False
-            for branch_name, entries in remaining:
-                first_parent = entries[0].get("parent_id") if entries else None
-                if first_parent and str(first_parent) not in node_by_name:
-                    deferred.append((branch_name, entries))
-                    continue
-                attach_to: Any = tree.root
-                if first_parent:
-                    attach_to = node_by_name.get(str(first_parent), tree.root)
-                branch_label = (
-                    f"[bold {_SECONDARY}]{branch_name}[/] [{_DIM}]({len(entries)})[/]"
-                )
-                branch_node = attach_to.add(branch_label, expand=False)
-                for entry in entries:
-                    _add_checkpoint(branch_node, entry)
-                made_progress = True
-            remaining = deferred
-            if not made_progress:
-                break
-
-        for branch_name, entries in remaining:
-            branch_label = (
-                f"[bold {_SECONDARY}]{branch_name}[/] "
-                f"[{_DIM}]({len(entries)})[/] [{_DIM}](orphaned)[/]"
-            )
-            branch_node = tree.root.add(branch_label, expand=False)
-            for entry in entries:
-                _add_checkpoint(branch_node, entry)
+        type_order = {"flow": 0, "crew": 1}
+        sorted_keys = sorted(
+            grouped.keys(), key=lambda k: (type_order.get(k[0], 9), k[1])
+        )
+
+        for etype, ename in sorted_keys:
+            branches = grouped[(etype, ename)]
+            icon = _entity_icon(etype)
+            color = _ENTITY_COLORS.get(etype, _DIM)
+            total = sum(len(v) for v in branches.values())
+
+            label_parts = [f"{icon} [bold {color}]{etype.upper()}[/]"]
+            if ename:
+                label_parts.append(f"[bold]{ename}[/]")
+            label_parts.append(f"[{_DIM}]({total})[/]")
+            all_entries = [e for bl in branches.values() for e in bl]
+            timestamps = [str(e.get("ts", "")) for e in all_entries if e.get("ts")]
+            if timestamps:
+                latest = max(timestamps)
+                label_parts.append(f"[{_DIM}]{_human_ts(latest)}[/]")
+            entity_label = " ".join(label_parts)
+            entity_node = tree.root.add(entity_label, expand=True)
+
+            if "main" in branches:
+                for entry in reversed(branches["main"]):
+                    _add_checkpoint(entity_node, entry)
+
+            fork_branches = [
+                (name, sorted(entries, key=lambda e: str(e.get("name", ""))))
+                for name, entries in branches.items()
+                if name != "main"
+            ]
+            remaining = fork_branches
+            max_passes = len(remaining) + 1
+            while remaining and max_passes > 0:
+                max_passes -= 1
+                deferred = []
+                made_progress = False
+                for branch_name, entries in remaining:
+                    first_parent = entries[0].get("parent_id") if entries else None
+                    if first_parent and str(first_parent) not in node_by_name:
+                        deferred.append((branch_name, entries))
+                        continue
+                    attach_to: Any = entity_node
+                    if first_parent:
+                        attach_to = node_by_name.get(str(first_parent), entity_node)
+                    branch_label = (
+                        f"[bold {_SECONDARY}]{branch_name}[/] "
+                        f"[{_DIM}]({len(entries)})[/]"
+                    )
+                    branch_node = attach_to.add(branch_label, expand=False)
+                    for entry in entries:
+                        _add_checkpoint(branch_node, entry)
+                    made_progress = True
+                remaining = deferred
+                if not made_progress:
+                    break
+
+            for branch_name, entries in remaining:
+                branch_label = (
+                    f"[bold {_SECONDARY}]{branch_name}[/] "
+                    f"[{_DIM}]({len(entries)})[/] [{_DIM}](orphaned)[/]"
+                )
+                branch_node = entity_node.add(branch_label, expand=False)
+                for entry in entries:
+                    _add_checkpoint(branch_node, entry)

        count = len(self._entries)
        storage = "SQLite" if _is_sqlite(self._location) else "JSON"
        self.sub_title = self._location
        self.query_one("#status", Static).update(f" {count} checkpoint(s) | {storage}")
+    # ── Detail panel ───────────────────────────────────────────────

-    async def _show_detail(self, entry: dict[str, Any]) -> None:
-        """Update the detail panel for a checkpoint entry."""
-        self._selected_entry = entry
-        self.query_one("#action-buttons").add_class("visible")
-
-        detail_scroll = self.query_one("#detail-scroll", VerticalScroll)
-
-        # Remove all dynamic children except the header — await so IDs are freed
-        to_remove = [c for c in detail_scroll.children if c.id != "detail-header"]
-        for child in to_remove:
-            await child.remove()
+    async def _clear_scroll(self, tab_id: str) -> VerticalScroll:
+        tab = self.query_one(f"#{tab_id}", TabPane)
+        scroll = tab.query_one(VerticalScroll)
+        for child in list(scroll.children):
+            await child.remove()
+        return scroll
+
+    async def _show_detail(self, entry: dict[str, Any]) -> None:
+        self._selected_entry = entry
+
+        await self._render_overview(entry)
+        await self._render_tasks(entry)
+        await self._render_inputs(entry.get("inputs", {}))
+
+    async def _render_overview(self, entry: dict[str, Any]) -> None:
+        scroll = await self._clear_scroll("tab-overview")

        # Header
        name = entry.get("name", "")
        ts = entry.get("ts") or "unknown"
        trigger = entry.get("trigger") or ""
@@ -414,42 +512,115 @@ class CheckpointTUI(App[_TuiResult]):
        header_lines.append(f" [bold]Branch[/] [{_SECONDARY}]{branch}[/]")
        if parent_id:
            header_lines.append(f" [bold]Parent[/] [{_DIM}]{parent_id}[/]")
        if "path" in entry:
            header_lines.append(f" [bold]Path[/] [{_DIM}]{entry['path']}[/]")
        if "db" in entry:
            header_lines.append(f" [bold]Database[/] [{_DIM}]{entry['db']}[/]")

-        self.query_one("#detail-header", Static).update("\n".join(header_lines))
+        await scroll.mount(Static("\n".join(header_lines)))

+        for ent in entry.get("entities", []):
+            etype = ent.get("type", "unknown")
+            ename = ent.get("name", "unnamed")
+            icon = _entity_icon(etype)
+            color = _ENTITY_COLORS.get(etype, _DIM)
+
+            eid = str(ent.get("id", ""))[:8]
+            entity_title = (
+                f"\n{icon} [bold {color}]{etype.upper()}[/] [bold]{ename}[/]"
+            )
+            if eid:
+                entity_title += f" [{_DIM}]{eid}…[/]"
+            await scroll.mount(Static(entity_title, classes="section-header"))
+            await scroll.mount(Static(f"[{_DIM}]{'─' * 46}[/]", classes="detail-line"))
+
+            if etype == "flow":
+                methods = ent.get("completed_methods", [])
+                if methods:
+                    method_list = ", ".join(f"[{_SUCCESS}]{m}[/]" for m in methods)
+                    await scroll.mount(
+                        Static(
+                            f" [bold]Methods[/] {method_list}",
+                            classes="detail-line",
+                        )
+                    )
+                flow_state = ent.get("flow_state")
+                if isinstance(flow_state, dict) and flow_state:
+                    state_parts: list[str] = []
+                    for k, v in list(flow_state.items())[:5]:
+                        sv = str(v)
+                        if len(sv) > 40:
+                            sv = sv[:37] + "..."
+                        state_parts.append(f"[{_DIM}]{k}[/]={sv}")
+                    await scroll.mount(
+                        Static(
+                            f" [bold]State[/] {', '.join(state_parts)}",
+                            classes="detail-line",
+                        )
+                    )
+
+            agents = ent.get("agents", [])
+            if agents:
+                agent_lines: list[Static] = []
+                for ag in agents:
+                    role = ag.get("role", "unnamed")
+                    goal = ag.get("goal", "")
+                    if len(goal) > 60:
+                        goal = goal[:57] + "..."
+                    agent_line = f" {_entity_icon('agent')} [bold]{role}[/]"
+                    if goal:
+                        agent_line += f"\n [{_DIM}]{goal}[/]"
+                    agent_lines.append(Static(agent_line))
+
+                collapsible = Collapsible(
+                    *agent_lines,
+                    title=f"Agents ({len(agents)})",
+                    collapsed=len(agents) > 3,
+                )
+                await scroll.mount(collapsible)
+
+    async def _render_tasks(self, entry: dict[str, Any]) -> None:
+        scroll = await self._clear_scroll("tab-tasks")

-        # Entity details and editable task outputs — mounted flat for scrolling
        self._task_output_ids = []
        flat_task_idx = 0
+        has_tasks = False

        for ent_idx, ent in enumerate(entry.get("entities", [])):
            etype = ent.get("type", "unknown")
            ename = ent.get("name", "unnamed")
-            completed = ent.get("tasks_completed")
-            total = ent.get("tasks_total")
-            entity_title = f"[bold {_SECONDARY}]{etype}: {ename}[/]"
-            if completed is not None and total is not None:
-                entity_title += f" [{_DIM}]{completed}/{total} tasks[/]"
-            await detail_scroll.mount(Static(entity_title, classes="entity-title"))
-            await detail_scroll.mount(
-                Static(_build_entity_header(ent), classes="entity-detail")
-            )
+            icon = _entity_icon(etype)
+            color = _ENTITY_COLORS.get(etype, _DIM)

            tasks = ent.get("tasks", [])
            if not tasks:
                continue
+            has_tasks = True

+            completed = ent.get("tasks_completed", 0)
+            total = ent.get("tasks_total", 0)
+
+            await scroll.mount(
+                Static(
+                    f"{icon} [bold {color}]{ename}[/] "
+                    f"{_build_progress_bar(completed, total, width=16)}",
+                    classes="section-header",
+                )
+            )

            for i, task in enumerate(tasks):
                desc = str(task.get("description", ""))
-                if len(desc) > 55:
-                    desc = desc[:52] + "..."
+                if len(desc) > 50:
+                    desc = desc[:47] + "..."
+                agent_role = task.get("agent_role", "")

                if task.get("completed"):
-                    icon = "[green]✓[/]"
-                    await detail_scroll.mount(
-                        Static(f" {icon} {i + 1}. {desc}", classes="task-label")
-                    )
+                    status_icon = f"[{_SUCCESS}]✓[/]"
+                    task_line = f" {status_icon} {i + 1}. {desc}"
+                    if agent_role:
+                        task_line += (
+                            f" [{_DIM}]→ {_entity_icon('agent')} {agent_role}[/]"
+                        )
+                    await scroll.mount(Static(task_line, classes="task-label"))
                    output_text = task.get("output", "")
                    editor_id = f"task-output-{ent_idx}-{i}"
-                    await detail_scroll.mount(
+                    await scroll.mount(
                        TextArea(
                            str(output_text),
                            classes="task-output-editor",
@@ -460,28 +631,25 @@ class CheckpointTUI(App[_TuiResult]):
                        (flat_task_idx, editor_id, str(output_text))
                    )
                else:
-                    icon = "[yellow]○[/]"
-                    await detail_scroll.mount(
-                        Static(f" {icon} {i + 1}. {desc}", classes="task-label")
-                    )
+                    status_icon = f"[{_PENDING}]○[/]"
+                    task_line = f" {status_icon} {i + 1}. {desc}"
+                    if agent_role:
+                        task_line += (
+                            f" [{_DIM}]→ {_entity_icon('agent')} {agent_role}[/]"
+                        )
+                    await scroll.mount(Static(task_line, classes="task-label"))
                flat_task_idx += 1

-        # Build input fields
-        await self._build_input_fields(entry.get("inputs", {}))
+        if not has_tasks:
+            await scroll.mount(Static(f"[{_DIM}]No tasks[/]", classes="empty-state"))

-    async def _build_input_fields(self, inputs: dict[str, Any]) -> None:
-        """Rebuild the inputs section with one field per input key."""
-        section = self.query_one("#inputs-section")
-
-        # Remove old dynamic children — await so IDs are freed
-        for widget in list(section.query(".input-row, .no-inputs")):
-            await widget.remove()
+    async def _render_inputs(self, inputs: dict[str, Any]) -> None:
+        scroll = await self._clear_scroll("tab-inputs")

        self._input_keys = []

        if not inputs:
-            await section.mount(Static(f"[{_DIM}]No inputs[/]", classes="no-inputs"))
-            section.add_class("visible")
+            await scroll.mount(Static(f"[{_DIM}]No inputs[/]", classes="empty-state"))
            return

        for key, value in inputs.items():
@@ -491,12 +659,11 @@ class CheckpointTUI(App[_TuiResult]):
            row.compose_add_child(
                Input(value=str(value), placeholder=key, id=f"input-{key}")
            )
-            await section.mount(row)
+            await scroll.mount(row)

-        section.add_class("visible")
+    # ── Data collection ────────────────────────────────────────────

    def _collect_inputs(self) -> dict[str, Any] | None:
        """Collect current values from input fields."""
        if not self._input_keys:
            return None
        result: dict[str, Any] = {}
@@ -506,7 +673,6 @@ class CheckpointTUI(App[_TuiResult]):
        return result

    def _collect_task_overrides(self) -> dict[int, str] | None:
-        """Collect edited task outputs. Returns only changed values."""
        if not self._task_output_ids or self._selected_entry is None:
            return None
        overrides: dict[int, str] = {}
@@ -517,37 +683,43 @@ class CheckpointTUI(App[_TuiResult]):
        return overrides or None

    def _detect_entity_type(self, entry: dict[str, Any]) -> Literal["crew", "flow"]:
        """Infer the top-level entity type from checkpoint entities."""
        for ent in entry.get("entities", []):
            if ent.get("type") == "flow":
                return "flow"
        return "crew"

    def _resolve_location(self, entry: dict[str, Any]) -> str:
        """Get the restore location string for a checkpoint entry."""
        if "path" in entry:
            return str(entry["path"])
        if _is_sqlite(self._location):
            return f"{self._location}#{entry['name']}"
        return str(entry.get("name", ""))

    # ── Events ─────────────────────────────────────────────────────

    async def on_tree_node_highlighted(
        self, event: Tree.NodeHighlighted[dict[str, Any]]
    ) -> None:
        if event.node.data is not None:
            await self._show_detail(event.node.data)

-    def on_button_pressed(self, event: Button.Pressed) -> None:
+    def _exit_with_action(self, action: str) -> None:
        if self._selected_entry is None:
            self.notify("No checkpoint selected", severity="warning")
            return
        inputs = self._collect_inputs()
        overrides = self._collect_task_overrides()
        loc = self._resolve_location(self._selected_entry)
        etype = self._detect_entity_type(self._selected_entry)
-        if event.button.id == "btn-resume":
-            self.exit((loc, "resume", inputs, overrides, etype))
-        elif event.button.id == "btn-fork":
-            self.exit((loc, "fork", inputs, overrides, etype))
+        name = self._selected_entry.get("name", "")[:30]
+        self.notify(f"{action.title()}: {name}")
+        self.exit((loc, action, inputs, overrides, etype))

+    def action_resume(self) -> None:
+        self._exit_with_action("resume")
+
+    def action_fork(self) -> None:
+        self._exit_with_action("fork")

    def action_refresh(self) -> None:
        self._refresh_tree()
@@ -873,5 +873,48 @@ def checkpoint_info(path: str) -> None:
    info_checkpoint(_detect_location(path))


@checkpoint.command("resume")
@click.argument("checkpoint_id", required=False, default=None)
@click.pass_context
def checkpoint_resume(ctx: click.Context, checkpoint_id: str | None) -> None:
    """Resume from a checkpoint. Defaults to the most recent."""
    from crewai.cli.checkpoint_cli import resume_checkpoint

    resume_checkpoint(ctx.obj["location"], checkpoint_id)


@checkpoint.command("diff")
@click.argument("id1")
@click.argument("id2")
@click.pass_context
def checkpoint_diff(ctx: click.Context, id1: str, id2: str) -> None:
    """Compare two checkpoints side-by-side."""
    from crewai.cli.checkpoint_cli import diff_checkpoints

    diff_checkpoints(ctx.obj["location"], id1, id2)


@checkpoint.command("prune")
@click.option(
    "--keep", type=int, default=None, help="Keep the N most recent checkpoints."
)
@click.option(
    "--older-than",
    default=None,
    help="Remove checkpoints older than duration (e.g. 7d, 24h, 30m).",
)
@click.option(
    "--dry-run", is_flag=True, help="Show what would be pruned without deleting."
)
@click.pass_context
def checkpoint_prune(
    ctx: click.Context, keep: int | None, older_than: str | None, dry_run: bool
) -> None:
    """Remove old checkpoints."""
    from crewai.cli.checkpoint_cli import prune_checkpoints

    prune_checkpoints(ctx.obj["location"], keep, older_than, dry_run)


if __name__ == "__main__":
    crewai()
@@ -25,13 +25,6 @@ from crewai.utilities.version import get_crewai_version
MIN_REQUIRED_VERSION: Final[Literal["0.98.0"]] = "0.98.0"

-# Static fallbacks used when an LLM call fails while generating descriptions
-# for chat inputs or the crew itself. Returning a generic description is
-# preferable to crashing the process (see issue #5510), since these strings are
-# only surfaced in the CrewAI chat UI.
-DEFAULT_INPUT_DESCRIPTION: Final[str] = "Input value for the crew's tasks and agents."
-DEFAULT_CREW_DESCRIPTION: Final[str] = "A CrewAI crew."


def check_conversational_crews_version(
    crewai_version: str, pyproject_data: dict[str, Any]
@@ -489,22 +482,7 @@ def generate_input_description_with_ai(
|
||||
"Context:\n"
|
||||
f"{context}"
|
||||
)
|
||||
try:
|
||||
response = chat_llm.call(messages=[{"role": "user", "content": prompt}])
|
||||
except Exception as e:
|
||||
# The LLM call can fail for many transient reasons (network, rate
|
||||
# limits, provider outages, misconfigured credentials, etc.). This
|
||||
# function is called at import time by downstream consumers such as
|
||||
# `ag_ui_crewai.crews.ChatWithCrewFlow`, so letting the exception
|
||||
# propagate would crash the containing process before any HTTP server
|
||||
# has a chance to bind to its port. Fall back to a generic description
|
||||
# rather than taking down the whole process (see issue #5510).
|
||||
click.secho(
|
||||
f"Warning: Failed to generate AI description for input '{input_name}' "
|
||||
f"({type(e).__name__}: {e}). Falling back to a generic description.",
|
||||
fg="yellow",
|
||||
)
|
||||
return DEFAULT_INPUT_DESCRIPTION
|
||||
response = chat_llm.call(messages=[{"role": "user", "content": prompt}])
|
||||
return str(response).strip()
|
||||
|
||||
|
||||
@@ -554,16 +532,5 @@ def generate_crew_description_with_ai(crew: Crew, chat_llm: LLM | BaseLLM) -> st
|
||||
"Context:\n"
|
||||
f"{context}"
|
||||
)
|
||||
try:
|
||||
response = chat_llm.call(messages=[{"role": "user", "content": prompt}])
|
||||
except Exception as e:
|
||||
# See comment in `generate_input_description_with_ai`: falling back to
|
||||
# a generic description is preferable to crashing the process when the
|
||||
# LLM provider is temporarily unavailable (see issue #5510).
|
||||
click.secho(
|
||||
f"Warning: Failed to generate AI description for crew "
|
||||
f"({type(e).__name__}: {e}). Falling back to a generic description.",
|
||||
fg="yellow",
|
||||
)
|
||||
return DEFAULT_CREW_DESCRIPTION
|
||||
response = chat_llm.call(messages=[{"role": "user", "content": prompt}])
|
||||
return str(response).strip()
|
||||
|
||||
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
    "crewai[tools]==1.14.2rc1"
    "crewai[tools]==1.14.2"
]

[project.scripts]

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
    "crewai[tools]==1.14.2rc1"
    "crewai[tools]==1.14.2"
]

[project.scripts]

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.14"
dependencies = [
    "crewai[tools]==1.14.2rc1"
    "crewai[tools]==1.14.2"
]

[tool.crewai]

@@ -419,10 +419,32 @@ class Crew(FlowTrackable, BaseModel):

    def _restore_runtime(self) -> None:
        """Re-create runtime objects after restoring from a checkpoint."""
        from crewai.events.event_bus import crewai_event_bus

        started_task_ids: set[str] = set()
        state = crewai_event_bus._runtime_state
        if state is not None:
            for node in state.event_record.nodes.values():
                if node.event.type == "task_started" and node.event.task_id:
                    started_task_ids.add(node.event.task_id)

        resuming_task_agent_roles: set[str] = set()
        for task in self.tasks:
            if (
                task.output is None
                and task.agent is not None
                and str(task.id) in started_task_ids
            ):
                resuming_task_agent_roles.add(task.agent.role)

        for agent in self.agents:
            agent.crew = self
            executor = agent.agent_executor
            if executor and executor.messages:
            if (
                executor
                and executor.messages
                and agent.role in resuming_task_agent_roles
            ):
                executor.crew = self
                executor.agent = agent
                executor._resuming = True

@@ -120,6 +120,12 @@ def _do_checkpoint(
    )
    state._chain_lineage(cfg.provider, location)

    checkpoint_id: str = cfg.provider.extract_id(location)
    msg: str = (
        f"Checkpoint saved. Resume with: crewai checkpoint resume {checkpoint_id}"
    )
    logger.info(msg)

    if cfg.max_checkpoints is not None:
        cfg.provider.prune(cfg.location, cfg.max_checkpoints, branch=state._branch)

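`extract_id` is provider-specific and not shown in this diff. Judging by the checkpoint file names used in the tests later in this compare (`<timestamp>_<shorthash>_p-<parent>.json`), a JSON-provider version could plausibly look like the sketch below. The function name and the naming convention are assumptions for illustration, not the actual provider code.

```python
import os


def extract_id_from_json_path(location: str) -> str:
    """Derive a checkpoint id from a file name such as
    ``20260101T000000_aaaa1111_p-none.json`` -> ``20260101T000000_aaaa1111``.

    Assumes the ``<timestamp>_<shorthash>_p-<parent>.json`` naming seen in
    the test fixtures of this compare.
    """
    stem = os.path.splitext(os.path.basename(location))[0]
    parts = stem.split("_")
    # The id is the sortable UTC timestamp plus the short hash.
    return "_".join(parts[:2])
```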
@@ -8,7 +8,14 @@ from __future__ import annotations

from typing import Annotated, Any, Literal

from pydantic import BaseModel, BeforeValidator, Field, PlainSerializer, PrivateAttr
from pydantic import (
    BaseModel,
    BeforeValidator,
    Field,
    PrivateAttr,
    SerializationInfo,
    field_serializer,
)

from crewai.events.base_events import BaseEvent
from crewai.utilities.rw_lock import RWLock
@@ -66,10 +73,24 @@ class EventNode(BaseModel):
    event: Annotated[
        BaseEvent,
        BeforeValidator(_resolve_event),
        PlainSerializer(lambda v: v.model_dump()),
    ]
    edges: dict[EdgeType, list[str]] = Field(default_factory=dict)

    @field_serializer("event")
    def _serialize_event(
        self, value: BaseEvent, info: SerializationInfo
    ) -> dict[str, Any]:
        """Dump the event, propagating JSON mode to nested fields.

        Without this the default ``v.model_dump()`` discards JSON mode, so any
        non-JSON-native nested values (e.g. ``type[BaseModel]`` references on
        a Task payload) are passed raw to ``json.dumps`` and explode with
        ``PydanticSerializationError``.
        """
        if info.mode == "json":
            return value.model_dump(mode="json")
        return value.model_dump()

    def add_edge(self, edge_type: EdgeType, target_id: str) -> None:
        """Add an edge from this node to another.

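The failure mode this serializer guards against can be reproduced without pydantic: `json.dumps` rejects raw class objects, so a mode-aware dump has to convert them to something JSON-native first. A stdlib-only sketch of the idea (the `EventStandIn` class is illustrative, not the actual `EventNode` code):

```python
import json


class EventStandIn:
    """Stand-in for an event whose payload carries a class reference."""

    def __init__(self, model_cls: type) -> None:
        self.model_cls = model_cls

    def dump(self, mode: str = "python") -> dict:
        # "python" mode keeps the raw class object (fine in memory);
        # "json" mode converts it to a JSON-native dotted path, which is
        # what propagating mode="json" into nested fields achieves.
        if mode == "json":
            path = f"{self.model_cls.__module__}.{self.model_cls.__qualname__}"
            return {"model_cls": path}
        return {"model_cls": self.model_cls}


event = EventStandIn(dict)
try:
    json.dumps(event.dump())  # raw class object: not JSON-serializable
    raised = False
except TypeError:
    raised = True
```

The python-mode dump raises `TypeError` in `json.dumps`, while the json-mode dump serializes cleanly.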
@@ -28,6 +28,7 @@ from pydantic import (
    BaseModel,
    BeforeValidator,
    Field,
    PlainSerializer,
    PrivateAttr,
    field_validator,
    model_validator,
@@ -86,6 +87,58 @@ from crewai.utilities.printer import PRINTER
from crewai.utilities.string_utils import interpolate_only


def _serialize_class_ref(value: Any) -> str | None:
    """Serialize a class reference to a ``module.qualname`` string.

    Pydantic's default JSON serializer cannot handle ``type[BaseModel]``
    and similar class-valued fields, which raises
    ``PydanticSerializationError`` during checkpointing. We emit a
    dotted import path so the value is round-trippable.
    """
    if value is None:
        return None
    if isinstance(value, str):
        return value
    if isinstance(value, type):
        module = getattr(value, "__module__", None)
        qualname = getattr(value, "__qualname__", None) or getattr(
            value, "__name__", None
        )
        if module and qualname:
            return f"{module}.{qualname}"
        return None
    return None


def _validate_class_ref(value: Any) -> Any:
    """Resolve a serialized class reference back into a class.

    Accepts an existing class/``None`` unchanged. A string is interpreted as
    a ``module.qualname`` path; if it cannot be imported, ``None`` is
    returned so restoration degrades gracefully (user code re-instantiates
    the Task with the correct class anyway).
    """
    if value is None or isinstance(value, type):
        return value
    if isinstance(value, str):
        import importlib

        module_path, _, qualname = value.rpartition(".")
        if not module_path or not qualname:
            return None
        try:
            module = importlib.import_module(module_path)
        except ImportError:
            return None
        obj: Any = module
        for part in qualname.split("."):
            obj = getattr(obj, part, None)
            if obj is None:
                return None
        return obj if isinstance(obj, type) else None
    return value

class Task(BaseModel):
    """Class that represents a task to be executed.

@@ -141,15 +194,27 @@ class Task(BaseModel):
        description="Whether the task should be executed asynchronously or not.",
        default=False,
    )
    output_json: type[BaseModel] | None = Field(
    output_json: Annotated[
        type[BaseModel] | None,
        BeforeValidator(_validate_class_ref),
        PlainSerializer(_serialize_class_ref, return_type=str | None, when_used="json"),
    ] = Field(
        description="A Pydantic model to be used to create a JSON output.",
        default=None,
    )
    output_pydantic: type[BaseModel] | None = Field(
    output_pydantic: Annotated[
        type[BaseModel] | None,
        BeforeValidator(_validate_class_ref),
        PlainSerializer(_serialize_class_ref, return_type=str | None, when_used="json"),
    ] = Field(
        description="A Pydantic model to be used to create a Pydantic output.",
        default=None,
    )
    response_model: type[BaseModel] | None = Field(
    response_model: Annotated[
        type[BaseModel] | None,
        BeforeValidator(_validate_class_ref),
        PlainSerializer(_serialize_class_ref, return_type=str | None, when_used="json"),
    ] = Field(
        description="A Pydantic model for structured LLM outputs using native provider features.",
        default=None,
    )
@@ -189,7 +254,11 @@ class Task(BaseModel):
        description="Whether the task should instruct the agent to return the final answer formatted in Markdown",
        default=False,
    )
    converter_cls: type[Converter] | None = Field(
    converter_cls: Annotated[
        type[Converter] | None,
        BeforeValidator(_validate_class_ref),
        PlainSerializer(_serialize_class_ref, return_type=str | None, when_used="json"),
    ] = Field(
        description="A converter class used to export structured output",
        default=None,
    )
@@ -1052,6 +1121,27 @@ Follow these guidelines:
            tools=cloned_tools,
        )

    def _normalize_agent_result(
        self, result: Any
    ) -> tuple[str, BaseModel | None, dict[str, Any] | None]:
        """Convert an agent execution result into ``(raw, pydantic, json)``.

        The agent may return either a string or a Pydantic model (when the
        task uses ``output_pydantic``/``response_model`` and the LLM returned
        a structured payload). ``TaskOutput.raw`` is typed as ``str`` so the
        Pydantic model has to be serialized to JSON before it can be stored
        on a ``TaskOutput`` (e.g. during a guardrail-triggered retry).
        """
        if isinstance(result, BaseModel):
            raw = result.model_dump_json()
            if self.output_pydantic:
                return raw, result, None
            if self.output_json:
                return raw, None, result.model_dump()
            return raw, None, None
        pydantic_output, json_output = self._export_output(result)
        return result, pydantic_output, json_output

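The branching in `_normalize_agent_result` is easiest to see with a stand-in model type. The sketch below mimics the shape of the method outside the Task class; `DummyModel` and `normalize` are illustrative names, and the string branch elides the `_export_output` parsing that the real method delegates to:

```python
import json


class DummyModel:
    """Minimal stand-in for a pydantic model returned by an agent."""

    def __init__(self, **data):
        self.__dict__.update(data)

    def model_dump(self) -> dict:
        return dict(self.__dict__)

    def model_dump_json(self) -> str:
        return json.dumps(self.model_dump())


def normalize(result, output_pydantic: bool, output_json: bool):
    """Return ``(raw, pydantic, json)`` in the spirit of _normalize_agent_result."""
    if isinstance(result, DummyModel):
        # raw must be a str, so serialize the model up front.
        raw = result.model_dump_json()
        if output_pydantic:
            return raw, result, None
        if output_json:
            return raw, None, result.model_dump()
        return raw, None, None
    # Plain string: the real method parses it via _export_output here.
    return result, None, None
```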
    def _export_output(
        self, result: str
    ) -> tuple[BaseModel | None, dict[str, Any] | None]:
@@ -1241,12 +1331,12 @@ Follow these guidelines:
            tools=tools,
        )

        pydantic_output, json_output = self._export_output(result)
        raw, pydantic_output, json_output = self._normalize_agent_result(result)
        task_output = TaskOutput(
            name=self.name or self.description,
            description=self.description,
            expected_output=self.expected_output,
            raw=result,
            raw=raw,
            pydantic=pydantic_output,
            json_dict=json_output,
            agent=agent.role,
@@ -1337,12 +1427,12 @@ Follow these guidelines:
            tools=tools,
        )

        pydantic_output, json_output = self._export_output(result)
        raw, pydantic_output, json_output = self._normalize_agent_result(result)
        task_output = TaskOutput(
            name=self.name or self.description,
            description=self.description,
            expected_output=self.expected_output,
            raw=result,
            raw=raw,
            pydantic=pydantic_output,
            json_dict=json_output,
            agent=agent.role,

@@ -1,139 +0,0 @@
"""Tests for the crewai.cli.crew_chat description generators.

These tests focus on the defensive behaviour introduced for issue #5510:
``generate_input_description_with_ai`` and ``generate_crew_description_with_ai``
must never propagate LLM call failures to their callers, since they are
commonly invoked at container / module import time via downstream
integrations such as ``ag_ui_crewai.crews.ChatWithCrewFlow``. A transient LLM
provider hiccup should not crash the containing process before it has a chance
to bind to its HTTP port.
"""

from __future__ import annotations

from unittest.mock import MagicMock

import pytest

from crewai.agent import Agent
from crewai.cli.crew_chat import (
    DEFAULT_CREW_DESCRIPTION,
    DEFAULT_INPUT_DESCRIPTION,
    generate_crew_chat_inputs,
    generate_crew_description_with_ai,
    generate_input_description_with_ai,
)
from crewai.crew import Crew
from crewai.task import Task


def _make_crew_with_topic_input() -> Crew:
    """Build a minimal Crew whose task/agent reference a ``{topic}`` input."""
    agent = Agent(
        role="Researcher on {topic}",
        goal="Investigate the latest developments about {topic}",
        backstory="An expert analyst focused on {topic}",
        allow_delegation=False,
    )
    task = Task(
        description="Write a short report about {topic}",
        expected_output="A concise summary about {topic}",
        agent=agent,
    )
    return Crew(agents=[agent], tasks=[task])


def test_generate_input_description_returns_llm_response_on_success() -> None:
    """Happy path: the LLM response is stripped and returned verbatim."""
    crew = _make_crew_with_topic_input()
    chat_llm = MagicMock()
    chat_llm.call.return_value = " The topic to research. "

    result = generate_input_description_with_ai("topic", crew, chat_llm)

    assert result == "The topic to research."
    chat_llm.call.assert_called_once()


@pytest.mark.parametrize(
    "exc",
    [
        ConnectionError("connection refused"),
        TimeoutError("llm timed out"),
        RuntimeError("litellm APIError: 500"),
    ],
)
def test_generate_input_description_falls_back_on_llm_failure(exc: Exception) -> None:
    """If the LLM call raises, we must return the static fallback instead of
    propagating the exception. This is the core fix for issue #5510.
    """
    crew = _make_crew_with_topic_input()
    chat_llm = MagicMock()
    chat_llm.call.side_effect = exc

    result = generate_input_description_with_ai("topic", crew, chat_llm)

    assert result == DEFAULT_INPUT_DESCRIPTION


def test_generate_input_description_still_raises_when_no_context() -> None:
    """The fallback only applies to LLM call failures. When there is no
    context at all for the given input, we still raise ``ValueError`` so that
    callers can detect a truly malformed crew definition.
    """
    crew = _make_crew_with_topic_input()
    chat_llm = MagicMock()

    with pytest.raises(ValueError, match="No context found for input"):
        generate_input_description_with_ai("does_not_exist", crew, chat_llm)

    chat_llm.call.assert_not_called()


def test_generate_crew_description_returns_llm_response_on_success() -> None:
    crew = _make_crew_with_topic_input()
    chat_llm = MagicMock()
    chat_llm.call.return_value = " Research topics and produce reports. "

    result = generate_crew_description_with_ai(crew, chat_llm)

    assert result == "Research topics and produce reports."
    chat_llm.call.assert_called_once()


@pytest.mark.parametrize(
    "exc",
    [
        ConnectionError("connection refused"),
        TimeoutError("llm timed out"),
        RuntimeError("litellm APIError: 500"),
    ],
)
def test_generate_crew_description_falls_back_on_llm_failure(exc: Exception) -> None:
    crew = _make_crew_with_topic_input()
    chat_llm = MagicMock()
    chat_llm.call.side_effect = exc

    result = generate_crew_description_with_ai(crew, chat_llm)

    assert result == DEFAULT_CREW_DESCRIPTION


def test_generate_crew_chat_inputs_never_crashes_on_llm_failure() -> None:
    """End-to-end: a crew with at least one required input placeholder and a
    chat LLM whose ``.call`` always raises should still yield a valid
    ``ChatInputs`` object populated with the static fallbacks, rather than
    bubbling up the exception. This is the exact scenario described in
    issue #5510 for ``ChatWithCrewFlow.__init__``.
    """
    crew = _make_crew_with_topic_input()
    chat_llm = MagicMock()
    chat_llm.call.side_effect = ConnectionError("transient outage")

    chat_inputs = generate_crew_chat_inputs(crew, "MyCrew", chat_llm)

    assert chat_inputs.crew_name == "MyCrew"
    assert chat_inputs.crew_description == DEFAULT_CREW_DESCRIPTION
    assert len(chat_inputs.inputs) == 1
    assert chat_inputs.inputs[0].name == "topic"
    assert chat_inputs.inputs[0].description == DEFAULT_INPUT_DESCRIPTION

@@ -523,6 +523,31 @@ class TestKickoffFromCheckpoint:
        assert isinstance(crew.checkpoint, CheckpointConfig)
        assert crew.checkpoint.on_events == ["task_completed"]

    def test_agent_kickoff_delegates_to_from_checkpoint(self) -> None:
        mock_restored = MagicMock(spec=Agent)
        mock_restored.kickoff.return_value = "agent_result"

        cfg = CheckpointConfig(restore_from="/path/to/agent_cp.json")
        with patch.object(Agent, "from_checkpoint", return_value=mock_restored):
            agent = Agent(role="r", goal="g", backstory="b", llm="gpt-4o-mini")
            result = agent.kickoff(messages="hello", from_checkpoint=cfg)

        mock_restored.kickoff.assert_called_once_with(
            messages="hello", response_format=None, input_files=None
        )
        assert mock_restored.checkpoint.restore_from is None
        assert result == "agent_result"

    def test_agent_kickoff_config_only_sets_checkpoint(self) -> None:
        cfg = CheckpointConfig(on_events=["lite_agent_execution_completed"])
        agent = Agent(role="r", goal="g", backstory="b", llm="gpt-4o-mini")
        assert agent.checkpoint is None
        with patch.object(Agent, "_prepare_kickoff", side_effect=RuntimeError("stop")):
            with pytest.raises(RuntimeError, match="stop"):
                agent.kickoff(messages="hello", from_checkpoint=cfg)
        assert isinstance(agent.checkpoint, CheckpointConfig)
        assert agent.checkpoint.on_events == ["lite_agent_execution_completed"]

    def test_flow_kickoff_delegates_to_from_checkpoint(self) -> None:
        mock_restored = MagicMock(spec=Flow)
        mock_restored.kickoff.return_value = "flow_result"
@@ -537,3 +562,110 @@ class TestKickoffFromCheckpoint:
        )
        assert mock_restored.checkpoint.restore_from is None
        assert result == "flow_result"


# ---------- Pydantic model serialization in checkpoints (issue #5544) ----------


class TestPydanticTypeFieldSerialization:
    """Issue #5544 (Issue I): checkpoint serialization must not blow up on
    fields that hold ``type[BaseModel]`` references — e.g. a Task's
    ``output_pydantic`` / ``output_json`` / ``response_model`` — nor on
    events that wrap such tasks in their payload.
    """

    def test_task_dumps_type_class_field_to_dotted_path(self) -> None:
        from pydantic import BaseModel as PydanticModel

        class FamilyList(PydanticModel):
            families: list[str]

        task = Task(
            description="d",
            expected_output="e",
            output_pydantic=FamilyList,
        )
        dumped = task.model_dump(mode="json")
        # The class is serialized as ``module.qualname``
        assert isinstance(dumped["output_pydantic"], str)
        assert dumped["output_pydantic"].endswith("FamilyList")

    def test_task_round_trip_restores_class_reference(self) -> None:
        from pydantic import BaseModel as PydanticModel

        global _CheckpointReplyModel  # noqa: PLW0603

        class _CheckpointReplyModel(PydanticModel):
            value: int

        task = Task(
            description="d",
            expected_output="e",
            output_pydantic=_CheckpointReplyModel,
        )
        dumped_json = task.model_dump_json()
        restored = Task.model_validate_json(
            dumped_json, context={"from_checkpoint": True}
        )
        assert restored.output_pydantic is _CheckpointReplyModel

    def test_task_round_trip_unknown_class_path_degrades_gracefully(self) -> None:
        # Mirrors a checkpoint produced in a different process / repo where
        # the class is no longer importable. We accept a None restore over
        # blowing up — user code re-instantiates the Task with the right
        # class anyway.
        restored = Task.model_validate(
            {
                "description": "d",
                "expected_output": "e",
                "output_pydantic": "no_such_module.NoSuchClass",
            },
            context={"from_checkpoint": True},
        )
        assert restored.output_pydantic is None

    def test_runtime_state_with_event_carrying_pydantic_task_dumps_to_json(
        self,
    ) -> None:
        """End-to-end regression for issue #5544 Issue I.

        A Crew + Task with ``output_pydantic`` produces events whose payload
        carries the Task. Without the field-level JSON serialization on
        ``EventNode.event``, this dump explodes with PydanticSerializationError
        on the embedded ``type[BaseModel]`` reference.
        """
        from pydantic import BaseModel as PydanticModel

        from crewai import Agent, Crew
        from crewai.events.types.task_events import TaskCompletedEvent
        from crewai.tasks.task_output import TaskOutput

        class FamilyList(PydanticModel):
            families: list[str]

        agent = Agent(role="r", goal="g", backstory="b", llm="gpt-4o-mini")
        task = Task(
            description="d",
            expected_output="e",
            agent=agent,
            output_pydantic=FamilyList,
        )
        crew = Crew(agents=[agent], tasks=[task], verbose=False)
        state = RuntimeState(root=[crew])

        event = TaskCompletedEvent(
            task=task,
            output=TaskOutput(
                description="d",
                expected_output="e",
                raw="{}",
                agent="r",
            ),
        )
        state._event_record.add(event)

        # Should not raise PydanticSerializationError.
        payload = state.model_dump(mode="json")
        # And it should round-trip through json.dumps (the actual checkpoint
        # writer does this immediately after).
        json.dumps(payload)

lib/crewai/tests/test_checkpoint_cli.py
@@ -0,0 +1,402 @@
"""Tests for checkpoint CLI commands."""

from __future__ import annotations

import json
import os
import sqlite3
import tempfile
import time
from datetime import datetime, timedelta, timezone
from typing import Any
from unittest.mock import MagicMock, patch

import pytest

from crewai.cli.checkpoint_cli import (
    _parse_checkpoint_json,
    _parse_duration,
    _prune_json,
    _prune_sqlite,
    _resolve_checkpoint,
    _task_list_from_meta,
    diff_checkpoints,
    prune_checkpoints,
    resume_checkpoint,
)


def _make_checkpoint_data(
    tasks_completed: int = 2,
    tasks_total: int = 4,
    trigger: str = "task_completed",
    branch: str = "main",
    parent_id: str | None = None,
    entity_type: str = "crew",
    name: str = "test_crew",
    inputs: dict[str, Any] | None = None,
) -> str:
    tasks: list[dict[str, Any]] = []
    for i in range(tasks_total):
        t: dict[str, Any] = {
            "description": f"Task {i + 1} description",
            "expected_output": f"Output {i + 1}",
        }
        if i < tasks_completed:
            t["output"] = {"raw": f"Result of task {i + 1}"}
        else:
            t["output"] = None
        tasks.append(t)

    data: dict[str, Any] = {
        "entities": [
            {
                "entity_type": entity_type,
                "name": name,
                "id": "abc12345-1234-1234-1234-abcdef012345",
                "tasks": tasks,
                "agents": [],
                "checkpoint_inputs": inputs or {},
            }
        ],
        "event_record": {"nodes": {f"node_{i}": {} for i in range(3)}},
        "trigger": trigger,
        "branch": branch,
        "parent_id": parent_id,
    }
    return json.dumps(data)


def _write_json_checkpoint(
    base_dir: str,
    branch: str = "main",
    name: str | None = None,
    data: str | None = None,
    tasks_completed: int = 2,
    inputs: dict[str, Any] | None = None,
) -> str:
    branch_dir = os.path.join(base_dir, branch)
    os.makedirs(branch_dir, exist_ok=True)
    if name is None:
        ts = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
        name = f"{ts}_abcd1234_p-none.json"
    path = os.path.join(branch_dir, name)
    if data is None:
        data = _make_checkpoint_data(tasks_completed=tasks_completed, inputs=inputs)
    with open(path, "w") as f:
        f.write(data)
    return path


def _create_sqlite_checkpoint(
    db_path: str,
    checkpoint_id: str | None = None,
    data: str | None = None,
    tasks_completed: int = 2,
    branch: str = "main",
    inputs: dict[str, Any] | None = None,
) -> str:
    if checkpoint_id is None:
        ts = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
        checkpoint_id = f"{ts}_abcd1234"
    if data is None:
        data = _make_checkpoint_data(
            tasks_completed=tasks_completed, branch=branch, inputs=inputs
        )
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            """CREATE TABLE IF NOT EXISTS checkpoints (
                id TEXT PRIMARY KEY,
                created_at TEXT NOT NULL,
                parent_id TEXT,
                branch TEXT NOT NULL DEFAULT 'main',
                data JSONB NOT NULL
            )"""
        )
        conn.execute(
            "INSERT INTO checkpoints (id, created_at, parent_id, branch, data) "
            "VALUES (?, ?, ?, ?, jsonb(?))",
            (checkpoint_id, checkpoint_id.split("_")[0], None, branch, data),
        )
        conn.commit()
    return checkpoint_id


class TestParseDuration:
    def test_days(self) -> None:
        assert _parse_duration("7d") == timedelta(days=7)

    def test_hours(self) -> None:
        assert _parse_duration("24h") == timedelta(hours=24)

    def test_minutes(self) -> None:
        assert _parse_duration("30m") == timedelta(minutes=30)

    def test_invalid_raises(self) -> None:
        with pytest.raises(Exception):
            _parse_duration("abc")

    def test_no_unit_raises(self) -> None:
        with pytest.raises(Exception):
            _parse_duration("7")

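Per the tests above, `_parse_duration` accepts `7d`/`24h`/`30m` and rejects bare numbers or junk. An equivalent standalone parser (a sketch, not the crewAI implementation) is:

```python
import re
from datetime import timedelta

_UNITS = {"d": "days", "h": "hours", "m": "minutes"}


def parse_duration(value: str) -> timedelta:
    """Parse '7d', '24h' or '30m' into a timedelta; raise on anything else."""
    match = re.fullmatch(r"(\d+)([dhm])", value)
    if match is None:
        raise ValueError(f"invalid duration: {value!r}")
    amount, unit = match.groups()
    return timedelta(**{_UNITS[unit]: int(amount)})
```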
class TestResolveCheckpoint:
    def test_json_latest(self) -> None:
        with tempfile.TemporaryDirectory() as d:
            _write_json_checkpoint(d, name="20260101T000000_aaaa1111_p-none.json")
            time.sleep(0.01)
            path2 = _write_json_checkpoint(
                d, name="20260102T000000_bbbb2222_p-none.json", tasks_completed=3
            )
            meta = _resolve_checkpoint(d, None)
            assert meta is not None
            assert meta["path"] == path2

    def test_json_by_id(self) -> None:
        with tempfile.TemporaryDirectory() as d:
            _write_json_checkpoint(d, name="20260101T000000_aaaa1111_p-none.json")
            _write_json_checkpoint(d, name="20260102T000000_bbbb2222_p-none.json")
            meta = _resolve_checkpoint(d, "aaaa1111")
            assert meta is not None
            assert "aaaa1111" in meta["name"]

    def test_json_not_found(self) -> None:
        with tempfile.TemporaryDirectory() as d:
            _write_json_checkpoint(d)
            assert _resolve_checkpoint(d, "nonexistent") is None

    def test_sqlite_latest(self) -> None:
        with tempfile.TemporaryDirectory() as d:
            db_path = os.path.join(d, "test.db")
            _create_sqlite_checkpoint(db_path, "20260101T000000_aaaa1111")
            _create_sqlite_checkpoint(
                db_path, "20260102T000000_bbbb2222", tasks_completed=3
            )
            meta = _resolve_checkpoint(db_path, None)
            assert meta is not None
            assert "bbbb2222" in meta["name"]

    def test_sqlite_by_id(self) -> None:
        with tempfile.TemporaryDirectory() as d:
            db_path = os.path.join(d, "test.db")
            _create_sqlite_checkpoint(db_path, "20260101T000000_aaaa1111")
            _create_sqlite_checkpoint(db_path, "20260102T000000_bbbb2222")
            meta = _resolve_checkpoint(db_path, "20260101T000000_aaaa1111")
            assert meta is not None
            assert "aaaa1111" in meta["name"]

    def test_sqlite_partial_id(self) -> None:
        with tempfile.TemporaryDirectory() as d:
            db_path = os.path.join(d, "test.db")
            _create_sqlite_checkpoint(db_path, "20260101T000000_aaaa1111")
            _create_sqlite_checkpoint(db_path, "20260102T000000_bbbb2222")
            meta = _resolve_checkpoint(db_path, "aaaa1111")
            assert meta is not None
            assert "aaaa1111" in meta["name"]

    def test_nonexistent(self) -> None:
        assert _resolve_checkpoint("/nonexistent/path", None) is None

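The SQLite lookups exercised above reduce to two queries over the `checkpoints` table: newest row first, and a substring match for partial ids. A self-contained sketch against an in-memory database (schema adapted from the test helper; `data` is simplified to TEXT here since the `jsonb()` function needs a recent SQLite, and the queries are plausible equivalents, not the actual `_resolve_checkpoint` code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE checkpoints (
        id TEXT PRIMARY KEY,
        created_at TEXT NOT NULL,
        parent_id TEXT,
        branch TEXT NOT NULL DEFAULT 'main',
        data TEXT NOT NULL
    )"""
)
rows = [
    ("20260101T000000_aaaa1111", "20260101T000000", None, "main", "{}"),
    ("20260102T000000_bbbb2222", "20260102T000000", None, "main", "{}"),
]
conn.executemany("INSERT INTO checkpoints VALUES (?, ?, ?, ?, ?)", rows)

# Latest checkpoint: ids begin with a lexically sortable UTC timestamp.
latest = conn.execute(
    "SELECT id FROM checkpoints ORDER BY created_at DESC LIMIT 1"
).fetchone()[0]

# Partial-id match, e.g. the short hash on its own.
partial = conn.execute(
    "SELECT id FROM checkpoints WHERE id LIKE ?", ("%aaaa1111%",)
).fetchone()[0]
```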
class TestTaskListFromMeta:
    def test_flattens_tasks(self) -> None:
        data = _make_checkpoint_data(tasks_completed=2, tasks_total=3)
        meta = _parse_checkpoint_json(data, "test")
        tasks = _task_list_from_meta(meta)
        assert len(tasks) == 3
        assert tasks[0]["completed"] is True
        assert tasks[2]["completed"] is False

    def test_empty_entities(self) -> None:
        assert _task_list_from_meta({"entities": []}) == []


class TestDiffCheckpoints:
    def test_diff_shows_status_change(self, capsys: pytest.CaptureFixture[str]) -> None:
        with tempfile.TemporaryDirectory() as d:
            _write_json_checkpoint(
                d, name="20260101T000000_aaaa1111_p-none.json", tasks_completed=1
            )
            _write_json_checkpoint(
                d, name="20260102T000000_bbbb2222_p-none.json", tasks_completed=3
            )
            diff_checkpoints(d, "aaaa1111", "bbbb2222")
            out = capsys.readouterr().out
            assert "---" in out
            assert "+++" in out
            assert "status:" in out or "pending -> done" in out

    def test_diff_shows_output_change(self, capsys: pytest.CaptureFixture[str]) -> None:
        with tempfile.TemporaryDirectory() as d:
            data1 = _make_checkpoint_data(tasks_completed=2)
            data2 = json.loads(data1)
            data2["entities"][0]["tasks"][0]["output"]["raw"] = "Updated result"
            _write_json_checkpoint(
                d,
                name="20260101T000000_aaaa1111_p-none.json",
                data=json.dumps(json.loads(data1)),
            )
            _write_json_checkpoint(
                d,
                name="20260102T000000_bbbb2222_p-none.json",
                data=json.dumps(data2),
            )
            diff_checkpoints(d, "aaaa1111", "bbbb2222")
            out = capsys.readouterr().out
            assert "output:" in out

    def test_diff_not_found(self, capsys: pytest.CaptureFixture[str]) -> None:
        with tempfile.TemporaryDirectory() as d:
            _write_json_checkpoint(d, name="20260101T000000_aaaa1111_p-none.json")
            diff_checkpoints(d, "aaaa1111", "nonexistent")
            out = capsys.readouterr().out
            assert "not found" in out

    def test_diff_input_change(self, capsys: pytest.CaptureFixture[str]) -> None:
        with tempfile.TemporaryDirectory() as d:
            _write_json_checkpoint(
                d,
                name="20260101T000000_aaaa1111_p-none.json",
                inputs={"topic": "AI"},
            )
            _write_json_checkpoint(
                d,
                name="20260102T000000_bbbb2222_p-none.json",
                inputs={"topic": "ML"},
            )
            diff_checkpoints(d, "aaaa1111", "bbbb2222")
            out = capsys.readouterr().out
            assert "Inputs:" in out
            assert "AI" in out
            assert "ML" in out


class TestPruneJson:
    def test_keep_n(self) -> None:
        with tempfile.TemporaryDirectory() as d:
            for i in range(5):
                _write_json_checkpoint(
                    d, name=f"2026010{i + 1}T000000_aaa{i}1111_p-none.json"
                )
                time.sleep(0.01)
            deleted = _prune_json(d, keep=2, older_than=None)
            assert deleted == 3
            remaining = []
|
||||
for root, _, files in os.walk(d):
|
||||
remaining.extend(files)
|
||||
assert len(remaining) == 2
|
||||
|
||||
def test_older_than(self) -> None:
|
||||
with tempfile.TemporaryDirectory() as d:
|
||||
old_path = _write_json_checkpoint(
|
||||
d, name="20250101T000000_old01111_p-none.json"
|
||||
)
|
||||
os.utime(old_path, (0, 0))
|
||||
_write_json_checkpoint(d, name="20990101T000000_new01111_p-none.json")
|
||||
deleted = _prune_json(d, keep=None, older_than=timedelta(days=1))
|
||||
assert deleted == 1
|
||||
|
||||
def test_empty_dir(self) -> None:
|
||||
with tempfile.TemporaryDirectory() as d:
|
||||
assert _prune_json(d, keep=2, older_than=None) == 0
|
||||
|
||||
def test_removes_empty_branch_dirs(self) -> None:
|
||||
with tempfile.TemporaryDirectory() as d:
|
||||
path = _write_json_checkpoint(
|
||||
d,
|
||||
branch="feature",
|
||||
name="20260101T000000_aaaa1111_p-none.json",
|
||||
)
|
||||
os.utime(path, (0, 0))
|
||||
_prune_json(d, keep=None, older_than=timedelta(days=1))
|
||||
assert not os.path.exists(os.path.join(d, "feature"))
|
||||
|
||||
|
||||
class TestPruneSqlite:
|
||||
def test_keep_n(self) -> None:
|
||||
with tempfile.TemporaryDirectory() as d:
|
||||
db_path = os.path.join(d, "test.db")
|
||||
for i in range(5):
|
||||
_create_sqlite_checkpoint(
|
||||
db_path, f"2026010{i + 1}T000000_aaa{i}1111"
|
||||
)
|
||||
deleted = _prune_sqlite(db_path, keep=2, older_than=None)
|
||||
assert deleted == 3
|
||||
with sqlite3.connect(db_path) as conn:
|
||||
count = conn.execute("SELECT COUNT(*) FROM checkpoints").fetchone()[0]
|
||||
assert count == 2
|
||||
|
||||
def test_older_than(self) -> None:
|
||||
with tempfile.TemporaryDirectory() as d:
|
||||
db_path = os.path.join(d, "test.db")
|
||||
_create_sqlite_checkpoint(db_path, "20200101T000000_old01111")
|
||||
_create_sqlite_checkpoint(db_path, "20990101T000000_new01111")
|
||||
deleted = _prune_sqlite(db_path, keep=None, older_than=timedelta(days=1))
|
||||
assert deleted >= 1
|
||||
with sqlite3.connect(db_path) as conn:
|
||||
count = conn.execute("SELECT COUNT(*) FROM checkpoints").fetchone()[0]
|
||||
assert count >= 1
|
||||
|
||||
|
||||
class TestPruneCommand:
|
||||
def test_no_options_shows_help(self, capsys: pytest.CaptureFixture[str]) -> None:
|
||||
with tempfile.TemporaryDirectory() as d:
|
||||
prune_checkpoints(d, keep=None, older_than=None)
|
||||
out = capsys.readouterr().out
|
||||
assert "Specify" in out
|
||||
|
||||
def test_dry_run_json(self, capsys: pytest.CaptureFixture[str]) -> None:
|
||||
with tempfile.TemporaryDirectory() as d:
|
||||
_write_json_checkpoint(d)
|
||||
prune_checkpoints(d, keep=1, older_than=None, dry_run=True)
|
||||
out = capsys.readouterr().out
|
||||
assert "Would prune" in out
|
||||
|
||||
def test_not_found(self, capsys: pytest.CaptureFixture[str]) -> None:
|
||||
prune_checkpoints("/nonexistent", keep=1, older_than=None)
|
||||
out = capsys.readouterr().out
|
||||
assert "Not a directory" in out
|
||||
|
||||
|
||||
class TestResumeCheckpoint:
|
||||
def test_not_found(self, capsys: pytest.CaptureFixture[str]) -> None:
|
||||
with tempfile.TemporaryDirectory() as d:
|
||||
resume_checkpoint(d, "nonexistent")
|
||||
out = capsys.readouterr().out
|
||||
assert "not found" in out
|
||||
|
||||
def test_no_checkpoints(self, capsys: pytest.CaptureFixture[str]) -> None:
|
||||
with tempfile.TemporaryDirectory() as d:
|
||||
resume_checkpoint(d, None)
|
||||
out = capsys.readouterr().out
|
||||
assert "No checkpoints" in out
|
||||
|
||||
|
||||
class TestDiscoverabilityMessage:
|
||||
def test_checkpoint_listener_logs_resume_hint(self) -> None:
|
||||
from crewai.state.checkpoint_listener import _do_checkpoint
|
||||
from crewai.state.runtime import RuntimeState
|
||||
|
||||
state = MagicMock(spec=RuntimeState)
|
||||
state.root = []
|
||||
state.model_dump.return_value = {"entities": [], "event_record": {"nodes": {}}}
|
||||
state._parent_id = None
|
||||
state._branch = "main"
|
||||
|
||||
cfg = MagicMock()
|
||||
cfg.location = "/tmp/cp"
|
||||
cfg.max_checkpoints = None
|
||||
cfg.provider.checkpoint.return_value = "/tmp/cp/main/20260101T000000_test1234_p-none.json"
|
||||
cfg.provider.extract_id.return_value = "20260101T000000_test1234"
|
||||
|
||||
with (
|
||||
patch("crewai.state.checkpoint_listener._prepare_entities"),
|
||||
patch("crewai.state.checkpoint_listener.logger") as mock_logger,
|
||||
):
|
||||
_do_checkpoint(state, cfg)
|
||||
|
||||
cfg.provider.extract_id.assert_called_once()
|
||||
mock_logger.info.assert_called_once()
|
||||
logged: str = mock_logger.info.call_args[0][0]
|
||||
assert "crewai checkpoint resume" in logged
|
||||
assert "20260101T000000_test1234" in logged
@@ -768,3 +768,59 @@ def test_per_guardrail_independent_retry_tracking():
    assert call_counts["g3"] == 1

    assert "G3(1)" in result.raw


def test_guardrail_retry_with_pydantic_agent_result():
    """Regression test for issue #5544 (Issue II).

    When a task has ``output_pydantic`` set and the LLM returns a structured
    Pydantic model, the agent's execute result is the Pydantic instance — not
    a string. On a guardrail retry, ``TaskOutput.raw`` is typed ``str``, so
    feeding the model directly to ``raw=`` blew up with a ValidationError and
    aborted the retry path. The retry should normalize the model to JSON
    before constructing ``TaskOutput``.
    """
    from pydantic import BaseModel

    class Family(BaseModel):
        family_id: int
        name: str
        size: int

    class FamilyList(BaseModel):
        families: list[Family]

    bad = FamilyList(families=[Family(family_id=1, name="X", size=2)])
    good = FamilyList(
        families=[Family(family_id=1, name="Smiths", size=2)]
    )

    def is_family_guardrail(result: TaskOutput) -> tuple[bool, str]:
        if result.pydantic is None:
            return (False, "No pydantic output")
        bad_names = [f for f in result.pydantic.families if len(f.name) < 3]
        if bad_names:
            return (False, "Family name too short, must be >= 3 chars")
        return (True, result)

    agent = Mock()
    agent.role = "test_agent"
    agent.execute_task.side_effect = [bad, good]
    agent.crew = None
    agent.last_messages = []

    task = create_smart_task(
        description="Test pydantic retry",
        expected_output="JSON list of families",
        output_pydantic=FamilyList,
        guardrails=[is_family_guardrail],
        guardrail_max_retries=2,
    )

    result = task.execute_sync(agent=agent)

    assert isinstance(result, TaskOutput)
    assert isinstance(result.raw, str)
    assert isinstance(result.pydantic, FamilyList)
    assert result.pydantic.families[0].name == "Smiths"
    assert agent.execute_task.call_count == 2
@@ -1,3 +1,3 @@
"""CrewAI development tools."""

__version__ = "1.14.2rc1"
__version__ = "1.14.2"

@@ -154,6 +154,117 @@ def check_git_clean() -> None:
        sys.exit(1)


def _branch_exists_local(branch: str, cwd: Path | None = None) -> bool:
    try:
        subprocess.run(  # noqa: S603
            ["git", "show-ref", "--verify", "--quiet", f"refs/heads/{branch}"],  # noqa: S607
            cwd=cwd,
            check=True,
            capture_output=True,
        )
        return True
    except subprocess.CalledProcessError:
        return False


def _branch_exists_remote(branch: str, cwd: Path | None = None) -> bool:
    try:
        output = run_command(["git", "ls-remote", "--heads", "origin", branch], cwd=cwd)
        return bool(output.strip())
    except subprocess.CalledProcessError:
        return False


def _open_pr_url_for_branch(branch: str, cwd: Path | None = None) -> str | None:
    """Return URL of open PR for branch, or None if no open PR exists."""
    try:
        url = run_command(
            [
                "gh",
                "pr",
                "list",
                "--head",
                branch,
                "--state",
                "open",
                "--json",
                "url",
                "--jq",
                ".[0].url // empty",
            ],
            cwd=cwd,
        )
        return url or None
    except subprocess.CalledProcessError:
        return None


def create_or_reset_branch(branch: str, cwd: Path | None = None) -> None:
    """Create ``branch`` from current HEAD, resetting any stale copy.

    If the branch exists locally or on origin, prompts the user to
    choose between resetting it or aborting. If an open PR exists on
    the branch, the prompt surfaces the PR URL and includes a
    close-and-reset option so in-flight work isn't silently clobbered.

    Raises:
        SystemExit: If the user declines to reset.
    """
    local_exists = _branch_exists_local(branch, cwd=cwd)
    remote_exists = _branch_exists_remote(branch, cwd=cwd)
    open_pr = _open_pr_url_for_branch(branch, cwd=cwd) if remote_exists else None

    if local_exists or remote_exists:
        if open_pr:
            console.print(
                f"\n[yellow]![/yellow] Branch [bold]{branch}[/bold] already has an open PR: {open_pr}"
            )
            prompt = "Close the PR, reset the branch, and continue?"
        else:
            where = []
            if local_exists:
                where.append("local")
            if remote_exists:
                where.append("remote")
            console.print(
                f"\n[yellow]![/yellow] Branch [bold]{branch}[/bold] already exists ({', '.join(where)}) with no open PR"
            )
            prompt = "Delete it and recreate?"

        if not Confirm.ask(prompt, default=False):
            console.print("[red]Aborted.[/red]")
            sys.exit(1)

        if open_pr:
            console.print(f"Closing PR {open_pr}...")
            run_command(
                ["gh", "pr", "close", branch, "--delete-branch"],
                cwd=cwd,
            )
            # `gh pr close --delete-branch` removes the remote branch
            # and, when checked out, the local branch too.
            local_exists = _branch_exists_local(branch, cwd=cwd)
            remote_exists = False

        if local_exists:
            current = run_command(
                ["git", "rev-parse", "--abbrev-ref", "HEAD"], cwd=cwd
            ).strip()
            if current == branch:
                console.print(
                    f"[yellow]![/yellow] Currently on {branch}, switching to main before delete"
                )
                run_command(["git", "checkout", "main"], cwd=cwd)
            console.print(f"[yellow]![/yellow] Deleting local branch {branch}")
            run_command(["git", "branch", "-D", branch], cwd=cwd)

        if remote_exists:
            console.print(f"[yellow]![/yellow] Deleting remote branch {branch}")
            run_command(["git", "push", "origin", "--delete", branch], cwd=cwd)

    run_command(["git", "checkout", "-b", branch], cwd=cwd)


def update_version_in_file(file_path: Path, new_version: str) -> bool:
    """Update __version__ attribute in a Python file.

@@ -980,7 +1091,7 @@ def _update_docs_and_create_pr(

    if docs_files_staged:
        docs_branch = f"docs/changelog-v{version}"
        run_command(["git", "checkout", "-b", docs_branch])
        create_or_reset_branch(docs_branch)
        for f in docs_files_staged:
            run_command(["git", "add", f])
        run_command(
@@ -1418,7 +1529,7 @@ def _release_enterprise(version: str, is_prerelease: bool, dry_run: bool) -> Non
    console.print("[green]✓[/green] Workspace synced")

    branch_name = f"feat/bump-version-{version}"
    run_command(["git", "checkout", "-b", branch_name], cwd=repo_dir)
    create_or_reset_branch(branch_name, cwd=repo_dir)
    run_command(["git", "add", "."], cwd=repo_dir)
    run_command(
        ["git", "commit", "-m", f"feat: bump versions to {version}"],
@@ -1616,18 +1727,20 @@ def bump(version: str, dry_run: bool, no_push: bool, no_commit: bool) -> None:
    for pkg in packages:
        console.print(f" - {pkg.name}")

    console.print(f"\nUpdating version to {version}...")
    _update_all_versions(cwd, lib_dir, version, packages, dry_run)

    if no_commit:
        console.print(f"\nUpdating version to {version}...")
        _update_all_versions(cwd, lib_dir, version, packages, dry_run)
        console.print("\nSkipping git operations (--no-commit flag set)")
    else:
        branch_name = f"feat/bump-version-{version}"
        if not dry_run:
            console.print(f"\nCreating branch {branch_name}...")
            run_command(["git", "checkout", "-b", branch_name])
            create_or_reset_branch(branch_name)
            console.print("[green]✓[/green] Branch created")

            console.print(f"\nUpdating version to {version}...")
            _update_all_versions(cwd, lib_dir, version, packages, dry_run)

            console.print("\nCommitting changes...")
            run_command(["git", "add", "."])
            run_command(
@@ -1643,6 +1756,8 @@ def bump(version: str, dry_run: bool, no_push: bool, no_commit: bool) -> None:
            console.print(
                f"[dim][DRY RUN][/dim] Would create branch: {branch_name}"
            )
            console.print(f"\nUpdating version to {version}...")
            _update_all_versions(cwd, lib_dir, version, packages, dry_run)
            console.print(
                f"[dim][DRY RUN][/dim] Would commit: feat: bump versions to {version}"
            )
@@ -1906,14 +2021,14 @@ def release(
    console.print(f"\n[bold cyan]Phase 1: Bumping versions to {version}[/bold cyan]")

    try:
        _update_all_versions(cwd, lib_dir, version, packages, dry_run)

        branch_name = f"feat/bump-version-{version}"
        if not dry_run:
            console.print(f"\nCreating branch {branch_name}...")
            run_command(["git", "checkout", "-b", branch_name])
            create_or_reset_branch(branch_name)
            console.print("[green]✓[/green] Branch created")

            _update_all_versions(cwd, lib_dir, version, packages, dry_run)

            console.print("\nCommitting changes...")
            run_command(["git", "add", "."])
            run_command(["git", "commit", "-m", f"feat: bump versions to {version}"])
@@ -1943,6 +2058,7 @@ def release(
            _poll_pr_until_merged(branch_name, "bump PR")
        else:
            console.print(f"[dim][DRY RUN][/dim] Would create branch: {branch_name}")
            _update_all_versions(cwd, lib_dir, version, packages, dry_run)
            console.print(
                f"[dim][DRY RUN][/dim] Would commit: feat: bump versions to {version}"
            )
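The branch-existence helpers in the release-script hunks above are thin wrappers over git plumbing: `git show-ref --verify --quiet refs/heads/<branch>` exits 0 exactly when the local ref exists. A standalone sketch of that probe, runnable against a throwaway repository (the function name here is illustrative, and the demo is skipped when git is unavailable):

```python
# Sketch of the `git show-ref` probe that _branch_exists_local wraps.
import shutil
import subprocess
import tempfile


def branch_exists(repo: str, branch: str) -> bool:
    """Return True iff ``refs/heads/<branch>`` exists in ``repo``."""
    proc = subprocess.run(
        ["git", "show-ref", "--verify", "--quiet", f"refs/heads/{branch}"],
        cwd=repo,
        capture_output=True,
    )
    return proc.returncode == 0


if shutil.which("git"):
    with tempfile.TemporaryDirectory() as repo:
        subprocess.run(["git", "init", "-q", "-b", "main"], cwd=repo, check=True)
        subprocess.run(
            ["git", "-c", "user.email=ci@example.com", "-c", "user.name=ci",
             "commit", "-q", "--allow-empty", "-m", "init"],
            cwd=repo, check=True,
        )
        subprocess.run(["git", "branch", "feature/x"], cwd=repo, check=True)
        assert branch_exists(repo, "feature/x")
        assert not branch_exists(repo, "missing")
```

Using `--quiet` and checking the return code avoids parsing output, which is why the real helper can simply catch `CalledProcessError` and return False.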
@@ -162,30 +162,36 @@ info = "Commits must follow Conventional Commits 1.0.0."


[tool.uv]
exclude-newer = "1 day"
# Pinned to include the security patch releases (authlib 1.6.11,
# langchain-text-splitters 1.1.2) uploaded on 2026-04-16.
exclude-newer = "2026-04-17"

# composio-core pins rich<14 but textual requires rich>=14.
# onnxruntime 1.24+ dropped Python 3.10 wheels; cap it so qdrant[fastembed] resolves on 3.10.
# fastembed 0.7.x and docling 2.63 cap pillow<12; the removed APIs don't affect them.
# langchain-core <1.2.28 has GHSA-926x-3r5x-gfhw (incomplete f-string validation).
# langchain-core <1.2.31 has GHSA-926x-3r5x-gfhw and is required by langchain-text-splitters 1.1.2+.
# langchain-text-splitters <1.1.2 has GHSA-fv5p-p927-qmxr (SSRF bypass in split_text_from_url).
# transformers 4.57.6 has CVE-2026-1839; force 5.4+ (docling 2.84 allows huggingface-hub>=1).
# cryptography 46.0.6 has CVE-2026-39892; force 46.0.7+.
# pypdf <6.10.1 has CVE-2026-40260 and GHSA-jj6c-8h6c-hppx; force 6.10.1+.
# pypdf <6.10.2 has GHSA-4pxv-j86v-mhcw, GHSA-7gw9-cf7v-778f, GHSA-x284-j5p8-9c5p; force 6.10.2+.
# uv <0.11.6 has GHSA-pjjw-68hj-v9mw; force 0.11.6+.
# python-multipart <0.0.26 has GHSA-mj87-hwqh-73pj; force 0.0.26+.
# langsmith <0.7.31 has GHSA-rr7j-v2q5-chgv (streaming token redaction bypass); force 0.7.31+.
# authlib <1.6.11 has GHSA-jj8c-mmj3-mmgv (CSRF bypass in cache-based state storage).
override-dependencies = [
    "rich>=13.7.1",
    "onnxruntime<1.24; python_version < '3.11'",
    "pillow>=12.1.1",
    "langchain-core>=1.2.28,<2",
    "langchain-core>=1.2.31,<2",
    "langchain-text-splitters>=1.1.2,<2",
    "urllib3>=2.6.3",
    "transformers>=5.4.0; python_version >= '3.10'",
    "cryptography>=46.0.7",
    "pypdf>=6.10.1,<7",
    "pypdf>=6.10.2,<7",
    "uv>=0.11.6,<1",
    "python-multipart>=0.0.26,<1",
    "langsmith>=0.7.31,<0.8",
    "authlib>=1.6.11",
]
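The comments above tie each floor in `override-dependencies` to a specific CVE or GHSA advisory. A small, illustrative check that an installed environment actually satisfies such a floor can be written against `importlib.metadata` (a simplification: real tooling should use PEP 440 comparison via `packaging.version`, since this sketch only compares leading numeric segments):

```python
# Illustrative check that installed distributions meet a minimum version,
# mirroring pins like "cryptography>=46.0.7" or "authlib>=1.6.11".
from importlib.metadata import PackageNotFoundError, version


def parse_numeric(ver: str) -> tuple[int, ...]:
    """Leading numeric release segments, e.g. "1.6.11" -> (1, 6, 11)."""
    parts: list[int] = []
    for piece in ver.split("."):
        digits = ""
        for ch in piece:
            if not ch.isdigit():
                break  # stop at pre-release suffixes like "2rc1"
            digits += ch
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)


def meets_floor(dist: str, floor: tuple[int, ...]) -> bool:
    """True if ``dist`` is installed at or above ``floor``."""
    try:
        return parse_numeric(version(dist)) >= floor
    except PackageNotFoundError:
        return False
```

For example, `meets_floor("cryptography", (46, 0, 7))` mirrors the `cryptography>=46.0.7` override.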

[tool.uv.workspace]


uv.lock (generated, 33 lines changed)
@@ -13,8 +13,7 @@ resolution-markers = [
]

[options]
exclude-newer = "2026-04-15T15:14:38.695171Z"
exclude-newer-span = "P1D"
exclude-newer = "2026-04-17T16:00:00Z"

[manifest]
members = [
@@ -24,12 +23,14 @@ members = [
    "crewai-tools",
]
overrides = [
    { name = "authlib", specifier = ">=1.6.11" },
    { name = "cryptography", specifier = ">=46.0.7" },
    { name = "langchain-core", specifier = ">=1.2.28,<2" },
    { name = "langchain-core", specifier = ">=1.2.31,<2" },
    { name = "langchain-text-splitters", specifier = ">=1.1.2,<2" },
    { name = "langsmith", specifier = ">=0.7.31,<0.8" },
    { name = "onnxruntime", marker = "python_full_version < '3.11'", specifier = "<1.24" },
    { name = "pillow", specifier = ">=12.1.1" },
    { name = "pypdf", specifier = ">=6.10.1,<7" },
    { name = "pypdf", specifier = ">=6.10.2,<7" },
    { name = "python-multipart", specifier = ">=0.0.26,<1" },
    { name = "rich", specifier = ">=13.7.1" },
    { name = "transformers", marker = "python_full_version >= '3.10'", specifier = ">=5.4.0" },
@@ -422,14 +423,14 @@ wheels = [

[[package]]
name = "authlib"
version = "1.6.9"
version = "1.6.11"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "cryptography" },
]
sdist = { url = "https://files.pythonhosted.org/packages/af/98/00d3dd826d46959ad8e32af2dbb2398868fd9fd0683c26e56d0789bd0e68/authlib-1.6.9.tar.gz", hash = "sha256:d8f2421e7e5980cc1ddb4e32d3f5fa659cfaf60d8eaf3281ebed192e4ab74f04", size = 165134, upload-time = "2026-03-02T07:44:01.998Z" }
sdist = { url = "https://files.pythonhosted.org/packages/28/10/b325d58ffe86815b399334a101e63bc6fa4e1953921cb23703b48a0a0220/authlib-1.6.11.tar.gz", hash = "sha256:64db35b9b01aeccb4715a6c9a6613a06f2bd7be2ab9d2eb89edd1dfc7580a38f", size = 165359, upload-time = "2026-04-16T07:22:50.279Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/53/23/b65f568ed0c22f1efacb744d2db1a33c8068f384b8c9b482b52ebdbc3ef6/authlib-1.6.9-py2.py3-none-any.whl", hash = "sha256:f08b4c14e08f0861dc18a32357b33fbcfd2ea86cfe3fe149484b4d764c4a0ac3", size = 244197, upload-time = "2026-03-02T07:44:00.307Z" },
    { url = "https://files.pythonhosted.org/packages/57/2f/55fca558f925a51db046e5b929deb317ddb05afed74b22d89f4eca578980/authlib-1.6.11-py2.py3-none-any.whl", hash = "sha256:c8687a9a26451c51a34a06fa17bb97cb15bba46a6a626755e2d7f50da8bff3e3", size = 244469, upload-time = "2026-04-16T07:22:48.413Z" },
]

[[package]]
@@ -3557,7 +3558,7 @@ wheels = [

[[package]]
name = "langchain-core"
version = "1.2.28"
version = "1.2.31"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "jsonpatch" },
@@ -3569,21 +3570,21 @@ dependencies = [
    { name = "typing-extensions" },
    { name = "uuid-utils" },
]
sdist = { url = "https://files.pythonhosted.org/packages/f8/a4/317a1a3ac1df33a64adb3670bf88bbe3b3d5baa274db6863a979db472897/langchain_core-1.2.28.tar.gz", hash = "sha256:271a3d8bd618f795fdeba112b0753980457fc90537c46a0c11998516a74dc2cb", size = 846119, upload-time = "2026-04-08T18:19:34.867Z" }
sdist = { url = "https://files.pythonhosted.org/packages/a1/5a/7523ff55668a233beef7e909e8e2074a1cc3b620e0bbf0a4ec5f38549b3b/langchain_core-1.2.31.tar.gz", hash = "sha256:aad3ecc9e4dce2dd2bb79526c81b92e5322fd81db7834a031cb80359f2e3ebaa", size = 850756, upload-time = "2026-04-16T13:26:29.241Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/a8/92/32f785f077c7e898da97064f113c73fbd9ad55d1e2169cf3a391b183dedb/langchain_core-1.2.28-py3-none-any.whl", hash = "sha256:80764232581eaf8057bcefa71dbf8adc1f6a28d257ebd8b95ba9b8b452e8c6ac", size = 508727, upload-time = "2026-04-08T18:19:32.823Z" },
    { url = "https://files.pythonhosted.org/packages/52/02/668ddf4f1cf963ad691bdbea672a85244e6271eb0a4acfaf662bbd94a3b1/langchain_core-1.2.31-py3-none-any.whl", hash = "sha256:c407193edb99311cc36ec3e4d3667a065bbc4d7d72fbb6e368538b9b134d4033", size = 513264, upload-time = "2026-04-16T13:26:27.566Z" },
]

[[package]]
name = "langchain-text-splitters"
version = "1.1.1"
version = "1.1.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "langchain-core" },
]
sdist = { url = "https://files.pythonhosted.org/packages/85/38/14121ead61e0e75f79c3a35e5148ac7c2fe754a55f76eab3eed573269524/langchain_text_splitters-1.1.1.tar.gz", hash = "sha256:34861abe7c07d9e49d4dc852d0129e26b32738b60a74486853ec9b6d6a8e01d2", size = 279352, upload-time = "2026-02-18T23:02:42.798Z" }
sdist = { url = "https://files.pythonhosted.org/packages/26/9f/6c545900fefb7b00ddfa3f16b80d61338a0ec68c31c5451eeeab99082760/langchain_text_splitters-1.1.2.tar.gz", hash = "sha256:782a723db0a4746ac91e251c7c1d57fd23636e4f38ed733074e28d7a86f41627", size = 293580, upload-time = "2026-04-16T14:20:39.162Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/84/66/d9e0c3b83b0ad75ee746c51ba347cacecb8d656b96e1d513f3e334d1ccab/langchain_text_splitters-1.1.1-py3-none-any.whl", hash = "sha256:5ed0d7bf314ba925041e7d7d17cd8b10f688300d5415fb26c29442f061e329dc", size = 35734, upload-time = "2026-02-18T23:02:41.913Z" },
    { url = "https://files.pythonhosted.org/packages/d3/26/1ef06f56198d631296d646a6223de35bcc6cf9795ceb2442816bc963b84c/langchain_text_splitters-1.1.2-py3-none-any.whl", hash = "sha256:a2de0d799ff31886429fd6e2e0032df275b60ec817c19059a7b46181cc1c2f10", size = 35903, upload-time = "2026-04-16T14:20:38.243Z" },
]

[[package]]
@@ -6729,14 +6730,14 @@ wheels = [

[[package]]
name = "pypdf"
version = "6.10.1"
version = "6.10.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "typing-extensions", marker = "python_full_version < '3.11'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/66/79/f2730c42ec7891a75a2fcea2eb4f356872bcbc671b711418060424796612/pypdf-6.10.1.tar.gz", hash = "sha256:62e6ca7f65aaa28b3d192addb44f97296e4be1748f57ed0f4efb2d4915841880", size = 5315704, upload-time = "2026-04-14T12:55:20.996Z" }
sdist = { url = "https://files.pythonhosted.org/packages/7b/3f/9f2167401c2e94833ca3b69535bad89e533b5de75fefe4197a2c224baec2/pypdf-6.10.2.tar.gz", hash = "sha256:7d09ce108eff6bf67465d461b6ef352dcb8d84f7a91befc02f904455c6eea11d", size = 5315679, upload-time = "2026-04-15T16:37:36.978Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/f0/04/e3aa7f1f14dbc53429cae34666261eb935d99bd61d24756ab94d7e0309da/pypdf-6.10.1-py3-none-any.whl", hash = "sha256:6331940d3bfe75b7e6601d35db7adabab5fc1d716efaeb384e3c0c3957d033de", size = 335606, upload-time = "2026-04-14T12:55:18.941Z" },
    { url = "https://files.pythonhosted.org/packages/0c/d6/1d5c60cc17bbdf37c1552d9c03862fc6d32c5836732a0415b2d637edc2d0/pypdf-6.10.2-py3-none-any.whl", hash = "sha256:aa53be9826655b51c96741e5d7983ca224d898ac0a77896e64636810517624aa", size = 336308, upload-time = "2026-04-15T16:37:34.851Z" },
]

[[package]]