Mirror of https://github.com/crewAIInc/crewAI.git, synced 2026-05-08 02:29:00 +00:00.

Comparing `docs/plann`… → `feat/llm-t`…: 22 commits.
- `a5321aae92`
- `54f5b7db2e`
- `c9a6955cd6`
- `086f534d4e`
- `fe93dfe64c`
- `5837f8edb8`
- `cdc4b43620`
- `cb46a1c4ba`
- `d9046b98dd`
- `b0e2fda105`
- `69d777ca50`
- `77b2835a1d`
- `c77f1632dd`
- `69461076df`
- `55937d7523`
- `bc2fb71560`
- `3e9deaf9c0`
- `3f7637455c`
- `fdf3101b39`
- `c94f2e8f28`
- `944fe6d435`
- `3be2fb65dc`
`.gitignore` (vendored): 1 change
```diff
@@ -30,3 +30,4 @@ chromadb-*.lock
 .crewai/memory
 blogs/*
 secrets/*
+UNKNOWN.egg-info/
```
@@ -4,6 +4,80 @@ description: "Product updates, improvements, and fixes

icon: "clock"
mode: "wide"
---

<Update label="Apr 25, 2026">

## v1.14.3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3)

## What's Changed

### Features

- Add lifecycle events for checkpoint operations
- Add support for e2b
- Fall back to DefaultAzureCredential when no API key is provided in the Azure integration
- Add Bedrock V4 support
- Add Daytona sandbox tools for enhanced functionality
- Add checkpoint and fork support to standalone agents

### Bug Fixes

- Fix execution_id to be separate from state.id
- Resolve replay of recorded method events on checkpoint resume
- Fix serialization of initial_state class references as JSON schema
- Preserve metadata-only agent skills
- Propagate implicit @CrewBase names to crew events
- Merge execution metadata on duplicate batch initialization
- Fix serialization of Task class-reference fields for checkpointing
- Handle BaseModel result in guardrail retry loop
- Preserve thought_signature in Gemini streaming tool calls
- Emit task_started on fork resume and redesign the checkpoint TUI
- Use future dates in checkpoint prune tests to prevent time-dependent failures
- Fix dry-run order and handle checked-out stale branch in the devtools release
- Upgrade lxml to >=6.1.0 for security patch
- Bump python-dotenv to >=1.2.2 for security patch

### Documentation

- Update changelog and version for v1.14.3
- Add 'Build with AI' page and update navigation for all languages
- Remove pricing FAQ from the build-with-ai page across all locales

### Performance

- Optimize MCP SDK and event types to reduce cold start by ~29%

### Refactoring

- Refactor checkpoint helpers to eliminate duplication and tighten state type hints

## Contributors

@MatthiasHowellYopp, @akaKuruma, @alex-clawd, @github-actions[bot], @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha, @renatonitta

</Update>

<Update label="Apr 23, 2026">

## v1.14.3a3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a3)

## What's Changed

### Features

- Add support for e2b
- Implement fallback to DefaultAzureCredential when no API key is provided

### Bug Fixes

- Upgrade lxml to >=6.1.0 to address security issue GHSA-vfmq-68hx-4jfw

### Documentation

- Remove pricing FAQ from the build-with-ai page across all languages

### Performance

- Improve cold-start time by ~29% through lazy loading of the MCP SDK and event types

## Contributors

@alex-clawd, @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha

</Update>

<Update label="Apr 22, 2026">

## v1.14.3a2

@@ -207,9 +207,6 @@ CrewAI AMP is designed for production teams. Here's what you get

- **Factory (self-hosted)** — on your infrastructure for full data control
- **Hybrid** — combine cloud and self-hosting based on data sensitivity
</Accordion>
<Accordion title="How does pricing work?">
Sign up at [app.crewai.com](https://app.crewai.com) to see current plans. Enterprise and Factory pricing is available on request.
</Accordion>
</AccordionGroup>

<Card title="Explore CrewAI AMP →" icon="arrow-right" href="https://app.crewai.com">
docs/docs.json: 1885 changes (file diff suppressed because it is too large)
@@ -4,6 +4,80 @@ description: "Product updates, improvements, and bug fixes for CrewAI"

icon: "clock"
mode: "wide"
---

<Update label="Apr 25, 2026">

## v1.14.3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3)

## What's Changed

### Features

- Add lifecycle events for checkpoint operations
- Add support for e2b
- Fall back to DefaultAzureCredential when no API key is provided in Azure integration
- Add Bedrock V4 support
- Add Daytona sandbox tools for enhanced functionality
- Add checkpoint and fork support to standalone agents

### Bug Fixes

- Fix execution_id to be separate from state.id
- Resolve replay of recorded method events on checkpoint resume
- Fix serialization of initial_state class references as JSON schema
- Preserve metadata-only agent skills
- Propagate implicit @CrewBase names to crew events
- Merge execution metadata on duplicate batch initialization
- Fix serialization of Task class-reference fields for checkpointing
- Handle BaseModel result in guardrail retry loop
- Preserve thought_signature in Gemini streaming tool calls
- Emit task_started on fork resume and redesign checkpoint TUI
- Use future dates in checkpoint prune tests to prevent time-dependent failures
- Fix dry-run order and handle checked-out stale branch in devtools release
- Upgrade lxml to >=6.1.0 for security patch
- Bump python-dotenv to >=1.2.2 for security patch

### Documentation

- Update changelog and version for v1.14.3
- Add 'Build with AI' page and update navigation for all languages
- Remove pricing FAQ from build-with-ai page across all locales

### Performance

- Optimize MCP SDK and event types to reduce cold start by ~29%

### Refactoring

- Refactor checkpoint helpers to eliminate duplication and tighten state type hints

## Contributors

@MatthiasHowellYopp, @akaKuruma, @alex-clawd, @github-actions[bot], @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha, @renatonitta

</Update>

<Update label="Apr 23, 2026">

## v1.14.3a3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a3)

## What's Changed

### Features

- Add support for e2b
- Implement fallback to DefaultAzureCredential when no API key is provided

### Bug Fixes

- Upgrade lxml to >=6.1.0 to address security issue GHSA-vfmq-68hx-4jfw

### Documentation

- Remove pricing FAQ from build-with-ai page across all locales

### Performance

- Improve cold-start time by ~29% through lazy loading of the MCP SDK and event types

## Contributors

@alex-clawd, @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha

</Update>

<Update label="Apr 22, 2026">

## v1.14.3a2

@@ -207,9 +207,6 @@ CrewAI AMP is built for production teams. Here's what you get beyond deployment.

- **Factory (self-hosted)** — run on your own infrastructure for full data control
- **Hybrid** — mix cloud and self-hosted based on sensitivity requirements
</Accordion>
<Accordion title="How does pricing work?">
Sign up at [app.crewai.com](https://app.crewai.com) to see current plans. Enterprise and Factory pricing is available on request.
</Accordion>
</AccordionGroup>

<Card title="Explore CrewAI AMP →" icon="arrow-right" href="https://app.crewai.com">
@@ -4,6 +4,80 @@ description: "Product updates, improvements, and bug fixes for CrewAI"

icon: "clock"
mode: "wide"
---

<Update label="Apr 25, 2026">

## v1.14.3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3)

## What's Changed

### Features

- Add lifecycle events for checkpoint operations
- Add support for e2b
- Fall back to DefaultAzureCredential when no API key is provided in the Azure integration
- Add Bedrock V4 support
- Add Daytona sandbox tools for enhanced functionality
- Add checkpoint and fork support to standalone agents

### Bug Fixes

- Fix execution_id to be separate from state.id
- Resolve replay of recorded method events on checkpoint resume
- Fix serialization of initial_state class references as JSON schema
- Preserve metadata-only agent skills
- Propagate implicit @CrewBase names to crew events
- Merge execution metadata on duplicate batch initialization
- Fix serialization of Task class-reference fields for checkpointing
- Handle BaseModel result in guardrail retry loop
- Preserve thought_signature in Gemini streaming tool calls
- Emit task_started on fork resume and redesign the checkpoint TUI
- Use future dates in checkpoint prune tests to prevent time-dependent failures
- Fix dry-run order and handle checked-out stale branch in the devtools release
- Upgrade lxml to >=6.1.0 for security patch
- Upgrade python-dotenv to >=1.2.2 for security patch

### Documentation

- Update changelog and version for v1.14.3
- Add 'Build with AI' page and update navigation for all languages
- Remove pricing FAQ from the build-with-ai page across all locales

### Performance

- Optimize MCP SDK and event types to reduce cold start by ~29%

### Refactoring

- Refactor checkpoint helpers to eliminate duplication and tighten state type hints

## Contributors

@MatthiasHowellYopp, @akaKuruma, @alex-clawd, @github-actions[bot], @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha, @renatonitta

</Update>

<Update label="Apr 23, 2026">

## v1.14.3a3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a3)

## What's Changed

### Features

- Add support for e2b
- Implement fallback to DefaultAzureCredential when no API key is provided

### Bug Fixes

- Upgrade lxml to >=6.1.0 to address security issue GHSA-vfmq-68hx-4jfw

### Documentation

- Remove pricing FAQ from the build-with-ai page across all locales

### Performance

- Improve cold-start time by ~29% through lazy loading of the MCP SDK and event types

## Contributors

@alex-clawd, @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha

</Update>

<Update label="Apr 22, 2026">

## v1.14.3a2

@@ -207,9 +207,6 @@ CrewAI AMP is built for production teams. Beyond deployment

- **Factory (self-hosted)** — run on your own infrastructure for data control
- **Hybrid** — mix cloud and self-hosting based on sensitivity
</Accordion>
<Accordion title="How does pricing work?">
Sign up at [app.crewai.com](https://app.crewai.com) to see current plans. Enterprise and Factory pricing is available on inquiry.
</Accordion>
</AccordionGroup>

<Card title="Explore CrewAI AMP →" icon="arrow-right" href="https://app.crewai.com">
@@ -4,6 +4,80 @@ description: "Product updates, improvements, and fixes for CrewAI"

icon: "clock"
mode: "wide"
---

<Update label="Apr 25, 2026">

## v1.14.3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3)

## What's Changed

### Features

- Add lifecycle events for checkpoint operations
- Add support for e2b
- Fall back to DefaultAzureCredential when no API key is provided in the Azure integration
- Add Bedrock V4 support
- Add Daytona sandbox tools for enhanced functionality
- Add checkpoint and fork support to standalone agents

### Bug Fixes

- Fix execution_id to be separate from state.id
- Resolve replay of recorded method events on checkpoint resume
- Fix serialization of initial_state class references as JSON schema
- Preserve metadata-only agent skills
- Propagate implicit @CrewBase names to crew events
- Merge execution metadata on duplicate batch initialization
- Fix serialization of Task class-reference fields for checkpointing
- Handle BaseModel result in the guardrail retry loop
- Preserve thought_signature in Gemini streaming tool calls
- Emit task_started on fork resume and redesign the checkpoint TUI
- Use future dates in checkpoint prune tests to avoid time-dependent failures
- Fix dry-run order and handle checked-out stale branch in the devtools release
- Upgrade lxml to >=6.1.0 for security patch
- Bump python-dotenv to >=1.2.2 for security patch

### Documentation

- Update changelog and version for v1.14.3
- Add 'Build with AI' page and update navigation for all languages
- Remove pricing FAQ from the build-with-ai page across all locales

### Performance

- Optimize MCP SDK and event types to reduce cold-start time by ~29%

### Refactoring

- Refactor checkpoint helpers to eliminate duplication and tighten state type hints

## Contributors

@MatthiasHowellYopp, @akaKuruma, @alex-clawd, @github-actions[bot], @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha, @renatonitta

</Update>

<Update label="Apr 23, 2026">

## v1.14.3a3

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.3a3)

## What's Changed

### Features

- Add support for e2b
- Implement fallback to DefaultAzureCredential when no API key is provided

### Bug Fixes

- Upgrade lxml to >=6.1.0 to resolve security issue GHSA-vfmq-68hx-4jfw

### Documentation

- Remove pricing FAQ from the build-with-ai page across all locales

### Performance

- Improve cold-start time by ~29% through lazy loading of the MCP SDK and event types

## Contributors

@alex-clawd, @github-advanced-security[bot], @greysonlalonde, @iris-clawd, @lorenzejay, @mattatcha

</Update>

<Update label="Apr 22, 2026">

## v1.14.3a2

@@ -207,9 +207,6 @@ CrewAI AMP is built for teams in production. Beyond deployment, you

- **Factory (self-hosted)** — on your infrastructure for total data control
- **Hybrid** — combine cloud and self-hosted according to data sensitivity
</Accordion>
<Accordion title="How does pricing work?">
Sign up at [app.crewai.com](https://app.crewai.com) to see current plans. Enterprise and Factory pricing on request.
</Accordion>
</AccordionGroup>

<Card title="Explore CrewAI AMP →" icon="arrow-right" href="https://app.crewai.com">
```diff
@@ -152,4 +152,4 @@ __all__ = [
     "wrap_file_source",
 ]
 
-__version__ = "1.14.3a2"
+__version__ = "1.14.3"
```
```diff
@@ -10,7 +10,7 @@ requires-python = ">=3.10, <3.14"
 dependencies = [
     "pytube~=15.0.0",
     "requests>=2.33.0,<3",
-    "crewai==1.14.3a2",
+    "crewai==1.14.3",
     "tiktoken~=0.8.0",
     "beautifulsoup4~=4.13.4",
     "python-docx~=1.2.0",
@@ -112,7 +112,7 @@ github = [
 ]
 rag = [
     "python-docx>=1.1.0",
-    "lxml>=5.3.0,<5.4.0", # Pin to avoid etree import issues in 5.4.0
+    "lxml>=6.1.0,<7", # 6.1.0+ required for GHSA-vfmq-68hx-4jfw (XXE in iterparse)
 ]
 xml = [
     "unstructured[local-inference, all-docs]>=0.17.2"
@@ -143,6 +143,11 @@ daytona = [
     "daytona~=0.140.0",
 ]
 
+e2b = [
+    "e2b~=2.20.0",
+    "e2b-code-interpreter~=2.6.0",
+]
+
 [tool.uv]
 exclude-newer = "3 days"
```
```diff
@@ -71,6 +71,11 @@ from crewai_tools.tools.directory_search_tool.directory_search_tool import (
     DirectorySearchTool,
 )
 from crewai_tools.tools.docx_search_tool.docx_search_tool import DOCXSearchTool
+from crewai_tools.tools.e2b_sandbox_tool import (
+    E2BExecTool,
+    E2BFileTool,
+    E2BPythonTool,
+)
 from crewai_tools.tools.exa_tools.exa_search_tool import EXASearchTool
 from crewai_tools.tools.file_read_tool.file_read_tool import FileReadTool
 from crewai_tools.tools.file_writer_tool.file_writer_tool import FileWriterTool
@@ -242,6 +247,9 @@ __all__ = [
     "DaytonaPythonTool",
     "DirectoryReadTool",
     "DirectorySearchTool",
+    "E2BExecTool",
+    "E2BFileTool",
+    "E2BPythonTool",
     "EXASearchTool",
     "EnterpriseActionTool",
     "FileCompressorTool",
@@ -313,4 +321,4 @@ __all__ = [
     "ZapierActionTools",
 ]
 
-__version__ = "1.14.3a2"
+__version__ = "1.14.3"
```
```diff
@@ -60,6 +60,11 @@ from crewai_tools.tools.directory_search_tool.directory_search_tool import (
     DirectorySearchTool,
 )
 from crewai_tools.tools.docx_search_tool.docx_search_tool import DOCXSearchTool
+from crewai_tools.tools.e2b_sandbox_tool import (
+    E2BExecTool,
+    E2BFileTool,
+    E2BPythonTool,
+)
 from crewai_tools.tools.exa_tools.exa_search_tool import EXASearchTool
 from crewai_tools.tools.file_read_tool.file_read_tool import FileReadTool
 from crewai_tools.tools.file_writer_tool.file_writer_tool import FileWriterTool
@@ -227,6 +232,9 @@ __all__ = [
     "DaytonaPythonTool",
     "DirectoryReadTool",
     "DirectorySearchTool",
+    "E2BExecTool",
+    "E2BFileTool",
+    "E2BPythonTool",
     "EXASearchTool",
     "FileCompressorTool",
     "FileReadTool",
```
```diff
@@ -5,6 +5,7 @@ from crewai_tools.tools.daytona_sandbox_tool.daytona_python_tool import (
     DaytonaPythonTool,
 )
 
 __all__ = [
     "DaytonaBaseTool",
     "DaytonaExecTool",
```
@@ -0,0 +1,120 @@

# E2B Sandbox Tools

Run shell commands, execute Python, and manage files inside an [E2B](https://e2b.dev/) sandbox. E2B provides isolated, ephemeral VMs suitable for agent-driven code execution, with a Jupyter-style code interpreter for rich Python results.

Three tools are provided so you can pick what the agent actually needs:

- **`E2BExecTool`** — run a shell command (`sandbox.commands.run`).
- **`E2BPythonTool`** — run a Python cell in the E2B code interpreter (`sandbox.run_code`), returning stdout/stderr and rich results (charts, dataframes).
- **`E2BFileTool`** — read / write / list / delete files (`sandbox.files.*`).

## Installation

```shell
uv add "crewai-tools[e2b]"
# or
pip install "crewai-tools[e2b]"
```

Set the API key:

```shell
export E2B_API_KEY="..."
```

`E2B_DOMAIN` is also respected if set (for self-hosted or non-default deployments).

## Sandbox lifecycle

All three tools share the same lifecycle controls from `E2BBaseTool`:

| Mode | When the sandbox is created | When it is killed |
| --- | --- | --- |
| **Ephemeral** (default, `persistent=False`) | On every `_run` call | At the end of that same call |
| **Persistent** (`persistent=True`) | Lazily on first use | At process exit (via `atexit`), or manually via `tool.close()` |
| **Attach** (`sandbox_id="…"`) | Never — the tool attaches to an existing sandbox | Never — the tool will not kill a sandbox it did not create |

Ephemeral mode is the safe default: nothing leaks if the agent forgets to clean up. Use persistent mode when you want filesystem state or installed packages to carry across steps — this is typical when pairing `E2BFileTool` with `E2BExecTool`.

E2B sandboxes also auto-expire after an idle timeout. Tune it via `sandbox_timeout` (seconds, default `300`).
## Examples

### One-shot Python execution (ephemeral)

```python
from crewai_tools import E2BPythonTool

tool = E2BPythonTool()
result = tool.run(code="print(sum(range(10)))")
```

### Multi-step shell session (persistent)

```python
from crewai_tools import E2BExecTool, E2BFileTool

exec_tool = E2BExecTool(persistent=True)
file_tool = E2BFileTool(persistent=True)

# Each tool keeps its own persistent sandbox. If you need the *same* sandbox
# across two tools, create one tool, grab the sandbox id via
# `tool._persistent_sandbox.sandbox_id`, and pass it to the other via
# `sandbox_id=...`.
```

### Attach to an existing sandbox

```python
from crewai_tools import E2BExecTool

tool = E2BExecTool(sandbox_id="sbx_...")
```

### Custom create params

```python
tool = E2BExecTool(
    persistent=True,
    template="my-custom-template",
    sandbox_timeout=600,
    envs={"MY_FLAG": "1"},
    metadata={"owner": "crewai-agent"},
)
```

## Tool arguments

### `E2BExecTool`

- `command: str` — shell command to run.
- `cwd: str | None` — working directory.
- `envs: dict[str, str] | None` — extra env vars for this command.
- `timeout: float | None` — seconds.

### `E2BPythonTool`

- `code: str` — source to execute.
- `language: str | None` — override kernel language (default: Python).
- `envs: dict[str, str] | None` — env vars for the run.
- `timeout: float | None` — seconds.

### `E2BFileTool`

- `action: "read" | "write" | "append" | "list" | "delete" | "mkdir" | "info" | "exists"`
- `path: str` — absolute path inside the sandbox.
- `content: str | None` — required for `append`; optional for `write`.
- `binary: bool` — if `True`, `content` is base64 on write / returned as base64 on read.
- `depth: int` — for `list`, how many levels to recurse (default 1).
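For binary transfers, the `binary` flag means the tool's `content` contract is plain base64 text. A caller-side sketch of that contract (the helper names here are illustrative, not part of the tool's API):

```python
import base64


def encode_for_write(raw: bytes) -> str:
    # What you would pass as `content` with action="write", binary=True.
    return base64.b64encode(raw).decode("ascii")


def decode_after_read(b64: str) -> bytes:
    # How to recover bytes from a binary read result.
    return base64.b64decode(b64)


png_header = b"\x89PNG\r\n\x1a\n"
assert decode_after_read(encode_for_write(png_header)) == png_header
```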
## Security considerations

These tools hand the LLM arbitrary shell, Python, and filesystem access inside a remote VM. The threat model to keep in mind:

- **Prompt-injection is a code-execution vector.** If the agent ingests untrusted content (web pages, scraped documents, user-supplied files, emails, search results), a malicious instruction hidden in that content can coerce the agent into issuing commands to `E2BExecTool` / `E2BPythonTool`. Treat any pipeline that feeds untrusted text into an agent that also has these tools as equivalent to remote code execution — the LLM is the attacker's shell.
- **Ephemeral mode (the default) is the main blast-radius control.** A fresh sandbox is created per call and killed at the end, so injected commands cannot persist state, exfiltrate long-lived secrets, or build up tooling across turns. Leave `persistent=False` unless you have a concrete reason to change it.
- **Avoid this specific combination:**
  - untrusted content in the agent's context, **plus**
  - `persistent=True` or an explicit long-lived `sandbox_id`, **plus**
  - a large `sandbox_timeout` or credentials/secrets seeded into the sandbox via `envs`.

  That stack lets a single injection pivot into a long-running, credentialed shell that survives across turns. If you must run persistently, also keep `sandbox_timeout` short, scope `envs` to the minimum the task needs, and don't feed the same agent untrusted input.
- **Don't mount production credentials.** Anything you put into `envs`, `metadata`, or files written to the sandbox is reachable from the LLM. Use per-task scoped keys, not your personal API tokens.
- **E2B's VM isolation is the final backstop**, not a license to relax the above — isolation prevents escape to the host, but everything the sandbox can reach (the public internet, any service whose token you dropped in) is still fair game for an injected command.
@@ -0,0 +1,12 @@

```python
from crewai_tools.tools.e2b_sandbox_tool.e2b_base_tool import E2BBaseTool
from crewai_tools.tools.e2b_sandbox_tool.e2b_exec_tool import E2BExecTool
from crewai_tools.tools.e2b_sandbox_tool.e2b_file_tool import E2BFileTool
from crewai_tools.tools.e2b_sandbox_tool.e2b_python_tool import E2BPythonTool


__all__ = [
    "E2BBaseTool",
    "E2BExecTool",
    "E2BFileTool",
    "E2BPythonTool",
]
```
@@ -0,0 +1,197 @@
|
||||
from __future__ import annotations
|
||||
|
||||
import atexit
|
||||
import logging
|
||||
import os
|
||||
import threading
|
||||
from typing import Any, ClassVar
|
||||
|
||||
from crewai.tools import BaseTool, EnvVar
|
||||
from pydantic import ConfigDict, Field, PrivateAttr, SecretStr
|
||||
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class E2BBaseTool(BaseTool):
|
||||
"""Shared base for tools that act on an E2B sandbox.
|
||||
|
||||
Lifecycle modes:
|
||||
- persistent=False (default): create a fresh sandbox per `_run` call and
|
||||
kill it when the call returns. Safer and stateless — nothing leaks if
|
||||
the agent forgets cleanup.
|
||||
- persistent=True: lazily create a single sandbox on first use, cache it
|
||||
on the instance, and register an atexit hook to kill it at process
|
||||
exit. Cheaper across many calls and lets files/state carry over.
|
||||
- sandbox_id=<existing>: attach to a sandbox the caller already owns.
|
||||
Never killed by the tool.
|
||||
"""
|
||||
|
||||
model_config = ConfigDict(arbitrary_types_allowed=True)
|
||||
|
||||
package_dependencies: list[str] = Field(default_factory=lambda: ["e2b"])
|
||||
|
||||
api_key: SecretStr | None = Field(
|
||||
default_factory=lambda: (
|
||||
SecretStr(val) if (val := os.getenv("E2B_API_KEY")) else None
|
||||
),
|
||||
description="E2B API key. Falls back to E2B_API_KEY env var.",
|
||||
json_schema_extra={"required": False},
|
||||
repr=False,
|
||||
)
|
||||
domain: str | None = Field(
|
||||
default_factory=lambda: os.getenv("E2B_DOMAIN"),
|
||||
description="E2B API domain override. Falls back to E2B_DOMAIN env var.",
|
||||
json_schema_extra={"required": False},
|
||||
)
|
||||
|
||||
template: str | None = Field(
|
||||
default=None,
|
||||
description=(
|
||||
"Optional template/snapshot name or id to create the sandbox from. "
|
||||
"Defaults to E2B's base template when omitted."
|
||||
),
|
||||
)
|
||||
persistent: bool = Field(
|
||||
default=False,
|
||||
description=(
|
||||
"If True, reuse one sandbox across all calls to this tool instance "
|
||||
"and kill it at process exit. Default False creates and kills a "
|
||||
"fresh sandbox per call."
|
||||
),
|
||||
)
|
||||
sandbox_id: str | None = Field(
|
||||
default=None,
|
||||
description=(
|
||||
"Attach to an existing sandbox by id instead of creating a new "
|
||||
"one. The tool will never kill a sandbox it did not create."
|
||||
),
|
||||
)
|
||||
sandbox_timeout: int = Field(
|
||||
default=300,
|
||||
description=(
|
||||
"Idle timeout in seconds after which E2B auto-kills the sandbox. "
|
||||
"Applied at create time and when attaching via sandbox_id."
|
||||
),
|
||||
)
|
||||
envs: dict[str, str] | None = Field(
|
||||
default=None,
|
||||
description="Environment variables to set inside the sandbox at create time.",
|
||||
)
|
||||
metadata: dict[str, str] | None = Field(
|
||||
default=None,
|
||||
description="Metadata key-value pairs to attach to the sandbox at create time.",
|
||||
)
|
||||
|
||||
env_vars: list[EnvVar] = Field(
|
||||
default_factory=lambda: [
|
||||
EnvVar(
|
||||
name="E2B_API_KEY",
|
||||
description="API key for E2B sandbox service",
|
||||
required=False,
|
||||
),
|
||||
EnvVar(
|
||||
name="E2B_DOMAIN",
|
||||
description="E2B API domain (optional)",
|
||||
required=False,
|
||||
),
|
||||
]
|
||||
)
|
||||
|
||||
_persistent_sandbox: Any | None = PrivateAttr(default=None)
|
||||
_lock: threading.Lock = PrivateAttr(default_factory=threading.Lock)
|
||||
_cleanup_registered: bool = PrivateAttr(default=False)
|
||||
|
||||
_sdk_cache: ClassVar[dict[str, Any]] = {}
|
||||
|
||||
@classmethod
|
||||
def _import_sandbox_class(cls) -> Any:
|
||||
"""Return the Sandbox class used by this tool.
|
||||
|
||||
Subclasses override this to swap in a different SDK (e.g. the code
|
||||
interpreter sandbox). The default uses plain `e2b.Sandbox`.
|
||||
"""
|
||||
cached = cls._sdk_cache.get("e2b.Sandbox")
|
||||
if cached is not None:
|
||||
return cached
|
||||
try:
|
||||
from e2b import Sandbox # type: ignore[import-untyped]
|
||||
except ImportError as exc:
|
||||
raise ImportError(
|
||||
"The 'e2b' package is required for E2B sandbox tools. "
|
||||
"Install it with: uv add e2b (or) pip install e2b"
|
||||
) from exc
|
||||
cls._sdk_cache["e2b.Sandbox"] = Sandbox
|
||||
return Sandbox
|
    def _connect_kwargs(self) -> dict[str, Any]:
        kwargs: dict[str, Any] = {}
        if self.api_key is not None:
            kwargs["api_key"] = self.api_key.get_secret_value()
        if self.domain:
            kwargs["domain"] = self.domain
        if self.sandbox_timeout is not None:
            kwargs["timeout"] = self.sandbox_timeout
        return kwargs

    def _create_kwargs(self) -> dict[str, Any]:
        kwargs: dict[str, Any] = self._connect_kwargs()
        if self.template is not None:
            kwargs["template"] = self.template
        if self.envs is not None:
            kwargs["envs"] = self.envs
        if self.metadata is not None:
            kwargs["metadata"] = self.metadata
        return kwargs

    def _acquire_sandbox(self) -> tuple[Any, bool]:
        """Return (sandbox, should_kill_after_use)."""
        sandbox_cls = self._import_sandbox_class()

        if self.sandbox_id:
            return (
                sandbox_cls.connect(self.sandbox_id, **self._connect_kwargs()),
                False,
            )

        if self.persistent:
            with self._lock:
                if self._persistent_sandbox is None:
                    self._persistent_sandbox = sandbox_cls.create(
                        **self._create_kwargs()
                    )
                    if not self._cleanup_registered:
                        atexit.register(self.close)
                        self._cleanup_registered = True
                return self._persistent_sandbox, False

        sandbox = sandbox_cls.create(**self._create_kwargs())
        return sandbox, True

    def _release_sandbox(self, sandbox: Any, should_kill: bool) -> None:
        if not should_kill:
            return
        try:
            sandbox.kill()
        except Exception:
            logger.debug(
                "Best-effort sandbox cleanup failed after ephemeral use; "
                "the sandbox may need manual termination.",
                exc_info=True,
            )

    def close(self) -> None:
        """Kill the cached persistent sandbox if one exists."""
        with self._lock:
            sandbox = self._persistent_sandbox
            self._persistent_sandbox = None
        if sandbox is None:
            return
        try:
            sandbox.kill()
        except Exception:
            logger.debug(
                "Best-effort persistent sandbox cleanup failed at close(); "
                "the sandbox may need manual termination.",
                exc_info=True,
            )
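Read together, `_acquire_sandbox` and `_release_sandbox` define a small ownership contract: the boolean returned by acquire says whether the caller must kill the sandbox afterwards (ephemeral mode), while persistent and attached sandboxes are never killed per-call. A minimal sketch of that contract, using a made-up stand-in class rather than the real E2B SDK:

```python
import threading


class FakeSandbox:
    """Stand-in for an E2B SDK sandbox (hypothetical, for illustration)."""

    def __init__(self) -> None:
        self.killed = False

    def kill(self) -> None:
        self.killed = True


class Holder:
    """Mimics the ephemeral-vs-persistent acquire/release contract above."""

    def __init__(self, persistent: bool) -> None:
        self.persistent = persistent
        self._lock = threading.Lock()
        self._cached = None

    def acquire(self):
        if self.persistent:
            with self._lock:
                if self._cached is None:
                    self._cached = FakeSandbox()
                return self._cached, False  # caller must NOT kill
        return FakeSandbox(), True          # caller owns it and must kill

    def release(self, sb: FakeSandbox, should_kill: bool) -> None:
        if should_kill:
            sb.kill()


# Ephemeral: every call gets a fresh sandbox that dies after use.
eph = Holder(persistent=False)
sb, kill = eph.acquire()
eph.release(sb, kill)
assert sb.killed

# Persistent: the same sandbox is reused and survives release.
per = Holder(persistent=True)
a, ka = per.acquire()
b, kb = per.acquire()
per.release(b, kb)
assert a is b and not a.killed
```

The same shape explains why `close()` clears the cached reference under the lock before killing: a concurrent acquire can never observe a half-dead sandbox.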
@@ -0,0 +1,62 @@
from __future__ import annotations

from builtins import type as type_
from typing import Any

from pydantic import BaseModel, Field

from crewai_tools.tools.e2b_sandbox_tool.e2b_base_tool import E2BBaseTool


class E2BExecToolSchema(BaseModel):
    command: str = Field(..., description="Shell command to execute in the sandbox.")
    cwd: str | None = Field(
        default=None,
        description="Working directory to run the command in. Defaults to the sandbox home dir.",
    )
    envs: dict[str, str] | None = Field(
        default=None,
        description="Optional environment variables to set for this command.",
    )
    timeout: float | None = Field(
        default=None,
        description="Maximum seconds to wait for the command to finish.",
    )


class E2BExecTool(E2BBaseTool):
    """Run a shell command inside an E2B sandbox."""

    name: str = "E2B Sandbox Exec"
    description: str = (
        "Execute a shell command inside an E2B sandbox and return the exit "
        "code, stdout, and stderr. Use this to run builds, package installs, "
        "git operations, or any one-off shell command."
    )
    args_schema: type_[BaseModel] = E2BExecToolSchema

    def _run(
        self,
        command: str,
        cwd: str | None = None,
        envs: dict[str, str] | None = None,
        timeout: float | None = None,
    ) -> Any:
        sandbox, should_kill = self._acquire_sandbox()
        try:
            run_kwargs: dict[str, Any] = {}
            if cwd is not None:
                run_kwargs["cwd"] = cwd
            if envs is not None:
                run_kwargs["envs"] = envs
            if timeout is not None:
                run_kwargs["timeout"] = timeout
            result = sandbox.commands.run(command, **run_kwargs)
            return {
                "exit_code": getattr(result, "exit_code", None),
                "stdout": getattr(result, "stdout", None),
                "stderr": getattr(result, "stderr", None),
                "error": getattr(result, "error", None),
            }
        finally:
            self._release_sandbox(sandbox, should_kill)
@@ -0,0 +1,220 @@
from __future__ import annotations

import base64
from builtins import type as type_
import logging
import posixpath
from typing import Any, Literal

from pydantic import BaseModel, Field, model_validator

from crewai_tools.tools.e2b_sandbox_tool.e2b_base_tool import E2BBaseTool


logger = logging.getLogger(__name__)


FileAction = Literal[
    "read", "write", "append", "list", "delete", "mkdir", "info", "exists"
]


class E2BFileToolSchema(BaseModel):
    action: FileAction = Field(
        ...,
        description=(
            "The filesystem action to perform: 'read' (returns file contents), "
            "'write' (create or replace a file with content), 'append' (append "
            "content to an existing file — use this for writing large files in "
            "chunks to avoid hitting tool-call size limits), 'list' (lists a "
            "directory), 'delete' (removes a file/dir), 'mkdir' (creates a "
            "directory), 'info' (returns file metadata), 'exists' (returns a "
            "boolean for whether the path exists)."
        ),
    )
    path: str = Field(..., description="Absolute path inside the sandbox.")
    content: str | None = Field(
        default=None,
        description=(
            "Content to write or append. If omitted for 'write', an empty file "
            "is created. For files larger than a few KB, prefer one 'write' "
            "with empty content followed by multiple 'append' calls of ~4KB "
            "each to stay within tool-call payload limits."
        ),
    )
    binary: bool = Field(
        default=False,
        description=(
            "For 'write'/'append': treat content as base64 and upload raw "
            "bytes. For 'read': return contents as base64 instead of decoded "
            "utf-8."
        ),
    )
    depth: int = Field(
        default=1,
        description="For action='list': how many levels deep to recurse (default 1).",
    )

    @model_validator(mode="after")
    def _validate_action_args(self) -> E2BFileToolSchema:
        if self.action == "append" and self.content is None:
            raise ValueError(
                "action='append' requires 'content'. Pass the chunk to append "
                "in the 'content' field."
            )
        return self


class E2BFileTool(E2BBaseTool):
    """Read, write, and manage files inside an E2B sandbox.

    Notes:
        - Most useful with `persistent=True` or an explicit `sandbox_id`. With
          the default ephemeral mode, files disappear when this tool call
          finishes.
    """

    name: str = "E2B Sandbox Files"
    description: str = (
        "Perform filesystem operations inside an E2B sandbox: read a file, "
        "write content to a path, append content to an existing file, list a "
        "directory, delete a path, make a directory, fetch file metadata, or "
        "check whether a path exists. For files larger than a few KB, create "
        "the file with action='write' and empty content, then send the body "
        "via multiple 'append' calls of ~4KB each to stay within tool-call "
        "payload limits."
    )
    args_schema: type_[BaseModel] = E2BFileToolSchema

    def _run(
        self,
        action: FileAction,
        path: str,
        content: str | None = None,
        binary: bool = False,
        depth: int = 1,
    ) -> Any:
        sandbox, should_kill = self._acquire_sandbox()
        try:
            if action == "read":
                return self._read(sandbox, path, binary=binary)
            if action == "write":
                return self._write(sandbox, path, content or "", binary=binary)
            if action == "append":
                return self._append(sandbox, path, content or "", binary=binary)
            if action == "list":
                return self._list(sandbox, path, depth=depth)
            if action == "delete":
                sandbox.files.remove(path)
                return {"status": "deleted", "path": path}
            if action == "mkdir":
                created = sandbox.files.make_dir(path)
                return {"status": "created", "path": path, "created": bool(created)}
            if action == "info":
                return self._info(sandbox, path)
            if action == "exists":
                return {"path": path, "exists": bool(sandbox.files.exists(path))}
            raise ValueError(f"Unknown action: {action}")
        finally:
            self._release_sandbox(sandbox, should_kill)

    def _read(self, sandbox: Any, path: str, *, binary: bool) -> dict[str, Any]:
        if binary:
            data: bytes = sandbox.files.read(path, format="bytes")
            return {
                "path": path,
                "encoding": "base64",
                "content": base64.b64encode(data).decode("ascii"),
            }
        try:
            content: str = sandbox.files.read(path)
            return {"path": path, "encoding": "utf-8", "content": content}
        except UnicodeDecodeError:
            data = sandbox.files.read(path, format="bytes")
            return {
                "path": path,
                "encoding": "base64",
                "content": base64.b64encode(data).decode("ascii"),
                "note": "File was not valid utf-8; returned as base64.",
            }

    def _write(
        self, sandbox: Any, path: str, content: str, *, binary: bool
    ) -> dict[str, Any]:
        payload: str | bytes = base64.b64decode(content) if binary else content
        self._ensure_parent_dir(sandbox, path)
        sandbox.files.write(path, payload)
        size = (
            len(payload)
            if isinstance(payload, (bytes, bytearray))
            else len(payload.encode("utf-8"))
        )
        return {"status": "written", "path": path, "bytes": size}

    def _append(
        self, sandbox: Any, path: str, content: str, *, binary: bool
    ) -> dict[str, Any]:
        chunk: bytes = base64.b64decode(content) if binary else content.encode("utf-8")
        self._ensure_parent_dir(sandbox, path)
        try:
            existing: bytes = sandbox.files.read(path, format="bytes")
        except Exception:
            existing = b""
        payload = existing + chunk
        sandbox.files.write(path, payload)
        return {
            "status": "appended",
            "path": path,
            "appended_bytes": len(chunk),
            "total_bytes": len(payload),
        }

    @staticmethod
    def _ensure_parent_dir(sandbox: Any, path: str) -> None:
        parent = posixpath.dirname(path)
        if not parent or parent in ("/", "."):
            return
        try:
            sandbox.files.make_dir(parent)
        except Exception:
            logger.debug(
                "Best-effort parent-directory create failed for %s; "
                "assuming it already exists and proceeding with the write.",
                parent,
                exc_info=True,
            )

    def _list(self, sandbox: Any, path: str, *, depth: int) -> dict[str, Any]:
        entries = sandbox.files.list(path, depth=depth)
        return {
            "path": path,
            "entries": [self._entry_to_dict(e) for e in entries],
        }

    def _info(self, sandbox: Any, path: str) -> dict[str, Any]:
        return self._entry_to_dict(sandbox.files.get_info(path))

    @staticmethod
    def _entry_to_dict(entry: Any) -> dict[str, Any]:
        fields = (
            "name",
            "path",
            "type",
            "size",
            "mode",
            "permissions",
            "owner",
            "group",
            "modified_time",
            "symlink_target",
        )
        result: dict[str, Any] = {}
        for field in fields:
            value = getattr(entry, field, None)
            if value is not None and field == "modified_time":
                result[field] = (
                    value.isoformat() if hasattr(value, "isoformat") else str(value)
                )
            else:
                result[field] = value
        return result
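The file tool's description steers agents toward one empty `write` followed by `append` calls of roughly 4 KB for large files. The chunking side of that pattern is plain Python and can be sketched independently of the sandbox; the path and call dictionaries below are an assumed illustration of how an agent would drive the tool, not an API:

```python
def chunk_payload(data: str, chunk_size: int = 4096) -> list[str]:
    """Split a large string into chunks sized for successive 'append' calls."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]


payload = "x" * 10_000

# First call creates the empty file; the rest stream the body in chunks.
calls = [{"action": "write", "path": "/home/user/big.txt", "content": ""}]
calls += [
    {"action": "append", "path": "/home/user/big.txt", "content": chunk}
    for chunk in chunk_payload(payload)
]

assert len(calls) == 4  # one write + three appends (4096 + 4096 + 1808 bytes)
assert "".join(c["content"] for c in calls) == payload
```

Because `_append` re-reads the existing bytes before writing, each chunk lands after the previous one regardless of how many calls the upload is split across.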
@@ -0,0 +1,133 @@
from __future__ import annotations

from builtins import type as type_
from typing import Any, ClassVar

from pydantic import BaseModel, Field

from crewai_tools.tools.e2b_sandbox_tool.e2b_base_tool import E2BBaseTool


class E2BPythonToolSchema(BaseModel):
    code: str = Field(
        ...,
        description="Python source to execute inside the sandbox.",
    )
    language: str | None = Field(
        default=None,
        description=(
            "Override the execution language (e.g. 'python', 'r', 'javascript'). "
            "Defaults to Python when omitted."
        ),
    )
    envs: dict[str, str] | None = Field(
        default=None,
        description="Optional environment variables for the run.",
    )
    timeout: float | None = Field(
        default=None,
        description="Maximum seconds to wait for the code to finish.",
    )


class E2BPythonTool(E2BBaseTool):
    """Run Python code inside an E2B code interpreter sandbox.

    Uses `e2b_code_interpreter`, which runs cells in a persistent Jupyter-style
    kernel so state (imports, variables) carries across calls when
    `persistent=True`.
    """

    name: str = "E2B Sandbox Python"
    description: str = (
        "Execute a block of Python code inside an E2B code interpreter sandbox "
        "and return captured stdout, stderr, the final expression value, and "
        "any rich results (charts, dataframes). Use this for data processing, "
        "quick scripts, or analysis that should run in an isolated environment."
    )
    args_schema: type_[BaseModel] = E2BPythonToolSchema

    package_dependencies: list[str] = Field(
        default_factory=lambda: ["e2b_code_interpreter"],
    )

    _ci_cache: ClassVar[dict[str, Any]] = {}

    @classmethod
    def _import_sandbox_class(cls) -> Any:
        cached = cls._ci_cache.get("Sandbox")
        if cached is not None:
            return cached
        try:
            from e2b_code_interpreter import Sandbox  # type: ignore[import-untyped]
        except ImportError as exc:
            raise ImportError(
                "The 'e2b_code_interpreter' package is required for the E2B "
                "Python tool. Install it with: "
                "uv add e2b-code-interpreter (or) "
                "pip install e2b-code-interpreter"
            ) from exc
        cls._ci_cache["Sandbox"] = Sandbox
        return Sandbox

    def _run(
        self,
        code: str,
        language: str | None = None,
        envs: dict[str, str] | None = None,
        timeout: float | None = None,
    ) -> Any:
        sandbox, should_kill = self._acquire_sandbox()
        try:
            run_kwargs: dict[str, Any] = {}
            if language is not None:
                run_kwargs["language"] = language
            if envs is not None:
                run_kwargs["envs"] = envs
            if timeout is not None:
                run_kwargs["timeout"] = timeout
            execution = sandbox.run_code(code, **run_kwargs)
            return self._serialize_execution(execution)
        finally:
            self._release_sandbox(sandbox, should_kill)

    @staticmethod
    def _serialize_execution(execution: Any) -> dict[str, Any]:
        logs = getattr(execution, "logs", None)
        error = getattr(execution, "error", None)
        results = getattr(execution, "results", None) or []
        return {
            "text": getattr(execution, "text", None),
            "stdout": list(getattr(logs, "stdout", []) or []) if logs else [],
            "stderr": list(getattr(logs, "stderr", []) or []) if logs else [],
            "error": (
                {
                    "name": getattr(error, "name", None),
                    "value": getattr(error, "value", None),
                    "traceback": getattr(error, "traceback", None),
                }
                if error
                else None
            ),
            "results": [E2BPythonTool._serialize_result(r) for r in results],
            "execution_count": getattr(execution, "execution_count", None),
        }

    @staticmethod
    def _serialize_result(result: Any) -> dict[str, Any]:
        fields = (
            "text",
            "html",
            "markdown",
            "svg",
            "png",
            "jpeg",
            "pdf",
            "latex",
            "json",
            "javascript",
            "data",
            "is_main_result",
            "extra",
        )
        return {field: getattr(result, field, None) for field in fields}
@@ -8734,6 +8734,668 @@
      "type": "object"
    }
  },
  {
    "description": "Execute a shell command inside an E2B sandbox and return the exit code, stdout, and stderr. Use this to run builds, package installs, git operations, or any one-off shell command.",
    "env_vars": [
      {
        "default": null,
        "description": "API key for E2B sandbox service",
        "name": "E2B_API_KEY",
        "required": false
      },
      {
        "default": null,
        "description": "E2B API domain (optional)",
        "name": "E2B_DOMAIN",
        "required": false
      }
    ],
    "humanized_name": "E2B Sandbox Exec",
    "init_params_schema": {
      "$defs": {
        "EnvVar": {
          "properties": {
            "default": {
              "anyOf": [
                {
                  "type": "string"
                },
                {
                  "type": "null"
                }
              ],
              "default": null,
              "title": "Default"
            },
            "description": {
              "title": "Description",
              "type": "string"
            },
            "name": {
              "title": "Name",
              "type": "string"
            },
            "required": {
              "default": true,
              "title": "Required",
              "type": "boolean"
            }
          },
          "required": [
            "name",
            "description"
          ],
          "title": "EnvVar",
          "type": "object"
        }
      },
      "description": "Run a shell command inside an E2B sandbox.",
      "properties": {
        "api_key": {
          "anyOf": [
            {
              "format": "password",
              "type": "string",
              "writeOnly": true
            },
            {
              "type": "null"
            }
          ],
          "description": "E2B API key. Falls back to E2B_API_KEY env var.",
          "required": false,
          "title": "Api Key"
        },
        "domain": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "description": "E2B API domain override. Falls back to E2B_DOMAIN env var.",
          "required": false,
          "title": "Domain"
        },
        "envs": {
          "anyOf": [
            {
              "additionalProperties": {
                "type": "string"
              },
              "type": "object"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Environment variables to set inside the sandbox at create time.",
          "title": "Envs"
        },
        "metadata": {
          "anyOf": [
            {
              "additionalProperties": {
                "type": "string"
              },
              "type": "object"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Metadata key-value pairs to attach to the sandbox at create time.",
          "title": "Metadata"
        },
        "persistent": {
          "default": false,
          "description": "If True, reuse one sandbox across all calls to this tool instance and kill it at process exit. Default False creates and kills a fresh sandbox per call.",
          "title": "Persistent",
          "type": "boolean"
        },
        "sandbox_id": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Attach to an existing sandbox by id instead of creating a new one. The tool will never kill a sandbox it did not create.",
          "title": "Sandbox Id"
        },
        "sandbox_timeout": {
          "default": 300,
          "description": "Idle timeout in seconds after which E2B auto-kills the sandbox. Applied at create time and when attaching via sandbox_id.",
          "title": "Sandbox Timeout",
          "type": "integer"
        },
        "template": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Optional template/snapshot name or id to create the sandbox from. Defaults to E2B's base template when omitted.",
          "title": "Template"
        }
      },
      "required": [],
      "title": "E2BExecTool",
      "type": "object"
    },
    "name": "E2BExecTool",
    "package_dependencies": [
      "e2b"
    ],
    "run_params_schema": {
      "properties": {
        "command": {
          "description": "Shell command to execute in the sandbox.",
          "title": "Command",
          "type": "string"
        },
        "cwd": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Working directory to run the command in. Defaults to the sandbox home dir.",
          "title": "Cwd"
        },
        "envs": {
          "anyOf": [
            {
              "additionalProperties": {
                "type": "string"
              },
              "type": "object"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Optional environment variables to set for this command.",
          "title": "Envs"
        },
        "timeout": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Maximum seconds to wait for the command to finish.",
          "title": "Timeout"
        }
      },
      "required": [
        "command"
      ],
      "title": "E2BExecToolSchema",
      "type": "object"
    }
  },
  {
    "description": "Perform filesystem operations inside an E2B sandbox: read a file, write content to a path, append content to an existing file, list a directory, delete a path, make a directory, fetch file metadata, or check whether a path exists. For files larger than a few KB, create the file with action='write' and empty content, then send the body via multiple 'append' calls of ~4KB each to stay within tool-call payload limits.",
    "env_vars": [
      {
        "default": null,
        "description": "API key for E2B sandbox service",
        "name": "E2B_API_KEY",
        "required": false
      },
      {
        "default": null,
        "description": "E2B API domain (optional)",
        "name": "E2B_DOMAIN",
        "required": false
      }
    ],
    "humanized_name": "E2B Sandbox Files",
    "init_params_schema": {
      "$defs": {
        "EnvVar": {
          "properties": {
            "default": {
              "anyOf": [
                {
                  "type": "string"
                },
                {
                  "type": "null"
                }
              ],
              "default": null,
              "title": "Default"
            },
            "description": {
              "title": "Description",
              "type": "string"
            },
            "name": {
              "title": "Name",
              "type": "string"
            },
            "required": {
              "default": true,
              "title": "Required",
              "type": "boolean"
            }
          },
          "required": [
            "name",
            "description"
          ],
          "title": "EnvVar",
          "type": "object"
        }
      },
      "description": "Read, write, and manage files inside an E2B sandbox.\n\nNotes:\n - Most useful with `persistent=True` or an explicit `sandbox_id`. With\n the default ephemeral mode, files disappear when this tool call\n finishes.",
      "properties": {
        "api_key": {
          "anyOf": [
            {
              "format": "password",
              "type": "string",
              "writeOnly": true
            },
            {
              "type": "null"
            }
          ],
          "description": "E2B API key. Falls back to E2B_API_KEY env var.",
          "required": false,
          "title": "Api Key"
        },
        "domain": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "description": "E2B API domain override. Falls back to E2B_DOMAIN env var.",
          "required": false,
          "title": "Domain"
        },
        "envs": {
          "anyOf": [
            {
              "additionalProperties": {
                "type": "string"
              },
              "type": "object"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Environment variables to set inside the sandbox at create time.",
          "title": "Envs"
        },
        "metadata": {
          "anyOf": [
            {
              "additionalProperties": {
                "type": "string"
              },
              "type": "object"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Metadata key-value pairs to attach to the sandbox at create time.",
          "title": "Metadata"
        },
        "persistent": {
          "default": false,
          "description": "If True, reuse one sandbox across all calls to this tool instance and kill it at process exit. Default False creates and kills a fresh sandbox per call.",
          "title": "Persistent",
          "type": "boolean"
        },
        "sandbox_id": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Attach to an existing sandbox by id instead of creating a new one. The tool will never kill a sandbox it did not create.",
          "title": "Sandbox Id"
        },
        "sandbox_timeout": {
          "default": 300,
          "description": "Idle timeout in seconds after which E2B auto-kills the sandbox. Applied at create time and when attaching via sandbox_id.",
          "title": "Sandbox Timeout",
          "type": "integer"
        },
        "template": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Optional template/snapshot name or id to create the sandbox from. Defaults to E2B's base template when omitted.",
          "title": "Template"
        }
      },
      "required": [],
      "title": "E2BFileTool",
      "type": "object"
    },
    "name": "E2BFileTool",
    "package_dependencies": [
      "e2b"
    ],
    "run_params_schema": {
      "properties": {
        "action": {
          "description": "The filesystem action to perform: 'read' (returns file contents), 'write' (create or replace a file with content), 'append' (append content to an existing file \u2014 use this for writing large files in chunks to avoid hitting tool-call size limits), 'list' (lists a directory), 'delete' (removes a file/dir), 'mkdir' (creates a directory), 'info' (returns file metadata), 'exists' (returns a boolean for whether the path exists).",
          "enum": [
            "read",
            "write",
            "append",
            "list",
            "delete",
            "mkdir",
            "info",
            "exists"
          ],
          "title": "Action",
          "type": "string"
        },
        "binary": {
          "default": false,
          "description": "For 'write'/'append': treat content as base64 and upload raw bytes. For 'read': return contents as base64 instead of decoded utf-8.",
          "title": "Binary",
          "type": "boolean"
        },
        "content": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Content to write or append. If omitted for 'write', an empty file is created. For files larger than a few KB, prefer one 'write' with empty content followed by multiple 'append' calls of ~4KB each to stay within tool-call payload limits.",
          "title": "Content"
        },
        "depth": {
          "default": 1,
          "description": "For action='list': how many levels deep to recurse (default 1).",
          "title": "Depth",
          "type": "integer"
        },
        "path": {
          "description": "Absolute path inside the sandbox.",
          "title": "Path",
          "type": "string"
        }
      },
      "required": [
        "action",
        "path"
      ],
      "title": "E2BFileToolSchema",
      "type": "object"
    }
  },
  {
    "description": "Execute a block of Python code inside an E2B code interpreter sandbox and return captured stdout, stderr, the final expression value, and any rich results (charts, dataframes). Use this for data processing, quick scripts, or analysis that should run in an isolated environment.",
    "env_vars": [
      {
        "default": null,
        "description": "API key for E2B sandbox service",
        "name": "E2B_API_KEY",
        "required": false
      },
      {
        "default": null,
        "description": "E2B API domain (optional)",
        "name": "E2B_DOMAIN",
        "required": false
      }
    ],
    "humanized_name": "E2B Sandbox Python",
    "init_params_schema": {
      "$defs": {
        "EnvVar": {
          "properties": {
            "default": {
              "anyOf": [
                {
                  "type": "string"
                },
                {
                  "type": "null"
                }
              ],
              "default": null,
              "title": "Default"
            },
            "description": {
              "title": "Description",
              "type": "string"
            },
            "name": {
              "title": "Name",
              "type": "string"
            },
            "required": {
              "default": true,
              "title": "Required",
              "type": "boolean"
            }
          },
          "required": [
            "name",
            "description"
          ],
          "title": "EnvVar",
          "type": "object"
        }
      },
      "description": "Run Python code inside an E2B code interpreter sandbox.\n\nUses `e2b_code_interpreter`, which runs cells in a persistent Jupyter-style\nkernel so state (imports, variables) carries across calls when\n`persistent=True`.",
      "properties": {
        "api_key": {
          "anyOf": [
            {
              "format": "password",
              "type": "string",
              "writeOnly": true
            },
            {
              "type": "null"
            }
          ],
          "description": "E2B API key. Falls back to E2B_API_KEY env var.",
          "required": false,
          "title": "Api Key"
        },
        "domain": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "description": "E2B API domain override. Falls back to E2B_DOMAIN env var.",
          "required": false,
          "title": "Domain"
        },
        "envs": {
          "anyOf": [
            {
              "additionalProperties": {
                "type": "string"
              },
              "type": "object"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Environment variables to set inside the sandbox at create time.",
          "title": "Envs"
        },
        "metadata": {
          "anyOf": [
            {
              "additionalProperties": {
                "type": "string"
              },
              "type": "object"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Metadata key-value pairs to attach to the sandbox at create time.",
          "title": "Metadata"
        },
        "persistent": {
          "default": false,
          "description": "If True, reuse one sandbox across all calls to this tool instance and kill it at process exit. Default False creates and kills a fresh sandbox per call.",
          "title": "Persistent",
          "type": "boolean"
        },
        "sandbox_id": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Attach to an existing sandbox by id instead of creating a new one. The tool will never kill a sandbox it did not create.",
          "title": "Sandbox Id"
        },
        "sandbox_timeout": {
          "default": 300,
          "description": "Idle timeout in seconds after which E2B auto-kills the sandbox. Applied at create time and when attaching via sandbox_id.",
          "title": "Sandbox Timeout",
          "type": "integer"
        },
        "template": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Optional template/snapshot name or id to create the sandbox from. Defaults to E2B's base template when omitted.",
          "title": "Template"
        }
      },
      "required": [],
      "title": "E2BPythonTool",
      "type": "object"
    },
    "name": "E2BPythonTool",
    "package_dependencies": [
      "e2b_code_interpreter"
    ],
    "run_params_schema": {
      "properties": {
        "code": {
          "description": "Python source to execute inside the sandbox.",
          "title": "Code",
          "type": "string"
        },
        "envs": {
          "anyOf": [
            {
              "additionalProperties": {
                "type": "string"
              },
              "type": "object"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Optional environment variables for the run.",
          "title": "Envs"
        },
        "language": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Override the execution language (e.g. 'python', 'r', 'javascript'). Defaults to Python when omitted.",
          "title": "Language"
        },
        "timeout": {
          "anyOf": [
            {
              "type": "number"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Maximum seconds to wait for the code to finish.",
          "title": "Timeout"
        }
      },
      "required": [
        "code"
      ],
      "title": "E2BPythonToolSchema",
      "type": "object"
    }
  },
|
||||
{
|
||||
"description": "Search the internet using Exa",
|
||||
"env_vars": [
|
||||
|
||||
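The `run_params_schema` above marks only `code` as required; a caller can enforce that before dispatching a run. A minimal stdlib sketch (the trimmed schema dict and `validate_run_params` helper are illustrative, not crewai-tools API):

```python
# Trimmed version of the E2BPythonTool run-params schema shown above.
RUN_PARAMS_SCHEMA = {
    "properties": {
        "code": {"type": "string"},
        "timeout": {"anyOf": [{"type": "number"}, {"type": "null"}], "default": None},
    },
    "required": ["code"],
}


def validate_run_params(params: dict) -> list[str]:
    """Return a list of problems; an empty list means the params satisfy the schema."""
    problems = []
    for field in RUN_PARAMS_SCHEMA["required"]:
        if field not in params:
            problems.append(f"missing required field: {field}")
    unknown = set(params) - set(RUN_PARAMS_SCHEMA["properties"])
    problems.extend(f"unknown field: {name}" for name in sorted(unknown))
    return problems


print(validate_run_params({"code": "print(1)"}))  # → []
```

A real validator would also check types against `anyOf`; the sketch only covers required/unknown fields.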
@@ -55,7 +55,7 @@ Repository = "https://github.com/crewAIInc/crewAI"

 [project.optional-dependencies]
 tools = [
-    "crewai-tools==1.14.3a2",
+    "crewai-tools==1.14.3",
 ]
 embeddings = [
     "tiktoken~=0.8.0"
@@ -84,7 +84,7 @@ voyageai = [
     "voyageai~=0.3.5",
 ]
 litellm = [
-    "litellm~=1.83.0",
+    "litellm~=1.83.7",
 ]
 bedrock = [
     "boto3~=1.42.79",
@@ -94,6 +94,7 @@ google-genai = [
 ]
 azure-ai-inference = [
     "azure-ai-inference~=1.0.0b9",
+    "azure-identity>=1.17.0,<2",
 ]
 anthropic = [
     "anthropic~=0.73.0",
||||
@@ -13,6 +13,7 @@ from crewai.crews.crew_output import CrewOutput
 from crewai.flow.flow import Flow
 from crewai.knowledge.knowledge import Knowledge
 from crewai.llm import LLM
+from crewai.llm_result import LLMResult, ToolCallRecord
 from crewai.llms.base_llm import BaseLLM
 from crewai.process import Process
 from crewai.state.checkpoint_config import CheckpointConfig  # noqa: F401
@@ -48,7 +49,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:

 _suppress_pydantic_deprecation_warnings()

-__version__ = "1.14.3a2"
+__version__ = "1.14.3"

 _LAZY_IMPORTS: dict[str, tuple[str, str]] = {
     "Memory": ("crewai.memory.unified_memory", "Memory"),
@@ -195,11 +196,13 @@ __all__ = [
     "Flow",
     "Knowledge",
     "LLMGuardrail",
+    "LLMResult",
     "Memory",
     "PlanningConfig",
     "Process",
     "RuntimeState",
     "Task",
     "TaskOutput",
+    "ToolCallRecord",
     "__version__",
 ]
@@ -78,8 +78,7 @@ from crewai.knowledge.knowledge import Knowledge
 from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
 from crewai.lite_agent_output import LiteAgentOutput
 from crewai.llms.base_llm import BaseLLM
-from crewai.mcp import MCPServerConfig
-from crewai.mcp.tool_resolver import MCPToolResolver
+from crewai.mcp.config import MCPServerConfig
 from crewai.rag.embeddings.types import EmbedderConfig
 from crewai.security.fingerprint import Fingerprint
 from crewai.skills.loader import activate_skill, discover_skills
@@ -119,6 +118,7 @@ if TYPE_CHECKING:

     from crewai.a2a.config import A2AClientConfig, A2AConfig, A2AServerConfig
     from crewai.agents.agent_builder.base_agent import PlatformAppOrAction
+    from crewai.mcp.tool_resolver import MCPToolResolver
     from crewai.task import Task
     from crewai.tools.base_tool import BaseTool
     from crewai.tools.structured_tool import CrewStructuredTool
@@ -394,15 +394,17 @@ class Agent(BaseAgent):
         self,
         resolved_crew_skills: list[SkillModel] | None = None,
     ) -> None:
-        """Resolve skill paths and activate skills to INSTRUCTIONS level.
+        """Resolve skill paths while preserving explicit disclosure levels.

-        Path entries trigger discovery and activation. Pre-loaded Skill objects
-        below INSTRUCTIONS level are activated. Crew-level skills are merged in
-        with event emission so observability is consistent regardless of origin.
+        Path entries trigger discovery and activation because directory-based
+        skills opt into eager loading. Pre-loaded Skill objects keep their
+        current disclosure level so callers can attach METADATA-only skills and
+        progressively activate them later. Crew-level skills are merged in with
+        event emission so observability is consistent regardless of origin.

         Args:
-            resolved_crew_skills: Pre-resolved crew skills (already discovered
-                and activated). When provided, avoids redundant discovery per agent.
+            resolved_crew_skills: Pre-resolved crew skills. When provided,
+                avoids redundant discovery per agent.
         """
         from crewai.crew import Crew
@@ -443,8 +445,7 @@ class Agent(BaseAgent):
             elif isinstance(item, SkillModel):
                 if item.name not in seen:
                     seen.add(item.name)
-                    activated = activate_skill(item, source=self)
-                    if activated is item and item.disclosure_level >= INSTRUCTIONS:
+                    if item.disclosure_level >= INSTRUCTIONS:
                         crewai_event_bus.emit(
                             self,
                             event=SkillActivatedEvent(
@@ -454,7 +455,7 @@ class Agent(BaseAgent):
                                 disclosure_level=item.disclosure_level,
                             ),
                         )
-                    resolved.append(activated)
+                    resolved.append(item)

         self.skills = resolved if resolved else None
@@ -1120,6 +1121,8 @@ class Agent(BaseAgent):
         Delegates to :class:`~crewai.mcp.tool_resolver.MCPToolResolver`.
         """
+        self._cleanup_mcp_clients()
+        from crewai.mcp.tool_resolver import MCPToolResolver

         self._mcp_resolver = MCPToolResolver(agent=self, logger=self._logger)
         return self._mcp_resolver.resolve(mcps)
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = [{ name = "Your Name", email = "you@example.com" }]
 requires-python = ">=3.10,<3.14"
 dependencies = [
-    "crewai[tools]==1.14.3a2"
+    "crewai[tools]==1.14.3"
 ]

 [project.scripts]
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = [{ name = "Your Name", email = "you@example.com" }]
 requires-python = ">=3.10,<3.14"
 dependencies = [
-    "crewai[tools]==1.14.3a2"
+    "crewai[tools]==1.14.3"
 ]

 [project.scripts]
@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
 readme = "README.md"
 requires-python = ">=3.10,<3.14"
 dependencies = [
-    "crewai[tools]==1.14.3a2"
+    "crewai[tools]==1.14.3"
 ]

 [tool.crewai]
@@ -6,111 +6,20 @@ This module provides the event infrastructure that allows users to:
 - Build custom logging and analytics
 - Extend CrewAI with custom event handlers
 - Declare handler dependencies for ordered execution

+Event type classes are lazy-loaded on first access to avoid importing
+~12 Pydantic model modules (and their transitive deps) at package init time.
 """

 from __future__ import annotations

+import importlib
+from typing import TYPE_CHECKING, Any
+
 from crewai.events.base_event_listener import BaseEventListener
 from crewai.events.depends import Depends
 from crewai.events.event_bus import crewai_event_bus
 from crewai.events.handler_graph import CircularDependencyError
-from crewai.events.types.crew_events import (
-    CrewKickoffCompletedEvent,
-    CrewKickoffFailedEvent,
-    CrewKickoffStartedEvent,
-    CrewTestCompletedEvent,
-    CrewTestFailedEvent,
-    CrewTestResultEvent,
-    CrewTestStartedEvent,
-    CrewTrainCompletedEvent,
-    CrewTrainFailedEvent,
-    CrewTrainStartedEvent,
-)
-from crewai.events.types.flow_events import (
-    FlowCreatedEvent,
-    FlowEvent,
-    FlowFinishedEvent,
-    FlowPlotEvent,
-    FlowStartedEvent,
-    HumanFeedbackReceivedEvent,
-    HumanFeedbackRequestedEvent,
-    MethodExecutionFailedEvent,
-    MethodExecutionFinishedEvent,
-    MethodExecutionStartedEvent,
-)
-from crewai.events.types.knowledge_events import (
-    KnowledgeQueryCompletedEvent,
-    KnowledgeQueryFailedEvent,
-    KnowledgeQueryStartedEvent,
-    KnowledgeRetrievalCompletedEvent,
-    KnowledgeRetrievalStartedEvent,
-    KnowledgeSearchQueryFailedEvent,
-)
-from crewai.events.types.llm_events import (
-    LLMCallCompletedEvent,
-    LLMCallFailedEvent,
-    LLMCallStartedEvent,
-    LLMStreamChunkEvent,
-)
-from crewai.events.types.llm_guardrail_events import (
-    LLMGuardrailCompletedEvent,
-    LLMGuardrailStartedEvent,
-)
-from crewai.events.types.logging_events import (
-    AgentLogsExecutionEvent,
-    AgentLogsStartedEvent,
-)
-from crewai.events.types.mcp_events import (
-    MCPConfigFetchFailedEvent,
-    MCPConnectionCompletedEvent,
-    MCPConnectionFailedEvent,
-    MCPConnectionStartedEvent,
-    MCPToolExecutionCompletedEvent,
-    MCPToolExecutionFailedEvent,
-    MCPToolExecutionStartedEvent,
-)
-from crewai.events.types.memory_events import (
-    MemoryQueryCompletedEvent,
-    MemoryQueryFailedEvent,
-    MemoryQueryStartedEvent,
-    MemoryRetrievalCompletedEvent,
-    MemoryRetrievalFailedEvent,
-    MemoryRetrievalStartedEvent,
-    MemorySaveCompletedEvent,
-    MemorySaveFailedEvent,
-    MemorySaveStartedEvent,
-)
-from crewai.events.types.reasoning_events import (
-    AgentReasoningCompletedEvent,
-    AgentReasoningFailedEvent,
-    AgentReasoningStartedEvent,
-    ReasoningEvent,
-)
-from crewai.events.types.skill_events import (
-    SkillActivatedEvent,
-    SkillDiscoveryCompletedEvent,
-    SkillDiscoveryStartedEvent,
-    SkillEvent,
-    SkillLoadFailedEvent,
-    SkillLoadedEvent,
-)
-from crewai.events.types.task_events import (
-    TaskCompletedEvent,
-    TaskEvaluationEvent,
-    TaskFailedEvent,
-    TaskStartedEvent,
-)
-from crewai.events.types.tool_usage_events import (
-    ToolExecutionErrorEvent,
-    ToolSelectionErrorEvent,
-    ToolUsageErrorEvent,
-    ToolUsageEvent,
-    ToolUsageFinishedEvent,
-    ToolUsageStartedEvent,
-    ToolValidateInputErrorEvent,
-)


 if TYPE_CHECKING:
@@ -125,6 +34,250 @@ if TYPE_CHECKING:
         LiteAgentExecutionErrorEvent,
         LiteAgentExecutionStartedEvent,
     )
+    from crewai.events.types.checkpoint_events import (
+        CheckpointBaseEvent,
+        CheckpointCompletedEvent,
+        CheckpointFailedEvent,
+        CheckpointForkBaseEvent,
+        CheckpointForkCompletedEvent,
+        CheckpointForkStartedEvent,
+        CheckpointPrunedEvent,
+        CheckpointRestoreBaseEvent,
+        CheckpointRestoreCompletedEvent,
+        CheckpointRestoreFailedEvent,
+        CheckpointRestoreStartedEvent,
+        CheckpointStartedEvent,
+    )
+    from crewai.events.types.crew_events import (
+        CrewKickoffCompletedEvent,
+        CrewKickoffFailedEvent,
+        CrewKickoffStartedEvent,
+        CrewTestCompletedEvent,
+        CrewTestFailedEvent,
+        CrewTestResultEvent,
+        CrewTestStartedEvent,
+        CrewTrainCompletedEvent,
+        CrewTrainFailedEvent,
+        CrewTrainStartedEvent,
+    )
+    from crewai.events.types.flow_events import (
+        FlowCreatedEvent,
+        FlowEvent,
+        FlowFinishedEvent,
+        FlowPlotEvent,
+        FlowStartedEvent,
+        HumanFeedbackReceivedEvent,
+        HumanFeedbackRequestedEvent,
+        MethodExecutionFailedEvent,
+        MethodExecutionFinishedEvent,
+        MethodExecutionStartedEvent,
+    )
+    from crewai.events.types.knowledge_events import (
+        KnowledgeQueryCompletedEvent,
+        KnowledgeQueryFailedEvent,
+        KnowledgeQueryStartedEvent,
+        KnowledgeRetrievalCompletedEvent,
+        KnowledgeRetrievalStartedEvent,
+        KnowledgeSearchQueryFailedEvent,
+    )
+    from crewai.events.types.llm_events import (
+        LLMCallCompletedEvent,
+        LLMCallFailedEvent,
+        LLMCallStartedEvent,
+        LLMStreamChunkEvent,
+    )
+    from crewai.events.types.llm_guardrail_events import (
+        LLMGuardrailCompletedEvent,
+        LLMGuardrailStartedEvent,
+    )
+    from crewai.events.types.logging_events import (
+        AgentLogsExecutionEvent,
+        AgentLogsStartedEvent,
+    )
+    from crewai.events.types.mcp_events import (
+        MCPConfigFetchFailedEvent,
+        MCPConnectionCompletedEvent,
+        MCPConnectionFailedEvent,
+        MCPConnectionStartedEvent,
+        MCPToolExecutionCompletedEvent,
+        MCPToolExecutionFailedEvent,
+        MCPToolExecutionStartedEvent,
+    )
+    from crewai.events.types.memory_events import (
+        MemoryQueryCompletedEvent,
+        MemoryQueryFailedEvent,
+        MemoryQueryStartedEvent,
+        MemoryRetrievalCompletedEvent,
+        MemoryRetrievalFailedEvent,
+        MemoryRetrievalStartedEvent,
+        MemorySaveCompletedEvent,
+        MemorySaveFailedEvent,
+        MemorySaveStartedEvent,
+    )
+    from crewai.events.types.reasoning_events import (
+        AgentReasoningCompletedEvent,
+        AgentReasoningFailedEvent,
+        AgentReasoningStartedEvent,
+        ReasoningEvent,
+    )
+    from crewai.events.types.skill_events import (
+        SkillActivatedEvent,
+        SkillDiscoveryCompletedEvent,
+        SkillDiscoveryStartedEvent,
+        SkillEvent,
+        SkillLoadFailedEvent,
+        SkillLoadedEvent,
+    )
+    from crewai.events.types.task_events import (
+        TaskCompletedEvent,
+        TaskEvaluationEvent,
+        TaskFailedEvent,
+        TaskStartedEvent,
+    )
+    from crewai.events.types.tool_usage_events import (
+        ToolExecutionErrorEvent,
+        ToolSelectionErrorEvent,
+        ToolUsageErrorEvent,
+        ToolUsageEvent,
+        ToolUsageFinishedEvent,
+        ToolUsageStartedEvent,
+        ToolValidateInputErrorEvent,
+    )
+
+
+# Map every event class name → its module path for lazy loading
+_LAZY_EVENT_MAPPING: dict[str, str] = {
+    # agent_events
+    "AgentEvaluationCompletedEvent": "crewai.events.types.agent_events",
+    "AgentEvaluationFailedEvent": "crewai.events.types.agent_events",
+    "AgentEvaluationStartedEvent": "crewai.events.types.agent_events",
+    "AgentExecutionCompletedEvent": "crewai.events.types.agent_events",
+    "AgentExecutionErrorEvent": "crewai.events.types.agent_events",
+    "AgentExecutionStartedEvent": "crewai.events.types.agent_events",
+    "LiteAgentExecutionCompletedEvent": "crewai.events.types.agent_events",
+    "LiteAgentExecutionErrorEvent": "crewai.events.types.agent_events",
+    "LiteAgentExecutionStartedEvent": "crewai.events.types.agent_events",
+    # checkpoint_events
+    "CheckpointBaseEvent": "crewai.events.types.checkpoint_events",
+    "CheckpointCompletedEvent": "crewai.events.types.checkpoint_events",
+    "CheckpointFailedEvent": "crewai.events.types.checkpoint_events",
+    "CheckpointForkBaseEvent": "crewai.events.types.checkpoint_events",
+    "CheckpointForkCompletedEvent": "crewai.events.types.checkpoint_events",
+    "CheckpointForkStartedEvent": "crewai.events.types.checkpoint_events",
+    "CheckpointPrunedEvent": "crewai.events.types.checkpoint_events",
+    "CheckpointRestoreBaseEvent": "crewai.events.types.checkpoint_events",
+    "CheckpointRestoreCompletedEvent": "crewai.events.types.checkpoint_events",
+    "CheckpointRestoreFailedEvent": "crewai.events.types.checkpoint_events",
+    "CheckpointRestoreStartedEvent": "crewai.events.types.checkpoint_events",
+    "CheckpointStartedEvent": "crewai.events.types.checkpoint_events",
+    # crew_events
+    "CrewKickoffCompletedEvent": "crewai.events.types.crew_events",
+    "CrewKickoffFailedEvent": "crewai.events.types.crew_events",
+    "CrewKickoffStartedEvent": "crewai.events.types.crew_events",
+    "CrewTestCompletedEvent": "crewai.events.types.crew_events",
+    "CrewTestFailedEvent": "crewai.events.types.crew_events",
+    "CrewTestResultEvent": "crewai.events.types.crew_events",
+    "CrewTestStartedEvent": "crewai.events.types.crew_events",
+    "CrewTrainCompletedEvent": "crewai.events.types.crew_events",
+    "CrewTrainFailedEvent": "crewai.events.types.crew_events",
+    "CrewTrainStartedEvent": "crewai.events.types.crew_events",
+    # flow_events
+    "FlowCreatedEvent": "crewai.events.types.flow_events",
+    "FlowEvent": "crewai.events.types.flow_events",
+    "FlowFinishedEvent": "crewai.events.types.flow_events",
+    "FlowPlotEvent": "crewai.events.types.flow_events",
+    "FlowStartedEvent": "crewai.events.types.flow_events",
+    "HumanFeedbackReceivedEvent": "crewai.events.types.flow_events",
+    "HumanFeedbackRequestedEvent": "crewai.events.types.flow_events",
+    "MethodExecutionFailedEvent": "crewai.events.types.flow_events",
+    "MethodExecutionFinishedEvent": "crewai.events.types.flow_events",
+    "MethodExecutionStartedEvent": "crewai.events.types.flow_events",
+    # knowledge_events
+    "KnowledgeQueryCompletedEvent": "crewai.events.types.knowledge_events",
+    "KnowledgeQueryFailedEvent": "crewai.events.types.knowledge_events",
+    "KnowledgeQueryStartedEvent": "crewai.events.types.knowledge_events",
+    "KnowledgeRetrievalCompletedEvent": "crewai.events.types.knowledge_events",
+    "KnowledgeRetrievalStartedEvent": "crewai.events.types.knowledge_events",
+    "KnowledgeSearchQueryFailedEvent": "crewai.events.types.knowledge_events",
+    # llm_events
+    "LLMCallCompletedEvent": "crewai.events.types.llm_events",
+    "LLMCallFailedEvent": "crewai.events.types.llm_events",
+    "LLMCallStartedEvent": "crewai.events.types.llm_events",
+    "LLMStreamChunkEvent": "crewai.events.types.llm_events",
+    # llm_guardrail_events
+    "LLMGuardrailCompletedEvent": "crewai.events.types.llm_guardrail_events",
+    "LLMGuardrailStartedEvent": "crewai.events.types.llm_guardrail_events",
+    # logging_events
+    "AgentLogsExecutionEvent": "crewai.events.types.logging_events",
+    "AgentLogsStartedEvent": "crewai.events.types.logging_events",
+    # mcp_events
+    "MCPConfigFetchFailedEvent": "crewai.events.types.mcp_events",
+    "MCPConnectionCompletedEvent": "crewai.events.types.mcp_events",
+    "MCPConnectionFailedEvent": "crewai.events.types.mcp_events",
+    "MCPConnectionStartedEvent": "crewai.events.types.mcp_events",
+    "MCPToolExecutionCompletedEvent": "crewai.events.types.mcp_events",
+    "MCPToolExecutionFailedEvent": "crewai.events.types.mcp_events",
+    "MCPToolExecutionStartedEvent": "crewai.events.types.mcp_events",
+    # memory_events
+    "MemoryQueryCompletedEvent": "crewai.events.types.memory_events",
+    "MemoryQueryFailedEvent": "crewai.events.types.memory_events",
+    "MemoryQueryStartedEvent": "crewai.events.types.memory_events",
+    "MemoryRetrievalCompletedEvent": "crewai.events.types.memory_events",
+    "MemoryRetrievalFailedEvent": "crewai.events.types.memory_events",
+    "MemoryRetrievalStartedEvent": "crewai.events.types.memory_events",
+    "MemorySaveCompletedEvent": "crewai.events.types.memory_events",
+    "MemorySaveFailedEvent": "crewai.events.types.memory_events",
+    "MemorySaveStartedEvent": "crewai.events.types.memory_events",
+    # reasoning_events
+    "AgentReasoningCompletedEvent": "crewai.events.types.reasoning_events",
+    "AgentReasoningFailedEvent": "crewai.events.types.reasoning_events",
+    "AgentReasoningStartedEvent": "crewai.events.types.reasoning_events",
+    "ReasoningEvent": "crewai.events.types.reasoning_events",
+    # skill_events
+    "SkillActivatedEvent": "crewai.events.types.skill_events",
+    "SkillDiscoveryCompletedEvent": "crewai.events.types.skill_events",
+    "SkillDiscoveryStartedEvent": "crewai.events.types.skill_events",
+    "SkillEvent": "crewai.events.types.skill_events",
+    "SkillLoadFailedEvent": "crewai.events.types.skill_events",
+    "SkillLoadedEvent": "crewai.events.types.skill_events",
+    # task_events
+    "TaskCompletedEvent": "crewai.events.types.task_events",
+    "TaskEvaluationEvent": "crewai.events.types.task_events",
+    "TaskFailedEvent": "crewai.events.types.task_events",
+    "TaskStartedEvent": "crewai.events.types.task_events",
+    # tool_usage_events
+    "ToolExecutionErrorEvent": "crewai.events.types.tool_usage_events",
+    "ToolSelectionErrorEvent": "crewai.events.types.tool_usage_events",
+    "ToolUsageErrorEvent": "crewai.events.types.tool_usage_events",
+    "ToolUsageEvent": "crewai.events.types.tool_usage_events",
+    "ToolUsageFinishedEvent": "crewai.events.types.tool_usage_events",
+    "ToolUsageStartedEvent": "crewai.events.types.tool_usage_events",
+    "ToolValidateInputErrorEvent": "crewai.events.types.tool_usage_events",
+}
+
+_extension_exports: dict[str, Any] = {}
+
+
+def __getattr__(name: str) -> Any:
+    """Lazy import for event types and registered extensions."""
+    if name in _LAZY_EVENT_MAPPING:
+        module_path = _LAZY_EVENT_MAPPING[name]
+        module = importlib.import_module(module_path)
+        val = getattr(module, name)
+        globals()[name] = val  # cache for subsequent access
+        return val
+
+    if name in _extension_exports:
+        value = _extension_exports[name]
+        if isinstance(value, str):
+            module_path, _, attr_name = value.rpartition(".")
+            if module_path:
+                module = importlib.import_module(module_path)
+                return getattr(module, attr_name)
+            return importlib.import_module(value)
+        return value
+
+    msg = f"module {__name__!r} has no attribute {name!r}"
+    raise AttributeError(msg)


 __all__ = [
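The module-level `__getattr__` introduced above relies on PEP 562: Python falls back to a `__getattr__` function in the module's namespace when a regular attribute lookup fails, so names resolve (and import their backing modules) only on first access. A self-contained stdlib sketch of the same pattern (the `lazy_demo` module and its attribute map are invented for illustration):

```python
import importlib
import sys
import types

# attr name -> module path, mirroring the shape of _LAZY_EVENT_MAPPING
lazy_map = {"sqrt": "math", "dumps": "json"}

mod = types.ModuleType("lazy_demo")


def _module_getattr(name):
    if name in lazy_map:
        target = importlib.import_module(lazy_map[name])
        val = getattr(target, name)
        setattr(mod, name, val)  # cache so later lookups bypass __getattr__
        return val
    raise AttributeError(f"module 'lazy_demo' has no attribute {name!r}")


mod.__getattr__ = _module_getattr  # PEP 562 hook, looked up in the module dict
sys.modules["lazy_demo"] = mod

import lazy_demo

print(lazy_demo.sqrt(9.0))  # math is imported only at this point
```

After the first access the value is cached in the module's namespace, so the `__getattr__` hook (and its `importlib` machinery) runs at most once per name.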
@@ -140,6 +293,18 @@ __all__ = [
     "AgentReasoningFailedEvent",
     "AgentReasoningStartedEvent",
     "BaseEventListener",
+    "CheckpointBaseEvent",
+    "CheckpointCompletedEvent",
+    "CheckpointFailedEvent",
+    "CheckpointForkBaseEvent",
+    "CheckpointForkCompletedEvent",
+    "CheckpointForkStartedEvent",
+    "CheckpointPrunedEvent",
+    "CheckpointRestoreBaseEvent",
+    "CheckpointRestoreCompletedEvent",
+    "CheckpointRestoreFailedEvent",
+    "CheckpointRestoreStartedEvent",
+    "CheckpointStartedEvent",
     "CircularDependencyError",
     "CrewKickoffCompletedEvent",
     "CrewKickoffFailedEvent",
@@ -214,42 +379,3 @@ __all__ = [
     "_extension_exports",
     "crewai_event_bus",
 ]
-
-
-_AGENT_EVENT_MAPPING = {
-    "AgentEvaluationCompletedEvent": "crewai.events.types.agent_events",
-    "AgentEvaluationFailedEvent": "crewai.events.types.agent_events",
-    "AgentEvaluationStartedEvent": "crewai.events.types.agent_events",
-    "AgentExecutionCompletedEvent": "crewai.events.types.agent_events",
-    "AgentExecutionErrorEvent": "crewai.events.types.agent_events",
-    "AgentExecutionStartedEvent": "crewai.events.types.agent_events",
-    "LiteAgentExecutionCompletedEvent": "crewai.events.types.agent_events",
-    "LiteAgentExecutionErrorEvent": "crewai.events.types.agent_events",
-    "LiteAgentExecutionStartedEvent": "crewai.events.types.agent_events",
-}
-
-_extension_exports: dict[str, Any] = {}
-
-
-def __getattr__(name: str) -> Any:
-    """Lazy import for agent events and registered extensions."""
-    if name in _AGENT_EVENT_MAPPING:
-        import importlib
-
-        module_path = _AGENT_EVENT_MAPPING[name]
-        module = importlib.import_module(module_path)
-        return getattr(module, name)
-
-    if name in _extension_exports:
-        import importlib
-
-        value = _extension_exports[name]
-        if isinstance(value, str):
-            module_path, _, attr_name = value.rpartition(".")
-            if module_path:
-                module = importlib.import_module(module_path)
-                return getattr(module, attr_name)
-            return importlib.import_module(value)
-        return value
-
-    msg = f"module {__name__!r} has no attribute {name!r}"
-    raise AttributeError(msg)
@@ -64,6 +64,22 @@ P = ParamSpec("P")
 R = TypeVar("R")


+_replaying: contextvars.ContextVar[bool] = contextvars.ContextVar(
+    "crewai_event_replaying", default=False
+)
+
+
+def is_replaying() -> bool:
+    """Return True if the current context is dispatching a replayed event.
+
+    Listeners with side effects (checkpoint writes, external API calls that
+    should not be repeated) should early-return when this is true. Listeners
+    whose purpose is reconstructing timeline state (trace batch, console
+    formatter) should ignore the flag and process replayed events normally.
+    """
+    return _replaying.get()
+
+
 class CrewAIEventsBus:
     """Singleton event bus for handling events in CrewAI.
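Because `_replaying` is a `contextvars.ContextVar`, the flag is scoped to the dispatching context rather than being global mutable state. A minimal stdlib sketch of the same guard pattern (the `dispatch`/`on_event` names and the `writes` store are illustrative, not crewai API):

```python
import contextvars

_replaying = contextvars.ContextVar("replaying", default=False)


def is_replaying() -> bool:
    return _replaying.get()


writes = []  # stands in for a checkpoint store or an external API


def on_event(payload) -> None:
    if is_replaying():
        return  # side-effectful listener opts out during replay
    writes.append(payload)


def dispatch(payload, replay: bool = False) -> None:
    token = _replaying.set(replay)
    try:
        on_event(payload)
    finally:
        _replaying.reset(token)  # restore the previous flag value


dispatch("live-1")
dispatch("replayed", replay=True)
print(writes)  # only the live event caused a write
```

The `set`/`reset` token pairing is what makes the flag safe to nest: resetting restores whatever value was in effect before this dispatch, rather than unconditionally clearing it.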
@@ -261,6 +277,11 @@ class CrewAIEventsBus:
         self._runtime_state = state
         self._registered_entity_ids = {id(e) for e in state.root}

+    @property
+    def runtime_state(self) -> RuntimeState | None:
+        """The RuntimeState currently attached to the bus, if any."""
+        return self._runtime_state
+
     def register_entity(self, entity: Any) -> None:
         """Add an entity to the RuntimeState, creating it if needed.
@@ -568,6 +589,87 @@ class CrewAIEventsBus:

         return None

+    async def _acall_handlers_replaying(
+        self,
+        source: Any,
+        event: BaseEvent,
+        handlers: AsyncHandlerSet,
+    ) -> None:
+        """Call async handlers with the replaying flag set on the loop thread."""
+        token = _replaying.set(True)
+        try:
+            await self._acall_handlers(source, event, handlers)
+        finally:
+            _replaying.reset(token)
+
+    async def _emit_with_dependencies_replaying(
+        self, source: Any, event: BaseEvent
+    ) -> None:
+        """Dependency-aware dispatch with the replaying flag set."""
+        token = _replaying.set(True)
+        try:
+            await self._emit_with_dependencies(source, event)
+        finally:
+            _replaying.reset(token)
+
+    def replay(self, source: Any, event: BaseEvent) -> Future[None] | None:
+        """Dispatch a previously-recorded event without mutating its fields.
+
+        Unlike :meth:`emit`, this does not run ``_prepare_event`` (so stored
+        event ids and ``emission_sequence`` are preserved) and does not
+        re-record the event. Listeners can call :func:`is_replaying` to
+        opt out of side-effectful processing.
+
+        Args:
+            source: The emitting object.
+            event: The previously-recorded event to dispatch.
+
+        Returns:
+            Future that completes when handlers finish, or None if no handlers.
+        """
+        event_type = type(event)
+
+        with self._rwlock.r_locked():
+            if self._shutting_down:
+                return None
+            has_dependencies = event_type in self._handler_dependencies
+            sync_handlers = self._sync_handlers.get(event_type, frozenset())
+            async_handlers = self._async_handlers.get(event_type, frozenset())
+
+        if not sync_handlers and not async_handlers:
+            return None
+
+        self._ensure_executor_initialized()
+        self._has_pending_events = True
+
+        token = _replaying.set(True)
+        try:
+            if has_dependencies:
+                return self._track_future(
+                    asyncio.run_coroutine_threadsafe(
+                        self._emit_with_dependencies_replaying(source, event),
+                        self._loop,
+                    )
+                )
+
+            if sync_handlers:
+                ctx = contextvars.copy_context()
+                sync_future = self._sync_executor.submit(
+                    ctx.run, self._call_handlers, source, event, sync_handlers
+                )
+                self._track_future(sync_future)
+                if not async_handlers:
+                    return sync_future
+
+            return self._track_future(
+                asyncio.run_coroutine_threadsafe(
+                    self._acall_handlers_replaying(source, event, async_handlers),
+                    self._loop,
+                )
+            )
+        finally:
+            _replaying.reset(token)
+
     def flush(self, timeout: float | None = 30.0) -> bool:
         """Block until all pending event handlers complete.
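Note how the sync path above submits handlers through `contextvars.copy_context()` and `ctx.run`: a plain executor submission would lose the `_replaying` flag, because new threads do not inherit the submitting context's `ContextVar` values. A small stdlib demonstration of why that hop matters (the `flag`/`handler` names are illustrative):

```python
import contextvars
from concurrent.futures import ThreadPoolExecutor

flag = contextvars.ContextVar("flag", default="unset")


def handler() -> str:
    return flag.get()


flag.set("replaying")
with ThreadPoolExecutor(max_workers=1) as pool:
    # Submitted directly, the worker thread runs in its own context and
    # only sees the ContextVar default.
    bare = pool.submit(handler).result()
    # Submitted via a copied context, the submitting thread's values
    # (including our flag) carry over onto the worker thread.
    ctx = contextvars.copy_context()
    carried = pool.submit(ctx.run, handler).result()

print(bare, carried)
```

This is the same trick that lets `replay` set the flag once on the calling thread and have every handler, sync or async, observe it.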
@@ -30,6 +30,17 @@ from crewai.events.types.agent_events import (
    AgentExecutionStartedEvent,
    LiteAgentExecutionCompletedEvent,
 )
+from crewai.events.types.checkpoint_events import (
+    CheckpointCompletedEvent,
+    CheckpointFailedEvent,
+    CheckpointForkCompletedEvent,
+    CheckpointForkStartedEvent,
+    CheckpointPrunedEvent,
+    CheckpointRestoreCompletedEvent,
+    CheckpointRestoreFailedEvent,
+    CheckpointRestoreStartedEvent,
+    CheckpointStartedEvent,
+)
 from crewai.events.types.crew_events import (
     CrewKickoffCompletedEvent,
     CrewKickoffFailedEvent,
@@ -183,4 +194,13 @@ EventTypes = (
     | MCPToolExecutionCompletedEvent
     | MCPToolExecutionFailedEvent
     | MCPConfigFetchFailedEvent
+    | CheckpointStartedEvent
+    | CheckpointCompletedEvent
+    | CheckpointFailedEvent
+    | CheckpointForkStartedEvent
+    | CheckpointForkCompletedEvent
+    | CheckpointRestoreStartedEvent
+    | CheckpointRestoreCompletedEvent
+    | CheckpointRestoreFailedEvent
+    | CheckpointPrunedEvent
 )
lib/crewai/src/crewai/events/types/checkpoint_events.py (new file, 97 lines)
@@ -0,0 +1,97 @@
|
||||
"""Event family for automatic state checkpointing and forking."""
|
||||
|
||||
from typing import Literal
|
||||
|
||||
from crewai.events.base_events import BaseEvent
|
||||
|
||||
|
||||
class CheckpointBaseEvent(BaseEvent):
|
||||
"""Base event for checkpoint lifecycle operations."""
|
||||
|
||||
type: str
|
||||
location: str
|
||||
provider: str
|
||||
trigger: str | None = None
|
||||
branch: str | None = None
|
||||
parent_id: str | None = None
|
||||
|
||||
|
||||
class CheckpointStartedEvent(CheckpointBaseEvent):
|
||||
"""Event emitted immediately before a checkpoint is written."""
|
||||
|
||||
type: Literal["checkpoint_started"] = "checkpoint_started"
|
||||
|
||||
|
||||
class CheckpointCompletedEvent(CheckpointBaseEvent):
|
||||
"""Event emitted when a checkpoint has been written successfully."""
|
||||
|
||||
type: Literal["checkpoint_completed"] = "checkpoint_completed"
|
||||
checkpoint_id: str
|
||||
duration_ms: float
|
||||
|
||||
|
||||
class CheckpointFailedEvent(CheckpointBaseEvent):
|
||||
"""Event emitted when a checkpoint write fails."""
|
||||
|
||||
type: Literal["checkpoint_failed"] = "checkpoint_failed"
|
||||
error: str
|
||||
|
||||
|
||||
class CheckpointPrunedEvent(CheckpointBaseEvent):
|
||||
"""Event emitted after pruning old checkpoints from a branch."""
|
||||
|
||||
type: Literal["checkpoint_pruned"] = "checkpoint_pruned"
|
||||
removed_count: int
|
||||
max_checkpoints: int
|
||||
|
||||
|
||||
class CheckpointForkBaseEvent(BaseEvent):
|
||||
"""Base event for fork lifecycle operations on a RuntimeState."""
|
||||
|
||||
type: str
|
||||
branch: str
|
||||
parent_branch: str | None = None
|
||||
parent_checkpoint_id: str | None = None
|
||||
|
||||
|
||||
class CheckpointForkStartedEvent(CheckpointForkBaseEvent):
|
||||
"""Event emitted immediately before a fork relabels the branch."""
|
||||
|
||||
type: Literal["checkpoint_fork_started"] = "checkpoint_fork_started"
|
||||
|
||||
|
||||
class CheckpointForkCompletedEvent(CheckpointForkBaseEvent):
|
||||
"""Event emitted after a fork has established the new branch."""
|
||||
|
||||
type: Literal["checkpoint_fork_completed"] = "checkpoint_fork_completed"
|
||||
|
||||
|
||||
class CheckpointRestoreBaseEvent(BaseEvent):
|
||||
"""Base event for checkpoint restore lifecycle operations."""
|
||||
|
||||
type: str
|
||||
location: str
|
||||
provider: str | None = None
|
||||
|
||||
|
||||
class CheckpointRestoreStartedEvent(CheckpointRestoreBaseEvent):
|
||||
"""Event emitted immediately before a checkpoint restore begins."""
|
||||
|
||||
type: Literal["checkpoint_restore_started"] = "checkpoint_restore_started"
|
||||
|
||||
|
||||
class CheckpointRestoreCompletedEvent(CheckpointRestoreBaseEvent):
|
||||
"""Event emitted when a checkpoint has been restored successfully."""
|
||||
|
||||
type: Literal["checkpoint_restore_completed"] = "checkpoint_restore_completed"
|
||||
checkpoint_id: str
|
||||
branch: str | None = None
|
||||
parent_id: str | None = None
|
||||
duration_ms: float
|
||||
|
||||
|
||||
class CheckpointRestoreFailedEvent(CheckpointRestoreBaseEvent):
|
||||
"""Event emitted when a checkpoint restore fails."""
|
||||
|
||||
type: Literal["checkpoint_restore_failed"] = "checkpoint_restore_failed"
|
||||
error: str
|
||||
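The started/completed/failed lifecycle that these event classes model can be sketched without any CrewAI dependencies. The tiny `emit` sink and `do_checkpoint` helper below are illustrative stand-ins, not the real `crewai_event_bus` API:

```python
import time

emitted = []  # stand-in event sink (the real code emits on crewai_event_bus)

def emit(event_type, **fields):
    emitted.append({"type": event_type, **fields})

def do_checkpoint(write, location, provider):
    # Emit "started" before the write, then either "completed" with the
    # measured duration_ms or "failed" with the error string.
    emit("checkpoint_started", location=location, provider=provider)
    start = time.perf_counter()
    try:
        checkpoint_id = write()
    except Exception as exc:
        emit("checkpoint_failed", location=location, provider=provider, error=str(exc))
        raise
    duration_ms = (time.perf_counter() - start) * 1000.0
    emit(
        "checkpoint_completed",
        location=location,
        provider=provider,
        checkpoint_id=checkpoint_id,
        duration_ms=duration_ms,
    )
    return checkpoint_id

cid = do_checkpoint(lambda: "ckpt-1", "/tmp/ckpts", "FileProvider")
```

Emitting "started" before any work begins is what lets listeners pair it with the terminal "completed" or "failed" event and attribute the measured duration.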
@@ -45,6 +45,7 @@ from pydantic import (
    BeforeValidator,
    ConfigDict,
    Field,
    PlainSerializer,
    PrivateAttr,
    SerializeAsAny,
    ValidationError,
@@ -58,6 +59,7 @@ from crewai.events.event_bus import crewai_event_bus
from crewai.events.event_context import (
    get_current_parent_id,
    reset_last_event_id,
    restore_event_scope,
    triggered_by_scope,
)
from crewai.events.listeners.tracing.trace_listener import (
@@ -157,6 +159,37 @@ def _resolve_persistence(value: Any) -> Any:
    return value


_INITIAL_STATE_CLASS_MARKER = "__crewai_pydantic_class_schema__"


def _serialize_initial_state(value: Any) -> Any:
    """Make ``initial_state`` safe for JSON checkpoint serialization.

    ``BaseModel`` class refs are emitted as their JSON schema under a sentinel
    marker key so deserialization can round-trip them back to a class.
    ``BaseModel`` instances are dumped to JSON (round-trip as plain dicts,
    which ``_create_initial_state`` accepts). Bare ``type`` values that are
    not ``BaseModel`` subclasses (e.g. ``dict``) are dropped since they
    can't be represented in JSON.
    """
    if isinstance(value, type):
        if issubclass(value, BaseModel):
            return {_INITIAL_STATE_CLASS_MARKER: value.model_json_schema()}
        return None
    if isinstance(value, BaseModel):
        return value.model_dump(mode="json")
    return value


def _deserialize_initial_state(value: Any) -> Any:
    """Rehydrate a class ref serialized by :func:`_serialize_initial_state`."""
    if isinstance(value, dict) and _INITIAL_STATE_CLASS_MARKER in value:
        from crewai.utilities.pydantic_schema_utils import create_model_from_schema

        return create_model_from_schema(value[_INITIAL_STATE_CLASS_MARKER])
    return value


class FlowState(BaseModel):
    """Base model for all flow states, ensuring each state has a unique ID."""

@@ -908,7 +941,11 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):

    entity_type: Literal["flow"] = "flow"

-    initial_state: Any = Field(default=None)
+    initial_state: Annotated[  # type: ignore[type-arg]
+        type[BaseModel] | type[dict] | dict[str, Any] | BaseModel | None,
+        BeforeValidator(_deserialize_initial_state),
+        PlainSerializer(_serialize_initial_state, return_type=Any, when_used="json"),
+    ] = Field(default=None)
    name: str | None = Field(default=None)
    tracing: bool | None = Field(default=None)
    stream: bool = Field(default=False)
@@ -980,13 +1017,18 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
            A Flow instance on the new branch. Call kickoff() to run.
        """
        flow = cls.from_checkpoint(config)
-        state = crewai_event_bus._runtime_state
+        state = crewai_event_bus.runtime_state
+        if state is None:
+            raise RuntimeError(
+                "Cannot fork: no runtime state on the event bus. "
+                "Ensure from_checkpoint() succeeded before calling fork()."
+            )
        state.fork(branch)
        new_id = str(uuid4())
        if isinstance(flow._state, dict):
            flow._state["id"] = new_id
        else:
            object.__setattr__(flow._state, "id", new_id)
        return flow

    checkpoint_completed_methods: set[str] | None = Field(default=None)
@@ -1008,6 +1050,8 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
        }
        if self.checkpoint_state is not None:
            self._restore_state(self.checkpoint_state)
+            restore_event_scope(())
+            reset_last_event_id()

    _methods: dict[FlowMethodName, FlowMethod[Any, Any]] = PrivateAttr(
        default_factory=dict
@@ -1030,6 +1074,7 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
    _human_feedback_method_outputs: dict[str, Any] = PrivateAttr(default_factory=dict)
    _input_history: list[InputHistoryEntry] = PrivateAttr(default_factory=list)
    _state: Any = PrivateAttr(default=None)
+    _execution_id: str = PrivateAttr(default_factory=lambda: str(uuid4()))

    def __class_getitem__(cls: type[Flow[T]], item: type[T]) -> type[Flow[T]]:  # type: ignore[override]
        class _FlowGeneric(cls):  # type: ignore[valid-type,misc]
@@ -1820,6 +1865,27 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
        except (AttributeError, TypeError):
            return ""  # Safely handle any unexpected attribute access issues

+    @property
+    def execution_id(self) -> str:
+        """Stable identifier for this flow execution.
+
+        Separate from ``flow_id`` / ``state.id``, which consumers may
+        override via ``kickoff(inputs={"id": ...})`` to resume a persisted
+        flow. ``execution_id`` is never affected by ``inputs`` and stays
+        stable for the lifetime of a single run, so it is the correct key
+        for telemetry, tracing, and any external correlation that must
+        uniquely identify a single execution even when callers pass an
+        ``id`` in ``inputs``.
+
+        Defaults to a fresh ``uuid4`` per ``Flow`` instance; assign to
+        override when an outer system already has an execution identity.
+        """
+        return self._execution_id
+
+    @execution_id.setter
+    def execution_id(self, value: str) -> None:
+        self._execution_id = value
+
    def _initialize_state(self, inputs: dict[str, Any]) -> None:
        """Initialize or update flow state with new inputs.

@@ -2133,9 +2199,9 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
        flow_id_token = None
        request_id_token = None
        if current_flow_id.get() is None:
-            flow_id_token = current_flow_id.set(self.flow_id)
+            flow_id_token = current_flow_id.set(self.execution_id)
        if current_flow_request_id.get() is None:
-            request_id_token = current_flow_request_id.set(self.flow_id)
+            request_id_token = current_flow_request_id.set(self.execution_id)

        try:
            # Reset flow state for fresh execution unless restoring from persistence
@@ -2214,6 +2280,9 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
        if inputs is not None and "id" not in inputs:
            self._initialize_state(inputs)

+        if self._is_execution_resuming:
+            await self._replay_recorded_events()
+
        try:
            # Determine which start methods to execute at kickoff
            # Conditional start methods (with __trigger_methods__) are only triggered by their conditions
@@ -2361,6 +2430,44 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
        """
        return await self.kickoff_async(inputs, input_files, from_checkpoint)

    async def _replay_recorded_events(self) -> None:
        """Dispatch recorded ``MethodExecution*`` events from the event record."""
        state = crewai_event_bus.runtime_state
        if state is None:
            return
        record = state.event_record
        if len(record) == 0:
            return

        replayable = (
            MethodExecutionStartedEvent,
            MethodExecutionFinishedEvent,
            MethodExecutionFailedEvent,
        )
        flow_name = self.name or self.__class__.__name__
        nodes = sorted(
            (
                n
                for n in record.all_nodes()
                if isinstance(n.event, replayable)
                and n.event.flow_name == flow_name
                and n.event.method_name in self._completed_methods
            ),
            key=lambda n: n.event.emission_sequence or 0,
        )

        for node in nodes:
            future = crewai_event_bus.replay(self, node.event)
            if future is not None:
                try:
                    await asyncio.wrap_future(future)
                except Exception:
                    logger.warning(
                        "Replayed event handler failed: %s",
                        node.event.type,
                        exc_info=True,
                    )

    async def _execute_start_method(self, start_method_name: FlowMethodName) -> None:
        """Executes a flow's start method and its triggered listeners.
@@ -32,6 +32,11 @@ from crewai.events.types.tool_usage_events import (
    ToolUsageFinishedEvent,
    ToolUsageStartedEvent,
)
from crewai.llm_result import (
    LLMResult,
    ToolCallRecord,
    estimate_cost_usd as _estimate_cost_usd,
)
from crewai.llms.base_llm import (
    BaseLLM,
    JsonResponseFormat,
@@ -1699,6 +1704,7 @@ class LLM(BaseLLM):
        from_task: Task | None = None,
        from_agent: BaseAgent | None = None,
        response_model: type[BaseModel] | None = None,
        max_iterations: int = 10,
    ) -> str | Any:
        """High-level LLM call method.

@@ -1716,16 +1722,250 @@ class LLM(BaseLLM):
            from_task: Optional Task that invoked the LLM
            from_agent: Optional Agent that invoked the LLM
            response_model: Optional Model that contains a pydantic response model.
            max_iterations: Maximum number of tool-loop iterations (default 10).
                Only used when both ``tools`` and ``available_functions``
                are provided.

        Returns:
-            Union[str, Any]: Either a text response from the LLM (str) or
-                the result of a tool function call (Any).
+            Union[str, LLMResult, Any]:
+                - ``str`` when called without tools (backwards compatible).
+                - ``LLMResult`` when called with tools and available_functions.
+                - ``Any`` when a tool call returns a non-string result in legacy mode.

        Raises:
            TypeError: If messages format is invalid
            ValueError: If response format is not supported
            LLMContextLengthExceededError: If input exceeds model's context limit
        """
        # When tools AND available_functions are both provided, use the tool loop
        # which returns an LLMResult with structured metadata.
        if tools and available_functions:
            return self._call_with_tool_loop(
                messages=messages,
                tools=tools,
                callbacks=callbacks,
                available_functions=available_functions,
                from_task=from_task,
                from_agent=from_agent,
                response_model=response_model,
                max_iterations=max_iterations,
            )

        # Original single-shot path — returns str (backwards compatible).
        return self._call_single(
            messages=messages,
            tools=tools,
            callbacks=callbacks,
            available_functions=available_functions,
            from_task=from_task,
            from_agent=from_agent,
            response_model=response_model,
        )

    def _call_with_tool_loop(
        self,
        messages: str | list[LLMMessage],
        tools: list[dict[str, BaseTool]],
        callbacks: list[Any] | None,
        available_functions: dict[str, Any],
        from_task: Task | None,
        from_agent: BaseAgent | None,
        response_model: type[BaseModel] | None,
        max_iterations: int,
    ) -> LLMResult:
        """Run an LLM tool loop, returning a structured LLMResult.

        Keeps calling the model until it stops requesting tool calls or
        ``max_iterations`` is reached.
        """
        from crewai.types.usage_metrics import UsageMetrics

        if isinstance(messages, str):
            messages = [{"role": "user", "content": messages}]
        # Work on a mutable copy so we can append assistant/tool messages.
        conversation: list[dict[str, Any]] = list(messages)  # type: ignore[arg-type]

        result = LLMResult(
            text="",
            tool_calls=[],
            usage=UsageMetrics(),
            cost_usd=0.0,
            iterations=0,
        )

        for iteration in range(max_iterations):
            # Call the model WITHOUT available_functions so the internal
            # handler returns tool_calls as-is instead of executing them.
            raw = self._call_single(
                messages=conversation,  # type: ignore[arg-type]
                tools=tools,
                callbacks=callbacks,
                available_functions=None,  # Don't let inner layer execute
                from_task=from_task,
                from_agent=from_agent,
                response_model=response_model,
            )

            result.iterations = iteration + 1

            # Accumulate usage from this iteration
            self._accumulate_usage(result)

            # If we got a string back, the model is done (no tool calls).
            if isinstance(raw, str):
                result.text = raw
                break

            # If we got tool_calls (list), execute them and feed results back.
            if isinstance(raw, list):
                # Append assistant message with tool calls to conversation
                assistant_msg: dict[str, Any] = {
                    "role": "assistant",
                    "content": None,
                    "tool_calls": [
                        {
                            "id": getattr(tc, "id", f"call_{i}"),
                            "type": "function",
                            "function": {
                                "name": getattr(tc.function, "name", "")
                                if hasattr(tc, "function")
                                else "",
                                "arguments": getattr(tc.function, "arguments", "{}")
                                if hasattr(tc, "function")
                                else "{}",
                            },
                        }
                        for i, tc in enumerate(raw)
                    ],
                }
                conversation.append(assistant_msg)

                # Execute each tool call
                for tc in raw:
                    func_name = sanitize_tool_name(
                        getattr(tc.function, "name", "")
                        if hasattr(tc, "function")
                        else ""
                    )
                    func_args_str = (
                        getattr(tc.function, "arguments", "{}")
                        if hasattr(tc, "function")
                        else "{}"
                    )
                    tool_call_id = getattr(tc, "id", f"call_{func_name}")

                    try:
                        func_args = json.loads(func_args_str)
                    except (json.JSONDecodeError, TypeError):
                        func_args = {}

                    record = ToolCallRecord(
                        name=func_name,
                        input=func_args,
                    )

                    if func_name in available_functions:
                        t0 = datetime.now()
                        started_at = t0
                        crewai_event_bus.emit(
                            self,
                            event=ToolUsageStartedEvent(
                                tool_name=func_name,
                                tool_args=func_args,
                                from_agent=from_agent,
                                from_task=from_task,
                            ),
                        )
                        try:
                            fn = available_functions[func_name]
                            tool_output = fn(**func_args)
                            t1 = datetime.now()
                            record.output = (
                                str(tool_output) if tool_output is not None else ""
                            )
                            record.duration_ms = (t1 - t0).total_seconds() * 1000
                            crewai_event_bus.emit(
                                self,
                                event=ToolUsageFinishedEvent(
                                    output=tool_output,
                                    tool_name=func_name,
                                    tool_args=func_args,
                                    started_at=started_at,
                                    finished_at=t1,
                                    from_task=from_task,
                                    from_agent=from_agent,
                                ),
                            )
                        except Exception as e:
                            t1 = datetime.now()
                            record.output = f"Error: {e}"
                            record.duration_ms = (t1 - t0).total_seconds() * 1000
                            record.is_error = True
                            crewai_event_bus.emit(
                                self,
                                event=ToolUsageErrorEvent(
                                    tool_name=func_name,
                                    tool_args=func_args,
                                    error=str(e),
                                    from_task=from_task,
                                    from_agent=from_agent,
                                ),
                            )
                    else:
                        record.output = f"Error: unknown function '{func_name}'"
                        record.is_error = True

                    result.tool_calls.append(record)

                    # Append tool result message for the model
                    conversation.append(
                        {
                            "role": "tool",
                            "tool_call_id": tool_call_id,
                            "content": record.output,
                        }
                    )
            else:
                # Unexpected return type — treat as final text
                result.text = str(raw)
                break
        else:
            # max_iterations exhausted — use last text or empty
            if not result.text and result.tool_calls:
                result.text = (
                    f"Max iterations ({max_iterations}) reached. "
                    f"Last tool: {result.tool_calls[-1].name}"
                )

        # Estimate cost
        result.cost_usd = _estimate_cost_usd(
            self.model,
            result.usage.prompt_tokens,
            result.usage.completion_tokens,
        )

        return result

    def _accumulate_usage(self, result: LLMResult) -> None:
        """Pull token counts from the internal tracker into the LLMResult."""
        tracker = getattr(self, "_token_usage", None)
        if tracker and isinstance(tracker, dict):
            result.usage.prompt_tokens = tracker.get("prompt_tokens", 0)
            result.usage.completion_tokens = tracker.get("completion_tokens", 0)
            result.usage.total_tokens = tracker.get("total_tokens", 0)
            result.usage.successful_requests += 1

    def _call_single(
        self,
        messages: str | list[LLMMessage],
        tools: list[dict[str, BaseTool]] | None = None,
        callbacks: list[Any] | None = None,
        available_functions: dict[str, Any] | None = None,
        from_task: Task | None = None,
        from_agent: BaseAgent | None = None,
        response_model: type[BaseModel] | None = None,
    ) -> str | Any:
        """Single-shot LLM call (original call() logic)."""
        with llm_call_context() as call_id:
            crewai_event_bus.emit(
                self,
@@ -1819,7 +2059,7 @@ class LLM(BaseLLM):

            logging.info("Retrying LLM call without the unsupported 'stop'")

-            return self.call(
+            return self._call_single(
                messages,
                tools=tools,
                callbacks=callbacks,
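The control flow of `_call_with_tool_loop` boils down to: call the model, execute any requested tool calls, feed the results back as `role: "tool"` messages, and stop on a plain-text answer or when the iteration budget runs out. A minimal self-contained sketch with a fake model (all names here are illustrative, not the CrewAI API):

```python
import json

def run_tool_loop(model, messages, available_functions, max_iterations=10):
    # Keep calling the model until it stops requesting tools or the
    # iteration budget is exhausted (same control flow as the hunk above).
    conversation = list(messages)
    tool_calls = []
    for _ in range(max_iterations):
        raw = model(conversation)
        if isinstance(raw, str):  # final text answer: we're done
            return raw, tool_calls
        for name, args_json in raw:  # simplified (name, json-args) call shape
            args = json.loads(args_json)
            fn = available_functions.get(name)
            output = str(fn(**args)) if fn else f"Error: unknown function '{name}'"
            tool_calls.append((name, args, output))
            conversation.append({"role": "tool", "content": output})
    return "", tool_calls  # budget exhausted

# Fake model: requests one tool call, then answers once it sees a tool result.
def fake_model(conversation):
    if not any(m.get("role") == "tool" for m in conversation):
        return [("add", '{"a": 2, "b": 3}')]
    return "The sum is 5."

text, calls = run_tool_loop(
    fake_model,
    [{"role": "user", "content": "What is 2 + 3?"}],
    {"add": lambda a, b: a + b},
)
```

Appending each tool result to the conversation before the next model call is the essential step: it is how the model sees the outcome of its own requests and can decide to answer or request more tools.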
lib/crewai/src/crewai/llm_result.py (new file, 112 lines)
@@ -0,0 +1,112 @@
"""Structured result types for LLM.call() with tool loop support.

When LLM.call() is invoked with tools and available_functions, it returns
an LLMResult instead of a plain string. This preserves backwards compatibility:
calls without tools still return str.
"""

from __future__ import annotations

from typing import Any

from pydantic import BaseModel, Field

from crewai.types.usage_metrics import UsageMetrics


class ToolCallRecord(BaseModel):
    """Record of a single tool call executed during an LLM tool loop.

    Attributes:
        name: The tool function name.
        input: The arguments passed to the tool.
        output: The string result returned by the tool.
        duration_ms: Wall-clock time for the tool execution in milliseconds.
        is_error: Whether the tool call raised an exception.
    """

    name: str
    input: dict[str, Any] = Field(default_factory=dict)
    output: str = ""
    duration_ms: float = 0.0
    is_error: bool = False


class LLMResult(BaseModel):
    """Structured result from LLM.call() when tools are used.

    Attributes:
        text: The final text response from the model.
        tool_calls: Ordered list of every tool call made during the loop.
        usage: Aggregated token usage across all iterations.
        cost_usd: Estimated cost in USD based on model pricing.
        iterations: Number of LLM round-trips in the tool loop.
    """

    text: str = ""
    tool_calls: list[ToolCallRecord] = Field(default_factory=list)
    usage: UsageMetrics = Field(default_factory=UsageMetrics)
    cost_usd: float = 0.0
    iterations: int = 0


# ---------------------------------------------------------------------------
# Simple cost estimation
# ---------------------------------------------------------------------------
# USD per 1M tokens. Covers major models. Inspired by Iris's pricing table.
PRICING: dict[str, dict[str, float]] = {
    # Anthropic
    "claude-opus-4-7": {"in": 5.00, "out": 25.00},
    "claude-sonnet-4-6": {"in": 3.00, "out": 15.00},
    "claude-sonnet-4-5": {"in": 3.00, "out": 15.00},
    "claude-haiku-4-5": {"in": 1.00, "out": 5.00},
    # OpenAI
    "gpt-4o": {"in": 2.50, "out": 10.00},
    "gpt-4o-mini": {"in": 0.15, "out": 0.60},
    "gpt-4.1": {"in": 2.00, "out": 8.00},
    "gpt-4.1-mini": {"in": 0.40, "out": 1.60},
    "gpt-4.1-nano": {"in": 0.10, "out": 0.40},
    "o1": {"in": 15.00, "out": 60.00},
    "o1-mini": {"in": 3.00, "out": 12.00},
    "o3": {"in": 2.00, "out": 8.00},
    "o3-mini": {"in": 1.10, "out": 4.40},
    "gpt-5": {"in": 1.25, "out": 10.00},
    # Google Gemini
    "gemini-2.5-pro": {"in": 1.25, "out": 10.00},
    "gemini-2.5-flash": {"in": 0.30, "out": 2.50},
    "gemini-2.0-flash": {"in": 0.10, "out": 0.40},
}


def _lookup_pricing(model: str) -> dict[str, float] | None:
    """Resolve a model name to its pricing row.

    Handles provider prefixes (``anthropic/claude-sonnet-4-6``) and partial
    matches (``claude-sonnet-4-6-20250514`` → ``claude-sonnet-4-6``).
    """
    if not model:
        return None
    # Exact match
    if model in PRICING:
        return PRICING[model]
    # Strip provider prefix
    if "/" in model:
        suffix = model.rsplit("/", 1)[1]
        if suffix in PRICING:
            return PRICING[suffix]
        model = suffix
    # Prefix / partial match
    for key in PRICING:
        if model.startswith(key) or key.startswith(model):
            return PRICING[key]
    return None


def estimate_cost_usd(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the cost in USD for a given model and token counts."""
    pricing = _lookup_pricing(model)
    if not pricing:
        return 0.0
    return (
        prompt_tokens * pricing["in"] + completion_tokens * pricing["out"]
    ) / 1_000_000
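With the pricing table and lookup rules above, a cost estimate is just tokens times per-million rates. This condensed re-implementation (two pricing rows only, same lookup logic) shows the arithmetic:

```python
PRICING = {  # USD per 1M tokens, two rows copied from the table above
    "gpt-4o-mini": {"in": 0.15, "out": 0.60},
    "claude-sonnet-4-6": {"in": 3.00, "out": 15.00},
}

def estimate_cost_usd(model, prompt_tokens, completion_tokens):
    # Strip a provider prefix, then try exact and prefix matches.
    if "/" in model:
        model = model.rsplit("/", 1)[1]
    pricing = PRICING.get(model) or next(
        (row for key, row in PRICING.items() if model.startswith(key)), None
    )
    if not pricing:
        return 0.0
    return (
        prompt_tokens * pricing["in"] + completion_tokens * pricing["out"]
    ) / 1_000_000

# 1,000 prompt tokens at $0.15/M plus 500 completion tokens at $0.60/M:
# (1000 * 0.15 + 500 * 0.60) / 1_000_000 = 0.00045 USD
cost = estimate_cost_usd("openai/gpt-4o-mini", 1_000, 500)
```

Unknown models deliberately estimate to 0.0 rather than raising, so cost tracking never breaks a call.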
@@ -183,11 +183,6 @@ class AzureCompletion(BaseLLM):
            AzureCompletion._is_azure_openai_endpoint(self.endpoint)
        )

-        if not self.api_key:
-            raise ValueError(
-                "Azure API key is required. Set AZURE_API_KEY environment "
-                "variable or pass api_key parameter."
-            )
        if not self.endpoint:
            raise ValueError(
                "Azure endpoint is required. Set AZURE_ENDPOINT environment "
@@ -195,12 +190,39 @@ class AzureCompletion(BaseLLM):
        )
        client_kwargs: dict[str, Any] = {
            "endpoint": self.endpoint,
-            "credential": AzureKeyCredential(self.api_key),
+            "credential": self._resolve_credential(),
        }
        if self.api_version:
            client_kwargs["api_version"] = self.api_version
        return client_kwargs

    def _resolve_credential(self) -> Any:
        """Return an Azure credential, preferring the API key when set.

        Without an API key, fall back to ``DefaultAzureCredential`` from
        ``azure-identity``. That chain auto-detects the standard keyless
        paths the customer's environment may provide — OIDC Workload
        Identity Federation (``AZURE_FEDERATED_TOKEN_FILE`` +
        ``AZURE_TENANT_ID`` + ``AZURE_CLIENT_ID``), Managed Identity on
        AKS/Azure VMs, environment-configured service principals, and
        developer tools like the Azure CLI. Installing ``azure-identity``
        is what enables these paths; without it we raise the existing
        API-key error.
        """
        if self.api_key:
            return AzureKeyCredential(self.api_key)

        try:
            from azure.identity import DefaultAzureCredential
        except ImportError:
            raise ValueError(
                "Azure API key is required when azure-identity is not "
                "installed. Set AZURE_API_KEY, or install azure-identity "
                'for keyless auth: uv add "crewai[azure-ai-inference]"'
            ) from None

        return DefaultAzureCredential()

    def _get_sync_client(self) -> Any:
        if self._client is None:
            self._client = self._build_sync_client()
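The credential-resolution order here (explicit key first, ambient chain only when its optional dependency exists, loud failure otherwise) is a general pattern. The names in this sketch are hypothetical stand-ins, not the Azure SDK:

```python
def resolve_credential(api_key, key_credential, ambient_credential=None):
    # Prefer the explicit key; otherwise fall back to an ambient credential
    # chain; fail loudly when no fallback is available (mirrors the
    # _resolve_credential method above).
    if api_key:
        return key_credential(api_key)
    if ambient_credential is None:
        raise ValueError("API key required when no fallback credential is installed.")
    return ambient_credential()

cred = resolve_credential("secret", lambda key: ("key", key))
```

Raising only when both the key and the fallback are missing preserves the old error message for users who never installed the optional dependency.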
@@ -2,9 +2,17 @@

This module provides native MCP client functionality, allowing CrewAI agents
to connect to any MCP-compliant server using various transport types.

Heavy imports (MCPClient, MCPToolResolver, BaseTransport, TransportType) are
lazy-loaded on first access to avoid pulling in the ``mcp`` SDK (~400ms)
when only lightweight config/filter types are needed.
"""

-from crewai.mcp.client import MCPClient
+from __future__ import annotations
+
+import importlib
+from typing import TYPE_CHECKING, Any
+
from crewai.mcp.config import (
    MCPServerConfig,
    MCPServerHTTP,
@@ -18,8 +26,29 @@ from crewai.mcp.filters import (
    create_dynamic_tool_filter,
    create_static_tool_filter,
)
-from crewai.mcp.tool_resolver import MCPToolResolver
-from crewai.mcp.transports.base import BaseTransport, TransportType
+
+
+if TYPE_CHECKING:
+    from crewai.mcp.client import MCPClient
+    from crewai.mcp.tool_resolver import MCPToolResolver
+    from crewai.mcp.transports.base import BaseTransport, TransportType
+
+_LAZY: dict[str, tuple[str, str]] = {
+    "MCPClient": ("crewai.mcp.client", "MCPClient"),
+    "MCPToolResolver": ("crewai.mcp.tool_resolver", "MCPToolResolver"),
+    "BaseTransport": ("crewai.mcp.transports.base", "BaseTransport"),
+    "TransportType": ("crewai.mcp.transports.base", "TransportType"),
+}
+
+
+def __getattr__(name: str) -> Any:
+    if name in _LAZY:
+        mod_path, attr = _LAZY[name]
+        mod = importlib.import_module(mod_path)
+        val = getattr(mod, attr)
+        globals()[name] = val  # cache for subsequent access
+        return val
+    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")


__all__ = [
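The module-level `__getattr__` hook above is PEP 562 lazy loading: the hook only runs on a lookup miss, and writing the resolved value back into the module namespace means it never runs twice for the same name. A self-contained demonstration on a throwaway module, lazy-loading `json.loads` in place of the `mcp` SDK:

```python
import importlib
import types

# Throwaway module that lazy-loads json.loads on first attribute access.
lazy = types.ModuleType("lazy_demo")
_LAZY = {"loads": ("json", "loads")}

def _module_getattr(name):
    if name in _LAZY:
        mod_path, attr = _LAZY[name]
        val = getattr(importlib.import_module(mod_path), attr)
        lazy.__dict__[name] = val  # cache so __getattr__ is skipped next time
        return val
    raise AttributeError(f"module 'lazy_demo' has no attribute {name!r}")

lazy.__getattr__ = _module_getattr  # PEP 562 hook, consulted on lookup miss

result = lazy.loads('{"ok": true}')
```

The `TYPE_CHECKING` block in the real module complements this: static type checkers see ordinary imports, while at runtime nothing heavy is imported until first use.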
@@ -10,12 +10,22 @@ from __future__ import annotations
|
||||
import json
|
||||
import logging
|
||||
import threading
|
||||
import time
|
||||
from typing import Any
|
||||
|
||||
from crewai.agents.agent_builder.base_agent import BaseAgent
|
||||
from crewai.crew import Crew
|
||||
from crewai.events.base_events import BaseEvent
|
||||
from crewai.events.event_bus import CrewAIEventsBus, crewai_event_bus
|
||||
from crewai.events.event_bus import CrewAIEventsBus, crewai_event_bus, is_replaying
|
||||
from crewai.events.types.checkpoint_events import (
|
||||
CheckpointBaseEvent,
|
||||
CheckpointCompletedEvent,
|
||||
CheckpointFailedEvent,
|
||||
CheckpointForkBaseEvent,
|
||||
CheckpointPrunedEvent,
|
||||
CheckpointRestoreBaseEvent,
|
||||
CheckpointStartedEvent,
|
||||
)
|
||||
from crewai.flow.flow import Flow
|
||||
from crewai.state.checkpoint_config import CheckpointConfig
|
||||
from crewai.state.runtime import RuntimeState, _prepare_entities
|
||||
@@ -53,12 +63,26 @@ def _resolve(value: CheckpointConfig | bool | None) -> CheckpointConfig | None |
|
||||
if isinstance(value, CheckpointConfig):
|
||||
_ensure_handlers_registered()
|
||||
return value
|
||||
if value is True:
|
||||
if value:
|
||||
_ensure_handlers_registered()
|
||||
return CheckpointConfig()
|
||||
if value is False:
|
||||
return _SENTINEL
|
||||
return None # None = inherit
|
||||
return None
|
||||
|
||||
|
||||
def _resolve_from_agent(agent: BaseAgent) -> CheckpointConfig | None:
|
||||
"""Resolve a checkpoint config starting from an agent, walking to its crew."""
|
||||
result = _resolve(agent.checkpoint)
|
||||
if isinstance(result, CheckpointConfig):
|
||||
return result
|
||||
if result is _SENTINEL:
|
||||
return None
|
||||
crew = agent.crew
|
||||
if isinstance(crew, Crew):
|
||||
crew_result = _resolve(crew.checkpoint)
|
||||
return crew_result if isinstance(crew_result, CheckpointConfig) else None
|
||||
return None
|
||||
|
||||
|
||||
def _find_checkpoint(source: Any) -> CheckpointConfig | None:
|
||||
@@ -77,28 +101,11 @@ def _find_checkpoint(source: Any) -> CheckpointConfig | None:
|
||||
result = _resolve(source.checkpoint)
|
||||
return result if isinstance(result, CheckpointConfig) else None
|
||||
if isinstance(source, BaseAgent):
|
||||
result = _resolve(source.checkpoint)
|
||||
if isinstance(result, CheckpointConfig):
|
||||
return result
|
||||
if result is _SENTINEL:
|
||||
return None
|
||||
crew = source.crew
|
||||
if isinstance(crew, Crew):
|
||||
result = _resolve(crew.checkpoint)
|
||||
return result if isinstance(result, CheckpointConfig) else None
|
||||
return None
|
||||
return _resolve_from_agent(source)
|
||||
if isinstance(source, Task):
|
||||
agent = source.agent
|
||||
if isinstance(agent, BaseAgent):
|
||||
result = _resolve(agent.checkpoint)
|
||||
if isinstance(result, CheckpointConfig):
|
||||
return result
|
||||
if result is _SENTINEL:
|
||||
return None
|
||||
crew = agent.crew
|
||||
if isinstance(crew, Crew):
|
||||
result = _resolve(crew.checkpoint)
|
||||
return result if isinstance(result, CheckpointConfig) else None
|
||||
return _resolve_from_agent(agent)
|
||||
return None
|
||||
return None
|
||||
|
||||
@@ -107,27 +114,106 @@ def _do_checkpoint(
|
||||
state: RuntimeState, cfg: CheckpointConfig, event: BaseEvent | None = None
|
||||
) -> None:
|
||||
"""Write a checkpoint and prune old ones if configured."""
|
||||
_prepare_entities(state.root)
|
||||
payload = state.model_dump(mode="json")
|
||||
if event is not None:
|
||||
payload["trigger"] = event.type
|
||||
data = json.dumps(payload)
-    location = cfg.provider.checkpoint(
-        data,
-        cfg.location,
-        parent_id=state._parent_id,
-        branch=state._branch,
-    )
-    state._chain_lineage(cfg.provider, location)
-    checkpoint_id: str = cfg.provider.extract_id(location)
+    provider_name: str = type(cfg.provider).__name__
+    trigger: str | None = event.type if event is not None else None
+    context: dict[str, Any] = {
+        "task_id": event.task_id if event is not None else None,
+        "task_name": event.task_name if event is not None else None,
+        "agent_id": event.agent_id if event is not None else None,
+        "agent_role": event.agent_role if event is not None else None,
+    }
+
+    parent_id_snapshot: str | None = state._parent_id
+    branch_snapshot: str = state._branch
+
+    crewai_event_bus.emit(
+        cfg,
+        CheckpointStartedEvent(
+            location=cfg.location,
+            provider=provider_name,
+            trigger=trigger,
+            branch=branch_snapshot,
+            parent_id=parent_id_snapshot,
+            **context,
+        ),
+    )
+
+    start: float = time.perf_counter()
+    try:
+        _prepare_entities(state.root)
+        payload = state.model_dump(mode="json")
+        if event is not None:
+            payload["trigger"] = event.type
+        data = json.dumps(payload)
+        location = cfg.provider.checkpoint(
+            data,
+            cfg.location,
+            parent_id=parent_id_snapshot,
+            branch=branch_snapshot,
+        )
+        state._chain_lineage(cfg.provider, location)
+        checkpoint_id: str = cfg.provider.extract_id(location)
+    except Exception as exc:
+        crewai_event_bus.emit(
+            cfg,
+            CheckpointFailedEvent(
+                location=cfg.location,
+                provider=provider_name,
+                trigger=trigger,
+                branch=branch_snapshot,
+                parent_id=parent_id_snapshot,
+                error=str(exc),
+                **context,
+            ),
+        )
+        raise
+
+    duration_ms: float = (time.perf_counter() - start) * 1000.0
    msg: str = (
        f"Checkpoint saved. Resume with: crewai checkpoint resume {checkpoint_id}"
    )
    logger.info(msg)

+    crewai_event_bus.emit(
+        cfg,
+        CheckpointCompletedEvent(
+            location=location,
+            provider=provider_name,
+            trigger=trigger,
+            branch=branch_snapshot,
+            parent_id=parent_id_snapshot,
+            checkpoint_id=checkpoint_id,
+            duration_ms=duration_ms,
+            **context,
+        ),
+    )
+
    if cfg.max_checkpoints is not None:
-        cfg.provider.prune(cfg.location, cfg.max_checkpoints, branch=state._branch)
+        try:
+            removed_count: int = cfg.provider.prune(
+                cfg.location, cfg.max_checkpoints, branch=branch_snapshot
+            )
+        except Exception:
+            logger.warning(
+                "Checkpoint prune failed for %s (branch=%s)",
+                cfg.location,
+                branch_snapshot,
+                exc_info=True,
+            )
+            return
+        crewai_event_bus.emit(
+            cfg,
+            CheckpointPrunedEvent(
+                location=cfg.location,
+                provider=provider_name,
+                trigger=trigger,
+                branch=branch_snapshot,
+                parent_id=parent_id_snapshot,
+                removed_count=removed_count,
+                max_checkpoints=cfg.max_checkpoints,
+                **context,
+            ),
+        )
def _should_checkpoint(source: Any, event: BaseEvent) -> CheckpointConfig | None:

@@ -142,6 +228,13 @@ def _should_checkpoint(source: Any, event: BaseEvent) -> CheckpointConfig | None

def _on_any_event(source: Any, event: BaseEvent, state: Any) -> None:
    """Sync handler registered on every event class."""
+    if is_replaying():
+        return
    if isinstance(
        event,
        (CheckpointBaseEvent, CheckpointForkBaseEvent, CheckpointRestoreBaseEvent),
    ):
        return
    cfg = _should_checkpoint(source, event)
    if cfg is None:
        return
@@ -161,7 +254,8 @@ def _register_all_handlers(event_bus: CrewAIEventsBus) -> None:
    seen: set[type] = set()

    def _collect(cls: type[BaseEvent]) -> None:
-        for sub in cls.__subclasses__():
+        subclasses: list[type[BaseEvent]] = cls.__subclasses__()
+        for sub in subclasses:
            if sub not in seen:
                seen.add(sub)
                type_field = sub.model_fields.get("type")

@@ -39,7 +39,8 @@ def _build_event_type_map() -> None:
    """Populate _event_type_map from all BaseEvent subclasses."""

    def _collect(cls: type[BaseEvent]) -> None:
-        for sub in cls.__subclasses__():
+        subclasses: list[type[BaseEvent]] = cls.__subclasses__()
+        for sub in subclasses:
            type_field = sub.model_fields.get("type")
            if type_field and type_field.default:
                _event_type_map[type_field.default] = sub
@@ -196,6 +197,21 @@ class EventRecord(BaseModel):
            node for node in self.nodes.values() if not node.neighbors("parent")
        ]

+    def all_nodes(self) -> list[EventNode]:
+        """Return a snapshot of every node under the read lock.
+
+        Returns:
+            A list copy of the current nodes, safe to iterate without holding
+            the lock.
+        """
+        with self._lock.r_locked():
+            return list(self.nodes.values())
+
+    def clear(self) -> None:
+        """Remove all nodes from the record under the write lock."""
+        with self._lock.w_locked():
+            self.nodes.clear()
+
    def __len__(self) -> int:
        with self._lock.r_locked():
            return len(self.nodes)
@@ -61,13 +61,16 @@ class BaseProvider(BaseModel, ABC):
        ...

    @abstractmethod
-    def prune(self, location: str, max_keep: int, *, branch: str = "main") -> None:
+    def prune(self, location: str, max_keep: int, *, branch: str = "main") -> int:
        """Remove old checkpoints, keeping at most *max_keep* per branch.

        Args:
            location: The storage destination passed to ``checkpoint``.
            max_keep: Maximum number of checkpoints to retain.
            branch: Only prune checkpoints on this branch.
+
+        Returns:
+            The number of checkpoints removed.
        """
        ...
@@ -95,17 +95,20 @@ class JsonProvider(BaseProvider):
            await f.write(data)
        return str(file_path)

-    def prune(self, location: str, max_keep: int, *, branch: str = "main") -> None:
+    def prune(self, location: str, max_keep: int, *, branch: str = "main") -> int:
        """Remove oldest checkpoint files beyond *max_keep* on a branch."""
        _safe_branch(location, branch)
        branch_dir = os.path.join(location, branch)
        pattern = os.path.join(branch_dir, "*.json")
        files = sorted(glob.glob(pattern), key=os.path.getmtime)
+        removed = 0
        for path in files if max_keep == 0 else files[:-max_keep]:
            try:
                os.remove(path)
+                removed += 1
            except OSError:  # noqa: PERF203
                logger.debug("Failed to remove %s", path, exc_info=True)
+        return removed

    def extract_id(self, location: str) -> str:
        """Extract the checkpoint ID from a file path.
@@ -111,11 +111,13 @@ class SqliteProvider(BaseProvider):
            await db.commit()
        return f"{location}#{checkpoint_id}"

-    def prune(self, location: str, max_keep: int, *, branch: str = "main") -> None:
+    def prune(self, location: str, max_keep: int, *, branch: str = "main") -> int:
        """Remove oldest checkpoint rows beyond *max_keep* on a branch."""
        with sqlite3.connect(location) as conn:
-            conn.execute(_PRUNE, (branch, branch, max_keep))
+            cursor = conn.execute(_PRUNE, (branch, branch, max_keep))
+            removed: int = cursor.rowcount
            conn.commit()
+        return max(removed, 0)

    def extract_id(self, location: str) -> str:
        """Extract the checkpoint ID from a ``db_path#id`` string."""
@@ -10,6 +10,7 @@ via ``RuntimeState.model_rebuild()``.
from __future__ import annotations

import logging
+import time
from typing import TYPE_CHECKING, Any
import uuid

@@ -23,6 +24,17 @@ from pydantic import (
)

from crewai.context import capture_execution_context
+from crewai.events.event_bus import crewai_event_bus
+from crewai.events.types.checkpoint_events import (
+    CheckpointCompletedEvent,
+    CheckpointFailedEvent,
+    CheckpointForkCompletedEvent,
+    CheckpointForkStartedEvent,
+    CheckpointRestoreCompletedEvent,
+    CheckpointRestoreFailedEvent,
+    CheckpointRestoreStartedEvent,
+    CheckpointStartedEvent,
+)
from crewai.state.checkpoint_config import CheckpointConfig
from crewai.state.event_record import EventRecord
from crewai.state.provider.core import BaseProvider
@@ -89,7 +101,7 @@ def _migrate(data: dict[str, Any]) -> dict[str, Any]:
    """
    raw = data.get("crewai_version")
    current = Version(get_crewai_version())
-    stored = Version(raw) if raw else Version("0.0.0")
+    stored = Version(raw) if isinstance(raw, str) and raw else Version("0.0.0")

    if raw is None:
        logger.warning("Checkpoint has no crewai_version — treating as 0.0.0")
@@ -159,6 +171,63 @@ class RuntimeState(RootModel):  # type: ignore[type-arg]
        self._checkpoint_id = provider.extract_id(location)
        self._parent_id = self._checkpoint_id

+    def _begin_checkpoint(self, location: str) -> tuple[str, str | None, str, float]:
+        """Emit the start event and return the invariant context for a checkpoint."""
+        provider_name: str = type(self._provider).__name__
+        parent_id_snapshot: str | None = self._parent_id
+        branch_snapshot: str = self._branch
+        crewai_event_bus.emit(
+            self,
+            CheckpointStartedEvent(
+                location=location,
+                provider=provider_name,
+                branch=branch_snapshot,
+                parent_id=parent_id_snapshot,
+            ),
+        )
+        return provider_name, parent_id_snapshot, branch_snapshot, time.perf_counter()
+
+    def _emit_checkpoint_failed(
+        self,
+        location: str,
+        provider_name: str,
+        branch_snapshot: str,
+        parent_id_snapshot: str | None,
+        exc: Exception,
+    ) -> None:
+        """Emit the failure event for a checkpoint write."""
+        crewai_event_bus.emit(
+            self,
+            CheckpointFailedEvent(
+                location=location,
+                provider=provider_name,
+                branch=branch_snapshot,
+                parent_id=parent_id_snapshot,
+                error=str(exc),
+            ),
+        )
+
+    def _emit_checkpoint_completed(
+        self,
+        result: str,
+        provider_name: str,
+        branch_snapshot: str,
+        parent_id_snapshot: str | None,
+        start: float,
+    ) -> None:
+        """Emit the completion event for a successful checkpoint write."""
+        crewai_event_bus.emit(
+            self,
+            CheckpointCompletedEvent(
+                location=result,
+                provider=provider_name,
+                branch=branch_snapshot,
+                parent_id=parent_id_snapshot,
+                checkpoint_id=self._provider.extract_id(result),
+                duration_ms=(time.perf_counter() - start) * 1000.0,
+            ),
+        )
+
    def checkpoint(self, location: str) -> str:
        """Write a checkpoint.
@@ -169,14 +238,27 @@ class RuntimeState(RootModel):  # type: ignore[type-arg]
        Returns:
            A location identifier for the saved checkpoint.
        """
-        _prepare_entities(self.root)
-        result = self._provider.checkpoint(
-            self.model_dump_json(),
-            location,
-            parent_id=self._parent_id,
-            branch=self._branch,
-        )
-        self._chain_lineage(self._provider, result)
+        provider_name, parent_id_snapshot, branch_snapshot, start = (
+            self._begin_checkpoint(location)
+        )
+        try:
+            _prepare_entities(self.root)
+            result = self._provider.checkpoint(
+                self.model_dump_json(),
+                location,
+                parent_id=parent_id_snapshot,
+                branch=branch_snapshot,
+            )
+            self._chain_lineage(self._provider, result)
+        except Exception as exc:
+            self._emit_checkpoint_failed(
+                location, provider_name, branch_snapshot, parent_id_snapshot, exc
+            )
+            raise
+
+        self._emit_checkpoint_completed(
+            result, provider_name, branch_snapshot, parent_id_snapshot, start
+        )
        return result

    async def acheckpoint(self, location: str) -> str:
@@ -189,14 +271,27 @@ class RuntimeState(RootModel):  # type: ignore[type-arg]
        Returns:
            A location identifier for the saved checkpoint.
        """
-        _prepare_entities(self.root)
-        result = await self._provider.acheckpoint(
-            self.model_dump_json(),
-            location,
-            parent_id=self._parent_id,
-            branch=self._branch,
-        )
-        self._chain_lineage(self._provider, result)
+        provider_name, parent_id_snapshot, branch_snapshot, start = (
+            self._begin_checkpoint(location)
+        )
+        try:
+            _prepare_entities(self.root)
+            result = await self._provider.acheckpoint(
+                self.model_dump_json(),
+                location,
+                parent_id=parent_id_snapshot,
+                branch=branch_snapshot,
+            )
+            self._chain_lineage(self._provider, result)
+        except Exception as exc:
+            self._emit_checkpoint_failed(
+                location, provider_name, branch_snapshot, parent_id_snapshot, exc
+            )
+            raise
+
+        self._emit_checkpoint_completed(
+            result, provider_name, branch_snapshot, parent_id_snapshot, start
+        )
        return result

    def fork(self, branch: str | None = None) -> None:
@@ -211,11 +306,32 @@ class RuntimeState(RootModel):  # type: ignore[type-arg]
        times without collisions.
        """
        if branch:
-            self._branch = branch
+            new_branch = branch
        elif self._checkpoint_id:
-            self._branch = f"fork/{self._checkpoint_id}_{uuid.uuid4().hex[:6]}"
+            new_branch = f"fork/{self._checkpoint_id}_{uuid.uuid4().hex[:6]}"
        else:
-            self._branch = f"fork/{uuid.uuid4().hex[:8]}"
+            new_branch = f"fork/{uuid.uuid4().hex[:8]}"
+
+        parent_branch: str | None = self._branch
+        parent_checkpoint_id: str | None = self._checkpoint_id
+
+        crewai_event_bus.emit(
+            self,
+            CheckpointForkStartedEvent(
+                branch=new_branch,
+                parent_branch=parent_branch,
+                parent_checkpoint_id=parent_checkpoint_id,
+            ),
+        )
+        self._branch = new_branch
+        crewai_event_bus.emit(
+            self,
+            CheckpointForkCompletedEvent(
+                branch=new_branch,
+                parent_branch=parent_branch,
+                parent_checkpoint_id=parent_checkpoint_id,
+            ),
+        )

    @classmethod
    def from_checkpoint(cls, config: CheckpointConfig, **kwargs: Any) -> RuntimeState:
@@ -233,13 +349,41 @@ class RuntimeState(RootModel):  # type: ignore[type-arg]
        if config.restore_from is None:
            raise ValueError("CheckpointConfig.restore_from must be set")
        location = str(config.restore_from)
-        provider = detect_provider(location)
-        raw = provider.from_checkpoint(location)
-        state = cls.model_validate_json(raw, **kwargs)
-        state._provider = provider
-        checkpoint_id = provider.extract_id(location)
-        state._checkpoint_id = checkpoint_id
-        state._parent_id = checkpoint_id
+
+        crewai_event_bus.emit(config, CheckpointRestoreStartedEvent(location=location))
+        start: float = time.perf_counter()
+        provider_name: str | None = None
+        try:
+            provider = detect_provider(location)
+            provider_name = type(provider).__name__
+            raw = provider.from_checkpoint(location)
+            state = cls.model_validate_json(raw, **kwargs)
+            state._provider = provider
+            checkpoint_id = provider.extract_id(location)
+            state._checkpoint_id = checkpoint_id
+            state._parent_id = checkpoint_id
+        except Exception as exc:
+            crewai_event_bus.emit(
+                config,
+                CheckpointRestoreFailedEvent(
+                    location=location,
+                    provider=provider_name,
+                    error=str(exc),
+                ),
+            )
+            raise
+
+        crewai_event_bus.emit(
+            config,
+            CheckpointRestoreCompletedEvent(
+                location=location,
+                provider=provider_name,
+                checkpoint_id=checkpoint_id,
+                branch=state._branch,
+                parent_id=state._parent_id,
+                duration_ms=(time.perf_counter() - start) * 1000.0,
+            ),
+        )
        return state

    @classmethod
@@ -260,13 +404,41 @@ class RuntimeState(RootModel):  # type: ignore[type-arg]
        if config.restore_from is None:
            raise ValueError("CheckpointConfig.restore_from must be set")
        location = str(config.restore_from)
-        provider = detect_provider(location)
-        raw = await provider.afrom_checkpoint(location)
-        state = cls.model_validate_json(raw, **kwargs)
-        state._provider = provider
-        checkpoint_id = provider.extract_id(location)
-        state._checkpoint_id = checkpoint_id
-        state._parent_id = checkpoint_id
+
+        crewai_event_bus.emit(config, CheckpointRestoreStartedEvent(location=location))
+        start: float = time.perf_counter()
+        provider_name: str | None = None
+        try:
+            provider = detect_provider(location)
+            provider_name = type(provider).__name__
+            raw = await provider.afrom_checkpoint(location)
+            state = cls.model_validate_json(raw, **kwargs)
+            state._provider = provider
+            checkpoint_id = provider.extract_id(location)
+            state._checkpoint_id = checkpoint_id
+            state._parent_id = checkpoint_id
+        except Exception as exc:
+            crewai_event_bus.emit(
+                config,
+                CheckpointRestoreFailedEvent(
+                    location=location,
+                    provider=provider_name,
+                    error=str(exc),
+                ),
+            )
+            raise
+
+        crewai_event_bus.emit(
+            config,
+            CheckpointRestoreCompletedEvent(
+                location=location,
+                provider=provider_name,
+                checkpoint_id=checkpoint_id,
+                branch=state._branch,
+                parent_id=state._parent_id,
+                duration_ms=(time.perf_counter() - start) * 1000.0,
+            ),
+        )
        return state
lib/crewai/tests/events/test_event_replay.py (new file, 165 lines)
@@ -0,0 +1,165 @@
"""Tests for event bus replay dispatch and is_replaying flag."""

from __future__ import annotations

from typing import Any
from unittest.mock import patch

from crewai.events.event_bus import _replaying, crewai_event_bus, is_replaying
from crewai.events.types.flow_events import (
    MethodExecutionFinishedEvent,
    MethodExecutionStartedEvent,
)


def _make_started(method: str, event_id: str, sequence: int) -> MethodExecutionStartedEvent:
    """Build a MethodExecutionStartedEvent with explicit ids/sequence."""
    ev = MethodExecutionStartedEvent(
        method_name=method,
        flow_name="F",
        params={},
        state={},
    )
    ev.event_id = event_id
    ev.emission_sequence = sequence
    return ev


class TestReplayPreservesFields:
    """replay() must not overwrite event_id, parent_event_id, or emission_sequence."""

    def test_preserves_ids_and_sequence(self) -> None:
        captured: list[MethodExecutionStartedEvent] = []

        with crewai_event_bus.scoped_handlers():

            @crewai_event_bus.on(MethodExecutionStartedEvent)
            def _capture(_: Any, event: MethodExecutionStartedEvent) -> None:
                captured.append(event)

            ev = _make_started("outline", "orig-id-1", 42)
            ev.parent_event_id = "parent-abc"

            future = crewai_event_bus.replay(object(), ev)
            if future is not None:
                future.result(timeout=5.0)

        assert len(captured) == 1
        assert captured[0].event_id == "orig-id-1"
        assert captured[0].parent_event_id == "parent-abc"
        assert captured[0].emission_sequence == 42


class TestIsReplayingFlag:
    """is_replaying() must be True inside handlers dispatched via replay()."""

    def test_flag_true_during_replay(self) -> None:
        seen: list[bool] = []

        with crewai_event_bus.scoped_handlers():

            @crewai_event_bus.on(MethodExecutionStartedEvent)
            def _capture(_: Any, __: MethodExecutionStartedEvent) -> None:
                seen.append(is_replaying())

            ev = _make_started("m", "id-1", 1)
            future = crewai_event_bus.replay(object(), ev)
            if future is not None:
                future.result(timeout=5.0)

        assert seen == [True]
        assert is_replaying() is False

    def test_flag_false_during_emit(self) -> None:
        seen: list[bool] = []

        with crewai_event_bus.scoped_handlers():

            @crewai_event_bus.on(MethodExecutionStartedEvent)
            def _capture(_: Any, __: MethodExecutionStartedEvent) -> None:
                seen.append(is_replaying())

            ev = _make_started("m", "id-1", 1)
            future = crewai_event_bus.emit(object(), ev)
            if future is not None:
                future.result(timeout=5.0)

        assert seen == [False]


class TestCheckpointListenerOptsOut:
    """CheckpointListener must early-return during replay."""

    def test_checkpoint_not_written_on_replay(self) -> None:
        from crewai.state.checkpoint_config import CheckpointConfig
        from crewai.state.checkpoint_listener import _on_any_event

        class FlowLike:
            entity_type = "flow"
            checkpoint = CheckpointConfig(trigger_all=True)

        ev = _make_started("m", "id-1", 1)

        with patch("crewai.state.checkpoint_listener._do_checkpoint") as do_cp:
            token = _replaying.set(True)
            try:
                _on_any_event(FlowLike(), ev, state=None)
            finally:
                _replaying.reset(token)
        assert do_cp.call_count == 0


class TestFlowResumeReplaysEvents:
    """End-to-end: a resumed flow emits MethodExecution* events for completed methods."""

    def test_resume_dispatches_completed_method_events(self, tmp_path) -> None:
        from crewai.flow.flow import Flow, listen, start
        from crewai.flow.persistence.sqlite import SQLiteFlowPersistence

        db_path = tmp_path / "flows.db"
        persistence = SQLiteFlowPersistence(str(db_path))

        class ThreeStepFlow(Flow[dict]):
            @start()
            def step_a(self) -> str:
                return "a"

            @listen(step_a)
            def step_b(self) -> str:
                return "b"

            @listen(step_b)
            def step_c(self) -> str:
                return "c"

        if crewai_event_bus.runtime_state is not None:
            crewai_event_bus.runtime_state.event_record.clear()

        flow1 = ThreeStepFlow(persistence=persistence)
        flow1.kickoff()
        flow_id = flow1.state["id"]

        captured_started: list[str] = []
        captured_finished: list[str] = []

        flow2 = ThreeStepFlow(persistence=persistence)
        flow2._completed_methods = {"step_a", "step_b"}

        with crewai_event_bus.scoped_handlers():

            @crewai_event_bus.on(MethodExecutionStartedEvent)
            def _cs(_: Any, event: MethodExecutionStartedEvent) -> None:
                captured_started.append(event.method_name)

            @crewai_event_bus.on(MethodExecutionFinishedEvent)
            def _cf(_: Any, event: MethodExecutionFinishedEvent) -> None:
                captured_finished.append(event.method_name)

            flow2.kickoff(inputs={"id": flow_id})

        assert captured_started.count("step_a") == 1
        assert captured_started.count("step_b") == 1
        assert captured_started.count("step_c") == 1
        assert captured_finished.count("step_a") == 1
        assert captured_finished.count("step_b") == 1
        assert captured_finished.count("step_c") == 1
@@ -389,17 +389,41 @@ def test_azure_raises_error_when_endpoint_missing():
        llm._get_sync_client()


-def test_azure_raises_error_when_api_key_missing():
-    """Credentials are validated lazily: construction succeeds, first
+def test_azure_raises_error_when_api_key_missing_without_azure_identity():
+    """Without an API key AND without ``azure-identity`` installed,
    client build raises the descriptive error."""
    from crewai.llms.providers.azure.completion import AzureCompletion

    with patch.dict(os.environ, {}, clear=True):
-        llm = AzureCompletion(
-            model="gpt-4", endpoint="https://test.openai.azure.com"
-        )
-        with pytest.raises(ValueError, match="Azure API key is required"):
-            llm._get_sync_client()
+        with patch.dict("sys.modules", {"azure.identity": None}):
+            llm = AzureCompletion(
+                model="gpt-4", endpoint="https://test.openai.azure.com"
+            )
+            with pytest.raises(ValueError, match="Azure API key is required"):
+                llm._get_sync_client()
+
+
+def test_azure_uses_default_credential_when_api_key_missing():
+    """With ``azure-identity`` installed, a missing API key falls back to
+    ``DefaultAzureCredential`` instead of raising. This is the path that
+    enables keyless auth (OIDC WIF on EKS/AKS, Managed Identity, Azure
+    CLI) without any crewAI-specific config."""
+    from unittest.mock import MagicMock
+
+    from crewai.llms.providers.azure.completion import AzureCompletion
+
+    sentinel = MagicMock(name="DefaultAzureCredential()")
+    with patch.dict(os.environ, {}, clear=True):
+        with patch(
+            "azure.identity.DefaultAzureCredential", return_value=sentinel
+        ) as mock_cls:
+            llm = AzureCompletion(
+                model="gpt-4",
+                endpoint="https://test-ai.services.example.com",
+            )
+            kwargs = llm._make_client_kwargs()
+    assert kwargs["credential"] is sentinel
+    mock_cls.assert_called()


@pytest.mark.asyncio
@@ -4,6 +4,8 @@ from pathlib import Path

import pytest

+from crewai import Agent
+from crewai.agent.utils import append_skill_context
from crewai.skills.loader import activate_skill, discover_skills, format_skill_context
from crewai.skills.models import INSTRUCTIONS, METADATA

@@ -76,3 +78,23 @@ class TestSkillDiscoveryAndActivation:
            all_skills.extend(discover_skills(search_path))
        names = {s.name for s in all_skills}
        assert names == {"skill-a", "skill-b"}
+
+    def test_agent_preserves_metadata_for_discovered_skills(self, tmp_path: Path) -> None:
+        _create_skill_dir(tmp_path, "travel", body="Use this skill for travel planning.")
+        discovered = discover_skills(tmp_path)
+
+        agent = Agent(
+            role="Travel Advisor",
+            goal="Provide personalized travel suggestions.",
+            backstory="An experienced travel consultant.",
+            skills=discovered,
+        )
+
+        assert agent.skills is not None
+        assert agent.skills[0].disclosure_level == METADATA
+        assert agent.skills[0].instructions is None
+
+        prompt = append_skill_context(agent, "Plan a 10-day Japan itinerary.")
+        assert "## Skill: travel" in prompt
+        assert "Skill travel" in prompt
+        assert "Use this skill for travel planning." not in prompt
@@ -11,11 +11,12 @@ from typing import Any
from unittest.mock import MagicMock, patch

import pytest
+from pydantic import BaseModel

from crewai.agent.core import Agent
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.crew import Crew
-from crewai.flow.flow import Flow, start
+from crewai.flow.flow import _INITIAL_STATE_CLASS_MARKER, Flow, start
from crewai.state.checkpoint_config import CheckpointConfig
from crewai.state.checkpoint_listener import (
    _find_checkpoint,

@@ -310,6 +311,65 @@ class TestRuntimeStateLineage:
        assert state._branch != first


class TestFlowInitialStateSerialization:
    """Regression tests for checkpoint serialization of ``Flow.initial_state``."""

    def test_class_ref_serializes_as_schema(self) -> None:
        class MyState(BaseModel):
            id: str = "x"
            foo: str = "bar"

        flow = Flow(initial_state=MyState)
        state = RuntimeState(root=[flow])
        dumped = json.loads(state.model_dump_json())
        entity = dumped["entities"][0]
        wrapped = entity["initial_state"]
        assert isinstance(wrapped, dict)
        assert _INITIAL_STATE_CLASS_MARKER in wrapped
        assert wrapped[_INITIAL_STATE_CLASS_MARKER].get("title") == "MyState"

    def test_class_ref_round_trips_to_basemodel_subclass(self) -> None:
        class MyState(BaseModel):
            id: str = "x"
            foo: str = "bar"

        flow = Flow(initial_state=MyState)
        raw = RuntimeState(root=[flow]).model_dump_json()
        restored = RuntimeState.model_validate_json(
            raw, context={"from_checkpoint": True}
        )
        rehydrated = restored.root[0].initial_state
        assert isinstance(rehydrated, type)
        assert issubclass(rehydrated, BaseModel)
        assert set(rehydrated.model_fields.keys()) == {"id", "foo"}

    def test_instance_serializes_as_values(self) -> None:
        class MyState(BaseModel):
            id: str = "x"
            foo: str = "bar"

        flow = Flow(initial_state=MyState(foo="baz"))
        state = RuntimeState(root=[flow])
        dumped = json.loads(state.model_dump_json())
        entity = dumped["entities"][0]
        assert entity["initial_state"] == {"id": "x", "foo": "baz"}

    def test_dict_passthrough(self) -> None:
        flow = Flow(initial_state={"id": "x", "foo": "bar"})
        state = RuntimeState(root=[flow])
        dumped = json.loads(state.model_dump_json())
        entity = dumped["entities"][0]
        assert entity["initial_state"] == {"id": "x", "foo": "bar"}

    def test_dict_round_trips_as_dict(self) -> None:
        flow = Flow(initial_state={"id": "x", "foo": "bar"})
        raw = RuntimeState(root=[flow]).model_dump_json()
        restored = RuntimeState.model_validate_json(
            raw, context={"from_checkpoint": True}
        )
        assert restored.root[0].initial_state == {"id": "x", "foo": "bar"}


# ---------- JsonProvider forking ----------
@@ -4519,8 +4519,8 @@ def test_sets_flow_context_when_using_crewbase_pattern_inside_flow():
    flow.kickoff()

    assert captured_crew is not None
-    assert captured_crew._flow_id == flow.flow_id  # type: ignore[attr-defined]
-    assert captured_crew._request_id == flow.flow_id  # type: ignore[attr-defined]
+    assert captured_crew._flow_id == flow.execution_id  # type: ignore[attr-defined]
+    assert captured_crew._request_id == flow.execution_id  # type: ignore[attr-defined]


def test_sets_flow_context_when_outside_flow(researcher, writer):

@@ -4554,8 +4554,8 @@ def test_sets_flow_context_when_inside_flow(researcher, writer):

    flow = MyFlow()
    result = flow.kickoff()
-    assert result._flow_id == flow.flow_id  # type: ignore[attr-defined]
-    assert result._request_id == flow.flow_id  # type: ignore[attr-defined]
+    assert result._flow_id == flow.execution_id  # type: ignore[attr-defined]
+    assert result._request_id == flow.execution_id  # type: ignore[attr-defined]


def test_reset_knowledge_with_no_crew_knowledge(researcher, writer):
lib/crewai/tests/test_flow_execution_id.py (new file, 127 lines)
@@ -0,0 +1,127 @@
"""Regression tests for ``Flow.execution_id``.

``execution_id`` is the stable tracking identifier for a single flow run.
It must stay independent of ``state.id`` so that consumers passing an
``id`` in ``inputs`` (used for persistence restore) cannot destabilize
the identity used by telemetry, tracing, and external correlation.
"""

from __future__ import annotations

from typing import Any

import pytest

from crewai.flow.flow import Flow, FlowState, start
from crewai.flow.flow_context import current_flow_id, current_flow_request_id


class _CaptureState(FlowState):
    captured_flow_id: str = ""
    captured_state_id: str = ""
    captured_current_flow_id: str = ""
    captured_execution_id: str = ""


class _IdentityCaptureFlow(Flow[_CaptureState]):
    initial_state = _CaptureState

    @start()
    def capture(self) -> None:
        self.state.captured_flow_id = self.flow_id
        self.state.captured_state_id = self.state.id
        self.state.captured_current_flow_id = current_flow_id.get() or ""
        self.state.captured_execution_id = self.execution_id


def test_execution_id_defaults_to_fresh_uuid_per_instance() -> None:
    a = _IdentityCaptureFlow()
    b = _IdentityCaptureFlow()

    assert a.execution_id
    assert b.execution_id
    assert a.execution_id != b.execution_id


def test_execution_id_survives_consumer_id_in_inputs() -> None:
    flow = _IdentityCaptureFlow()
    original_execution_id = flow.execution_id

    flow.kickoff(inputs={"id": "consumer-supplied-id"})

    assert flow.state.id == "consumer-supplied-id"
    assert flow.flow_id == "consumer-supplied-id"
    assert flow.execution_id == original_execution_id
    assert flow.execution_id != "consumer-supplied-id"


def test_two_runs_with_same_consumer_id_have_distinct_execution_ids() -> None:
    flow_a = _IdentityCaptureFlow()
    flow_b = _IdentityCaptureFlow()

    colliding_id = "shared-consumer-id"
    flow_a.kickoff(inputs={"id": colliding_id})
    flow_b.kickoff(inputs={"id": colliding_id})

    assert flow_a.state.id == colliding_id
    assert flow_b.state.id == colliding_id
    assert flow_a.execution_id != flow_b.execution_id


def test_execution_id_is_writable() -> None:
    flow = _IdentityCaptureFlow()
    flow.execution_id = "external-task-id"

    assert flow.execution_id == "external-task-id"

    flow.kickoff(inputs={"id": "consumer-supplied-id"})
    assert flow.execution_id == "external-task-id"
    assert flow.state.id == "consumer-supplied-id"


def test_current_flow_id_context_var_matches_execution_id() -> None:
    flow = _IdentityCaptureFlow()
    flow.execution_id = "external-task-id"

    flow.kickoff(inputs={"id": "consumer-supplied-id"})

    assert flow.state.captured_current_flow_id == "external-task-id"
    assert flow.state.captured_flow_id == "consumer-supplied-id"
    assert flow.state.captured_execution_id == "external-task-id"


def test_execution_id_not_included_in_serialized_state() -> None:
    flow = _IdentityCaptureFlow()
    flow.execution_id = "external-task-id"
    flow.kickoff()

    dumped = flow.state.model_dump()
    assert "execution_id" not in dumped
    assert "_execution_id" not in dumped
    assert dumped["id"] == flow.state.id


def test_dict_state_flow_also_exposes_stable_execution_id() -> None:
    class DictFlow(Flow[dict[str, Any]]):
        initial_state = dict  # type: ignore[assignment]

        @start()
        def noop(self) -> None:
            pass

    flow = DictFlow()
    original = flow.execution_id
|
||||
flow.kickoff(inputs={"id": "consumer-supplied-id"})
|
||||
|
||||
assert flow.state["id"] == "consumer-supplied-id"
|
||||
assert flow.execution_id == original
|
||||
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def _reset_flow_context_vars():
|
||||
yield
|
||||
for var in (current_flow_id, current_flow_request_id):
|
||||
try:
|
||||
var.set(None)
|
||||
except LookupError:
|
||||
# ContextVar was never set in this context; nothing to reset.
|
||||
pass
|
||||
411
lib/crewai/tests/test_llm_tool_loop.py
Normal file
@@ -0,0 +1,411 @@
"""Tests for LLM.call() tool loop and LLMResult.

All LLM calls are mocked — no real API traffic.
"""

from __future__ import annotations

import json
from types import SimpleNamespace
from typing import Any
from unittest.mock import MagicMock, patch

import pytest

from crewai.llm_result import (
    LLMResult,
    ToolCallRecord,
    _lookup_pricing,
    estimate_cost_usd,
)


def _make_litellm_llm(model: str = "gpt-4o") -> Any:
    """Create an LLM instance that uses the litellm fallback path."""
    from crewai.llm import LLM

    return LLM(model=model, is_litellm=True)


# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------


def _make_tool_call(name: str, arguments: dict, call_id: str = "call_1"):
    """Build a tool-call object using litellm's actual types."""
    try:
        from litellm.types.utils import (
            ChatCompletionMessageToolCall,
            Function,
        )

        return ChatCompletionMessageToolCall(
            id=call_id,
            function=Function(name=name, arguments=json.dumps(arguments)),
            type="function",
        )
    except ImportError:
        func = SimpleNamespace(name=name, arguments=json.dumps(arguments))
        return SimpleNamespace(id=call_id, function=func, type="function")


def _make_model_response(content: str | None = None, tool_calls: list | None = None):
    """Build a minimal mock ModelResponse that passes isinstance checks.

    We need it to be an instance of litellm's ModelResponse/ModelResponseBase
    so the internal isinstance() checks work. We import those types when
    litellm is available.
    """
    try:
        from litellm.types.utils import (
            Choices,
            Message,
            ModelResponse,
            Usage,
        )

        message = Message(content=content, tool_calls=tool_calls or None)
        choice = Choices(message=message, finish_reason="stop", index=0)
        resp = ModelResponse(
            choices=[choice],
            usage=Usage(
                prompt_tokens=100,
                completion_tokens=50,
                total_tokens=150,
            ),
        )
        return resp
    except ImportError:
        # Fall back to SimpleNamespace if litellm is not installed
        message = SimpleNamespace(content=content, tool_calls=tool_calls or [])
        choice = SimpleNamespace(message=message, finish_reason="stop")
        usage = SimpleNamespace(
            prompt_tokens=100,
            completion_tokens=50,
            total_tokens=150,
        )
        resp = SimpleNamespace(
            choices=[choice],
            model_extra={"usage": usage},
        )
        return resp


DUMMY_TOOL_SCHEMA = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                },
                "required": ["city"],
            },
        },
    }
]


# ---------------------------------------------------------------------------
# Unit tests for LLMResult / ToolCallRecord
# ---------------------------------------------------------------------------


class TestLLMResultModels:
    def test_tool_call_record_defaults(self):
        r = ToolCallRecord(name="foo")
        assert r.input == {}
        assert r.output == ""
        assert r.duration_ms == 0.0
        assert r.is_error is False

    def test_llm_result_defaults(self):
        r = LLMResult()
        assert r.text == ""
        assert r.tool_calls == []
        assert r.cost_usd == 0.0
        assert r.iterations == 0
        assert r.usage.total_tokens == 0

    def test_llm_result_with_data(self):
        r = LLMResult(
            text="hello",
            tool_calls=[ToolCallRecord(name="foo", input={"a": 1}, output="bar")],
            iterations=2,
            cost_usd=0.005,
        )
        assert r.text == "hello"
        assert len(r.tool_calls) == 1
        assert r.tool_calls[0].name == "foo"


# ---------------------------------------------------------------------------
# Cost estimation
# ---------------------------------------------------------------------------


class TestCostEstimation:
    def test_known_model(self):
        cost = estimate_cost_usd("gpt-4o", prompt_tokens=1_000_000, completion_tokens=0)
        assert cost == pytest.approx(2.50)

    def test_known_model_output(self):
        cost = estimate_cost_usd("gpt-4o", prompt_tokens=0, completion_tokens=1_000_000)
        assert cost == pytest.approx(10.00)

    def test_unknown_model_returns_zero(self):
        cost = estimate_cost_usd("some-random-model-xyz", 1000, 1000)
        assert cost == 0.0

    def test_provider_prefix_stripped(self):
        cost = estimate_cost_usd("anthropic/claude-sonnet-4-6", 1_000_000, 0)
        assert cost == pytest.approx(3.00)

    def test_partial_match(self):
        # "claude-sonnet-4-6-20250514" should match "claude-sonnet-4-6"
        cost = estimate_cost_usd("claude-sonnet-4-6-20250514", 1_000_000, 0)
        assert cost == pytest.approx(3.00)

    def test_lookup_none(self):
        assert _lookup_pricing("") is None
        assert _lookup_pricing("nonexistent") is None


# ---------------------------------------------------------------------------
# LLM.call() backwards compatibility (no tools → returns str)
# ---------------------------------------------------------------------------


class TestCallBackwardsCompat:
    """LLM.call() without tools must return str exactly as before."""

    @patch("crewai.llm.litellm")
    def test_call_without_tools_returns_str(self, mock_litellm):
        """Plain call without tools should return a string."""
        mock_litellm.completion.return_value = _make_model_response(content="Hello world")
        mock_litellm.drop_params = True
        mock_litellm.suppress_debug_info = True
        mock_litellm.success_callback = []
        mock_litellm._async_success_callback = []
        mock_litellm.callbacks = []

        llm = _make_litellm_llm()
        result = llm.call("Say hello")

        assert isinstance(result, str)
        assert result == "Hello world"


# ---------------------------------------------------------------------------
# LLM.call() with tools → returns LLMResult
# ---------------------------------------------------------------------------


class TestCallWithToolLoop:
    """When tools + available_functions are passed, call() returns LLMResult."""

    @patch("crewai.llm.litellm")
    def test_single_tool_call_then_text(self, mock_litellm):
        """Model calls one tool, then responds with text."""
        mock_litellm.drop_params = True
        mock_litellm.suppress_debug_info = True
        mock_litellm.success_callback = []
        mock_litellm._async_success_callback = []
        mock_litellm.callbacks = []

        # First call: model wants to call get_weather
        tool_call = _make_tool_call("get_weather", {"city": "SF"})
        resp1 = _make_model_response(content=None, tool_calls=[tool_call])
        # Second call: model responds with text
        resp2 = _make_model_response(content="It's sunny in SF!")
        mock_litellm.completion.side_effect = [resp1, resp2]

        llm = _make_litellm_llm()

        def get_weather(city: str) -> str:
            return f"Sunny, 72°F in {city}"

        result = llm.call(
            messages="What's the weather in SF?",
            tools=DUMMY_TOOL_SCHEMA,
            available_functions={"get_weather": get_weather},
        )

        assert isinstance(result, LLMResult)
        assert result.text == "It's sunny in SF!"
        assert len(result.tool_calls) == 1
        assert result.tool_calls[0].name == "get_weather"
        assert result.tool_calls[0].input == {"city": "SF"}
        assert "Sunny" in result.tool_calls[0].output
        assert result.tool_calls[0].is_error is False
        assert result.iterations == 2

    @patch("crewai.llm.litellm")
    def test_multiple_tool_calls_in_sequence(self, mock_litellm):
        """Model calls two tools across two iterations."""
        mock_litellm.drop_params = True
        mock_litellm.suppress_debug_info = True
        mock_litellm.success_callback = []
        mock_litellm._async_success_callback = []
        mock_litellm.callbacks = []

        tc1 = _make_tool_call("get_weather", {"city": "SF"}, "call_1")
        resp1 = _make_model_response(content=None, tool_calls=[tc1])

        tc2 = _make_tool_call("get_weather", {"city": "NYC"}, "call_2")
        resp2 = _make_model_response(content=None, tool_calls=[tc2])

        resp3 = _make_model_response(content="SF is sunny, NYC is rainy.")
        mock_litellm.completion.side_effect = [resp1, resp2, resp3]

        llm = _make_litellm_llm()

        def get_weather(city: str) -> str:
            return f"Weather for {city}: fine"

        result = llm.call(
            messages="Compare SF and NYC weather",
            tools=DUMMY_TOOL_SCHEMA,
            available_functions={"get_weather": get_weather},
        )

        assert isinstance(result, LLMResult)
        assert len(result.tool_calls) == 2
        assert result.tool_calls[0].input["city"] == "SF"
        assert result.tool_calls[1].input["city"] == "NYC"
        assert result.iterations == 3

    @patch("crewai.llm.litellm")
    def test_max_iterations_stops_loop(self, mock_litellm):
        """Loop stops when max_iterations is reached."""
        mock_litellm.drop_params = True
        mock_litellm.suppress_debug_info = True
        mock_litellm.success_callback = []
        mock_litellm._async_success_callback = []
        mock_litellm.callbacks = []

        # Model always wants to call a tool — never stops
        def make_tool_resp():
            tc = _make_tool_call("get_weather", {"city": "SF"})
            return _make_model_response(content=None, tool_calls=[tc])

        mock_litellm.completion.side_effect = [make_tool_resp() for _ in range(5)]

        llm = _make_litellm_llm()

        result = llm.call(
            messages="Loop forever",
            tools=DUMMY_TOOL_SCHEMA,
            available_functions={"get_weather": lambda city: "sunny"},
            max_iterations=3,
        )

        assert isinstance(result, LLMResult)
        assert result.iterations == 3
        assert len(result.tool_calls) == 3
        # Should have a text noting max iterations
        assert "Max iterations" in result.text

    @patch("crewai.llm.litellm")
    def test_tool_error_handling(self, mock_litellm):
        """Tool that raises an exception is captured in the record."""
        mock_litellm.drop_params = True
        mock_litellm.suppress_debug_info = True
        mock_litellm.success_callback = []
        mock_litellm._async_success_callback = []
        mock_litellm.callbacks = []

        tc = _make_tool_call("get_weather", {"city": "SF"})
        resp1 = _make_model_response(content=None, tool_calls=[tc])
        resp2 = _make_model_response(content="Sorry, couldn't get weather.")
        mock_litellm.completion.side_effect = [resp1, resp2]

        llm = _make_litellm_llm()

        def broken_weather(city: str) -> str:
            raise RuntimeError("API down")

        result = llm.call(
            messages="Weather?",
            tools=DUMMY_TOOL_SCHEMA,
            available_functions={"get_weather": broken_weather},
        )

        assert isinstance(result, LLMResult)
        assert len(result.tool_calls) == 1
        assert result.tool_calls[0].is_error is True
        assert "API down" in result.tool_calls[0].output
        assert result.text == "Sorry, couldn't get weather."

    @patch("crewai.llm.litellm")
    def test_unknown_function_error(self, mock_litellm):
        """Tool call for a function not in available_functions."""
        mock_litellm.drop_params = True
        mock_litellm.suppress_debug_info = True
        mock_litellm.success_callback = []
        mock_litellm._async_success_callback = []
        mock_litellm.callbacks = []

        tc = _make_tool_call("nonexistent_tool", {})
        resp1 = _make_model_response(content=None, tool_calls=[tc])
        resp2 = _make_model_response(content="I couldn't find that tool.")
        mock_litellm.completion.side_effect = [resp1, resp2]

        llm = _make_litellm_llm()

        result = llm.call(
            messages="Do something",
            tools=DUMMY_TOOL_SCHEMA,
            available_functions={"get_weather": lambda city: "sunny"},
        )

        assert isinstance(result, LLMResult)
        assert result.tool_calls[0].is_error is True
        assert "unknown function" in result.tool_calls[0].output

    @patch("crewai.llm.litellm")
    def test_cost_estimation_populated(self, mock_litellm):
        """cost_usd is populated from token usage and model pricing."""
        mock_litellm.drop_params = True
        mock_litellm.suppress_debug_info = True
        mock_litellm.success_callback = []
        mock_litellm._async_success_callback = []
        mock_litellm.callbacks = []

        resp = _make_model_response(content="Done!")
        mock_litellm.completion.return_value = resp

        llm = _make_litellm_llm()

        result = llm.call(
            messages="Hello",
            tools=DUMMY_TOOL_SCHEMA,
            available_functions={"get_weather": lambda city: "sunny"},
        )

        assert isinstance(result, LLMResult)
        # cost_usd should be >= 0 (may be 0 if usage tracking didn't fire,
        # but the field should exist and be a float)
        assert isinstance(result.cost_usd, float)

    @patch("crewai.llm.litellm")
    def test_immediate_text_response_with_tools(self, mock_litellm):
        """Model responds with text on first call (no tool use)."""
        mock_litellm.drop_params = True
        mock_litellm.suppress_debug_info = True
        mock_litellm.success_callback = []
        mock_litellm._async_success_callback = []
        mock_litellm.callbacks = []

        resp = _make_model_response(content="I know the answer already.")
        mock_litellm.completion.return_value = resp

        llm = _make_litellm_llm()

        result = llm.call(
            messages="What's 2+2?",
            tools=DUMMY_TOOL_SCHEMA,
            available_functions={"get_weather": lambda city: "sunny"},
        )

        assert isinstance(result, LLMResult)
        assert result.text == "I know the answer already."
        assert len(result.tool_calls) == 0
        assert result.iterations == 1
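The tool loop these tests exercise follows a common shape: call the model, execute any requested tools, feed the outputs back as tool messages, and repeat until the model answers in plain text or an iteration cap trips. A minimal self-contained sketch of that shape; `run_tool_loop`, `ToolCall`, and the scripted model below are illustrative assumptions, not crewai's implementation:

```python
import json
from dataclasses import dataclass, field
from typing import Any, Callable


@dataclass
class ToolCall:
    name: str
    arguments: str  # JSON-encoded, as chat APIs return it


@dataclass
class Result:
    text: str = ""
    tool_calls: list = field(default_factory=list)
    iterations: int = 0


def run_tool_loop(model, functions: dict[str, Callable[..., str]], max_iterations: int = 10) -> Result:
    """Call the model; execute requested tools; feed results back; repeat."""
    result = Result()
    messages: list[dict[str, Any]] = []
    while result.iterations < max_iterations:
        result.iterations += 1
        text, calls = model(messages)
        if not calls:  # plain-text answer -> done
            result.text = text
            return result
        for call in calls:
            fn = functions.get(call.name)
            try:
                output = fn(**json.loads(call.arguments)) if fn else f"unknown function: {call.name}"
            except Exception as exc:  # tool errors go back to the model, not up the stack
                output = f"error: {exc}"
            result.tool_calls.append((call.name, output))
            messages.append({"role": "tool", "name": call.name, "content": output})
    result.text = "Max iterations reached"
    return result


# Scripted fake model: one tool call, then a final answer.
responses = iter([
    ("", [ToolCall("get_weather", json.dumps({"city": "SF"}))]),
    ("It's sunny in SF!", []),
])
out = run_tool_loop(lambda msgs: next(responses), {"get_weather": lambda city: f"Sunny in {city}"})
assert out.text == "It's sunny in SF!" and out.iterations == 2
```

The cases above map onto this loop directly: a missing function name and a raising tool both become error strings in the transcript rather than exceptions, and the cap turns a model that never stops calling tools into a bounded run.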
@@ -1,3 +1,3 @@
 """CrewAI development tools."""

-__version__ = "1.14.3a2"
+__version__ = "1.14.3"
@@ -164,7 +164,7 @@ info = "Commits must follow Conventional Commits 1.0.0."
 [tool.uv]
 # Pinned to include the security patch releases (authlib 1.6.11,
 # langchain-text-splitters 1.1.2) uploaded on 2026-04-16.
-exclude-newer = "2026-04-17"
+exclude-newer = "2026-04-26"

 # composio-core pins rich<14 but textual requires rich>=14.
 # onnxruntime 1.24+ dropped Python 3.10 wheels; cap it so qdrant[fastembed] resolves on 3.10.
273
uv.lock
generated
@@ -13,7 +13,7 @@ resolution-markers = [
|
||||
]
|
||||
|
||||
[options]
|
||||
exclude-newer = "2026-04-17T16:00:00Z"
|
||||
exclude-newer = "2026-04-23T07:00:00Z"
|
||||
|
||||
[manifest]
|
||||
members = [
|
||||
@@ -510,6 +510,22 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/7e/d6/8ebcd05b01a580f086ac9a97fb9fac65c09a4b012161cc97c21a336e880b/azure_core-1.39.0-py3-none-any.whl", hash = "sha256:4ac7b70fab5438c3f68770649a78daf97833caa83827f91df9c14e0e0ea7d34f", size = 218318, upload-time = "2026-03-19T01:31:31.25Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "azure-identity"
|
||||
version = "1.25.3"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "azure-core" },
|
||||
{ name = "cryptography" },
|
||||
{ name = "msal" },
|
||||
{ name = "msal-extensions" },
|
||||
{ name = "typing-extensions" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/c5/0e/3a63efb48aa4a5ae2cfca61ee152fbcb668092134d3eb8bfda472dd5c617/azure_identity-1.25.3.tar.gz", hash = "sha256:ab23c0d63015f50b630ef6c6cf395e7262f439ce06e5d07a64e874c724f8d9e6", size = 286304, upload-time = "2026-03-13T01:12:20.892Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/49/9a/417b3a533e01953a7c618884df2cb05a71e7b68bdbce4fbdb62349d2a2e8/azure_identity-1.25.3-py3-none-any.whl", hash = "sha256:f4d0b956a8146f30333e071374171f3cfa7bdb8073adb8c3814b65567aa7447c", size = 192138, upload-time = "2026-03-13T01:12:22.951Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "backoff"
|
||||
version = "2.2.1"
|
||||
@@ -700,6 +716,15 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/32/76/cab7af7f16c0b09347f2ebe7ffda7101132f786acb767666dce43055faab/botocore_stubs-1.42.41-py3-none-any.whl", hash = "sha256:9423110fb0e391834bd2ed44ae5f879d8cb370a444703d966d30842ce2bcb5f0", size = 66759, upload-time = "2026-02-03T20:46:13.02Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "bracex"
|
||||
version = "2.6"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/63/9a/fec38644694abfaaeca2798b58e276a8e61de49e2e37494ace423395febc/bracex-2.6.tar.gz", hash = "sha256:98f1347cd77e22ee8d967a30ad4e310b233f7754dbf31ff3fceb76145ba47dc7", size = 26642, upload-time = "2025-06-22T19:12:31.254Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/9d/2a/9186535ce58db529927f6cf5990a849aa9e052eea3e2cfefe20b9e1802da/bracex-2.6-py3-none-any.whl", hash = "sha256:0b0049264e7340b3ec782b5cb99beb325f36c3782a32e36e876452fd49a09952", size = 11508, upload-time = "2025-06-22T19:12:29.781Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "browserbase"
|
||||
version = "1.8.0"
|
||||
@@ -1296,6 +1321,7 @@ aws = [
|
||||
]
|
||||
azure-ai-inference = [
|
||||
{ name = "azure-ai-inference" },
|
||||
{ name = "azure-identity" },
|
||||
]
|
||||
bedrock = [
|
||||
{ name = "boto3" },
|
||||
@@ -1350,6 +1376,7 @@ requires-dist = [
|
||||
{ name = "anthropic", marker = "extra == 'anthropic'", specifier = "~=0.73.0" },
|
||||
{ name = "appdirs", specifier = "~=1.4.4" },
|
||||
{ name = "azure-ai-inference", marker = "extra == 'azure-ai-inference'", specifier = "~=1.0.0b9" },
|
||||
{ name = "azure-identity", marker = "extra == 'azure-ai-inference'", specifier = ">=1.17.0,<2" },
|
||||
{ name = "boto3", marker = "extra == 'aws'", specifier = "~=1.42.79" },
|
||||
{ name = "boto3", marker = "extra == 'bedrock'", specifier = "~=1.42.79" },
|
||||
{ name = "chromadb", specifier = "~=1.1.0" },
|
||||
@@ -1489,6 +1516,10 @@ databricks-sdk = [
|
||||
daytona = [
|
||||
{ name = "daytona" },
|
||||
]
|
||||
e2b = [
|
||||
{ name = "e2b" },
|
||||
{ name = "e2b-code-interpreter" },
|
||||
]
|
||||
exa-py = [
|
||||
{ name = "exa-py" },
|
||||
]
|
||||
@@ -1590,13 +1621,15 @@ requires-dist = [
|
||||
{ name = "cryptography", marker = "extra == 'snowflake'", specifier = ">=43.0.3" },
|
||||
{ name = "databricks-sdk", marker = "extra == 'databricks-sdk'", specifier = ">=0.46.0" },
|
||||
{ name = "daytona", marker = "extra == 'daytona'", specifier = "~=0.140.0" },
|
||||
{ name = "e2b", marker = "extra == 'e2b'", specifier = "~=2.20.0" },
|
||||
{ name = "e2b-code-interpreter", marker = "extra == 'e2b'", specifier = "~=2.6.0" },
|
||||
{ name = "exa-py", marker = "extra == 'exa-py'", specifier = ">=1.8.7" },
|
||||
{ name = "firecrawl-py", marker = "extra == 'firecrawl-py'", specifier = ">=1.8.0" },
|
||||
{ name = "gitpython", marker = "extra == 'github'", specifier = ">=3.1.41,<4" },
|
||||
{ name = "hyperbrowser", marker = "extra == 'hyperbrowser'", specifier = ">=0.18.0" },
|
||||
{ name = "langchain-apify", marker = "extra == 'apify'", specifier = ">=0.1.2,<1.0.0" },
|
||||
{ name = "linkup-sdk", marker = "extra == 'linkup-sdk'", specifier = ">=0.2.2" },
|
||||
{ name = "lxml", marker = "extra == 'rag'", specifier = ">=5.3.0,<5.4.0" },
|
||||
{ name = "lxml", marker = "extra == 'rag'", specifier = ">=6.1.0,<7" },
|
||||
{ name = "mcp", marker = "extra == 'mcp'", specifier = ">=1.6.0" },
|
||||
{ name = "mcpadapt", marker = "extra == 'mcp'", specifier = ">=0.1.9" },
|
||||
{ name = "multion", marker = "extra == 'multion'", specifier = ">=1.1.0" },
|
||||
@@ -1632,7 +1665,7 @@ requires-dist = [
|
||||
{ name = "weaviate-client", marker = "extra == 'weaviate-client'", specifier = ">=4.10.2" },
|
||||
{ name = "youtube-transcript-api", specifier = "~=1.2.2" },
|
||||
]
|
||||
provides-extras = ["apify", "beautifulsoup4", "bedrock", "browserbase", "composio-core", "contextual", "couchbase", "databricks-sdk", "daytona", "exa-py", "firecrawl-py", "github", "hyperbrowser", "linkup-sdk", "mcp", "mongodb", "multion", "mysql", "oxylabs", "patronus", "postgresql", "qdrant-client", "rag", "scrapegraph-py", "scrapfly-sdk", "selenium", "serpapi", "singlestore", "snowflake", "spider-client", "sqlalchemy", "stagehand", "tavily-python", "weaviate-client", "xml"]
|
||||
provides-extras = ["apify", "beautifulsoup4", "bedrock", "browserbase", "composio-core", "contextual", "couchbase", "databricks-sdk", "daytona", "e2b", "exa-py", "firecrawl-py", "github", "hyperbrowser", "linkup-sdk", "mcp", "mongodb", "multion", "mysql", "oxylabs", "patronus", "postgresql", "qdrant-client", "rag", "scrapegraph-py", "scrapfly-sdk", "selenium", "serpapi", "singlestore", "snowflake", "spider-client", "sqlalchemy", "stagehand", "tavily-python", "weaviate-client", "xml"]
|
||||
|
||||
[[package]]
|
||||
name = "cryptography"
|
||||
@@ -1975,6 +2008,15 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/ba/5a/18ad964b0086c6e62e2e7500f7edc89e3faa45033c71c1893d34eed2b2de/dnspython-2.8.0-py3-none-any.whl", hash = "sha256:01d9bbc4a2d76bf0db7c1f729812ded6d912bd318d3b1cf81d30c0f845dbf3af", size = 331094, upload-time = "2025-09-07T18:57:58.071Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "dockerfile-parse"
|
||||
version = "2.0.1"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/92/df/929ee0b5d2c8bd8d713c45e71b94ab57c7e11e322130724d54f469b2cd48/dockerfile-parse-2.0.1.tar.gz", hash = "sha256:3184ccdc513221983e503ac00e1aa504a2aa8f84e5de673c46b0b6eee99ec7bc", size = 24556, upload-time = "2023-07-18T13:36:07.897Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/7a/6c/79cd5bc1b880d8c1a9a5550aa8dacd57353fa3bb2457227e1fb47383eb49/dockerfile_parse-2.0.1-py2.py3-none-any.whl", hash = "sha256:bdffd126d2eb26acf1066acb54cb2e336682e1d72b974a40894fac76a4df17f6", size = 14845, upload-time = "2023-07-18T13:36:06.052Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "docling"
|
||||
version = "2.84.0"
|
||||
@@ -2125,6 +2167,41 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/b0/0d/9feae160378a3553fa9a339b0e9c1a048e147a4127210e286ef18b730f03/durationpy-0.10-py3-none-any.whl", hash = "sha256:3b41e1b601234296b4fb368338fdcd3e13e0b4fb5b67345948f4f2bf9868b286", size = 3922, upload-time = "2025-05-17T13:52:36.463Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "e2b"
|
||||
version = "2.20.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "attrs" },
|
||||
{ name = "dockerfile-parse" },
|
||||
{ name = "httpcore" },
|
||||
{ name = "httpx" },
|
||||
{ name = "packaging" },
|
||||
{ name = "protobuf" },
|
||||
{ name = "python-dateutil" },
|
||||
{ name = "rich" },
|
||||
{ name = "typing-extensions" },
|
||||
{ name = "wcmatch" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/8c/87/e9b3bd252a4fe2b3fd6967ff985c7a5a15a31b2d5b8c37e50afb18797b17/e2b-2.20.0.tar.gz", hash = "sha256:52b3a00ac7015bbdce84913b2a57664d2def33d5a4069e34fa2354de31759173", size = 156575, upload-time = "2026-04-02T19:20:32.375Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/c2/ce/e402e2ecebe40ed9af20cddb862386f2ce20336e35c0dea257812129020e/e2b-2.20.0-py3-none-any.whl", hash = "sha256:66f6edcf6b742ca180f3aadcff7966fda86d68430fa6b2becdfa0fcc72224988", size = 296483, upload-time = "2026-04-02T19:20:30.573Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "e2b-code-interpreter"
|
||||
version = "2.6.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "attrs" },
|
||||
{ name = "e2b" },
|
||||
{ name = "httpx" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/cf/dd/f90b56d1597abfcdabdc018ac184fa714066be93d24b97edc2bf0671d483/e2b_code_interpreter-2.6.0.tar.gz", hash = "sha256:67e66531e5cf65c9df6e82aa0bdb1e73223a1ab205f10d47c027eb2ea09b73f9", size = 10683, upload-time = "2026-03-23T17:01:07.327Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/6b/79/f70d50604584df66064892f3fca7ab57b10ad40c826fd003be53a4cd5fa5/e2b_code_interpreter-2.6.0-py3-none-any.whl", hash = "sha256:a15f1d155566aef98cf2ccc0f8d9b07d15e07582d6cc8a128bc97de371bd617c", size = 13715, upload-time = "2026-03-23T17:01:06.111Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "effdet"
|
||||
version = "0.4.1"
|
||||
@@ -3913,84 +3990,84 @@ wheels = [
|
||||
|
||||
[[package]]
|
||||
name = "lxml"
|
||||
version = "5.3.2"
|
||||
version = "6.1.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/80/61/d3dc048cd6c7be6fe45b80cedcbdd4326ba4d550375f266d9f4246d0f4bc/lxml-5.3.2.tar.gz", hash = "sha256:773947d0ed809ddad824b7b14467e1a481b8976e87278ac4a730c2f7c7fcddc1", size = 3679948, upload-time = "2025-04-05T18:31:58.757Z" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/28/30/9abc9e34c657c33834eaf6cd02124c61bdf5944d802aa48e69be8da3585d/lxml-6.1.0.tar.gz", hash = "sha256:bfd57d8008c4965709a919c3e9a98f76c2c7cb319086b3d26858250620023b13", size = 4197006, upload-time = "2026-04-18T04:32:51.613Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/f7/9c/b015de0277a13d1d51924810b248b8a685a4e3dcd02d2ffb9b4e65cc37f4/lxml-5.3.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:c4b84d6b580a9625dfa47269bf1fd7fbba7ad69e08b16366a46acb005959c395", size = 8144077, upload-time = "2025-04-05T18:25:05.832Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/a7/6a/30467f6b66ae666d20b52dffa98c00f0f15e0567d1333d70db7c44a6939e/lxml-5.3.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b4c08ecb26e4270a62f81f81899dfff91623d349e433b126931c9c4577169666", size = 4423433, upload-time = "2025-04-05T18:25:10.126Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/12/85/5a50121c0b57c8aba1beec30d324dc9272a193ecd6c24ad1efb5e223a035/lxml-5.3.2-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ef926e9f11e307b5a7c97b17c5c609a93fb59ffa8337afac8f89e6fe54eb0b37", size = 5230753, upload-time = "2025-04-05T18:25:12.638Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/81/07/a62896efbb74ff23e9d19a14713fb9c808dfd89d79eecb8a583d1ca722b1/lxml-5.3.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:017ceeabe739100379fe6ed38b033cd244ce2da4e7f6f07903421f57da3a19a2", size = 4945993, upload-time = "2025-04-05T18:25:15.63Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/74/ca/c47bffbafcd98c53c2ccd26dcb29b2de8fa0585d5afae76e5c5a9dce5f96/lxml-5.3.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:dae97d9435dc90590f119d056d233c33006b2fd235dd990d5564992261ee7ae8", size = 5562292, upload-time = "2025-04-05T18:25:18.744Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/8f/79/f4ad46c00b72eb465be2032dad7922a14c929ae983e40cd9a179f1e727db/lxml-5.3.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:910f39425c6798ce63c93976ae5af5fff6949e2cb446acbd44d6d892103eaea8", size = 5000296, upload-time = "2025-04-05T18:25:21.268Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/44/cb/c974078e015990f83d13ef00dac347d74b1d62c2e6ec6e8eeb40ec9a1f1a/lxml-5.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c9780de781a0d62a7c3680d07963db3048b919fc9e3726d9cfd97296a65ffce1", size = 5114822, upload-time = "2025-04-05T18:25:24.401Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/1b/c4/dde5d197d176f232c018e7dfd1acadf3aeb8e9f3effa73d13b62f9540061/lxml-5.3.2-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:1a06b0c6ba2e3ca45a009a78a4eb4d6b63831830c0a83dcdc495c13b9ca97d3e", size = 4941338, upload-time = "2025-04-05T18:25:27.402Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/eb/8b/72f8df23f6955bb0f6aca635f72ec52799104907d6b11317099e79e1c752/lxml-5.3.2-cp310-cp310-manylinux_2_28_ppc64le.whl", hash = "sha256:4c62d0a34d1110769a1bbaf77871a4b711a6f59c4846064ccb78bc9735978644", size = 5586914, upload-time = "2025-04-05T18:25:30.604Z" },
{ url = "https://files.pythonhosted.org/packages/0f/93/7b5ff2971cc5cf017de8ef0e9fdfca6afd249b1e187cb8195e27ed40bb9a/lxml-5.3.2-cp310-cp310-manylinux_2_28_s390x.whl", hash = "sha256:8f961a4e82f411b14538fe5efc3e6b953e17f5e809c463f0756a0d0e8039b700", size = 5082388, upload-time = "2025-04-05T18:25:33.147Z" },
{ url = "https://files.pythonhosted.org/packages/a3/3e/f81d28bceb4e978a3d450098bdc5364d9c58473ad2f4ded04f679dc76e7e/lxml-5.3.2-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:3dfc78f5f9251b6b8ad37c47d4d0bfe63ceb073a916e5b50a3bf5fd67a703335", size = 5161925, upload-time = "2025-04-05T18:25:36.128Z" },
{ url = "https://files.pythonhosted.org/packages/4d/4b/1218fcfa0dfc8917ce29c66150cc8f6962d35579f412080aec480cc1a990/lxml-5.3.2-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:10e690bc03214d3537270c88e492b8612d5e41b884f232df2b069b25b09e6711", size = 5022096, upload-time = "2025-04-05T18:25:38.949Z" },
{ url = "https://files.pythonhosted.org/packages/8c/de/8eb6fffecd9c5f129461edcdd7e1ac944f9de15783e3d89c84ed6e0374bc/lxml-5.3.2-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:aa837e6ee9534de8d63bc4c1249e83882a7ac22bd24523f83fad68e6ffdf41ae", size = 5652903, upload-time = "2025-04-05T18:25:41.991Z" },
{ url = "https://files.pythonhosted.org/packages/95/79/80f4102a08495c100014593680f3f0f7bd7c1333b13520aed855fc993326/lxml-5.3.2-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:da4c9223319400b97a2acdfb10926b807e51b69eb7eb80aad4942c0516934858", size = 5491813, upload-time = "2025-04-05T18:25:44.983Z" },
{ url = "https://files.pythonhosted.org/packages/15/f5/9b1f7edf6565ee31e4300edb1bcc61eaebe50a3cff4053c0206d8dc772f2/lxml-5.3.2-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:dc0e9bdb3aa4d1de703a437576007d366b54f52c9897cae1a3716bb44fc1fc85", size = 5227837, upload-time = "2025-04-05T18:25:47.433Z" },
{ url = "https://files.pythonhosted.org/packages/dd/53/a187c4ccfcd5fbfca01e6c96da39499d8b801ab5dcf57717db95d7a968a8/lxml-5.3.2-cp310-cp310-win32.whl", hash = "sha256:dd755a0a78dd0b2c43f972e7b51a43be518ebc130c9f1a7c4480cf08b4385486", size = 3477533, upload-time = "2025-04-18T06:15:35.546Z" },
{ url = "https://files.pythonhosted.org/packages/f2/2c/397c5a9d76a7a0faf9e5b13143ae1a7e223e71d2197a45da71c21aacb3d4/lxml-5.3.2-cp310-cp310-win_amd64.whl", hash = "sha256:d64ea1686474074b38da13ae218d9fde0d1dc6525266976808f41ac98d9d7980", size = 3805160, upload-time = "2025-04-05T18:25:52.007Z" },
{ url = "https://files.pythonhosted.org/packages/84/b8/2b727f5a90902f7cc5548349f563b60911ca05f3b92e35dfa751349f265f/lxml-5.3.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:9d61a7d0d208ace43986a92b111e035881c4ed45b1f5b7a270070acae8b0bfb4", size = 8163457, upload-time = "2025-04-05T18:25:55.176Z" },
{ url = "https://files.pythonhosted.org/packages/91/84/23135b2dc72b3440d68c8f39ace2bb00fe78e3a2255f7c74f7e76f22498e/lxml-5.3.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:856dfd7eda0b75c29ac80a31a6411ca12209183e866c33faf46e77ace3ce8a79", size = 4433445, upload-time = "2025-04-05T18:25:57.631Z" },
{ url = "https://files.pythonhosted.org/packages/c9/1c/6900ade2294488f80598af7b3229669562166384bb10bf4c915342a2f288/lxml-5.3.2-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7a01679e4aad0727bedd4c9407d4d65978e920f0200107ceeffd4b019bd48529", size = 5029603, upload-time = "2025-04-05T18:26:00.145Z" },
{ url = "https://files.pythonhosted.org/packages/2f/e9/31dbe5deaccf0d33ec279cf400306ad4b32dfd1a0fee1fca40c5e90678fe/lxml-5.3.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b6b37b4c3acb8472d191816d4582379f64d81cecbdce1a668601745c963ca5cc", size = 4771236, upload-time = "2025-04-05T18:26:02.656Z" },
{ url = "https://files.pythonhosted.org/packages/68/41/c3412392884130af3415af2e89a2007e00b2a782be6fb848a95b598a114c/lxml-5.3.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3df5a54e7b7c31755383f126d3a84e12a4e0333db4679462ef1165d702517477", size = 5369815, upload-time = "2025-04-05T18:26:05.842Z" },
{ url = "https://files.pythonhosted.org/packages/34/0a/ba0309fd5f990ea0cc05aba2bea225ef1bcb07ecbf6c323c6b119fc46e7f/lxml-5.3.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c09a40f28dcded933dc16217d6a092be0cc49ae25811d3b8e937c8060647c353", size = 4843663, upload-time = "2025-04-05T18:26:09.143Z" },
{ url = "https://files.pythonhosted.org/packages/b6/c6/663b5d87d51d00d4386a2d52742a62daa486c5dc6872a443409d9aeafece/lxml-5.3.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a1ef20f1851ccfbe6c5a04c67ec1ce49da16ba993fdbabdce87a92926e505412", size = 4918028, upload-time = "2025-04-05T18:26:12.243Z" },
{ url = "https://files.pythonhosted.org/packages/75/5f/f6a72ccbe05cf83341d4b6ad162ed9e1f1ffbd12f1c4b8bc8ae413392282/lxml-5.3.2-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:f79a63289dbaba964eb29ed3c103b7911f2dce28c36fe87c36a114e6bd21d7ad", size = 4792005, upload-time = "2025-04-05T18:26:15.081Z" },
{ url = "https://files.pythonhosted.org/packages/37/7b/8abd5b332252239ffd28df5842ee4e5bf56e1c613c323586c21ccf5af634/lxml-5.3.2-cp311-cp311-manylinux_2_28_ppc64le.whl", hash = "sha256:75a72697d95f27ae00e75086aed629f117e816387b74a2f2da6ef382b460b710", size = 5405363, upload-time = "2025-04-05T18:26:17.618Z" },
{ url = "https://files.pythonhosted.org/packages/5a/79/549b7ec92b8d9feb13869c1b385a0749d7ccfe5590d1e60f11add9cdd580/lxml-5.3.2-cp311-cp311-manylinux_2_28_s390x.whl", hash = "sha256:b9b00c9ee1cc3a76f1f16e94a23c344e0b6e5c10bec7f94cf2d820ce303b8c01", size = 4932915, upload-time = "2025-04-05T18:26:20.269Z" },
{ url = "https://files.pythonhosted.org/packages/57/eb/4fa626d0bac8b4f2aa1d0e6a86232db030fd0f462386daf339e4a0ee352b/lxml-5.3.2-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:77cbcab50cbe8c857c6ba5f37f9a3976499c60eada1bf6d38f88311373d7b4bc", size = 4983473, upload-time = "2025-04-05T18:26:23.828Z" },
{ url = "https://files.pythonhosted.org/packages/1b/c8/79d61d13cbb361c2c45fbe7c8bd00ea6a23b3e64bc506264d2856c60d702/lxml-5.3.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:29424058f072a24622a0a15357bca63d796954758248a72da6d512f9bd9a4493", size = 4855284, upload-time = "2025-04-05T18:26:26.504Z" },
{ url = "https://files.pythonhosted.org/packages/80/16/9f84e1ef03a13136ab4f9482c9adaaad425c68b47556b9d3192a782e5d37/lxml-5.3.2-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:7d82737a8afe69a7c80ef31d7626075cc7d6e2267f16bf68af2c764b45ed68ab", size = 5458355, upload-time = "2025-04-05T18:26:29.086Z" },
{ url = "https://files.pythonhosted.org/packages/aa/6d/f62860451bb4683e87636e49effb76d499773337928e53356c1712ccec24/lxml-5.3.2-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:95473d1d50a5d9fcdb9321fdc0ca6e1edc164dce4c7da13616247d27f3d21e31", size = 5300051, upload-time = "2025-04-05T18:26:31.723Z" },
{ url = "https://files.pythonhosted.org/packages/3f/5f/3b6c4acec17f9a57ea8bb89a658a70621db3fb86ea588e7703b6819d9b03/lxml-5.3.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:2162068f6da83613f8b2a32ca105e37a564afd0d7009b0b25834d47693ce3538", size = 5033481, upload-time = "2025-04-05T18:26:34.312Z" },
{ url = "https://files.pythonhosted.org/packages/79/bd/3c4dd7d903bb9981f4876c61ef2ff5d5473e409ef61dc7337ac207b91920/lxml-5.3.2-cp311-cp311-win32.whl", hash = "sha256:f8695752cf5d639b4e981afe6c99e060621362c416058effd5c704bede9cb5d1", size = 3474266, upload-time = "2025-04-05T18:26:36.545Z" },
{ url = "https://files.pythonhosted.org/packages/1f/ea/9311fa1ef75b7d601c89600fc612838ee77ad3d426184941cba9cf62641f/lxml-5.3.2-cp311-cp311-win_amd64.whl", hash = "sha256:d1a94cbb4ee64af3ab386c2d63d6d9e9cf2e256ac0fd30f33ef0a3c88f575174", size = 3815230, upload-time = "2025-04-05T18:26:39.486Z" },
{ url = "https://files.pythonhosted.org/packages/0d/7e/c749257a7fabc712c4df57927b0f703507f316e9f2c7e3219f8f76d36145/lxml-5.3.2-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:16b3897691ec0316a1aa3c6585f61c8b7978475587c5b16fc1d2c28d283dc1b0", size = 8193212, upload-time = "2025-04-05T18:26:42.692Z" },
{ url = "https://files.pythonhosted.org/packages/a8/50/17e985ba162c9f1ca119f4445004b58f9e5ef559ded599b16755e9bfa260/lxml-5.3.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:a8d4b34a0eeaf6e73169dcfd653c8d47f25f09d806c010daf074fba2db5e2d3f", size = 4451439, upload-time = "2025-04-05T18:26:46.468Z" },
{ url = "https://files.pythonhosted.org/packages/c2/b5/4960ba0fcca6ce394ed4a2f89ee13083e7fcbe9641a91166e8e9792fedb1/lxml-5.3.2-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9cd7a959396da425022e1e4214895b5cfe7de7035a043bcc2d11303792b67554", size = 5052146, upload-time = "2025-04-05T18:26:49.737Z" },
{ url = "https://files.pythonhosted.org/packages/5f/d1/184b04481a5d1f5758916de087430752a7b229bddbd6c1d23405078c72bd/lxml-5.3.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cac5eaeec3549c5df7f8f97a5a6db6963b91639389cdd735d5a806370847732b", size = 4789082, upload-time = "2025-04-05T18:26:52.295Z" },
{ url = "https://files.pythonhosted.org/packages/7d/75/1a19749d373e9a3d08861addccdf50c92b628c67074b22b8f3c61997cf5a/lxml-5.3.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:29b5f7d77334877c2146e7bb8b94e4df980325fab0a8af4d524e5d43cd6f789d", size = 5312300, upload-time = "2025-04-05T18:26:54.923Z" },
{ url = "https://files.pythonhosted.org/packages/fb/00/9d165d4060d3f347e63b219fcea5c6a3f9193e9e2868c6801e18e5379725/lxml-5.3.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:13f3495cfec24e3d63fffd342cc8141355d1d26ee766ad388775f5c8c5ec3932", size = 4836655, upload-time = "2025-04-05T18:26:57.488Z" },
{ url = "https://files.pythonhosted.org/packages/b8/e9/06720a33cc155966448a19677f079100517b6629a872382d22ebd25e48aa/lxml-5.3.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e70ad4c9658beeff99856926fd3ee5fde8b519b92c693f856007177c36eb2e30", size = 4961795, upload-time = "2025-04-05T18:27:00.126Z" },
{ url = "https://files.pythonhosted.org/packages/2d/57/4540efab2673de2904746b37ef7f74385329afd4643ed92abcc9ec6e00ca/lxml-5.3.2-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:507085365783abd7879fa0a6fa55eddf4bdd06591b17a2418403bb3aff8a267d", size = 4779791, upload-time = "2025-04-05T18:27:03.061Z" },
{ url = "https://files.pythonhosted.org/packages/99/ad/6056edf6c9f4fa1d41e6fbdae52c733a4a257fd0d7feccfa26ae051bb46f/lxml-5.3.2-cp312-cp312-manylinux_2_28_ppc64le.whl", hash = "sha256:5bb304f67cbf5dfa07edad904732782cbf693286b9cd85af27059c5779131050", size = 5346807, upload-time = "2025-04-05T18:27:05.877Z" },
{ url = "https://files.pythonhosted.org/packages/a1/fa/5be91fc91a18f3f705ea5533bc2210b25d738c6b615bf1c91e71a9b2f26b/lxml-5.3.2-cp312-cp312-manylinux_2_28_s390x.whl", hash = "sha256:3d84f5c093645c21c29a4e972b84cb7cf682f707f8706484a5a0c7ff13d7a988", size = 4909213, upload-time = "2025-04-05T18:27:08.588Z" },
{ url = "https://files.pythonhosted.org/packages/f3/74/71bb96a3b5ae36b74e0402f4fa319df5559a8538577f8c57c50f1b57dc15/lxml-5.3.2-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:bdc13911db524bd63f37b0103af014b7161427ada41f1b0b3c9b5b5a9c1ca927", size = 4987694, upload-time = "2025-04-05T18:27:11.66Z" },
{ url = "https://files.pythonhosted.org/packages/08/c2/3953a68b0861b2f97234b1838769269478ccf872d8ea7a26e911238220ad/lxml-5.3.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:1ec944539543f66ebc060ae180d47e86aca0188bda9cbfadff47d86b0dc057dc", size = 4862865, upload-time = "2025-04-05T18:27:14.194Z" },
{ url = "https://files.pythonhosted.org/packages/e0/9a/52e48f7cfd5a5e61f44a77e679880580dfb4f077af52d6ed5dd97e3356fe/lxml-5.3.2-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:59d437cc8a7f838282df5a199cf26f97ef08f1c0fbec6e84bd6f5cc2b7913f6e", size = 5423383, upload-time = "2025-04-05T18:27:16.988Z" },
{ url = "https://files.pythonhosted.org/packages/17/67/42fe1d489e4dcc0b264bef361aef0b929fbb2b5378702471a3043bc6982c/lxml-5.3.2-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:0e275961adbd32e15672e14e0cc976a982075208224ce06d149c92cb43db5b93", size = 5286864, upload-time = "2025-04-05T18:27:19.703Z" },
{ url = "https://files.pythonhosted.org/packages/29/e4/03b1d040ee3aaf2bd4e1c2061de2eae1178fe9a460d3efc1ea7ef66f6011/lxml-5.3.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:038aeb6937aa404480c2966b7f26f1440a14005cb0702078c173c028eca72c31", size = 5056819, upload-time = "2025-04-05T18:27:22.814Z" },
{ url = "https://files.pythonhosted.org/packages/83/b3/e2ec8a6378e4d87da3af9de7c862bcea7ca624fc1a74b794180c82e30123/lxml-5.3.2-cp312-cp312-win32.whl", hash = "sha256:3c2c8d0fa3277147bff180e3590be67597e17d365ce94beb2efa3138a2131f71", size = 3486177, upload-time = "2025-04-05T18:27:25.078Z" },
{ url = "https://files.pythonhosted.org/packages/d5/8a/6a08254b0bab2da9573735725caab8302a2a1c9b3818533b41568ca489be/lxml-5.3.2-cp312-cp312-win_amd64.whl", hash = "sha256:77809fcd97dfda3f399102db1794f7280737b69830cd5c961ac87b3c5c05662d", size = 3817134, upload-time = "2025-04-05T18:27:27.481Z" },
{ url = "https://files.pythonhosted.org/packages/19/fe/904fd1b0ba4f42ed5a144fcfff7b8913181892a6aa7aeb361ee783d441f8/lxml-5.3.2-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:77626571fb5270ceb36134765f25b665b896243529eefe840974269b083e090d", size = 8173598, upload-time = "2025-04-05T18:27:31.229Z" },
{ url = "https://files.pythonhosted.org/packages/97/e8/5e332877b3ce4e2840507b35d6dbe1cc33b17678ece945ba48d2962f8c06/lxml-5.3.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:78a533375dc7aa16d0da44af3cf6e96035e484c8c6b2b2445541a5d4d3d289ee", size = 4441586, upload-time = "2025-04-05T18:27:33.883Z" },
{ url = "https://files.pythonhosted.org/packages/de/f4/8fe2e6d8721803182fbce2325712e98f22dbc478126070e62731ec6d54a0/lxml-5.3.2-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a6f62b2404b3f3f0744bbcabb0381c5fe186fa2a9a67ecca3603480f4846c585", size = 5038447, upload-time = "2025-04-05T18:27:36.426Z" },
{ url = "https://files.pythonhosted.org/packages/a6/ac/fa63f86a1a4b1ba8b03599ad9e2f5212fa813223ac60bfe1155390d1cc0c/lxml-5.3.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2ea918da00091194526d40c30c4996971f09dacab032607581f8d8872db34fbf", size = 4783583, upload-time = "2025-04-05T18:27:39.492Z" },
{ url = "https://files.pythonhosted.org/packages/1a/7a/08898541296a02c868d4acc11f31a5839d80f5b21d4a96f11d4c0fbed15e/lxml-5.3.2-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c35326f94702a7264aa0eea826a79547d3396a41ae87a70511b9f6e9667ad31c", size = 5305684, upload-time = "2025-04-05T18:27:42.16Z" },
{ url = "https://files.pythonhosted.org/packages/0b/be/9a6d80b467771b90be762b968985d3de09e0d5886092238da65dac9c1f75/lxml-5.3.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e3bef90af21d31c4544bc917f51e04f94ae11b43156356aff243cdd84802cbf2", size = 4830797, upload-time = "2025-04-05T18:27:45.071Z" },
{ url = "https://files.pythonhosted.org/packages/8d/1c/493632959f83519802637f7db3be0113b6e8a4e501b31411fbf410735a75/lxml-5.3.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:52fa7ba11a495b7cbce51573c73f638f1dcff7b3ee23697467dc063f75352a69", size = 4950302, upload-time = "2025-04-05T18:27:47.979Z" },
{ url = "https://files.pythonhosted.org/packages/c7/13/01aa3b92a6b93253b90c061c7527261b792f5ae7724b420cded733bfd5d6/lxml-5.3.2-cp313-cp313-manylinux_2_28_aarch64.whl", hash = "sha256:ad131e2c4d2c3803e736bb69063382334e03648de2a6b8f56a878d700d4b557d", size = 4775247, upload-time = "2025-04-05T18:27:51.174Z" },
{ url = "https://files.pythonhosted.org/packages/60/4a/baeb09fbf5c84809e119c9cf8e2e94acec326a9b45563bf5ae45a234973b/lxml-5.3.2-cp313-cp313-manylinux_2_28_ppc64le.whl", hash = "sha256:00a4463ca409ceacd20490a893a7e08deec7870840eff33dc3093067b559ce3e", size = 5338824, upload-time = "2025-04-05T18:27:54.15Z" },
{ url = "https://files.pythonhosted.org/packages/69/c7/a05850f169ad783ed09740ac895e158b06d25fce4b13887a8ac92a84d61c/lxml-5.3.2-cp313-cp313-manylinux_2_28_s390x.whl", hash = "sha256:87e8d78205331cace2b73ac8249294c24ae3cba98220687b5b8ec5971a2267f1", size = 4899079, upload-time = "2025-04-05T18:27:57.03Z" },
{ url = "https://files.pythonhosted.org/packages/de/48/18ca583aba5235582db0e933ed1af6540226ee9ca16c2ee2d6f504fcc34a/lxml-5.3.2-cp313-cp313-manylinux_2_28_x86_64.whl", hash = "sha256:bf6389133bb255e530a4f2f553f41c4dd795b1fbb6f797aea1eff308f1e11606", size = 4978041, upload-time = "2025-04-05T18:27:59.918Z" },
{ url = "https://files.pythonhosted.org/packages/b6/55/6968ddc88554209d1dba0dca196360c629b3dfe083bc32a3370f9523a0c4/lxml-5.3.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:b3709fc752b42fb6b6ffa2ba0a5b9871646d97d011d8f08f4d5b3ee61c7f3b2b", size = 4859761, upload-time = "2025-04-05T18:28:02.83Z" },
{ url = "https://files.pythonhosted.org/packages/2e/52/d2d3baa1e0b7d04a729613160f1562f466fb1a0e45085a33acb0d6981a2b/lxml-5.3.2-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:abc795703d0de5d83943a4badd770fbe3d1ca16ee4ff3783d7caffc252f309ae", size = 5418209, upload-time = "2025-04-05T18:28:05.851Z" },
{ url = "https://files.pythonhosted.org/packages/d3/50/6005b297ba5f858a113d6e81ccdb3a558b95a615772e7412d1f1cbdf22d7/lxml-5.3.2-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:98050830bb6510159f65d9ad1b8aca27f07c01bb3884ba95f17319ccedc4bcf9", size = 5274231, upload-time = "2025-04-05T18:28:08.849Z" },
{ url = "https://files.pythonhosted.org/packages/fb/33/6f40c09a5f7d7e7fcb85ef75072e53eba3fbadbf23e4991ca069ab2b1abb/lxml-5.3.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:6ba465a91acc419c5682f8b06bcc84a424a7aa5c91c220241c6fd31de2a72bc6", size = 5051899, upload-time = "2025-04-05T18:28:11.729Z" },
{ url = "https://files.pythonhosted.org/packages/8b/3a/673bc5c0d5fb6596ee2963dd016fdaefaed2c57ede82c7634c08cbda86c1/lxml-5.3.2-cp313-cp313-win32.whl", hash = "sha256:56a1d56d60ea1ec940f949d7a309e0bff05243f9bd337f585721605670abb1c1", size = 3485315, upload-time = "2025-04-05T18:28:14.815Z" },
{ url = "https://files.pythonhosted.org/packages/8c/be/cab8dd33b0dbe3af5b5d4d24137218f79ea75d540f74eb7d8581195639e0/lxml-5.3.2-cp313-cp313-win_amd64.whl", hash = "sha256:1a580dc232c33d2ad87d02c8a3069d47abbcdce974b9c9cc82a79ff603065dbe", size = 3814639, upload-time = "2025-04-05T18:28:17.268Z" },
{ url = "https://files.pythonhosted.org/packages/3d/1a/480682ac974e0f8778503300a61d96c3b4d992d2ae024f9db18d5fd895d1/lxml-5.3.2-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:521ab9c80b98c30b2d987001c3ede2e647e92eeb2ca02e8cb66ef5122d792b24", size = 3937182, upload-time = "2025-04-05T18:30:39.214Z" },
{ url = "https://files.pythonhosted.org/packages/74/e6/ac87269713e372b58c4334913601a65d7a6f3b7df9ac15a4a4014afea7ae/lxml-5.3.2-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6f1231b0f9810289d41df1eacc4ebb859c63e4ceee29908a0217403cddce38d0", size = 4235148, upload-time = "2025-04-05T18:30:42.261Z" },
{ url = "https://files.pythonhosted.org/packages/75/ec/7d7af58047862fb59fcdec6e3abcffc7a98f7f7560e580485169ce28b706/lxml-5.3.2-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:271f1a4d5d2b383c36ad8b9b489da5ea9c04eca795a215bae61ed6a57cf083cd", size = 4349974, upload-time = "2025-04-05T18:30:45.291Z" },
{ url = "https://files.pythonhosted.org/packages/ff/de/021ef34a57a372778f44182d2043fa3cae0b0407ac05fc35834f842586f2/lxml-5.3.2-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:6fca8a5a13906ba2677a5252752832beb0f483a22f6c86c71a2bb320fba04f61", size = 4238656, upload-time = "2025-04-05T18:30:48.383Z" },
{ url = "https://files.pythonhosted.org/packages/0a/96/00874cb83ebb2cf649f2a8cad191d8da64fe1cf15e6580d5a7967755d6a3/lxml-5.3.2-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:ea0c3b7922209160faef194a5b6995bfe7fa05ff7dda6c423ba17646b7b9de10", size = 4373836, upload-time = "2025-04-05T18:30:52.189Z" },
{ url = "https://files.pythonhosted.org/packages/6b/40/7d49ff503cc90b03253eba0768feec909b47ce92a90591b025c774a29a95/lxml-5.3.2-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:0a006390834603e5952a2ff74b9a31a6007c7cc74282a087aa6467afb4eea987", size = 3487898, upload-time = "2025-04-05T18:30:55.122Z" },
{ url = "https://files.pythonhosted.org/packages/02/6e/ee8fc0e01202eb3dd2b9e1ea4f0910d72425d35c66187c63931d7a3ea73f/lxml-6.1.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:41dcc4c7b10484257cbd6c37b83ddb26df2b0e5aff5ac00d095689015af868ec", size = 8540733, upload-time = "2026-04-18T04:27:33.185Z" },
{ url = "https://files.pythonhosted.org/packages/54/e8/325fe9b942824c773dffe1baf0c35b046a763851fdff4393af4450bceeb7/lxml-6.1.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:a31286dbb5e74c8e9a5344465b77ab4c5bd511a253b355b5ca2fae7e579fafec", size = 4602805, upload-time = "2026-04-18T04:27:36.097Z" },
{ url = "https://files.pythonhosted.org/packages/2d/81/221aa3ea4a40370bb0358fa454cbe7e5a837e522f7630c24dfef3f9a73b0/lxml-6.1.0-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:1bc4cc83fb7f66ffb16f74d6dd0162e144333fc36ebcce32246f80c8735b2551", size = 5002652, upload-time = "2026-04-18T04:27:30.603Z" },
{ url = "https://files.pythonhosted.org/packages/c6/e1/fdbfb9019542f1875c093576df7f37adc2983c8ba7ecf17e5f14490bc107/lxml-6.1.0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:20cf4d0651987c906a2f5cba4e3a8d6ba4bfdf973cfe2a96c0d6053888ea2ecd", size = 5155332, upload-time = "2026-04-18T04:27:33.507Z" },
{ url = "https://files.pythonhosted.org/packages/56/b1/4087c782fff397cd03abf9c551069be59bb04a7e548c50fb7b9c4cdaca28/lxml-6.1.0-cp310-cp310-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ffb34ea45a82dd637c2c97ae1bbb920850c1e59bcae79ce1c15af531d83e7215", size = 5057226, upload-time = "2026-04-18T04:27:37.567Z" },
{ url = "https://files.pythonhosted.org/packages/5d/66/516c79dec8417f3a972327330254c0b5fac93d5c3ecfd8a5b43650a5a4d9/lxml-6.1.0-cp310-cp310-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a1d9b99e5b2597e4f5aed2484fef835256fa1b68a19e4265c97628ef4bf8bcf4", size = 5287588, upload-time = "2026-04-18T04:27:41.4Z" },
{ url = "https://files.pythonhosted.org/packages/94/1d/e578f4cbeb42b9df9f29b0d44a45a7cdfa3a5ae300dd59ec68e3602d29bb/lxml-6.1.0-cp310-cp310-manylinux_2_28_i686.whl", hash = "sha256:d43aa26dcda363f21e79afa0668f5029ed7394b3bb8c92a6927a3d34e8b610ea", size = 5412438, upload-time = "2026-04-18T04:27:45.589Z" },
{ url = "https://files.pythonhosted.org/packages/47/5b/2aa68307d6d15959e84d4882f9c04f2da63127eac463e1594166f681ef77/lxml-6.1.0-cp310-cp310-manylinux_2_31_armv7l.whl", hash = "sha256:6262b87f9e5c1e5fe501d6c153247289af42eb44ad7660b9b3de17baaf92d6f6", size = 4770997, upload-time = "2026-04-18T04:27:49.853Z" },
{ url = "https://files.pythonhosted.org/packages/ae/c9/3e51fc1228310a836b4eb32595ae00154ab12197fca944676a3ab3b163ea/lxml-6.1.0-cp310-cp310-manylinux_2_38_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:d1392c569c032f78a11a25d1de1c43fff13294c793b39e19d84fade3045cbbc3", size = 5359678, upload-time = "2026-04-18T04:31:56.184Z" },
{ url = "https://files.pythonhosted.org/packages/b5/91/ab8bc834f977fbbd310e697b120787c153db026f9151e02a88d2645d4e5b/lxml-6.1.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:045e387d1f4f42a418380930fa3f45c73c9b392faf67e495e58902e68e8f44a7", size = 5107890, upload-time = "2026-04-18T04:32:00.387Z" },
{ url = "https://files.pythonhosted.org/packages/bb/10/8a143cfa3ac99cb5b0523ff6d0429a9c9dddf25ffeae09caa3866c7964d9/lxml-6.1.0-cp310-cp310-musllinux_1_2_armv7l.whl", hash = "sha256:9f93d5b8b07f73e8c77e3c6556a3db269918390c804b5e5fcdd4858232cc8f16", size = 4803977, upload-time = "2026-04-18T04:32:05.099Z" },
{ url = "https://files.pythonhosted.org/packages/45/fd/ee02faf52fa39c2fe32f824628958b9aa86dff21343dc3161f0e3c6ccd15/lxml-6.1.0-cp310-cp310-musllinux_1_2_riscv64.whl", hash = "sha256:de550d129f18d8ab819651ffe4f38b1b713c7e116707de3c0c6400d0ef34fbc1", size = 5350277, upload-time = "2026-04-18T04:32:09.176Z" },
{ url = "https://files.pythonhosted.org/packages/85/8c/b3481364b8554b5d36d540189a87fc71e94b0b01c24f8f152bd662dd2e45/lxml-6.1.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:c08da09dc003c9e8c70e06b53a11db6fb3b250c21c4236b03c7d7b443c318e7a", size = 5309717, upload-time = "2026-04-18T04:32:13.303Z" },
{ url = "https://files.pythonhosted.org/packages/74/e8/a6b21927077a9127afa17473b6576b322616f34ac50ee4f577e763b75ec0/lxml-6.1.0-cp310-cp310-win32.whl", hash = "sha256:37448bf9c7d7adfc5254763901e2bbd6bb876228dfc1fc7f66e58c06368a7544", size = 3598491, upload-time = "2026-04-18T04:27:24.288Z" },
{ url = "https://files.pythonhosted.org/packages/ea/82/14dea800d041274d96c07d49ff9191f011d1427450850de19bf541e2cc12/lxml-6.1.0-cp310-cp310-win_amd64.whl", hash = "sha256:2593a0a6621545b9095b71ad74ed4226eba438a7d9fc3712a99bdb15508cf93a", size = 4020906, upload-time = "2026-04-18T04:27:27.53Z" },
{ url = "https://files.pythonhosted.org/packages/f2/ba/d3539aaf4d9d21456b9a7b902816623227d05d63e7c5aafd8834c4b9bed6/lxml-6.1.0-cp310-cp310-win_arm64.whl", hash = "sha256:e80807d72f96b96ad5588cb85c75616e4f2795a7737d4630784c51497beb7776", size = 3667787, upload-time = "2026-04-18T04:27:29.407Z" },
{ url = "https://files.pythonhosted.org/packages/5e/5d/3bccad330292946f97962df9d5f2d3ae129cce6e212732a781e856b91e07/lxml-6.1.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:cec05be8c876f92a5aa07b01d60bbb4d11cfbdd654cad0561c0d7b5c043a61b9", size = 8526232, upload-time = "2026-04-18T04:27:40.389Z" },
{ url = "https://files.pythonhosted.org/packages/a7/51/adc8826570a112f83bb4ddb3a2ab510bbc2ccd62c1b9fe1f34fae2d90b57/lxml-6.1.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:9c03e048b6ce8e77b09c734e931584894ecd58d08296804ca2d0b184c933ce50", size = 4595448, upload-time = "2026-04-18T04:27:44.208Z" },
{ url = "https://files.pythonhosted.org/packages/54/84/5a9ec07cbe1d2334a6465f863b949a520d2699a755738986dcd3b6b89e3f/lxml-6.1.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:942454ff253da14218f972b23dc72fa4edf6c943f37edd19cd697618b626fac5", size = 4923771, upload-time = "2026-04-18T04:32:17.402Z" },
{ url = "https://files.pythonhosted.org/packages/a7/23/851cfa33b6b38adb628e45ad51fb27105fa34b2b3ba9d1d4aa7a9428dfe0/lxml-6.1.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:d036ee7b99d5148072ac7c9b847193decdfeac633db350363f7bce4fff108f0e", size = 5068101, upload-time = "2026-04-18T04:32:21.437Z" },
{ url = "https://files.pythonhosted.org/packages/b0/38/41bf99c2023c6b79916ba057d83e9db21d642f473cac210201222882d38b/lxml-6.1.0-cp311-cp311-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:3ae5d8d5427f3cc317e7950f2da7ad276df0cfa37b8de2f5658959e618ea8512", size = 5002573, upload-time = "2026-04-18T04:32:25.373Z" },
{ url = "https://files.pythonhosted.org/packages/c2/20/053aa10bdc39747e1e923ce2d45413075e84f70a136045bb09e5eaca41d3/lxml-6.1.0-cp311-cp311-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:363e47283bde87051b821826e71dde47f107e08614e1aa312ba0c5711e77738c", size = 5202816, upload-time = "2026-04-18T04:32:29.393Z" },
{ url = "https://files.pythonhosted.org/packages/9a/da/bc710fad8bf04b93baee752c192eaa2210cd3a84f969d0be7830fea55802/lxml-6.1.0-cp311-cp311-manylinux_2_28_i686.whl", hash = "sha256:f504d861d9f2a8f94020130adac88d66de93841707a23a86244263d1e54682f5", size = 5329999, upload-time = "2026-04-18T04:32:34.019Z" },
{ url = "https://files.pythonhosted.org/packages/b3/cb/bf035dedbdf7fab49411aa52e4236f3445e98d38647d85419e6c0d2806b9/lxml-6.1.0-cp311-cp311-manylinux_2_31_armv7l.whl", hash = "sha256:23a5dc68e08ed13331d61815c08f260f46b4a60fdd1640bbeb82cf89a9d90289", size = 4659643, upload-time = "2026-04-18T04:32:37.932Z" },
{ url = "https://files.pythonhosted.org/packages/5c/4f/22be31f33727a5e4c7b01b0a874503026e50329b259d3587e0b923cf964b/lxml-6.1.0-cp311-cp311-manylinux_2_38_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:f15401d8d3dbf239e23c818afc10c7207f7b95f9a307e092122b6f86dd43209a", size = 5265963, upload-time = "2026-04-18T04:32:41.881Z" },
{ url = "https://files.pythonhosted.org/packages/c8/2b/d44d0e5c79226017f4ab8c87a802ebe4f89f97e6585a8e4166dffcdd7b6e/lxml-6.1.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:fcf3da95e93349e0647d48d4b36a12783105bcc74cb0c416952f9988410846a3", size = 5045444, upload-time = "2026-04-18T04:32:44.512Z" },
{ url = "https://files.pythonhosted.org/packages/d3/c3/3f034fec1594c331a6dbf9491238fdcc9d66f68cc529e109ec75b97197e1/lxml-6.1.0-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:0d082495c5fcf426e425a6e28daaba1fcb6d8f854a4ff01effb1f1f381203eb9", size = 4712703, upload-time = "2026-04-18T04:32:47.16Z" },
{ url = "https://files.pythonhosted.org/packages/12/16/0b83fccc158218aca75a7aa33e97441df737950734246b9fffa39301603d/lxml-6.1.0-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:e3c4f84b24a1fcba435157d111c4b755099c6ff00a3daee1ad281817de75ed11", size = 5252745, upload-time = "2026-04-18T04:32:50.427Z" },
{ url = "https://files.pythonhosted.org/packages/dd/ee/12e6c1b39a77666c02eaa77f94a870aaf63c4ac3a497b2d52319448b01c6/lxml-6.1.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:976a6b39b1b13e8c354ad8d3f261f3a4ac6609518af91bdb5094760a08f132c4", size = 5226822, upload-time = "2026-04-18T04:32:53.437Z" },
{ url = "https://files.pythonhosted.org/packages/34/20/c7852904858b4723af01d2fc14b5d38ff57cb92f01934a127ebd9a9e51aa/lxml-6.1.0-cp311-cp311-win32.whl", hash = "sha256:857efde87d365706590847b916baff69c0bc9252dc5af030e378c9800c0b10e3", size = 3594026, upload-time = "2026-04-18T04:27:31.903Z" },
{ url = "https://files.pythonhosted.org/packages/02/05/d60c732b56da5085175c07c74b2df4e6d181b0c9a61e1691474f06ef4b39/lxml-6.1.0-cp311-cp311-win_amd64.whl", hash = "sha256:183bfb45a493081943be7ea2b5adfc2b611e1cf377cefa8b8a8be404f45ef9a7", size = 4025114, upload-time = "2026-04-18T04:27:34.077Z" },
{ url = "https://files.pythonhosted.org/packages/c2/df/c84dcc175fd690823436d15b41cb920cd5ba5e14cd8bfb00949d5903b320/lxml-6.1.0-cp311-cp311-win_arm64.whl", hash = "sha256:19f4164243fc206d12ed3d866e80e74f5bc3627966520da1a5f97e42c32a3f39", size = 3667742, upload-time = "2026-04-18T04:27:38.45Z" },
{ url = "https://files.pythonhosted.org/packages/d2/d4/9326838b59dc36dfae42eec9656b97520f9997eee1de47b8316aaeed169c/lxml-6.1.0-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:d2f17a16cd8751e8eb233a7e41aecdf8e511712e00088bf9be455f604cd0d28d", size = 8570663, upload-time = "2026-04-18T04:27:48.253Z" },
{ url = "https://files.pythonhosted.org/packages/d8/a4/053745ce1f8303ccbb788b86c0db3a91b973675cefc42566a188637b7c40/lxml-6.1.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:f0cea5b1d3e6e77d71bd2b9972eb2446221a69dc52bb0b9c3c6f6e5700592d93", size = 4624024, upload-time = "2026-04-18T04:27:52.594Z" },
{ url = "https://files.pythonhosted.org/packages/90/97/a517944b20f8fd0932ad2109482bee4e29fe721416387a363306667941f6/lxml-6.1.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:fc46da94826188ed45cb53bd8e3fc076ae22675aea2087843d4735627f867c6d", size = 4930895, upload-time = "2026-04-18T04:32:56.29Z" },
{ url = "https://files.pythonhosted.org/packages/94/7c/e08a970727d556caa040a44773c7b7e3ad0f0d73dedc863543e9a8b931f2/lxml-6.1.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:9147d8e386ec3b82c3b15d88927f734f565b0aaadef7def562b853adca45784a", size = 5093820, upload-time = "2026-04-18T04:32:58.94Z" },
{ url = "https://files.pythonhosted.org/packages/88/ee/2a5c2aa2c32016a226ca25d3e1056a8102ea6e1fe308bf50213586635400/lxml-6.1.0-cp312-cp312-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5715e0e28736a070f3f34a7ccc09e2fdcba0e3060abbcf61a1a5718ff6d6b105", size = 5005790, upload-time = "2026-04-18T04:33:01.272Z" },
{ url = "https://files.pythonhosted.org/packages/e3/38/a0db9be8f38ad6043ab9429487c128dd1d30f07956ef43040402f8da49e8/lxml-6.1.0-cp312-cp312-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:4937460dc5df0cdd2f06a86c285c28afda06aefa3af949f9477d3e8df430c485", size = 5630827, upload-time = "2026-04-18T04:33:04.036Z" },
{ url = "https://files.pythonhosted.org/packages/31/ba/3c13d3fc24b7cacf675f808a3a1baabf43a30d0cd24c98f94548e9aa58eb/lxml-6.1.0-cp312-cp312-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:bc783ee3147e60a25aa0445ea82b3e8aabb83b240f2b95d32cb75587ff781814", size = 5240445, upload-time = "2026-04-18T04:33:06.87Z" },
{ url = "https://files.pythonhosted.org/packages/55/ba/eeef4ccba09b2212fe239f46c1692a98db1878e0872ae320756488878a94/lxml-6.1.0-cp312-cp312-manylinux_2_28_i686.whl", hash = "sha256:40d9189f80075f2e1f88db21ef815a2b17b28adf8e50aaf5c789bfe737027f32", size = 5350121, upload-time = "2026-04-18T04:33:09.365Z" },
{ url = "https://files.pythonhosted.org/packages/7e/01/1da87c7b587c38d0cbe77a01aae3b9c1c49ed47d76918ef3db8fc151b1ca/lxml-6.1.0-cp312-cp312-manylinux_2_31_armv7l.whl", hash = "sha256:05b9b8787e35bec69e68daf4952b2e6dfcfb0db7ecf1a06f8cdfbbac4eb71aad", size = 4694949, upload-time = "2026-04-18T04:33:11.628Z" },
{ url = "https://files.pythonhosted.org/packages/a1/88/7db0fe66d5aaf128443ee1623dec3db1576f3e4c17751ec0ef5866468590/lxml-6.1.0-cp312-cp312-manylinux_2_38_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:0f0f08beb0182e3e9a86fae124b3c47a7b41b7b69b225e1377db983802404e54", size = 5243901, upload-time = "2026-04-18T04:33:13.95Z" },
{ url = "https://files.pythonhosted.org/packages/00/a8/1346726af7d1f6fca1f11223ba34001462b0a3660416986d37641708d57c/lxml-6.1.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:73becf6d8c81d4c76b1014dbd3584cb26d904492dcf73ca85dc8bff08dcd6d2d", size = 5048054, upload-time = "2026-04-18T04:33:16.965Z" },
{ url = "https://files.pythonhosted.org/packages/2e/b7/85057012f035d1a0c87e02f8c723ca3c3e6e0728bcf4cb62080b21b1c1e3/lxml-6.1.0-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:1ae225f66e5938f4fa29d37e009a3bb3b13032ac57eb4eb42afa44f6e4054e69", size = 4777324, upload-time = "2026-04-18T04:33:19.832Z" },
{ url = "https://files.pythonhosted.org/packages/75/6c/ad2f94a91073ef570f33718040e8e160d5fb93331cf1ab3ca1323f939e2d/lxml-6.1.0-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:690022c7fae793b0489aa68a658822cea83e0d5933781811cabbf5ea3bcfe73d", size = 5645702, upload-time = "2026-04-18T04:33:22.436Z" },
{ url = "https://files.pythonhosted.org/packages/3b/89/0bb6c0bd549c19004c60eea9dc554dd78fd647b72314ef25d460e0d208c6/lxml-6.1.0-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:63aeafc26aac0be8aff14af7871249e87ea1319be92090bfd632ec68e03b16a5", size = 5232901, upload-time = "2026-04-18T04:33:26.21Z" },
{ url = "https://files.pythonhosted.org/packages/a1/d9/d609a11fb567da9399f525193e2b49847b5a409cdebe737f06a8b7126bdc/lxml-6.1.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:264c605ab9c0e4aa1a679636f4582c4d3313700009fac3ec9c3412ed0d8f3e1d", size = 5261333, upload-time = "2026-04-18T04:33:28.984Z" },
{ url = "https://files.pythonhosted.org/packages/a6/3a/ac3f99ec8ac93089e7dd556f279e0d14c24de0a74a507e143a2e4b496e7c/lxml-6.1.0-cp312-cp312-win32.whl", hash = "sha256:56971379bc5ee8037c5a0f09fa88f66cdb7d37c3e38af3e45cf539f41131ac1f", size = 3596289, upload-time = "2026-04-18T04:27:42.819Z" },
{ url = "https://files.pythonhosted.org/packages/f2/a7/0a915557538593cb1bbeedcd40e13c7a261822c26fecbbdb71dad0c2f540/lxml-6.1.0-cp312-cp312-win_amd64.whl", hash = "sha256:bba078de0031c219e5dd06cf3e6bf8fb8e6e64a77819b358f53bb132e3e03366", size = 3997059, upload-time = "2026-04-18T04:27:46.764Z" },
{ url = "https://files.pythonhosted.org/packages/92/96/a5dc078cf0126fbfbc35611d77ecd5da80054b5893e28fb213a5613b9e1d/lxml-6.1.0-cp312-cp312-win_arm64.whl", hash = "sha256:c3592631e652afa34999a088f98ba7dfc7d6aff0d535c410bea77a71743f3819", size = 3659552, upload-time = "2026-04-18T04:27:51.133Z" },
{ url = "https://files.pythonhosted.org/packages/08/03/69347590f1cf4a6d5a4944bb6099e6d37f334784f16062234e1f892fdb1d/lxml-6.1.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:a0092f2b107b69601adf562a57c956fbb596e05e3e6651cabd3054113b007e45", size = 8559689, upload-time = "2026-04-18T04:31:57.785Z" },
{ url = "https://files.pythonhosted.org/packages/3f/58/25e00bb40b185c974cfe156c110474d9a8a8390d5f7c92a4e328189bb60e/lxml-6.1.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:fc7140d7a7386e6b545d41b7358f4d02b656d4053f5fa6859f92f4b9c2572c4d", size = 4617892, upload-time = "2026-04-18T04:32:01.78Z" },
{ url = "https://files.pythonhosted.org/packages/f5/54/92ad98a94ac318dc4f97aaac22ff8d1b94212b2ae8af5b6e9b354bf825f7/lxml-6.1.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:419c58fc92cc3a2c3fa5f78c63dbf5da70c1fa9c1b25f25727ecee89a96c7de2", size = 4923489, upload-time = "2026-04-18T04:33:31.401Z" },
{ url = "https://files.pythonhosted.org/packages/15/3b/a20aecfab42bdf4f9b390590d345857ad3ffd7c51988d1c89c53a0c73faf/lxml-6.1.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:37fabd1452852636cf38ecdcc9dd5ca4bba7a35d6c53fa09725deeb894a87491", size = 5082162, upload-time = "2026-04-18T04:33:34.262Z" },
{ url = "https://files.pythonhosted.org/packages/45/26/2cdb3d281ac1bd175603e290cbe4bad6eff127c0f8de90bafd6f8548f0fd/lxml-6.1.0-cp313-cp313-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a2853c8b2170cc6cd54a6b4d50d2c1a8a7aeca201f23804b4898525c7a152cfc", size = 4993247, upload-time = "2026-04-18T04:33:36.674Z" },
{ url = "https://files.pythonhosted.org/packages/f6/05/d735aef963740022a08185c84821f689fc903acb3d50326e6b1e9886cc22/lxml-6.1.0-cp313-cp313-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:8e369cbd690e788c8d15e56222d91a09c6a417f49cbc543040cba0fe2e25a79e", size = 5613042, upload-time = "2026-04-18T04:33:39.205Z" },
{ url = "https://files.pythonhosted.org/packages/ee/b8/ead7c10efff731738c72e59ed6eb5791854879fbed7ae98781a12006263a/lxml-6.1.0-cp313-cp313-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:e69aa6805905807186eb00e66c6d97a935c928275182eb02ee40ba00da9623b2", size = 5228304, upload-time = "2026-04-18T04:33:41.647Z" },
{ url = "https://files.pythonhosted.org/packages/6b/10/e9842d2ec322ea65f0a7270aa0315a53abed06058b88ef1b027f620e7a5f/lxml-6.1.0-cp313-cp313-manylinux_2_28_i686.whl", hash = "sha256:4bd1bdb8a9e0e2dd229de19b5f8aebac80e916921b4b2c6ef8a52bc131d0c1f9", size = 5341578, upload-time = "2026-04-18T04:33:44.596Z" },
{ url = "https://files.pythonhosted.org/packages/89/54/40d9403d7c2775fa7301d3ddd3464689bfe9ba71acc17dfff777071b4fdc/lxml-6.1.0-cp313-cp313-manylinux_2_31_armv7l.whl", hash = "sha256:cbd7b79cdcb4986ad78a2662625882747f09db5e4cd7b2ae178a88c9c51b3dfe", size = 4700209, upload-time = "2026-04-18T04:33:47.552Z" },
{ url = "https://files.pythonhosted.org/packages/85/b2/bbdcc2cf45dfc7dfffef4fd97e5c47b15919b6a365247d95d6f684ef5e82/lxml-6.1.0-cp313-cp313-manylinux_2_38_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:43e4d297f11080ec9d64a4b1ad7ac02b4484c9f0e2179d9c4ef78e886e747b88", size = 5232365, upload-time = "2026-04-18T04:33:50.249Z" },
{ url = "https://files.pythonhosted.org/packages/48/5a/b06875665e53aaba7127611a7bed3b7b9658e20b22bc2dd217a0b7ab0091/lxml-6.1.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:cc16682cc987a3da00aa56a3aa3075b08edb10d9b1e476938cfdbee8f3b67181", size = 5043654, upload-time = "2026-04-18T04:33:52.71Z" },
{ url = "https://files.pythonhosted.org/packages/e9/9c/e71a069d09641c1a7abeb30e693f828c7c90a41cbe3d650b2d734d876f85/lxml-6.1.0-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:d6d8efe71429635f0559579092bb5e60560d7b9115ee38c4adbea35632e7fa24", size = 4769326, upload-time = "2026-04-18T04:33:55.244Z" },
{ url = "https://files.pythonhosted.org/packages/cc/06/7a9cd84b3d4ed79adf35f874750abb697dec0b4a81a836037b36e47c091a/lxml-6.1.0-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:7e39ab3a28af7784e206d8606ec0e4bcad0190f63a492bca95e94e5a4aef7f6e", size = 5635879, upload-time = "2026-04-18T04:33:58.509Z" },
{ url = "https://files.pythonhosted.org/packages/cc/f0/9d57916befc1e54c451712c7ee48e9e74e80ae4d03bdce49914e0aee42cd/lxml-6.1.0-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:9eb667bf50856c4a58145f8ca2d5e5be160191e79eb9e30855a476191b3c3495", size = 5224048, upload-time = "2026-04-18T04:34:00.943Z" },
{ url = "https://files.pythonhosted.org/packages/99/75/90c4eefda0c08c92221fe0753db2d6699a4c628f76ff4465ec20dea84cc1/lxml-6.1.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:7f4a77d6f7edf9230cee3e1f7f6764722a41604ee5681844f18db9a81ea0ec33", size = 5250241, upload-time = "2026-04-18T04:34:03.365Z" },
{ url = "https://files.pythonhosted.org/packages/5e/73/16596f7e4e38fa33084b9ccbccc22a15f82a290a055126f2c1541236d2ff/lxml-6.1.0-cp313-cp313-win32.whl", hash = "sha256:28902146ffbe5222df411c5d19e5352490122e14447e98cd118907ee3fd6ee62", size = 3596938, upload-time = "2026-04-18T04:31:56.206Z" },
{ url = "https://files.pythonhosted.org/packages/8e/63/981401c5680c1eb30893f00a19641ac80db5d1e7086c62cb4b13ed813038/lxml-6.1.0-cp313-cp313-win_amd64.whl", hash = "sha256:4a1503c56e4e2b38dc76f2f2da7bae69670c0f1933e27cfa34b2fa5876410b16", size = 3995728, upload-time = "2026-04-18T04:31:58.763Z" },
{ url = "https://files.pythonhosted.org/packages/e7/e8/c358a38ac3e541d16a1b527e4e9cb78c0419b0506a070ace11777e5e8404/lxml-6.1.0-cp313-cp313-win_arm64.whl", hash = "sha256:e0af85773850417d994d019741239b901b22c6680206f46a34766926e466141d", size = 3658372, upload-time = "2026-04-18T04:32:03.629Z" },
{ url = "https://files.pythonhosted.org/packages/f2/88/55143966481409b1740a3ac669e611055f49efd68087a5ce41582325db3e/lxml-6.1.0-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:546b66c0dd1bb8d9fa89d7123e5fa19a8aff3a1f2141eb22df96112afb17b842", size = 3930134, upload-time = "2026-04-18T04:32:35.008Z" },
{ url = "https://files.pythonhosted.org/packages/b5/97/28b985c2983938d3cb696dd5501423afb90a8c3e869ef5d3c62569282c0f/lxml-6.1.0-pp311-pypy311_pp73-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:5cfa1a34df366d9dc0d5eaf420f4cf2bb1e1bebe1066d1c2fc28c179f8a4004c", size = 4210749, upload-time = "2026-04-18T04:36:03.626Z" },
{ url = "https://files.pythonhosted.org/packages/29/67/dfab2b7d58214921935ccea7ce9b3df9b7d46f305d12f0f532ac7cf6b804/lxml-6.1.0-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:db88156fcf544cdbf0d95588051515cfdfd4c876fc66444eb98bceb5d6db76de", size = 4318463, upload-time = "2026-04-18T04:36:06.309Z" },
{ url = "https://files.pythonhosted.org/packages/32/a2/4ac7eb32a4d997dd352c32c32399aae27b3f268d440e6f9cfa405b575d2f/lxml-6.1.0-pp311-pypy311_pp73-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:07f98f5496f96bf724b1e3c933c107f0cbf2745db18c03d2e13a291c3afd2635", size = 4251124, upload-time = "2026-04-18T04:36:09.056Z" },
{ url = "https://files.pythonhosted.org/packages/33/ef/d6abd850bb4822f9b720cfe36b547a558e694881010ff7d012191e8769c6/lxml-6.1.0-pp311-pypy311_pp73-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4642e04449a1e164b5ff71ffd901ddb772dfabf5c9adf1b7be5dffe1212bc037", size = 4401758, upload-time = "2026-04-18T04:36:11.803Z" },
{ url = "https://files.pythonhosted.org/packages/40/44/3ee09a5b60cb44c4f2fbc1c9015cfd6ff5afc08f991cab295d3024dcbf2d/lxml-6.1.0-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:7da13bb6fbadfafb474e0226a30570a3445cfd47c86296f2446dafbd77079ace", size = 3508860, upload-time = "2026-04-18T04:32:48.619Z" },
]
[[package]]
@@ -4425,6 +4502,32 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/43/e3/7d92a15f894aa0c9c4b49b8ee9ac9850d6e63b03c9c32c0367a13ae62209/mpmath-1.3.0-py3-none-any.whl", hash = "sha256:a0b2b9fe80bbcd81a6647ff13108738cfb482d481d826cc0e02f5b35e5c88d2c", size = 536198, upload-time = "2023-03-07T16:47:09.197Z" },
]
[[package]]
name = "msal"
version = "1.36.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "cryptography" },
{ name = "pyjwt", extra = ["crypto"] },
{ name = "requests" },
]
sdist = { url = "https://files.pythonhosted.org/packages/de/cb/b02b0f748ac668922364ccb3c3bff5b71628a05f5adfec2ba2a5c3031483/msal-1.36.0.tar.gz", hash = "sha256:3f6a4af2b036b476a4215111c4297b4e6e236ed186cd804faefba23e4990978b", size = 174217, upload-time = "2026-04-09T10:20:33.525Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/2a/d3/414d1f0a5f6f4fe5313c2b002c54e78a3332970feb3f5fed14237aa17064/msal-1.36.0-py3-none-any.whl", hash = "sha256:36ecac30e2ff4322d956029aabce3c82301c29f0acb1ad89b94edcabb0e58ec4", size = 121547, upload-time = "2026-04-09T10:20:32.336Z" },
]
[[package]]
name = "msal-extensions"
version = "1.3.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "msal" },
]
sdist = { url = "https://files.pythonhosted.org/packages/01/99/5d239b6156eddf761a636bded1118414d161bd6b7b37a9335549ed159396/msal_extensions-1.3.1.tar.gz", hash = "sha256:c5b0fd10f65ef62b5f1d62f4251d51cbcaf003fcedae8c91b040a488614be1a4", size = 23315, upload-time = "2025-03-14T23:51:03.902Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/5e/75/bd9b7bb966668920f06b200e84454c8f3566b102183bc55c5473d96cb2b9/msal_extensions-1.3.1-py3-none-any.whl", hash = "sha256:96d3de4d034504e969ac5e85bae8106c8373b5c6568e4c8fa7af2eca9dbe6bca", size = 20583, upload-time = "2025-03-14T23:51:03.016Z" },
]
[[package]]
name = "msgpack"
version = "1.1.2"
@@ -9407,6 +9510,18 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/6e/d4/ed38dd3b1767193de971e694aa544356e63353c33a85d948166b5ff58b9e/watchfiles-1.1.1-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3e6f39af2eab0118338902798b5aa6664f46ff66bc0280de76fca67a7f262a49", size = 457546, upload-time = "2025-10-14T15:06:13.372Z" },
]
[[package]]
name = "wcmatch"
version = "10.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "bracex" },
]
sdist = { url = "https://files.pythonhosted.org/packages/79/3e/c0bdc27cf06f4e47680bd5803a07cb3dfd17de84cde92dd217dcb9e05253/wcmatch-10.1.tar.gz", hash = "sha256:f11f94208c8c8484a16f4f48638a85d771d9513f4ab3f37595978801cb9465af", size = 117421, upload-time = "2025-06-22T19:14:02.49Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/eb/d8/0d1d2e9d3fabcf5d6840362adcf05f8cf3cd06a73358140c3a97189238ae/wcmatch-10.1-py3-none-any.whl", hash = "sha256:5848ace7dbb0476e5e55ab63c6bbd529745089343427caa5537f230cc01beb8a", size = 39854, upload-time = "2025-06-22T19:14:00.978Z" },
]
[[package]]
name = "wcwidth"
version = "0.6.0"